\section{Introduction} \thispagestyle{fancy} The confinement of quarks is a remarkable phenomenon that features both static and dynamical aspects. To reveal its underlying mechanism it is on the one hand essential to restrict the analysis by considering certain limits where the system is considerably simpler and particular aspects of the mechanism become transparent. On the other hand one extends the analysis to similar systems in order to identify the requirements and characteristic properties of the mechanism. An important limit in this respect is the static case and the corresponding confinement of fundamental color sources, which presents an analytic result of non-Abelian gauge theory in the strong-coupling limit \cite{Wilson:1974sk} and which has been confirmed in numerical simulations at realistic coupling. This is reflected by an area law behavior of large Wilson loops corresponding to a linear potential between the sources and presents a genuine property of the gauge dynamics that is independent of the detailed aspects of the sources and merely depends on the representation of the gauge group. \\ Over the last years a detailed picture of the infrared (IR) sector of Yang-Mills theory and QCD has evolved, mainly due to investigations within functional approaches, such as Dyson-Schwinger equations \cite{vonSmekal:1997is,Fischer:2006ub,Alkofer:2000wg,Watson:2001yv,Lerche:2002ep,Zwanziger:2001kw,Zwanziger:2002ia,Alkofer:2008tt,Schwenzer:2008vt,Fischer:2002hna,Alkofer:2004it,Huber:2007kc,Fischer:2006vf,Fischer:2007pf,Fischer:2008uz,Alkofer:2008jy,Huber:2009wh,Huber:2009tx} and functional renormalisation group equations \cite{Fischer:2008uz,Litim:1998nf,Pawlowski:2003hq,Gies:2006wv,Fischer:2004uk,Pawlowski:2005xe}, as well as lattice gauge theory \cite{Cucchieri:2008fc,Sternbeck:2008mv,vonSmekal:2008ws}. It has been shown that the infrared singularities of Landau gauge quantum chromodynamics (QCD) can provide a mechanism for the confinement of static quarks \cite{Alkofer:2008tt}. This mechanism, relying on the IR scaling solution of Yang-Mills theory \cite{vonSmekal:1997is,Fischer:2006vf}, is driven by a strong kinematic singularity of the quark-gluon vertex that overturns the IR suppression of the gluon propagator and leads to a long-range gluonic interaction. Yet, in this approach the confining interaction is inherently obtained from the static limit of the solution of the functional equations of the quark sector of the theory. Since the mechanism exhibits a relation between chiral symmetry breaking and confinement, this poses the question whether the Dirac structure of the quarks is essential for this mechanism. In order to answer this question we present here results for a related theory where the quarks are replaced by a fundamentally charged scalar field; for details we refer to \cite{Leo}. This theory is interesting because of its simplicity, owing to the absence of internal spin degrees of freedom compared to the fermionic theory, and it has been studied before as a model system for the QCD dynamics. On the other hand, the theory involves additional self-interactions between the scalars and could thereby exhibit a rather different dynamical behavior. Most notably it involves a Higgs phase in addition to a confining phase, and it has been shown that in the fundamentally charged scalar model the confined and the Higgs phase are not separated by a phase boundary \cite{Osterwalder:1977pc}.
\\ In dynamical QCD a gluonic interaction that rises with distance is only realized over a certain range that varies continuously with the quark mass(es) \cite{Schwenzer:2008vt}. A simplified model system to study all these aspects would be highly desirable. Interestingly, string breaking signatures have likewise been observed in lattice simulations of the scalar model, cf. e.g. \cite{Bock:1988kq}, before corresponding studies were possible in dynamical QCD, cf. e.g. \cite{Bali:2005fu}. \section{Dynamics of fundamentally charged scalar fields \break coupled to Landau gauge Yang-Mills Theory} In order to construct a model system for full QCD, where (fermionic) quarks carry a conserved charge and transform according to the fundamental representation of the gauge group, we attribute the same transformation properties also to the (bosonic) scalars. Therefore the scalars are, firstly, implemented in the fundamental representation and, secondly, taken to be complex fields so that a conserved charge can be defined. These scalars will be coupled to an $SU\!\left(N\right)$ gauge theory, and as we aim at comparing the system to Landau gauge QCD we also fix to Landau gauge, i.e. we take the limit $\zeta \rightarrow 0$ for the gauge fixing parameter. Considering only renormalizable interactions this generally results in the Lagrangian given by \begin{equation} \mathcal{L}=\left(D_{\mu,ij}\phi_{j}^{*}\right)\left(D_{\mu,ik}\phi_{k}\right)-m^{2}\phi_{i}^{*}\phi_{i}-\frac{\lambda}{4!}\left(\phi_{i}^{*}\phi_{i}\right)^{2}+\frac{1}{4}F_{\mu\nu}^{a}F_{\mu\nu}^{a}+\frac{1}{2\zeta}(\partial_{\mu}A_{\mu}^{a})^{2}+\bar{c}^{a}\partial_{\mu}D_{\mu}^{ab}c^{b}, \end{equation} with \begin{equation} D^{ab}_{\mu} = \delta^{ab} \partial_{\mu} + g f^{abc}A_{\mu}^{c}, \ \ \ \ D_{\mu,ij} = \delta_{ij} \partial_{\mu} - i g \left(\frac{t^{a}}{2}\right)_{ij} A_{\mu}^{a}, \ \ \ \ F_{\mu \nu}^{a} = \partial_{\mu} A_{\nu}^{a} - \partial_{\nu} A_{\mu}^{a} - g f^{abc} A_{\mu}^{b} A_{\nu}^{c}, \end{equation} wherein $D_{\mu,ij}$ denotes the covariant derivative, involving the generators $t^{a}$ (for $SU(3)$ the Gell-Mann matrices), for the complex scalar field $\phi^{(*)}$ with the associated mass $m$, $\lambda$ is the coupling constant of the quartic scalar interaction, $F_{\mu\nu}^{a}$ is the field-strength tensor involving the gluons $A$ and the structure constants $f^{abc}$, and $D_{\mu}^{ab}$ is the covariant derivative for the Faddeev-Popov (anti-)ghosts $(\bar{c})c$ in the adjoint representation. Lorentz indices are written as Greek letters, Roman indices starting with $a$ are color indices, and the fundamental representation is indexed by Roman letters starting with $i$.\newline In contrast to QCD the tensor structure of the scalar model is strongly simplified. In the quark propagator one has to consider a Dirac-vector as well as a Dirac-scalar component. For scalar bosons, in contrast, there is only one (scalar) tensor component in the scalar propagator $S^{ij}$. Similarly the scalar-gluon vertex $\Gamma^{a,ij}_{\mu}$, depending on two independent momenta, can be decomposed into two tensors, in contrast to 12 independent tensors in the quark-gluon vertex. This simplification becomes even more significant for higher correlation functions.
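To make this counting explicit (a schematic comparison; the precise tensor bases can be found in the cited literature), with two independent momenta $p$ and $q$ the quark-gluon vertex can be spanned by
\begin{equation*}
\Gamma_{\mu}^{a}\left(p,q\right) \sim t^{a}\left\{ \gamma_{\mu},\, p_{\mu}\mathbf{1},\, q_{\mu}\mathbf{1}\right\} \otimes\left\{ \mathbf{1},\, \gamma\!\cdot\! p,\, \gamma\!\cdot\! q,\, \left(\gamma\!\cdot\! p\right)\left(\gamma\!\cdot\! q\right)\right\},
\end{equation*}
i.e. $3\times4=12$ tensor structures, whereas for scalars only the two vectors $p_{\mu}$ and $q_{\mu}$ remain, leading to the parametrization chosen below.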
We will choose the parametrization \begin{equation} S^{ij}\left(p\right)=-\delta_{ij}\frac{\tilde{S}\left(p^{2}\right)}{p^{2}}\:, \ \ \ \Gamma_{\mu}^{a,ij}\left(p_{s},p_{gl}\right)=ig\left(t^{a}\right)_{ij}\left(\tilde{\Gamma}_{s}\left(p_{s}^{2},p_{gl}^{2},p_{s}\cdot p_{gl}\right)p_{s,\mu}+\tilde{\Gamma}_{gl}\left(p_{s}^{2},p_{gl}^{2},p_{s}\cdot p_{gl}\right)p_{gl,\mu}\right), \end{equation} where the two independent momenta are conveniently chosen as the incoming scalar and the gluon momentum. Note that besides the simplification in the tensor structures two additional subtleties arise that are not present in QCD. Firstly, the classical Lagrangian contains additional 4-particle interactions involving scalars, whose analogs are absent in QCD for dimensional reasons (renormalizability). Secondly, a scalar field theory gives rise to a possible scalar condensate via the Higgs mechanism. \\ Here a note is in order. The theory described above develops a more complicated phase structure compared to QCD. As the gauge is already fixed, the residual symmetry that is broken must be a global one \cite{Caudy:2007sf}. Note that it has been established that these two different ``phases'' of the system are not separated by a phase boundary \cite{Osterwalder:1977pc}, but rather by a Kert\'esz line, i.e. they are continuously connected. In that sense the terminology of ``phases'' is not strictly correct, but as it is frequently used in the literature we will adopt this nomenclature. A suitable quantity to investigate the phase transition is an order parameter \cite{Langfeld:2002ic} of the form \begin{equation} Q=\left(\int d^{4}x\ \phi(x)\right)\left(\int d^{4}x\ \phi^{\dagger}(x)\right)\:. \end{equation} Heuristically, in a finite four-volume $V$ one has $Q/V^{2}\to|\langle\phi\rangle|^{2}$, so that $Q$ detects a nonvanishing scalar condensate. The relation of screening to confinement has been the subject of other investigations \cite{Gaete:2009xf}. In this work we will not discuss the effects due to scalar condensation and concentrate on the confinement properties of the model, leaving a detailed analysis of a condensate for future work.\\ In the following we will use the functional Dyson-Schwinger equations (DSEs) to describe the non-perturbative dynamics of the theory. The DSE analysis we perform here relies on the framework described in detail in \cite{Alkofer:2004it,Fischer:2006vf,Alkofer:2008jy} and is similar to the analysis in QCD \cite{Alkofer:2008tt,Schwenzer:2008vt}. The details on the derivation of the presented results can be found in \cite{Leo}. The DSEs for the theory can be derived algorithmically \cite{Alkofer:2008nt}. The corresponding equations in the gauge sector are given in \cite{Alkofer:2008jy}; they also hold for coupled scalars in the quenched approximation. In the case of a dynamical scalar these equations are extended by unquenching graphs with closed scalar loops, analogous to the case of QCD. The leading equations in the scalar sector are shown in fig. \ref{fig:scalar-DSEs}, where 2-loop diagrams arising in these equations are omitted. These equations represent an infinite tower of coupled integral equations and in general would require some truncation. In order to study interesting qualitative aspects, like the confinement of static sources, it is sufficient to study the long-range behavior of the theory. In momentum space this is encoded in the IR regime of correlation functions, and as far as only the IR scaling laws are concerned, a solution of the whole tower is actually possible.
This has previously been achieved with the help of a skeleton expansion \cite{Alkofer:2004it} and we will also employ this approach in this work. The skeleton expansion is a loop expansion in terms of dressed Green functions, and in this way the equations for the primitively divergent Green functions are transformed into a closed system of equations. However, as argued recently, the skeleton expansion is only a convenient tool and not mandatory, since the IR scaling is strongly restricted and fully determined by the equations for the primitively divergent correlation functions, see e.g. \cite{Huber:2009wh,Fischer:2006vf,Schwenzer:2008vt} and references therein. Since in the scaling solution of Yang-Mills theory \cite{vonSmekal:1997is} the ghost dynamics is strongly dominant in the IR regime and there is no direct coupling between scalars and ghosts in the Lagrangian, the first ghost corrections in the scalar sector arise from two-loop diagrams in the skeleton expansion. Correspondingly, we have to consider these graphs in our analysis, whereas otherwise higher loop graphs in the skeleton expansion are not more divergent than the corresponding lower order graphs. The resulting leading ghost contribution in the scalar-gluon vertex DSE is given in fig. \ref{fig:gh-box}. Similar diagrams emerge from the ghost contributions to the scalar-2-gluon vertex equation. \\ Besides masses, QCD has no inherent scales far below the MeV scale. Thus in the IR regime $p\ll\Lambda_{QCD}$ all Green functions should exhibit a scaling form in terms of the external momenta. In the case that all external momenta vanish uniformly ({\em uniform} limit) with a single external scale $p$, power law scaling solutions of the form \begin{equation} \Gamma\left(p\right)\sim\left(p^{2}\right)^{\chi+\delta}\: \end{equation} are a natural ansatz for Green functions. Herein the canonical exponent $\chi$ reflects the dimension of the corresponding operator and the anomalous exponent $\delta$ describes the anomalous scaling induced by the dynamics. In the case of the scalar propagator and scalar-gluon vertex defined above the corresponding canonical dimensions are $\chi_{s}=-1$ and $\chi_{sg}=1/2$. In the uniform limit there is effectively only a single scale, and the loop integrals that are dominated by the poles of the integrand have to scale with this external scale. The scaling of a given loop graph can then be determined by a power counting analysis. Yet, when there are additional mass scales present, the loop integrals can also be dominated by scales of the order of the mass, which has to be considered in detail in the power counting analysis \cite{Alkofer:2008jy}. For a massive particle the propagator can alternatively be parametrized by a mass dressing function $M$ which features a massive IR behavior for $m\!\equiv\! M\!\left(0\right)\!>\!0$ and in this case contains an additional hard scale \begin{equation} \frac{\tilde{S}\left(p^{2}\right)}{p^{2}}=\frac{1}{p^{2}+M^{2}\left(p^{2}\right)} \sim \left(p^{2}\right)^{-1+\eta}. \end{equation} Therefore the mass dressing function $M$ yields an additional exponent $\eta$ in the power counting, with $\eta=1$ for massive and $\eta=0$ for massless particles.
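As a simple illustration of this parametrization, note that for $M\left(0\right)=m>0$ the propagator approaches a constant in the deep IR, while for $M\equiv0$ the free power law is retained,
\begin{equation*}
\frac{1}{p^{2}+M^{2}\left(p^{2}\right)}\;\stackrel{p^{2}\to0}{\longrightarrow}\;\frac{1}{m^{2}}\sim\left(p^{2}\right)^{0}\quad(\eta=1),\qquad\frac{1}{p^{2}}\sim\left(p^{2}\right)^{-1}\quad(\eta=0),
\end{equation*}
so that the hard scale $m$ effectively removes one inverse power of $p^{2}$ from the counting.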
\begin{figure} \parbox[c][1\totalheight]{0.5\columnwidth}{ \includegraphics[scale=0.17]{scalar_prop_oneloop}\\ \vspace*{0.4cm}\includegraphics[scale=0.17]{sg_vertex}\\ \vspace*{0.4cm}\includegraphics[scale=0.17]{4s_vertex} }\begin{minipage}[c][1\totalheight]{0.5\columnwidth}% \flushright\includegraphics[scale=0.17]{2s2g_vertex}% \end{minipage} \caption{The matter part of the considered DSE system.\label{fig:scalar-DSEs}\emph{ left}: The DSEs for the scalar propagator, the scalar-gluon vertex and the 4-scalar vertex; \emph{right}: The equation for the scalar-2-gluon vertex. All 2-loop contributions in these equations are IR suppressed. Permutations of the given diagrams and two-loop diagrams are denoted by the ellipses.} \end{figure} \begin{figure} \parbox[c][1\totalheight]{0.5\columnwidth}{ \includegraphics[scale=0.1]{gh_box} }\begin{minipage}[c][1\totalheight]{0.5\columnwidth} \flushright \includegraphics[scale=0.17]{gh_2s2g_exp} \end{minipage} \caption{\label{fig:gh-box}\emph{left:} The lowest order ghost contribution to the scalar-gluon vertex in the skeleton expansion. \emph{right:} By the same mechanism analogous diagrams arise in the 2-scalar-2-gluon vertex equation, stemming from the last two diagrams (the ghost-triangle and the ghost-loop diagram) of fig. \ref{fig:scalar-DSEs}.} \end{figure} \section{Uniform infrared scaling} In this section we discuss the fundamentally charged scalar model in the limit where all external momenta of Green functions vanish uniformly. In order to find such infrared solutions we perform a power counting analysis. The goal of this analysis is to find the scaling behavior of Green functions under the assumption that all Green functions scale with some power of $p^2$ in the deep infrared. In this case we can simply count the exponents of the different dynamical contributions in the DSEs and determine a solvable system for the scaling exponent of each Green function. As $p\rightarrow 0$ the dominant term which determines the scaling of the corresponding Green function is the one with the most divergent power law and correspondingly the one with the minimal IR exponent. With this procedure and the help of the skeleton expansion we obtain a closed system for the IR exponents of the primitively divergent vertex functions. For an illustrative example of this counting procedure consider the scalar propagator DSE given in fig. \ref{fig:scalar-DSEs} in the top left corner. On the left hand side the power law ansatz gives $(p^2)^{1-\delta_s}$, since it is the inverse propagator that arises in the DSE, which flips the sign of the exponent. The right hand side involves four different diagrams, and the dominant term is correspondingly given by a minimum function of the individual exponents. The first diagram is simply the inverse bare propagator giving $(p^2)^{1-\eta}$. In the second diagram one has to count the momentum dependence of the loop integration $(p^2)^2$, a scalar propagator $(p^2)^{-1+\delta_s}$, a gluon propagator $(p^2)^{-1+\delta_{gl}}$, a dressed scalar-gluon vertex $(p^2)^{\frac{1}{2}+\delta_{sg}}$ and the bare scalar-gluon vertex $(p^2)^{\frac{1}{2}}$ that appears in every DSE loop diagram. Performing the analogous power counting for the last two graphs and subtracting the canonical dimensions one obtains an equation for the anomalous dimension of the scalar propagator \begin{equation} -\delta_{s}= \min\left( -\eta, \ \delta_{s}+\delta_{gl}+\delta_{sg}, \ \delta_s, \ \delta_{gl} \right).
\end{equation} Applying the same procedure to all skeleton-expanded DSEs of the primitively divergent vertex functions of pure gauge theory in \cite{Alkofer:2008jy} and of the scalar sector in fig. \ref{fig:scalar-DSEs}, one obtains a closed system of equations for the IR exponents of the primitively divergent vertex functions. Due to the scalar self-interaction this system of equations for the anomalous IR exponents proves to be rather extensive in the scalar model and is explicitly given in \cite{Leo}. \\ The next task is then to find the leading infrared behavior of all Green functions for the different infrared fixed points as solutions of this purely algebraic system. As discussed in detail in \cite{Leo}, it is convenient to consider the equations of the gauge sector first, starting with the ghost equation. Depending on the boundary condition of the DSE, which determines whether the bare term is present or absent in the corresponding renormalized equation, this equation can, in addition to the trivial perturbative case, have two qualitatively different non-trivial solutions. These solutions are commonly referred to as {\em scaling} \cite{vonSmekal:1997is,Lerche:2002ep,Zwanziger:2001kw,Alkofer:2004it,Fischer:2008uz,Fischer:2006vf,Pawlowski:2003hq} and {\em decoupling} solution \cite{Fischer:2008uz,Boucaud:2008ji,Aguilar:2008xm,Dudal:2008sp,Alkofer:2008jy}; see \cite{Cucchieri:2008fc,Sternbeck:2008mv,Bowman:2007du,Bogolubsky:2009dc,Bornyakov:2008yx} and references therein for lattice results. Note that both solutions yield a confining Polyakov-loop potential \cite{Braun:2007bx}.\\ For each of these two classes of solutions one can subsequently solve the remaining system of coupled equations, starting with the gauge sector and continuing with the matter part. In the scalar sector different solutions are found in the massive and the massless case. The complete fixed point structure is given in table \ref{tab:fixed-points} and presents the main result of our analysis. The mass of the scalar is taken into account via the parameter $\eta$; thus these fixed points in principle hold both for a massless and for a massive scalar field, but in the following we will see that there are further subtleties for massive particles. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline & $\delta_{gh}$ & $\delta_{gl}$ & $\delta_{s}$ & $\delta_{gg}$ & $\delta_{3g}$ & $\delta_{4g}$ & $\delta_{sg}$ & $\delta_{sgg}$ & $\delta_{4s}$\tabularnewline \hline scaling & $-\kappa$ & $2\kappa$ & $\eta$ & $0$ & $-3\kappa$ & $-4\kappa$ & $-\eta\!-\!\kappa\:\vee\:0$ & ($-\eta\!-\!2\kappa\:\vee\:0$) & ($-\eta$)\tabularnewline \hline decoupling & $0$ & $1$ & $\eta$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\tabularnewline \hline tree-level & $0$ & $0$ & $\eta$ & $0$ & $0$ & $0$ & $0$ & $0$ & $0$\tabularnewline \hline \end{tabular} \caption{The anomalous power law exponents of the primitively divergent Green functions for the different fixed points of the fundamentally charged scalar theory in the uniform limit. The results are given both for massive scalars ($\eta=1$) and for massless scalars ($\eta=0$).
The given anomalous exponents represent the scaling of the propagators of the ghosts $\delta_{gh}$, the gluons $\delta_{gl},$ and the scalars $\delta_{s}$ as well as of the ghost-gluon vertex $\delta_{gg}$, the 3- and 4-gluon vertices $\delta_{3g}$ and $\delta_{4g}$, the scalar-gluon vertex $\delta_{sg}$, the scalar-2-gluon vertex $\delta_{sgg}$ and the 4-scalar vertex $\delta_{4s}$. These power laws are only valid up to possible logarithmic corrections. The full vertices include also the canonical scaling dimension $-1$ for the propagators and $\frac{1}{2}$ for the 3-point vertices. Note that the uniform limit is not sufficient to determine the dominating scaling exponents of the scalar-2-gluon and the 4-scalar vertex. The values given here in parentheses will be corrected below by the inclusion of kinematic divergences for massive scalars. The value of $\kappa$ is fixed by an explicit IR solution and the best known value is $\kappa\!\approx\!0.59$ \cite{Zwanziger:2001kw,Lerche:2002ep,Pawlowski:2003hq}.}\label{tab:fixed-points} \end{table} \noindent It is a remarkable result that, as in the case of QCD \cite{Alkofer:2008tt,Schwenzer:2008vt}, the additional scalar dynamics does not change the known IR fixed point structure of the Yang-Mills sector. The latter features two rather different possible IR scenarios of the continuum theory. First, the decoupling solution with a massive gluon propagator and no IR enhanced Green functions. Second, the scaling solution with a strongly IR singular ghost propagator, which induces similar divergences in gluonic vertex functions but an IR-suppressed gluon propagator. The scaling solution provides a mechanism for the confinement of the gauge degrees of freedom within the Kugo-Ojima \cite{Kugo:1979gm} and Gribov-Zwanziger \cite{Gribov:1977wm,Zwanziger:1991gz} scenarios. Moreover, as discussed in detail below it also provides a mechanism for the confinement of the matter fields \cite{Alkofer:2008tt}. Yet, only the first of these solutions has been observed in current (4-dimensional) lattice simulations \cite{Cucchieri:2008fc,Sternbeck:2008mv,Bowman:2007du,Bogolubsky:2009dc,Bornyakov:2008yx} and it is a challenging question whether the IR behavior they show is indeed the only solution that is realized when the continuum limit is taken, cf. e.g. \cite{Fischer:2008uz,Zwanziger:2009je,Maas:2009se,Kondo:2009qz,vonSmekal:2008ws,vonSmekal:2008es}.\\ Whereas the scalar sector is entirely trivial in the decoupling solution, there are two qualitatively different IR fixed points for the scaling solution, one with trivial vertices and another one with strongly divergent vertices. These two different fixed points are analogous to the case of QCD. In the massive case the divergent solution for the full scalar-gluon vertex including its canonical dimension features in particular precisely the same IR power law $-1/2-\kappa$ as the quark-gluon vertex in QCD \cite{Alkofer:2006gz}. In both theories this IR divergence of the vertex is completely induced by the gauge sector. The above scaling laws are not altered by the neglected 2-loop diagrams, as is checked explicitly in \cite{Leo}. Furthermore, the DSEs for higher $n$-point functions with $n>4$ are linear. Accordingly these equations cannot induce additional non-trivial fixed points and therefore the infrared exponents of these $n$-point functions are purely determined by the lower $n$-point functions. 
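As an illustrative consistency check, one may insert the exponents of table \ref{tab:fixed-points} into the counting equation for the scalar propagator derived above. For the scaling solution with a massive scalar and a divergent vertex, i.e. $\eta=1$, $\delta_{gl}=2\kappa$ and $\delta_{sg}=-1-\kappa$, one finds
\begin{equation*}
-\delta_{s}=\min\left(-1,\;1+2\kappa+\left(-1-\kappa\right),\;1,\;2\kappa\right)=\min\left(-1,\,\kappa,\,1,\,2\kappa\right)=-1,
\end{equation*}
so that $\delta_{s}=1=\eta$: the bare mass term dominates the IR behavior of the massive scalar propagator, whereas the loop corrections are subleading.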
Moreover, we find that in all cases there are leading graphs that do not involve 4-point functions, in complete analogy to the case of QCD, where such 4-point functions are not primitively divergent. \\ In the approximation discussed so far, the scalar 4-point vertices in table \ref{tab:fixed-points} feature in the massive case only rather mild divergences. In particular, the obtained divergence of the one-particle irreducible 4-scalar vertex is weaker than the divergence of the corresponding one-particle reducible vertex built from a gluon exchange via two scalar-gluon vertices. We will show below that this is a shortcoming of the present restriction to uniform scaling exponents. In general a more diverse IR behavior can be realized in which Green functions also have kinematic divergences if only a subset of the external momenta vanishes whereas the others remain finite. In this case the loop integrals can receive dominant contributions from hard modes even when all external scales are small. Correspondingly, kinematic divergences can alter the uniform power laws. For this reason the present results for the 4-scalar and the scalar-2-gluon vertex from the uniform limit are given in parentheses in table \ref{tab:fixed-points}. They are only correct for massless particles but underestimate the actual divergence in the massive case. We will explain in the next section how the correct exponents are recovered once kinematic divergences are taken into account. \\ Finally, we want to point out that a solution with a divergent scalar propagator is not possible, due to the self-interaction of the scalar; cf. \cite{Huber:2009wh} for a general treatment of this issue. \section{Kinematic singularities and static confinement} As mentioned before, the uniform solution discussed so far presents only a special case of the actual possible IR behavior. In the following we will extend this by the inclusion of kinematic divergences of the vertex functions \cite{Fischer:2006vf,Alkofer:2008jy,Alkofer:2008dt}. Such kinematic divergences provide a mechanism for a long-range interaction that can confine quarks in quenched QCD \cite{Alkofer:2008tt}. Therefore we are interested in whether this mechanism can also confine static scalar sources, i.e. whether the relevant Green functions feature the same IR scaling exponents. A general discussion of kinematic divergences in the case of the scalar theory considered here is quite complicated, due to the additional interactions and the many possible kinematic limits of the 4-point vertices, and is therefore beyond the scope of this work. However, in the following we will present the line of argument that shows that the kinematic divergence of the scalar-gluon vertex in the static case is indeed as strong as that of the quark-gluon vertex in QCD. To this end we restrict the discussion in the following to the quenched approximation, where the scalar sector does not affect the gauge sector and can be analyzed independently. Since it was found above that the 4-point functions do not alter the IR scaling laws, we can restrict our investigation to the equations for the scalar propagator and the scalar-gluon vertex. We have checked explicitly in \cite{Leo} that this remains true when taking into account the enhanced scaling of the vertices obtained below. \\ A comparison with the corresponding QCD equations shows that the equations for the scalar propagator and the scalar-gluon vertex in fig.
\ref{fig:scalar-DSEs} are diagrammatically similar to the corresponding equations for the quark propagator and the quark-gluon vertex, except for additional terms in the scalar theory that stem from the additional 4-point interactions. In order to find the IR exponents a refined power counting analysis has to be employed that takes into account the possibility of momentum and mass scales that stay finite when the IR limit is taken. As shown in detail in \cite{Alkofer:2008jy}, the scale separation between these soft and hard scales in the IR limit allows one to decompose the arising loop integrals identically into several integrals that each depend only on a single external scale, which directly determines the scaling of the corresponding contribution. This yields a system for the IR exponents of the scalar propagator $\delta_s$ and the scalar-gluon vertex in the uniform limit $\delta_{sg}^u$ as well as $\delta_{sg}^s$ and $\delta_{sg}^{gl}$ in the limits where only the scalar or only the gluon momentum vanishes, respectively. Strikingly, the additional terms from the 4-point interactions in these equations can be shown to be subleading \cite{Leo} using constraints from the inequivalent towers of DSEs and RGEs \cite{Fischer:2006vf}. Correspondingly the scalar system effectively reduces to a DSE system that is, up to the different canonical dimensions, identical to that of quenched QCD. The different canonical dimensions only further suppress terms that are subleading in the case of QCD. Thus the same fixed point structure is realized and the scalar-gluon vertex features the same scaling behavior as the quark-gluon vertex. \\ The solution is given in table \ref{tab:kin-fixed-points}, where the superscripts denote the soft momenta in the specific limit. It features strong kinematic divergences of the scalar-gluon vertex in the limit where only the external gluon momentum vanishes. Note that this soft-gluon divergence is of the same strength as the uniform divergence and that the inclusion of kinematic divergences does not change the uniform scaling of the scalar-gluon vertex. \begin{table*} \begin{tabular}{|c|c|c|c|c|} \hline & $\delta_{s}$ & $\delta_{sg}^{u}$ & $\delta_{sg}^{gl}$ & $\delta_{sg}^{s}$\tabularnewline \hline scaling & $\eta$ & $-\eta\!-\!\kappa\:\vee\:0$ & $-\eta\!-\!\kappa\:\vee\:0$ & $0$\tabularnewline \hline decoupling & $\eta$ & $0$ & $0$ & $0$\tabularnewline \hline \end{tabular} \caption{The anomalous power law exponents of the leading correlation functions of the quenched scalar model when taking into account kinematic divergences. \label{tab:kin-fixed-points}} \end{table*} \noindent However, as in the case of QCD, for higher correlation functions the consideration of kinematic divergences is required in a DSE study even to obtain the proper scaling in the uniform limit. As explained in \cite{Alkofer:2008tt} this is a peculiarity of the DSEs owing to the fact that they involve a bare vertex in each graph. Thereby, in a theory with enhanced vertices, IR strength can be missing in the lowest order corrections and is instead generated dynamically in the contributions from correlation functions with one more external leg. To see this, consider the last term in the equation for the 4-scalar vertex, which involves a 5-point function. The latter satisfies its own DSE, which contains a graph with a simple gluon exchange correction involving only scalar-gluon vertices. Inserting this correction into the corresponding term in the 4-scalar DSE, as visualized in the left part of fig.
\ref{fig:4-point} yields a 2-loop diagram that looks very similar to the ordinary gluon exchange graph in the 4-scalar DSE, with the difference that the bare vertex now has a vertex correction. When the scalars are massive, this vertex correction loop receives contributions from hard loop momenta and scales only due to the kinematic divergence of the dressed vertex, cf. fig. \ref{fig:4-point} (a). In the heavy mass limit the hard loop simply shrinks to a point and precisely yields a fourth dressed vertex, which is present from the outset in functional RG equations. This mechanism finally yields the leading IR scaling laws for the 4-point vertices in table \ref{tab:fixed-points}.\\ Now let us discuss the interaction between static scalar sources, which is described by the heavy mass limit of the full 4-scalar vertex. Here the relevant limit is not the uniform kinematic configuration, where the momenta of the external scalars are in the IR regime and the scalars would correspondingly be on the light cone in Minkowski space, but exactly the opposite limit, where their mass is large and the external momenta are of the order of this large scale. Nevertheless, when the scalars are spatially far separated, the exchanged gluon momentum becomes soft and the correlator can be IR enhanced. Due to the same mechanism described above the leading contribution arises from the graph on the left of fig. \ref{fig:4-point} and the dominant kinematic contribution is given by graph (b) on the right hand side, again effectively adding a fourth dressed vertex. The corresponding graph with a soft gluon exchange therefore scales in the IR as \begin{equation} (p^{2})^{2}\Bigl((p^{2})^{-\frac{1}{2}-\kappa}\Bigr)^{4}\Bigl((p^{2})^{-1+2\kappa}\Bigr)^{2}=(p^{2})^{-2}, \end{equation} which leads after Fourier transformation in the static limit to a linear potential in coordinate space \begin{equation} V(r)\sim\int d^{3}p\frac{e^{ipr}}{p^{4}}\sim|r|\:. \end{equation} The Fourier transform can be evaluated, e.g., by differentiating the Yukawa integral $\int d^{3}p\, e^{ip\cdot r}/(p^{2}+m^{2})\sim e^{-m|r|}/|r|$ with respect to $m^{2}$ and taking $m\to0$, which yields the linear rise up to an $r$-independent (divergent) constant. Correspondingly, scalars are subject to the same static confinement mechanism as quarks in the case of QCD. In contrast, the decoupling solution does not provide a corresponding mechanism, i.e. if the decoupling solution were confining this would not be reflected in any $n$-point function with finite $n$. \\ It is easy to convince oneself that the universality of the long-range interaction between static fundamental color sources in the discussed mechanism is not restricted to the explicitly studied cases of Dirac fermions and scalars but should actually hold for matter fields in any representation of the Lorentz group. This follows since the performed IR analysis is totally independent of the Lorentz structure and depends only on the topology of the individual graphs in the DSEs and the involved propagators and vertices. The case of a complex scalar field considered here presents the renormalizable theory of matter fields coupled to a Yang-Mills sector with the most general interactions in four spacetime dimensions. The explicitly considered cases of scalars and Dirac fermions present precisely the two distinct possibilities for the dynamics as far as the topology of possible graphs is concerned. Deviations from the scaling laws obtained within a power counting analysis are only possible if there are identical cancellations of the leading graphs in the DSEs. In the case of the scalar-gluon respectively the quark-gluon vertex there is actually only a single leading diagram in the corresponding DSE, cf. figs.
\ref{fig:scalar-DSEs} and \ref{fig:gh-box}, so that cancellations between different graphs are impossible here. This shows that the universality of the long-range interaction between static fundamentally charged sources, which is a natural property in the lattice framework, is indeed realized in the functional framework as well. \\ Finally, we note that although in this work we studied only gauge dependent Green functions, the above 4-point correlator represents the lowest term in a power series representation of the exponential arising in the corresponding gauge invariant correlator where the two static sources are connected by a Wilson line. Therefore, provided this leading term is not cancelled identically by higher order terms in the series, this gauge independent quantity likewise shows the observed long-range interaction. \begin{figure} \parbox[c][1\totalheight]{0.5\columnwidth}{ \includegraphics[scale=0.16]{./4s_exp} }% \begin{minipage}[c][1\totalheight]{0.5\columnwidth}% \flushright\includegraphics[scale=0.17]{./4q_heavy_quarks}% \end{minipage} \vspace*{1.3cm} \caption{\emph{left}: An IR leading contribution to the 4-scalar vertex arising when inserting a corresponding contribution from the DSE of the 4-scalar-gluon vertex; \emph{right}: Kinematic configurations that yield the leading order contributions for the scaling of the vertex in the case that all external momenta vanish uniformly (a) and in the limit when only the momentum transfer between the scalars becomes small (b). The labels denote whether the momenta running through the corresponding propagators are soft ($s$), i.e. vanish in the IR limit, or hard ($h$), i.e. of the order of the scalar mass. \label{fig:4-point}} \end{figure} \section{Conclusion} We have studied the IR fixed point structure of a non-Abelian gauge theory coupled to a scalar matter field in the fundamental representation of the gauge group as a schematic model for the QCD dynamics. We find that the IR fixed point structure is indeed identical to the case of QCD and that for one type of solution a kinematic divergence of the scalar-gluon vertex induces a linear confining interaction between static sources. The qualitatively identical confinement aspects of the scalar model compared to QCD show that this confinement mechanism is indeed universal and does not depend on the particular features of the matter fields. Instead the long-range interaction between fundamental sources is a property of the gauge sector, in complete analogy to results of lattice gauge simulations. Therefore, it is merely a technical complication that, within functional Green function methods, the coupling of a fundamental color source to the gauge sector has to be obtained from the static limit of the dynamical equations of the corresponding matter fields. In this limit the kinematic divergence of the matter-gauge vertex simply describes the non-trivial dressing of the static color source. \\ Finally, we want to emphasize that due to the presented results the scalar theory presents an ideal model system to study a potential confinement mechanism in detail. Within functional approaches the minimal system of coupled integral equations for the matter sector decreases from 14 for fermionic fields to 3 in the scalar case. This should strongly simplify the numerical treatment and allow one to study quantitative aspects of the mechanism.
Even more importantly, the dramatic simplifications of scalars compared to fermions in lattice gauge simulations could allow this mechanism to be tested on the lattice as well. \acknowledgments We are grateful to Christian S. Fischer, Jeff Greensite, Markus Huber, Axel Maas, Stefan Olejnik and Jan M. Pawlowski for helpful discussions; furthermore we thank Markus Huber and Jan M. Pawlowski for a critical reading of the manuscript. LF acknowledges financial support by the Helmholtz-Alliance HA216/EMMI.
\section{Introduction} The Rellich type theorem for the Helmholtz equation is the following assertion (\cite{Re43}): Suppose $u\in H_{loc}^2 ( {\bf R}^d )$ satisfies \begin{equation*} (-\Delta -\lambda )u =0 \quad \text{in} \quad \{ |x|>R_0 \} , \end{equation*} for some constants $\lambda $, $R_0 >0 $, and \begin{equation*} u(x) = o(|x|^{-(d-1)/2}), \quad |x|\to\infty. \end{equation*} Then $u(x)=0 $ on $\{ |x| >R_0 \}$. This theorem has been extended to a broad class of Schr{\"o}dinger operators, since it implies the non-existence of eigenvalues embedded in the continuous spectrum (see e.g. \cite{Ka59}, \cite{Ro69}, \cite{Ag70}), and it also plays an important role in the proof of the limiting absorption principle, which yields the absolute continuity of the continuous spectrum (see e.g. \cite{Ei62}, \cite{IkSa72}). The Rellich type theorem states a local property of solutions at infinity. Namely, it proves $u(x) = 0$ on $\{|x|>R_1\}$ for some $R_1 > R_0$. By the unique continuation property, it then follows that $u(x) = 0$ for $|x| > R_0$. In the theory of linear partial differential equations (PDE), the Rellich type theorem can be regarded as a problem of division in momentum space. In fact, given a linear PDE with constant coefficients $P(D)u = f$, $f$ being compactly supported, the Fourier transform leads to the algebraic equation $P(\xi)\widetilde u(\xi) = \widetilde f(\xi)$, where $\widetilde u(\xi)$ denotes the Fourier transform of $u(x)$. If $P(\xi)$ divides $\widetilde f(\xi)$, $u$ is compactly supported due to the Paley-Wiener theorem. This approach was pursued by Treves \cite{Tre60}, and then developed by Littman \cite{Lit66}, \cite{Lit70}, H{\"o}rmander \cite{Hor70} and Murata \cite{Mur76}. One should note that Besov spaces appear naturally throughout these works. In this paper, we shall consider its extension to the discrete case. Throughout the paper, we shall assume that $d \geq 2$. Let ${\bf Z}^d = \{n = (n_1,\cdots,n_d)$ $ ; n_i \in {\bf Z}\}$ be the square lattice, and $e_{1} = (1,0,\cdots,0)$, $\cdots$, $e_{d} = (0,\cdots,0,1)$ the standard basis of ${\bf Z}^d$. The discrete Laplacian $\Delta_{disc}$ is defined by \begin{equation} \big(\Delta_{disc}\widehat u\big)(n) = \frac{1}{4}\sum_{j=1}^{d} \big( {\widehat u}(n + e_{j}) + {\widehat u}(n - e_{j}) \big) - \frac{d}{2}\,\widehat u(n) \nonumber \end{equation} for a sequence $\{\widehat u(n)\}_{n\in{\bf Z}^d}$. Our main theorem is the following. \begin{theorem}\label{rellich} Let $\lambda \in (0,d)$ and $R_0>0$. Suppose that a sequence $\{\widehat u(n)\}$, defined for $\{ n\in {\bf Z}^d \ ; \ |n| \geq R_0 \} $, satisfies \begin{equation} (- \Delta_{disc} - \lambda)\widehat u = 0 \quad \text{in} \quad \{ n\in {\bf Z}^d \ ; \ |n| > R_0 \} , \label{S1Equation} \end{equation} \begin{equation} \lim_{R\to\infty}\frac{1}{R}\sum_{R_0<|n|<R}|\widehat u(n)|^2 = 0. \label{S1DecayCond} \end{equation} Then there exists $R_1 > R_0$ such that $\widehat u(n) = 0$ for $|n| > R_1$. \end{theorem} Note that the spectrum of $ \widehat{H}_0 = -\Delta_{disc} $ is equal to $[0,d] $ and is absolutely continuous (see e.g. \cite{IsKo}). A precursor of this theorem is given in the proof of Theorem 9 of Shaban-Vainberg \cite{Sha}. Their purpose was to compute the asymptotic expansion of the resolvent $\widehat{R}_0 (\lambda \pm i0 )=( -\Delta_{disc} -\lambda \mp i0 )^{-1}$ on ${\bf Z}^d$ and to find the associated radiation condition, which implies the uniqueness of the solution to the discrete Helmholtz equation.
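The role of the interval $(0,d)$ can be seen by a direct computation: on the plane wave $\widehat u(n) = e^{in\cdot x}$, $x \in [-\pi,\pi]^d$, the discrete Laplacian acts as a multiplier,
\begin{equation*}
\big( -\Delta_{disc}\, \widehat u \big)(n) = \Big( \frac{d}{2} - \frac{1}{2}\sum_{j=1}^{d}\cos x_j \Big) e^{in\cdot x},
\end{equation*}
and this multiplier ranges over $[0,d]$ as $x$ varies over $[-\pi,\pi]^d$; it reappears as the function $h(x)$ below.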
Let \begin{equation} {\bf T}^d = {\bf R}^d/(2\pi{\bf Z})^d \cong [-\pi,\pi]^d \label{S1Torus} \end{equation} be the $d$-dimensional flat torus and put \begin{equation} h(x)=\frac{1}{2}\Big(d - \sum_{j=1}^d\cos x_j\Big) , \quad M_{\lambda}=\big\{x\in {\bf T}^d \ ; \ h(x)= \lambda \big\}. \label{h(x)Mlambda} \end{equation} $M_{\lambda }$ is called the \textit{Fermi surface} of the discrete Laplacian. For $\lambda \in (0,d)\setminus {\bf Z}$, the Fermi surface $M_{\lambda}$ is a $(d-1)$-dimensional smooth submanifold of ${\bf T}^d$. On the other hand, if $\lambda \in (0,d)\cap {\bf Z}$, $M_{\lambda }$ has some isolated singularities. Passing to the Fourier series, $-\Delta_{disc} $ is unitarily equivalent to the operator of multiplication by $h(x) $ on ${\bf T}^d$. Therefore, the computation of the behavior of $\widehat R_0(\lambda\pm i0)$ boils down to that of an integral over $M_{\lambda}$. For a compactly supported function $\widehat{f} \in \ell^2 ({\bf Z}^d )$ and $\lambda \in (0,d)\setminus {\bf Z}$, the stationary phase method gives the following asymptotic expansion as $|n| \to \infty$: \begin{gather} \begin{split} &\big( \widehat{R}_0 (\lambda \pm i0 )\widehat{f} \big)(n) \\ &= |n|^{-(d-1)/2} \sum_{j} e^{\pm i n \cdot x_{\infty} ^{(j)} (\lambda , \omega_n )} a^{(j)}_{\pm} (\lambda , \omega_n ) +O(|n|^{-(d+1)/2}), \end{split} \label{SV_asymptotic} \end{gather} where $\omega_n = n/|n|$ is assumed to be \textit{non-singular}, i.e. the Gaussian curvature does not vanish at any of the stationary phase points $x^{(j)}_{\infty} (\lambda,\omega_n) \in M_{\lambda}$, at which the normal of $M_{\lambda}$ is parallel to $\omega_n$. It is then natural to define the radiation condition by using the first term of the above asymptotic expansion (\ref{SV_asymptotic}). To show the uniqueness of solutions to the discrete Helmholtz equation satisfying the radiation condition, they proved the following assertion (which is actually buried in the proof): \medskip \noindent {\bf (S-V)} \ \textit{Let $\lambda \in (0,d)\setminus {\bf Z}$. Any solution of $(-\Delta_{disc} -\lambda )\widehat{u} =0 $ in $\{|n| >R_0 \}$ for $R_0 >0$, satisfying $\widehat{u} (n)=O(|n| ^{-(d+1)/2} )$, vanishes on $\{ |n| >R_1 \}$ for sufficiently large $R_1 >0$.} \medskip The new ingredient in the present paper is the following fact, to be proved in \S 4.2: Consider the equation \begin{equation} (h(x)-\lambda )u(x) = f(x) \quad {\rm on} \quad {\bf T}^d. \label{S1EquationonTd} \end{equation} If the Fourier coefficients $\widehat u(n)$ of the distribution $u$ satisfy (\ref{S1DecayCond}) and $\widehat f(n)$ is compactly supported, then $u$ is smooth on ${\bf T}^d $ except on a null set (in fact, the exceptional set is a discrete set of points); hence $f(x) = 0$ on $M_{\lambda }$, since $f$ is an analytic function. Our proof does not depend on the asymptotic expansion of the resolvent, and uses the optimal decay assumption (\ref{S1DecayCond}). Moreover, it allows us to extend the theorem to the interior threshold energies $\lambda \in (0,d)\cap {\bf Z}$, for which the Fermi surface has some singularities. We derive some basic facts about the Fermi surface in \S 4.1. Once we establish this fact, we can follow the arguments for proving the assertion (S-V), with some modifications, to show that $\widehat u(n)$ is compactly supported. For the sake of completeness, in \S 4.2, we will also reproduce the proof of this part, which makes use of basic facts from the theory of functions of several complex variables and from algebraic geometry.
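To illustrate the threshold singularities by an example (for orientation only; it is not used in the proofs), take $d=2$ and $\lambda = 1 \in (0,2)\cap{\bf Z}$. Then
\begin{equation*}
M_1 = \big\{ x\in {\bf T}^2 \ ; \ \cos x_1 + \cos x_2 = 0 \big\} = \{ x_2 = \pi - x_1 \} \cup \{ x_2 = x_1 - \pi \}
\end{equation*}
consists of two straight lines on the torus which cross at the points $(0,\pi)$ and $(\pi,0)$, precisely the points where $\nabla h = \frac{1}{2}(\sin x_1 , \sin x_2)$ vanishes; for $\lambda \in (0,2)\setminus{\bf Z}$ the level set is a smooth closed curve.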
As applications of Theorem \ref{rellich}, we show in \S 2 the non-existence of eigenvalues embedded in the continuous spectrum for $- \Delta_{disc} + \widehat V$ in the whole space as well as in exterior domains. We also state the unique continuation property for exterior domains. The result of the present paper is used as a key step in \cite{IsMo} on the inverse scattering from the scattering matrix at a fixed energy for discrete Schr{\"o}dinger operators with compactly supported potentials. In \cite{IsKo}, the inverse scattering from all energies was studied by using the complex Born approximation (see also \cite{Es}). Function theory of several complex variables and algebraic geometry have already been utilized as powerful tools not only in linear PDE but also in the study of spectral properties of discrete Schr{\"o}dinger operators or periodic problems. See e.g. Eskina \cite{Es}, Kuchment-Vainberg \cite{KuVa}, G{\'e}rard-Nier \cite{GeNi98}. We give some remarks about the notation in this paper. For $x,y \in {\bf R}^d $, $x\cdot y= x_1 y_1 +\cdots +x_d y_d $ denotes the ordinary scalar product, and $|x| = (x \cdot x)^{1/2}$ is the Euclidean norm. Note that even for $n = (n_1,\cdots,n_d) \in {\bf Z}^d$, we use $|n| = (\sum_{i=1}^d|n_i|^2)^{1/2}$. For a Banach space $X$, $ \mathbf{B} (X)$ denotes the totality of bounded operators on $X$. For a self-adjoint operator $A$ on a Hilbert space, $\sigma (A)$, $\sigma _{ess} (A) $, $\sigma_{ac} (A)$ and $\sigma_p (A)$ denote the spectrum, the essential spectrum, the absolutely continuous spectrum and the point spectrum of $A$, respectively. For a set $S$, $\,^{\#}S$ denotes the number of elements in $S$. We use the notation \begin{equation} \langle t\rangle = (1 + |t|^2)^{1/2}, \quad t \in {\bf R}. \nonumber \end{equation} For a positive real number $a $ and $z' \in {\bf C}$, we write $ z \equiv z' \ (\mathrm{mod} \ a )$ if $z$ belongs to the equivalence class $ \{ z \in {\bf C} \ ; \ z=z' +a N \ \text{for some} \ N \in {\bf Z} \}$. \subsection{Acknowledgement} The authors are indebted to Evgeny Korotyaev for useful discussions and encouragements. The second author is supported by the Japan Society for the Promotion of Science under the Grant-in-Aid for Research Fellow (DC2) No. 23110. \section{Some applications of Theorem \ref{rellich}} \subsection{Absence of embedded eigenvalues in the whole space} Granting Theorem \ref{rellich}, we state its applications in this section. The Schr{\"o}dinger operator ${\widehat H}$ on ${\bf Z}^d$ is defined by \begin{equation} {\widehat H} = - \Delta_{disc} + {\widehat V}, \nonumber \end{equation} where $\widehat V$ is the multiplication operator: \begin{equation} ({\widehat V}\,{\widehat u})(n) = \widehat V(n)\,{\widehat u}(n). \nonumber \end{equation} \begin{theorem}\label{NonExistenceWholeSpace} If $\widehat V(n) \in {\bf R}$ for all $n$ and there exists $R_0 > 0$ such that $\widehat V(n) = 0$ for $|n| > R_0$, then $\sigma_p(\widehat H)\cap (0,d) = \emptyset.$ \end{theorem} Since $\widehat{V}$ is compactly supported, Weyl's theorem yields $\sigma_{ess}(\widehat H) = [0,d]$ (see \cite{IsKo}). Therefore, Theorem \ref{NonExistenceWholeSpace} asserts the non-existence of eigenvalues embedded in the continuous spectrum except for the endpoints of $\sigma_{ess} (\widehat{H})$. We also note that \cite{HSSS} has given an example of an embedded eigenvalue at an endpoint of $\sigma_{ess} (\widehat{H} )$. \medskip {\it Proof of Theorem \ref{NonExistenceWholeSpace}}.
Suppose that $\lambda \in (0,d) \cap \sigma_p (\widehat{H} )$ and $\widehat{u} \in \ell^2 ({\bf Z}^d )$ is an associated eigenfunction, i.e. $(- \Delta_{disc} +\widehat{V}-\lambda )\widehat{u}=0$. Putting $\widehat{f}= - \widehat V\widehat{u} $, which is compactly supported by the assumption on $\widehat V$, we have the equation \begin{equation*} (- \Delta_{disc}-\lambda )\widehat{u}=\widehat{f} \quad \text{on} \quad {\bf Z}^d. \end{equation*} Since $\widehat u \in \ell^2({\bf Z}^d)$, the condition (\ref{S1DecayCond}) is satisfied. Theorem \ref{rellich} then implies that $\widehat{u} $ is compactly supported. Therefore, there exists $m_1 \in {\bf Z}$ such that $\widehat{u}(n)=0 $ if $n_1 \geq m_1$. Using the equation $(- \Delta_{disc} +\widehat{V} -\lambda )\widehat{u} =0$, which holds on the whole of ${\bf Z}^d$, we then have \begin{gather*} \begin{split} &\frac{1}{4} \widehat{u} ( m_1 -1 ,n') \\ &=\left( -\Delta_{disc}^{(d-1)} +\widehat{V} (m_1 , n') -\lambda\right)\widehat{u}(m_1 ,n')+\frac{1}{2} \widehat{u} (m_1 , n') -\frac{1}{4} \widehat{u} (m_1 +1 ,n') \\ &=0 , \end{split} \end{gather*} where $n'=(n_2 , \cdots , n_d )$ and $\Delta_{disc}^{(d-1 )} $ is the discrete Laplacian on ${\bf Z}^{d-1}$; the right-hand side vanishes since $\widehat{u}(m_1 ,n') = \widehat{u}(m_1 +1 ,n') = 0$. Repeating this procedure, we obtain $\widehat u(n) = 0$ for all $n$, which completes the proof. \qed \subsection{Unique continuation property} The next problem we address is the unique continuation property. We begin by explaining the exterior problem. A subset $\Omega \subset {\bf Z}^d$ is said to be {\it connected} if for any $m, n \in \Omega$, there exists a sequence $n^{(0)}, \cdots, n^{(k)} \in \Omega$ with $n^{(0)}=m, n^{(k)}=n$ such that for all $0 \leq \ell \leq k-1$, $|n^{(\ell)}-n^{(\ell+1)}|=1$. For a connected subset $\Omega \subset {\bf Z}^d$, we put \begin{equation} {\rm deg}_{\Omega}(n) =\, ^{\# }\{m \in \Omega\, ; \, |m-n|=1\}, \quad n \in \Omega. \label{S1Degree} \end{equation} The interior $\stackrel{\circ}\Omega$ and the boundary $\partial \Omega$ are defined by \begin{gather} \stackrel{\circ}\Omega\, = \{n \in \Omega\, ; \, {\rm deg}_{\Omega}(n) = 2d\}, \label{S1InteriorOmega} \\ \partial \Omega = \{n \in \Omega\, ; \, {\rm deg}_{\Omega}(n) < 2d\}. \label{S1PartialD} \end{gather} The normal derivative on $\partial \Omega $ is defined by \begin{equation} \partial _{\nu} \widehat{u}(n)= \frac{1}{4} \sum_{ m\in \stackrel{\circ}\Omega , |m-n|=1 } \big( \widehat{u}(n)-\widehat{u} (m) \big) , \quad n\in \partial \Omega . \label{S2normald} \end{equation} Then, for a bounded connected subset $\Omega $, the following Green formula holds: \begin{gather} \begin{split} &\sum_{ n\in \stackrel{\circ}\Omega } \Big( \big( \Delta_{disc} \widehat{u} \big) (n) \cdot \widehat{v} (n) - \widehat{u} (n)\cdot \big( \Delta_{disc} \widehat{v} \big) (n)\Big) \\ &= \sum_{ n\in \partial \Omega} \Big( (\partial _{\nu }\widehat{u} \big) (n) \cdot \widehat{v} (n) - \widehat{u} (n) \cdot \big( \partial _{\nu} \widehat{v} \big) (n) \Big). \end{split} \label{greenformula} \end{gather} Indeed, the standard definition of the Laplacian on a graph is (see e.g. \cite{Du84}) \begin{equation} -\big( \Delta_{disc}^{\Omega} \widehat{u} \big)(n):= \left\{ \begin{split} &- \big( \Delta_{disc} \widehat{u} \big)(n) , \quad ( n\in \stackrel{\circ}\Omega ), \\ &\big( \partial _{\nu} \widehat{u}\big) (n), \quad ( n\in \partial \Omega) , \end{split} \right.
\label{graph_laplacian} \end{equation} which yields \begin{equation} \sum_{n\in \Omega } \big( \Delta_{disc}^{\Omega} \widehat{u} \big)(n) \cdot \widehat{v}(n) = \sum_{n\in \Omega}\widehat{u}(n) \cdot \big( \Delta_{disc}^{\Omega} \widehat{v} \big) (n) , \quad \widehat{u} , \ \widehat{v} \in \ell^2 (\Omega ). \label{S2symmetry} \end{equation} Splitting the sum (\ref{S2symmetry}) into two parts, the ones over $\stackrel{\circ}\Omega $ and over $\partial \Omega $, we obtain (\ref{greenformula}). Let $\Omega_{ext}$ be an exterior domain, which means that there is a bounded set $\Omega_{int}$ such that $\Omega_{ext} = {\bf Z}^d \, \setminus \stackrel{\circ}\Omega_{int}$. We assume that $\Omega_{ext} $ is connected. We consider the Schr{\"o}dinger operator \begin{equation} \widehat H_{ext} = - \Delta_{disc} + \widehat V \label{S1Hext} \end{equation} without imposing a boundary condition, where $\widehat V$ is a real-valued compactly supported potential. Now suppose there exist $\lambda \in (0,d)$ and $\widehat u$ satisfying (\ref{S1DecayCond}) and \begin{equation} (\widehat H_{ext} - \lambda)\widehat u = 0 \quad {\rm in}\quad \stackrel{\circ}\Omega_{ext}. \label{S1ExtScroedingerEq} \end{equation} By Theorem \ref{rellich}, $\widehat u$ vanishes near infinity. However, in the discrete case the unique continuation property of the Laplacian does not hold in general. It depends on the shape of the domain. To guarantee it, we introduce the following {\it cone condition}. For $1 \leq i \leq d$ and $n \in {\bf Z}^d$, let $C_{i,\pm}(n)$ be the cone defined by \begin{equation} C_{i,\pm}(n) = \Big\{ m \in {\bf Z}^d\, ; \, \sum_{k\neq i}|m_k-n_k| \leq \pm (m_i - n_i) \Big\}. \label{S1Conecond} \end{equation} \begin{definition}\label{ConeCond} An exterior domain $\Omega_{ext}$ is said to satisfy a {\it cone condition} if for any $n \in\Omega_{ext}$, there is a cone $C_{i,+}(n)$ (or $C_{i,-}(n)$) such that $C_{i,+}(n) \subset \Omega_{ext}$ (or $C_{i,-} (n)\subset \Omega_{ext}$). \end{definition} Examples of domains satisfying this cone condition are \begin{itemize} \item $\left(\Omega_{ext}\right)^c$ = a rectangular polyhedron = $\{n \in {\bf Z}^d\, ; \, |n_i| \leq a_i, \ i = 1, \cdots, d\}$, \item $\left(\Omega_{ext}\right)^c$ = a rhombus = $\{n \in {\bf Z}^d\, ; \, \sum_{i=1}^d|n_i| \leq C\}$, \item a domain with zigzag type boundary (see Figure \ref{Fig1}). \end{itemize} \begin{figure}[t] \centering \input{fig12} \caption{The zigzag type boundary.} \label{Fig1} \end{figure} \begin{theorem} \label{UniqueConti} Let $\widehat H_{ext}$ be a Schr{\"o}dinger operator in an exterior domain $\Omega_{ext} \subset {\bf Z}^d$ with compactly supported potential. Suppose $\Omega_{ext}$ satisfies the cone condition. If there exist $\lambda \in (0,d) $ and $\widehat u$ satisfying (\ref{S1ExtScroedingerEq}) and (\ref{S1DecayCond}), then $\widehat u = 0$ on $\Omega_{ext}$. \end{theorem} Proof. Take any $n \in \Omega_{ext}$. By the cone condition, there is a cone, say $C_{1,+}(n)$, such that $C_{1,+}(n) \subset \Omega_{ext}$. By Theorem \ref{rellich}, there is $k_1$ such that $\widehat u(m) = 0$ for $m \in C_{1,+}(n)$ with $m_1 > k_1$. Arguing as in the proof of Theorem \ref{NonExistenceWholeSpace}, we have $\widehat u(k_1,m') = 0$ for $(k_1,m') \in C_{1,+}(n)$. Repeating this procedure, we arrive at $\widehat u(n) = 0$.
\qed \bigskip An example of a domain which does not satisfy the cone condition is one (in two dimensions) whose boundary in the fourth quadrant has the form illustrated in Figure \ref{Fig2}, and is rectangular in the other quadrants. In this case, $\widehat u$ defined as in the figure satisfies $$ \big(\widehat H_{ext} -\frac{1}{2}\big)\widehat u = 0, \quad {\rm in} \quad \stackrel{\circ}\Omega_{ext}, $$ and $\widehat u = 0$ on $\stackrel{\circ}\Omega_{ext}$, while $\widehat u \not\equiv 0$ on $\partial\Omega_{ext}$. \medskip \begin{figure}[t] \input{fig13} \caption{A counterexample to the unique continuation property.} \label{Fig2} \end{figure} \subsection{Exterior eigenvalue problem} Now let $\widehat H_{ext}^{(D)}$ be $\widehat H_{ext}$ subject to the Dirichlet boundary condition \begin{equation} \widehat{u} (n)=0 \quad \text{for any} \quad n\in \partial \Omega_{ext}, \label{S1DiricletOp} \end{equation} and $\widehat H_{ext}^{(R)}$ that subject to the Robin boundary condition \begin{equation} (\partial_{\nu} \widehat{u} )(n) +c(n) \widehat{u} (n) =0 \quad \text{for any} \quad n\in \partial \Omega_{ext}, \label{S1RobinOp} \end{equation} where $c(n)$ is a bounded real-valued function on $\partial \Omega_{ext} $. Here we regard $\widehat{H}_{ext}^{(D)} $ as the bounded self-adjoint operator on $\ell^2 (\stackrel{\circ}\Omega_{ext} )$ with the Dirichlet boundary condition (\ref{S1DiricletOp}). On the other hand, $\widehat{H}_{ext}^{(R)} $ is the bounded self-adjoint operator on $\ell^2 (\Omega_{ext} )$, interpreting $-\Delta_{disc} +\widehat{V} $ with (\ref{S1RobinOp}) as the Laplacian on graphs in view of (\ref{graph_laplacian}). \begin{lemma} $\sigma_{ess}(\widehat{H}_{ext}^{(D)}) = \sigma_{ess}( \widehat{H}_{ext}^{(R)}) = [0,d].$ \label{s2_lem_ess_spec} \end{lemma} Since this follows from standard perturbation theory, we omit the proof. \medskip Theorem \ref{UniqueConti} asserts the non-existence of embedded eigenvalues for these operators. In particular, as is clear from the proof, in the Dirichlet case we only have to assume the cone condition for points in $\stackrel{\circ}\Omega_{ext}$. \begin{theorem} \label{NonExistenceExterior} (1) Let $\widehat H_{ext}^{(R)}$ be a Schr{\"o}dinger operator in an exterior domain $\Omega_{ext}$ with compactly supported potential, subject to the Robin boundary condition. Then $\sigma_p(\widehat H_{ext}^{(R)})\cap (0,d) = \emptyset$, if $\Omega_{ext}$ satisfies the cone condition. \\ \noindent (2) Let $\widehat H_{ext}^{(D)}$ be a Schr{\"o}dinger operator in an exterior domain $\Omega_{ext}$ with compactly supported potential, subject to the Dirichlet boundary condition. Then $\sigma_p(\widehat H_{ext}^{(D)})\cap (0,d) = \emptyset$, if for any $n \in \; \stackrel{\circ}\Omega_{ext}$, there is a cone $C_{i,+}(n)$ (or $C_{i,-}(n)$) such that $C_{i,+}(n) \subset \Omega_{ext}$ (or $C_{i,-}(n) \subset \Omega_{ext}$). \end{theorem} \section{Sobolev and Besov spaces on compact manifolds} The condition (\ref{S1DecayCond}) is reformulated as a spectral condition for the Laplacian on the torus, which can further be rewritten via the Fourier transform. We do this on a compact Riemannian manifold in this section.
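Schematically, the quantity appearing in (\ref{S1DecayCond}) is a Besov type norm on sequences,
\begin{equation*}
\sup_{R>1}\frac{1}{R}\sum_{|n|<R}|\widehat u(n)|^2 ,
\end{equation*}
and (\ref{S1DecayCond}) states that $\widehat u$ belongs to the subspace on which this supremand tends to $0$ as $R\to\infty$; this is a discrete analogue of the Agmon-H{\"o}rmander space $\mathcal B^{\ast}$ introduced below and of its little-$o$ subspace.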
\subsection{General case} Let $M $ be a compact Riemannian manifold of dimension $d$ with the Riemannian metric $g $ and $L$ be the Laplace-Beltrami operator on $M$ defined by $$ L = - \sum_{i,j=1}^{d}\dfrac{1}{\sqrt{g}}\frac{\partial}{\partial x_i}\Big(\sqrt{g}g^{ij}\frac{\partial}{\partial x_j}\Big) , $$ where $ \big( g^{ij} \big)_{i,j=1}^d = g^{-1}$ and $\sqrt{g} = \sqrt{ \det g }$. We introduce the Sobolev and Besov spaces on $M$ in two different ways. One way is to use functions of $L$. For $s \in {\bf R}$, we define $\mathcal H^s $ to be the completion of $C^{\infty}(M)$ by the norm $\|\langle L\rangle^{s/2}u\|_{L^2 (M)}$. We also define $\mathcal B^{\ast}$ to be the completion of $C^{\infty}(M)$ by the norm $$ \left( \sup_{R>1} \dfrac{1}{R} \big\| \chi_R(\sqrt{L})u \big\|^2_{L^2 (M)} \right)^{1/2} , $$ where $\chi_R(t) = 1$ for $t < R$, $\chi_R(t) = 0$ for $t > R$. The other way is to use the Fourier transform. Let $\{\chi_j\}_{j=1}^N$ be a partition of unity on $M$ such that the support of each $\chi_j$ is contained in a single coordinate patch. We define $H^s$ to be the completion of $C^{\infty}(M)$ by the norm $ \big( \sum_{j=1}^N\|\langle\xi\rangle^s(\mathcal F\chi_ju)(\xi)\|^2_{L^2 ({\bf R}^d )} \big)^{1/2} $, where $\mathcal F v = \widetilde v$ denotes the Fourier transform of $v$. We define $B^{\ast}$ to be the completion of $C^{\infty}(M)$ by the norm $$ \left( \sum_{j=1}^N\sup_{R>1}\frac{1}{ R} \big\| \chi_R( |\xi | ) ( \mathcal F \chi_j u) (\xi ) \big\| _{L^2 ({\bf R}^d )} ^2 \right) ^{1/2} .$$ The following inclusion relations hold for $s > 1/2$: \begin{equation} L^2 \subset \mathcal H^{-1/2} \subset \mathcal B^{\ast} \subset \mathcal H^{-s}, \quad L^2 \subset H^{-1/2} \subset B^{\ast} \subset H^{-s}. \label{AppendInclusion} \end{equation} These two definitions of the Sobolev and Besov spaces coincide; namely, we show \begin{lemma}\label{Hs=HsBs=Bs} $\mathcal H^s = H^s$ for any $s \in {\bf R}$, and $\mathcal B^{\ast} = B^{\ast}.$ \end{lemma} Proof. We prove $\mathcal B^{\ast} = B^{\ast}$. It is well-known that $\mathcal H^s = H^s$ for $s \in {\bf R}$, whose proof is similar to, and actually easier than, that for $\mathcal B^{\ast} = B^{\ast}$ given below. First let us recall a formula from functional calculus. Let $\psi(x) \in C^{\infty}({\bf R})$ be such that \begin{equation} |\psi^{(k)}(x)| \leq C_k\langle x\rangle^{m-k}, \quad \forall k \geq0, \label{psi(x)estimates} \end{equation} for some $m \in {\bf R}$. One can construct $\Psi(z) \in C^{\infty}({\bf C})$, called an {\it almost analytic extension} of $\psi$, having the following properties: \begin{equation} \left\{ \begin{split} & \Psi(x) = \psi(x), \quad \forall x \in {\bf R}, \\ & |\Psi(z)| \leq C\langle z\rangle^m, \quad \forall z \in {\bf C}, \\ & |\overline{\partial_z}\Psi(z)| \leq C_n|{\rm Im}\,z|^{n} \langle z\rangle^{m-n-1}, \quad \forall n \geq 1, \quad \forall z \in {\bf C},\\ & {\rm supp}\,\Psi(z) \subset \{z \, ; \, |{\rm Im}\,z| \leq 2 + 2|{\rm Re}\,z|\}. \end{split} \right. \label{EstimateAlmostanalytic} \end{equation} In particular, if $\psi(x) \in C_0^{\infty}({\bf R})$, one can take $\Psi(z) \in C_0^{\infty}({\bf C})$. Then, if $m < 0$, for any self-adjoint operator $A$ we have the formula \begin{equation} \psi(A) = \frac{1}{2\pi i}\int_{{\bf C}}\overline{\partial_z}\Psi(z)(z - A)^{-1}dzd\overline{z}, \label{FomulaHelfferSjostrand} \end{equation} which is called the {\it Helffer-Sj{\"o}strand formula}. See \cite{HeSj}, \cite{DeGe}, p. 390.
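As a concrete sanity check, (\ref{FomulaHelfferSjostrand}) can be verified numerically for a finite self-adjoint matrix. The following Python sketch is our own illustration (a Gaussian $\psi$, a second-order almost analytic extension with a smooth cutoff in ${\rm Im}\, z$, and a finite grid are all illustrative choices); it compares the integral with the spectral calculus.

\begin{verbatim}
# Numerical check of the Helffer-Sjostrand formula
#   psi(A) = (1/pi) * int_C dbar(Psi)(z) (A - z)^{-1} dx dy,
# i.e. (1/(2 pi i)) int dbar(Psi)(z) (z - A)^{-1} dz dzbar with
# dz dzbar = -2i dx dy.  All parameter choices are illustrative.
import numpy as np

psi = lambda x: np.exp(-x**2)            # test function (rapid decay)

def Psi(x, y):
    """Second-order almost analytic extension, cut off in Im z."""
    d1 = -2 * x * psi(x)                 # psi'
    d2 = (4 * x**2 - 2) * psi(x)         # psi''
    return (psi(x) + 1j * y * d1 - 0.5 * y**2 * d2) * np.exp(-y**4)

# Self-adjoint test matrix; exact psi(A) via the spectral theorem
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = 0.5 * (B + B.T)
lam, U = np.linalg.eigh(A)
exact = (U * psi(lam)) @ U.T

# Grid on a box in C; dbar(Psi) by central finite differences
x = np.linspace(-8, 8, 1601)
y = np.linspace(-3, 3, 1201)
X, Y = np.meshgrid(x, y, indexing="ij")
P = Psi(X, Y)
dbar = 0.5 * (np.gradient(P, x, axis=0) + 1j * np.gradient(P, y, axis=1))

# The resolvent is diagonal in the eigenbasis, so integrate per eigenvalue
Z = X + 1j * Y
dxdy = (x[1] - x[0]) * (y[1] - y[0])
vals = np.array([np.sum(dbar / (l - Z)) * dxdy / np.pi for l in lam])
approx = (U * vals.real) @ U.T
print(np.abs(approx - exact).max())      # small (quadrature error only)
\end{verbatim}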
We use a semi-classical analysis employing $\hbar = 1/R$ as a small parameter (see e.g. \cite{Ro87}). We show that $\psi(\hbar^2 L)$ is equal to, modulo a lower order term, a pseudo-differential operator ($\Psi DO$) with symbol $\psi(\ell(x,\hbar \xi))$, where $\ell(x,\xi) = \sum_{i,j=1}^d g^{ij}(x)\xi_i\xi_j$. In fact, take $\chi(x), \chi_0(x) \in C^{\infty}(M)$ with small support such that $\chi_0(x) = 1$ on ${\rm supp}\,\chi$. Consider a $\Psi DO$ $P_{\hbar}(z)$ with symbol $(\ell(x,\hbar \xi)-z)^{-1}$. Then $$ (\hbar^2L - z)\chi_0 P_{\hbar }(z)\chi = \chi + Q_{\hbar}(z)\chi, $$ where $Q_{\hbar}(z)$ is a $\Psi DO$ with symbol \begin{equation} \sum_{i=1}^2\hbar^i \sum_{j=1}^3\frac{q_{ij}(x,\hbar\xi)}{(\ell(x,\hbar\xi) - z)^{j}}, \quad q_{ij}(x,\xi) \in S^{2j-1}, \label{SymbolQ(z)} \end{equation} $S^{m}$ being the standard H{\"o}rmander class of symbols (see \cite{HoVol3}, p. 65). This implies $$ (\hbar^2L - z)^{-1}\chi = \chi_0 P_{\hbar}(z)\chi - (\hbar^2L-z)^{-1}Q_{\hbar}(z)\chi. $$ By the symbolic calculus, we have \begin{equation} \| \langle \hbar^2L \rangle ^{(s+1)/2} Q_{\hbar}(z)\chi \langle \hbar ^2 L \rangle ^{-s/2} \|_{{\bf B} (L^2 (M))} \leq\hbar \, C_{s,d}\,|{\rm Im}\, z|^{-N}\langle z\rangle^N, \label{qz} \end{equation} where $N > 0$ is a constant depending on $s$ and $d$, and $C_{s,d}$ does not depend on $\hbar$. We take $\psi \in C^{\infty}_0({\bf R})$ and apply (\ref{FomulaHelfferSjostrand}). Then we have \begin{equation} \psi(\hbar^2L)\chi = \chi_0 \Psi_{\hbar}\chi + \Psi_{Q,\hbar}\chi, \label{varphi(L)=} \end{equation} where $\Psi_{\hbar}$ is a $\Psi DO$ with symbol $\psi(\ell(x,\hbar\xi))$ and \begin{equation} \Psi_{Q,\hbar} = \frac{1}{2\pi i}\int_{\bf C}\overline{\partial_z}\Psi(z)(\hbar^2L-z)^{-1}Q_{\hbar}(z)dzd\overline{z}. \label{PhiQ} \end{equation} We then have, letting $z = x+iy$, \begin{gather*} \begin{split} & \| \langle \hbar^2L \rangle ^{(s+3)/2} \Psi _{Q,\hbar} \chi\langle\hbar^2 L \rangle ^{-s/2} \| _{{\bf B} (L^2 (M))} \\ &\leq C\int_{ {\bf C} } | \overline{\partial _{z} } \Psi (z)| \| \langle\hbar^2 L \rangle ^{(s+3)/2} (\hbar^2L-z)^{-1} Q_{\hbar}(z)\chi \langle \hbar^2L \rangle ^{-s/2 } \| _{{\bf B} (L^2 (M))} dzd\overline{z} \\ &\leq C\int_{{\bf C}} |\overline{\partial _z} \Psi (z)| \| \langle \hbar^2 L \rangle ^{(s+3)/2} (\hbar^2 L-z)^{-1} \langle \hbar^2 L \rangle ^{-(s+1)/2} \| _{{\bf B} (L^2 (M))}\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot \| \langle \hbar^2 L \rangle ^{(s+1)/2}Q_{\hbar}(z)\chi\langle \hbar^2L\rangle^{-s/2} \|_{{\bf B} (L^2 (M))} dzd\overline{z}. \end{split} \end{gather*} Since \begin{equation*} \| \langle \hbar^2 L \rangle ^{(s+3)/2} (\hbar^2 L-z)^{-1} \langle \hbar^2 L \rangle ^{-(s+1)/2} \| _{{\bf B} (L^2 (M))} \leq \sup_{\lambda \in {\bf R}} \frac{\langle\lambda\rangle}{|\lambda -z |} \leq |\mathrm{Im} \, z| ^{-1} \langle z \rangle, \end{equation*} we obtain, using (\ref{qz}), \begin{equation} \begin{split} \| \langle \hbar^2L & \rangle^{(s+3)/2} \Psi _{Q,\hbar} \chi\langle \hbar^2L \rangle ^{-s/2}\| _{{\bf B} (L^2 (M))} \\ & \leq \hbar C_{s,d} \int_{{\bf C}} | \overline{\partial _{z} } \Psi (z)| \ |\mathrm{Im} \, z |^{-1-N}\langle z\rangle^{N+1} dzd\overline{z} \\ &\leq \hbar C_{s,d} \int_{{\bf C}} \langle z \rangle ^{m -1 } dzd\overline{z} , \end{split} \label{EstimatePsiQhbar} \end{equation} where we have used (\ref{EstimateAlmostanalytic}) with $m < -1 , n = N+1$. 
This estimate implies \begin{equation} \sup_{\hbar < 1}\hbar \|\Psi_{Q,\hbar}\chi u\| ^2 _{L^2 (M)} < \infty, \quad {\rm if} \quad u \in \bigcap_{s>1/2} H^{-s}. \label{PsiQhbaruOK} \end{equation} In fact, choosing $s = -3$ in (\ref{EstimatePsiQhbar}) and taking $0 \leq t \leq 3/2$, we have $$ \hbar^{1/2}\|\Psi_{Q,\hbar}\chi u\| _{L^2 (M)} \leq C\hbar^{3/2}\|\langle \hbar^2L\rangle^{-t}u\| _{L^2 (M)} \leq C\hbar^{-2t+3/2}\|\langle L\rangle^{-t}u\| _{L^2 (M)} . $$ The right-hand side is bounded if $1/4 < t \leq 3/4$. Now, by the definition of ${\mathcal B}^{\ast}$, we have the following equivalence: \begin{equation} u \in \mathcal B^{\ast} \Longleftrightarrow \left\{ \begin{split} & u \in H^{-s}, \quad \forall s > 1/2, \\ & \sup_{\hbar<1}\hbar \big\| \psi(\hbar^2L)u \big\| ^2 _{L^2 (M)} < \infty, \quad \forall \psi \in C_0^{\infty}({\bf R}). \end{split} \right. \label{uinBastrewrite} \end{equation} In fact, the left-hand side is equivalent to the right-hand side for one fixed $\psi$ such that $\psi(t) = 1$ for $|t|<1$, and $\psi(t) = 0$ for $|t|>2$. By virtue of (\ref{varphi(L)=}) and (\ref{PsiQhbaruOK}), (\ref{uinBastrewrite}) is equivalent to \begin{equation} \left\{ \begin{split} & u \in H^{-s}, \quad \forall s > 1/2, \\ & \sup_{\hbar<1}\hbar \big\|\chi_0\Psi_{\hbar}\chi_j u \big\|^2 _{L^2 (M)} < \infty, \quad \forall j. \end{split} \right. \label{AppendAEquiv2} \end{equation} The symbol of $(\Psi_{\hbar})^{\ast}\chi_0(x)^2\Psi_{\hbar}$ is equal to \begin{equation*} \chi_0 (x)^2 \psi ( \ell (x,\hbar \xi ) )^2 + O(\hbar ) . \end{equation*} Then by a suitable choice of $0 < c_1 < c_2$, we have $$ \psi \Big(\frac{\hbar ^2 |\xi |^2}{c_1} \Big)^2 \leq \psi \left(\ell(x,\hbar\xi)\right) ^2 \leq \psi \Big( \frac{\hbar ^2 | \xi|^2 }{c_2 }\Big)^2. $$ Moreover, we can assume that there exists $q(x,\xi) \in C^{\infty}( M \times{\bf R}^d)$ such that \begin{equation} \left\{ \begin{split} & \psi (|\xi|^2 /c_2) ^2 - \psi(\ell(x,\xi))^2 = q(x,\xi)^2, \\ & {\rm supp}\ q(x,\xi) \subset M \times\{a < |\xi|< b\}, \end{split} \right. \label{Appndqxxisupport} \end{equation} for some $0 < a < b$. Since the symbol of $\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big)\chi_0(x)^2\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big) $ is equal to \begin{equation*} \chi_0 (x)^2 \psi \Big( \frac{\hbar^2 |\xi |^2 }{c_2 } \Big)^2 +O(\hbar), \end{equation*} we see that the symbol of $\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big)\chi_0(x)^2\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big) - \Psi_{\hbar}^{\ast}\chi_0(x)^2\Psi_{\hbar}$ is computed as \begin{gather*} \begin{split} & \chi_0 (x)^2 \psi \Big( \frac{\hbar^2|\xi|^2 }{c_2 } \Big) ^2 -\chi_0 (x)^2 \psi \Big( \ell (x,\hbar \xi ) \Big) ^2 +O(\hbar) \\ =& \chi_0 (x)^2 q(x,\hbar\xi )^2 +O(\hbar ). \end{split} \end{gather*} Then we have \begin{equation} \begin{split} & \psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big)\chi_0(x)^2\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big) - (\Psi_{\hbar})^{\ast}\chi_0(x)^2\Psi_{\hbar} \\ =&( Q_{0,\hbar} )^{\ast}\, \chi_0 (x)^2 Q_{0,\hbar} + Q_{1,\hbar}. \end{split} \label{Psihchi0Psihfromabove} \end{equation} In the right-hand side, $Q_{0,\hbar}$ is a $\Psi DO$ with symbol $q(x,\hbar\xi)$, where $q(x,\xi )$ is given by (\ref{Appndqxxisupport}), and $Q_{1,\hbar}$ is a $\Psi DO$ with symbol $q_1(x,\xi;\hbar)$ admitting the asymptotic expansion \begin{equation} q_1(x,\xi;\hbar) \sim \sum_{j\geq 1}\hbar^{j} q_j (x,\hbar\xi ), \label{q1xxiasymptoticexpand} \end{equation} with $q_j (x,\xi )$ having the same support property as in (\ref{Appndqxxisupport}).
Since the first term on the right-hand side of (\ref{Psihchi0Psihfromabove}) is non-negative, we have proved \begin{equation} \begin{split} \psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big)\chi_0(x)^2\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big) \geq (\Psi_{\hbar})^{\ast}\chi_0(x)^2\Psi_{\hbar} + Q_{1,\hbar}. \end{split} \label{Fromabove} \end{equation} By a similar computation, we can prove \begin{equation} \begin{split} (\Psi_{\hbar})^{\ast}\chi_0(x)^2\Psi_{\hbar} \geq \psi\Big(\frac{-\hbar^2\Delta}{c_1}\Big)\chi_0(x)^2\psi\Big(\frac{-\hbar^2\Delta}{c_1 }\Big) + Q_{1,\hbar}'. \end{split} \label{Frombelow} \end{equation} By (\ref{Fromabove}) and (\ref{Frombelow}), we have \begin{equation} \begin{split} \hbar\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big)\chi_0(x)^2\psi\Big(\frac{-\hbar^2\Delta}{c_2 }\Big) - \hbar Q_{1,\hbar} & \geq \hbar(\Psi_{\hbar})^{\ast}\chi_0(x)^2\Psi_{\hbar} \\ & \geq \hbar\psi\Big(\frac{-\hbar^2\Delta}{c_1}\Big)\chi_0(x)^2\psi\Big(\frac{-\hbar^2\Delta}{c_1 }\Big) + \hbar Q_{1,\hbar}', \end{split} \label{Fromabovebelow} \end{equation} where $Q_{1,\hbar}$ and $Q_{1,\hbar}'$ have the property (\ref{q1xxiasymptoticexpand}). As $u \in H^{-s}, \ \forall s >1/2$, we have \begin{equation} \sup_{\hbar<1}\hbar\big|(Q_{1,\hbar}u,u)\big| + \sup_{\hbar<1}\hbar\big|(Q_{1,\hbar}'u,u)\big|< \infty. \nonumber \end{equation} Therefore, by (\ref{Fromabovebelow}), $\displaystyle{ \sup_{\hbar<1}\hbar\|\chi_0\Psi_{\hbar}\chi_j u\|^2 < \infty}$ is equivalent to $$ \sup_{R>1}\frac{1}{R}\int_{{\bf R}^d} \Big|\psi \Big (\frac{|\xi |^2 }{c R^2 } \Big) \widehat{\chi _j u } (\xi ) \Big|^2 d\xi < \infty, $$ for some $c > 0$, which is equivalent to $u \in B^{\ast}$. We have thus completed the proof of Lemma \ref{Hs=HsBs=Bs}. \qed \medskip By the same argument, we can also prove the following lemma. \begin{lemma} If $u \in \mathcal B^{\ast}$, we have the following equivalence $$ \lim_{R\to \infty} \frac{1}{\sqrt{R}}\Big\| \psi\Big(\frac{ L }{R^2 }\Big)u \Big\| _{L^2 (M)} = 0 \Longleftrightarrow \lim_{R\to \infty} \frac{1}{\sqrt R}\Big\| \psi\Big(\frac{|\xi|^2}{R^2} \Big)\big(\mathcal F\chi_j u \big)(\xi) \Big\| _{L^2 ({\bf R}^d ) } = 0 , $$ for any $j$ and any $\psi \in C_0^{\infty}({\bf R})$, where $\{\chi_j\}_{j=1}^N$ is the partition of unity on $M$. \label{S3eq1} \end{lemma} \subsection{Torus} We interpret the above results in the case of the torus ${\bf T}^d $ defined by (\ref{S1Torus}). Let ${\mathcal U}$ be the unitary operator from $\ell^{2}({\bf Z}^d)$ to $L^{2}({\bf T}^d)$ defined by \begin{equation} ({\mathcal U}\,{\widehat f})(x) = (2\pi)^{-d/2}\sum_{n\in{\bf Z}^d}{\widehat f}(n) e^{-in\cdot x}. \label{S2fourier} \end{equation} Letting \begin{equation} H_0 = {\mathcal U}\, {\widehat H_0}\, {\mathcal U}^{\ast}, \quad \widehat{H}_0 = -\Delta_{disc}, \nonumber \end{equation} we have \begin{equation} H_0 = h(x)= \frac{1}{2}\Big(d - \sum_{j=1}^d\cos x_j\Big) =\sum_{j=1}^d \sin^2 \Big( \frac{x_j }{2} \Big) . \label{S2H0cosine} \end{equation} We define operators $\widehat N_j$ and $N_j$ by $$ \big(\widehat N_j\widehat f)(n) = n_j\widehat f(n), \quad N_j = \mathcal U\widehat N_j\mathcal U^{\ast} = i\frac{\partial}{\partial x_j}. $$ We put $N = (N_1,\cdots,N_d)$, and let $N^2$ be the self-adjoint operator defined by \begin{equation} N^2 = \sum_{j=1}^dN_j^2 = - \Delta, \quad {\rm on} \quad {\bf T}^d, \nonumber \end{equation} where $\Delta$ denotes the Laplacian on ${\bf T}^d = [-\pi,\pi]^d$ with periodic boundary condition. We can then apply the results in the previous subsection to $L = - \Delta$.
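The unitary equivalence (\ref{S2H0cosine}) is easy to check numerically. With this normalization the discrete Laplacian acts as $(\widehat H_0 \widehat u)(n) = \frac{d}{2}\widehat u(n) - \frac{1}{4}\sum_{j}\big[ \widehat u(n+e_j) + \widehat u(n-e_j) \big]$, and on a periodic box (our illustrative stand-in for ${\bf Z}^d$) it is diagonalized by the discrete Fourier transform with symbol $h$:

\begin{verbatim}
# Check that U Hhat_0 U* is multiplication by h(x) = sum_j sin^2(x_j/2),
# using a periodic N x N box as a stand-in for Z^2 (illustrative only).
import numpy as np

N, d = 64, 2
rng = np.random.default_rng(1)
u = rng.standard_normal((N, N))          # test sequence uhat(n)

# Hhat_0 uhat(n) = (d/2) uhat(n) - (1/4) sum_j [uhat(n+e_j) + uhat(n-e_j)]
H0u = (d / 2) * u - 0.25 * (
    np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
    + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1))

# Fourier side: multiply by h on the dual grid x_j = 2 pi k_j / N
xj = 2 * np.pi * np.fft.fftfreq(N)
X1, X2 = np.meshgrid(xj, xj, indexing="ij")
h = np.sin(X1 / 2) ** 2 + np.sin(X2 / 2) ** 2    # spectrum in [0, d]

lhs = np.fft.ifft2(h * np.fft.fft2(u)).real
print(np.allclose(lhs, H0u))             # True
\end{verbatim}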
We put \begin{equation} |N| = \sqrt{N^2} = \sqrt{-\Delta}. \nonumber \end{equation} In the following, we simply denote $ \| \, \cdot \, \|_{L^2 ({\bf T}^d )} = \| \, \cdot \, \| $. For $s \in {\bf R}$, let ${\mathcal H}^s$ be the completion of $D(|N|^s)$ with respect to the norm $\|u\|_{s} = \|\langle N\rangle^{s}u\|$, i.e. \begin{equation} {\mathcal H}^s = \big\{u \in {\mathcal D}^{\prime}({\bf T}^d)\, ; \, \|u\|_{s} = \|\langle N\rangle^{s}u\| < \infty \big\}, \nonumber \end{equation} where $ \mathcal{D}' ({\bf T}^d ) $ denotes the space of distributions on ${\bf T}^d $. Put $\mathcal H = \mathcal H^0 = L^2({\bf T}^d)$. For a self-adjoint operator $T$, let $\chi(a \leq T < b)$ denote the operator $\chi_{I}(T)$, where $\chi_I(\lambda)$ is the characteristic function of the interval $I = [a,b)$. The operators $\chi(T < a)$ and $\chi(T \geq b)$ are defined similarly. Using the sequence $\{r_j\}_{j=-1}^{\infty}$ with $r_{-1} = 0$, $r_j = 2^j \ (j \geq 0)$, we define the Besov space $\mathcal B$ by \begin{equation} \mathcal B = \Big\{f \in {\mathcal H}\, ; \|f\|_{\mathcal B} = \sum_{j=0}^{\infty}r_j^{1/2}\|\chi(r_{j-1} \leq |N| < r_j)f\| < \infty\Big\}. \nonumber \end{equation} Its dual space $\mathcal B^{\ast}$ is the completion of $\mathcal H$ by the following norm \begin{equation} \|u\|_{\mathcal B^{\ast}}= \sup_{j\geq 0}2^{-j/2}\|\chi(r_{j-1} \leq |N| < r_j)u\|.\nonumber \end{equation} The following Lemma \ref{S3AgHo} is proved in the same way as in \cite{AgHo76}. \begin{lemma} (1) There exists a constant $C > 0$ such that \begin{equation} C^{-1}\|u\|_{\mathcal B^{\ast}}\leq \left(\sup_{R>1}\frac{1}{R}\|\chi(|N| < R)u\|^2\right)^{1/2}\leq C\|u\|_{\mathcal B^{\ast}}. \nonumber \end{equation} (2) For $s > 1/2$, the following inclusion relations hold: \begin{equation} \mathcal H^{s} \subset \mathcal B \subset \mathcal H^{1/2} \subset \mathcal H \subset \mathcal H^{-1/2} \subset \mathcal B^{\ast} \subset \mathcal H^{-s}. \nonumber \end{equation} \label{S3AgHo} \end{lemma} In view of the above lemma, in the following, we use \begin{equation} \|u\|_{\mathcal B^{\ast}} = \left(\sup_{R>1}\frac{1}{R}\|\chi(|N| < R)u\|^2\right)^{1/2} \nonumber \end{equation} as a norm on $\mathcal B^{\ast}$. We also put $\widehat{\mathcal H} = \ell^2({\bf Z}^d)$, and define $\widehat{\mathcal H}^s$, $\widehat{\mathcal B}$, $\widehat{\mathcal B}^{\ast}$ by replacing $N$ by $\widehat N$. Note that $\widehat{\mathcal H}^s = \mathcal U^{\ast}\mathcal H^s$ and so on. In particular, Parseval's formula implies that \begin{equation} \|u\|_{\mathcal H^s}^2 = \|\widehat u\|_{\widehat{\mathcal H}^s}^2 = \sum_{n\in {\bf Z}^d}(1 + |n|^2)^s|\widehat u(n)|^2 , \nonumber \end{equation} \begin{equation} \|u\|_{{\mathcal B}^{\ast}}^2 = \|\widehat u\|_{\widehat{\mathcal B}^{\ast}}^2 = \sup_{R>1}\frac{1}{R}\sum_{|n|<R}|\widehat u(n)|^2, \nonumber \end{equation} $\widehat u(n)$ being the Fourier coefficient of $u(x)$. \section{Proof of Theorem 1.1} \subsection{Some remarks for the Fermi surface} The Fermi surface $M_{\lambda}$ is not a smooth submanifold of ${\bf T}^d $ in general. Here we consider some properties of $M_{\lambda }$ as an analytic set (see e.g. \cite{Chi}, \cite{KuVa}). Let $ {\bf T}^d _{{\bf C}} = {\bf C}^d /(2\pi {\bf Z} )^d $ be the complex torus and define \begin{equation} M_{\lambda }^{{\bf C} } =\big\{ z \in {\bf T}^d_{{\bf C}} \ ; \ h(z)=\lambda \big\} . \label{isospectral_complex} \end{equation} Then $M_{\lambda}^{{\bf C}} \cap {\bf R}^d = M_{\lambda}$.
\begin{lemma} (1) For $\lambda \in (0,d) \setminus {\bf Z} $, $M_{\lambda}^{{\bf C}}$ is a $(d-1)$-dimensional, closed submanifold of ${\bf T}^d_{{\bf C}}$. \\ (2) For $\lambda \in (0,d) \cap {\bf Z} $, $M_{\lambda}^{{\bf C}}$ is the disjoint union $\big( \mathrm{reg}\, M_{\lambda}^{{\bf C}} \big) \cup \big( \mathrm{sng} \, M_{\lambda}^{{\bf C}} \big) $, where \begin{gather} \mathrm{sng} \, M_{\lambda}^{{\bf C}} = \big\{ z\in M_{\lambda}^{{\bf C}} \ ; \ z_j \equiv 0 \ (\mathrm{mod} \ \pi ) \ \text{for all} \ j=1, \cdots ,d \big\} , \label{sing_part} \\ \mathrm{reg} \, M_{\lambda}^{{\bf C}} = M_{\lambda}^{{\bf C}} \setminus \big( \mathrm{sng} \, M_{\lambda}^{{\bf C}} \big). \label{reg_part} \end{gather} Moreover, $\mathrm{reg} \, M_{\lambda}^{{\bf C}} $ is a $(d-1)$-dimensional, open submanifold of ${\bf T}_{{\bf C}}^d $. \label{s4_lem_analyticset} \end{lemma} Proof. Since $\nabla h(z)= \frac{1}{2} ( \sin z_1 , \cdots , \sin z_d )$, $\nabla h(z)=0$ if and only if $z_j \equiv 0 \ (\mathrm{mod} \ \pi )$ for all $j=1, \cdots , d$. For $\lambda \in (0,d)\setminus {\bf Z}$, the intersection $M_{\lambda}^{{\bf C}} \cap \{ z\in {\bf T}^d_{{\bf C}} \ ; \ z_j \equiv 0 \ (\mathrm{mod} \ \pi) \ \text{for all} \ j=1, \cdots ,d \} $ is empty, and for $\lambda \in (0,d)\cap {\bf Z}$, we see $ \mathrm{sng}\, M_{\lambda}^{{\bf C}} \not= \emptyset $. By the definitions of $\mathrm{reg} \, M_{\lambda}^{{\bf C}} $ and $\mathrm{sng} \, M_{\lambda}^{{\bf C}} $, we see that $M_{\lambda}^{{\bf C}} = \big( \mathrm{reg}\, M_{\lambda}^{{\bf C}} \big) \cup \big( \mathrm{sng} \, M_{\lambda}^{{\bf C}} \big) $ as a disjoint union. \qed \medskip For $\lambda \in (0,d) \setminus {\bf Z}$, $M_{\lambda }^{{\bf C}} $ is irreducible, i.e., a connected complex submanifold (see \cite{Sha}, proof of Theorem 9). On the other hand, if $\lambda \in (0,d)\cap {\bf Z}$, $ \mathrm{reg}\, M_{\lambda}^{{\bf C}} $ may not be a connected submanifold. In fact, for $d=2 $ and $\lambda =1$, we have $M_{\lambda }^{{\bf C}} =A_+ \cup A_-$ where $$ A_{\pm} =\{ z\in {\bf T}_{{\bf C}}^2 \ ; \ z_2 \equiv \pm z_1 +\pi \ (\mathrm{mod} \ 2\pi ) \}. $$ Moreover, $A_+ \cap A_- = \mathrm{sng} \, M_{\lambda}^{{\bf C}}$. \medskip Now we give a slightly weaker statement than the irreducibility of $M_{\lambda}^{{\bf C}} $. The following fact implies that any irreducible component of $\mathrm{reg} \, M_{\lambda}^{{\bf C}}$ intersects an open part of $M_{\lambda}$. \begin{lemma} Let $\lambda \in (0,d)$ and $ \{ S_{\lambda ,j} \}_j $ be the set of connected components of $\mathrm{reg}\, M_{\lambda}^{{\bf C}}$. Then, for any $S_{\lambda ,j} $, the intersection $S_{\lambda , j} \cap M_{\lambda} $ contains an open part of $\mathrm{reg}\, M_{\lambda}^{{\bf C}} $. \label{s4_lem_connect} \end{lemma} Proof. Using the change of variables ${\bf T}_{{\bf C}} \ni z_j \mapsto w_j \in \mathcal{X}$, where $\mathcal{X}$ is the Riemann surface of $\arccos$ (more precisely, $ w_j = \arccos z_j $, $j=1,\cdots ,d $), we can reduce the equation $h(z)-\lambda =0$ on ${\bf T}^d_{{\bf C}} $ to $ w_1 +\cdots +w_d = d-2\lambda $. Then we can see that any connected component of the algebraic variety $$ V_{\lambda} =\{ w\in \mathcal{X}^d \ ; \ w_1 +\cdots + w_d =d-2\lambda \} $$ has an intersection with ${\bf R}^d$ containing an open part of $ V_{\lambda} \cap {\bf R}^d $. \qed \subsection{Helmholtz equation} We extend $\widehat{u} (n) $ to be zero for $|n|\leq R_0$ and denote it by $\widehat{u} $ again.
Then we have \begin{equation} (\widehat{H}_0 -\lambda )\widehat{u} =\widehat{f}, \label{helmholtz} \end{equation} where $\widehat f$ is compactly supported. In fact, letting ${\widehat P}(k)$ be the projection onto the site $k$, it is written as $\widehat f = \sum_{|k|\leq R_0+1}c_k\widehat P(k)\widehat u$. Here we give the proof for $\lambda \in (0,d)\cap {\bf Z}$. When $\lambda \in (0,d)\setminus {\bf Z}$, we can follow the same argument, since $ \mathrm{sng}\, M_{\lambda}^{{\bf C}} =\emptyset $, and the proof is slightly easier than that for $\lambda \in (0,d)\cap {\bf Z}$. We first note the following lemma. \begin{lemma} Let $\lambda \in (0,d)$ and $\widehat{u} $ satisfy (\ref{S1Equation}) and (\ref{S1DecayCond}). Then $u \in C^{\infty}({\bf T}^d \setminus (\mathrm{sng}\, M_{\lambda}^{{\bf C}}))$ and $\widehat{f} $ satisfies \begin{equation} (\mathcal{U} \widehat{f} )(x)= f(x)=0 \quad {\rm on} \quad (\mathrm{reg} \,M_{\lambda } ^{{\bf C}} )\cap {\bf T}^d . \label{rellich_perturbation} \end{equation} \label{lem_s4_fzero_real} \end{lemma} Proof. Passing to the Fourier series, (\ref{S1DecayCond}) implies \begin{equation} \lim_{R \to \infty} \frac{1}{R} \int_{{\bf T}^d} | \chi (|N|<R ) u(x) |^2 dx=0 . \label{rellich_asymptoticzero} \end{equation} Take a point $ x^{(0)}\in (\mathrm{reg}\, M_{\lambda}^{{\bf C}}) \cap {\bf T}^d $ and fix it. Let $U$ be a sufficiently small neighborhood of $x^{(0)} $ in ${\bf T}^d $ such that $ U\cap (\mathrm{sng} \, M_{\lambda}^{{\bf C}} ) = \emptyset $. Making the change of variables $x \mapsto (y_1 , y')$ so that $ y_1 =h(x)-\lambda $ and $y'=(y_2 ,\cdots ,y_d )$ in $U$, the Laplacian $N^2 =-\Delta $ on ${\bf T}^d $ is transformed into the Laplace-Beltrami operator in the $y$-coordinates via the Jacobian. Letting $\chi \in C^{\infty } ({\bf T}^d )$ be such that $\chi (x^{(0)} )=1 $, $\mathrm{supp}\, \chi \subset U$, we see by Lemma \ref{S3eq1} that (\ref{rellich_asymptoticzero}) leads to \begin{equation} \lim_{R \to \infty} \frac{1}{R} \int_{|\eta |<R} |\widetilde{ \chi u} (\eta )|^2 d\eta =0, \label{rellich_asymptoticzero2} \end{equation} where $\widetilde{\chi u} (\eta )$ is the Fourier transform of $\chi u$: $$ \widetilde{\chi u} (\eta )= (2\pi )^{-d/2} \int_{{\bf R}^d } e^{-iy\cdot \eta } \chi (y) u(y) dy . $$ By (\ref{helmholtz}), $u$ satisfies \begin{equation} (h(x)-\lambda )u=f \quad \text{on} \quad {\bf T}^d , \label{helmholtz2} \end{equation} where $f$ is a polynomial in $e^{ix_j } $, $j=1 , \cdots , d $, since $\widehat f$ is compactly supported. Letting $ u_{\chi} (x)=\chi(x)u(x) $, $f_{\chi} (x)=\chi (x)f(x)$ and making the change of variable $x \mapsto y$ as above, we have, passing to the Fourier transform, $\frac{\partial}{\partial \eta _1} \widetilde{u_{\chi}} (\eta)= -i \widetilde{f_{\chi}} (\eta )$. Integrating this equation, we have $$ \widetilde{u_{\chi} }(\eta) = - i\int_0^{\eta_1}\widetilde{f_{\chi}}(s, \eta' )ds + \widetilde{u_{\chi}} (0, \eta' ), \quad \eta'=(\eta_2 ,\cdots , \eta_d ). $$ Since $\widetilde{f_{\chi}} (\eta)$ is rapidly decreasing, we then see the existence of the limit $$ \lim_{\eta_1\to\infty}\widetilde{u_{\chi}}(\eta) = - i \int_0^{\infty}\widetilde{f_{\chi}} (s, \eta' )ds + \widetilde{u_{\chi}} (0, \eta' ). $$ We show that this limit vanishes. Let $D_R $ be the slab such that \begin{equation*} D_R = \left\{ \eta \ ; \ |\eta '|<\delta R , \ \frac{R}{3 } < \eta_1 < \frac{2R}{3} \right\}. \end{equation*} Then we have $D_R \subset \{ |\eta |<R \} $ for a sufficiently small $\delta >0 $.
We then see that \begin{equation*} \frac{1}{R} \int_{ D_R } | \widetilde{u_{\chi}} (\eta )|^2 d\eta =\frac{1}{R} \int_{|\eta '|<\delta R} \int_{R/3}^{2R/3} |\widetilde{u_{\chi}} (\eta_1,\eta' )|^2 d\eta_1 d\eta' \leq \frac{1}{R} \int_{ |\eta |<R} |\widetilde{u_{\chi}} ( \eta )|^2 d\eta. \end{equation*} As $R\rightarrow \infty $, the right-hand side tends to zero by (\ref{rellich_asymptoticzero2}), hence so does the left-hand side, which proves that $\lim_{\eta_1\to\infty}\widetilde{u_{\chi} } (\eta) = 0$. We have, therefore, \begin{equation} \widetilde{u_{\chi}} (\eta )= i\int_{\eta_1 } ^{\infty } \widetilde{f_{\chi}} (s ,\eta ')ds. \nonumber \end{equation} This shows that $u_{\chi} =\chi u \in C^{\infty } ({\bf T}^d \setminus ( \mathrm{sng}\, M_{\lambda}^{{\bf C}}))$. It is easy to see that $u$ is smooth outside $M_{\lambda }$. Then $u\in C^{\infty} ({\bf T}^d \setminus (\mathrm{sng}\, M_{\lambda}^{{\bf C}}))$. In particular, $f (x) =0$ in $ (\mathrm{reg}\, M_{\lambda}^{{\bf C}} ) \cap {\bf T}^d $ by (\ref{helmholtz2}). \qed \medskip In the following discussion, we use the function theory of several complex variables. We extend $f(x)$ to a polynomial in $e^{iz_j}$ for $z_j \in {\bf C}$, $j=1, \cdots ,d$. Lemmas \ref{s4_lem_connect} and \ref{lem_s4_fzero_real} imply that the vanishing of $f$ extends to $\mathrm{reg}\, M_{\lambda}^{{\bf C}}$. For the proof, see Corollary 7 of \cite{KuVa}. \begin{lemma} Let $\lambda \in (0,d)$. We have $f(z)=0$ on $\mathrm{reg} \, M_{\lambda}^{{\bf C}}$. \label{rellich_continuation} \end{lemma} \medskip Let us return to the equation (\ref{helmholtz}). Take any domain $D \subset {\bf T}^d_{{\bf C}} \setminus (\mathrm{sng}\, M_{\lambda}^{{\bf C}} )$. By Lemma \ref{rellich_continuation}, and using the Taylor expansion, we see that there exists a holomorphic function $g$ in the domain $D$ such that $f(z)= \big( h(z)-\lambda \big) g(z) $; hence $u(z)= f(z)/ \big( h(z)-\lambda \big)$ is a holomorphic function of $z \in D$, since $ (\partial _{z_1} h(z), \cdots , \partial _{z_d} h(z) ) \not= 0 $ in $D$. The uniqueness theorem for holomorphic functions implies that $ f(z)/(h(z)-\lambda ) $ is holomorphic in $ {\bf T}^d_{{\bf C}} \setminus ( \mathrm{sng}\, M_{\lambda}^{{\bf C}} )$. Moreover, since we have assumed $d\geq 2$, and $\mathrm{sng}\, M_{\lambda}^{{\bf C}} $ is a $0$-dimensional set, $\mathrm{sng}\, M_{\lambda}^{{\bf C}} $ is a set of removable singularities (by Hartogs' extension theorem; see e.g. Corollary 7.3.2 of \cite{Kr82}). Then $ f(z)/(h(z)-\lambda )$ extends uniquely to an entire function on ${\bf T}_{{\bf C}} ^d$, which we again denote by $f(z)/(h(z)-\lambda )$. Here we pass to the variables $w_j =e^{iz_j }$, $j=1, \cdots , d $. Note that the map \begin{equation*} {\bf T}_{{\bf C}} ^d \ni z \mapsto w \in {\bf C}^d \setminus \bigcup_{j=1}^d A_j , \quad A_j =\{ w \in {\bf C}^d \ ; \ w_j =0 \} , \end{equation*} is biholomorphic. Then there exist positive integers $\alpha _j $ such that \begin{equation*} f(z)=\sum c_{\beta}w^{\beta}=F(w) \prod_{j=1}^d w_j ^{-\alpha_j }, \end{equation*} where $F(w) $ is a polynomial, $F(w) = \sum a_{\gamma}w^{\gamma}$, with the property that $a_{\gamma} = 0$ if $\gamma_j \leq 0$ for some $j$, where $\gamma = (\gamma_1,\cdots,\gamma_d)$.
We factorize $h(z)-\lambda $ as \begin{equation*} h(z)-\lambda = \frac{d}{2}-\lambda -\frac{1}{4} \sum_{j=1}^d (w_j + w_j^{-1} ) = H_{\lambda } (w) \prod_{j=1}^d w_j^{-1} , \end{equation*} where \begin{equation} H_{\lambda } (w)= \Big( \frac{d}{2} -\lambda \Big) \prod_{j=1}^d w_j -\frac{1}{4} \Big( \sum_{ j=1}^d w_j \Big) \prod_{j=1} ^d w_j - \frac{1}{4}\sum_{j=1}^d \Big( \prod_{i \not= j } w_i \Big). \label{AppBHlambdaw} \end{equation} Then \begin{equation} \frac{f(z)}{h(z)-\lambda } =\frac{ F(w)}{H_{\lambda } (w) } \prod_{j=1}^d w_j^{1-\alpha_j } . \label{variableschange} \end{equation} Since $f(z)/(h(z)-\lambda ) $ is analytic, $F(w)/H_{\lambda } (w) $ is also analytic except possibly on the hyperplanes $A_j $, $j=1, \cdots , d$. However, due to the expression (\ref{AppBHlambdaw}), we have \begin{equation} H_{\lambda}(w) \neq 0, \ {\rm if} \ w \in \bigcup_{k=1}^d V_k, \nonumber \end{equation} $$ V_k = \{(w_1,\cdots,w_{k-1},0,w_{k+1},\cdots,w_d)\, ; \, w_i \neq 0, i \neq k\}.$$ Hence $F(w)/H_{\lambda } (w) $ is analytic except possibly on some sets of complex dimension $d-2$ (the intersections of two hyperplanes). Therefore, \begin{itemize} \item $F(w)/H_{\lambda } (w) $ is an entire function. \end{itemize} See e.g. Corollary 7.3.2 of \cite{Kr82} again. In particular, \begin{itemize} \item $ F(w)=0 $ on the set $\{ w\in {\bf C}^d \ ; \ H_{\lambda }(w)=0\} $. \end{itemize} Finally, we use the following fact, a corollary of the Hilbert Nullstellensatz (see e.g. Appendix 6 of \cite{Shafa}). Let ${\bf C}[w_1,\cdots,w_d]$ be the ring of polynomials in the variables $w_1,\cdots,w_d$. \begin{lemma} Let $f, g \in {\bf C}[w_1,\cdots,w_d] $ and suppose that $f$ is irreducible. If $g=0$ on all zeros of $f$, then there exists $h \in {\bf C}[w_1,\cdots,w_d]$ such that $g=fh$. \label{hilbertnullstellensatz} \end{lemma} \medskip We factorize $H_{\lambda}(w)$ so that $$ H_{\lambda } (w)= H^{(1)} _{\lambda } (w) \cdots H^{(N)} _{\lambda } (w), $$ where each $H_{\lambda}^{(j)} (w) $ is an irreducible polynomial. We prove inductively that $F(w)/\big(H_{\lambda }^{(1)}(w)\cdots H_{\lambda}^{(k)}(w)\big)$ is a polynomial for $1 \leq k \leq N$. Note that, since we know already that $F(w)/H_{\lambda}(w)$ is entire, \begin{itemize} \item $F(w)/\big(H_{\lambda }^{(1)}(w)\cdots H_{\lambda}^{(k)}(w)\big)$ is also entire, \item $F(w) = 0$ on the zeros of $H_{\lambda }^{(1)}(w)\cdots H_{\lambda}^{(k)}(w)$. \end{itemize} Consider the case $k=1$. Since $F(w) = 0$ on the zeros of $H_{\lambda}^{(1)}(w)$, Lemma \ref{hilbertnullstellensatz} implies that $F(w) / H_{\lambda}^{(1)} (w)$ is a polynomial. Assuming the case $k \leq \ell -1$, we consider the case $k=\ell$. By the induction hypothesis, there exists a polynomial $P_{\ell -1}(w)$ such that \begin{equation*} \frac{F(w)}{ H_{\lambda}^{(1)} (w) \cdots H_{\lambda}^{(\ell-1 )} (w) }=P_{\ell-1}(w) . \end{equation*} Then we have $F(w)/ ( H_{\lambda}^{(1) } (w) \cdots H_{\lambda}^{(\ell)} (w) ) =P_{\ell-1}(w)/H^{(\ell)}_{\lambda}(w)$, which is entire. Therefore, $P_{\ell-1}(w) = 0$ on the zeros of $H^{(\ell)}_{\lambda}(w)$. By Lemma \ref{hilbertnullstellensatz}, there exists a polynomial $Q_{\ell}(w) $ such that \begin{equation*} \frac{P_{\ell-1}(w)}{H_{\lambda}^{(\ell)} (w)}=Q_{\ell}(w). \end{equation*} Therefore, $F(w)/\big(H_{\lambda}^{(1)} (w)\cdots H_{\lambda}^{(k)}(w)\big)$ is a polynomial for $1 \leq k \leq N$. Taking $k = N$, we conclude that $F(w)/H_{\lambda}(w)$ is a polynomial in $w$, hence $f(z)/(h(z)-\lambda )$ is a polynomial in $e^{iz_j}$ by (\ref{variableschange}).
This implies that $\widehat{u}(n) $ is compactly supported. We have thus completed the proof of Theorem \ref{rellich}.
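As a concrete illustration of the last two steps (ours, for $d = 2$, $\lambda = 1$): the polynomial (\ref{AppBHlambdaw}) factors into the two irreducible components corresponding to $A_{\pm}$, and the division step of the Nullstellensatz argument can be carried out with a computer algebra system. The sample numerator $F$ below is an arbitrary choice built to vanish on both components.

\begin{verbatim}
# d = 2, lambda = 1: factor H_lambda(w) and perform the division step
# of the Nullstellensatz argument with sympy (illustrative example).
import sympy as sp

w1, w2 = sp.symbols("w1 w2")
d, lam = 2, 1
H = (sp.Rational(d, 2) - lam) * w1 * w2 \
    - sp.Rational(1, 4) * (w1 + w2) * w1 * w2 \
    - sp.Rational(1, 4) * (w2 + w1)
print(sp.factor(H))   # -(w1 + w2)*(w1*w2 + 1)/4
# The factors vanish exactly on w2 = -w1 (i.e. z2 = z1 + pi) and on
# w1*w2 = -1 (i.e. z2 = -z1 + pi): the two components A_+ and A_-.

# A sample F vanishing on both components must be divisible by H:
F = sp.expand((w1 + w2) * (w1 * w2 + 1) * (w1**2 + w2 + 3))
q, r = sp.div(F, sp.expand(-4 * H), w1, w2)
print(r, sp.factor(q))   # remainder 0; quotient recovers w1**2 + w2 + 3
\end{verbatim}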
\section*{Methods} \subsection*{Single crystal growth} Single crystal PBA samples were grown using slow-diffusion methodologies. Typical preparations involved counterdiffusion of aqueous solutions of a potassium hexacyanometallate(III) and a divalent transition-metal nitrate, chloride, sulfate, or acetate. Crystals of Cd[Co], Mn[Co]$^\prime$, Mn[Fe], and Zn[Co] were grown in H-cells, while those of Mn[Co], Cu[Co], and Co[Co] were grown from silica gel (see SI.1). Care was taken not to dehydrate our samples. \subsection*{Single crystal diffuse scattering} Single crystal diffuse scattering measurements were carried out using the I19 beamline at the Diamond Light Source (U.K.) and the BM01 beamline at the European Synchrotron Radiation Facility (France). Each measurement involved a full sphere of data collection carried out in a single run. Bragg peaks were indexed and integrated using the package {\sc xds} \cite{kabsch2010xds}. Reciprocal space reconstruction and averaging were performed using the software {\sc meerkat} \cite{meerkat} (see SI.2). \subsection*{3D-$\Delta$PDF analysis} Diffuse scattering was analysed using the 3D-$\Delta$PDF method \cite{weber2012three,simonov2014experimental}. The experimental diffuse scattering was reconstructed as stated above. The background air scattering was then subtracted using an empty instrument run, with the scale coefficient selected manually. The resulting diffuse scattering was averaged in the $m\bar3m$ Laue group using outlier rejection as described by Blessing \cite{blessing1997outlier}. Bragg peaks were removed using the ``punch and fill'' procedure (a schematic implementation is sketched below): spheres of intensity around the Bragg peaks were removed to ensure omission of thermal diffuse scattering contributions from subsequent analysis. The resulting holes were filled with the median intensity from a small surrounding region of reciprocal space. Finally, the 3D-$\Delta$PDF map was calculated using a fast Fourier transform. Quantitative 3D-$\Delta$PDF refinement was carried out using the program {\sc yell} \cite{simonov2014yell} (see SI.4). \subsection*{Monte Carlo simulations and analysis} Monte Carlo (MC) simulations were carried out using a parallel tempering approach \cite{Earl_2005} implemented within custom-written code. For each $J^\prime$ value, an ensemble of 129 configurations was generated and MC simulations were carried out at a suitable distribution of temperatures $T^\prime$ (evenly spread in $\log T^\prime$). Replica exchange steps were attempted at regular intervals of successful MC steps. Configurations were equilibrated for a fixed number of epochs, and 40 configurations for diffuse scattering calculations were selected from a further production run. Diffuse scattering patterns were calculated from this ensemble of configurations with $m\bar3m$ symmetry applied. Convergence was almost universal except for a small family of polymorph II configurations at the very lowest sampled temperatures. Surface areas and accessible pore volumes were calculated using the {\sc zeo++} code \cite{Willems_2012} for small configurations, and a related custom-written code for larger configurations (see SI.5). \cleardoublepage \baselineskip24pt
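For illustration, here is a minimal sketch of the ``punch and fill'' and Fourier steps described above (array sizes, peak positions, and radii are arbitrary stand-ins; the actual pipeline used {\sc meerkat} and {\sc yell}):

\begin{verbatim}
# Schematic "punch and fill" + FFT for a 3D-DeltaPDF map.
# Illustrative stand-in for the meerkat/yell pipeline; all shapes,
# peak positions and radii below are arbitrary.
import numpy as np
from scipy.ndimage import median_filter

def punch_and_fill(intensity, bragg_ijk, punch_r=3, fill_size=7):
    """Remove spheres around Bragg positions, then fill the holes
    with the local median of the surrounding reciprocal space."""
    filled = intensity.copy()
    local_median = median_filter(intensity, size=fill_size)
    zz, yy, xx = np.indices(intensity.shape)
    for (i, j, k) in bragg_ijk:
        mask = (zz - i)**2 + (yy - j)**2 + (xx - k)**2 <= punch_r**2
        filled[mask] = local_median[mask]
    return filled

# Toy volume: weak diffuse background plus sharp Bragg spikes
rng = np.random.default_rng(7)
vol = 0.1 * rng.random((64, 64, 64))
bragg = [(i, j, k) for i in (16, 32, 48)
                   for j in (16, 32, 48) for k in (16, 32, 48)]
for p in bragg:
    vol[p] += 100.0

diffuse = punch_and_fill(vol, bragg)
# 3D-DeltaPDF: Fourier transform of the punched diffuse intensity
pdf3d = np.fft.fftshift(np.fft.ifftn(np.fft.ifftshift(diffuse))).real
print(diffuse.max() < 1.0, pdf3d.shape)   # Bragg spikes removed
\end{verbatim}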
\section{I. Introduction} The first-quantized method for the calculation of particle scattering amplitudes was suggested and used by Feynman \cite{Feynman:1950ir}, Nambu \cite{Nambu:1950rs} and Schwinger \cite{Schwinger:1951nm} as early as the 1950s. However, it was not developed much until string theory was invented and used first-quantization as a key method to calculate the string scattering amplitude \cite{Susskind:1970qz}. Then, how to apply this method back to particles was studied in detail. From a certain limit of string theory, Bern and Kosower obtained a set of first-quantized rules for calculating the scattering amplitudes in Yang-Mills theory at the 1-loop level \cite{Bern:1991aq}. This new method was shown to be equivalent to the ordinary second-quantized method and much more efficient when calculating 1-loop gluon-gluon scattering \cite{Bern:1991an}. Later Strassler gave an alternative derivation of the same rules directly from the first-quantized formalism of the field theory \cite{Strassler:1992zr}. The generalization to the rules for scalar theory at multi-loop level was then studied by Schmidt and Schubert \cite{Schmidt:1994zj} and later by Roland and Sato \cite{Roland:1996im}. Green functions on multi-loop vacuum diagrams were obtained by considering these diagrams as one-loop diagrams with insertions of propagators. The Green function on any vacuum diagram containing a ``Hamiltonian circuit" could be found by this method. A natural hope of further generalization is to find the worldline Green function on an arbitrary one-dimensional topology, without the limitation that this topology must be one-loop with insertions. In this paper, we give such a general method to obtain the Green function for scalar field theory at arbitrary multi-loop level. We show that an electric circuit can serve as an analog for solving this problem. On the other hand, Mathews \cite{Mathews:1959} and Bjorken \cite{Bjorken:1979dk}, from the second-quantized method (usual field theory), gave a method with the electric circuit analogy to evaluate Feynman diagrams at arbitrary loop level. Their result of the Feynman parameter integral representation of scattering amplitudes was, in principle, the most general result, but because of the limitation of second-quantization, diagrams with the same topology but different numbers or placements of external lines were treated separately, and the analogy between the kinetic quantities on the Feynman diagram and the electric quantities on the circuit was not completely clear. Fairlie and Nielsen generalized Mathews and Bjorken's circuit analog method to strings, and obtained the Veneziano amplitude and 1-loop string amplitude \cite{Fairlie:1970tc}. Although they did not use the term ``Green function", they in fact obtained the expression of the Green function on the disk and annulus worldsheets of the bosonic string with the help of the 2D electric circuit analogy. These attempts indicate that the problem of solving for the Green function on a certain topological space and the problem of solving a circuit may be related. Indeed, we show that there is an exact analogy between the two kinds of problems in the 1D case. In this paper, we first give an introduction to the first-quantized formalism in Section II. In Section III, we find that the differential equation the Green function should satisfy can be solved by a method analogous to that for solving an electric circuit.
We give a complete analogy among the problems of finding the Green function, the static electric field and the electric circuit. In Section IV, we derive a general method to solve the electric circuit and give a compact expression of the Green function, \[ \tilde{G}(\tau,\tau')=-\frac{1}{2} s + \frac{1}{2} {\mathbf{v}}^{\mbox{T}}\Omega ^{-1}{\mathbf{v}} \] where the scalar $s$, vector $\mathbf{v}$, and matrix $\Omega$ are quantities depending only on the topological and geometrical properties of the 1D space and the positions of the external sources $\tau$ and $\tau'$. This expression is similar to that of the bosonic string \cite{D'Hoker:1988ta}. In Section V, a calculation of the vacuum bubble amplitude is given to complete the discussion. The method is summarized in Section VI. Examples of Green functions on topologies at the tree, 1-loop and 2-loop levels are given in Section VII. Section VIII contains the conclusions. \section{II. First-quantization} The first-quantized path integral on the ``worldline" of any given topology (graph) for a scalar particle interacting with a background potential can be written as \begin{equation}\label{eq1} \mathscr {A}\left(\mbox{M}\right) = \int \frac{\mathscr{D}e\mathscr{D}X}{V_{\mbox{rep}}} \exp \left\{ - \int_{\mbox{M}} d \tau \left[ \frac{1}{2} \left( e^{-1} \dot{X}^\mu \dot{X}_\mu + em^2 \right) + e V(X) \right] \right\} \end{equation} where $\mbox{M}$ means the particular topology of the worldline being considered and the potential $V(X)$ is specialized to a background consisting of a set of plane waves for our purpose of calculating the scattering amplitude \[ V \left( X \right)=\sum\limits_{i=1}^N {e^{ik_i \cdot X\left( {\tau _i } \right)}} \] $V_{\mbox{rep}}$ in (\ref{eq1}) is the volume of the reparametrization group. The reparametrization symmetry of the worldline must be fixed to avoid overcounting. The path integral (\ref{eq1}) gives the sum of all graphs of a given topology with an arbitrary number of external lines. This can be seen by expanding the exponential of the potential. If we consider only graphs with a certain number of external lines, the amplitude can be written as \begin{equation}\label{eq2} \mathscr {A}\left(\mbox{M},N\right) =\int {\frac{\mathscr{D}e\mathscr{D}X} {V_{\mbox{rep}}}\exp \left[{-\frac{1}{2}\int_{\mbox{M}} {d\tau \left( {e^{-1}\dot {X}^\mu \dot {X}_\mu +em^2} \right)} } \right]\cdot \prod\limits_{i=1}^N {\int_{\mbox{M}} { d\tau_i e \left[ e^{ik_i \cdot X\left( {\tau _i } \right)} \right] } } } \end{equation} where $N$ is the number of external lines. To fix the reparametrization symmetry, we can simply set the vielbein $e$ to $1$. However, by doing this, we have left some part of the symmetry unfixed (the ``Killing group", like the conformal Killing group in string theory), as well as over-fixed some non-symmetric transformations (the ``moduli space", also like in string theory). To repair these mismatches by hand, we add integrals over the moduli space and take off some of the integrals over the topology.
The general form of the amplitude (\ref{eq2}) after fixing the reparametrization symmetry is (up to a constant factor from possible fixing of the discrete symmetry which arises due to the indistinguishable internal lines) \begin{equation}\label{eq3} \mathscr {A}( \mbox{M},N)=\prod\limits_{a=1}^\mu \int_{\mbox{F}_a } dT_a \int \mathscr{D}{\rm X}\exp \left[ -\frac12\int_{\mbox{M}} d\tau \left( \dot X^\mu \dot X_\mu +m^2 \right) \right] \int_{\mbox{M}} \left( \prod\limits_{c\notin \mbox{C}}d\tau _c \right) \prod_i e^{ik_i \cdot X_i( \tau _i)} \end{equation} where $C$ denotes the external lines whose positions have to be fixed to fix the residual symmetry, and in the particle case the moduli $T_a$ are just the ``proper times" represented by the lengths of the edges in the graph of topology $\mbox{M}$. Further evaluation by the usual method of 1D field theory gives the following expression (with the delta function produced by the zero mode integral suppressed since it trivially enforces the total momentum conservation in the calculation of the scattering amplitude), \begin{equation}\label{eq4} \mathscr {A}( \mbox{M},N)=\prod\limits_{a=1}^\mu \int_{\mbox{F}_a } dT_a\, \mathscr{V}_{\mbox{M}}( T_a) \int_{\mbox{M}} \left( \prod\limits_{c\notin \mbox{C}}d\tau _c \right) \exp \left[ -\frac12\sum\limits_{i,j} k_i \cdot k_j G_{\mbox{M}}( \tau _i ,\tau _j ) \right] \end{equation} where $\mathscr{V}_{\mbox{M}}\left( T_a \right)$ is the amplitude of the vacuum bubble diagram, \begin{equation}\label{vacuum bubble} \mathscr{V}_{\mbox{M}}\left( T_a \right) = \int {\mathscr{D}{\rm X}\exp \left[ {-\frac{1}{2}\int_{\mbox{M}} {d\tau \left( {\dot {X}^\mu \dot {X}_\mu +m^2} \right)} } \right] } \end{equation} and $G_{\mbox{M}}$ is the Green function which satisfies the following differential equation on the worldline of topology M: \begin{equation}\label{eq5} \ddot {G}_{\mbox{M}} \left( {\tau ,{\tau }'} \right)=-\delta \left( {\tau -{\tau }'} \right)+\rho \end{equation} where $\rho$ is a constant whose integral over the whole 1D space gives $1$, i.e., $\rho$ is the inverse of the total volume (length) of the 1D space \[ \rho = \frac {1} {\int_{\mbox{M}} {d\tau }} \] This constant is required by the compactness of the worldline space. \begin{figure} \includegraphics{fig1} \caption{\label{fig1} (A) One-loop diagram with $N$ external lines. The topology of a circle has one modulus, the circumference $T$ of the loop. Also there is one residual symmetry, which has to be fixed by taking off one of the proper-time integrals. (B) Two-loop diagrams with $N$ external lines. The topology of this kind has three moduli, the lengths $T_1$, $T_2$ and $T_3$ of the three edges. There is no unfixed residual symmetry.} \end{figure} The scattering amplitude (\ref{eq4}) is a general expression for a worldline of any topology with an arbitrary number of external lines. The form of the vacuum bubble amplitude and Green function depends on the topology.
For example, the amplitude of the one-loop 1PI graph with $N$ external lines (Fig.\ \ref{fig1}(A)) is \[ \mathscr {A}\left( {\bigcirc , N} \right)=\int_0^\infty dT\, \mathscr{V}_{\bigcirc} {\int_\bigcirc {\left( \prod\limits_{c=1}^{N-1}{d\tau_c} \right) \exp\left[ {-\frac{1}{2}\sum\limits_{i,j} {k_i \cdot k_j G_{\bigcirc} \left( {\tau _i ,\tau _j } \right)} } \right]} } \] where \[ \mathscr{V}_{\bigcirc}\left( T \right) = \exp \left( -\frac{1}{2} T m^2 \right) T ^ {-D/2} \] and \[ G_\bigcirc \left( {\tau ,{\tau }'} \right)=-\frac{1}{2}\left[ {\left| {\tau -{\tau }'} \right|-\frac{\left( {\tau -{\tau }'} \right)^2}{T}}\right] \] Another example is the amplitude of the two-loop 1PI graph with $N$ external lines (Fig.\ \ref{fig1}(B)), \[ \mathscr {A}\left( {\ominus , N} \right)=\prod\limits_{a=1}^3 {\int_0^\infty {dT_a } } \, \mathscr{V}_{\ominus}\left(T_1,T_2,T_3\right) {\int_{\ominus} {\left( \prod\limits_{c=1}^N{d\tau_c} \right) \exp \left[ {-\frac{1}{2}\sum\limits_{i,j} {k_i \cdot k_j G_{\ominus} \left( {\tau _i ,\tau _j } \right)} } \right]} } \] where $\mathscr{V}_{\ominus}$ and $G_{\ominus}$ will be determined in the following sections. \section{III. Green function} We note that the differential equation (\ref{eq5}) is just the Poisson equation the electric potential should satisfy when there is a unit positive charge at ${\tau}'$ and a constant negative charge density of magnitude $\rho$ over the whole space. This suggests considering the corresponding static electric problem, where the Green function is just the electric potential at $\tau$ of the above setup. To demonstrate the general solution of the Poisson equation, we solve for the Green function on the two-loop worldline as an example. Consider the 1D topological space as shown in Fig.\ \ref{fig_sum}(A) with a unit positive charge at $\tau'$ and a unit negative charge uniformly distributed over the whole space. According to the above argument, the Green function $G \left( \tau , \tau' \right)$ is the electric potential at $\tau$. Now we can use Gauss' law and single-valuedness of the potential to write down equations and solve for the expression of the potential. The potential indeed gives the right form for the Green function, but it contains many terms that will be canceled out when calculating the scattering amplitude using equation (\ref{eq4}), and hence can be further simplified. Note that the setup of the static electric problem in Fig.\ \ref{fig_sum}(A) can be regarded as the superposition of the two setups in Figs.\ \ref{fig_sum}(B) and (C). \begin{figure} \includegraphics{sum} \caption{\label{fig_sum} (A) Two-loop topological space with a unit positive charge at $\tau'$ and a unit negative charge uniformly distributed over the whole space. (B) Two-loop topological space with a unit positive charge at $\tau'$ and a unit negative charge at $\tau$. (C) Two-loop topological space with a unit positive charge at $\tau$ and a unit negative charge uniformly distributed over the whole space.} \end{figure} Let $G\left( \tau , \tau' \right)$ denote the potential at $\tau$ of the setup shown in Fig.\ \ref{fig_sum}(A), and $\bar {G} \left( \tau , \tau' \right)$ denote the potential at $\tau$ in Fig.\ \ref{fig_sum}(B). The potential at $\tau$ in Fig.\ \ref{fig_sum}(C) is then $G\left( \tau , \tau \right)$.
Thus \[ G\left( \tau , \tau' \right) = \bar {G} \left( \tau , \tau' \right) + G\left( \tau , \tau \right) \] We further define $\tilde {G} \left( \tau , \tau' \right)$ as the symmetric part of $\bar {G} \left( \tau , \tau' \right)$, i.e., \begin{equation} \label{G_tilde} \tilde {G} \left( \tau , \tau' \right) \equiv \frac{1}{2} \left[ \bar {G} \left( \tau , \tau' \right) + \bar {G} \left( \tau' , \tau \right) \right] =\left[ \frac{1}{2} G \left( \tau , \tau' \right) + \frac{1}{2} G \left( \tau' , \tau \right) \right] -\frac{1}{2} G \left( \tau , \tau \right) -\frac{1}{2} G \left( \tau' , \tau' \right) \end{equation} If we use $\tilde {G} \left( \tau , \tau' \right)$ instead of $G \left( \tau , \tau' \right)$ in equation (\ref{eq4}), the sum can be rewritten as \begin{eqnarray*} -\frac{1}{2}\sum\limits_{i,j} {k_i \cdot k_j \tilde{G}\left( {\tau _i ,\tau _j } \right)} & = & -\frac{1}{2}\sum\limits_{i,j} {k_i \cdot k_j \left\{ \frac{1}{2} \left[ G \left( {\tau _i ,\tau _j} \right)+ G \left( {\tau _j ,\tau _i} \right) - G \left( {\tau _i ,\tau _i} \right) -G \left( {\tau _j ,\tau _j} \right) \right]\right\}}\\ & = & -\frac{1}{2}\sum\limits_{i,j} {k_i \cdot k_j \left\{ \frac{1}{2} \left[ G \left( {\tau _i ,\tau _j} \right)+ G \left( {\tau _j ,\tau _i} \right) \right]\right\}}\\ & = & -\frac{1}{2}\sum\limits_{i,j} {k_i \cdot k_j G\left( {\tau _i ,\tau _j } \right)} \end{eqnarray*} We have used conservation of momentum $\sum{k_i}=0$ in the second step and rearranged the summands in the third step. This shows that using $\tilde {G} \left( \tau , \tau' \right)$ in equation (\ref{eq4}) is equivalent to using $G \left( \tau , \tau' \right)$. \begin{figure} \includegraphics{symmetry} \caption{\label{fig_symmetry} (A) Two-loop topological space with a unit positive charge at $\tau'$ and a unit negative charge at $\tau$. (B) Two-loop topological space with a unit negative charge at $\tau'$ and a unit positive charge at $\tau$. (C) Two-loop topological space with a half unit positive charge at $\tau'$ and a half unit negative charge at $\tau$.} \end{figure} Note that the same procedure is usually applied to construct the Green function for bosonic strings \cite{D'Hoker:1988ta}, \[ \tilde {G} \left( z , w \right)= G \left( z , w \right) - \frac{1}{2} G \left( z , z \right) -\frac{1}{2} G \left( w, w \right) \] which is similar to equation (\ref{G_tilde}). Now we have to look for an electric field analog of $\tilde {G} \left( \tau , \tau' \right)$. Since $\bar {G} \left( \tau , \tau' \right)$ is the electric potential at $\tau$ of the setup shown in Fig.\ \ref{fig_symmetry}(A), it can be written as the potential at $\tau'$ plus the potential difference from $\tau$ to $\tau'$. And $\bar {G} \left( \tau' , \tau \right)$ is the potential at $\tau'$ of the setup shown in Fig.\ \ref{fig_symmetry}(B). The sum of $\bar {G} \left( \tau , \tau' \right)$ and $\bar {G} \left( \tau' , \tau \right)$ just gives the potential difference from $\tau$ to $\tau'$ of Fig.\ \ref{fig_symmetry}(A) because the potential at $\tau'$ of Fig.\ \ref{fig_symmetry}(A) cancels the potential at $\tau'$ of Fig.\nobreak\ \ref{fig_symmetry}(B). $\tilde {G} \left( \tau , \tau' \right)$ is half that potential difference, and therefore is just the potential difference from $\tau$ to $\tau'$ of Fig.\ \ref{fig_symmetry}(C).
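As a quick cross-check of the manipulation above, the equivalence of $G$ and $\tilde G$ inside the amplitude rests only on momentum conservation; the following NumPy sketch (our own illustration, with a random matrix standing in for $G(\tau_i,\tau_j)$) confirms it:

\begin{verbatim}
# Check: if sum_i k_i = 0, replacing G by the symmetrized,
# diagonal-subtracted Gtilde leaves sum_ij k_i.k_j G_ij unchanged.
import numpy as np

rng = np.random.default_rng(3)
N, D = 6, 4                        # 6 external lines, D dimensions
k = rng.standard_normal((N, D))
k[-1] = -k[:-1].sum(axis=0)        # enforce sum_i k_i = 0

G = rng.standard_normal((N, N))    # stand-in for G(tau_i, tau_j)
Gt = 0.5 * (G + G.T) \
     - 0.5 * (np.diag(G)[:, None] + np.diag(G)[None, :])

kk = k @ k.T                       # matrix of k_i . k_j
print(np.isclose(-0.5 * np.sum(kk * G), -0.5 * np.sum(kk * Gt)))
\end{verbatim}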
It is now clear that, to write down the expression of the scattering amplitude, we only have to know the symmetric Green function $\tilde {G}$, and use the following formula: \begin{equation}\label{eq10} \mathscr {A}( \mbox{M},N)=\prod_{a=1}^\mu \int_{\mbox{F}_a} dT_a\, \mathscr{V}_{\mbox{M}} \prod_{c\notin \mbox{C}} \int_{\mbox{M}} d\tau _c \exp \left[ -\sum_{i<j} k_i \cdot k_j \tilde G( \tau _i ,\tau _j) \right] \end{equation} This simplifies the expression of the Green function. \begin{figure} \includegraphics{fig3} \caption{\label{fig3} (A) Two-loop topological space with a half unit positive charge at $\tau'$ and a half unit negative charge at $\tau$. The lengths of the three arcs are $T_1$, $T_2$, $T_3$. $\tau'$ and $\tau$ are respectively on $T_1$ and $T_3$ and denote the lengths from the origin. The magnitudes of the electric field on each part of the space are denoted by $a$ - $e$ and the directions are chosen arbitrarily. (B) Two-loop circuit with a half unit current input at $\tau'$ and withdrawn at $\tau$. The resistances of the three arcs are $T_1$, $T_2$, $T_3$. $\tau'$ and $\tau$ are respectively on $T_1$ and $T_3$ and denote the resistance from the origin. The currents on the parts of the circuit are denoted by $a$ - $e$ and the directions are chosen arbitrarily.} \end{figure} Now that $\tilde {G} ( \tau , \tau')$ is the potential difference from $\tau$ to $\tau'$ in Fig.\ \ref{fig_symmetry}(C), we can apply Gauss' law (of 1D space) and single-valuedness of the potential to write down the equations the Green function (electric potential $\Phi$) and its first derivative (electric field $E$) should satisfy. We take the directions and values of $E$ as shown in Fig.\ \ref{fig3}(A), with $\tau'$ and $\tau$ lying on $T_1$ and $T_3$, respectively. According to Gauss' law, we have the following equations: \begin{eqnarray*} a+b&=&+\frac{1}{2}\\ -a-c+d&=&0\\ -b+c+e&=&0\\ -d-e&=&-\frac{1}{2} \end{eqnarray*} The single-valuedness of the potential requires \begin{eqnarray*} cT_2+b(T_1-\tau')&=&a\tau'\\ cT_2+d\tau&=&e(T_3-\tau) \end{eqnarray*} If we regard the electric field $E$ as the current $I$, the electric potential $\Phi$ as the voltage $V$, and the length on the 1D space $l$ as the resistance $r$, these equations are just the Kirchhoff equations of a circuit of the same shape as the worldline and with half unit current going into the circuit at $\tau'$ and half unit current coming out at $\tau$, as shown in Fig.\ \ref{fig3}(B). It is easy to see that this equivalence between 1D static electric field and circuit is valid for an arbitrary 1D topological space. We have the following relations for 1D (note that there is no cross-sectional area in the 1D case): \begin{eqnarray*} \sigma E &=& I\\ \rho l &=& r\\ \Phi &=& V \end{eqnarray*} where $\sigma$ and $\rho$ ($=1/\sigma$) are respectively the 1D conductivity and resistivity, and the relation $El=\Phi$ is equivalent to Ohm's law $Ir=V$. In addition, Gauss' law is equivalent to Kirchhoff's current law, and the single-valuedness of the potential is equivalent to Kirchhoff's voltage law. Thus the two problems are indeed equivalent. The Green function $\tilde{G} (\tau , \tau')$ on a particular 1D topological space $\mbox{M}$ can be understood as the voltage difference from $\tau$ to $\tau'$ when a half unit current is input into the ``circuit" (1D space) at $\tau'$ and withdrawn at $\tau$.
Above we have shown that in the 1D case the problem of solving for the first-quantized particle Green function, the potential difference in a static electric field and the voltage difference in a circuit are analogous. Before we solve the circuit problem for the Green function, it is interesting to give a complete analogy among the quantities in these three kinds of problems (Table \ref{tab1}). \begin{table}[!h] \tabcolsep 0pt \caption{\label{tab1}Analogy among quantities in three kinds of problems} \vspace*{-12pt} \begin{center} \def0.8\textwidth{0.8\textwidth} {\rule{0.8\textwidth}{1pt}} \begin{tabular*}{0.8\textwidth}{@{\extracolsep{\fill}}cccccc} Particle & Static electric field & Electric circuit \\ \hline Green function $\tilde {G}$ & potential difference $\Delta \Phi$ & voltage difference $\Delta V$ \\ position $x$ & electric potential $\Phi$ & voltage $V$ \\ momentum $p$ & electric field $E$ & current $I$ \\ proper time $\tau$ & length $l$ & resistance $r$ \\ action $S$ & energy $U$ & power $P$ \\ external force $F$ & electric charge $Q$ & emf $\mathscr{E}$ \end{tabular*} {\rule{0.8\textwidth}{1pt}} \end{center} \end{table} \section{IV. Solving the circuit} The next thing we have to do is to find a general expression for the Green function, i.e., a method to obtain the voltage difference described above. A formula has already been given in graph theory (see, e.g., \cite{Bollobas:1979bq}). Here we summarize this result and develop it into a formula which is similar to the known form of the Green function on the 2D worldsheet of bosonic string theory \cite{D'Hoker:1988ta}. The voltage difference from $\tau$ to $\tau'$ can be obtained by the following steps: (1) Connect $\tau$ and $\tau'$ by a wire with zero resistance. (2) The resulting graph has vertices $V=\{v_1,\ldots,v_n\}$ and edges $E=\{e_1,\ldots,e_{m-1},e_m\}$. Denote by $e_m$ the edge (wire) we have just added. Set the directions of all the edges arbitrarily. (3) Assign voltage, current and resistance on each edge; they can be written in the form of vectors: \begin{eqnarray*} \mathbf{U} & = &(U_1,U_2,...,U_m)^{\mbox{T}}\\ \mathbf{I} & = &(I_1,I_2,...,I_m)^{\mbox{T}}\\ \mathbf{r} & = & (r_1,r_2,...,r_m)^{\mbox{T}} \end{eqnarray*} Also define the external-force-driven voltage $\mathbf{u}$, which has, in our case, only one non-zero component (the last one). Assume it has magnitude $1$: \begin{equation} \label{eq6} \mathbf{u}=(u_1,u_2,...,u_m)^{\mbox{T}}=( 0,0,...,1 )^{\mbox{T}} \end{equation} (4) Find all the ``independent" loops in the graph: there are $(m-n+1)$ of them. These loops can be identified by the following method: Choose an arbitrary spanning tree (a connected subgraph that contains all the vertices and is a tree) of the graph. There are always $(m-n+1)$ chords (the edges not belonging to the spanning tree). Adding a chord to the spanning tree will generate a one-loop graph. Thus each chord gives a loop in the graph, and all the loops obtained in this way are independent of each other. Therefore there are $(m-n+1)$ loops. Assign an arbitrary direction to each loop.
(5) Define matrices $\mathcal{B}$, $\mathcal{C}$ and $\mathcal{R}$ as follows: {\jot=12pt \begin{eqnarray*} {\mathcal{B}_{ij} } & = & \cases{ 1 & if $v_i$ is the initial vertex of $e_j$ \cr -1 & if $v_i$ is the terminal vertex of $e_j$ \cr 0 & otherwise \cr} \\ {\mathcal{C}_{ij} } & = & \cases{ 1 & if $e_i$ is in the same direction as the loop $l_j$ \cr -1 & if $e_i$ is in the opposite direction to the loop $l_j$ \cr 0 & otherwise \cr} \\ {\mathcal{R}_{ij} } & = & \cases{ r_i & $i=j$ \cr 0 & $i \neq j$ \cr} \end{eqnarray*} } (6) With the above definitions, we can write down Kirchhoff's current law, Kirchhoff's voltage law and Ohm's law in compact forms as follows: \begin{eqnarray*} \mathcal{B}\mathbf{I}&=&\mathbf{0}\\ \mathcal{C}^{\mbox{T}}\mathbf{U}&=&\mathbf{0}\\ \mathbf{U}&=&\mathcal{R}\mathbf{I}+\mathbf{u} \end{eqnarray*} The solution for the current on each edge is \[ \mathbf{I} = -\mathcal{C}\left( {\mathcal{C}^{\mbox{T}}\mathcal{RC}} \right)^{-1}\mathcal{C}^{\mbox{T}}\mathbf{u} \] (7) The total resistance between $\tau$ and $\tau'$ (excluding the added last edge) is then minus the voltage on the last edge divided by the current on that edge, i.e., \[ R(\tau,\tau') = - \frac{U_m}{I_m} = - \frac{u_m}{I_m} = - \frac{1}{I_m} \] The Green function is then minus this resistance times the current (one half), since it is the voltage difference from $\tau$ to $\tau'$: \begin{equation} \label{eq7} \tilde G(\tau,\tau') = - \frac{1}{2} R(\tau,\tau') = \frac{1}{2I_m} = - \frac{1}{2\mathbf{u}^{\mbox{T}} \mathcal{C} \left( {\mathcal{C}^{\mbox{T}}\mathcal{RC}} \right)^{-1} \mathcal{C}^{\mbox{T}} \mathbf{u}} \end{equation} where the last step comes from extracting the last component of $\mathbf{I}$ by using the vector $\mathbf{u}$ defined in equation (\ref{eq6}). We can further simplify equation (\ref{eq7}) by considering the physical meaning of each part of the denominator: (1) $\mathcal{C}^{\mbox{T}}\mathbf{u}$ and ${\mathbf{u}}^{\mbox{T}}\mathcal{C}$: Since $\mathbf{u}=\left( {0,...,0,1} \right)^{\mbox{T}}$, $\mathcal{C}^{\mbox{T}}\mathbf{u}$ is an $\left( {m-n+1} \right)$-component column vector whose $i$th component is the direction of the last ($m$th) edge on the $i$th loop ($1$ for ``same'', $-1$ for ``opposite'', and $0$ for ``not on the loop''). If we choose our independent loops appropriately, we can arrange for the $m$th edge to be only on the $\left( {m-n+1} \right)$th (last) loop, with the same direction as this loop. This can always be achieved by the following steps: (a) Choose the spanning tree in such a way that the $m$th edge doesn't belong to the spanning tree, i.e., is a chord. (b) Define the loop generated by adding the $m$th edge to the spanning tree to be the $\left( {m-n+1} \right)$th loop. (c) Define the direction of the $\left( {m-n+1} \right)$th loop to be the same as the $m$th edge. By doing so, we find that $\mathcal{C}^{\mbox{T}}\mathbf{u}$ is just an $\left( {m-n+1} \right)$-component column vector whose only non-vanishing component is the last one, equal to $1$: \[ \mathcal{C}^{\mbox{T}}\mathbf{u}=\left( {0,...,0,1} \right)^{\mbox{T}} \] And $\mathbf{u}^{\mbox{T}}\mathcal{C}$ is the transpose of $\mathcal{C}^{\mbox{T}}\mathbf{u}$. Define for convenience \[ \mathbf{P} \equiv \mathcal{C}^{\mbox{T}}\mathbf{u} \] Note that $\mathbf{P}^{\mbox{T}}\mathcal{M}{\mathbf{P}}$ gives the $\left[ {\left( {m-n+1} \right),\left( {m-n+1} \right)} \right]$ component of any matrix $\mathcal{M}$ with dimension $\left( {m-n+1} \right)\times \left( {m-n+1} \right)$.
(2) $\mathcal{C}^{\mbox{T}}\mathcal{RC}$: $\mathcal{C}^{\mbox{T}}\mathcal{RC}$ is an $\left( {m-n+1} \right)\times \left( {m-n+1} \right)$ matrix. The components can be interpreted as the sum of the ``signed'' resistances, \[ \left( {\mathcal{C}^{\mbox{T}}\mathcal{RC}} \right)_{ij} =\sum\limits_{k\,\in\,\rm{all\,edges}} {f\left( {k,i,j} \right)r_k } \] where $r_k$ is the resistance on the $k$th edge and $f$ is the ``sign'': \begin{equation}\label{eq9} f(n,i,j) = \cases{ 1 & $n$th edge is on both loops $i$ and $j$ with the same orientation \cr -1 & $n$th edge is on both loops $i$ and $j$ with opposite orientations \cr 0 & $n$th edge is not on both loops $i$ and $j$ \cr} \end{equation} We define for convenience \begin{equation}\label{convenience} \mathcal{M} \equiv \pmatrix{ \Omega & {\mathbf{v}} \cr {\mathbf{v}^{\mbox{T}}} & s \cr} \equiv \mathcal{C}^{\mbox{T}}\mathcal{RC} \end{equation} where $\Omega$ is an $\left( {m-n} \right)\times \left( {m-n} \right)$ matrix, $\mathbf{v}$ is an $\left( {m-n} \right)$-component vector, and $s$ is a scalar. Now the formula for the Green function (\ref{eq7}) becomes \[ \tilde G = -\frac {1} { 2 \mathbf{P}^{\mbox{T}} \mathcal{M}^{-1} \mathbf{P} } \] Since ${\mathbf{P}}^{\mbox{T}}\mathcal{M}^{-1}{\mathbf{P}}$ is just the $\left[ {\left( {m-n+1} \right),\left( {m-n+1} \right)} \right]$ component of $\mathcal{M}^{-1}$, we have \[ {\mathbf{P}}^{\mbox{T}}\mathcal{M}^{-1}{\mathbf{P}}=\frac{\det \Omega}{\det \mathcal{M}} \] So, we have \[ \tilde G=-\frac{\det \mathcal{M}}{2 \det \Omega } \] Next we evaluate $\det \mathcal{M}$. By the usual matrix algebra (e.g., defining the determinant by a Gaussian integral and doing the ``$m-n$'' integrals first), \[ \det \mathcal{M} = \det \pmatrix{ \Omega & {\mathbf{v}} \cr {\mathbf{v}^{\mbox{T}}} & s \cr} =( {\det \Omega })\left[ {s-{\mathbf{v}}^{\mbox{T}}\Omega ^{-1}\mathbf{v}} \right] \] So we have the following expression for the Green function: \begin{equation} \label{eq8} \tilde{G}=-\frac{\det \mathcal{M}}{2 \det \Omega } =-\frac{1}{2} s + \frac{1}{2} {\mathbf{v}}^{\mbox{T}}\Omega ^{-1}{\mathbf{v}} \end{equation} \section{V. Vacuum bubble} To complete this general method of writing down the scattering amplitude, we need to give the expression for the vacuum bubble amplitude with a given topology $\mathscr{V}_{\mbox{M}}\left( T_a \right)$ defined in equation (\ref{vacuum bubble}). This can always be achieved by evaluating the bubble diagram by the second-quantized method, but here we give a derivation by direct calculation in first quantization. Note that the path integral (\ref{vacuum bubble}) is the sum over all possible momentum configurations of the product of the expectation values of the free evolution operator between two states at the ends of each edge: \[ \mathscr{V}_{\mbox{M}}\left( T_a \right) = \sum\limits_{\{p\}}\left\{\prod\limits_{a=1}^\mu { \left\langle p_a \left| e^{-T_a\left(p^2+m^2\right)/2} \right| p_a \right\rangle } \right\} = \sum\limits_{\{p\}}\left\{ \exp \left[ \sum\limits_{a=1}^\mu {-\frac{1}{2} T_a\left(p_a^2+m^2\right) }\right] \right\} \] where $p_a$ is the momentum of the particle traveling on the $a$th edge. The sum over all the configurations of $p_a$ can be written as the integration over all the possible values of $p_a$, but they are not independent of each other. Each of the $\mu$ $p_a$'s can be written as a linear combination of the $L$ $k_i$'s, where $L$ is the number of loops of the graph and $k_i$ is the loop momentum on the $i$th loop.
So the amplitude $ \mathscr{V}_{\mbox{M}}\left( T_a \right)$ can be written as \begin{equation}\label{eq11} \mathscr{V}_{\mbox{M}}\left( T_a \right) = \exp \left[ -\frac{1}{2} \left( \sum\limits_{a=1}^\mu { T_a } \right) m^2 \right] \int {\left( \prod\limits_{i=1}^{L}{dk_i} \right)\exp \left[ -\frac{1}{2} \underline{\sum\limits_{a=1}^\mu { T_a \left( \sum\limits_{i=1}^{L} {g_{ai}k_i} \right)^2 }}\right]} \end{equation} where \[ g_{ai} = \cases{ 1 & edge $a$ has the same direction as loop $i$ \cr -1 & edge $a$ has the opposite direction to loop $i$ \cr 0 & edge $a$ is not on loop $i$ \cr} \] It is easy to see the following points: (a) If edge $a$ is on loop $i$, there is a $T_a k_i^2$ term in the underlined sum in equation (\ref{eq11}), and vice versa. (b) If edge $a$ is on both loop $i$ and loop $j$, there is a term $2 T_a k_i k_j$ in the sum, and vice versa. Further, if on edge $a$ both loops have the same (opposite) direction, there is a positive (negative) sign before the term, and vice versa. Thus if we use the factor $f(a,i,j)$ defined in equation (\ref{eq9}), we can write the underlined part in equation (\ref{eq11}) in a compact form, and further in terms of the period matrix according to the definition (\ref{convenience}), or definition (\ref{period matrix 2}) below: \[ \sum\limits_{a=1}^\mu { T_a \left( \sum\limits_{i=1}^{L} {g_{ai}k_i} \right)^2 } =\sum\limits_{a=1}^\mu { \sum\limits_{i,j=1}^{L} { f\left(a,i,j\right) T_a k_i k_j } } =\sum\limits_{i,j=1}^{L} { \left[ \sum\limits_{a=1}^\mu{f\left(a,i,j\right) T_a } \right] k_i k_j } =\sum\limits_{i,j=1}^{L} { \Omega_{ij} k_i k_j } \] The amplitude $\mathscr{V}_{\mbox{M}}\left( T_a \right)$ can then be calculated easily: \begin{eqnarray}\label{determinant} \mathscr{V}_{\mbox{M}}\left( T_a \right) &=& \exp \left[ -\frac{1}{2} \left( \sum\limits_{a=1}^\mu { T_a } \right) m^2 \right] \int {\left( \prod\limits_{i=1}^{L}{dk_i} \right) \exp \left[ -\frac{1}{2} \sum\limits_{i,j=1}^{L} { \Omega_{ij} k_i k_j } \right]}\nonumber\\ &=& \exp \left[ -\frac{1}{2} \left( \sum\limits_{a=1}^\mu { T_a } \right) m^2 \right] \left( \det \Omega \right) ^ {-D/2} \end{eqnarray} where $D$ is the dimension of the spacetime. \section{VI. Summary} We now summarize our results for general amplitudes. To find the Green function: (1) Consider the ``circuit'' (topology $\mbox{M}$ with two more vertices at $\tau$ and $\tau'$ respectively) as a graph. Assign a number to each edge. Define arbitrarily the direction of each edge. Assign a number to each loop. Define arbitrarily the direction of each loop. (2) Find the period matrix $\Omega$ by the following definition: \begin{equation}\label{period matrix} \Omega_{ij} =\sum\limits_{k\,\in\,\rm{all\,edges}} f(k,i,j)r_k \end{equation} where \[ f(n,i,j) = \cases{ 1 & $n$th edge is on both loops $i$ and $j$ with the same orientation \cr -1 & $n$th edge is on both loops $i$ and $j$ with opposite orientations \cr 0 & $n$th edge is not on both loops $i$ and $j$ \cr} \] Each $r_k$ may be $\tau$, $\tau'$, a $T_a$ of the topological space $\mbox{M}$ without external lines, $T_a-\tau$, or $T_a-\tau'$. It is obvious that $\Omega$ does not depend on $\tau$ and $\tau'$, and only depends on the properties of $\mbox{M}$: To see this, note that the graph of the circuit has just two more vertices (at positions $\tau$ and $\tau'$) than the graph of $\mbox{M}$ and each of them may separate an edge in the graph of $\mbox{M}$ into two ``sub-edges''.
But these changes on the graph do not affect the period matrix: they neither give new loops nor eliminate existing ones, and the two sub-edges are always on or off a loop together. Thus the period matrix of the graph of the circuit is just the period matrix of the graph of $\mbox{M}$, so it only has to be calculated once for a given topology, \begin{equation}\label{period matrix 2} \Omega_{ij} =\sum\limits_{a=1}^\mu f(a,i,j)T_a \end{equation} (3) Find a path (call it the ``reference path'') connecting $\tau$ and $\tau'$. Choose its direction arbitrarily. Define the scalar $s$ as the total resistance on the reference path. (4) Find the vector $\mathbf{v}$ defined as follows: \[ v_i =\sum\limits_{k\,\in\,\rm{all\,edges}} f(k,i,0)r_k \] where ``0'' means the reference path. (5) The Green function is given by \[ \tilde{G}= -\frac{1}{2} s + \frac{1}{2} {\mathbf{v}}^{\mbox{T}}\Omega ^{-1}{\mathbf{v}} \] The amplitude is then given by \[ \mathscr {A}\left( {\mbox{M},N} \right)=\prod\limits_{a=1}^\mu {\int_{\mbox{F}_a } {dT_a } } \, \mathscr{V}_{\mbox{M}} \prod\limits_{c\notin \mbox{C}} {\int_{\mbox{M}} {d\tau _c \exp \left[ -\sum\limits_{i<j} {k_i \cdot k_j \tilde {G}\left( {\tau _i ,\tau _j }\right)} \right]} } \] with the Green function as above, and the vacuum bubble amplitude is \[ \mathscr{V}_{\mbox{M}}\left( T_a \right) = \exp \left[ -\frac{1}{2} \left( \sum\limits_{a=1}^\mu { T_a } \right) m^2 \right] \left( \det \Omega \right) ^ {-D/2} \] Although we have defined and derived all these quantities $s$, $\mathbf{v}$ and $\Omega$ in terms of the concepts in the circuit problem, it is easy to give their worldline geometric interpretation by noting that the resistance is an analog of the proper time, i.e., the length of the worldline (Table \ref{tab1}). $s$ is just the total proper time of the reference path, and the components of $\mathbf{v}$ are the sums of the ``signed'' proper time on the common edges of the reference path and each loop. Entries of $\Omega$ are sums of the ``signed'' proper time on the common edges of each pair of loops. (All the signs are given by $f$.) The components of $\mathbf{v}$ and entries of $\Omega$ can also be expressed as integrals of the ``Abelian differentials'' on the loops of the worldline, \begin{eqnarray*} v_i &=& \int_{\tau'}^{\tau} { \omega_i }\\ \Omega_{ij} &=& \ointctrclockwise_i \ { \omega_j } \end{eqnarray*} where $\omega_i$ is the line element on loop $i$ and the second integral is around loop $i$ along its direction. Then our expression for the particle amplitude has a similar structure to the bosonic string amplitude, with the Green function on the 2D worldsheet \cite{D'Hoker:1988ta}: \[ \tilde {G} \left( w , z \right) = - 2 \ln \left| E\left( w,z \right) \right| + 2\pi\, \text{Im} \int_w^z \mathbf{\omega} \left( \text{Im}\, \Omega \right)^{-1} \text{Im} \int_w^z \mathbf{\omega} \] where $E$ is the prime form, the vector $\mathbf{\omega}$ contains the basis of the Abelian differentials and the matrix $\Omega$ is the period matrix. \section{VII. Examples} Here we give some examples of obtaining the Green functions on different topologies. For the finite line of length $T$ in Fig.\ \ref{fig4}, there is no loop, so there is no period matrix $\Omega$ nor vector $\mathbf{v}$. $s$ is just the total resistance between $\tau$ and $\tau'$.
So the Green function is \[ \tilde{G}_{-} \left( \tau , \tau' \right)=-\frac{1}{2}s=-\frac{1}{2}\left|{\tau-\tau'}\right| \] and \[ \mathscr{V}_{-} \left( T \right)= \exp \left( -\frac{1}{2} T m^2 \right) \] The amplitude is then given by equation (\ref{eq10}). One just has to note that there is one modulus $T$ for this case and to fix the residual symmetry: two of the external lines should be fixed at one end of the line and another two should be fixed at the other end. \begin{figure} \includegraphics{fig4} \caption{\label{fig4} The topology of a line with length $T$. There is no loop and hence no period matrix nor vector $\mathbf{v}$. The only path between $\tau$ and $\tau'$ is the edge connecting the two vertices, $e_1$, so we choose it as the reference path.} \end{figure} \begin{figure} \includegraphics{fig5} \caption{\label{fig5} (A) The one-loop topology. $T$ is the circumference of the circle. There are two edges ($e_1$,$e_2$) and two vertices ($v_1$,$v_2$). $\tau$ and $\tau'$ are the lengths from $v_1$ and $v_2$ to the origin through a counterclockwise path. The directions of the edges are chosen arbitrarily and marked in the figure. There is only one loop and it is marked by dotted lines. The reference path is marked by a dashed line. (B) A two-loop topology. $T_1$, $T_2$ and $T_3$ are the lengths of the three arcs. $\tau$ and $\tau'$ are respectively the length on the 3rd and 1st arc from the origin. (If the Green function with $\tau$ and $\tau'$ on other arcs is needed, simply repeat the steps for that case.) There are 5 edges ($e_1$ to $e_5$) and 4 vertices ($v_1$ to $v_4$). The directions of the edges are chosen arbitrarily and marked in the figure. There are two independent loops and they are marked by dotted lines. The reference path is marked by a dashed line.} \end{figure} For the circle, there is one loop as shown in Fig.\ \ref{fig5}(A). So the period matrix is $1 \times 1$. The only entry of $\Omega$ is then \[ \Omega = T \] According to the definitions, $s$ and $\mathbf{v}$ are \[ s = \left|{\tau-\tau'}\right| \] \[ \mathbf{v} = \left|{\tau-\tau'}\right| \] Thus the Green function on the circle is \[ \tilde{G}_{\bigcirc} \left( \tau , \tau' \right) =-\frac{1}{2} s + \frac{1}{2} {\mathbf{v}}^{\mbox{T}}\Omega ^{-1}{\mathbf{v}} =-\frac{1}{2}\left|{\tau-\tau'}\right| + \frac{(\tau-\tau')^2}{2T} \] and the vacuum bubble amplitude is \[ \mathscr{V}_{\bigcirc}\left( T \right) = \exp \left( -\frac{1}{2} T m^2 \right) T ^ {-D/2} \] For the Green function on the 2-loop graph shown in Fig.\ \ref{fig5}(B), the period matrix is $2\times2$ and $\mathbf{v}$ is a two-component vector. $\Omega$, $s$ and $\mathbf{v}$ are \begin{eqnarray*} \Omega &=&\pmatrix{ T_1 +T_2 & -T_2 \cr -T_2 & T_2 +T_3 \cr}\\ s&=&\tau'+\tau\\ \mathbf{v}&=&( \tau',\tau )^{\mbox{T}} \end{eqnarray*} So the Green function is, by plugging all the above into equation (\ref{eq8}), \[ \tilde {G}_{\ominus} \left( \tau , \tau' \right) =-\frac{T_1 \left( {T_2 +T_3 -\tau } \right)\tau +T_1 \left( {T_2 +T_3 } \right){\tau }'-T_3 {\tau }'^2+T_2 \left( {T_3 -\tau -{\tau }'} \right)\left( {\tau +{\tau }'} \right)}{2\left( {T_1 T_2 +T_2 T_3 +T_3 T_1 } \right)} \] and the vacuum bubble amplitude is \[ \mathscr{V}_{\ominus}\left( T_1,T_2,T_3 \right) = \exp \left[ -\frac{1}{2} \left( T_1 + T_2 + T_3 \right) m^2 \right] \left( T_1 T_2 + T_2 T_3 + T_3 T_1 \right) ^ {-D/2} \]
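These closed-form results provide a convenient check of the general recipe of Section VI. The minimal numerical sketch below (Python with numpy; the function and variable names are ours) evaluates $\tilde{G}=-s/2+\mathbf{v}^{\mbox{T}}\Omega^{-1}\mathbf{v}/2$ directly:

\begin{verbatim}
import numpy as np

def green(s, v, Omega):
    # Worldline Green function of the Section VI recipe:
    # G = -s/2 + v^T Omega^{-1} v / 2; tree topologies have no loops,
    # so there G = -s/2.
    if Omega is None:
        return -0.5 * s
    v = np.atleast_1d(np.asarray(v, dtype=float))
    Om = np.atleast_2d(np.asarray(Omega, dtype=float))
    return -0.5 * s + 0.5 * (v @ np.linalg.solve(Om, v))

# Finite line: no loops.
T, tau, taup = 10.0, 7.0, 2.0
print(green(abs(tau - taup), None, None))               # -2.5 = -|tau-tau'|/2

# Circle: Omega = (T), s = v = |tau - tau'|.
print(green(abs(tau - taup), [abs(tau - taup)], [[T]])) # -1.25

# Two-loop graph: s = tau' + tau, v = (tau', tau)^T.
T1, T2, T3, taup, tau = 1.0, 2.0, 3.0, 0.4, 1.1
Om = np.array([[T1 + T2, -T2], [-T2, T2 + T3]])
print(green(taup + tau, [taup, tau], Om))               # -0.4686...
print(np.linalg.det(Om))    # 11 = T1*T2 + T2*T3 + T3*T1 (vacuum bubble)
\end{verbatim}

All three values reproduce the closed-form expressions above, and $\det \Omega$ reproduces the combination $T_1 T_2 + T_2 T_3 + T_3 T_1$ entering the two-loop vacuum bubble amplitude. \section{VIII.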
Conclusions} Based on the first-quantized formalism, the discussion in this paper gives a new method to evaluate the scattering amplitude of scalar field theory at arbitrary loop level. (The procedure was given in Section VI.) The form of the Green function is shown to be similar to that on the worldsheet in bosonic string theory. The method applies not only to 1PI graphs, but to arbitrary graphs that might appear in S-matrices. The amplitude obtained by the first-quantized method in this paper can easily be shown to be equivalent to the amplitude from second quantization. Further, the singularities of general diagrams can be discussed and the Landau conditions can be obtained. The occurrence of a singularity on the physical boundary was shown to be related to a sequence of interactions connected by real particle paths by Coleman and Norton \cite{Coleman:1965xm}. Here this same picture emerges more naturally: the Feynman diagram is interpreted as a particle path (through background fields) and the integral over the proper time is introduced from the beginning. The extension to diagrams with spinning particles is the natural next step. It is expected to be parallel to Strassler's discussion on various theories \cite{Strassler:1992zr}, with the Green function on a circle replaced by those on other topologies. However, although the 2-loop Green function has been known for a long time, no full-fledged extension to even 2-loop Yang-Mills theory has yet been found, because of the complexity and difficulty entering at multi-loop level. Previous attempts include generalization from Bern and Kosower's string approach \cite{DiVecchia:1995in,DiVecchia:1996uq} and Strassler's particle approach \cite{Sato:1998sf,Sato:2000cr}. It is now clear that scalar field theory can be seen as the limit of bosonic string theory in which the length of the string shrinks to zero. Can the bosonic string worldsheet be seen as the sum over all the Feynman diagrams of the scalar field theory, viewed as random lattices of the worldsheet? This is another interesting question that may be related to the result in this paper. \section*{Acknowledgment} This work was supported in part by National Science Foundation Grant No.\ PHY-0354776.
\section{Introduction} The Universe is opaque to gamma-rays. Very high-energy photons from extragalactic sources are absorbed by the ambient soft radiation and converted into electron-positron pairs \citep{gould_opacity_1967,wdowczyk_primary_1972}. These leptons are deflected by the ExtraGalactic Magnetic Field (EGMF) and cool through inverse Compton scattering, producing new gamma-rays that may, in turn, be absorbed. The observable properties of the resulting electromagnetic cascade depend on the characteristics of the intergalactic medium. The development of cascades has three main observable effects. First, the source spectrum is altered because each high-energy TeV photon is reprocessed into thousands of GeV photons \citep{protheroe_effect_1986, roscherr_constraining_1998,aharonian_origin_2002,neronov_evidence_2010}. Second, due to the deflection of leptons by the EGMF, new gamma-rays are emitted along different lines of sight, so that a point source may appear as extended \citep{aharonian_very_1994, eungwanichayapant_very_2009}. Third, as leptons are deflected, cascade photons travel a longer distance and arrive with a significant time delay, as compared to unabsorbed, primary photons \citep{kronberg_time_1995,plaga_detecting_1995, ichiki_probing_2008,murase_probing_2008,takahashi_detectability_2008}. Observation (or non-detection) of electromagnetic cascades is crucial to several astrophysical issues. It offers a unique tool to probe the intergalactic medium, especially the Extragalactic Background Light (EBL) and the EGMF. The background photons involved in the cascades have two distinct origins. Inverse Compton scattering mainly occurs on photons from the Cosmic Microwave Background (CMB) while high-energy gamma-rays are mostly absorbed by the EBL of stars and dust, which extends from infrared to ultraviolet. Our knowledge of the EBL is limited. Direct observations at these wavelengths are very inaccurate due to strong contamination from the zodiacal light. The predictions of the different models proposed in the literature can differ by up to an order of magnitude, depending on wavelength and redshift \citep[see Fig.~\ref{fig:EBL_CMB}]{franceschini_extragalactic_2008, dominguez_extragalactic_2011,finke_modeling_2010,kneiske_lower-limit_2010, gilmore_semi-analytic_2012}. Absorption of the gamma-ray spectrum of high-energy sources provides unequalled constraints on the EBL \citep{stecker_tev_1992}. Recently, cosmological cascades were also used to probe the properties of the extragalactic magnetic field, the origin of which is still debated \citep{durrer_cosmological_2013}. A primordial magnetic field could have been generated during inflation or during the phase transition when electroweak and QCD forces decoupled. This field would have remained unaffected during the evolution of the extragalactic medium. Alternatively, magnetic fields generated by galaxies during large scale structure formation could have propagated in the intergalactic medium through plasma jets. Depending on the properties of the field generation and evolution, its value $B$ is expected to lie in the range $10^{-17}$ to $10^{-9}$ Gauss \citep{essey_determination_2011,finke_constraints_2015}, with coherence length $\lambda_B$ (scale of de-correlation of two nearby field lines) between $10^{-6}$ and $10^4$ Mpc. Electromagnetic cascades represent a unique tool to probe the intergalactic magnetic field \citep{aharonian_ultrahigh_1985} when conventional methods such as Faraday rotation cannot be applied.
\cite{neronov_method_2007} and \cite{elyiv_gamma-ray_2009} have suggested measuring the extension of pair halos to probe the EGMF. Indeed, \cite{neronov_degree-scale_2010} demonstrated that for a strong enough magnetic field, halos in the GeV energy band can remain long after the TeV blazar's end of activity. Alternatively, the spectral analysis can also provide constraints on the EGMF \citep[e.g.][]{davezac_cascading_2007,neronov_evidence_2010, kachelries_constraints_2010}. Although most studies focus on the average intensity and coherence length of the field, it has been shown recently that anisotropies in the images of pair halos could also provide crucial information on the magnetic helicity \citep{long_morphology_2015, batista_probing_2016}. In the past years, all three effects have been searched for intensively in the data of the space gamma-ray telescope {\it Fermi} and of Cherenkov air telescopes such as MAGIC, H.E.S.S., or VERITAS. However, none of the methods has provided undisputed evidence yet. The cascade contribution to the GeV spectrum has mostly provided upper limits, and most blazar observations remain compatible with no cascade emission \citep{arlen_intergalactic_2014}. Such constraints, however, provide lower limits on the EGMF intensity \citep{neronov_evidence_2010}. No time delay has been clearly detected either, which also provides lower limits on the amplitude of the random component of the magnetic field \citep{neronov_no_2011}. Detection of pair halos requires a very accurate modelling of the instrument point spread function (PSF) and has not given undisputed results either \citep{krawczynski_x-ray/tev-gamma-ray_2000, aharonian_search_2001, abramowski_search_2014, prokhorov_search_2016}. A detection in Fermi-LAT data sets was claimed recently \citep{chen_search_2015}, but has not been confirmed by the {\it Fermi} collaboration yet. Much better constraints are expected from CTA \citep{meyer_sensitivity_2016}. Regardless of the detection method, a deep understanding of the cascade physics is crucial to interpret observational data. In the past decade, the cascade physics has been investigated through fast, analytical (or semi-analytical) methods that make it possible to cover a large parameter space quickly, and through Monte Carlo simulations. Although much slower, the latter have proven to be mandatory to derive quantitative results and to interpret precise observations. Several codes have been developed over the years but only the most recent include the cosmological expansion in the particle trajectories \citep{taylor_extragalactic_2011,kachelriess_elmag_2012, arlen_intergalactic_2014, settimo_propagation_2015}. To our knowledge, only one is publicly available \citep[ELMAG,][]{kachelriess_elmag_2012}, but the lepton trajectories in the magnetised, intergalactic medium are treated in a simple 1D diffusion approach. In this paper we present a new Monte Carlo code that is publicly available\footnote{\url{https://gitlab.com/tfitoussi/cascade-simulation}}. This code is dedicated to cascades induced by high-energy photons (or leptons) and does not take into account hadronic processes \citep[see e.g.][for results on cosmic-ray induced cascades]{oikonomou_synchrotron_2014,essey_determination_2011}. It computes the physics of leptonic cascades at the highest level of precision and with the fewest approximations. Using this code, we present a systematic exploration of the parameter space.
In Section~\ref{sec:cascades} we present the basic analytical theory of cosmological cascades and simple analytical estimates of their observables. The results of our code are presented in Section~\ref{sec:simple_case} and are tested against analytical approximations and other published numerical results. The last sections of the paper are devoted to an exploration of the parameter space. We study the impact of the source properties (redshift, spectrum, anisotropy) in Section \ref{sec:source_profile}. Then, in Section~\ref{sec:intergalactic_medium}, we explore the effects of the intergalactic medium (EBL, EGMF). Technical aspects of the code are presented in Appendix \ref{appendix:code}. \section{Physics of cosmological cascades} \label{sec:cascades} Cosmological electromagnetic cascades involve three main processes: pair production through photon-photon annihilation, inverse Compton scattering, and propagation of charged particles in a magnetised, expanding universe. All other processes are negligible. In particular, as long as primary photons do not exceed 100 TeV and the extragalactic magnetic field remains below $B=10^{-10}$ G, synchrotron cooling is orders of magnitude weaker than Compton cooling, and synchrotron photons only contribute at low energy, below the infrared range ($< 0.02$ eV). This section presents a simple analytical view of the cascade physics. \subsection{Propagation of particles in a magnetised, expanding Universe} Cosmological cascades develop on kpc to Gpc scales. On the largest scales, the geometry and the expansion of the universe must be taken into account. Throughout this paper we assume a $\Lambda$-CDM model. A complete description of particle trajectories can be found in appendix \ref{appendix:travel}. However, a few important points must be noted here. First, in an expanding Universe and in the absence of any interaction, the momentum $p$ of particles (photons and leptons) evolves with redshift $z$ as $p \propto (1+z)$, also meaning that their energy continuously decreases with time. In the specific case of photons, the energy scales with momentum: $E_\gamma \propto p \propto (1+z)$, providing the well-known cosmological redshift. The cosmological evolution of lepton energy is slightly more complex. However, in the limit of highly relativistic particles, it also scales as $(1+z)$. The propagation of leptons is also affected by the extragalactic magnetic field (EGMF). In this work, we assume that no field is created or dissipated in the cosmological voids, and that it is simply diluted as the universe expands: $B(z)\propto (1+z)^2$ \citep[see][eq. 22]{durrer_cosmological_2013}. In that case, the Larmor radius evolves as: \begin{equation} \label{eqn:RL} R_L = \frac{E_e}{e_cB (1+z)} \approx 1.1 ~(1+z)^{-1} \left( \frac{E_e}{1\rm{TeV}} \right) \left( \frac{B}{10^{-15}\rm{G}} \right)^{-1} \textrm{Mpc}, \end{equation} where $e_c$ is the lepton charge, and $E_{e}$ and $B$ are respectively the lepton energy and the EGMF intensity at $z=0$. In comoving coordinates, this means that the comoving Larmor radius $ R_L (1+z)$ is constant and that for a uniform magnetic field, the perpendicular comoving motion of leptons is a pure circle. When Compton losses are included, lepton trajectories become converging spirals. In practice, the cosmological magnetic field is expected to be highly turbulent \citep{caprini_gamma-ray_2015,durrer_cosmological_2013}.
Although it should be described by a full turbulent spectrum, its properties are often characterised by its intensity $B$ and its coherence length $\lambda_B$. In this paper, we will consider that the field structure can be modelled by uniform magnetic cells of the same intensity and size $\lambda_B$ but with random orientations. Inside a particular cell, the comoving lepton trajectories are simple helices. \subsection{Photon absorption by the EBL} High-energy photons (of energy $E_\gamma$) annihilate with soft, ambient photons. The annihilation cross section being maximal close to the threshold, the interaction is most efficient with soft photons of energy $\sim (m_ec^2)^2/E_{\gamma}$ where $m_e$ is the lepton mass and $c$ is the speed of light. This explains why TeV photons are absorbed preferentially by eV photons of the EBL \citep{gould_opacity_1967}. Fig.~\ref{fig:EBL_CMB} shows six models of EBL that can be found in the literature \citep{franceschini_extragalactic_2008,dominguez_extragalactic_2011,finke_modeling_2010, kneiske_lower-limit_2010,gilmore_semi-analytic_2012} and illustrates the uncertainty on the EBL intensity and spectrum. It can be seen that at $z=2$, the EBL photon densities can differ by one order of magnitude from one model to the other. \begin{figure} \centering \includegraphics[width=\columnwidth]{ExtragalacticLight} \caption{Comoving spectral energy distribution of target photons at a redshift $z=2$, including the Cosmological Microwave Background (black line) and different models of EBL (color lines).} \label{fig:EBL_CMB} \end{figure} The photon mean annihilation distance is plotted in Fig.~\ref{fig:lambda_gg} as a function of the initial photon energy, assuming the EBL model from \cite{dominguez_extragalactic_2011}. The solid lines show the results for an expanding universe while the dashed lines show the results for a static universe. Below 1 TeV, the absorption mean free path quickly becomes larger than the typical distance of targeted sources (100 Mpc to Gpc), so that only TeV photons are significantly absorbed. Below 200-300 GeV, photons tend to travel over such large distances that two cosmological effects work in concert to produce a diverging mean free path. First, the target photon density, which scales as $(1+z)^3$, drops as the universe expands (a horizon effect). Second, gamma-ray photons are more and more redshifted before reaching the next annihilation point, requiring higher and higher energy target photons. As the EBL photon density drops at 10 eV, the mean free path diverges at low energy. Photons emitted at low redshift with an energy of 1~TeV travel a few hundred Mpc before producing pairs. This distance decreases quickly at higher energy to reach a few Mpc down to a few kpc. Considering a blazar like Mrk421 ($z=0.0308$, 135~Mpc) emitting very high-energy photons up to 100~TeV, primary gamma-rays are typically absorbed over a distance of a few Mpc. \begin{figure} \centering \includegraphics[width=\columnwidth]{lambda_gg} \caption{Gamma-ray annihilation mean free path $\lambda_{\gamma\gamma} = c t_{\gamma\gamma}$ (where $t_{\gamma\gamma}$ is the mean cosmic time between two interactions) as a function of the initial photon energy, and for different emission redshifts (solid lines). For comparison, the mean free path for a static universe (with properties frozen at their values at the initial redshift) is shown in dashed lines.
The black line shows the thin/thick transition where the mean free path equals the source distance $D_s$ and where the energy equals the absorption energy $E_{\rm abs}$.} \label{fig:lambda_gg} \end{figure} Since the density of the EBL photons decreases with their energy, gamma-ray absorption is more efficient at high energy. The transition from optically thin to optically thick photon-photon absorption, where the radiation becomes fully absorbed, occurs at an initial energy $E_{\rm abs}=(1+z)E_{\rm cut}$ where $E_{\rm cut}$ is the corresponding energy cut-off observed in the spectra at $z=0$. Fig.~\ref{fig:cutoff_energy} shows the cut-off energy as a function of source redshift. Distant sources are more absorbed and their absorption occurs at lower energy. Significant differences are observed between the EBL models. At large redshift ($z>1$), the models diverge and differences of up to a factor of 6 are observed in the cut-off energy. At lower redshifts the EBL models are consistent with each other and only differ by a factor of 2 at the lowest energies as, at such low distance, gamma-gamma absorption occurs mainly above 10 TeV. The effects of the EBL model on the cascades will be discussed in section \ref{sec:EBL}. \begin{figure} \centering \includegraphics[width=\columnwidth]{cutoff_energy} \caption{Cut-off energy $E_{\rm cut}$ observed at $z=0$ as a function of the source redshift for different EBL models. } \label{fig:cutoff_energy} \end{figure} \subsection{Compton scattering by the CMB} High-energy leptons Compton up-scatter soft ambient photons to gamma-ray energies. In the Thomson regime, the Compton cross-section does not depend on the energy, and the scattering rate scales linearly with the target photon number density, hence Compton scattering mostly occurs on CMB photons. The CMB is modelled by a blackbody with a temperature $T_{\rm cmb} = (1+z) T_{\rm cmb,0}$ (black curve in Fig.~\ref{fig:EBL_CMB}), where $T_{\rm cmb,0}=2.725$ K is the temperature at $z=0$. The associated CMB mean density, average energy and mean energy density are respectively: \begin{align} \label{CMB} n_{cmb} &= 16 \pi \zeta(3) \left( \frac{k_B T_{\rm cmb}}{hc} \right)^3 \approx 411 (1+z)^3 \quad \rm{cm}^{-3}, \\ \epsilon_{cmb} &= \frac{\pi^4}{30 \zeta(3)} k_B T_{\rm cmb} \approx 6.34 \times 10^{-4} (1+z) \quad \rm{eV}, \\ \rho_{cmb} &= n_{cmb} \epsilon_{cmb} \approx 0.26 (1+z)^4\quad \rm{eV.cm}^{-3}, \end{align} where $\zeta(3) \approx 1.202$ is Ap\'ery's constant and $k_B$ and $h$ are the Boltzmann and Planck constants, respectively. In the Thomson regime, leptons of local energy $E_e$ up-scatter soft photons to typical energy: \begin{equation} \label{eqn:Egamma} E_{\gamma} = \frac{4 \epsilon_{cmb} E_e^2}{3 m_e^2 c^4} \approx 3.23 (1+z) \left( \frac{E_e}{1\rm{TeV}} \right)^{2} \quad \rm{GeV}. \end{equation} Between two Compton scatterings, the leptons travel a Compton mean free path $\lambda_{ic} = 1/(n_{cmb} \sigma_{T}) \approx 1.19 $ kpc (corresponding to a scattering time of $t_{ic}=\lambda_{ic}/c = 3870$ yr), where $\sigma_T$ is the Thomson cross section. On average, they lose energy at the rate: \begin{equation} \frac{dE_e}{dt} = \frac{4}{3} c \sigma_T \rho_{cmb} \left( \frac{E_e}{m_e c^2} \right)^2.
\label{eqn:dE_dt} \end{equation} After a flight of length $x$, the lepton energy is $E_e(x) = E_e^0 / (1+x/D_{ic}^0)$ where \begin{equation} \label{eqn:Dic} D_{ic}^0 = \frac{3 m_e^2 c^4}{4 \sigma_T \rho_{cmb} E_e^0} \approx 367~ (1+z)^{-4} \left( \frac{E_e^0}{1\rm{TeV}} \right)^{-1} \quad \rm{kpc}, \end{equation} is the initial Compton cooling distance, and $E_e^0$ is the lepton energy at the production site. \subsection{Observables} \label{sec:observables} Based on these simple properties, some analytical estimates for typical cascade observables can be derived. Here we make the following standard assumptions: \begin{itemize} \item The source of primary gamma-rays is isotropic and mono-energetic with an energy $E_{\gamma}^0$. \item The primary gamma-rays all annihilate at exactly the same distance $\lambda_{\gamma\gamma}$ as given in Fig.~\ref{fig:lambda_gg}. \item The leptons are produced by photon-photon annihilation in the direction of the parent gamma-ray photon and have exactly half of its energy. \item Compton interactions occur in the Thomson regime and the leptons travel exactly over one mean free path $\lambda_{ic}$ between two Compton scatterings. \item The leptons are deflected by a uniform magnetic field perpendicular to the motion. \item Magnetic deflections occur locally on scales much smaller than the photon annihilation length and the source distance. \item The scattered photons all get exactly the average energy given by equation \ref{eqn:Egamma} and are emitted in the propagation direction of the scattering lepton. \item The absorption of these scattered photons (hereafter referred to as first-generation photons) and the associated pair production is neglected. The contribution of higher-generation particles is not considered. \item Cosmological effects are neglected. \end{itemize} With the above assumptions we can derive the following cascade geometrical and distribution properties: \subsubsection*{Geometry:} Geometrical effects (halo effects and time delay) are due to the lepton deflection in the extragalactic magnetic field. Two cases can be studied: \begin{itemize} \item If the coherence length is large ($\lambda_B \gg D_{ic}^0$) the magnetic field can be considered as uniform and the lepton deflection after a travel distance $x$ is: $\delta = \int_0^x ds/R_L(s)$. This means that a lepton of initial energy $E_e^{0} = E_\gamma^0/2$ having cooled down to energy $E_e$ has been deflected from its original trajectory by an angle: \begin{equation} \delta = \frac{D_{ic}^0}{2 R_L^0} \left[ \left(\frac{E_e^0}{E_e}\right)^2-1 \right], \label{eqn:delta_ic0} \end{equation} where $R_L^0$ (Eq. \ref{eqn:RL}) is the initial Larmor radius of the lepton. As soon as the lepton has lost a significant fraction of its energy, the previous equation reduces to: \begin{equation} \delta \approx \frac{D_{ic}}{2 R_L} = \frac{e_cB \lambda_{ic}}{2E_{\gamma}}, \label{eqn:delta_ic} \end{equation} where $D_{ic}$ and $R_L$ are no longer the initial values but are now evaluated locally at energy $E_e\ll E_e^0$, corresponding to photons scattered to energy $E_\gamma$ (Eq. \ref{eqn:Egamma}). \item If the coherence length is short ($ \lambda_B \ll D_{ic}^0$), the leptons travel across many zones of a highly turbulent field. We assume that the field is composed of many cells of size $\lambda_{B}$, with uniform field and random directions. In each cell, the leptons are deflected by an angle $\sim \lambda_{B}/R_{L}$ in a random direction.
Considering this process as a random walk leads to: \begin{align} \delta & = \frac{\sqrt{D_{ic}^0 \lambda_{B}}}{2 R_L^0} \left[ \left(\frac{E_e^{0}}{E_e}\right)^2-1 \right] \nonumber \\ & \approx \frac{\sqrt{D_{ic} \lambda_{B}}}{2 R_L}, \label{eqn:delta_ic2} \end{align} where the last approximation is obtained similarly to Eq. \ref{eqn:delta_ic}. \end{itemize} In both cases, the magnetic deflection is a function of the lepton energy $\delta(E_e)$ and of the secondary gamma-ray energy $\delta(E_\gamma)$. In the following, we will concentrate on the large-coherence case ($\lambda_B \gg D_{ic}^0$). However, similar constraints are easily obtained for short coherence lengths. The geometrical properties of the cascade (extension, time delay) can be derived from the magnetic deflection angle \cite[e.g.][]{neronov_method_2007,dermer_time_2011}. They are illustrated in Fig.~\ref{fig:triangle}. \begin{figure} \centering \includegraphics[width=\columnwidth]{triangle} \caption{Geometry of the one-generation model. } \label{fig:triangle} \end{figure} In the one-generation approximation, halo photons observed with a finite angle $\theta$ were emitted out of the line of sight and then deflected back to the observer. Assuming the lepton deflection occurs on very short distances compared to the photon absorption length, the detection angle and time delay are: \begin{align} \theta &= \arcsin \left( \frac{\lambda_{\gamma \gamma}}{D_s} \sin\delta \right) \approx \frac{\lambda_{\gamma \gamma}}{D_s} \delta, \label{eqn:theta} \\ c \Delta t &= \lambda_{\gamma\gamma} (1- \cos \delta) - D_s (1- \cos \theta) \approx \lambda_{\gamma\gamma} \frac{\delta^2}{2}, \label{eqn:time_delay} \end{align} where $D_s$ is the distance to the source, $\lambda_{\gamma\gamma}$ is the annihilation distance of the primary photons, and where the last approximations were obtained in the small-angle approximation ($\lambda_{\gamma\gamma}\ll D_s, \delta \ll 1$). These relations are illustrated in Fig.~\ref{fig:simple_case-observables} in the case of large coherence length. As high-energy leptons travel and get deflected, the detection angle and time delay decrease as energy increases. In this regime, the small angle approximation is well satisfied, and for large coherence length ($\lambda_B \gg D_{ic}^0$) the latter equations combined with Eqs. \ref{eqn:Egamma} and \ref{eqn:delta_ic} yield respectively: \begin{align} \theta & \approx 0.79^\circ \left( \frac{\tau_{\gamma\gamma}}{397.4} \right)^{-1} \left(\frac{B}{10^{-14}\,\textrm{G}}\right) \left(\frac{E_\gamma}{1\,\textrm{GeV}}\right)^{-1} \label{eqn:theta_approx}, \\ \Delta t & \approx 65 \left(\frac{\lambda_{\gamma\gamma}}{1.32 \rm Mpc}\right) \left(\frac{B}{10^{-17}\textrm{G}}\right)^2 \left(\frac{E_\gamma}{1\,\textrm{GeV}}\right)^{-2} \,\textrm{yrs} \label{eqn:Dt_approx}, \end{align} where $\tau_{\gamma\gamma} =D_s/ \lambda_{\gamma\gamma}$ is the annihilation optical depth. All values are calculated for a source at $z=0.13$ emitting primary photons at $E_\gamma^0=100$ TeV. These equations show the complementarity in the search for pair halos and pair echoes. Pair halos can only be observed if they are larger than the instrument point spread function (PSF). For a typical PSF of $0.1^\circ$, the above values correspond to magnetic fields larger than $10^{-14}$ G. Hence, large magnetic field strengths can be constrained through detection of pair halos. In this case, time delays are as long as $10^8$ yrs and echoes cannot be observed.
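These complementary regimes are straightforward to explore numerically. The short sketch below (Python; a minimal evaluation of Eqs.~\ref{eqn:theta_approx} and \ref{eqn:Dt_approx}, with $B$ in Gauss and $E_\gamma$ in GeV; the function names are ours) reproduces the numbers quoted in this discussion:

\begin{verbatim}
# Illustrative evaluation of the two scalings above, for the fiducial
# source (z = 0.13, primary photons at 100 TeV); B in Gauss, E in GeV.
def halo_angle_deg(B, E_GeV, tau_gg=397.4):
    return 0.79 * (397.4 / tau_gg) * (B / 1e-14) / E_GeV

def echo_delay_yr(B, E_GeV, lambda_gg_Mpc=1.32):
    return 65.0 * (lambda_gg_Mpc / 1.32) * (B / 1e-17)**2 / E_GeV**2

for B in (1e-14, 1e-17):
    print(B, halo_angle_deg(B, 1.0), echo_delay_yr(B, 1.0))
# B = 1e-14 G: theta ~ 0.79 deg (resolvable halo), dt ~ 6.5e7 yr
# B = 1e-17 G: theta ~ 7.9e-4 deg (unresolved),    dt ~ 65 yr
\end{verbatim}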
On the other hand, still with the values above, observable echoes shorter than $5$ yr require magnetic fields lower than $10^{-18}$-$10^{-17}$ G. Hence, low magnetic field strengths can be constrained through detection of pair echoes. In that case, pair halos are typically smaller than $ 0.0001^\circ$ and cannot be resolved. As the lepton energy decreases, both the detection angle and the time delay increase until the maximal deflection $\delta = \pi/2$ is reached, for which $\theta_{\rm max} \approx 1/\tau_{\gamma\gamma} = 5.7^\circ (\tau_{\gamma\gamma}/10)^{-1}$. This corresponds to the maximal halo size. At a lower lepton energy, the Larmor radius becomes smaller than the magnetic coherence length. The leptons are trapped by the magnetic field and cannot travel farther. This leads to the formation of a cloud of $e^{+}-e^{-}$ pairs around the source. The size of the observed halo then corresponds to the physical extension of the pair cloud, i.e. $\lambda_{\gamma\gamma}$. \subsubsection*{Distributions:} In the following, unless otherwise specified, all distributions are normalised to a single emitted primary photon. The cascade spectrum produced by a single high-energy photon is computed as $dN/dE_\gamma = 2 (dN/dt)_{ic} / (dE_\gamma/dE_e) / (dE_e/dt)$, where the factor 2 accounts for the two leptons produced by a single primary photon. Noting that the number of photons up-scattered by one lepton per unit time is $(dN/dt)_{ic} = c/\lambda_{ic}$, and using Eqs. \ref{eqn:Egamma} and \ref{eqn:dE_dt} to derive the last terms leads to: \begin{equation} \label{eqn:spectrum} E_{\gamma}^2 \frac{dN}{dE_{\gamma}} = \frac{m_e c^2}{2} \sqrt{\frac{3 E_{\gamma}}{\epsilon_{cmb}}} \approx 556 \left( \frac{E_{\gamma}}{1\,{\rm GeV}}\right)^{1/2} (1+z)^{-1} \textrm{GeV}. \end{equation} It is a simple power-law $dN/dE_\gamma \propto E_{\gamma}^{-\Gamma}$ with index $\Gamma =3/2$. If unabsorbed, this spectrum extends up to the energy of photons scattered by the highest-energy leptons, i.e. $E_{\gamma,\rm max} = 0.8 \left(E_{\gamma,0}/1~{\rm TeV}\right)^2$~GeV. However, for distant sources, such a power-law spectrum is typically cut at lower energies by photon absorption (Fig.~\ref{fig:cutoff_energy}). In principle, a new generation of particles is then produced. Higher photon generations have often been neglected although they may contribute significantly to the overall spectrum (see section \ref{sec:simple_case}). The angular distribution can be computed as $dN/d\theta = (dN/dE_\gamma) (dE_\gamma/d\delta)(d\delta/d\theta)$. Using Eqs. \ref{eqn:spectrum}, \ref{eqn:delta_ic}, and \ref{eqn:theta} to estimate the three factors respectively allows one to derive the angular distribution of first-generation photons produced by a single primary photon, for large coherence length and small angle: \begin{align} \theta \frac{dN}{d\theta} &= m_e c^2 \left( \frac{3}{2 \lambda_{ic} \epsilon_{\rm cmb}} \frac{\tau_{\gamma\gamma} \theta}{ e_cB} \right)^{1/2} \nonumber \\ &= 1972.3 \left( \frac{\tau_{\gamma\gamma}}{397.4} \right)^{1/2} \left( \frac{B}{10^{-15} \rm G} \right)^{-1/2} \left( \frac{\theta}{1^\circ} \right)^{1/2}. \label{eq:theta_distrib} \end{align} Similarly, writing $dN/dt = (dN/dE_\gamma)(dE_\gamma/d\delta)(d\delta/dt)$ and using Eq.
\ref{eqn:time_delay} gives the following distribution of time delays: \begin{align} \Delta t \frac{dN}{dt} &= m_e c^2 \left( \frac{3 }{ 4\lambda_{ic} \epsilon_{cmb} e_c B} \right)^{1/2} \left(\frac{c\Delta t}{2\lambda_{\gamma\gamma}} \right)^{1/4} \nonumber \\ & = 437.2 \left(\frac{\lambda_{\gamma\gamma}}{1.32 \rm Mpc} \right)^{-1/4} \left(\frac{\Delta t}{10^6 \rm yr} \right)^{1/4} \left(\frac{B}{10^{-15} \rm G} \right)^{-1/2}. \label{eq:Dt_distrib} \end{align} These distributions are pure power-laws and do not show any specific angular (or time) scale because they are composed of the contributions of photons detected with all possible energies. However, as discussed below, real observations are obtained in a limited energy range, with a given aperture angle, and with a finite observation duration. This produces characteristic scales in energy, arrival angle and time delay, which appear as cuts or breaks in the corresponding distributions. \section{Code description and test cases} \label{sec:simple_case} The analytical approximations presented in the previous sections are useful to understand the physics of the cascades and to obtain order-of-magnitude estimates, but accurate calculations require numerical simulations. For this purpose, we have developed a new Monte Carlo code. In this section, we outline the main features of this code and compare its results to the analytical predictions presented in section \ref{sec:observables} as well as to other published results. \subsection{Code and simulation set-up}\label{sec:code} Our Monte Carlo code is designed to track the propagation of all particles and to reproduce the properties of the cascade as precisely as possible. A complete description of the code can be found in appendix \ref{appendix:code}. However, the main algorithm is summarised here (see also Fig.~\ref{simulation_process}): \begin{itemize} \item Primary particles (photons or leptons) are launched at a given redshift and with a given energy. Particle energy can also be sampled from a power-law distribution. In this paper we will focus only on cascades initiated by photons. \item Interaction distances are generated randomly according to the probability distribution given by the exact cross-sections ($e^{+}-e^{-}$ pair production for a photon, Klein-Nishina cross-section for a lepton) and taking into account the cosmological evolution of the target particles. \item The particles are propagated taking into account all cosmological effects (redshift, expansion). In particular, the transport of leptons in the EGMF is computed as described in appendix \ref{appendix:travel}. If the particles interact before reaching the Earth, the interaction outcome is computed. Energy and direction of the outgoing particles are generated according to the probability distributions given by the exact differential cross-sections. Particles with updated parameters and new particles are then stored for later treatment. The physical parameters of the particles reaching the Earth without any interaction are stored for post-processing. \item The extragalactic magnetic field is modelled by dividing the comoving space into a number of cells with size $\lambda_B$, defining regions of uniform magnetic field. For each cell a random magnetic field direction is computed and is kept for the entire simulation. The EGMF is set by its strength at redshift $z=0$. For very short coherence lengths, the lepton interaction distance ($\lambda_{IC}\approx 1$ kpc) can become larger than $\lambda_B$.
In such a case, the motion to the next interaction location is divided into shorter steps whose length is a fraction of $\lambda_B$, to ensure that the leptons are deflected by all cells. \end{itemize} The following subsections present the results of test simulations. For the sake of comparison, we first consider a simple canonical model consisting of a mono-energetic source (100 TeV) emitting isotropically at $z=0.13$ (around 557 Mpc). We use the EBL model of \cite{dominguez_extragalactic_2011}. With this set-up the characteristic annihilation distance of primary photons is $\lambda_{\gamma\gamma}=1.32$ Mpc. The EGMF is set to $B=3 \times 10^{-16}$ G with a coherence length of $\lambda_B=1$ Mpc. We focus on the three main observables characterising the photons detected on Earth: their energy, their arrival angle, and their time delay. We derive the 3D photon distribution in this parameter space. We also concentrate on the contribution of the different {\it generations} of particles, i.e. their rank in the cascade tree. In the following, the zeroth generation corresponds to the primary photons emitted at the source. These photons annihilate and produce pairs, which in turn produce the photons of the first generation. Again, these can produce pairs which up-scatter target photons producing the second photon generation, and so on. As we are only interested in the detection of photons, their energy will be written $E$, where the subscript $\gamma$ has been dropped for simplicity. \subsection{Correlations between observables} \label{sec:correlation} In the framework of the approximations used in the one-generation analytical model, the photon energy, detection angle and time delay are linked exactly through simple relations. When these approximations are relaxed, a significant scatter is expected. Fig. \ref{fig:simple_case-observables} shows the three possible correlations between the photon energy $E$, the detection angle $\theta$, and the time delay $\Delta t$. \begin{figure} \centering \includegraphics[width=\columnwidth]{simple_case-observables} \caption{{\bf Top panel:} Correlation between the time delay $\Delta t$ and energy $E$ of detected photons. The density map shows the number of photons per unit energy and time delay $(E \Delta t) d^2N/(dE d\Delta t)$ with a log color scale. The blue, green and black solid lines show the average time delay for first-generation photons only, second-generation photons, and all photons, respectively. The blue and green dashed lines show the analytical estimates for the first and second generations only (see sec. \ref{sec:observables}). With the same notations, the {\bf middle and bottom panels} show the energy-detection angle and detection angle-time delay correlations respectively. } \label{fig:simple_case-observables} \end{figure} In addition, the {\it average} behaviour is plotted as a solid line for each generation by binning the x-axis (with 4 bins per decade) and averaging the y-values. The analytical estimates of the one-generation model (from Eqs. \ref{eqn:theta} and \ref{eqn:time_delay}) are shown for comparison as blue, dashed lines. The expected trends are recovered: the smaller the energy, the larger the observation angle and the arrival time delay. As the leptons cool down, they produce more and more gamma-ray photons per unit time. As a result, the total cascade emission is dominated by a very large number of photons with low energy, large angle and long time delay, as can be seen in Fig.~\ref{fig:simple_case-observables}.
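The binned averages follow from a standard logarithmic-binning post-processing step. A minimal sketch (Python with numpy; the photon list here is mock data built with the $\Delta t \propto E^{-2}$ trend derived above and an arbitrary scatter, not actual simulation output) could read:

\begin{verbatim}
import numpy as np

def log_binned_mean(x, y, bins_per_decade=4):
    # Average y in logarithmic bins of x (4 bins per decade by default),
    # as done for the solid curves described above.
    edges = 10.0 ** np.arange(np.floor(np.log10(x.min())),
                              np.ceil(np.log10(x.max())) + 1e-9,
                              1.0 / bins_per_decade)
    idx = np.digitize(x, edges)
    centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
    means = np.array([y[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(1, len(edges))])
    return centers, means

# Mock photon list: energies (GeV) and delays (yr) with log-normal scatter.
rng = np.random.default_rng(0)
E = 10.0 ** rng.uniform(0.0, 4.0, 100000)
dt = 65.0 / E**2 * rng.lognormal(0.0, 1.0, E.size)
centers, mean_dt = log_binned_mean(E, dt)
\end{verbatim}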
However, several important points can be made. The averaged values for the first-generation photons (blue line) are consistent with the analytical results of sec. \ref{sec:observables}: both the saturation at low energy and the slopes in the small-angle regime are well recovered. The saturations observed in time correspond to the maximal delay (photons that are emitted away from, and scattered back to the observer) while the saturation observed in detection angle corresponds to photons deflected by an angle of $\delta=\pi/2$. The power-law regimes correspond to Eqs. \ref{eqn:theta_approx} and \ref{eqn:Dt_approx}. Most of the observed deviations at high energy (e.g. fluctuations and peaks) come from the limited statistics of the simulations: averaged values can be contaminated by a few photons with very large values (typically photons scattered by EBL targets instead of CMB photons). Physical processes that are not taken into account in the analytical estimates (Klein-Nishina regime, dispersion in the annihilation distance, non-uniform magnetic field, energy dispersion around the averaged value \dots) are also responsible for deviations from the approximations, but the effects are weaker. These results also show that the one-generation model underestimates the detection angle and the time delay by at least two orders of magnitude when the contribution of second-generation photons is significant. Indeed, the highest-energy, second-generation photons are almost all produced at the location where the parent leptons that emitted them were produced. As these photons have lower energy than the primary photons, their annihilation distance is larger: $\lambda_{\gamma\gamma}^{gen=1} \gg \lambda_{\gamma\gamma}^{gen=0}$. As a result, the highest-energy, second-generation photons are typically produced at a distance $\lambda_{\gamma\gamma}^{gen=1}$ from the source. The geometry remains however similar, so that an estimate for the second-generation observable quantities can be found by substituting $\lambda_{\gamma\gamma}$ by $\lambda_{\gamma\gamma}^{gen=1}$ in the results of sec. \ref{sec:observables}. For primary photons at 100 TeV, the highest-energy, first-generation photons ($8$ TeV) have mean free path $\lambda_{\gamma\gamma}^{gen=1}=117$ Mpc. This is shown in Fig.~\ref{fig:simple_case-observables} as green dashed lines. As can be seen, this new estimate agrees well with the average results of second-generation-dominated cascades. We emphasise that a single generation never dominates at all energies, angles and time delays. In our canonical simulation for instance, only the low-energy spectrum is dominated by second-generation photons while the highest energy photons are mostly first-generation photons (see Fig.~\ref{fig:simple_case-spectrum}). This is why the average time delay drops below the second-generation estimates at high energy. In principle, the ratio of first- to second-generation photons depends on the energy of primary photons and the source distance. If primary photons have energy smaller than or close to the absorption energy $E_{\rm abs}$, then the absorption is so weak that the production of second-generation photons is quenched, and the cascade is dominated by first-generation photons (this will be illustrated in Fig. \ref{fig:spectrum_vs_powerlaw} for instance). However, the results presented here remain general as soon as the energy of primary photons is significantly larger than the absorption energy \citep[see also][]{berezinsky_high-energy_2016}.
In any case, there is a large dispersion around the average quantities, showing that the latter might be of limited practical use. \subsection{Photon distributions} The energy, angle and time distributions are obtained by integrating over the other two parameters. In this section we focus on photons with energy above 1 GeV, which corresponds to the typical energy range of gamma-ray observatories. The spectrum of our fiducial model, integrated over the entire sky and over all possible arrival times, and normalised to one primary photon, is shown in Fig.~\ref{fig:simple_case-spectrum}. It is compared to Fig.~1 of \citet{taylor_extragalactic_2011}\footnote{The normalisations of the published results were chosen arbitrarily.} and to the results of the Elmag code \citep{kachelriess_elmag_2012} using the same set-up. The spectral shapes provided by the different codes are compatible with each other. \begin{figure} \centering \includegraphics[width=\columnwidth]{simple_case-spectrum} \caption{Full spectrum for an isotropic and mono-energetic source (100~TeV) at $z$=0.13 (black line). The blue, green and red lines show the contributions of generations 1, 2 and 3 respectively. The analytic expression (Eq.~\ref{eqn:spectrum}) is shown as a blue dashed line. The dotted-dashed and dotted lines show the results from Elmag \citep{kachelriess_elmag_2012} and \citet{taylor_extragalactic_2011} respectively. } \label{fig:simple_case-spectrum} \end{figure} All primary photons at 100 TeV are absorbed and produce the first-generation spectrum (blue line). Below $E_{\rm cut}$, the analytical estimate (Eq. \ref{eqn:spectrum}) is well reproduced by the first-generation population, in spite of the approximations made. Above 1 TeV, the photons of the first generation are absorbed (Fig.~\ref{fig:cutoff_energy}) and produce a second generation which dominates the spectrum below about 100 GeV (green line). The spectrum of the second generation is similar to the spectrum of the first generation ($dN/dE\propto E^{-3/2}$), except that it is softer in the energy range shown in this figure (it can be shown that $dN/dE \propto E^{-7/4}$). A few second-generation photons are also absorbed and produce a weak third-generation population (red line) which does not contribute to the total spectrum. The angle distribution, integrated over energies $E>1$ GeV and over all arrival times and normalised to one primary photon, is shown in Fig.~\ref{fig:simple_case-arrival_angle_distribution}. \begin{figure} \centering \includegraphics[width=\columnwidth]{simple_case-angle_distribution} \caption{Detection angle distribution for $E>1$ GeV photons. The colors are the same as in Fig. \ref{fig:simple_case-spectrum}. The blue dashed line shows the analytical estimate of Eq. \ref{eq:theta_distrib}. } \label{fig:simple_case-arrival_angle_distribution} \end{figure} The emission is peaked at the center and decreases as a power law with increasing angle. At small angles, the distribution of first-generation photons is well approximated by the analytical estimate given in Eq. \ref{eq:theta_distrib}: $dN/d\theta \propto \theta^{-1/2}$. However, only photons above 1 GeV are considered here, so that low-energy photons with large angles are not observed: compared to the analytical estimate, the angular distribution drops at a typical angle that depends on the minimal energy and on the magnetic field. As second-generation photons have a larger mean free path, they typically arrive with larger angles.
Interestingly, the distribution is dominated by second-generation photons at observable scales ($\theta>0.1^\circ$). This result remains general as long as the energy of the primary photons is significantly larger than the absorption energy $E_{\rm abs}$ (i.e. the photons have an absorption depth $\tau_{\gamma\gamma}(E^0)\gg 1$). Primary photons with energy comparable to the absorption energy ($\tau_{\gamma\gamma}(E^0) \approx 1$) are weakly absorbed and do not produce high-generation photons. For sources with an extended intrinsic spectrum, the contribution of first- and second-generation photons to the angular distribution is more complex (see sec. \ref{sec:source_spectrum}). The time delay distribution, integrated over energies $E>1$ GeV and all detection angles and normalised to one primary photon, is shown in Fig. \ref{fig:simple_case-time_delay_distribution}. \begin{figure} \centering \includegraphics[width=\columnwidth]{simple_case-time_delay_distribution} \caption{Distribution of time delays for $E>1$ GeV photons. The colors are the same as in Fig. \ref{fig:simple_case-spectrum}. The blue dashed line shows the analytical estimate of Eq. \ref{eq:Dt_distrib}. } \label{fig:simple_case-time_delay_distribution} \end{figure} As time evolves after a source flare, fewer and fewer photons are observed. The one-generation model (Eq. \ref{eq:Dt_distrib}) provides a good estimate of the first-generation distribution for time delays ranging from about 1 month to 100 years. Shorter time delays correspond to high-energy photons that are absorbed, producing a drop below the analytical estimate. Longer time delays correspond to low-energy photons below the selection criterion $E>1$ GeV, producing a cut-off of the distribution above 1000 years. Interestingly, accessible time delays ($\Delta t < 1$ yr) after a flaring event (such as a GRB) are short enough to be dominated by first-generation photons only, allowing for the use of simple formulae to derive constraints from the potential detection of pair echoes. The three distributions presented in Figs.~\ref{fig:simple_case-spectrum}, \ref{fig:simple_case-arrival_angle_distribution} and \ref{fig:simple_case-time_delay_distribution} ($dN/dE$, $dN/d\theta$, and $dN/dt$) are global distributions, in the sense that they are integrated over large ranges of the other two quantities. For instance, the spectrum shown in Fig.~\ref{fig:simple_case-spectrum} is integrated over all detection angles and all arrival times. However, in a more realistic situation, only limited ranges of these quantities are accessible. As low-energy photons arrive with large angles and long time delays, and are the products of primary photons emitted far away from the line of sight, any of the following effects will damp the spectrum at low energy, while cutting the large-angle and long-time-delay parts of the associated distributions (the first three effects are sketched in code after this list): \begin{enumerate} \item If the instrument is only sensitive above a given energy (see the discussion above). \item If the instrument aperture is limited. \item If the exposure time is finite after an impulsive flaring event. \item If the source emission is beamed within a limited opening angle. \end{enumerate} The effect of finite exposure time is illustrated in Fig.~\ref{fig:simple_case-spectrum_tobs} \citep[see also][]{ichiki_probing_2008}.
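As an illustration of points (i)--(iii), the sketch below applies such cuts to a list of detected photons. The record layout, the function name and the toy data are hypothetical placeholders, not the output of the code described in this paper; point (iv) is of a different nature, since it amounts to re-weighting the emission angles of the primaries (see sec. \ref{sec:source_profile}).
\begin{verbatim}
# Sketch: observational cuts on a detected-photon list (hypothetical layout).
import numpy as np

def apply_cuts(photons, e_min_GeV=1.0, aperture_deg=None, t_obs_yr=None):
    keep = photons["E"] >= e_min_GeV              # (i) energy threshold
    if aperture_deg is not None:
        keep &= photons["theta"] <= aperture_deg  # (ii) limited aperture
    if t_obs_yr is not None:
        keep &= photons["dt"] <= t_obs_yr         # (iii) finite exposure
    return photons[keep]

# toy, fabricated data for the sake of the example only
rng = np.random.default_rng(0)
ph = np.zeros(1000, dtype=[("E", "f8"), ("theta", "f8"), ("dt", "f8")])
ph["E"]     = 10 ** rng.uniform(-1, 5, 1000)      # GeV
ph["theta"] = 10 ** rng.uniform(-4, 1, 1000)      # deg
ph["dt"]    = 10 ** rng.uniform(-2, 4, 1000)      # yr
sel = apply_cuts(ph, 1.0, aperture_deg=2.5, t_obs_yr=1.0)
print(sel.size, "of", ph.size, "photons pass the cuts")
\end{verbatim}
Each cut preferentially removes the low-energy, large-angle, long-delay corner of the distributions, which is how the damping described above arises.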
\begin{figure} \centering \includegraphics[width=\columnwidth]{simple_case-spectrum_tobs} \caption{Total spectrum of a flaring event integrated over a finite exposure time ($t_{\rm obs}=\tau$), or equivalently: instantaneous spectrum of a source that has been active for a given time ($t_{\rm act}=\tau$) in the past.} \label{fig:simple_case-spectrum_tobs} \end{figure} This can be interpreted in two ways. If the source produces a strong, impulsive flaring event (such as a GRB for instance), this figure shows the integrated spectra as data are accumulated, from the detection of the unabsorbed primary photons up to time $t_{\rm obs}=\tau$. As time evolves, lower-energy photons are detected and the low-energy part of the spectrum builds up slowly. Alternatively, this figure also shows the instantaneous spectra observed at the present time if the source (such as an AGN) has been active for a time $t_{\rm act}=\tau$ in the past, with constant luminosity \citep{dermer_time_2011}. As the activity period increases, we are able to detect secondaries produced by primaries emitted earlier. As the corresponding leptons have had more time to cool down, these secondaries have lower energies. As a result, long-activity sources have spectra that extend to lower energies. \section{Source properties} \label{sec:source_profile} The simple case presented in section~\ref{sec:simple_case} allows us to understand the general behaviour of electromagnetic cascades. However, several effects must be included to produce realistic cascades. Gamma-ray sources (AGNs, GRBs) do not emit photons at a single energy or isotropically, but instead produce non-thermal, beamed radiation. Here we investigate the following intrinsic properties of the source: \begin{itemize} \item Redshift $z$. \item Intrinsic spectrum: here we consider power-law spectra of the form $dN/dE \propto E^{-\Gamma}$ for $100$ MeV $\le E \le E_{\rm max}$. \item Emission profile: here we assume a disk emission, i.e. an axisymmetric angular distribution $dN_e/d\Omega_e$ uniform up to a given half-opening angle $\theta_{\rm jet}$, observed at an angle $\theta_{\rm obs}$ away from its axis (see Fig. \ref{fig:triangle}). \end{itemize} In the following we use simulation parameters corresponding to the blazar 1ES0229+200 \citep{tavecchio_hard_2009,taylor_extragalactic_2011,vovk_fermi/lat_2012}, with a redshift $z=0.14$ corresponding to a distance of 599 Mpc, and a hard spectrum with $\Gamma=1.2$. The unconstrained maximal energy of the intrinsic spectrum is set to $E_{\rm max}=100$ TeV and the emission is assumed to be isotropic. The EGMF has an average strength of $B=10^{-15}$ G and a coherence length $\lambda_B=1$ Mpc. We use the EBL model of \cite{dominguez_extragalactic_2011}. In this section, the spectra are normalised to $L_0$, the intrinsic luminosity of the source as observed at $z=0$ (i.e. the intrinsic luminosity decreased by a factor $1+z$). \subsection{Source redshift} The spectral evolution with redshift is shown in Fig.~\ref{fig:spectrum_vs_redshift} for $z=0.04$ ($D_s\sim 175$~Mpc), $z=0.14$ ($D_s\sim 599$~Mpc), $z=1$ ($D_s\sim 3.4$~Gpc), and $z=2$ ($D_s\sim5.3$~Gpc).
\begin{figure} \centering \includegraphics[width=\columnwidth]{Spectrum_vs_redshift} \caption{Full spectrum of sources at different redshifts ($z=0.04$, $0.14$, $0.4$, $1$ and $2$), normalised to the intrinsic luminosity attenuated by a factor $1+z$ to account for the expansion of the universe.} \label{fig:spectrum_vs_redshift} \end{figure} As expected, the cut-off energy decreases with increasing distance, as the column density of target photons increases. At high redshift, we find that the absorption depth goes simply as $\tau_{\gamma\gamma}\propto E^2$, producing a super-exponential cutoff $\propto e^{-E^2/E_{\rm cut}^2}$. However, the spectra of nearby sources show a more complex and harder absorption cutoff. In our setup, the maximal energy of the primary photons (100 TeV) is large enough to generate an efficient cascade. As a result, almost all the spectra shown are dominated by second-generation photons. Only when the source is close enough ($z=0.04$) is the absorption weak enough to quench the production of second-generation photons. As discussed by \cite{berezinskii_cosmic_1975}, the spectrum softens as the generation order increases, which is consistent with the harder spectrum observed at $z=0.04$. The intrinsic spectrum used here is hard ($\Gamma=1.2$), so that most of the intrinsic luminosity is concentrated at the highest energies ($E_{0} \sim E_{\rm max}$). Most of the spectrum is then fully absorbed and redistributed as a cascade contribution. As a result, the amplitude of the observed spectra is almost insensitive to the absorption energy, i.e. also to the source redshift. The evolution of the angular distribution and time delays of photons with energy $E > 1$ GeV is illustrated by their average values in Fig.~\ref{fig:mean_Dt_theta_vs_redshift} for different EGMF strengths. \begin{figure} \includegraphics[width=\columnwidth]{mean_Dt_theta_vs_redshift} \caption{Average arrival angle (top) and time delay (bottom) of photons with energy $E > 1$ GeV, as a function of the source redshift, for different EGMF strengths ($B=10^{-13}, 10^{-15}$, and $10^{-17}$ G). } \label{fig:mean_Dt_theta_vs_redshift} \end{figure} Both the halo extension and the time delay increase with the magnetic field, as leptons of a given energy are more deflected by stronger fields. Their evolution with redshift is the result of several effects. The halo extension decreases with distance. To zeroth order, this is simply due to geometrical effects: the same annihilation distance to the source corresponds to a smaller angle as seen from a more distant observer (see for instance Eq. \ref{eqn:theta} for first-generation photons: $\theta\propto \lambda_{\gamma\gamma}/D_s$). In contrast, the time delay does not suffer from any geometrical dependence on distance (see for instance Eq. \ref{eqn:time_delay} for first-generation photons) and shows little evolution with redshift. To first order, however, the cosmological evolution of the universe also influences the angular size and time delay of secondary photons (namely through $\lambda_{\gamma\gamma}(z)$, $\lambda_{ic}(z)$, $B(z)$, and $E(z)$), explaining the remaining evolution. \subsection{Source spectrum} \label{sec:source_spectrum} Fig.~\ref{fig:spectrum_vs_powerlaw} shows the observed spectra when the source intrinsic spectrum is changed.
\begin{figure} \centering \includegraphics[width=\columnwidth]{Spectrum_vs_powerlaw} \\ \includegraphics[width=\columnwidth]{Spectrum_vs_Emax} \caption{{\bf Top panel:} Observed spectra for $E_{\rm max}=100$ TeV and different spectral indices $\Gamma$ (solid lines). The contributions of the primary source and the cascade are shown in dotted and dashed lines respectively. Spectra are normalised to $L_0$, the intrinsic luminosity attenuated by $1+z$. {\bf Bottom panel:} Spectra for $\Gamma=1.2$ and different maximal energies $E_{\rm max}$.} \label{fig:spectrum_vs_powerlaw} \end{figure} The top and bottom panels show the results for different spectral indices ($\Gamma$=1.2, 2 and 2.2, $E_{\rm max}=100$ TeV) and different maximal energies ($E_{\rm max}=10$, 50, 100 TeV and 1 PeV, $\Gamma=1.2$) respectively. At a source distance of $z=0.14$, photons with energy higher than a few TeV are absorbed and redistributed towards lower energies. For hard spectra ($\Gamma<2$), many primary photons are absorbed. This induces a strong cascade, which dominates the intrinsic source emission at all energies. The observed spectrum is then very similar to the mono-energetic model shown in Fig.~\ref{fig:simple_case-spectrum}. In contrast, when the intrinsic spectra are soft ($\Gamma>2$), only a few primary photons are absorbed. The cascade contribution is negligible and only the absorbed intrinsic spectrum is observed. As far as the cascade emission alone is concerned (dashed lines), the spectrum is only weakly dependent on the intrinsic hardness. This strong universality of the cascade emission is also illustrated in the bottom panel, where the shape of the observed spectrum is highly insensitive to the maximal energy. Only the case with $E_{\rm max}=10$ TeV shows a significant departure from the generic spectral shape. In that specific case, the intrinsic spectrum does not extend much beyond the absorption energy $E_{\rm abs}$ at that distance, and the spectrum is dominated by first-generation photons, which were produced within the average annihilation distance. \begin{figure} \centering \includegraphics[width=\columnwidth]{dNdtheta_vs_powerlaw} \caption{Angular distribution of $E>1$ GeV photons, for different intrinsic spectral indices $\Gamma$ and $E_{\rm max}=100$ TeV (solid lines). The dashed and dotted lines show the contributions of first-generation photons and second-generation photons respectively.} \label{fig:dNdtheta_vs_powerlaw} \end{figure} Fig.~\ref{fig:dNdtheta_vs_powerlaw} shows the angular distribution in the 1 GeV--1 TeV band, for different intrinsic spectral indices. As can be seen, the angular distribution is rather insensitive to the intrinsic spectrum. Although the angular distribution of the cascade induced by one single primary photon depends on the energy of this primary (it scales as $\lambda_{\gamma\gamma}(E_0)$, see Eq. \ref{eqn:theta_approx}), two effects contribute to keeping the final distribution induced by an extended intrinsic spectrum quite universal. First, most of the cascade is induced by primary photons just above the absorption threshold (see Fig. \ref{fig:cutoff_energy}), i.e. in the 1-10 TeV range, because they are more numerous. In this range the mean absorption length is rather insensitive to the primary energy (see Fig. \ref{fig:lambda_gg}). Second, when the intrinsic spectrum extends to higher energy and is hard enough ($\Gamma \lesssim 2$), part of the cascade is in principle also induced by these high-energy photons with shorter absorption lengths.
However, the first-generation photons produced by these high-energy primaries are also absorbed. They produce second-generation photons that eventually dominate the angular distribution and have larger absorption lengths (see sec. \ref{sec:correlation}). Here also, the second-generation photons contribute dominantly to the cascade when they have an energy just above the absorption energy, that is, when they have an absorption length comparable to that of low-energy primaries. This makes the angular distribution insensitive to the energy of the primary photons, that is, to the intrinsic spectrum. The time delay distribution is even less sensitive to the properties of the intrinsic spectrum, and is not shown here. These results show that the emission properties of cascades (spectrum, angular distribution and time distribution) do not depend on the properties of the intrinsic spectrum as long as the maximal energy of the source is large enough compared to the absorption energy. Although the cascade properties are highly dependent on several other parameters, such as the distance and the EGMF, this illustrates the universal properties of the cascade emission with respect to the source intrinsic spectrum \citep[e.g.][]{berezinsky_high-energy_2016}. \subsection{Source beaming} Anisotropic emission can be modelled at the post-processing stage if the source emission is axisymmetric (e.g. a beamed emission) and the intergalactic medium is isotropic (see appendix \ref{appendix:anisotropy}). If the source axis is not aligned with the line of sight, a jet structure is observed in the cascade emission \citep{neronov_degree-scale_2010, arlen_intergalactic_2014}. This is illustrated in Fig.~\ref{fig:image_tobs}. \begin{figure} \centering \includegraphics[width=\columnwidth]{image_tobs} \caption{Images of a misaligned source at $z = 0.14$, emitting a power-law spectrum with index $\Gamma =1.2$ in a uniform cone of half-opening angle $\theta_{\rm jet}=3^\circ$. Here $B=10^{-15}$ G, and $\lambda_B=1$ Mpc. Observations at three angles are shown: $\theta_{\rm obs}=0^\circ$, $5^\circ$, $10^\circ$ from top to bottom. } \label{fig:image_tobs} \end{figure} The general analysis of the misaligned case is beyond the scope of the current paper. In the following, we shall concentrate on the case of a beamed emission aligned with the line of sight. The effect of the jet half-opening angle $\theta_{\rm jet}$ on the observed spectrum is shown in Fig.~\ref{fig:spectrum_vs_tjet}. \begin{figure} \centering \includegraphics[width=\columnwidth]{Spectrum_vs_tjet} \caption{Energy spectra for different jet half-opening angles $\theta_{\rm jet}$. All spectra are normalised to the luminosity of an isotropic source of equal flux inside the emission cone: $L_{\rm iso} = 2L_0/(1-\cos\theta_{\rm jet})$. } \label{fig:spectrum_vs_tjet} \end{figure} Lower-energy photons typically originate from primary photons emitted farther away from the line of sight. Therefore, in the case of a beamed source with no photons emitted at large angles, the cascade emission is suppressed below some critical energy \citep[see for instance][]{tavecchio_intergalactic_2010}. The more collimated the jet, the larger this critical energy. The precise transition energy depends on the magnetic field strength and coherence length. Although its value can be derived in the framework of a one-generation model, such results do not apply to sources emitting at high energy and producing high-generation-dominated cascades.
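The following sketch illustrates the post-processing idea for the aligned, uniform-cone (`disk') profile used here: photons from an isotropic run are re-weighted by the emission profile evaluated at the emission angle $\theta_e$ of their primary. It assumes that this angle is stored for each detected photon; the names are illustrative, and the normalisation follows the $L_{\rm iso} = 2L_0/(1-\cos\theta_{\rm jet})$ convention only up to the overall flux normalisation.
\begin{verbatim}
# Sketch: re-weighting an isotropic run for an aligned top-hat jet profile.
import numpy as np

def jet_weight(theta_e_rad, theta_jet_rad):
    """Weight of a photon whose primary was emitted at angle theta_e
    from the axis, for a cone of half-opening angle theta_jet."""
    frac = (1.0 - np.cos(theta_jet_rad)) / 2.0  # solid-angle fraction of cone
    return np.where(theta_e_rad <= theta_jet_rad, 1.0 / frac, 0.0)

theta_e = np.deg2rad(np.array([0.5, 2.0, 8.0]))  # placeholder emission angles
print(jet_weight(theta_e, np.deg2rad(3.0)))      # zero outside the 3 deg cone
\end{verbatim}
Histogramming energies, angles and delays with these weights then yields the beamed-source distributions without re-running the Monte Carlo.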
The effect of the jet opening angle on the observed angular distribution and time delays is illustrated in Fig.~\ref{fig:meantheta_vs_jet_opening_angle} for different source distances. \begin{figure} \centering \includegraphics[width=\columnwidth]{mean_Dt_theta_vs_jet_opening_angle} \caption{Average detection angle (top) and time delay (bottom) of photons with energy $E > 1$ GeV as a function of the jet opening angle for sources at different redshifts. } \label{fig:meantheta_vs_jet_opening_angle} \end{figure} Photons emitted with a large angle $\theta_e$ with respect to the line of sight are typically observed with a large angle $\theta \approx \theta_e / \tau_{\gamma\gamma}$. As a result, the average detection angle in the 1 GeV - 100 TeV band is expected to scale as $\langle \theta \rangle\ \propto\ \theta_{\rm jet}$. This is clearly observed for distant sources with $z>0.4$ at small angles. The increase of the average angle with the jet opening angle is less pronounced for nearby sources. This results from a combination of the source intrinsic spectrum (different primary energies correspond to different annihilation distances) and the complex absorption feature of nearby sources (see Fig.~\ref{fig:spectrum_vs_redshift} and the associated discussion). At larger jet opening angles, the average detection angle is limited by the physical extension of the halo around the sources. For all opening angles, the average angle decreases with the source redshift, mostly because of geometrical effects (halos look smaller when observed from larger distances). The average time delay increases with the jet opening angle. Indeed, a larger jet opening produces a more extended halo: photons are more deflected and arrive at later times. The redshift dependence is less pronounced than that of the average angle, as already mentioned. These results show that the effect of the source beaming on the cascade properties cannot be modelled by simple one-generation models, and that it must be investigated numerically. \section{Properties of the intergalactic medium} \label{sec:intergalactic_medium} In section \ref{sec:source_profile} we explored the effect of the source parameters on the development and observability of a cosmological cascade. We now illustrate the impact of the intergalactic medium, in particular the effects of: \begin{itemize} \item the extragalactic background light model, \item the extragalactic magnetic field (amplitude $B$ and coherence length $\lambda_B$). \end{itemize} The same fiducial simulation as in section \ref{sec:source_profile} is used. \subsection{Extragalactic background light} \label{sec:EBL} The extragalactic background light (EBL) affects the absorption of gamma-rays. Fig.~\ref{fig:spectrum_vs_EBL} shows the spectrum computed for an isotropic source with a spectral index of $\Gamma=1.2$ at a redshift $z=2$ using 6 different EBL models \citep{franceschini_extragalactic_2008, dominguez_extragalactic_2011, finke_modeling_2010, kneiske_lower-limit_2010, gilmore_semi-analytic_2012}. The different EBL models predict different cut-off energies. This dependence on the EBL models is similar to the analytical expectation from Fig. \ref{fig:cutoff_energy}. Below the cutoff energy, however, the cascade spectrum is quite universal (in shape and intensity). Indeed, the intrinsic spectrum used here is hard, so that most of the absorbed energy corresponds to photons with energies close to the maximal energy $E_{\rm max}$ of the intrinsic spectrum, independently of the absorption energy $E_{\rm abs}$.
\begin{figure} \centering \includegraphics[width=\columnwidth]{Spectrum_vs_EBL} \caption{Full-sky spectrum for different EBL models. The source is located at redshift $z=2$.} \label{fig:spectrum_vs_EBL} \end{figure} Fig.~\ref{fig:dNdtheta_vs_EBL} shows the distribution of detection angles for photons with energies $E>1$ GeV and for the 6 different EBL models. \begin{figure} \centering \includegraphics[width=\columnwidth]{dNdtheta_vs_EBL} \caption{Detection angle distribution of $E>1$ GeV photons emitted at $z=2$ and for different EBL models.} \label{fig:dNdtheta_vs_EBL} \end{figure} Although the general shape of the distribution is similar from one model to the other, the typical angular scales involved can vary by up to a factor of 6. \subsection{Extragalactic magnetic field} The amplitude and coherence length of the EGMF have no effect on the full spectrum integrated over an infinite time. They can, however, have an effect if the spectrum is integrated only over a finite observation time or if a limited aperture is used. These questions have already been studied extensively in the literature \citep[e.g.][]{taylor_extragalactic_2011,vovk_fermi/lat_2012, arlen_intergalactic_2014}. We do not reproduce these studies here. Fig.~\ref{fig:mean_Dt_theta_vs_EGMF} shows the average time delay and arrival angle of photons with energy $E > 1$ GeV versus the amplitude of the EGMF, for a coherence length $\lambda_B= 1$ Mpc. \begin{figure} \includegraphics[width=\columnwidth]{mean_Dt_theta_vs_EGMF} \caption{Average detection angle (top) and average time delay (bottom) of photons with energy $E > 1$ GeV as a function of the EGMF strength for different source redshifts. The coherence length is $\lambda_B=1$ Mpc.} \label{fig:mean_Dt_theta_vs_EGMF} \end{figure} For strong magnetic fields ($B>10^{-14}$ G), the leptons are trapped near their production site by the magnetic field and produce an isotropic source of typical size $\lambda_{\gamma\gamma}$. This translates into a typical average angle (and time delay) that is independent of the field intensity \citep{aharonian_very_1994}. For weak magnetic fields ($B <10^{-14}$ G), there is no isotropisation of the emission and the average angle increases with the magnetic field intensity: the stronger the field, the larger the deflection and the larger the detection angle \citep{elyiv_gamma-ray_2009}. For very weak magnetic fields ($B <10^{-21}$ G), the arrival angle and time delay saturate at minimal values corresponding to the intrinsic extension of the cascade, resulting from the small misalignment of the product particles with respect to their parent particles during pair production and Compton interactions. This intrinsic extension is independent of the magnetic field strength \citep[Eq. 41]{neronov_sensitivity_2009}. Also, nearby sources naturally appear more extended than distant sources. Nonetheless, as for the effect of the jet opening angle, a combination of the intrinsic spectrum of the source and the complex absorption profile at low redshift produces an increase of the average angle that can be slower than the linear expectation of Eq. \ref{eqn:theta} and that depends on the source redshift. The properties of the cascade emission also depend on the coherence length. The general shape of the angular and time delay distributions (not shown here) is quite insensitive to that length scale, but the typical angular (and time) scales depend on it.
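The two regimes discussed below can be summarised by a simple estimate of the lepton deflection accumulated over its cooling distance $D_{ic}$: within a single cell, $\delta\approx D_{ic}/R_L$, while across many cells the deflection builds up as a random walk, $\delta\approx\sqrt{D_{ic}\lambda_B}/R_L$, with $R_L$ the Larmor radius. The sketch below (illustrative names, order-of-magnitude input values only) reproduces the $\lambda_B^{1/2}$ scaling at small coherence lengths and the saturation at large ones.
\begin{verbatim}
# Sketch: lepton deflection vs. coherence length (all lengths in Mpc).
import numpy as np

def deflection(D_ic, lam_B, R_L):
    if lam_B >= D_ic:                        # single-cell regime
        return D_ic / R_L
    n_cells = D_ic / lam_B                   # random-walk regime:
    return (lam_B / R_L) * np.sqrt(n_cells)  # = sqrt(D_ic * lam_B) / R_L

for lam in (1e-3, 1e-2, 1e-1, 1.0, 10.0):
    print(lam, deflection(0.1, lam, 30.0))   # D_ic = 0.1 Mpc, R_L = 30 Mpc
\end{verbatim}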
The average angle and time delays of photons with energy $E > 1$ GeV are shown as a function of the coherence length in Fig.~\ref{fig:mean_Dt_theta_vs_LB}. \begin{figure} \includegraphics[width=\columnwidth]{mean_Dt_theta_vs_LB} \caption{Average detection angle (top) and average time delay (bottom) of photons with energy $E > 1$ GeV as a function of the EGMF coherence length $\lambda_B$ for different source redshifts. The field strength is $B=10^{-15}$ G.} \label{fig:mean_Dt_theta_vs_LB} \end{figure} At large coherence lengths ($\lambda_B > 1$ Mpc), the leptons visit only a single uniform magnetic cell and the deflection is governed by the orientation of the field in that cell. This produces a halo size that is quite independent of the coherence length. At small coherence lengths ($\lambda_B < 10$ kpc), the leptons travel through many magnetic cells and their deflection builds up as a random walk. The deflection angle then increases as $\lambda_B^{1/2}$, and so does the detection angle. Although the angular scales cannot be derived from the simple one-generation model, the expected behaviour $\theta_{\rm avg}\propto\lambda_B^{1/2}$ is well recovered by the full simulations. In the simple model presented in section \ref{sec:observables}, the transition between the two regimes occurs when the lepton cooling distance associated with the observation energy $E$ equals the magnetic coherence length. We consider only photons above 1 GeV; at $z=0.14$, the absorption energy is about 1 TeV. The corresponding cooling distance of the parent leptons is then in the range $10<D_{ic}<500$ kpc (according to Eqs. \ref{eqn:Egamma} and \ref{eqn:Dic}). The transition is then expected to occur in the same range, $10<\lambda_B<500$ kpc, as observed in Fig. \ref{fig:mean_Dt_theta_vs_LB}. Whereas the absorption energy depends only moderately on the source distance, the cooling distance scales as $D_{ic} \propto (1+z)^{-4}$, so that the transition occurs at much smaller coherence lengths for high-$z$ sources, as observed in Fig. \ref{fig:mean_Dt_theta_vs_LB}. \section{Conclusion} \label{sec:conclusion} In this article, we have presented a new, publicly available Monte Carlo code to model the emission of electromagnetic cascades. This code makes very few approximations: it uses the exact interaction cross sections; both the EBL and CMB photons are targets for the lepton and photon interactions; it computes the exact 3D trajectories of leptons in cubic cells immersed in a uniform magnetic field; and it takes into account the cosmological expansion in the evolution of the target properties, the particle energies and the particle trajectories. It can model the emission of cascades initiated by sources at any distance (as long as EBL models are available), with any intrinsic spectrum and any axisymmetric emission. The code was validated by comparison to published results and analytical estimates. With this code, we have studied the role of the different physical parameters (source spectrum, redshift, anisotropic emission, EBL spectrum and EGMF) involved in the cascade properties. This study also emphasises the limitations of the analytical estimates often used in the interpretation of high-energy observations. In particular, high-generation photons quickly dominate the cascade properties as soon as the intrinsic spectrum extends to energies significantly larger than the source absorption energy (typically a few TeV). Most studies use one-generation models to interpret the results of searches for pair halos.
We have shown that the angular distribution at potentially observable scales ($\theta>0.1^\circ$) can be fully dominated by second-generation photons if the intrinsic spectrum of the source is hard and extends to high energy. In that case, analytic expressions and their interpretations can be misleading. The dependence of cosmological cascades on the characteristics of the sources (variability, spectrum, anisotropy) as well as on those of the intergalactic medium (EBL, EGMF) is subtle and complex. Moreover, the observable properties of cascades also depend crucially on the way observations are conducted (aperture angle of the instrument, energy bands, time of the observation and time of exposure\dots). A detailed numerical modelling of the cascades is a prerequisite to disentangling all these effects. The code can be used to interpret data, to design observational strategies for present and future high-energy telescopes, and to tighten the constraints on the EGMF. \section*{Acknowledgments} This work has been carried out thanks to the support of the OCEVU Labex (ANR-11-LABX-0060) and the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French government program managed by the ANR. The authors are indebted to Guillaume Dubus for many useful comments and suggestions. \bibliographystyle{mnras}
\section*{Acknowledgements} We would like to thank Martin Bordemann, Kurusch Ebrahimi-Fard, Dominique Manchon and Ander Murua for many valuable remarks during discussions on the topics of this paper. Thanks also to H\aa{}kon Marthinsen and to the anonymous referees for their helpful comments. \section{Introduction} We present some aspects of the field of \emph{Geometric Numerical Integration} (GNI), whose aim is to construct and study discrete numerical approximation methods for continuous dynamical systems, exactly preserving some underlying geometric properties of the system. Many systems have conserved quantities or invariant structures, e.g. the energy in a conservative mechanical system or the symplectic structure of Hamiltonian systems, and numerical methods that take this into account are often superior to those constructed with the more classical goal of achieving high order. An important tool in the study of numerical methods is \emph{Butcher series} (B-series), invented by John Butcher in the 1960s. These are formal series expansions indexed by rooted trees, and have been used extensively for order theory and the study of structure preservation. We will put particular emphasis on B-series and their generalization to methods for dynamical systems evolving on manifolds, called \emph{Lie--Butcher series} (LB-series). It has become apparent that algebra and combinatorics can bring a lot of insight into this study. Many of the methods and concepts are inherently algebraic or combinatorial, and the tools developed in these fields can often be used to great effect. Several examples of this will be discussed throughout. \section{Numerical integration on vector spaces}\label{sect:classgeomint} In numerical analysis one of the main objects of study is the flow of a vector field, given by an initial value problem of the type\footnote{Non-autonomous differential equations can also be written in this form by adding a component to the $y$ vector.}: \begin{equation}\label{ivp} y'(t) = F(y(t)), \quad y(0)=y_0. \end{equation} The function $y$ can be real-valued or vector-valued (giving rise to a system of coupled differential equations). The flow of the differential equation is the map $\Psi_{t,F}: \mathbb{R}^n \rightarrow \mathbb{R}^n$ defined by $y(t) = \Psi_{t,F}(y_0).$\footnote{Here we assume Lipschitz continuity of $F$ for the flow to exist and be unique.} Note that $F(y(t)) = \frac{d}{dt} \Psi_{t,F}(y_0).$ In many practical settings the vector field $F$ is Hamiltonian, and its flow has several interesting geometric properties. We seek to construct good approximations to flows, where `good' can mean several different things, depending on the context. Sometimes what we want are approximations of high order, other times we need them to preserve some qualitative or geometric structure of the underlying dynamical system. Preserving geometric structure is particularly important when studying systems over long time intervals. An early illustration of this fact was given by Wisdom and Holman in \cite{wisdom1991smf}, where they computed the solar system's evolution over a billion-year time period using a symplectic method, making an energy error of only $2 \times 10^{-11}$. As there are several excellent introductions to geometric numerical integration on vector spaces we will not go into a detailed study here, but merely describe some of the main ideas.
The book \cite{hairer2006gni} is the standard reference; other introductions can be found in \cite{ mclachlan2006gif, leimkuhler2004shd, budd2003gia, mclachlan2001slo, sanz-serna1994nhp, tsethesis, vilmart08edi}. The focus of this paper will be on some of the algebraic and combinatorial tools of geometric numerical integration, with particular emphasis on the tools we will use when studying flows on more general manifolds in Section \ref{sect:gengeomint}. Lately, there has been quite a lot of interest in these algebraic aspects of geometric integration, and this has resulted both in an increased understanding of the field and in a clearer view of its relations to other areas of mathematics. \subsection{Numerical methods and structure-preservation}\label{structurepreservation} Consider an initial value problem of the form (\ref{ivp}): \[y'(t) = F(y(t)), \quad y(0)=y_0 \tag{1}\] representing the flow of the (sufficiently smooth) vector field $F$. A numerical method for (\ref{ivp}) generates approximations $y_1, y_2, y_3, \dots$ to the solution $y(t)$ at various values of $t$. One of the simplest methods is the (explicit) \emph{Euler method}. It computes approximations $y_n$ to the values $y(nh)$, where $n \in \mathbb{N}$ and $h$ is the step size, using the rule: \begin{equation}\label{classeuler} y_{n+1} = y_n + hF(y_n). \end{equation} This generates a numerical flow $\Phi_h$ approximating the exact flow $\Psi$ of $F$. The accuracy of the method can be measured by its \emph{order}: we say that a one-step method $y_{n+1} = \Phi_h(y_n)$ has order $n$ if $|\Phi_h(y) - \Psi_h(y)| = O(h^{n+1})$ as $h \rightarrow 0$. Another way to put this is in terms of the curve traced out by the numerical flow: by comparing its Taylor series to the Taylor series for the curve of the exact flow term by term, we can read off the order of the method. The Taylor series for $y$ has the form \begin{equation} y(h) = y_0 + hF(y_0) + \frac 12 h^2 F'(y_0)F(y_0) + O(h^3), \end{equation} and we note that the Euler method is of order 1. \paragraph{Runge--Kutta methods.} The Euler method is an example of a \emph{Runge--Kutta method}, a class of methods that are extremely prevalent in applications \cite{hairer1993sod, butcher2008nmf}. A Runge--Kutta method is a one-step method computing an approximation $y_1$ to $y(h)$ with $y_0$ as input, as follows: \begin{definition}\label{def:RK} An $s$-stage Runge--Kutta method for solving the initial value problem (\ref{ivp}) is a one-step method given by \begin{align} \begin{split} Y_i =& y_0 + h\sum_{j=1}^s a_{ij} F(Y_j), \,\,\, i = 1,\dots s \\ y_1 =& y_0 + h\sum_{i=1}^s b_i F(Y_i), \end{split} \end{align} where $b_i, a_{ij} \in \mathbb{R}$, $h$ is the step size and $s \in \mathbb{N}$ denotes the number of \emph{stages}, i.e.\ the number of evaluations of $F$. \end{definition} A Runge--Kutta method can be presented as a \emph{Butcher tableau}, which characterizes the method completely: \begin{center} \begin{tabular}{c|ccc} $c_1$ & $a_{11}$ &$\dots$ & $a_{1s}$ \\ $\vdots$ & $\vdots$ && $\vdots$ \\ $c_s$ & $a_{s1}$ & $\dots$ &$a_{ss}$\\ \hline & $b_1$ &$\dots$ & $b_s$ \end{tabular} \end{center} The coefficients $c_i = \sum_{j=1}^s a_{ij}$ appear explicitly only in integration methods for non-autonomous equations. The method is called \emph{explicit} if the matrix $\{a_{ij}\}$ is strictly lower triangular, and \emph{diagonally implicit} if it is lower triangular. Otherwise the method is called \emph{implicit}.
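As a concrete illustration (a minimal sketch in Python, not taken from any particular library), the following shows how an explicit tableau as in Definition \ref{def:RK} translates into a one-step method: since $\{a_{ij}\}$ is strictly lower triangular, each stage uses only the stages already computed.
\begin{verbatim}
# One explicit Runge--Kutta step for y' = F(y), built from a Butcher tableau.
def rk_step(F, y0, h, a, b):
    s = len(b)
    f = []                               # F(Y_j) for the stages computed so far
    for i in range(s):
        Yi = y0 + h * sum(a[i][j] * f[j] for j in range(i))   # stage Y_i
        f.append(F(Yi))
    return y0 + h * sum(b[i] * f[i] for i in range(s))        # y_1

# usage: the explicit midpoint method (see the example below) on y' = y
a, b = [[0.0, 0.0], [0.5, 0.0]], [0.0, 1.0]
y = 1.0
for _ in range(10):                      # integrate to t = 1 with h = 0.1
    y = rk_step(lambda y: y, y, 0.1, a, b)
print(y)                                 # approximates e to second order
\end{verbatim}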
\begin{example} We note that the Euler method~(\ref{classeuler}) is the Runge--Kutta method with Butcher tableau: \begin{center} \begin{tabular}{c|c} $0$ & $0$ \\ \hline & $1$ \end{tabular} \end{center} Another well-known example is the explicit midpoint method: \begin{equation} y_{n+1} = y_n + hF\left(y_n + \frac 12 hF(y_n)\right), \end{equation} given by: \begin{center} \begin{tabular}{c|cc} $0$ & $0$ & $0$\\ $1/2$ & $1/2$ & $0$ \\ \hline & $0$ & $1$ \end{tabular} \end{center} An implicit method with good structure-preserving properties is the implicit midpoint rule \begin{equation} y_{n+1} = y_n + hF\left(\frac{y_n+y_{n+1}}{2}\right), \end{equation} represented by the tableau \begin{center} \begin{tabular}{c|c} $\frac12$ & $\frac12$ \\ \hline & $1$ \end{tabular} \end{center} \end{example} Given any number $m$, there exist Runge--Kutta methods of order $m$ \cite{butcher2008nmf}. Verifying this involves expanding the methods into series involving the derivatives of $F$, and already at low orders the expressions get quite complicated. However, in Section \ref{B-series} we shall see that the Runge--Kutta methods are special cases of \emph{Butcher series methods}, and that within this framework one can find nice descriptions of the order theory and also of the structure preservation properties of numerical methods. \paragraph{Differential equations and geometric structures.} When presented with a system modeled by a differential equation one will often first try to determine its qualitative properties: are there any invariants? What kind of geometric structure does the system have? Structures of interest can be energy and volume preservation, symplectic structure, first integrals, restriction to a particular manifold (as studied in Section \ref{sect:gengeomint}), etc. Then, when choosing (or designing) a numerical method for approximating the solution of the differential equation, it might make sense for the method to share these qualitative features. In that way one has control over what kind of errors the method introduces, obtaining a method tailor-made to the problem at hand. A rich source of problems with geometric structure is the class of \emph{Hamiltonian systems}. Let $H: \RR^{2n} \rightarrow \RR$ be a smooth function. A \emph{Hamiltonian vector field} is a vector field on $\RR^{2n}$ of the form $X_H = \Omega^{-1}\nabla H$, where $\Omega$ is an antisymmetric, invertible $2n\times 2n$ matrix.\footnote{Hamiltonian vector fields can be defined on any symplectic manifold \cite{arnold1989mmo}.} The flow of $X_H$ is given by \begin{equation} \frac{d}{dt} z = \Omega^{-1}\nabla_z H(z). \end{equation} The function $H$ represents the total energy of the system. Two important properties of the flow of a Hamiltonian vector field $X_H$ are that it preserves the Hamiltonian function $H$ (conservation of energy) and that it preserves a symplectic form $\omega$ on $\RR^{2n}$. Using numerical integrators constructed to preserve these properties has been shown to lead to dramatic improvements in accuracy. For examples of this phenomenon see e.g. \cite{hairer2006gni, hairer2005iao, leimkuhler2004shd}, and references therein. \subsection{Trees and Butcher series}\label{B-series} Starting with the work of John Butcher in the 1960s and 70s \cite{butcher1963cft, butcher1972aat}, the study of methods for solving ordinary differential equations has been closely connected to the combinatorics of rooted trees.
Many numerical methods $y_{n+1} = \Phi_h(y_n)$ (including all Runge--Kutta methods) can be expressed as certain formal series, called \emph{Butcher series} by Hairer and Wanner in \cite{hairer1974otb}. By a clever representation of the terms, the series can be indexed over the set of rooted trees. Consider the differential equation \begin{equation} y'(t) = F(y(t)). \end{equation} Denote the components of $F: \mathbf{R}^n \rightarrow \mathbf{R}^n$ by $f^i$ and write \begin{equation}\label{eq:components} f^i_{j_1j_2\cdots j_k} = \frac{\partial^k f^i}{\partial x_{j_1} \partial x_{j_2} \cdots \partial x_{j_k}}. \end{equation} Summing over repeated indices, the first few derivatives of $y$ can be written as: \begin{align} \begin{split} \frac{dy^i}{dt} &= f^i \\ \frac{d^2y^i}{dt^2} &= f^i_j f^j \\ \frac{d^3y^i}{dt^3} &= f^i_{jk}f^jf^k + f^i_j f^j_k f^k \\ \frac{d^4y^i}{dt^4} &= f^i_j f^j_k f^k_l f^l + f^i_j f^j_{kl} f^k f^l + 3f^i_{jk} f^j_l f^k f^l + f^i_{jkl} f^j f^k f^l. \end{split} \end{align} These expressions soon get very complicated, but the structure can be made much more transparent by observing that the derivatives of $F$ can be associated in a bijective way with rooted trees, an observation already made by Cayley in 1857 \cite{cayley1857taf}. Before giving the exact correspondence between differential equations, rooted trees and Butcher series, we will take a closer look at trees. \paragraph{Rooted trees.}\label{sect:trees} A \emph{tree} is a connected graph with no cycles $$ T = \{\ab,\aabb,\aababb,\aaabbb, \aabababb, \aabaabbb, \aaababbb,\ldots \}.$$ A \emph{rooted tree} is a tree with one vertex designated as the root. In the pictorial representation of trees, the root will always be drawn as the bottom vertex, and the trees will be ordered from the root to the top. More precisely, a tree $\tau$ is a graph consisting of a set of vertices $V(\tau)$ and edges $E(\tau)\subset V(\tau) \times V(\tau)$ such that there is exactly one path connecting any two vertices. A \emph{path} between $v_i$ and $v_j$ is a set of edges $\{v_{s_l}, v_{t_l}\}$, $l=1,2,\dots, r$, such that $s_1=i$, $t_l = s_{l+1}$ and $t_r=j$. This gives a partial ordering of the tree in terms of paths from the root to the vertices of the tree. A vertex $v_i$ is smaller than another distinct vertex $v_j$, written $v_i\prec v_j$, if the unique path from the root to $v_j$ goes via $v_i$. A vertex $v_i$ is called a \emph{leaf} if there is no vertex $v_j$ with $v_i \prec v_j$. A \emph{child} of a vertex $v_i$ is a vertex $v_j$ with $v_i \prec v_j$ such that there is no vertex $v_k$ with $v_i\prec v_k \prec v_j$. The \emph{order} $|\tau|$ of a tree $\tau$ is the number of vertices of the tree. The symmetry group of a tree $\tau$ is the group of all automorphisms of its vertices. The order of this group, $\sigma(\tau)$, is called the \emph{symmetry} of the tree $\tau$. For example, \[\sigma\left(\aabb\right) = 1, \quad \sigma\left(\aababb \right) = 2, \quad \sigma \left( \aabababb\right) = 6.\] A recursive definition of $\sigma$ can be found in \cite{hairer2006gni}. A \emph{forest} of rooted trees is a graph whose connected components are rooted trees, written $\omega = \tau_1 \dots \tau_n.$ We include the \emph{empty tree} $\one$, i.e. the graph with no vertices, in the set of forests. The set of forests can be put in bijection with the set of trees via the operator $B^+:F \rightarrow T$, defined on a forest $\omega = \tau_1 \dots \tau_n$ by connecting the trees to a new root by the addition of edges.
For example, \[B^+(\aababb\, \ab) = \aaababbabb.\] This operator can be used to generate all trees recursively from the tree $\ab$ by the following procedure: \begin{itemize} \item[(i)] The graph $\ab$ belongs to $\T$ \item[(ii)] If $\tau_1,\dots, \tau_n \in \T$ then $\tau = B^+(\tau_1\dots \tau_n)$ is in $\T$. \end{itemize} \noindent The \emph{tree factorial} $\tau!$ is given recursively by: \begin{itemize} \item[(i)] $\ab! = 1$ \item[(ii)] $B^+(\tau_1 \dots \tau_n)! = |B^+(\tau_1 \dots \tau_n)|\tau_1! \dots \tau_n!$. \end{itemize} An important operation on trees is the Butcher product, defined in terms of \emph{grafting}. \begin{definition}\label{bpr} The \emph{Butcher product} $\tau \bpr \omega$ of a tree $\tau = \Bplus(\tau_1\dots \tau_n)$ and a forest $\omega = \omega_1 \cdots \omega_m$ is given by grafting $\omega$ onto the root of $\tau$: \begin{equation} \tau \bpr \omega = \Bplus(\tau_1 \dots \tau_n\, \omega_1\dots \omega_m) \end{equation} \end{definition} \paragraph{Butcher series.} The calculations of the derivatives of $y'(t) = F(y(t))$ performed at the beginning of the section can be written in terms of the elementary differentials of $F$. \begin{definition}\label{def:elementdiff} Let $F:\mathbb{R}^n \rightarrow \mathbb{R}^n$ be a vector field. The \emph{elementary differential} $\mathcal{F}$ of $F$ is \begin{align} \begin{split} \mathcal{F}(\ab)(y) &= F(y) \\ \mathcal{F}(\tau)(y) &= F^{(m)}(y)(\mathcal{F}(\tau_1)(y), \dots, \mathcal{F}(\tau_m)(y)), \end{split} \end{align} where $F^{(m)}$ is the $m$-th derivative of the vector field $F$ and $\tau = B^+(\tau_1, \dots, \tau_m)$ is a rooted tree. \end{definition} \noindent We will discuss another way to write elementary differentials in Section \ref{sect:prelie}. With the notation from Equation (\ref{eq:components}), the first few elementary differentials are shown in Table \ref{table:elementdiff}. The vector field $F$ corresponds to the leaves of the tree, the first derivative $F'$ corresponds to a vertex with one child, the second derivative $F''$ to a vertex with two children, etc. \begin{table}[h!]~\label{table:elementdiff} \begin{equation*} \begin{array}{c|c} \tau & \F(\tau)(y)^i \\[1mm] \hline\\[-2mm] \ab& f^i \\[1mm] \aabb & f^i_jf^j \\[1mm] \aababb & f^i_{jk}f^jf^k \\[1mm] \aaabbb & f^i_jf^j_kf^k \\[1mm] \aabababb & f^i_{jkl}f^jf^kf^l \\[1mm] \aabaabbb & f^i_{jk}f^jf^k_lf^l \end{array} \end{equation*} \caption{Elementary differentials associated to a vector field $F$ with components $f^i$.} \end{table} Butcher series are (formal) Taylor expansions of elementary differentials indexed over trees: \begin{definition} A \emph{Butcher series} (B-series) is a (formal) series expansion in a parameter $h$: \begin{align} \begin{split} \Bs_{h,F} (\alpha) &= \alpha(\one)\F(\one) + \sum_{\tau \in \T}h^{|\tau|}\frac{\alpha(\tau)}{\sigma(\tau)} \F(\tau)\\ &= \sum_{\tau \in \tilde{\T}}h^{|\tau|}\frac{\alpha(\tau)}{\sigma(\tau)} \F(\tau), \end{split} \end{align} where $\tilde{\T} = \T \cup \{\one\}$, $F$ is a vector field, $\alpha$ is a function $\alpha: \tilde{\T} \rightarrow \RR$, $\sigma(\tau)$ is the symmetry of $\tau$, $h$ is a real number (representing the step size), and $\F$ is the elementary differential of $F$, extended to the empty tree $\one$ by $\F(\one)(y) = y$.
\end{definition} We shall see that these series can be used to represent numerical methods $y_{n+1} = \Phi_{h}(y_n)$ approximating the flow of a vector field $F$, in the sense that the Taylor series for $\Phi_h$ can be expanded into a B-series: $\Phi_{h} = \Bs_{h,F}(\alpha)$.\footnote{A numerical method for solving a differential equation is called a \emph{B-series method} if it can be written as a B-series.} By computing the Taylor expansion of the solution to the initial value problem (\ref{ivp}) one obtains the following result: \begin{proposition}[{\cite{hairer2006gni}}] The Taylor series for the solution of the differential equation (\ref{ivp}) can be written as a B-series: \begin{equation} B_{h,F}(\gamma) = \sum_{\tau \in \tilde{\T}} h^{|\tau |} \frac{\gamma(\tau)}{\sigma(\tau)} \mathcal{F}(\tau), \end{equation} where $\gamma(\tau) = 1/\tau!$. That is, $y(t + h) = \Bs_{h,F}(\gamma)(y(t))$. \end{proposition} Runge--Kutta methods can also be written as B-series expansions, with coefficients given by the \emph{elementary weights} of the method \cite{butcher1963cft}. \begin{definition}[Elementary weights] Let $b_i$ and $a_{ij}$ be the coefficients of a Runge--Kutta method as in Definition \ref{def:RK}. The \emph{elementary weight function} $\Phi$ is defined on trees as follows: \begin{align} \begin{split} &\Phi_i(\ab) = c_i\\ &\Phi(\ab) = \sum_{j=1}^s b_j \\ &\Phi_i(B^+(\tau_1,\dots, \tau_k)) = \sum_{j=1}^s a_{ij} \Phi_j(\tau_1) \Phi_j(\tau_2) \dots \Phi_j(\tau_k)\\ &\Phi(B^+(\tau_1,\dots, \tau_k)) = \sum_{j=1}^s b_j \Phi_j(\tau_1) \Phi_j(\tau_2) \dots \Phi_j(\tau_k) \end{split} \end{align} Here $i=1, \dots, s$. \end{definition} For example, \[\Phi(\aabb) = \sum_{j=1}^s b_jc_j, \quad \Phi(\aababb) = \sum_{j=1}^s b_jc_j^2, \quad \Phi(\aaababbb) = \sum_{j,k = 1}^s b_ja_{jk}c_k^2\] \begin{theorem}[{\cite{butcher1963cft}}] The B-series for a Runge--Kutta method given by the elementary weights $\Phi(\tau)$ is \begin{equation} \Bs_{h,F}(\Phi) = \sum_{\tau \in \tilde{\T}} h^{|\tau|}\frac{\Phi(\tau)}{\sigma(\tau)}\F(\tau) \end{equation} \end{theorem} \paragraph{Order theory for B-series methods.} Once we have the B-series of the exact solution and the B-series of a numerical method, it is straightforward to compare the coefficients and read off the order of the method. For Runge--Kutta methods, we obtain the following result: \begin{proposition}[{\cite{butcher1963cft}}] A Runge--Kutta method given by a B-series with coefficients $\Phi(\tau)$ has order $n$ if and only if \begin{equation} \Phi(\tau) = \gamma(\tau), \hspace{0.5cm}\text{ for all } \tau \in T \text{ such that } |\tau| \leq n. \end{equation} \end{proposition} \paragraph{B-series methods and structure preservation.} The class of B-series methods includes all Taylor series methods and Runge--Kutta methods. It does not, however, include all numerical methods, an example being the class of \emph{splitting methods}. It is important to point out that focusing only on B-series methods has its drawbacks. Besides the fact that the class does not contain all methods, it is also known that there are certain geometric structures that cannot be preserved by B-series methods. For example, no B-series method can preserve volume for all systems \cite{iserles2007bsm}. However, we will be content with this loss of generality and focus exclusively on methods based on B-series in this section, and on their generalization -- Lie--Butcher series -- in the next.
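Before turning to structure preservation, note that the order conditions above can be checked mechanically from a Butcher tableau. The sketch below (illustrative names; trees encoded as tuples of their children, with $\ab$ encoded as the empty tuple) computes the elementary weights and the tree factorial recursively, and confirms, for instance, that the explicit midpoint method has order 2 but not 3.
\begin{verbatim}
# Sketch: checking the order conditions Phi(tau) = 1/tau!.
from math import prod

def Phi_stage(tree, a):                  # [Phi_i(tree)]_i; tree = children tuple
    s = len(a)
    ch = [Phi_stage(t, a) for t in tree]
    return [sum(a[i][j] * prod(c[j] for c in ch) for j in range(s))
            for i in range(s)]

def Phi(tree, a, b):                     # Phi(tree) = sum_j b_j prod_k Phi_j(tau_k)
    ch = [Phi_stage(t, a) for t in tree]
    return sum(b[j] * prod(c[j] for c in ch) for j in range(len(b)))

def order(tree):          return 1 + sum(order(t) for t in tree)
def tree_factorial(tree): return order(tree) * prod(map(tree_factorial, tree))

a, b = [[0.0, 0.0], [0.5, 0.0]], [0.0, 1.0]      # explicit midpoint method
leaf = ()
for tau in (leaf, (leaf,), (leaf, leaf), ((leaf,),)):
    print(order(tau), Phi(tau, a, b), 1.0 / tree_factorial(tau))
\end{verbatim}
The output agrees for the trees with $|\tau|\le 2$ and disagrees for both order-3 trees, in accordance with the proposition above.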
A particularly well-studied case is that of Hamiltonian vector fields. The following two theorems serve as prime examples: \begin{theorem}[\cite{hairer1994bao}] Let $G = \Bs_{h,F}(\alpha)$ be a vector field with $\alpha(\one) = 0,$ $\alpha(\ab) \neq 0$. Then $G$ is Hamiltonian for all Hamiltonian vector fields $F(y) = \Omega^{-1} \nabla H(y)$ if and only if \begin{equation} \alpha(\tau_1 \bpr \tau_2) + \alpha(\tau_2 \bpr \tau_1) = 0 \end{equation} for all $\tau_1, \tau_2 \in \T$. Here $\bpr$ denotes the Butcher product of Definition \ref{bpr}. \end{theorem} \begin{theorem}[\cite{calvo1994cbs}] Consider a numerical method given by a B-series $\Bs_{h,F}(\alpha)$. The method is symplectic if and only if \begin{equation} \alpha(\tau_1 \bpr \tau_2) + \alpha(\tau_2 \bpr \tau_1) = \alpha(\tau_1)\alpha(\tau_2) \end{equation} for all $\tau_1, \tau_2 \in \T$, where $\alpha(\one)=0$. \end{theorem} The paper \cite{celledoni2010epi} gives an overview of what is known about structure preservation for B-series, including characterizations of the various subsets of trees corresponding to energy-preserving, Hamiltonian and symplectic B-series. \subsection{Hopf algebras and the composition of Butcher series}\label{GI:composition} Consider two numerical methods given by $\Phi^1$ and $\Phi^2$. Using the method $\Phi^1$ to advance a point $y_0$ to a point $y_1$, and then applying the method $\Phi^2$ with $y_1$ as initial point, results in a point $y_2$: \[y_1 = \Phi^1(y_0), \hspace{1cm} y_2 = \Phi^2(y_1).\] This is the idea behind \emph{composition} of numerical methods. If both methods are given by B-series, $\Phi^1 = \Bs^1_{h,F}(\alpha)$ and $\Phi^2 = \Bs^2_{h,F}(\beta)$, then the composition method $\Phi^2 \circ \Phi^1$ is again a B-series: $\Phi^2\circ \Phi^1 = \Bs_{h,F}(\gamma)$. Concretely, if $y_1 = \Bs^1_{h,F}(\alpha)(y_0)$ and $y_2 = \Bs^2_{h,F}(\beta)(y_1)$, then $y_2 = \Bs_{h,F}(\gamma)(y_0)$. This is the Hairer--Wanner theorem from \cite{hairer1974otb}. The coefficient function $\gamma$ of this B-series was first studied by John Butcher in \cite{butcher1972aat}, where he found that composition of B-series is a group operation (giving rise to the \emph{Butcher group}) on the coefficient functions, and gave expressions for the product, identity and inverse in this group. In \cite{kreimer1998oth, connes1998har} Connes and Kreimer introduced a Hopf algebra of rooted trees connected to the renormalization procedure in quantum field theory. Later, it was pointed out in \cite{brouder2000rkm} that a variant of this Hopf algebra is closely related to the Butcher group. More precisely, the Butcher group is the group of characters of a Hopf algebra $\BCK$ defined by Connes and Kreimer. We will describe the Butcher group indirectly by describing the Hopf algebra $\BCK$. But first we will present some basic definitions from the theory of Hopf algebras. For a comprehensive introduction, see \cite{sweedler1969ha, abe80ha}. Other excellent references include \cite{cartier2006apo, manchon2008hai}. \paragraph{Hopf algebras.} Let $\field$ be a field of characteristic zero.
An \emph{algebra} $A$ over $\field$ is a $\field$-vector space equipped with a multiplication map $\mu: A \otimes A \rightarrow A$ and a unit $u: \field \rightarrow A$ such that \begin{itemize} \item \begin{tabular}{lr} $\mu \circ (id \otimes \mu) = \mu \circ (\mu \otimes id): A\otimes A \otimes A \rightarrow A$ & (associativity)\end{tabular} \item \begin{tabular}{lr} $\mu \circ (u \otimes id) = \mu \circ (id \otimes u): k \otimes A \cong A \rightarrow A$ &(unitality)\end{tabular} \end{itemize} A \emph{coalgebra} $C$ over $\field$ is the dual notion. It consists of a comultiplication map $\Delta: C \rightarrow C \otimes C$ and a counit $\epsilon: C \rightarrow \field$ such that \begin{itemize} \item \begin{tabular}{lr} $(\Delta \otimes id) \circ \Delta = (id \otimes \Delta) \circ \Delta: C \rightarrow C \otimes C \otimes C$ & (coassociativity)\end{tabular} \item \begin{tabular}{lr} $(\epsilon \otimes id) \circ \Delta = (id \otimes \epsilon) \circ \Delta: C \rightarrow C \otimes \field \cong C$ &(counitality)\end{tabular} \end{itemize} A \emph{Hopf algebra} is at once an algebra and a coalgebra, and it comes equipped with an antipode $S: H \rightarrow H$. These structures have to satisfy certain compatibility conditions, expressed by the following diagrams, where $\tau$ denotes the flip operation $\tau(h_1, h_2) = (h_2, h_1)$: \begin{minipage}[b]{0.5\linewidth} \begin{center} \begin{diagram}[labelstyle=\scriptstyle] H^{\otimes 4} &&\rTo^{I \otimes \tau \otimes I}& & H^{\otimes 4}\\ \uTo^{\Delta \otimes \Delta} &&&& \dTo_{\mu \otimes \mu}\\ H \otimes H&\rTo_{\mu}& H &\rTo_{\Delta}& H \otimes H \end{diagram} \end{center} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \begin{center} \begin{diagram}[labelstyle=\scriptstyle] H \otimes H &\rTo^{\epsilon \otimes \epsilon}& k \otimes k\\ \dTo^{\mu} && \dTo_{\cong} \\ H &\rTo_{\epsilon}& k \end{diagram} \end{center} \end{minipage} \vspace{0.5cm} \begin{diagram} & & H\otimes H & & \rTo^{\scriptstyle S\otimes 1} & & H\otimes H \\ & \ruTo^{\scriptstyle\Delta} & & & & & & \rdTo>{\scriptstyle\mu} \\ H & & \rTo^{\scriptstyle \varepsilon} & & k & & \rTo^{\scriptstyle u} & & H \\ & \rdTo<{\scriptstyle\Delta} & & & & & & \ruTo>{\scriptstyle \mu} \\ & & H\otimes H & & \rTo_{\scriptstyle 1\otimes S} & & H\otimes H \\ \end{diagram} The first two diagrams ensure that the coproduct and the counit are both algebra homomorphisms. The last diagram is best interpreted in terms of the characters of a Hopf algebra. Let $A$ be a commutative $k$-algebra, and let $\Lin(H,A)$ denote the set of linear maps from $H$ to $A$. An element $\alpha \in \Lin(H,A)$ is called a \emph{character} if $\alpha(x\cdot y) = \alpha(x) \cdot \alpha(y)$ for all $x,y \in H$, where the product on the left-hand side is taken in $H$, and on the right-hand side in $A$. The characters in $\Lin(H,A)$ form a group under the \emph{convolution product}: \begin{equation}\label{eq:convolution} \phi \star \psi = \mu \circ (\phi \otimes \psi) \circ \Delta. \end{equation} The unit is the composition of the counit with the unit, i.e. $\eta := u \circ \epsilon.$ The bottom diagram above corresponds to the antipode being the inverse of the identity under this product, and we have $\alpha^{\star -1}=\alpha \circ S$. Later we will also need the concept of \emph{infinitesimal characters} (also called \emph{derivations}), which are maps $\alpha$ in $\Lin(H,A)$ satisfying \begin{equation} \alpha(x\cdot y) = \eta(x)\cdot\alpha(y) + \alpha(x)\cdot\eta(y).
\end{equation} A Hopf algebra $H$ is \emph{graded} if it is graded as an algebra, i.e. $H = \bigoplus_{n\geq 0} H_n$ with $\mu(x_r, x_s) \in H_{r+s}$ for $x_r \in H_r$ and $x_s \in H_s$, and its coproduct satisfies \[\Delta(H_n) \subset \bigoplus_{r+s = n} H_r \otimes H_s.\] \paragraph{The Butcher--Connes--Kreimer Hopf algebra.} The composition of B-series is governed by a certain Hopf algebra $\BCK$ based on the set $\T$ of rooted trees, called the \emph{Butcher--Connes--Kreimer Hopf algebra}. In Section \ref{sect:gengeomint} we will see that a generalization of this Hopf algebra governs the composition of Lie--Butcher series (Section \ref{sect:compLB}). To describe the BCK Hopf algebra we need to define its structure as a vector space, an algebra and a coalgebra, and to define the antipode. As an $\RR$-vector space $\BCK$ is generated by the set $\T$ of rooted trees, and graded by the order (i.e.\ the number of vertices) of the trees. The algebra structure is that of the symmetric algebra $S(\RR\{\T\})$. The product is written as (commutative) concatenation of trees (i.e.\ disjoint union), giving rise to forests of trees. The unit is the empty tree $\one$. \[\aabb \, \aababb = \aababb \, \aabb, \qquad \aabb \,\,\, \one = \one \,\,\,\aabb = \aabb\] The coproduct of $\BCK$ is the map $\DeltaBCK: \BCK \rightarrow \BCK \otimes \BCK$ determined recursively by: \begin{equation} \DeltaBCK \circ B^+(\omega) = B^+(\omega) \otimes \one + (Id \otimes B^+) \circ \DeltaBCK(\omega), \end{equation} where $\omega$ is a forest\footnote{Recall that $\DeltaBCK$ is an algebra morphism and is therefore defined on forests as well as trees, since $\DeltaBCK(\tau_1\tau_2) = \DeltaBCK(\tau_1)\DeltaBCK(\tau_2)$.}. The counit is the map $\epsilon: \BCK \rightarrow \RR$ given by $\epsilon(\one)= 1$ and $\epsilon(\tau) = 0$ if $\tau \neq \one$. The coproduct can also be written in a non-recursive manner, using cuttings of trees. \paragraph{Cutting trees.} An \emph{admissible cut} of a tree $\tau$ is a set $c \subset E(\tau)$ of edges of $\tau$ such that $c$ contains at most one edge from any path from the root to a leaf. The case $c = \emptyset$ is called the empty cut. Let $\omega$ denote the forest with vertices $V(\tau)$ and edges $E(\tau)\setminus c$. We write $R^c(\tau)$ for the component of $\omega$ containing the root of $\tau$, and $P^c(\tau)$ for the forest consisting of the remaining components. The cut resulting in $P^c(\tau) = \tau$ and $R^c(\tau) = \one$ is also admissible, and called the \emph{full cut} (f.c.). \begin{theorem}[{\cite{connes1998har}}]\label{thm:BCKcoprod} The coproduct in $\BCK$ can be written as \begin{equation} \DeltaBCK(\tau) = \sum_{c\in Adm(\tau)} P^c(\tau) \otimes R^c(\tau) \end{equation} \end{theorem} \noindent Examples of the coproduct can be found in Table \ref{BCKcoprod}. \begin{table}[h!
] \centering \begin{equation*} \begin{array}{c@{\,\,}|@{\quad}l} \hline \\[-2mm] \tau & \DeltaBCK(\tau) \\[1mm] \hline \\[-2mm] \one & \one\tpr\one \\[1mm] \ab & \ab\tpr\one+\one\tpr\ab \\[2mm] \aabb & \aabb\tpr\one+\ab\tpr\ab+\one\tpr\aabb \\[2mm] \aaabbb &\aaabbb\tpr\one+\ab\tpr\aabb+\aabb\tpr\ab+\one\tpr\aaabbb \\[2.5mm] \aababb & \aababb\tpr\one+\ab\ab\tpr\ab+2\,\ab\tpr\aabb+\one\tpr\aababb \\[2.5mm] \aaaabbbb & \aaaabbbb\tpr\one+\aaabbb\tpr\ab+\aabb\tpr\aabb+ \ab\tpr\aaabbb+\one\tpr\aaaabbbb \\[2.5mm] \aaababbb & \aaababbb\tpr\one+\aababb\tpr\ab+\ab\ab\tpr\aabb+ 2\,\ab\tpr\aaabbb+\one\tpr\aaababbb \\[2.5mm] \aabaabbb & \aabaabbb \tpr \one + \ab\aabb\tpr \ab + \ab\ab \tpr \aabb + \aabb\tpr \aabb + \ab \tpr \aababb + \ab \tpr \aaabbb + \one \tpr \aabaabbb \end{array} \end{equation*} \caption{Examples of the coproduct $\DeltaBCK$ in the Hopf algebra $\BCK$}\label{BCKcoprod} \end{table} The antipode can be defined recursively as $S(\one) = \one$ and: \begin{equation} S(\tau) = -\tau - \sum _{c \in Adm(\tau)\setminus \{\emptyset, f.c.\}}S(P^c(\tau))R^c(\tau) \end{equation} The Hairer--Wanner theorem gives the exact correspondence between $\BCK$ and composition of B-series: \begin{theorem}[{\cite{hairer1974otb}}]\label{hairerwannertheorem} Let $\Bs^1_{h,F}(\alpha)$ and $\Bs^2_{h,F}(\beta)$ be two B-series, with coefficients $\alpha, \beta: \T \rightarrow \RR$. The composition $\Bs^2_{h,F}(\beta) \circ \Bs^1_{h,F}(\alpha)$ is again a B-series, and we have \begin{equation} \Bs^2_{h,F}(\beta) \circ \Bs^1_{h,F}(\alpha) = \Bs_{h,F}(\alpha \star \beta), \end{equation} where $\star$ denotes convolution in the Hopf algebra $\BCK$. \end{theorem} \subsection{Substitution and backward error analysis for Butcher series}\label{substB} Consider a numerical method $\Phi_h$ used to solve a differential equation of the form \begin{equation}\label{diffeq} y' = F(y). \end{equation} The basic idea of \emph{backward error analysis} of the method $\Phi_h$ is to interpret it as giving the exact solution of a modified equation: \begin{equation}\label{modifieddiffeq} \tilde{y}' = \tilde{F}_h(\tilde{y}). \end{equation} If we can find such an equation, the numerical method $\Phi_h$ is represented by a modified vector field $\tilde{F}_h$, which can then be used to study the method. The idea is based on work by Wilkinson in the context of algorithms for solving linear systems of equations \cite{wilkinson1960eao}, and has been explored in several papers \cite{warming1974tme, hairer1994bao, calvo1994mef, hairer2006gni, chartier2007nib}. Recurrence formulas for the modified equation were first obtained in \cite{hairer1994bao, calvo1994mef}. A related notion is the \emph{modifying integrators} of \cite{chartier2007nib}. The idea is to look for a vector field $\tilde{F}_h$ so that the numerical method $\Phi_h$ applied to the flow equation of $\tilde{F}_h$ (Equation (\ref{modifieddiffeq})) yields the exact solution of Equation (\ref{diffeq}). It turns out that the case where $\Phi_h$ is a B-series method is particularly nice \cite{chartier2005asl, chartier2007nib, calaque2009tih}. The vector fields $\tilde{F}_h$ can then be written as B-series whose coefficients are derived from the coefficients of $\Phi_h$, and these coefficients can be expressed by the \emph{substitution law} for B-series methods (Corollaries \ref{corBEA} and \ref{corMI}).
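\paragraph{A computational aside.} Before stating the substitution law, we pause to note that the composition law of Theorem \ref{hairerwannertheorem} is easy to experiment with. The following minimal Python sketch (our own ad hoc encoding, not taken from the cited references) hard-codes the coproducts of Table \ref{BCKcoprod} for trees of order up to three and computes the convolution $\alpha \star \beta$; composing the explicit Euler method with itself then gives the coefficients of the two-step method. The string encoding of trees and all helper names are ours. \begin{verbatim}
# A sketch (ad hoc encoding): composing B-series coefficients via
# the BCK coproduct, hard-coded for trees with up to three vertices.
# '.' = one-vertex tree, '[.]' = B+('.'), '[[.]]' = B+('[.]'),
# '[..]' = B+('.','.'); () is the empty forest.
DELTA = {  # Delta(tau) as (pruned part, root part, coefficient)
    '.':     [((), ('.',), 1), (('.',), (), 1)],
    '[.]':   [((), ('[.]',), 1), (('.',), ('.',), 1),
              (('[.]',), (), 1)],
    '[[.]]': [((), ('[[.]]',), 1), (('.',), ('[.]',), 1),
              (('[.]',), ('.',), 1), (('[[.]]',), (), 1)],
    '[..]':  [((), ('[..]',), 1), (('.', '.'), ('.',), 1),
              (('.',), ('[.]',), 2), (('[..]',), (), 1)],
}

def on_forest(alpha, forest):
    # Characters are multiplicative: alpha(empty forest) = 1.
    val = 1.0
    for tree in forest:
        val *= alpha[tree]
    return val

def convolve(alpha, beta, tau):
    # (alpha * beta)(tau) = sum over admissible cuts c of
    # alpha(P^c(tau)) * beta(R^c(tau)).
    return sum(c * on_forest(alpha, P) * on_forest(beta, R)
               for P, R, c in DELTA[tau])

# Explicit Euler: coefficient 1 on the one-vertex tree, 0 above.
euler = {'.': 1.0, '[.]': 0.0, '[[.]]': 0.0, '[..]': 0.0}
print({t: convolve(euler, euler, t) for t in DELTA})
# {'.': 2.0, '[.]': 1.0, '[[.]]': 0.0, '[..]': 1.0}
\end{verbatim} The value $2$ on the one-vertex tree reflects that two Euler steps of size $h$ agree to first order with one step of size $2h$. Since normalization conventions for B-series coefficients vary in the literature, the higher-order values should be read relative to the conventions fixed here.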
\paragraph{The substitution law.} Let $\Bs_{h,F}(\alpha)$ and $\Bs_{h,G}(\beta)$ be two B-series, where $\alpha(\one) = 0$. Then $\Bs_{h,F}(\alpha)$ is a vector field, and we can consider the B-series obtained by using this as the vector field $G$ in the B-series $\Bs_{h,G}(\beta)$. This is called \emph{substitution of B-series}. The result is given in terms of a bialgebra $\CEFM$ by the following theorem: \begin{theorem}[{\cite{chartier2005asl, chartier2007nib, calaque2009tih}}]\label{thm:substB} Let $F$ be a vector field, $\alpha, \beta$ linear maps $\alpha,\beta: \T \rightarrow \RR$ where $\beta$ is an infinitesimal character of $\BCK$, and $\alpha(\one) = 0$. Then the vector field $(1/h)\Bs_{h,F}(\alpha)$ inserted into the B-series $\Bs_{h,\cdot}(\beta)$ is again a B-series, given by \begin{equation} \Bs_{h, (1/h) \Bs_{h,F}(\alpha)}(\beta) = \Bs_{h,F} (\alpha \star \beta), \end{equation} where $\star$ denotes convolution of characters in the bialgebra $\CEFM$. \end{theorem} The bialgebra $\CEFM$ is the symmetric algebra over rooted trees $S(\T)$, with $\ab$ as unit, equipped with a coproduct given by contracting subforests in trees: \begin{equation}\label{CEFMcoprod} \Delta(\tau) = \sum_{\omega \subseteq \tau} \omega \tpr \tau/\omega. \end{equation} If $\tau$ is a tree then the notation $\omega \subseteq \tau$ means that $\omega$ is a spanning subforest of $\tau$, i.e. that $\omega$ is a collection of subtrees of $\tau$ so that each vertex of $\tau$ belongs to exactly one tree in $\omega$. Then $\tau/\omega$ denotes the tree obtained by contracting each subtree (with at least two vertices) of $\tau$ contained in $\omega$ onto a vertex. Some examples of the coproduct can be found in Table \ref{table:CEFM}. The bialgebra is graded by the number of edges. There is a Hopf algebra related to $\CEFM$, obtained by considering the symmetric algebra over the set of rooted trees $\T'$ \emph{with at least one edge} (i.e.\ $\ab$ is not included), and then adding $\ab$ back as the \emph{unit} for the product. The coproduct is defined as in Equation (\ref{CEFMcoprod}). The resulting bialgebra is \emph{connected}, which makes it a Hopf algebra \cite{manchon2008hai}. For details on these constructions, consult \cite{calaque2009tih}. \begin{table}[!ht] \centering \begin{equation*} \begin{array}{c@{\,\,}|@{\quad}l} \hline \\[-2mm] \tau & \Delta_{CEFM}(\tau) \\[1mm] \hline \\[-2mm] \ab & \ab\tpr\ab \\[2mm] \aabb & \aabb\tpr\ab+\ab\ab\tpr\aabb \\[2mm] \aaabbb &\aaabbb\tpr\ab+\ab\ab\ab\tpr\aaabbb+2\,\aabb\ab\tpr\aabb \\[2.5mm] \aababb & \aababb \tpr \ab + \ab\ab\ab\tpr \aababb + 2\,\aabb\ab\tpr \aabb \\[2.5mm] \aaaabbbb & \aaaabbbb\tpr\ab+\ab\ab\ab\ab\tpr\aaaabbbb+2\,\aaabbb\ab\tpr\aabb+ 3\,\aabb\ab\ab\tpr\aaabbb+\aabb\aabb\tpr\aabb \\[2.5mm] \aaababbb & \aaababbb\tpr \ab + \ab\ab\ab\ab\tpr \aaababbb + 2\, \aabb\ab\ab\tpr \aaabbb + \aabb\ab\ab\tpr\aababb + 2\, \aaabbb\ab\tpr \aabb + \aababb\ab\tpr \aabb\\[2.5mm] \aabaabbb & \aabaabbb \tpr \ab + \ab\ab\ab\ab\tpr \aabaabbb + \aabb\ab\ab\tpr \aaabbb + 2\, \aabb\ab\ab\tpr \aababb + \aabb\,\aabb \tpr \aabb + \aababb\ab\tpr \aabb + \aaabbb\ab\tpr \aabb \end{array} \end{equation*} \caption{Examples of the coproduct $\Delta_{CEFM}$ in the substitution bialgebra}\label{table:CEFM} \end{table} \paragraph{Backward error analysis and modifying integrators.} Once Theorem \ref{thm:substB} is established one can obtain expressions for backward error analysis and modifying integrators.
\begin{corollary}[Backward error analysis {\cite{chartier2005asl, chartier2007nib}}]\label{corBEA} Let $\Bs_{G}(\gamma)$ denote the B-series for the exact flow of the vector field $G$, and let $\Bs_{F}(\alpha)$ be a B-series giving a numerical flow for $F$. The modified vector field $\tilde{F}$ given by $\Bs_{\tilde{F}}(\gamma) = \Bs_{F}(\alpha)$ is a B-series $\Bs_{F}(\beta)$ with coefficients given by \begin{equation} \beta \star \gamma = \alpha \end{equation} \end{corollary} \begin{corollary}[Modifying integrators {\cite{chartier2005asl, chartier2007nib}}]\label{corMI} Let $\Bs_{G}(\gamma)$ denote the B-series for the exact flow of the vector field $G$, and let $\Bs_{F}(\alpha)$ be a B-series giving a numerical flow for $F$. The modified vector field $\tilde{F}$ so that $\Bs_{\tilde{F}}(\alpha) = \Bs_{F}(\gamma)$ is a B-series $\Bs_{F}(\beta)$ whose coefficients are given by \begin{equation} \beta \star \alpha = \gamma \end{equation} \end{corollary} \subsection{Pre-Lie Butcher series}\label{sect:prelie} The space of vector fields on $\RR^n$ has the structure of a \emph{pre-Lie algebra}, and in this section we will see that B-series can be formulated purely in terms of this pre-Lie structure. This allows us to lift the concept of B-series to the free pre-Lie algebra, giving rise to \emph{pre-Lie B-series} \cite{ebrahimi-fard2010plb}. Viewing B-series as objects in the free pre-Lie algebra gives a clearer focus on the core algebraic structures at play, and it also enables the application of tools and results from other fields where pre-Lie algebras appear. Examples of this phenomenon can be found in \cite{ebrahimi-fard2009ama} (see Remark \ref{remark:magnus}), \cite{calaque2009tih} and \cite{chapoton2002rta}. We give the basic constructions here because formulating Butcher series in terms of pre-Lie algebras will find an analogue in Section \ref{sect:gengeomint}, where Lie--Butcher series will be constructed from so-called post-Lie algebras. \paragraph{Pre-Lie algebras.} The concept of pre-Lie algebras is a relaxation of associative algebras that still preserves the \emph{Lie admissible} property. That is, for an associative algebra $(A,*)$, antisymmetrization of the product gives a Lie bracket $[a,b] = a*b - b*a$, and the same holds for pre-Lie algebras. Note, however, that not all pre-Lie algebras are associative. They were first introduced and studied by Vinberg \cite{vinberg1963chc}, Gerstenhaber \cite{gerstenhaber1963tcs}, and Agrachev and Gamkrelidze \cite{agrachev1981caa}, under various names. A nice introduction to pre-Lie algebras can be found in \cite{manchonass}. \begin{definition}\label{preLie} A (left) \emph{pre-Lie algebra}\footnote{Also called a \textit{Vinberg}, \textit{left-symmetric} or \textit{chronological} algebra} $(A, \tr)$ is a $\field$-vector space $A$ equipped with an operation $\tr: A \otimes A \rightarrow A$ subject to the following relation: \begin{align} a_{\tr}(x,y,z) - a_{\tr}(y,x,z) = 0, \end{align} where $a_{\tr}(x,y,z)$ is the associator $a_{\tr}(x,y,z) = x \tr (y \tr z) - (x\tr y) \tr z$. \end{definition} \begin{example}[The pre-Lie algebra of vector fields] The space of vector fields $\mathcal{X}(M)$ on a differentiable manifold $M$ equipped with a flat, torsion-free connection $\nabla$ can be given the structure of a pre-Lie algebra by defining $\tr$ as $F \tr G = \nabla_F G$.
In the case $M = \mathbb{R}^n$ with the standard flat and torsion-free connection we have that for $F = \sum_{i=1}^n F_i \partial_i$ and $G=\sum_{j=1}^n G_j \partial_j$, \begin{equation} F \tr G = \sum_{i=1}^n \left(\sum_{j=1}^n F_j(\partial_jG_i)\right)\partial_i. \end{equation} In Section \ref{sect:gengeomint} we will see that allowing for torsion leads to the concept of \emph{post-Lie algebras}. See also \cite{munthe-kaas2012opl}. \end{example} \paragraph{The free pre-Lie algebra.} The free pre-Lie algebra has been studied in several papers, most notably by Chapoton and Livernet in \cite{chapoton2001pla}, Dan Segal in \cite{segal1994fls}, Agrachev and Gamkrelidze in \cite{agrachev1981caa}, and Dzhumadil'daev and L\"{o}fwall in \cite{dzhumadildaev2002tfr}. These papers give different bases for the free pre-Lie algebra, and one can choose to work in the basis most beneficial for the problem at hand. A basis for the free pre-Lie algebra $PL(V)$ over a vector space $V$ was described by Chapoton and Livernet in terms of nonplanar rooted trees \cite{chapoton2001pla, chapoton2009art}: \[\left\{\ab,\aabb,\aababb,\aaabbb, \aabababb, \aabaabbb, \aaababbb,\ldots \right\}\] decorated by elements of $V$. The pre-Lie product $\tau_1 \car \tau_2$ of two rooted trees is given by grafting: $\tau_1 \car \tau_2$ is the sum of all the trees resulting from the addition of an edge from the root of $\tau_1$ to one of the vertices of $\tau_2$: \begin{equation} \tau_1 \car \tau_2 := \sum_{v \in V(\tau_2)}\tau_1 \circ_v \tau_2 \end{equation} Here $\tau_1 \circ_{v} \tau_2$ denotes the tree obtained by grafting $\tau_1$ onto the vertex $v$ of $\tau_2$. \[\ab \car \ab = \aabb, \qquad \ab \car \aabb = \aababb + \aaabbb, \qquad \aabb \car \aabb = \aaabbabb + \aaaabbbb\] \begin{theorem}[{\cite{chapoton2001pla}}]\label{fpla} $PL(V)$ is the free pre-Lie algebra on the vector space $V$: for any pre-Lie algebra $P$ equipped with a morphism $V \rightarrow P$, there is a unique pre-Lie morphism $PL(V) \rightarrow P$ making the following diagram commute: \begin{diagram}[labelstyle=\scriptstyle] V &\rTo& PL(V)\\ &\rdTo& \dTo_{\exists !}\\ && P \end{diagram} \end{theorem} We write $PL$ for the free pre-Lie algebra on a single generator. The free pre-Lie algebra is related to the Hopf algebra $\BCK$ defined in Section \ref{GI:composition}: \begin{theorem}[{\cite{chapoton2001pla}}] The universal enveloping algebra $U(PL)$ of the free pre-Lie algebra on the one-vertex tree, viewed as a Lie algebra, is isomorphic to the dual of the Butcher--Connes--Kreimer Hopf algebra $\BCK$. \end{theorem} In fact, the dual of the Butcher--Connes--Kreimer Hopf algebra is isomorphic to the \emph{Grossman--Larson Hopf algebra} defined in \cite{grossman89has}. The isomorphism was proven in \cite{hoffman2003cor}. \paragraph{Pre-Lie Butcher series.} Now we can formulate the pre-Lie Butcher series. \begin{definition} A \emph{pre-Lie Butcher series} is a formal series in $\RR\langle \PL\rangle$: \begin{equation}\label{eq:plB} X(\alpha) = \sum_{t \in \PL} h^{|t|}\alpha(t)t. \end{equation} \end{definition} \noindent The classical B-series are recovered by applying the unique pre-Lie morphism associated to a vector field $F$: \begin{equation} \F: \PL \rightarrow \mathcal{X}(\RR^n) \quad \text{such that} \quad \F(\ab) = F. \end{equation} This is the elementary differential function of $F$ as defined in Definition \ref{def:elementdiff}.
It is given recursively by $\F(\ab) = F$ and \begin{equation} \F(t) = F^{(n)} (\F(\tau_1), \dots, \F(\tau_n)), \end{equation} if $t = \Bplus(\tau_1,\dots,\tau_n)$. B-series in any other pre-Lie algebra $(A, \tr)$ can be defined in the same way: by applying the unique pre-Lie algebra morphism $\PL \rightarrow A$ determined by the image of $\ab$ to the series (\ref{eq:plB}). \begin{remark}\label{el.diff-grafting} Since $\F: \PL \rightarrow \mathcal{X}(\RR^n)$ is a pre-Lie morphism, the trees associated to the derivatives of $y'(t) = F(y(t))$ can be generated by iterated grafting onto the one-vertex tree: \[\text{the $n$-fold grafting}\quad \ab \car (\ab \car (\ab \car \dots (\ab \car \ab) \dots )) \quad \text{corresponds to} \quad \frac{d^ny}{dt^n}.\] This way of looking at elementary differentials will reappear in a different setting in Section \ref{sect:gengeomint}. \end{remark} \begin{remark}\label{remark:magnus} The formulation of differential equations in terms of pre-Lie algebras has seen some use in numerical analysis. In \cite{ebrahimi-fard2009ama} Ebrahimi-Fard and Manchon rephrased differential equations of the type $X'(t) = A(t)X(t)$, where $X,A$ are linear operators on a vector space, as combinatorial equations in pre-Lie algebras. In this context they obtained an analogue of the Magnus expansion \cite{Magnus1954ote}, a series expansion of the solution to the equation in the magma generated by monomials of pre-Lie elements. In this setting it becomes apparent that one can use the pre-Lie relation to cancel out some of the terms in the expansion, leading to a hitherto unknown reduction of the number of terms in the Magnus expansion. \end{remark} \section{Geometric numerical integration on manifolds}\label{sect:gengeomint} Our objects of study are now dynamical systems evolving on \emph{manifolds}: \begin{equation}\label{manifolddiffeq} y'=F(y), \hspace{0.6cm} y_0 \in M, \quad F \in \XM, \end{equation} where $M$ is a smooth manifold and $\XM$ denotes the vector fields on $M$. As in the previous chapter, the aim is to find good numerical approximations to the flow $\exp(tF) := \Psi_{t,F}$ of (\ref{manifolddiffeq}). The study of such systems comprises several different approaches. One simple way to attack the problem is to embed the manifold in $\mathbb{R}^N$, for some $N$, and use methods developed for $\mathbb{R}^N$ to solve the equation. But then the numerical flow of the method may drift off the manifold, and this can in some cases cause problems \cite{engo1998mas, iserles1997nmo, calvo1996rkm, iserles1996qna}. A more satisfying and often better way is to use methods that are intrinsic to the manifold and do not rely on any embedding; this is the approach taken by \emph{Lie group integrators} \cite{iserles2000lgm}. Consider for instance a system evolving on the manifold $S^3$. By embedding $S^3$ in $\mathbb{R}^4$ one can use numerical methods that approximate the flow of the system using the basic motions of translations in $\mathbb{R}^4$. Another approach is to use \emph{rotations} to move around $S^3$: $y_{n+1} = Q_n y_n$ where $Q_n$ are orthogonal matrices, i.e. to use the action of the Lie group $SO(4)$ on $S^3$. This illustrates the intrinsic approach, where we are guaranteed not to drift off $S^3$. Methods developed for manifolds include the Crouch--Grossman and RKMK methods (and variants thereof) \cite{munthe-kaas1998rkm, munthe-kaas1999hor, crouch1993nio, owren1999rkm, engo2000otc}. In this chapter we will study a generalization of B-series called \emph{Lie--Butcher series}.
In analogy to the previous chapter we will look at the composition and substitution of Lie--Butcher series. \subsection{Setting the stage: homogeneous manifolds and differential equations}\label{homogeneous} The flows we would like to approximate evolve on smooth manifolds, and so the tools of differential geometry play an important role. We will not review the general theory of smooth manifolds here, but assume a basic knowledge of differential geometry; for excellent introductions see e.g. \cite{abraham1988mta, spivak2005aci, sharpe1997dg}. For a viewpoint oriented toward geometric numerical integration, see \cite{iserles2000lgm}. More precisely, we will be working with smooth manifolds equipped with transitive actions by Lie groups, so-called \emph{homogeneous manifolds}, where the Lie group provides a way to move around on the manifold.\footnote{Note that other manifolds with \emph{local} actions could also be considered, but to avoid unnecessary complications we only consider homogeneous manifolds.} Because the action is in general not free, the representation of a differential equation on the Lie group is in general not unique; we return to this point below. \begin{definition} An \emph{action of a Lie group} $G$ on a smooth manifold $M$ is a group homomorphism $\lambda: G \rightarrow \Diff(M)$, $g \mapsto \lambda_g$, where $\Diff(M)$ is the group of diffeomorphisms on $M$. We will mostly write such an action as a map $\Lambda: G \times M \rightarrow M$. \end{definition} \noindent For convenience of notation we write $g$ for the diffeomorphism $\lambda_g$, and also $g \cdot m$ for $\lambda_g(m)$. The \emph{orbit} through a point $p \in M$ is the set $G\cdot p = \lambda_G(p)$. The action is called \emph{transitive} if the manifold $M$ is a single $G$-orbit. That is, if for all $p, q \in M$ there is a $g \in G$ so that $p = g \cdot q$. A manifold equipped with a transitive action by a Lie group $G$ is called a \emph{homogeneous manifold}. A consequence of this is that $M$ is diffeomorphic to the coset space $G/G_x$, where $G_x$ is the closed Lie subgroup of isotropies, $G_x = \{g \in G \,\,|\,\, gx = x\}$ (the point stabilizer): the smooth manifold structure of $G/G_x$ comes from the quotient map, and the diffeomorphism $F: G/G_x \rightarrow M$ is given by $F(gG_x) = g \cdot x$. The group $G_x$ is called {\it the} subgroup of isotropies because if $x'$ is another point in $M$, then $G_x$ and $G_{x'}$ are conjugate, and therefore isomorphic. Some interesting examples of homogeneous manifolds are the spheres $S^n = SO(n+1)/SO(n)$. Other important examples come from Lie groups $G$ themselves. The action $\Lambda: G \times G \rightarrow G$ is then the Lie group multiplication. A (somewhat degenerate) example is the homogeneous manifold $(\RR^n, (\RR^n, +))$. Here the action of $\RR^n$ on itself is given by translations. The theory developed for homogeneous manifolds in this chapter will reduce to the theory developed in the previous chapter when applied to this particular case. Actions by Lie groups on manifolds can be associated to actions by Lie algebras. Let $\Lambda: G\times M \rightarrow M$ be an action of $G$ on $M$. The associated Lie algebra action $\lambda_*: \g \rightarrow \XM$ of $\g$ on $M$ is the Lie algebra anti-homomorphism defined by: \begin{equation}\label{eq:infinitesimal} \lambda_*(v)(p) = \left.\frac{d}{dt}\right|_{t=0} \Lambda(\exp(tv),p). \end{equation} This satisfies $\lambda_*([u,v]) = -[\lambda_*(u),\lambda_*(v)]$, where the bracket on the right is the Jacobi bracket of vector fields.
We sometimes write $v \cdot y$ for the element $\lambda_*(v)(y) \in T_yM$. The \emph{Lie--Palais theorem} \cite{palais1957agf} ensures that if the Lie group $G$ is simply connected, then every action by $\g$ comes from an action by $G$.\footnote{If the Lie group is not simply connected, then we can only lift the $\g$-action to the universal covering group of $G$.} If $F \in \XM$ is a vector field, then an element $v$ so that $\lambda_*(v) = F$ is called an \emph{infinitesimal generator} for $F$. \begin{remark} In some cases it makes sense to use other maps $\phi: \g \rightarrow G$ (satisfying $\phi(0)=e$ and $\left.\frac{d}{dt}\right|_{t=0}\phi(tv)=v$ for all $v \in \g$) besides the exponential map to construct maps $\g \rightarrow \XM$ as in Equation (\ref{eq:infinitesimal}). An overview of various maps of this kind, and their usefulness, can be found in \cite{engo2000otc}. \end{remark} \paragraph{Differential equations in homogeneous manifolds.} Consider the differential equation on a homogeneous manifold $(M,G,\lambda)$: \begin{equation}\label{ivp2} y' = F(y), \qquad y_0 \in M, \quad F: M \rightarrow TM. \end{equation} The solution is the flow $\Psi_{t, F} = \exp(tF)$ of the vector field $F$. The vector field can be written in terms of its infinitesimal generator as $F = \lambda_*(v): M \rightarrow TM$ for an element $v \in \g$, and the transitivity of the action also allows us to construct a map $f:M \rightarrow \g$ so that \begin{equation} F(y) = \lambda_*(f(y))(y) = f(y)\cdot y. \end{equation} Note that as long as the action is not free, this $f$ is not unique: if $f: M \rightarrow \g$ is such a map, then $f + i: M \rightarrow \g$, where $i(p)$ is in the isotropy subalgebra $\g_p$ of $\g$, is another map of the same type. This freedom in the choice of isotropy class can be helpful when constructing numerical integrators \cite{lewis2002gia}. The differential equation (\ref{ivp2}) can be written as: \begin{equation}\label{eq:diffeqLB} y' = f(y) \cdot y, \quad \text{where} \quad f:M \rightarrow \g, \end{equation} and this is the type of differential equation we will consider in this chapter. Note that in the classical case of $(\RR^n, (\RR^n, +))$, the exponential is $\exp(v) = v$, and Equation (\ref{eq:diffeqLB}) reduces to the ordinary differential equation (\ref{ivp2}). We also note that this class contains the equations formulated in terms of \emph{frames}. \begin{remark}[Frames and differential equations] In the literature for numerical integration of differential equations on manifolds the equations are often simplified by using a frame on the manifold \cite{owren1999rkm, owren2006ocf, celledoni2003cfl}. A \emph{frame} is a set of vector fields $\{E_i\}$ that at each point on the manifold span the tangent space at that point, so that any vector field $F$ can be written as $F=\sum_i f_i E_i$. The flow equation (\ref{ivp2}) for $F$ can then be written as \begin{equation}\label{eq:frameDE} y' = \sum_i f_i(y) E_i(y), \quad \text{where} \quad f_i: M \rightarrow \RR \quad \text{are smooth}. \end{equation} If we write $\g \subset \XM$ for the Lie subalgebra generated by the frame vector fields $\{E_i\}$, and let $\lambda_*: \g \rightarrow \XM$ be as in (\ref{eq:infinitesimal}), we see that Equation (\ref{eq:frameDE}) is a special case of Equation (\ref{eq:diffeqLB}), with $f:M \rightarrow \g$ defined by $f(y) = \sum_i f_i(y) E_i$. \end{remark} \begin{remark} In \cite{engo2000otc}, K. Eng\o{} discusses the general operation of `moving' differential equations between manifolds using equivariance of actions and relatedness of vector fields.
In particular, every differential equation of the form (\ref{eq:diffeqLB}) is shown to be equivalent to a differential equation on $\g$. The following diagram from \cite{engo2000otc} summarizes this: \begin{diagram}[labelstyle=\scriptstyle] T\g &\rTo^{T(\exp)}& TG &\rTo^{T(\lambda_{\cdot}(p))}& TM \\ \uTo && \uTo && \uTo_{\lambda_*(v)(p)} \\ \g &\rTo_{\exp}& G &\rTo_{\lambda_{\cdot}(p)}& M \end{diagram} In other words, the differential equation on a homogeneous manifold $(M,G)$ is moved to the Lie group $G$ (the middle vertical arrow) and then to the Lie algebra $\g$ (the first vertical arrow). As before, the exponential map $\exp:\g \rightarrow G$ can in many cases be replaced by other maps. The construction of the vertical arrows can be found in \cite{engo2000otc}. This is the result exploited in the so-called RKMK methods \cite{munthe-kaas1995lbt, munthe-kaas1998rkm, munthe-kaas1999hor}. \end{remark} \subsection{Lie group integrators in applications} \begin{definition} Given a smooth manifold $M$ and a Lie group $G$ with Lie algebra $\mathfrak{g}$ acting on $M$, consider a differential equation for $y(t)\in M$ written in terms of the infinitesimal action as \begin{equation}\label{eq:lgeqn} \dot{y}(t) = f(t,y)\cdot y, \quad y(0) = y_0, \end{equation} for a given function $f:\mathbb{R}\times M\rightarrow \mathfrak{g}$. Note that we now make the time dependence explicit. A \emph{Lie group integrator} is a numerical time-stepping procedure for (\ref{eq:lgeqn}) based on intrinsic Lie group operations, such as exponentials, commutators and the group action on $M$. \end{definition} Applications of Lie group integrators generally involve the following steps: \begin{enumerate} \item Choose a Lie group and Lie group action that can be computed fast and which captures some essential features of the problem to be solved. This is similar to the task of finding a preconditioner in the iterative solution of linear algebraic equations. \item Identify the Lie algebra, commutator and exponential map of the Lie group action. \item Write the differential equation in terms of the infinitesimal Lie algebra action, as in Equation~(\ref{eq:lgeqn}). \item Choose a Lie group integrator, plug in all the building blocks, and solve the problem. \end{enumerate} \paragraph{Examples of Lie group integrators.} We list some important Lie group integrators:\\ \noindent{\bf Lie Euler: } \[\quad y_{n+1} = \exp(h f(t_n,y_n))\cdot y_n\] \noindent{\bf Lie midpoint:} \begin{align*}K &= hf\left(t_n+ h/2,\exp\left(K/2\right)\cdot y_n\right)\\ y_{n+1} & = \exp(K)\cdot y_n\end{align*} \noindent{\bf Lie RK4:} There are several similar ways of turning the classical RK4 method into a fourth order Lie group integrator \cite{munthe-kaas1998rkm,munthe-kaas1999cia}. The following version requires only two commutators: \begin{align*} K_1 & = hf(t_n,y_n)\\ K_2 & = hf(t_n+h/2,\exp(K_1/2)\cdot y_n)\\ K_3 & = hf\left(t_n+h/2, \exp(K_2/2-[K_1,K_2]/8)\cdot y_n\right)\\ K_4 & = hf(t_n+h,\exp(K_3)\cdot y_n)\\ y_{n+1} & = \exp\left(K_1/6+K_2/3+K_3/3+K_4/6 -[K_1,K_2]/3-[K_1,K_4]/12\right)\cdot y_n \end{align*} \noindent{\bf RKMK methods:} There is a general procedure to turn any classical Runge--Kutta method into a Lie group integrator of the same order \cite{munthe-kaas1999hor,iserles2000lgm}. Let $\{a_{i,j},b_j,c_i\}_{i,j=1}^s$ be the coefficients of a Runge--Kutta method of order $p$. The following method is a Lie group method of order $p$.
\begin{proglist} for $i = 1,s$\+\\ $U_i = \sum_{j=1}^s a_{i,j}{{{K}_j}}$\\ $K_i = d\exp^{-1}_{U_i}\left(h f(\exp(U_i)\dpr y_n)\right)$ \-\\ end\\ $y_{n+1} = {{\exp\left(\sum_{j=1}^s b_{j}{{{K}_j}}\right)\dpr y_n}}$, \end{proglist} where \[d\exp^{-1}_{U}(K)=\sum_{n=0}^\infty\frac{B_n}{n!} \ad^n(U)(K) = { K - \frac12[U,K] + \frac{1}{12}[U,[U,K]] + \cdots}\] is the inverse of the right trivialized tangent map of the exponential, see~\cite{iserles2000lgm}. \ \\ \noindent{\bf Crouch--Grossman and commutator free methods:} Commutators pose a problem in the application of Lie group integrators to stiff problems, since the commutator often increases the stiffness of the equations dramatically. Crouch--Grossman~\cite{crouch1993nio,owren1999rkm}, and more generally commutator-free methods~\cite{celledoni2003cfl}, avoid them by doing basic time-stepping using a composition of exponentials. An example of such a method is CF4~\cite{celledoni2003cfl}: \begin{align*} K_1 & = hf(t_n,y_n)\\ K_2 & = hf(t_n+h/2,\exp(K_1/2)\cdot y_n)\\ K_3 & = hf\left(t_n+h/2, \exp(K_2/2)\cdot y_n\right)\\ K_4 & = hf(t_n+h,\exp(K_1/2)\cdot\exp(K_3-K_1/2)\cdot y_n)\\ y_{n+1} & = \exp\left(K_1/4+K_2/6+K_3/6 -K_4/12\right)\cdot\\ & \hspace{0.5cm} \exp\left(K_2/6+K_3/6+K_4/4 -K_1/12\right)\cdot y_n \end{align*} \noindent{\bf Magnus methods:} In the case where $f(t,y)=f(t)$ is a function of time alone, (\ref{eq:lgeqn}) is called an equation of \emph{Lie type}. Specialized numerical methods have been developed for such problems~\cite{iserles1999sld, iserles1999imm, blanes2009tme}. Explicit Magnus methods can achieve order $2p$ using only $p$ function evaluations, and they are also easily designed to be time symmetric. \paragraph{Examples of group actions.} Some group actions of interest when applying Lie group integrators:\\ \noindent{\bf Rotational problems:} Consider a differential equation $\dot{y}(t) = v(t)\times y(t)$, where $y,v\in \mathbb{R}^3$ and $||y(0)||=1$. Since $||y(t)||=1$ for all $t$, we can take $M$ to be the surface of the unit sphere. Let $G=SO(3)$ be the special orthogonal group, consisting of all orthogonal matrices with determinant 1. Let $\gamma(t)\in G$ be a curve such that $\gamma(0)=e$. By differentiating $\gamma(t)^T\gamma(t)=e$, we find that $\dot{\gamma}(0)^T+\dot{\gamma}(0)=0$, thus $\mathfrak{g}=\mathfrak{so}(3)$, the set of all skew-symmetric $3\times 3$ matrices. The infinitesimal Lie algebra action is left multiplication with a skew matrix, the commutator is the matrix commutator, and the exponential map is the matrix exponential. Written in terms of the infinitesimal Lie algebra action, the differential equation becomes $\dot{y}(t) = \widehat{v(t)}\, y$, where $\widehat{v}$ denotes the skew matrix satisfying $\widehat{v}\,y = v\times y$, and we may apply any Lie group integrator. Note that for low-dimensional rotational problems, all basic operations can be computed fast using Rodrigues type formulas~\cite{iserles2000lgm}.\\ \noindent{\bf Isospectral action:} Isospectral differential equations are matrix-valued equations where the eigenvalues are first integrals (invariants of motion). Consider $M= \mathbb{R}^{n\times n}$ and the action of $G=SO(n)$ on $M$ by similarity transforms, i.e.\ for $a\in G$ and $y\in M$ we define $a\cdot y = aya^T$. By differentiation of the action we find the infinitesimal action for $V\in\mathfrak{g}=\mathfrak{so}(n)$ as $V\cdot y = Vy - yV$, thus for this action~(\ref{eq:lgeqn}) becomes \[\dot{y}(t) = f(t,y)\cdot y = f(t,y)y-yf(t,y),\] where $f\colon \mathbb{R}\times M\rightarrow \mathfrak{g}$.
See~\cite{calvo1997nso,iserles2000lgm} for more details.\\ \noindent{\bf Affine action:} Let $G=Gl(n)\rtimes \mathbb{R}^n$ be the \emph{affine linear group}, consisting of all pairs $(a,b)$ where $a\in \mathbb{R}^{n\times n}$ is an invertible matrix and $b\in \mathbb{R}^n$ is a vector. The \emph{affine action} of $G$ on $M=\mathbb{R}^n$ is $(a,b)\cdot y = ay+b$. The Lie algebra of $G$ is $\mathfrak{g} = \mathfrak{gl}(n)\rtimes \mathbb{R}^n$, i.e.\ $\mathfrak{g}$ consists of all pairs $(V,b)$ where $V\in \mathbb{R}^{n\times n}$ and $b\in \mathbb{R}^n$. The infinitesimal action is given as $(V,b)\cdot y = Vy + b$. This action is useful for differential equations of the form $\dot{y}(t) = L(t)y + N(y)$, where $L(t)$ is a stiff linear part and $N$ is a non-stiff non-linear part. Such equations are cast in the form~(\ref{eq:lgeqn}) by choosing $f(t,y) = (L(t),N(y))$. Applications of Lie group integrators to such problems are closely related to exponential integrators. In this case it is important to use a commutator-free Lie group method. \\ \noindent{\bf Coadjoint action:} Many problems of computational mechanics are naturally formulated as \emph{Lie--Poisson systems}, evolving on coadjoint orbits of the dual of a Lie algebra~\cite{marsden1999itm}. Lie group integrators based on the coadjoint action of a Lie group on the dual of its Lie algebra are discussed in \cite{engo2002nio}. \\ \noindent{\bf Classical integrators as Lie group integrators:} The simplest of all group actions in our setting arises when $G=M=\mathbb{R}^n$. We can then use vector addition as group operation and group action. From the definitions we find that in this case $\mathfrak{g} = \mathbb{R}^n$, the commutator is zero, and the exponential map is the identity map from $\mathbb{R}^n$ to itself. The infinitesimal Lie algebra action becomes $V\cdot y = V$, thus~(\ref{eq:lgeqn}) reduces to $\dot{y}(t) = f(t,y)$, where $f(t,y)\in \mathbb{R}^n$. We see that classical integration methods are special cases of Lie group integrators, and all the examples of methods above reduce to well-known Runge--Kutta methods. \subsection{Geometry meets algebra}\label{sect:geometry} We will discuss how the B-series of Section~\ref{sect:classgeomint} can be generalized to \emph{Lie--Butcher series} for analyzing exact and approximate flows on manifolds. The basic building blocks of the numerical methods discussed above are commutators in the Lie algebra of frame vector fields $\g$, flows of frozen vector fields, evaluation of frozen vector fields, and parallel transport of tangent vectors. The Lie algebra $\g$ defines an \emph{absolute parallelism} on the Lie group, which yields a parallel transport of tangent vectors \cite{abraham1988mta}. For a vector field $F: \M\rightarrow T\M$ represented by a function $f: \M\rightarrow \g$, parallel transport between $T_{gy}\M$ and $T_y\M$ is defined as $\tau_g f(y)= f(gy)$. This transport is independent of the path between $gy$ and $y$, and hence it is a parallel transport induced by a \emph{flat connection}. Furthermore, we will see that this connection has \emph{constant torsion}. This is the geometric setting of Lie--Butcher series: a manifold with a connection which is flat with constant torsion, giving rise to a \emph{post-Lie algebra}~\cite{vallette2007hog,munthe-kaas2012opl} of vector fields on $\M$. The enveloping algebra of a post-Lie algebra is called a \emph{D-algebra} \cite{munthe-kaas2008oth,lundervold2009hao,lundervoldthesis}.
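\paragraph{A computational aside.} Before developing the algebra, it may help to see two of these building blocks, evaluation of a frozen vector field and its exact flow, in code for the rotational example of the previous subsection: freezing $f(t_n,y_n)\in\mathfrak{so}(3)$ and following its exact flow (a rotation, computed by a Rodrigues-type formula) for one time step is precisely the Lie--Euler method. The following minimal Python sketch assumes NumPy; all function names are ours, not taken from the cited references. \begin{verbatim}
# A sketch: Lie-Euler for y' = v(t,y) x y on the unit sphere,
# using the SO(3) action; all function names are ours.
import numpy as np

def hat(w):
    # Skew matrix with hat(w) @ y = w x y (the map v -> v-hat).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    # exp(hat(w)) via Rodrigues' formula.
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + hat(w)
    W = hat(w)
    return (np.eye(3) + (np.sin(th) / th) * W
            + ((1.0 - np.cos(th)) / th**2) * (W @ W))

def lie_euler(v, y0, h, nsteps):
    # y_{n+1} = exp(h f(t_n, y_n)) . y_n; here the group action
    # of SO(3) on the sphere is matrix-vector multiplication.
    y, t = np.array(y0, dtype=float), 0.0
    for _ in range(nsteps):
        y = expm_so3(h * v(t, y)) @ y
        t += h
    return y

# Constant angular velocity about the z-axis.
v = lambda t, y: np.array([0.0, 0.0, 1.0])
y = lie_euler(v, [1.0, 0.0, 0.0], 0.1, 100)
print(y, np.linalg.norm(y))  # the norm stays 1 to round-off
\end{verbatim} Every update is a rotation, so $\|y_n\| = 1$ is preserved to round-off regardless of the step size; this illustrates the advantage of intrinsic methods over embedding-based ones.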
Let $(G,\M)$ be a homogeneous manifold and let $\g$ denote the Lie algebra of $G$, which can be identified with right invariant vector fields (i.e.\ invariant derivations on $C^\infty(\M)$) via~(\ref{eq:infinitesimal}). We let $U(\g)$ denote the universal enveloping algebra of $\g$, which similarly can be identified with higher order right invariant derivations on $C^\infty(\M)$. Let $\one$ denote the identity operator on $C^\infty(\M)$. We define \begin{align} \begin{split} \UgM & := C^\infty(M)\tpr_\RR U(\g)\\ \gM & := C^\infty(M)\tpr_\RR \g \subset \UgM. \end{split} \end{align} Let $\left\{\partial_i\right\}_i$ be a basis for $\g$. Then $f\in \gM$ can be written as $f = \sum_i f^i\tpr \partial_i$. This represents a function $f\colon \M\rightarrow\g$ as $f(x) = \sum_i f^i(x)\partial_i\in \g$, and $f$ also acts as a derivation $f[g]:=\sum_i f^i\partial_i g$ for $g\in C^\infty(\M)$. Similarly, higher order derivations on $C^\infty(\M)$ can be represented by \begin{equation} h =\sum_I h^I \tpr\partial_I = \sum_{i,j,k\ldots}h^{ijk\ldots}\tpr \partial_i\partial_j\partial_k\ldots\in \UgM, \end{equation} where $I=(i,j,k,\ldots)$ is a multi-index. The space $\UgM$ is equipped with two operations: \emph{frozen composition} $g,h\mapsto gh$ and \emph{covariant derivation} $g,h\mapsto g[h]$ defined for $g,h\in \UgM$, $g=\sum_I g^I\tpr\partial_I$, $h=\sum_J h^J\tpr\partial_J$ as \begin{align} gh & = \sum_{I,J}g^Ih^J\tpr \partial_I\partial_J\\ g[h] & = \sum_{I,J} g^I\partial_Ih^J\tpr \partial_J. \end{align} Note that these operations are independent of the choice of basis for $\g$. The covariant derivation $g[h]$ can be understood as a (higher order) derivative of $h$ as it moves under the parallel transport defined by the absolute parallelism. We will frequently use the alternative notation \begin{equation} g\tr h := g[h] \end{equation} to emphasize the similarity between this operation and the product $\tr$ in a pre-Lie algebra. Let $\D(\UgM)\subset\UgM$ denote the (first order) \emph{derivations}, defined as \[\D(\UgM) = \stset{f\in\UgM}{f[gh] = (f[g])h + g(f[h])\quad\mbox{for all $g,h \in \UgM$}}.\] It can be shown that \begin{equation}\D(\UgM)=\gM . \end{equation} $\UgM$ with the operations of frozen composition and covariant derivation satisfies the following fundamental relationships: For any derivation $f \in \D(\UgM)$ and any $g,h\in \UgM$ we have \begin{align} \begin{split} g[f] &\in \mathcal{D}(\UgM)\\ f[g[h]] &= (fg)[h] + (f[g])[h]. \end{split} \end{align} Such an algebraic structure is called a \emph{D-algebra} in \cite{munthe-kaas2008oth}. We shall see that the \emph{free D-algebra} is the algebra of forests of ordered trees, where $f,g\mapsto fg$ is the concatenation of forests and $f,g\mapsto f[g]$ is left grafting. \subsection{Ordered trees and D-algebras.} The set \[\OT = \{\ab,\aabb,\aababb,\aaabbb, \aabababb, \aabaabbb, \aaabbabb, \aaababbb,\ldots \}.\] of ordered rooted trees consists of all \emph{planar} rooted trees. In other words, an ordered rooted tree is a tree $\tau$ together with a chosen order of the branches connected to each vertex of $\tau$. Unlike the set $\T \subset \OT$ of rooted trees, we do not identify trees that differ in the order of their branches. The set of ordered words of elements from $\OT$ is denoted by $\OF$, and also includes the empty word; $\OF$ is called the set of \emph{ordered forests}. Let $\N = \RR\langle \OT\rangle$ be the non-commutative polynomials over $\OT$.
The linear dual $\N^* := \Hom(\N, \RR)$ is identified with infinite linear combinations of forests, and we write $\langle \cdot, \cdot \rangle$ for the pairing making forests orthonormal. That is, $\langle \omega_1, \omega_2\rangle = \delta_{\omega_1, \omega_2}$, for all $\omega_1, \omega_2 \in \OF$. It is sometimes convenient to allow the trees to be \emph{decorated} by a set $\C$, often called the set of colors. This is done via a map from the vertices of the tree to the set $\C$. We write $\OT_{\C}$ and $\OF_{\C}$ for the set of trees and forests colored by $\C$. A basic operation on $\N$ is the \emph{left grafting product} $\car: \N \otimes \N \rightarrow \N$ of \cite{munthe-kaas2008oth}. It is defined recursively by \begin{align} \begin{split} &\one\car\omega = \omega \\ &\omega\car\one = 0 \quad (\omega \neq \one) \\ &\omega\car\ab = \Bplus(\omega),\\ &\tau\car\omega_1 \omega_2 = (\tau\car\omega_1)\omega_2 + \omega_1(\tau\car\omega_2) \\ &(\tau\omega)\car\omega_1 = \tau\car(\omega\car\omega_1) - (\tau\car\omega)\car\omega_1, \end{split} \end{align} where $\tau$ is a tree and $\omega_1$, $\omega_2$ are forests. The grafting product can be used to define the \emph{Grossman--Larson product} (GL product) $\bpr: \N \otimes \N \rightarrow \N$: \begin{align} \Bplus(\omega_1 \bpr \omega_2) = \omega_1\car \Bplus(\omega_2), \end{align} extended by linearity. Concatenation and left grafting give $\N$ the structure of a \emph{D-algebra}, as defined in \cite{munthe-kaas2008oth} (see also \cite{lundervold2009hao, lundervold2010bea, munthe-kaas2012opl}), where the composition $\tr$ is left grafting. \begin{definition}\label{Dalg} Let $A$ be a unital associative algebra with product $f,g \mapsto fg$ and unit $\one$, equipped with a non-associative composition $\tr: A \otimes A \rightarrow A$ such that $\one\tr g = g$ for all $g\in A$. Write $\mathcal{D}(A)$ for the set of derivations: $$\mathcal{D}(A) = \{f\in A \,\,|\,\, f\tr (gh) = (f\tr g)h + g(f\tr h) \hspace{0.2cm} \text{for all } g,h \in A\}.$$ Then $A$ is called a \emph{D-algebra} if for any derivation $f \in \mathcal{D}(A)$ and any $g,h\in A$ we have \begin{align*} &\text{(i)}\ g\tr f \in \mathcal{D}(A)\\ &\text{(ii)}\ f\tr(g\tr h) = (fg)\tr h + (f\tr g)\tr h. \end{align*} \end{definition} In \cite{munthe-kaas2008oth} it was shown that the D-algebra $\N$ is the \emph{free} D-algebra: \begin{theorem}[\cite{munthe-kaas2008oth}]\label{universalD-alg} The vector space $\N = \RR\langle\OT_{\C}\rangle$ is the free D-algebra over the set $\C$. That is, for any D-algebra $\A$ and any map $\nu\colon\C\rightarrow \mathcal{D}(\A)$ there exists a unique D-algebra homomorphism $\F_\nu\colon \N\rightarrow \A$ such that $\F_\nu(c) = \nu(c)$ for all $c\in \C$. \begin{diagram}[labelstyle=\scriptstyle] \C &\rInto& \N \\ \dTo^{\nu} && \dTo_{\exists\,! \,\, \F_{\nu}} \\ \mathcal{D}(\A) &\rInto& \A \end{diagram} \end{theorem} A D-algebra homomorphism between two D-algebras $A$ and $B$ is an algebra morphism $F:A \rightarrow B$ such that $F(\mathcal{D}(A)) \subset \mathcal{D}(B)$, and $F(a[b]) = F(a)[F(b)]$. By applying this theorem to the D-algebra $\UgM$ of differential operators on a homogeneous manifold (defined in Section \ref{sect:geometry}), we will construct elementary differentials and the Lie--Butcher series (Definitions \ref{LBelementdiff} and \ref{LBseries}). \paragraph{Post-Lie algebras.} In Section \ref{sect:geometry} we noted that the derivations in the D-algebra $\UgM$ of differential operators could be identified with $\gM$.
In general, the derivations in a D-algebra form what is called a \emph{post-Lie algebra}, and the D-algebra can be identified with the universal enveloping algebra of its post-Lie algebra of derivations. This point of view is developed in \cite{munthe-kaas2012opl}, and is also being studied further in an ongoing project \cite{ebrahimi-fard2011otp} where the \emph{operad} behind post-Lie and D-algebras (also called \emph{post-associative algebras}) is explored. Post-Lie algebras were introduced independently by Vallette in \cite{vallette2007hog}, in a different context. \begin{definition} A \emph{post-Lie algebra} is a Lie algebra $(A, [\cdot,\cdot])$ equipped with a non-commutative, non-associative product $\tr: A \otimes A \rightarrow A$ satisfying: \begin{align} x \tr [y,z] &= [x\tr y, z] + [y, x \tr z] \hspace{1cm} \text{(derivation property)}\\ [x,y] \tr z &= a_{\tr}(x,y,z) - a_{\tr}(y,x,z),\label{pL:assoc} \end{align} where $a_{\tr}(x,y,z)$ is the associator $a_{\tr}(x,y,z) = x \tr (y \tr z) - (x\tr y) \tr z$. \end{definition} \noindent Notice that relation (\ref{pL:assoc}) implies that a pre-Lie algebra (Section \ref{sect:prelie}) is a post-Lie algebra with vanishing bracket. In \cite{munthe-kaas2012opl} it is shown that the free Lie algebra over rooted trees colored by a set $\C$, equipped with a post-Lie operation derived from grafting of trees, is the \emph{free post-Lie algebra}, and that its universal enveloping algebra is the free D-algebra defined above. We will not pursue this point of view in our present study of Lie--Butcher series. \subsection{Lie--Butcher series} Analogously to the B-series of Section \ref{sect:classgeomint}, the Lie--Butcher series can be used to represent flows -- numerical or exact -- on homogeneous manifolds. To achieve this one combines the concept of \emph{Lie series} in free Lie algebras with ideas from the theory of B-series. An exposition of free Lie algebras and Lie series can be found in the book \cite{reutenauer93fla} by Reutenauer. The \emph{free Lie algebra} $\FLA(A)$ over a set $A$ of generators is the closure of the generators under commutators and linear combinations. In particular, we have the free Lie algebra $\FLA(\OT)$ over the set of ordered rooted trees. A \emph{Lie series} is a series expansion: \begin{equation} S = \sum_{n \geq 0} S_n, \end{equation} where each homogeneous component is an element of $\FLA(\OT)$, i.e. the $S_n$'s are \emph{Lie polynomials}. A Lie series of particular interest to us appears when computing the pullback of functions along flows of vector fields on homogeneous manifolds. Let $F \in \XM$ be a vector field with flow $\Phi_{t,F}$, and $\psi:M \rightarrow \g$ a function. Then \begin{equation} \left.\frac{d}{dt}\right|_{t=0}\Phi_{t,F}^*\psi = F[\psi]. \end{equation} The Taylor expansion of $\Phi_{t,F}^*\psi$ around $t=0$ therefore takes the form of a Lie series \begin{align}\label{LieSeries} \begin{split} \Phi_{t,F}^*\psi &= \sum_{n=0}^{\infty} \frac{t^n}{n!}\left(\left.\frac{\partial^n}{\partial t^n}\right|_{t=0} \Phi_{t,F}^*\psi \right)\\ &= \psi + tF[\psi] + \frac{t^2}{2!} F[F[\psi]] + \frac{t^3}{3!}F[F[F[\psi]]] + \cdots. \end{split} \end{align} \paragraph{Bell polynomials.} The higher order derivatives of the pullbacks can be written in terms of non-commutative analogs of the classical Bell polynomials of \cite{bell1927pp}. These polynomials have also appeared in \cite{munthe-kaas1995lbt, schimming1996nbp, munthe-kaas1998rkm, lundervold2009hao}.
\begin{definition} Let $D = \RR\langle \I \rangle$ be the free associative algebra over an alphabet $\I = \{d_i \}$, and let $\partial: D \rightarrow D$ denote the derivation given by $\partial(d_i) = d_{i+1}$. The \emph{non-commutative Bell polynomials} $B_n = B_n(d_1,\dots, d_n) \in \RR\langle \I\rangle$ are defined by the recursion \begin{align} \begin{split} B_0 &= \one \\ B_n &= (d_1 + \partial)B_{n-1}, \quad n > 0. \end{split} \end{align} \end{definition} The first few are: \begin{align*} B_0 & = \one\\ B_1 & = d_1\\ B_2 & = d_1^2 + d_2 \\ B_3 & = d_1^3 + 2d_1d_2 + d_2d_1+d_3\\ B_4 & = d_1^4+ 3d_1^2 d_2 +2d_1d_2d_1 + d_2d_1^2 +3d_1d_3+ d_3d_1 + 3d_2^2 + d_4. \end{align*} \begin{theorem}[{\cite{munthe-kaas1995lbt, lundervold2009hao}}] The derivatives of the pullback of a function $\psi$ along the flow $\Phi_{t,F}$ of a time-dependent vector field $F = F_t$ can be written as: \begin{equation}\label{eq:Bnderiv1} \frac{d^n}{d t^n}\Phi_{t,F}^*\psi = B_n(F)[\psi], \end{equation} where $B_n(F)$ is the image of the Bell polynomial $B_n$ under the homomorphism given by $d_i\mapsto F^{(i-1)}$ (the $(i-1)$th time derivative of $F$). In particular, \begin{equation}\label{eq:Bnderiv} \left.\frac{d^n}{d t^n}\right|_{t=0}\Phi_{t,F}^*\psi = B_n(F_1,\ldots,F_n)[\psi] =: B_n(F_i)[\psi], \end{equation} where $F_{n+1} = \left. d^n/dt^n\right|_{t=0} F$. \end{theorem} This result allows us to obtain a Lie series corresponding to (\ref{LieSeries}) for the case when $F$ is non-autonomous \cite{munthe-kaas1995lbt}: \begin{equation} \Phi^*_{t,F} \psi = \sum_{n=0}^{\infty} B_n(F_i)[\psi] \frac{t^n}{n!}. \end{equation} \begin{remark} It is well known that the classical Bell polynomials \cite{bell1927pp} can be defined in terms of determinants. As an interesting side note, the non-commutative Bell polynomials can be defined in the same way, only now in terms of a non-commutative analog of the determinant: the \emph{quasideterminants} of Gelfand and Retakh (\cite{gelfand1991dom}, see also \cite{gelfand2005q}). For example, we have \begin{align*} \det\begin{bmatrix} x_1 & -1 & 0 \\ \\ {3-1 \choose 1} x_2 & x_1 & -1 \\ \\ \text{\circled{${3-1 \choose 2}x_3$}} & {3-2 \choose 1} x_2 & x_1 \end{bmatrix} &= \det\begin{bmatrix} x_1 & -1 & 0 \\ \\ 2 x_2 & x_1 & -1 \\ \\ \text{\circled{$x_3$}} & x_2 & x_1 \end{bmatrix}\\ &\\ &= x_1^3+2x_1 x_2 + x_2 x_1 + x_3\\ &= B_3, \end{align*} where $\det$ denotes the quasideterminant, computed at the circled element. See \cite{gelfand2005q} for details about the computation and properties of quasideterminants. \end{remark} The non-commutative \emph{partial Bell polynomials} $B_{n,k}:= B_{n,k}(d_1,\ldots,d_{n-k+1})$ are defined as the part of $B_n$ consisting of words $\omega$ of length $k>0$, e.g. $B_{4,3} = 3d_1^2 d_2 +2d_1d_2d_1 + d_2d_1^2$. Thus \begin{equation} B_n = \sum_{k=1}^n B_{n,k} . \end{equation} \paragraph{A Fa\`a di Bruno bialgebra.} The non-commutative \emph{Dynkin--Fa\`a di Bruno bialgebra} $\Hfdb$ is obtained by equipping the free associative algebra $D = \RR\langle \I \rangle$ with the coproduct $\cpfdb$ defined as \begin{align}\label{eq:fdb2} \begin{split} \cpfdb(\one) & = \one\tpr \one\\ \cpfdb(d_n) & = \sum_{k=1}^n B_{n,k}\tpr d_k . \end{split} \end{align} This extends to all of $\Hfdb$ by the product rule $\cpfdb(d_i d_j) = \cpfdb(d_i)\cpfdb(d_j)$. For example \begin{align*} \cpfdb(d_1) & = d_1\tpr d_1\\ \cpfdb(d_2) & = d_1^2\tpr d_2 + d_2\tpr d_1\\ \cpfdb(d_1d_2) & = d_1^3\tpr d_1d_2 + d_1d_2\tpr d_1^2.
\end{align*} Note that the coproduct is not graded with respect to the grading $|d_n| = n$. \begin{lemma}[{\cite{lundervold2009hao}}]~\label{coprod of Bell} The coproduct of the partial Bell polynomials is: \begin{equation}\label{eq:fdb3} \cpfdb(B_{n,k}) = \sum_{\ell=1}^n B_{n,\ell}\tpr B_{\ell,k} . \end{equation} \end{lemma} \noindent Note that $B_{n,1}=d_n$, so (\ref{eq:fdb2}) is a special case of (\ref{eq:fdb3}). Summing the partial $B_{n,k}$ over $k$, we find the coproduct of the full Bell polynomials: \begin{equation} \cpfdb(B_{n}) = \sum_{k=1}^n B_{n,k}\tpr B_{k}. \end{equation} \noindent Using Lemma \ref{coprod of Bell} and the fact that $B_{n,k} = 0$ for $k > n$, one can show that $\Hfdb$ is a bialgebra. \begin{proposition}[{\cite{lundervold2009hao}}] $\RR\langle \I \rangle$ with the non-commutative concatenation product and the coproduct $\cpfdb$ forms a bialgebra $\Hfdb$, which is neither commutative nor cocommutative. \end{proposition} \paragraph{Lie--Butcher series.} The Lie series (\ref{LieSeries}) can also be written as the \emph{Lie--Butcher series} for the exact flow. In general, the Lie--Butcher series $\Bs_f(\alpha)$ are constructed to represent flows $y_0 \mapsto y_t = \Psi_{t}(y_0)$ via \begin{equation} \Psi(y_t) = \Bs_f(\alpha)[\Psi](y_0). \end{equation} Before giving the definition of Lie--Butcher series we define the elementary differentials of a vector field $f$: \begin{definition}\label{LBelementdiff} Let $\F_f:\N \rightarrow U(\g)^M$ be the unique D-algebra morphism given by Theorem \ref{universalD-alg} by associating \,$\ab$\, to a vector field $f:M \rightarrow \g$. The images $\F_f(\omega)$ are called the \emph{elementary differentials} of the vector field $f$. \end{definition} \noindent Note that $\F_f:\N \rightarrow U(\g)^M$ is given recursively by \begin{itemize} \item[(i)] $\F_{f}(\one) = \one$ \item[(ii)] $\F_{f}(\Bplus(\omega)) = \F_{f}(\omega)[f]$ \item[(iii)] $\F_{f}(\omega_1\omega_2) = \F_{f}(\omega_1)\F_{f}(\omega_2)$ \end{itemize} \noindent The general Lie--Butcher series are expansions of elementary differentials indexed over ordered rooted forests. \begin{definition}\label{LBseries} A \emph{Lie--Butcher series} (LB-series) is a formal series expansion over $U(\g)^M$: \begin{equation} \Bs_f(\alpha) = \sum_{\omega \in \OF} h^{|\omega|}\alpha(\omega)\F_f(\omega), \end{equation} where $\alpha: \N \rightarrow \RR$. \end{definition} \noindent It turns out \cite{lundervold2009hao} that the Lie series (\ref{LieSeries}) can be written as \begin{equation} \Phi^*_{t,f} \psi = \sum_{\omega\in\OT} \gamma(\omega) \F_{f}(\omega)[\psi], \end{equation} where $\gamma$ are the coefficients appearing when iteratively (left) grafting $\ab$ onto $\ab$. This is the Lie--Butcher series for the exact flow. To understand how Lie--Butcher series can be used to represent numerical flows we conduct a closer study of the coefficients $\alpha: \N \rightarrow \RR$, understanding them as characters in a certain Hopf algebra. This Hopf algebra allows us to formulate the concept of \emph{composition} of LB-series. \subsection{Composition of Lie--Butcher series}\label{sect:compLB} We would like to understand the result of composing LB-series methods in a similar way as we did for B-series methods in Section \ref{GI:composition}. The basic problem is to determine whether the method $\Phi$ resulting from composing two methods $\Phi^2 \circ \Phi^1$, both given by LB-series, is another LB-series, and in that case, what its coefficients are.
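\paragraph{A computational aside.} The operations entering this question are concretely computable. The following minimal Python sketch (our own encoding, not taken from the cited references: a planar rooted tree is the tuple of its subtrees in order, the single vertex is the empty tuple, and a forest is a tuple of trees) implements the left grafting product $\car$ of the previous subsection by attaching the grafted tree as the leftmost branch of each vertex in turn, reproducing for instance $\ab \car \aabb = \aababb + \aaabbb$. All names are ours. \begin{verbatim}
# A sketch (our own encoding): a planar rooted tree is the tuple of
# its subtrees, in order; the single vertex is (); a forest is a
# tuple of trees. Left grafting attaches the grafted tree as the
# leftmost branch of each vertex in turn.

def graft_tree(tau, sigma):
    # tau grafted onto the tree sigma: one term per vertex of sigma.
    out = [(tau,) + sigma]                      # attach at the root
    for i, child in enumerate(sigma):           # or inside child i
        out += [sigma[:i] + (t,) + sigma[i+1:]
                for t in graft_tree(tau, child)]
    return out

def graft_forest(tau, omega):
    # tau grafted onto a forest, by the Leibniz rule.
    out = []
    for i, sigma in enumerate(omega):
        out += [omega[:i] + (t,) + omega[i+1:]
                for t in graft_tree(tau, sigma)]
    return out

LEAF, CHAIN2 = (), ((),)
print(graft_tree(LEAF, CHAIN2))
# [((), ()), (((),),)] : the planar cherry plus the 3-chain
print(graft_forest(LEAF, (LEAF, LEAF)))
# [(((),), ()), ((), ((),))]
\end{verbatim} Via the relation $\Bplus(\omega_1 \bpr \omega_2) = \omega_1 \car \Bplus(\omega_2)$ the same routine also yields the Grossman--Larson product, whose dual is the coproduct described next.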
Just as there is a Hopf algebra governing composition of B-series (the BCK Hopf algebra discussed in Section \ref{GI:composition}), there is a Hopf algebra $\HMKW$ behind the composition of LB-series. This Hopf algebra was studied in \cite{munthe-kaas2008oth}, where its properties and its relation to the BCK Hopf algebra were explored. An introduction can also be found in \cite{lundervold2009hao}. This Hopf algebra is the dual of a version of Grossman and Larson's Hopf algebra, in the case of ordered trees. \paragraph{The Hopf algebra of composition.} As a vector space $\HMKW$ is equal to $\N$: $\HMKW = \RR\langle \OT\rangle$. The product is given by \emph{shuffling}: \begin{align} \begin{split} &\one \shuffle \omega = \omega = \omega \shuffle \one\\ &(\tau_1\omega_1) \shuffle (\tau_2 \omega_2) = \tau_1(\omega_1 \shuffle \tau_2 \omega_2) + \tau_2(\tau_1\omega_1 \shuffle \omega_2) \end{split} \end{align} where $\tau_1, \tau_2 \in \OT$ and $\omega_1, \omega_2 \in \OF$. The coproduct is given recursively by $\DeltaMKW(\one) = \one \otimes \one$ and \begin{equation} \DeltaMKW(\omega \tau) = \omega \tau \otimes \one + \DeltaMKW(\omega) \shuffle \, \cdot \, (I \otimes B^+) \DeltaMKW(B^-(\tau)), \end{equation} where $\tau \in \OT$, $\omega \in \OF$. Here $\shuffle \,\cdot: \N^{\otimes 4} \rightarrow \N \otimes \N$ denotes shuffle on the left and concatenation on the right: $(\omega_1 \otimes \omega_2) \shuffle \, \cdot\, (\omega_3 \otimes \omega_4) = (\omega_1 \shuffle \omega_3) \otimes (\omega_2 \omega_4).$ Note that the shuffle product also gives rise to the \emph{shuffle Hopf algebra} $\Hsh$, whose coproduct is given by deconcatenation \cite{reutenauer93fla}: \begin{equation} \DeltaC(\omega) = \one \otimes \omega + \omega \otimes \one + \sum_{i=1}^{n-1} \tau_1 \cdots \tau_i \otimes \tau_{i+1} \cdots \tau_n, \end{equation} where $\omega = \tau_1 \cdots \tau_n$. The set of ordered forests can be generated recursively from the empty forest $\one$ by a \emph{magmatic} operation $\times: \N \times \N \rightarrow \N$ on $\N$, given by $\mu_{\times}(\omega_1,\omega_2) = \omega_1 \Bplus(\omega_2)$. For a forest $\omega$, write $\omega_L$ and $\omega_R$ for the \emph{left-} and \emph{right part}: $\omega = \omega_L \times \omega_R$. The above operations can be written recursively in terms of this operation: \begin{itemize} \item[] Concatenation: $\omega\,\one = \one\,\omega = \omega$, and $(\omega_1\times \omega_2)\,\omega_3 = \omega_1 \times (\omega_2 \,\omega_3)$. \item[] Shuffle: $\omega \shuffle \one = \one \shuffle \omega = \omega$, and $\omega_1 \shuffle \omega_2 = (\omega_1 \shuffle \omega_{2L}) \times \omega_{2R} + (\omega_{1L} \shuffle \omega_2) \times \omega_{1R}$. \item[] Coproduct: $\DeltaMKW(\one) = \one \otimes \one$, and $\DeltaMKW(\omega) = \omega \otimes \one + \DeltaMKW(\omega_L) \shuffle \times \DeltaMKW(\omega_R)$. \end{itemize} The coproduct can also be written in terms of \emph{left admissible cuts}, analogous to the coproduct in $\BCK$ (Theorem \ref{thm:BCKcoprod}): \begin{theorem}[{\cite{munthe-kaas2008oth}}] The coproduct in $\HMKW$ can be written as \begin{equation}\label{MKWcoprod_cuts} \DeltaMKW(\omega) = \sum_{c \in \FLAC(\omega)} P^c(\omega) \tpr R^c(\omega), \end{equation} where $\omega \in \OF$ is a forest of trees in $\OT$. Here $\FLAC(\omega)$ consists of all left admissible cuts of $\omega$, including the full cut.
\end{theorem} A \emph{left admissible cut} differs from the admissible cuts defined in Section \ref{GI:composition}: an \emph{elementary cut} $c$ of a tree $\tau$ is a selection of edges to be removed from $\tau$, chosen in such a way that if an edge $e$ is removed, then all the branches on the same level and to the left of $e$ must also be removed. A cut results in a collection of trees concatenated together to form a forest $P^{c}_{el}(\tau)$ (the \emph{pruned part}), and a remaining tree $R^c_{el}(\tau)$, containing the root. A left admissible cut $c = \{c_1, \dots, c_n\}$ on $\tau$ is a collection of such elementary cuts, with the property that any path from the root to any vertex crosses at most one cut $c_i$. The pruned parts from each cut together form the pruned part $P^c(\tau)$ of the left admissible cut, where the parts coming from different cuts are shuffled together. We also include the \emph{full cut} and the \emph{empty cut}, which result in $P^c(\tau) = \tau$ and $P^c(\tau)=\one$, respectively. The cutting operation is extended to forests $\omega$ as follows: apply the $\Bplus$ operation to $\omega$ to get a tree, cut this according to the above rules, without using the cut removing all the edges coming out of the root, and, finally, remove the added root from $R^c(\omega)$. See Table \ref{MKWcoprod} for some examples of the coproduct $\DeltaMKW$, and see \cite{munthe-kaas2008oth} or \cite{lundervold2009hao} for further examples and properties of $\HMKW$. \begin{table}[h!] \centering \begin{equation*} \begin{array}{c@{\,\,}|@{\quad}l} \hline \\[-2mm] \omega & \DeltaMKW(\omega) \\[1mm] \hline \\[-2mm] \one & \one\otimes \one \\[1mm] \ab & \ab\otimes \one + \one \otimes \ab \\[2mm] \ab\,\ab & \ab\,\ab\otimes \one + \ab\otimes \ab+ \one \otimes \ab\,\ab \\[2mm] \aabb & \aabb\otimes \one + \ab \otimes \ab+ \one\otimes \aabb \\[2mm] \ab\,\aabb & \ab\,\aabb \otimes \one + 2\,\ab\,\ab\otimes \ab + \ab\otimes \aabb + \ab \otimes \ab\,\ab+ \one \otimes \ab\,\aabb \\[2mm] \aabb\,\ab &\aabb\,\ab\otimes\one + \aabb\otimes\ab + \ab\otimes\ab\,\ab+ \one \otimes \aabb\,\ab \\[2mm] \end{array} \end{equation*} \caption{Examples of the coproduct $\DeltaMKW$}\label{MKWcoprod} \end{table} The main result linking $\HMKW$ to LB-series is the following, which is an analog of the Hairer--Wanner theorem (Theorem \ref{hairerwannertheorem}) for B-series: \begin{theorem}[{\cite{munthe-kaas2008oth}}]\label{compositionOfLB} The composition of two LB-series is again an LB-series: \begin{equation} \Bs_f(\alpha)[\Bs_f(\beta)] = \Bs_f(\alpha * \beta), \end{equation} where $*$ is the convolution product in $\HMKW$. \end{theorem} \subsection{Lie--Butcher series and flows on manifolds} We shall see how LB-series can be used to represent numerical flows. More details and examples can be found in \cite{munthe-kaas1995lbt, munthe-kaas1998rkm, owren1999rkm, owren2006ocf, munthe-kaas2008oth, lundervold2009hao, lundervold2010bea}. Flows $y_0 \mapsto y(t) = \Psi_t(y_0)$ on the manifold $\M$ can be represented in several different ways. Here are three procedures, giving rise to what can be called LB-series of Type 1, 2 and 3: \begin{enumerate} \item In terms of pullback series: Find $\alpha\in G(\HMKW)$ such that \begin{equation}\label{LBpullback} \Psi(y(t)) = \Bs_t(\alpha)(y_0)[\Psi] \quad\mbox{for any $\Psi\in U(\g)^\M$.} \end{equation} This representation is used in the analysis of Crouch--Grossman methods by Owren and Marthinsen~\cite{owren1999rkm}.
In the classical setting, this is called an $S$-series~\cite{murua1999fsa}. \item In terms of an autonomous differential equation: Find $\beta\in \g(\HMKW)$ such that $y(t)$ solves \begin{equation} y'(t) = \Bs_h(\beta)(y(t)). \end{equation} This is called backward error analysis (cf.\ Section \ref{LGI:substitution}). \item In terms of a non-autonomous equation of \emph{Lie type} (time-dependent frozen vector field): Find $\gamma\in \g(\Hsh)$ such that $y(t)$ solves \begin{equation}\label{lietype} y'(t) = \left(\frac{\partial}{\partial t} \Bs_t(\gamma)(y_0)\right) y(t). \end{equation} This representation is used in~\cite{munthe-kaas1995lbt,munthe-kaas1998rkm}. In the classical setting this is (close to) the standard definition of $B$-series. \end{enumerate} The algebraic relationships between the coefficients $\alpha$, $\beta$ and $\gamma$ in the above LB-series are \cite{lundervold2009hao}: \begin{align*} \beta &= \alpha\opr e &\mbox{$e$ is the eulerian idempotent in $\HMKW$.}\\ \alpha &= \exp^\bpr(\beta)&\mbox{Exponential w.r.t.\ the GL-product.}\\ \gamma &= \alpha\opr Y^{-1}\opr D &\mbox{Dynkin idempotent in $\Hsh(\OT)$.}\\ \alpha &= Q(\gamma)&\mbox{$Q$-operator in $\Hsh(\OT)$.} \end{align*} The eulerian idempotent $e$ in a commutative, connected and graded Hopf algebra $H$ is the formal series $e := \log^{*}(Id)$, where $Id$ is the identity endomorphism and $*$ the convolution product in $H$. The Dynkin map $D$ is the convolution of the antipode $S$ and the grading operator $Y$, $D = S * Y$, and $Y^{-1} \circ D$ is an idempotent. See e.g. \cite{lundervold2009hao} for details. The operator $Q$ is a rescaling of the Bell polynomials: \begin{align} \begin{split} Q_{n,k}(d_1,\ldots,d_{n-k+1}) &= \frac{1}{n!}B_{n,k}(1!d_1,\ldots,j!d_j,\ldots) = \mathop{\sum}_{|\omega| = n, \#(\omega)=k} \kappa({\omega}) \omega\\ Q_n(d_1,\ldots,d_n)&=\sum_{k=1}^n Q_{n,k}(d_1,\ldots,d_{n-k+1})\\ Q_0 &:= \one, \end{split} \end{align} where, for $\omega = d_{j_1}d_{j_2}\cdots d_{j_k}$, the coefficients $\kappa({\omega})$ are defined as \[\kappa(\omega) := \kappa(|d_{j_1}|,|d_{j_2}|,\ldots,|d_{j_k}|) := \frac{j_1j_2\cdots j_k}{j_1(j_1+j_2)\cdots(j_1+j_2+\cdots+j_k)}.\] \noindent By using these relationships, one can convert between the various representations of flows. \begin{example}[The exact solution] The exact solution of a differential equation \[y'(t) = F(y(t))\] can be written as the solution of \[y' = F_t\dpr y, \quad y(0)= y_0, \] where $F_t=F(y(t))\in \g$ is the pullback of $F$ along the time-dependent flow of $F$. Let $F_t = \frac{\partial}{\partial t}\Bs_t(\gamma)$. By \cite[Proposition 4.9]{lundervold2009hao} the pullback is given by $\Bs_t(Q(\gamma_{\text{Exact}}))[F]$, so \[Y\opr\gamma_{\text{Exact}} = Q(\gamma_{\text{Exact}})[\ab]\Rightarrow \gamma_{\text{Exact}} = Y^{-1}\opr \Bplus(Q(\gamma_{\text{Exact}})).\] Note that this is reminiscent of a so-called combinatorial Dyson--Schwinger equation~\cite{foissy2008fdb}. Solving by iteration yields {\small \begin{align*} \gamma_{\text{Exact}} & = \ab + \frac{1}{2!}\aabb + \frac{1}{3!}\left(\aababb+\aaabbb\right) + \frac{1}{4!}\left(\aabababb+\aaabbabb+2\aabaabbb +\aaababbb+\aaaabbbb\right)+ \frac{1}{5!}(\aababababb \\ & +\aaabbababb+2\aabaabbabb +3\aababaabbb+\aaababbabb+ \aaaabbbabb+3\aaabbaabbb+3\aabaababbb+3\aabaaabbbb+\aaabababbb+\aaaabbabbb\\ & +2\aaabaabbbb+\aaaababbbb+\aaaaabbbbb)+\frac{1}{6!}\left(\aabababababb+\cdots\right)+\cdots \end{align*} } \noindent A formula for the LB-series for the exact solution was given in \cite{owren1999rkm}.
We observe that there cannot be any commutators of trees in this expression. Therefore, in LB-series of numerical integrators, commutators of trees must be zero up to the order of the method. \end{example} \begin{example}[The exponential Euler method]\label{eulerMethod} The exponential Euler method \cite{iserles2000lgm} can be written as follows: \[y_{n+1} = \exp(hf(y_n))y_n,\] or, by rescaling the vector field $f$, as \[y_{n+1} = \exp(f(y_n))y_n.\] This equation can be interpreted as a pullback equation of the form $\Phi(y_{n+1}) = \Bs(\exp(\ab))[\Phi]y_n$, so \[\alpha = \exp(\ab) = \one + \ab + \frac{1}{2!}\ab\ab + \frac{1}{3!}\ab\ab\ab + \cdots.\] (Here the Grossman--Larson product is the same as concatenation). Note that $\exp(\ab) = Q(\ab),$ so the Type 3 LB-series for the Euler method is simply \[\gamma_{\text{Euler}} = \ab.\] \end{example} \begin{example}[The Lie--implicit midpoint method]\label{midpointMethod} The Lie--implicit midpoint method \cite{iserles2000lgm} can be presented as: \begin{align}\label{midpoint} \sigma &= f(\exp(\frac{1}{2}\sigma)y_n) \\ y_{n+1} &= \exp(\sigma)y_n \end{align} We make the following ansatz: \begin{equation}\label{mp:ansatz} \sigma = \sum_{\omega} \alpha(\omega)\omega = \alpha(\ab)\ab + \alpha\left(\aabb\right)\aabb + \alpha\left(\aababb\right)\aababb + \alpha\left([\ab,\aabb]\right)[\ab,\aabb] + \alpha\left(\aaabbb\right)\aaabbb + \cdots, \end{equation} i.e. that $\sigma$ can be written as an infinitesimal LB-series. From Equation (\ref{midpoint}), we get that \begin{equation}\label{mp:sigma} \sigma = \sum_{j=0}^{\infty} \frac{(\sigma)^j}{2^jj!} [\ab]. \end{equation} Since there are no forests in this expression, we must have $\alpha([\omega, \omega']) = 0$ for all $\omega, \omega' \in \OT$. If we write $\tau= \Bplus(\tau_1 \cdots \tau_j)$ then, by combining Equation (\ref{mp:sigma}) with the ansatz, we see that the coefficients of the LB-series are given recursively as $\alpha(\ab) = \frac{1}{2}$, \begin{equation} \alpha(\tau) = \frac{1}{2^jj!} \alpha(\tau_1) \cdots \alpha(\tau_j). \end{equation} Therefore, \begin{align*} \alpha_{\text{Midpoint}} & = \ab + \frac{1}{2!}\aabb + \frac{1}{2}\left(\frac{1}{4}\aababb+\aaabbb\right) +\cdots. \end{align*} \end{example} \subsection{Substitution and backward error analysis for Lie--Butcher series}\label{LGI:substitution} In \cite{lundervold2010bea} the substitution law for LB-series methods was developed, culminating in a formula that can be used to calculate the modified vector field used in backward error analysis. \paragraph{The substitution law.} The basic idea is as for B-series (Section \ref{substB}): We consider substituting an LB-series into another LB-series, e.g. $\Bs_{\Bs_f(\beta)}(\alpha)$, and the questions are as before: is this an LB-series, and in that case, which one? The result is given in terms of the \emph{substitution law}, defined using the freeness of the D-algebra $\N = \RR\langle \OT\rangle$ (Theorem \ref{universalD-alg}): \begin{multicols}{2} \begin{definition}\label{substlaw}For any map $\alpha: \C \rightarrow D(\N)$, Theorem \ref{universalD-alg} implies that there is a unique D-algebra homomorphism $\alpha\ast:\N\rightarrow \N$ such that $\alpha(c) = \alpha\ast c$ for all $c\in \C$. This homomorphism is called $\alpha$-substitution.
\vspace{-0.3cm} \begin{diagram}[labelstyle=\scriptstyle] \C &\rInto& \N\\ \dTo^{\alpha} && \dTo_{\alpha\ast} \\ D(\N) &\rInto& \N \end{diagram} \end{definition} \end{multicols} \begin{theorem}[{\cite{lundervold2010bea}}]\label{substBseries} The substitution law defined in Definition \ref{substlaw} corresponds to the substitution of LB-series in the sense that \begin{equation} \Bs_{\Bs_f(\alpha)}(\beta) = \Bs_f(\alpha \ast \beta). \end{equation} \end{theorem} \paragraph{Calculating the substitution law.} To obtain a formula for the substitution law, we consider the dual $\astar$ of $\alpha$-substitution: \begin{equation} \langle \alpha \ast \beta, \omega \rangle = \langle \beta, \astar(\omega)\rangle, \end{equation} called the \emph{substitution character}. The dual pairing $\langle \cdot, \cdot \rangle$ is the one induced by requiring that all forests in $\OF$ are orthonormal, and we may write $\langle \alpha, \omega \rangle = \alpha(\omega)$. The map $\astar$ is a character for the shuffle product \cite{lundervold2010bea}: $\astar(\omega_1\shuffle\omega_2) = \astar(\omega_1) \shuffle \astar(\omega_2)$. The formula for the substitution law is based on the cutting of trees as in the coproduct $\DeltaMKW$. More specifically, it is based on the dual of grafting, called \emph{pruning}: \begin{equation} \mathcal{P}_{\nu}(\omega) = \sum_{c \in LAC(\omega)} \langle \nu, P^c(\omega)\rangle R^c(\omega). \end{equation} Here the sum is over the left admissible cuts, but as opposed to the cuts in the formula (\ref{MKWcoprod_cuts}) for $\DeltaMKW$, the full cut is not included. In \cite{lundervold2010bea} the following inductive formula for $\astar$ was obtained: \begin{theorem}[{\cite{lundervold2010bea}}]\label{substlawformula} We have \begin{equation}\label{eq:substlaw} \astar (\omega) = \sum_{(\omega) \in \DeltaC} \sum_{c \in LAC(\omega_{(2)})} \astar(\omega_{(1)}) \Bplus\left(\astar(P^c(\omega_{(2)}))\right)\alpha(R^c(\omega_{(2)})), \end{equation} if $\omega \neq \one$ and $\astar(\one) = \one$. Here $\DeltaC$ denotes the deconcatenation coproduct. \end{theorem} \noindent By using the magmatic operation $\times$ on $\N$, this can also be written as a composition of operators: \begin{equation} \astar = \mu \circ (\mu_{\times} \otimes I)\circ (\astar \otimes \astar \otimes \alpha) \circ (I\otimes \DeltaMKW') \circ \DeltaC. \end{equation} Here $\DeltaMKW'$ denotes the coproduct in (\ref{MKWcoprod_cuts}) with the full cut removed, and $\mu$ denotes concatenation. Some examples of the substitution character can be found in Table \ref{subst_character}. Many more examples and details can be found in \cite{lundervold2010bea}. \begin{table}[h!] \centering \begin{equation*} \begin{array}{c@{\,\,}|@{\quad}l} \hline \\[-2mm] \omega & \astar(\omega) \\[1mm] \hline \\[-2mm] \one & \one \\[1mm] \ab & \alpha(\ab)\ab \\[2mm] \ab\ab & \alpha(\ab)^2\ab\ab\\[2mm] \aabb & \alpha(\aabb)\ab + \alpha(\ab)^2\aabb\\[2mm] \ab\aabb & \alpha(\ab\aabb)\ab + \alpha(\ab)\alpha(\aabb) \ab\ab + \alpha(\ab)^3\ab\aabb\\[2mm] \aabb\ab & \alpha(\aabb\ab)\ab + \alpha(\ab)\alpha(\aabb)\ab\ab + \alpha(\ab)^3 \aabb\ab \\[2mm] \end{array} \end{equation*} \caption{Examples of the substitution character $\astar$}\label{subst_character} \end{table} \begin{remark} One would like the substitution law $\ast$ for LB-series to be a convolution product in a Hopf or bialgebra, analogous to the substitution of B-series (Theorem \ref{thm:substB}). One way to achieve this is by obtaining a concrete description of the operations in the post-Lie operad.
In that case one can follow the procedure in \cite{calaque2009tih}, which, roughly, is the following: the post-Lie operad has a pre-Lie structure (a general phenomenon for augmented operads), there is an associated Lie algebra structure, its universal enveloping algebra is a Hopf algebra, and its dual is the Hopf algebra for the substitution law. This is a project currently under investigation \cite{ebrahimi-fard2011otp}. \end{remark}
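To make some of the combinatorics above concrete, here is a minimal Python sketch (added for illustration; it is not code from the cited works) of the recursive shuffle product on words, with ordered forests represented abstractly as tuples of letters, following the recursion $(\tau_1\omega_1) \shuffle (\tau_2 \omega_2) = \tau_1(\omega_1 \shuffle \tau_2 \omega_2) + \tau_2(\tau_1\omega_1 \shuffle \omega_2)$:
\begin{verbatim}
from collections import Counter

def shuffle(u, v):
    # Shuffle product of two words u, v (tuples of letters),
    # returned as a Counter mapping word -> integer coefficient.
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    out = Counter()
    for w, c in shuffle(u[1:], v).items():   # first letter taken from u
        out[(u[0],) + w] += c
    for w, c in shuffle(u, v[1:]).items():   # first letter taken from v
        out[(v[0],) + w] += c
    return out

# Example: ('a','b') shuffled with ('c',) gives abc + acb + cab.
print(shuffle(('a', 'b'), ('c',)))
\end{verbatim}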
\section{Proposed Method} \label{sec:proposedmethod} The idea of this paper follows the observation that the search space can be represented by a large directed graph, of which any architecture in the search space can be represented by a sub-graph. In other words, a neural network architecture can be obtained by pruning a subset of edges and nodes of the directed graph. Figure~\ref{DAG} illustrates an example DAG, whose nodes and edges represent local computations $\O$ and information flow, respectively. \begin{wrapfigure}{l}{7cm} \includegraphics[angle=270,scale=0.35]{figs/graph111.pdf} \caption{The whole search space can be represented by a DAG. Nodes 1 and 6 are the input and output, respectively; any architecture can be obtained by pruning a subset of edges and nodes in the DAG.} \label{DAG} \end{wrapfigure} The output $x^i$ of node $i$ can be calculated by: \begin{align} \label{equ:xx}x^{i}=O^i(\sum_{j<i}x^j) \end{align} Instead of applying a controller or evolutionary search, we address the problem in a simpler and more effective way. While searching for the sub-graph, we delete useless connections and nodes in the large DAG, keeping only the important ones. To achieve this goal, we apply a scaling parameter to every edge in the DAG to scale the flow of information, so that the input of a node is the sum of the outputs of the former nodes multiplied by the scaling parameters; Equation~\ref{equ:xx} is then modified to: \begin{align} x^{i}=O^i(\sum_{j<i}\lambda_{j}^i x^j) \end{align} where $\lambda_{j}^i$ is the scaling factor applied on the connection between nodes $i$ and $j$. The scaling factors $\lambda$ relax the search space to be continuous and differentiable. A sparsity constraint is applied on $\lambda$ during training to push them towards zero. When $\lambda_{j}^i$ is zero, the corresponding edge can be removed safely. In the search process, the weights of the architecture and the scaling factors $\lam$ are updated alternately, and an effective sparsity optimizer proposed by (\cite{SSS2018}) is applied to $\lam$. In the following, we introduce the optimizer of our method in Section~\ref{section:sss}. We then describe our search space in the form of a directed acyclic graph in Section~\ref{section:search space}. Finally, we introduce the search algorithm and give a summary of SSNAS in Section~\ref{section:opt and train}. \subsection{Overview of Sparse Structure Selection} \label{section:sss} Sparse Structure Selection was proposed by (\cite{SSS2018}) to prune network structures. Instead of directly pushing the weights of the network to zero, (\cite{SSS2018}) introduce a new kind of parameter $\lam$ to scale the outputs of specific structures such as neurons, groups or blocks. A sparsity constraint is added on $\lam$ during training to push $\lam$ to be sparse; if a certain $\lam$ is zero, the corresponding structure can be removed, as it makes no contribution to the network. $\lam$ is updated by APG during training: \begin{align} \label{equ:z}{\bm{z}}_{(t)}&=\lam'_{(t-1)}-\eta_{(t)}\nabla \MG(\lam'_{(t-1)})\\ \label{equ:v}{\bm{v}}_{(t)}&=S_{\eta_{(t)}\gamma}({\bm{z}}_{(t)})-\lam'_{(t-1)}+\mu_{(t-1)}{\bm{v}}_{(t-1)}\\ \label{equ:lam}\lam'_{(t)}&=S_{\eta_{(t)}\gamma}({\bm{z}}_{(t)})+\mu_{(t)}{\bm{v}}_{(t)} \end{align} where $S_{\eta_{(t)}\gamma}$ is the soft-threshold operator $S_\alpha(z)_i=\text{sign}(z_i)(|z_i|-\alpha)_{+}$, $\eta_{(t)}$ is the gradient step size and $\mu$ the momentum. In practice, (\cite{SSS2018}) follow a stochastic approach with mini-batches and $\mu$ is fixed to a constant value. The weights and $\lam$ are updated in each iteration.
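To make the update concrete, the following minimal NumPy sketch (an illustration added here; the actual implementation relies on a deep learning framework, and names such as \texttt{apg\_step} are ours) performs one iteration of the three equations above:
\begin{verbatim}
import numpy as np

def soft_threshold(z, alpha):
    # S_alpha(z)_i = sign(z_i) * max(|z_i| - alpha, 0)
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def apg_step(lam, v, grad, eta, gamma, mu):
    # One APG update of the scaling factors lambda.
    z = lam - eta * grad                # gradient step on the data term
    s = soft_threshold(z, eta * gamma)  # proximal step for the sparsity term
    v_new = s - lam + mu * v            # momentum buffer
    lam_new = s + mu * v_new
    return lam_new, v_new
\end{verbatim}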
The depth and width of the network can be adjusted adaptively by tuning the sparsity constraint on $\lam$. (\cite{SSS2018}) shows very promising acceleration performance on the CIFAR datasets and the large-scale ILSVRC2012 image classification dataset. However, Sparse Structure Selection can only modify structures such as channels or blocks of an existing architecture; it has no ability to discover novel architectures within a rich search space. Instead of applying scaling factors on the outputs of structures, we apply $\lam$ to scale the connections in the cell, removing the limitation of relying on an existing architecture. Through scaling and pruning, SSNAS can obtain a new architecture with state-of-the-art performance. \subsection{Search Space} \label{section:search space} As well as searching for a building cell, just as (\cite{pham2018efficient}) do, SSNAS can also be applied to search for the whole network, still keeping a competitive search time. We assume that the architecture is hierarchical and the cell is stacked a certain number of times. A cell is a directed acyclic graph consisting of an ordered sequence of $M$ layers, where every layer consists of $N$ operations. The structure of the cell is illustrated in Figure~\ref{method}. Every operation is connected to the input of the cell and to the operations in the former layers through a scaling factor $\lambda$; the input of an operation is the sum of the outputs of all previous nodes multiplied by the scaling factors. The output ${\tens{X}}_{(i,j)}$ of the $j$-th operation in the $i$-th layer is computed based on all of its predecessors: \begin{align} \label{equ:x}{\tens{X}}_{(i,j)}=O_{(i,j)}(\sum_{m<i,n\in(1,N)}\lam_{(m,n)}^{(i,j)}{\tens{X}}_{(m,n)} + \lam^{(i,j)}{\tens{I}}) \end{align} \begin{figure*}[htbp] \centering \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[scale=0.25]{figs/graph3-1.pdf} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[scale=0.25]{figs/graph3-2.pdf} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[scale=0.25]{figs/graph3-3.pdf} \end{minipage} } \centering \caption{A search process on a two-layer cell with two operations per layer: (a) every edge is assigned a scaling factor $\lam$, which is trained with Sparse Structure Selection~\cite{SSS2018}; (b) any edge whose $\lam$ is zero can be removed safely, as it makes no contribution to the network; (c) a new architecture is generated by pruning useless edges and operations.} \label{method} \end{figure*} where $O_{(m,n)}$ is the local computation of the $n$-th operation in the $m$-th layer and ${\tens{I}}$ is the input of the cell. The connections between the operations and the output of the cell are also learnable, with scaling factors $\lam$ applied on the corresponding connections. The output ${\tens{O}}$ of the cell is obtained by applying a reduction operation (e.g. concatenation) $C$ to all the nodes that contribute to the output: \begin{align} \label{equ:O}{\tens{O}}=C(\sum_{m,n}\lam_{(m,n)}{\tens{X}}_{(m,n)})+id \end{align} An identity mapping is applied between the input and the output in case all operations are pruned. We include the following four kinds of operations in the convolution cell: separable convolutions with kernels 3$\times$3 and 5$\times$5, and average pooling and max pooling with kernel 3$\times$3. As for the reduction block, we simply use normal convolutions with kernel sizes 1$\times$1 and 3$\times$3, applied with a stride of 2 to reduce the feature map size and double the number of filters.
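Operationally, the node equation above is just a $\lam$-weighted sum of incoming tensors followed by the local operation; a schematic NumPy sketch (for illustration only; the names here are ours, not the actual implementation):
\begin{verbatim}
import numpy as np

def node_output(op, prev_outputs, lams, cell_input, lam_in):
    # X_{(i,j)} = O_{(i,j)}( sum_{m<i,n} lam * X_{(m,n)} + lam_in * I )
    s = lam_in * cell_input
    for lam, x in zip(lams, prev_outputs):
        s = s + lam * x
    return op(s)

# Toy usage with a ReLU standing in for the local operation:
relu = lambda t: np.maximum(t, 0.0)
out = node_output(relu, [np.ones(4), -np.ones(4)], [0.5, 0.0],
                  np.ones(4), 1.0)
\end{verbatim}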
The task of searching for a cell therefore reduces to learning $\lam$ on every edge. Thus the architecture search problem is reformulated as a sparse optimization problem: \begin{align} \min_{{\bm{W}},\lam}\frac{1}{N}\sum_{i=1}^{N}L({\tens{Y}}_i,Net({\tens{X}}_i,{\bm{W}},\lam))+R_s(\lam) \end{align} where $R_s(\cdot)$ is a sparsity regularization term for $\lam$, ${\tens{X}}_i$ and ${\tens{Y}}_i$ are the input data and label respectively, and ${\bm{W}}$ represents the weights of the network. \subsection{Optimization and Training} \label{section:opt and train} In SSNAS, there are two kinds of learnable parameters: the weights of the network ${\bm{W}}$ and the structure parameters $\lam$. We update ${\bm{W}}$ with NAG, and $\lam$ with the APG-NAG optimizer proposed by \cite{SSS2018}. However, in \cite{SSS2018}, the weights and $\lam$ are updated together on the same training set, which may cause overfitting on small datasets like CIFAR-10, leading to poor performance of the network. To solve this problem, we divide the training data into two parts, a training set and a validation set: the network weights ${\bm{W}}$ are updated on the training set and the structure parameters $\lam$ on the validation set. The scale parameters in batch normalization are fixed during the search to prevent them from affecting $\lam$. To keep the balance among different cells and prevent necessary operations from being pruned, we adjust the regularization intensity $\gamma$ adaptively according to the FLOPs of the cell. When optimizing $\lam$ using APG-NAG, the $\gamma$ in Equations~\ref{equ:v} and \ref{equ:lam} is calculated by: \begin{align} \label{equ:gamma}\gamma_{iter}=\frac{\b{FLOPS}_{iter}}{\b{FLOPS}_{block}}\gamma \end{align} where $\b{FLOPS}_{block}$ represents the FLOPs of the cell when keeping all operations, and $\b{FLOPS}_{iter}$, which can be calculated based on $\lam$, represents the FLOPs at the current iteration of the search. Using this strategy, we prevent cells from being pruned entirely and thus making no contribution to the network. \begin{algorithm} \caption{SS-NAS} \label{alg:A} \begin{algorithmic} \STATE update ${\bm{W}}$ by optimizing $L_{train}({\bm{W}})$ for several epochs \REPEAT \STATE update ${\bm{W}}$ by optimizing $L_{train}({\bm{W}},\lam)$ \STATE update $\lam$ by optimizing $L_{val}({\bm{W}},\lam)$ \UNTIL{converged} \end{algorithmic} \end{algorithm} We denote the losses on the two datasets as $L_{train}({\bm{W}},\lam)$ and $L_{val}({\bm{W}},\lam)$ respectively; the task of architecture search is to find the ${\bm{W}}^*$ minimizing $L_{train}({\bm{W}}^*,\lam)$ and the $\lam^*$ minimizing $L_{val}({\bm{W}},\lam^*)$. We also find it important to first update ${\bm{W}}$ for several epochs on the training set with $\lam$ fixed, as the search process highly depends on the performance of the network. \section{Introduction} \label{sec:introduction} Without doubt, Deep Neural Networks (DNN) have been the engines of the AI renaissance in recent years. Dating back to 2012, DNN-based methods have refreshed the records of many AI applications, such as image classification (\cite{krizhevsky2012imagenet, szegedy2015going, he2016deep}), speech recognition (\cite{2012deep_speech, 2013rnn_speech}) and the game of Go (\cite{2016alpha_go, 2017alpha_zero}). Considering their amazing representation power, DNNs have shifted the paradigm of these applications from manually designed features and stagewise pipelines to end-to-end learning.
Although DNNs have liberated researchers from such feature engineering, another tedious task has emerged -- ``network engineering''. In most cases, the neural network needs to be designed based on the specific task, which again leads to endless hyperparameter tuning and trials. Therefore, designing a suitable neural network architecture still requires considerable amounts of expertise and experience. To democratize the techniques, Neural Architecture Search (NAS) or, more broadly, AutoML has been proposed. There are mainly two streams for NAS: The first one follows the pioneering work~\cite{zoph2016neural}, which proposed a reinforcement learning algorithm to train a Recurrent Neural Network (RNN) controller that generates coded architectures (\cite{zoph2017learning, pham2018efficient}). The second one is the evolutionary algorithm, which iteratively proposes and evaluates new models (\cite{real2017large, stanley2002evolving}). Despite their impressive performance, the search processes are incredibly resource-hungry and impractical for large datasets like ImageNet, though some acceleration methods have been proposed (\cite{zhong2018practical, pham2018efficient}). Very recently, DARTS (\cite{liu2018darts}) proposed a gradient-based method in which the connections are selected by a softmax classifier. Although DARTS achieves decent performance with great acceleration, its search space is still limited to fixed-length coding and block-sharing search as in previous works. In this work, we take another view to tackle these problems. We reformulate NAS as pruning the useless connections from a large network which contains the complete network architecture hypothesis space. Thus only one single model is trained and evaluated. Since the network structure is directly optimized during training, we call our method Direct Sparse Optimization NAS (DSO-NAS). We further demonstrate that this sparse regularized problem can be efficiently optimized by a modified accelerated proximal gradient method, as opposed to inefficient reinforcement learning or evolutionary search. Notably, DSO-NAS is much simpler than existing search methods, as it unifies neural network weight learning and architecture search into one single optimization problem. DSO-NAS does not need any controller (\cite{zoph2016neural, zoph2017learning, pham2018efficient}), performance predictor (\cite{liu2017progressive}) or relaxation of the search space (\cite{zoph2016neural, zoph2017learning, pham2018efficient, liu2018darts}). As a result of this efficiency and simplicity, \emph{DSO-NAS demonstrates for the first time that NAS can be directly applied to large datasets like ImageNet with no block structure sharing}. Our experiments show that DSO-NAS can achieve 2.84\% average test error on CIFAR-10, as well as a top-1 error of 25.4\% on ImageNet with FLOPs (the number of multiply-adds) under 600M. Our contributions can be summarized as follows: \begin{itemize} \item We propose a novel model pruning formulation for neural architecture search based on sparse optimization. Only one model needs to be trained during the search. \item We propose a theoretically sound optimization method to solve this challenging optimization problem both effectively and efficiently. \item We demonstrate that the results of our proposed method are competitive with or better than those of other NAS methods, while significantly simplifying and accelerating the search process.
\end{itemize} \section{Experiments} \label{sec:experiments} The search process can be divided into the following three stages: (1) a large network consisting of all operations and connections is trained for several epochs to identify the importance of connections and operations; (2) in the search stage, the structure parameters $\lam$ and the weights are trained together, with sparse regularization applied on $\lam$; (3) the searched architecture is trained from scratch and evaluated on the test data. In the first two stages, the scale parameters in batch normalization are fixed to prevent them from affecting $\lam$. We apply SSNAS in two search spaces: (1) the micro search space, where $\lambda$ is shared among cells; (2) the macro search space over the whole network, where the $\lambda$ in different cells are updated independently. During the search, we find that pruned connections have little chance to be reconnected, as the related operation weights are no longer updated, so the pruned connections and operations are deleted every five epochs to accelerate the search. As for architecture evaluation, we adjust the number of filters to satisfy the computation budget and train the architecture from scratch. We test our algorithm on two standard datasets, CIFAR-10 and ImageNet. On CIFAR-10, we search architectures in both the macro and the micro search space; we run every experiment five times and report the mean $x$ and standard deviation $y$ as $x\pm y$. On ImageNet, we evaluate the architecture searched on CIFAR-10, demonstrating its transferability, and also search directly on ImageNet, likewise achieving impressive results. \subsection{CIFAR} The CIFAR-10 dataset consists of 50000 training images and 10000 testing images. We divide the training data into two parts: 25000 for training and 25000 for validation. Standard data pre-processing and augmentation techniques are used, including subtracting the mean and dividing by the standard deviation of the images, random horizontal flipping, padding images to 40 $\times$ 40 and randomly cropping to 32 $\times$ 32. First, we train the whole model for 120 epochs with the structure parameters fixed to obtain a good initialization of the weights, and then search with batch size 128 on two GPUs until convergence. NAG is applied to optimize the weights $W$ with initial learning rate $lr = 0.1$, momentum 0.9 and weight decay $1\times10^{-4}$. APG-NAG proposed by \cite{SSS2018} is applied to optimize $\lambda$ with the same initial learning rate $lr = 0.1$. The test data is never used during the search. Adaptive FLOPs (Section~\ref{section:opt and train}) is applied in the search, which takes 0.5 days on two GPUs. \begin{wrapfigure}{l}{7cm} \includegraphics[scale=0.30,angle=270]{figs/graph-arch.pdf} \caption{Convolution cell searched on CIFAR-10} \label{cifar_architecture} \end{wrapfigure} As for architecture evaluation, the searched model is trained for 640 epochs using a cosine learning rate schedule with batch size 128. Following existing evaluation protocols~\cite{pham2018efficient,liu2018darts}, additional enhancements are added, including dropout with probability 0.6, cutout~\cite{cutout2017} with size 16, drop path with probability 0.5, and an auxiliary tower located at 2/3 of the total depth of the network with weight 0.4.
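For concreteness, cutout simply zeroes a randomly centered square patch of each training image; a minimal NumPy sketch (an illustrative re-implementation of the technique of \cite{cutout2017}, not the code used in our experiments), with the patch size of 16 used above:
\begin{verbatim}
import numpy as np

def cutout(img, size=16, rng=np.random):
    # img: CHW array; zero out one randomly centered size x size square
    _, h, w = img.shape
    cy, cx = rng.randint(h), rng.randint(w)
    y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = img.copy()
    out[:, y0:y1, x0:x1] = 0.0
    return out
\end{verbatim}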
The results on CIFAR-10 are shown in Table~\ref{res_cifar10} and the searched cell architecture is shown in Figure~\ref{cifar_architecture}. \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table}[htb] \caption{Comparison with state-of-the-art image classifiers on CIFAR-10.} \begin{center} \begin{tabular}{l c c c c} \hline Architecture & Test Error (\%) & Params(M) & \tabincell{c}{Search Cost\\(GPU days)} & \tabincell{c}{Search \\Method}\\ \hline DenseNet & 3.46 & 25.6 & - & manual\\ \hline NASNet-A+cutout \cite{zoph2017learning} & 2.65 & 3.3 & 1800 & RL\\ AmoebaNet-A \cite{real2018regularized} & 3.34 & 3.2 & 3150 & evolution\\ AmoebaNet-B+cutout \cite{real2018regularized} & 2.55 & 2.8 & 3150 & evolution \\ PNAS \cite{liu2017progressive} & 3.41 & 3.2 & 150 & SMBO\\ ENAS+cutout \cite{pham2018efficient} & 2.89 & 4.6 & 0.5 & RL \\ DARTS+cutout \cite{liu2018darts} & 2.83 & 3.4 & 4 & gradient-based \\ \hline SSNAS-micro+cutout & 2.84 $\pm$ 0.07& 3.0 & 1 & gradient-based \\ SSNAS-macro+cutout & 2.95 $\pm$ 0.12 & 3.0 & 1 & gradient-based \\ random-micro + cutout & 3.58 $\pm$ 0.21 & 3.4 $\pm$ 0.1 & - & - \\ random-macro + cutout & 3.52 $\pm$ 0.19 & 3.5 $\pm$ 0.1 & - & - \\ \hline \end{tabular} \end{center} \label{res_cifar10} \end{table} SSNAS achieves results comparable with the state of the art while using far fewer computational resources. Compared with ENAS~\cite{pham2018efficient} and DARTS~\cite{liu2018darts}, SSNAS achieves comparable or better accuracy with fewer parameters. Finally, it is interesting to note that searching in the micro search space is slower than searching in the macro search space; in the macro search space the operations and connections are easier to prune, which accelerates the search. \subsubsection{The Effect of the FLOPs-Aware Technique} The FLOPs-aware technique also plays an important role in the search by preventing some cells from being pruned entirely and balancing the distribution of FLOPs, as shown in Figures~\ref{flops-distribution} and \ref{param-distribution}. During the search, the regularization intensity is changed adaptively according to Equation~\ref{equ:gamma_flops}. The operations in the third stage of the network are more likely to be pruned when searching on a small dataset like CIFAR-10, causing an imbalanced distribution of FLOPs. With the FLOPs-aware technique, more operations in the third stage are kept, smoothing the distributions of FLOPs and parameters. Figures~\ref{flop-vs-acc} and \ref{param-vs-acc} also show the performance gains of the FLOPs-aware technique under the same computation budget. \begin{figure*}[htbp] \centering \subfigure[]{ \begin{minipage}[t]{0.42\linewidth} \centering \includegraphics[scale=0.4]{figs/figure-flops.pdf} \end{minipage} \label{flops-distribution} } \subfigure[]{ \begin{minipage}[t]{0.42\linewidth} \centering \includegraphics[scale=0.4]{figs/figure-param.pdf} \end{minipage} \label{param-distribution} } \subfigure[]{ \begin{minipage}[t]{0.42\linewidth} \centering \includegraphics[scale=0.4]{figs/graph_adflops.pdf} \end{minipage} \label{flop-vs-acc} } \subfigure[]{ \begin{minipage}[t]{0.42\linewidth} \centering \includegraphics[scale=0.4]{figs/graph_adparam.pdf} \end{minipage} \label{param-vs-acc} } \caption{(a)(b) Comparison of the distributions of FLOPs and parameters; with the FLOPs-aware technique, the distribution of FLOPs is smoother.
(c)(d) Error rate vs. the number of parameters and FLOPs; the FLOPs-aware technique yields consistent performance gains under a given computation budget.} \centering \label{distribution} \end{figure*} \subsubsection{The Effect of the Adaptive MAC Technique} \begin{wrapfigure}{l}{7cm} \includegraphics[scale=0.40]{figs/graph_admac.pdf} \caption{Error vs. number of MACs with the adaptive MAC technique} \label{mac-vs-acc} \end{wrapfigure} SSNAS can also search architectures for a specific computational target, taking MAC as an example. We optimize MAC in the search process by penalizing the connections related to high-MAC operations with larger regularization intensity, forcing the architecture to select MAC-saving operations. The MAC of every operation is calculated and normalized by the maximum MAC, and the regularization intensity is calculated according to Equation~\ref{equ:gamma_mac}. The results are shown in Figure~\ref{mac-vs-acc}. SSNAS generates architectures with higher accuracy under a given MAC budget, demonstrating the effectiveness of the adaptive MAC technique. A similar method can be applied to optimize other computation budgets, such as the number of parameters or FLOPs, which we leave for future study. \subsubsection{Other Factors in the Architecture Search} The search process may be affected by many other hyper-parameters and settings; we therefore experiment with different search settings. The results are shown in Table~\ref{com_cifar10}. \begin{table}[htb] \caption{Comparison of different settings when searching on the CIFAR-10 dataset in the macro search space.} \begin{center} \begin{tabular}{c c c c c} \hline split training & Ratio of T\&V &Initialize & Params(M) & Test Error (\%) \\ \hline Yes & 4:1 & pretrain & 2.9 & 3.05 $\pm$ 0.09\\ Yes & 1:1 & random & 2.9 & 3.26 $\pm$ 0.08\\ No & - & pretrain & 3.0 & 3.20 $\pm$ 0.1\\ Yes & 1:1 & pretrain & 3.0 & 3.02 $\pm$ 0.09\\ \hline \end{tabular} \end{center} \label{com_cifar10} \end{table} Here, Ratio of T\&V denotes the ratio of the number of images in the training and validation sets; for a ratio $x:y$, we update the weights $x$ times and $\lam$ $y$ times in every $x+y$ iterations. Initialize indicates whether the search starts from a pretrained model or from scratch, and split training indicates whether the whole training set is split into a training set and a validation set. The pretrained model is trained only on the training split. Notably, the use of a validation set plays an important role in the search process: it helps prevent the architecture from overfitting the training data and improves performance by 0.2\%, while the ratio of training to validation data has little influence. Besides, a good weight initialization is also of great importance, as random initialization may lead to a 0.3\% drop in accuracy under the same parameter budget.
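Both budget-aware rules evaluated above amount to rescaling the regularization strength during the search; a minimal sketch (illustrative only; the function names, and the assumption that per-operation FLOPs/MAC tallies are available as NumPy arrays, are ours):
\begin{verbatim}
import numpy as np

def gamma_flops(base_gamma, kept_mask, op_flops):
    # gamma_t = (FLOPs of currently kept ops / FLOPs of full block) * gamma
    # kept_mask: boolean vector, True for ops with at least one nonzero lambda
    return base_gamma * float(op_flops[kept_mask].sum()) / float(op_flops.sum())

def gamma_mac(base_gamma, mac_op, mac_max):
    # per-connection gamma, proportional to the normalized MAC of its operation
    return base_gamma * mac_op / mac_max
\end{verbatim}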
\subsection{ILSVRC 2012} For the architecture search on the ImageNet dataset, we consider the mobile setting, where the input image size is 224$\times$224. We again divide the training data into two parts: four fifths for training and one fifth for validation. We first pretrain the whole network of 27 cells for 30 epochs with learning rate 0.1 and weight decay $4\times10^{-5}$, and then search with batch size 256 on eight GPUs. During the search, the pruned connections and operations are deleted every epoch to accelerate the search, just as on the CIFAR dataset. As for architecture evaluation, the searched model is trained using NAG for 240 epochs on eight GPUs, with batch size 1024, weight decay $4\times10^{-5}$ and initial learning rate 0.1 (gradually raised to 0.5 over five epochs, then decreased linearly to 0 over the remaining epochs). We restrict the number of multiply-adds to around 600M. Additional enhancements include label smoothing and an auxiliary tower located at the end of the second-to-last stage of the network. We evaluate both the cell architecture searched on CIFAR-10 and the architecture searched directly on ImageNet; the results are shown in Table~\ref{imagenet}, where results marked with * were obtained by searching on the CIFAR-10 dataset. \begin{table}[ht] \caption{Comparison with state-of-the-art image classifiers on ImageNet} \begin{center} \begin{tabular}[h]{c c c c c} \hline Architecture & Top-1/5 Error (\%) & Params(M) & FLOPS(M) &\tabincell{c}{Search Cost\\(GPU days)} \\ \hline Inception-v1 \cite{szegedy2015going} & 30.2/10.1 & 6.6 & 1448 & - \\ MobileNet \cite{howard2017mobilenets} & 29.4/10.5 & 4.2 & 569 & - \\ ShuffleNet-v1 2x \cite{shufflenet} & 26.3/10.2 & $\sim$5 & 524 & - \\ ShuffleNet-v2 2x \cite{shufflenetv2} & 25.1/- & $\sim$5 & 591 & - \\ NASNet-A* \cite{zoph2017learning} & 26.0/8.4 & 5.3 & 564 & 1800 \\ NASNet-B* \cite{zoph2017learning} & 27.2/8.7 & 5.3 & 488 & 1800 \\ NASNet-C* \cite{zoph2017learning} & 27.5/9.0 & 4.9 & 558 & 1800\\ AmoebaNet-A* \cite{real2018regularized} & 25.5/8.0 & 5.1 & 555 & 3150\\ AmoebaNet-B* \cite{real2018regularized} & 26.0/8.5 & 5.3 & 555 & 3150 \\ AmoebaNet-C* \cite{real2018regularized} & 24.3/7.6 & 6.4 & 570 & 3150 \\ PNAS* \cite{liu2017progressive} & 25.8/8.1 & 5.1 & 588 & 150\\ OSNAS \cite{OSNAS2018} & 25.8/- & 5.1 & - & - \\ DARTS* \cite{liu2018darts} & 26.9/9.0 & 4.9 & 595 & 4\\ \hline SSNAS* & 26.2/8.6 & 4.7 & 571 & 1 \\ SSNAS-micro & 26.1/8.4 & 4.6 & 608 & 4\\ SSNAS-macro & 25.7/8.4 & 4.9 & 598 & 4\\ \hline \end{tabular} \end{center} \label{imagenet} \end{table} It is notable that SSNAS achieves performance competitive with state-of-the-art architectures.
The architecture searched on CIFAR-10 also achieves good performance on ImageNet, demonstrating its generalization capability. Moreover, we find that searching in the macro search space yields better results, in contrast to the experiments on CIFAR-10, indicating that searching in a larger search space requires more data. \section{Related Works} \label{sec:relatedworks} In this section, we briefly review two research fields related to our work. \subsection{Network Pruning} Network pruning is a widely used technique for model acceleration and compression. Early works on pruning focus on removing unimportant connections (\cite{lecun1990optimal, NIPS1992BrainSurgeon,han2015learning,guo2016dynamic_prune}). Though connection-level pruning can yield effective compression, it is hard to harvest actual computational savings because modern GPUs cannot utilize the resulting irregular weights well. To tackle this issue, a significant number of works on structured pruning have been proposed. For neuron-level pruning, several works prune neurons directly by evaluating their importance based on specific criteria (\cite{hu2016network,li2016pruning,mariet2016diversity,liu2017learning}). More generally, \cite{wen2016learning} proposed sparse structure learning, adopting group sparsity on multiple structures of networks, including filter shapes, channels and layers. Recently, \cite{SSS2018} proposed a simpler way of structured pruning: they introduced scaling factors on the outputs of specific structures (neurons, groups or blocks) and applied sparse regularization on them. After training, structures with zero scaling factors can be safely removed. Compared with (\cite{wen2016learning}), this method is more effective and stable. In this work, we extend (\cite{SSS2018}) to a more general and harder case, neural architecture search. \subsection{Neural Architecture Search} Recently, there has been growing interest in developing methods to generate neural network architectures automatically. One heavily investigated direction is the evolutionary algorithm (\cite{meyerevolving, miller1989designing, real2017large, stanley2002evolving}), where modifications such as inserting layers, changing filter sizes or adding identity mappings act as the mutations in evolution. Not surprisingly, these methods are usually computationally intensive and less practical at large scale. Another popular direction is to utilize reinforcement learning with an RNN agent to design the network architecture. The pioneering work (\cite{zoph2016neural}) applies an RNN as the controller to sequentially decide the types and parameters of layers. The controller is trained by RL, with the reward designed as the accuracy of the proposed model. Although it achieves remarkable results, it needs 800 GPUs to get such results, which is not affordable for broad applications. Based on this work, several methods have been proposed to accelerate the search process by limiting the search space (\cite{zoph2017learning}), early stopping with performance prediction (\cite{zhong2018practical}), progressive search (\cite{liu2017progressive}) or weight sharing (\cite{pham2018efficient}). Despite their success, the above methods treat the search of network architectures as a black-box optimization problem; moreover, their search spaces are limited due to the fixed-length coding of architectures. The most closely related work is the gradient-based method DARTS (\cite{liu2018darts}).
In DARTS, a special parameter $a$ is applied on every connection and updated during the training process; a softmax classifier is then applied to select the connections used by each node. However, the search space of DARTS is also limited: every operation can have exactly two inputs, and the number of nodes is fixed within a block. \section{Proposed Method} \label{sec:proposedmethod} In this section, we elaborate the details of our proposed method. We start with the intuition and motivation, followed by the design of the search space and the formulation of our method. Lastly, we describe the optimization and training details. \subsection{Motivations} The idea of DSO-NAS follows the observation that the architecture space of a neural network (or of a micro-structure in it) can be represented by a completely connected Directed Acyclic Graph (DAG). Any architecture in this space can be represented by a sub-graph of it. In other words, a specific architecture can be obtained by selecting a subset of edges and nodes of the full graph. Prior works (\cite{zoph2016neural}, \cite{liu2017progressive}, \cite{liu2018darts}) focus on searching the architecture of two types of blocks, the convolution block and the reduction block. Following the idea of micro-structure search, we adopt the complete graph to represent the search space of an individual block. The final network architecture can then be represented by a stack of blocks with residual connections. Fig.~\ref{DAG} illustrates an example DAG of a specific block, whose nodes and edges represent local computations $\MO$ and information flow, respectively. \begin{wrapfigure}{ltb}{7cm} \includegraphics[angle=270,scale=0.35]{figs/graph3_1_1_1.pdf} \caption{The whole search space can be represented by a completely connected DAG. Here nodes 1 and 6 are the input and output node, respectively. The dashed lines and dashed circles represent connections and nodes that have been removed. For example, the initial output of node 5 can be calculated by $\h^{(5)}=\MO^{(5)}(\sum_{j=1}^{4} \h^{(j)})$, while it becomes $\h^{(5)}=\MO^{(5)}(\h^{(2)}+\h^{(4)})$ for the pruned sub-graph.} \label{DAG} \vspace{-30pt} \end{wrapfigure} For a DAG with $T$ nodes, the output $\h^{(i)}$ of the $i$-th node can be calculated by transforming the sum of all the outputs of its predecessors, $\h^{(j)},j<i$, by the local operation $\MO^{(i)}$, namely: \begin{align} \label{equ:ori}\h^{(i)}=\MO^{(i)}(\sum_{j=1}^{i-1}\h^{(j)}). \end{align} The structure search problem can then be reformulated as an edge pruning problem. In the search procedure, we remove useless edges and nodes in the full DAG, leaving the most important structures. To achieve this goal, we apply scaling factors on every edge to scale the output of each node. Then Eqn.~\ref{equ:ori} can be modified to: \begin{align} \h^{(i)}=\MO^{(i)}(\sum_{j=1}^{i-1}\lambda_{(j)}^{(i)} \h^{(j)}), \end{align} where $\lambda_{(j)}^{(i)}$ is the scaling factor applied on the information flow from node $j$ to node $i$. We then apply sparse regularization on the scaling parameters to force some of them to zero during the search. Intuitively, if $\lambda_{(j)}^{(i)}$ is zero, the corresponding edge can be removed safely, and isolated nodes can also be pruned as they make no contribution. \subsection{Search Space} \label{section:search space} DSO-NAS can search the structure of each building block in a DNN and then share it across all blocks, just as previous works did.
It can also directly search the whole network structure without block sharing, while still keeping a competitive search time. In the following, we discuss the search space of each individual block first, and then specify the entire macro-structure. \begin{figure*}[tb] \centering \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[scale=0.22,angle=270]{figs/graph3_2_1modmod.pdf} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[scale=0.22,angle=270]{figs/graph3_2_2modmod.pdf} \end{minipage} } \subfigure[]{ \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[scale=0.22,angle=270]{figs/graph3_2_3modmod.pdf} \end{minipage} } \centering \caption{An example search block with two levels and two operations per level: (a) the completely connected block; (b) in the search process, we jointly optimize the weights of the neural network and the $\lam$ associated with each edge; (c) the final model after removing useless connections and operations.} \label{searching_method} \end{figure*} A block consists of $M$ sequential levels which are composed of $N$ different kinds of operations. In each block, every operation has connections with all the operations in the former levels and with the input of the block. Also, the output of the block is connected with all operations in the block. Then, for each connection, we scale its output by a multiplier $\lambda$ and impose a sparse regularization on it. After optimization, the final architecture is generated by pruning the connections whose corresponding $\lam$ are zero, together with all isolated operations. The procedure of the block search is illustrated in Fig. \ref{searching_method}. Formally, the output $\h_{(b,i,j)}$ of the $j$-th operation in the $i$-th level of the $b$-th block is computed as: \begin{align} \label{equ:h}\h_{(b,i,j)}=\MO_{(b,i,j)}(\sum_{m=1}^{i-1}\sum_{n=1}^{N}\lambda_{(b,m,n)}^{(i,j)}\h_{(b,m,n)} + \lambda^{(i,j)}_{(b,0,0)}\O_{(b-1)}), \end{align} where $\MO_{(b,i,j)}$ is the transformation of the $j$-th operation in the $i$-th level of the $b$-th block, $\lambda_{(b,m,n)}^{(i,j)}$ is the scaling factor from node $\h_{(b,m,n)}$ to $\h_{(b,i,j)}$, and $\O_{(b-1)}$ is the output of the $(b-1)$-th block. Here we denote $\h_{(b,0,0)} = \O_{(b-1)}$ as the input node and $\h_{(b,M+1,0)} = \O_{(b)}$ as the output node of the $b$-th block, respectively. An operation in the $m$-th level may thus have $(m-1)N + 1$ inputs. Note that the connections between the operations and the output of the block are also learnable. The output $\O_{(b)}$ of the $b$-th block is obtained by applying a reduction operation (concatenation followed by a convolution with kernel size 1$\times$1) $\MR$ to all the nodes that contribute to the output: \begin{align} \label{equ:O}\O_{(b)}=\MR\big(&\lambda_{(b,1,1)}^{(M+1,0)}\h_{(b,1,1)},\lambda_{(b,1,2)}^{(M+1,0)}\h_{(b,1,2)},\ldots,\lambda_{(b,m,n)}^{(M+1,0)}\h_{(b,m,n)},\ldots,\lambda_{(b,M,N)}^{(M+1,0)}\h_{(b,M,N)}\big)\notag\\ &+\O_{(b-1)}, \qquad m\in[1,M],\ n\in[1,N], \end{align} where an identity mapping is applied in case all operations are pruned. The structure of the whole network is shown in Fig.~\ref{structure}: the network consists of $S$ stages with $B$ convolution blocks in each stage. A reduction block is located at the end of each stage except the last. We try two search spaces: (1) the share search space, where $\lam$ is shared among blocks; (2) the full search space, where the $\lam$ in different blocks are updated independently.
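Equation (\ref{equ:O}) amounts to a $\lam$-weighted concatenation of the surviving nodes with an identity shortcut; a minimal NumPy sketch (illustrative only; \texttt{reduce\_op} is an assumed callable standing in for the concatenation-plus-1$\times$1-convolution $\MR$, and is expected to map back to the shape of the previous block output):
\begin{verbatim}
import numpy as np

def block_output(node_outputs, lams, reduce_op, prev_block_output):
    # O_b = R(lam_1 * h_1, ..., lam_MN * h_MN) + O_{b-1}
    kept = [lam * h for lam, h in zip(lams, node_outputs) if lam != 0.0]
    if not kept:                         # all operations pruned: identity
        return prev_block_output
    cat = np.concatenate(kept, axis=0)   # channel-wise concatenation (CHW)
    return reduce_op(cat) + prev_block_output
\end{verbatim}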
\begin{wrapfigure}{ltb}{5cm} \vspace{-10pt} \centering \includegraphics[angle=-90,scale=0.3]{figs/graph_structure.pdf} \caption{The structure of the network.} \label{structure} \vspace{-60pt} \end{wrapfigure} We use the Conv-BN-ReLU order for convolutional operations and adopt the following four kinds of operations in the convolution block: \begin{itemize} \item Separable convolution with kernel 3$\times$3 \item Separable convolution with kernel 5$\times$5 \item Average pooling with kernel 3$\times$3 \item Max pooling with kernel 3$\times$3 \end{itemize} As for the reduction block, we simply use convolutions with kernel sizes 1$\times$1 and 3$\times$3, applied with a stride of 2 to reduce the size of the feature map and double the number of filters. The output of the reduction block is calculated by adding the outputs of the two convolutions. The task of searching for blocks therefore reduces to learning $\lam$ on every edge, which can be formulated as: \begin{align} \min_{\W,\lam}\frac{1}{K}\sum_{i=1}^{K}\ML(\y_i,Net(\x_i,\W,\lam))+ \delta\|\W\|_F^2 + \gamma\|\lam\|_1, \end{align} where $\x_i$ and $\y_i$ are the input data and label respectively, $K$ denotes the number of training samples, and $\W$ represents the weights of the network. $\delta$ and $\gamma$ are the weights of the two regularization terms. \subsection{Optimization and Training} \label{section:opt and train} The sparse regularization of $\lam$ induces great difficulties in optimization, especially in the stochastic setting of DNN training. Though heuristic thresholding could work, such optimization is unstable and lacks theoretical analysis. Fortunately, the recently proposed Sparse Structure Selection (SSS) (\cite{SSS2018}) solves this challenging problem by modifying the theoretically sound Accelerated Proximal Gradient (APG) method, reformulating it to avoid redundant forward and backward passes when calculating the gradients: \begin{align} \label{equ:z}\z_{(t)}&=\lam_{(t-1)}-\eta_{(t)}\nabla \MG(\lam_{(t-1)})\\ \label{equ:v}\v_{(t)}&=\MS_{\eta_{(t)}\gamma}(\z_{(t)})-\lam_{(t-1)}+\mu_{(t-1)}\v_{(t-1)}\\ \label{equ:lam}\lam_{(t)}&=\MS_{\eta_{(t)}\gamma}(\z_{(t)})+\mu_{(t)}\v_{(t)} \end{align} where $t$ is the number of iterations, $\MS_{\eta_{(t)}\gamma}$ represents the soft-threshold operator $\MS_\alpha(\z)_i=\text{sign}(z_i)(|z_i|-\alpha)_{+}$, $\eta_{(t)}$ represents the gradient step size and $\mu$ is the momentum. In (\cite{SSS2018}), the authors named it APG-NAG. The weights $\W$ and $\lam$ are updated using NAG and APG-NAG jointly on the same training set. However, APG-NAG cannot be directly applied in our algorithm, since DNNs usually overfit the training data to some degree. Different from pruning, where the search space is usually quite limited, the search space in NAS is much larger and more diverse. If the structure is learned on such an overfitted model, it will generalize badly to the test set. To avoid this problem, we divide the training data into two parts: one training set for $\W$ and one for $\lam$. This configuration guarantees that $\lam$ (i.e. the network structure) is learned on a different subset of the training data, one that is not seen during the learning of $\W$. Therefore, the sample distribution in the structure learning is more similar to that in testing, which may lead to better performance. \subsection{Incorporating Different Budgets} \label{section:adaptive_technique} Hand-crafted networks usually incorporate much domain knowledge.
For example, as highlighted in (\cite{shufflenetv2}), memory access may be the bottleneck for lightweight networks on GPUs because of the use of separable convolutions. Our method can easily take such priors into account in the search by adaptively adjusting the $\gamma$ of each connection. The first example is to balance the FLOPs of each block. As indicated in (\cite{residual_connections}), the most intense changes of the main branch flow of ResNet are concentrated after reduction blocks. In our experiments, we empirically find that the complexity of the block after each reduction block is much higher than the others' if all $\gamma$ are fixed. Consequently, to balance the FLOPs among different blocks, we adjust the regularization weight $\gamma^{t}$ for the $\lam$ at iteration $t$ adaptively according to the FLOPs of the block: \begin{align} \label{equ:gamma_flops}\gamma^{t}=\frac{\text{FLOPs}^{t}}{\text{FLOPs}_{block}}\gamma, \end{align} where $\text{FLOPs}_{block}$ represents the FLOPs of the completely connected block and $\text{FLOPs}^{t}$, which can be calculated based on $\lam$, represents the FLOPs of the kept operations at iteration $t$ for a block. Using this simple strategy, we can smooth the distribution of FLOPs by penalizing $\lam$ according to the FLOPs of the block. We call this method \emph{Adaptive FLOPs} in the following. The second example is to incorporate a specific computation budget such as Memory Access Cost (MAC). Similarly, the $\gamma$ applied on the $n$-th operation in the $m$-th level at iteration $t$ is calculated by: \begin{align} \label{equ:gamma_mac}\gamma_{(m,n)}^t=\frac{\text{MAC}_{(m,n)}}{\text{MAC}_{max}}\gamma, \end{align} where $\text{MAC}_{(m,n)}$ represents the MAC of the $n$-th operation in the $m$-th level, and $\text{MAC}_{max}$ represents the maximum MAC in the network. Using this strategy, DSO-NAS can generate architectures with better performance under the same MAC budget. We call this method \emph{Adaptive MAC} in the following. \section{Experiments} \label{sec:experiments} In this section, we first introduce the implementation details of our method, followed by results on the CIFAR-10 and ImageNet datasets. At last, we analyze each design component of our method in detail. \subsection{Implementation Details} The pipeline of our method consists of three stages: \begin{enumerate} \item Training the completely connected network for several epochs to get a good weight initialization. \item Searching the network architecture from the pretrained model. \item Training the final architecture from scratch and evaluating it on the test dataset. \end{enumerate} In the first two stages, the scaling parameters in batch normalization layers are fixed to one to prevent them from affecting the learning of $\lam$. After step two, we adjust the number of filters in each operation by a global width multiplier to satisfy the computation budget, and then train the network from scratch as done in (\cite{pham2018efficient}). For benchmark, we test our algorithm on two standard datasets, CIFAR-10 (\cite{cifar}) and ImageNet LSVRC 2012 (\cite{imagenet}). We denote the models searched with and without block sharing as \emph{DSO-NAS-share} and \emph{DSO-NAS-full}, respectively. In each block, we set the number of levels $M = 4$ and the number of operations $N = 4$, as four kinds of operations are used for both the CIFAR and ImageNet experiments, as indicated in Sec.~\ref{section:search space}. For the hyper-parameters of the optimization algorithm and the weight initialization, we follow the settings of (\cite{SSS2018}).
\section{Experiments} \label{sec:experiments} In this section, we first introduce the implementation details of our method, followed by results on the CIFAR-10 and ImageNet datasets. At last, we analyze each design component of our method in detail. \subsection{Implementation Details} The pipeline of our method consists of three stages: \begin{enumerate} \item Training the completely connected network for several epochs to get a good weight initialization. \item Searching the network architecture starting from the pretrained model. \item Training the final architecture from scratch and evaluating it on the test dataset. \end{enumerate} In the first two stages, the scaling parameters in batch normalization layers are fixed to one to prevent them from affecting the learning of $\lam$. After step two, we adjust the number of filters in each operation by a global width multiplier to satisfy the computation budget, and then train the network from scratch as done in (\cite{pham2018efficient}). For benchmarking, we test our algorithm on two standard datasets, CIFAR-10 (\cite{cifar}) and ImageNet LSVRC 2012 (\cite{imagenet}). We denote the models searched with and without block sharing as \emph{DSO-NAS-share} and \emph{DSO-NAS-full}, respectively. In each block, we set the number of levels $M = 4$ and the number of operations $N = 4$, as the four kinds of operations indicated in Sec.~\ref{section:search space} are applied for both the CIFAR and ImageNet experiments. For the hyper-parameters of the optimization algorithm and weight initialization, we follow the settings of (\cite{SSS2018}). We set the weight decay of $\W$ to 0.0001. The momentum is fixed to 0.9 for both NAG and APG-NAG. All experiments are conducted in MXNet (\cite{chen2015mxnet}). We will release our code if the paper is accepted. \subsection{CIFAR} The CIFAR-10 dataset consists of 50000 training images and 10000 testing images. As described in Sec.~\ref{section:opt and train}, we divide the training data into two parts: 25000 images for training the weights and the remaining 25000 for the structure. During training, standard data pre-processing and augmentation techniques (\cite{xie2017aggregated}) are adopted. The mini-batch size is 128 on 2 GPUs. Firstly, we pre-train the full network for 120 epochs, and then search the network architecture until convergence (120 epochs), both with a constant learning rate of 0.1. The network adopted in the CIFAR-10 experiments consists of three stages, and each stage has eight convolution blocks and one reduction block. Adaptive FLOPs (see Sec.~\ref{section:adaptive_technique}) is applied in the search. The search costs about half a day on two GPUs. \begin{figure*}[tb] \centering \subfigure[CIFAR-10]{ \centering \includegraphics[scale=0.3,angle=270]{figs/graph4_cifar_block.pdf} \label{cifar_architecture} } \subfigure[ImageNet]{ \centering \includegraphics[scale=0.33,angle=270]{figs/graph4_img_block.pdf} \label{img_architecture} } \centering \caption{Block structures learned on different datasets.} \label{method} \end{figure*} After the search, we train the final model from scratch with the same settings as (\cite{pham2018efficient}). Additional improvements, including dropout (\cite{srivastava2014dropout}) with probability 0.6, cutout (\cite{cutout2017}) with size 16, drop path (\cite{droppath}) with probability 0.5, and an auxiliary tower located at the end of the second stage (\cite{DSN}) with weight 0.4, are also adopted during training. Table \ref{res_cifar10} shows the performance of our searched models, including DSO-NAS-full and DSO-NAS-share, where ``c/o'' indicates evaluation with the cutout technique. We report the mean and standard deviation of five independent runs. Due to limited space, we only show the block structure of DSO-NAS-share in Fig.~\ref{cifar_architecture}. We also compare with the simplest yet still effective baseline -- random structure; both DSO-NAS-share and DSO-NAS-full yield much better performance with fewer parameters. Compared with other state-of-the-art methods, our method demonstrates competitive results with similar or fewer parameters while costing only one GPU day.
\newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \begin{table}[tb] \caption{Comparison with state-of-the-art NAS methods on CIFAR-10.} \begin{center} \begin{tabular}{l c c c c} \hline Architecture & Test Error (\%) & Params(M) & \tabincell{c}{Search Cost\\(GPU days)} & \tabincell{c}{Search \\Method}\\ \hline DenseNet & 3.46 & 25.6 & - & manual\\ \hline NASNet-A + c/o (\cite{zoph2017learning}) & 2.65 & 3.3 & 1800 & RL\\ AmoebaNet-A (\cite{real2018regularized}) & 3.34 & 3.2 & 3150 & evolution\\ AmoebaNet-B + c/o (\cite{real2018regularized}) & 2.55 & 2.8 & 3150 & evolution \\ PNAS (\cite{liu2017progressive}) & 3.41 & 3.2 & 150 & SMBO\\ ENAS + c/o (\cite{pham2018efficient}) & 2.89 & 4.6 & 0.5 & RL \\ DARTS + c/o (\cite{liu2018darts}) & 2.83 & 3.4 & 4 & gradient \\ \hline DSO-NAS-share+c/o & 2.84 $\pm$ 0.07 & 3.0 & 1 & gradient \\ DSO-NAS-full+c/o & 2.95 $\pm$ 0.12 & 3.0 & 1 & gradient \\ random-share + c/o & 3.58 $\pm$ 0.21 & 3.4 $\pm$ 0.1 & - & - \\ random-full + c/o & 3.52 $\pm$ 0.19 & 3.5 $\pm$ 0.1 & - & - \\ \hline \end{tabular} \end{center} \label{res_cifar10} \end{table} \subsection{ILSVRC 2012} In the ILSVRC 2012 experiments, we conduct data augmentation based on the publicly available implementation of `fb.resnet'\footnote{https://github.com/facebook/fb.resnet.torch}. Since this dataset is much larger than CIFAR-10, the training dataset is divided into two parts: $4/5$ for training the weights and $1/5$ for training the structure. In the pre-training stage, we train the whole network for 30 epochs with learning rate 0.1 and weight decay $4\times10^{-5}$. The mini-batch size is 256 on 8 GPUs. The same setting is adopted in the search stage, which costs about 0.75 days with 8 GPUs. After the search, we train the final model from scratch for 240 epochs, with batch size 1024 on 8 GPUs. We set the weight decay to $4\times10^{-5}$ and adopt a linear-decay learning rate schedule (linearly decreased from 0.5 to 0). Label smoothing (\cite{szegedy2016rethinking}) and an auxiliary loss (\cite{DSN}) are used during training. There are four stages in the ImageNet network, and the numbers of convolution blocks in these four stages are 2,~2,~13,~6, respectively. We first transfer the block structure searched on CIFAR-10. We also directly search the network architecture on ImageNet. The final structure generated by DSO-NAS-share is shown in Fig.~\ref{img_architecture}. The quantitative results for ImageNet are shown in Table~\ref{imagenet}, where results marked with * are obtained by transferring the generated CIFAR-10 block to ImageNet.
\begin{table}[tb] \caption{Comparison with state-of-the-art image classifiers on ImageNet.} \begin{center} \begin{tabular}[h]{c c c c c} \hline Architecture & Top-1/5 Err. (\%) & Params(M) & FLOPs(M) &\tabincell{c}{Search Cost\\(GPU days)} \\ \hline Inception-v1 \cite{szegedy2015going} & 30.2/10.1 & 6.6 & 1448 & - \\ MobileNet \cite{howard2017mobilenets} & 29.4/10.5 & 4.2 & 569 & - \\ ShuffleNet-v1 2x \cite{shufflenet} & 26.3/10.2 & $\sim$5 & 524 & - \\ ShuffleNet-v2 2x \cite{shufflenetv2} & 25.1/- & $\sim$5 & 591 & - \\ NASNet-A* \cite{zoph2017learning} & 26.0/8.4 & 5.3 & 564 & 1800 \\ AmoebaNet-C* \cite{real2018regularized} & 24.3/7.6 & 6.4 & 570 & 3150 \\ PNAS* \cite{liu2017progressive} & 25.8/8.1 & 5.1 & 588 & 150\\ OSNAS \cite{OSNAS2018} & 25.8/- & 5.1 & - & - \\ DARTS* \cite{liu2018darts} & 26.9/9.0 & 4.9 & 595 & 4\\ \hline DSO-NAS* & 26.2/8.6 & 4.7 & 571 & 1 \\ DSO-NAS-full & 25.7/8.1 & 4.6 & 608 & 6\\ DSO-NAS-share & 25.4/8.4 & 4.8 & 586 & 6\\ \hline \end{tabular} \end{center} \label{imagenet} \end{table} It is notable that, given a similar FLOPs constraint, DSO-NAS achieves competitive or better performance than other state-of-the-art methods with lower search cost and fewer parameters. The block structure transferred from the CIFAR-10 dataset also achieves decent performance, demonstrating the generalization capability of the searched architecture. Moreover, directly searching on the target dataset (ImageNet) brings additional improvements. \emph{This is the first time that NAS can be directly applied on large-scale datasets like ImageNet.} \subsection{Ablation Study} In this section, we present ablation analyses of our method to illustrate the effectiveness and necessity of each component. \subsubsection{The Effectiveness of Budget Aware Search} \begin{figure*}[tb] \vspace{-5mm} \centering \subfigure[Distribution of FLOPs]{ \centering \includegraphics[scale=0.27]{figs/figure-flops.pdf} \label{distribution_of_flops} } \subfigure[Err./FLOPs for adaptive FLOPs]{ \centering \includegraphics[scale=0.27]{figs/graph_adflops.pdf} \label{flop-vs-acc} } \subfigure[Err./Params. for adaptive FLOPs]{ \centering \includegraphics[scale=0.27]{figs/graph_adparam.pdf} \label{param-vs-acc} } \centering \caption{Performance of the adaptive FLOPs technique.} \label{distribution} \end{figure*} \begin{wrapfigure}{ltb}{5cm} \centering \vspace{-20pt} \includegraphics[scale=0.33]{figs/graph_admac.pdf} \caption{Performance of the adaptive MAC technique.} \label{mac-vs-acc} \vspace{-30pt} \end{wrapfigure} With the adaptive FLOPs technique, the weight of the sparse regularization for each block is changed adaptively according to Eqn.~\ref{equ:gamma_flops}. We first show the distribution of FLOPs among different blocks in Fig.~\ref{distribution_of_flops}. As expected, this strategy prevents some blocks from being pruned entirely. We also show the error rates of different settings in Fig.~\ref{flop-vs-acc} and Fig.~\ref{param-vs-acc}. It is clear that the networks searched with the adaptive FLOPs technique are consistently better than the ones without, under the same total FLOPs or parameters. DSO-NAS can also search architectures based on a specific computational target, such as the MAC discussed in Sec.~\ref{section:adaptive_technique}. The results are shown in Fig.~\ref{mac-vs-acc}. It is clear that DSO-NAS can generate architectures with higher accuracy under a given MAC budget, demonstrating the effectiveness of the adaptive MAC technique. The method can similarly be applied to optimize many other computation budgets of interest, which we leave for future study.
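As an aside, the text does not spell out how the per-operation MAC is computed; a common estimate for a $1\times1$ convolution, following the analysis of (\cite{shufflenetv2}), counts reads of the input feature map and weights plus writes of the output. A sketch under that assumption (the helper name is ours):
\begin{verbatim}
def mac_conv1x1(h, w, c_in, c_out):
    # Memory access cost of a 1x1 convolution on an h x w feature map:
    # read input (h*w*c_in), write output (h*w*c_out),
    # read weights (c_in*c_out).
    return h * w * (c_in + c_out) + c_in * c_out

# e.g. a 28x28 feature map mapped from 64 to 128 channels:
print(mac_conv1x1(28, 28, 64, 128))  # 158720
\end{verbatim}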
\subsubsection{Other Factors For Searching Architecture} We conduct experiments on different settings of our proposed architecture search method to justify the need for each component we designed. The results are shown in Table~\ref{com_cifar10}. \begin{table}[tb] \caption{Comparison of different settings on the CIFAR-10 dataset.} \begin{center} \begin{tabular}{c c c c c c} \hline Search space & Split training & Pre-train model & Ratio of W\&S & Params(M) & Test Error \\ \hline full & \checkmark & & 1:1 & 2.9 & 3.26 $\pm$ 0.08\\ full & \checkmark & \checkmark & 1:1& 3.0 & 3.02 $\pm$ 0.09\\ full & \checkmark & \checkmark & 4:1 & 2.9 & 3.05 $\pm$ 0.09\\ full & & \checkmark & - & 3.0 & 3.20 $\pm$ 0.10\\ \hline share & \checkmark & & 1:1 & 3.0 & 3.07 $\pm$ 0.08\\ share & \checkmark & \checkmark & 1:1 & 3.0 & 2.86 $\pm$ 0.09\\ share & \checkmark & \checkmark & 4:1 & 2.9 & 2.89 $\pm$ 0.06\\ share & & \checkmark & - & 3.0 & 3.14 $\pm$ 0.06\\ \hline \end{tabular} \end{center} \label{com_cifar10} \end{table} ``Pre-train model'' indicates whether we conduct step one in Sec.~\ref{sec:experiments}, while ``Split training'' indicates whether we split the whole training set into two sets for weight and structure learning separately. ``Ratio of W\&S'' is the ratio of training samples used for weight learning versus structure learning; for a ratio of $x:y$, we update the weights $x$ times and update $\lam$ $y$ times in every $x+y$ iterations. Note that we only pre-train the model on the weight-learning set. It is notable that the use of a separate set for structure learning plays an important role in preventing overfitting of the training data, improving performance by 0.2\%. The ratio of these two sets has minor influence. Besides, a good weight initialization is also crucial, as random initialization of the weights may lead to another 0.2\% drop in accuracy under the same parameter budget. \section{Discussions} \label{sec:discussions} The loss of FitNet can be defined as: \begin{equation}\label{equ:FN} \mathcal{L}_{FN}(\textbf{W}_S)=\mathcal{H}(\bm{y}_{true},\bm{p}_S^1)+ \frac{\lambda}{2}||V(\bm{F}_T)-V(R(\bm{F}_S;\textbf{W}_R))||^2, \end{equation} where $R(\bm{F}_S;\textbf{W}_R)$ is a convolutional regressor which converts $\bm{F}_S$ from $\mathbb{R}^{C_S \times H \times W}$ to $\mathbb{R}^{C_T \times H \times W}$ if $C_T \neq C_S$, and otherwise is an identity mapping, and $V(\cdot)$ is a vectorization operation. The loss of AT can be formulated as follows: \begin{equation}\label{equ:AT} \mathcal{L}_{AT}(\textbf{W}_S)=\mathcal{H}(\bm{y}_{true},\bm{p}_S^1)+ \frac{\lambda}{2}||V(A(\bm{F}_T))-V(A(\bm{F}_S))||^2, \end{equation} where $A(\bm{F})$ is an attention mapping which has several definitions \cite{Zagoruyko2017AT}, such as the sum of absolute values, defined by: \begin{equation}\label{equ:sumabs} A_{abssum}(\bm{F})=\sum_{k=1}^{C}{|\bm{F}^k|}. \end{equation} Substituting Eq.~\ref{equ:sumabs} into Eq.~\ref{equ:AT}, we get: \begin{equation}\label{equ:sumabsAT} \mathcal{L}_{AT_{S}}(\textbf{W}_S)=\mathcal{H}(\bm{y}_{true},\bm{p}_S^1)+ \frac{\lambda}{2}||\sum_{i=1}^{C_T}{|\bm{f}_T^i|}-\sum_{j=1}^{C_S}{|\bm{f}_S^j|}||^2. \end{equation} \section{Conclusions and Future Work} \label{sec:conclusions} Neural Architecture Search has been the core technology for realizing AutoML. In this paper, we have proposed a Direct Sparse Optimization method for NAS.
Our method is appealing to both academic research and industrial practice in two aspects: First, our unified weight and structure learning method is fully differentiable, in contrast to most previous works; it provides a novel model-pruning view of the NAS problem. Second, the induced optimization method is both efficient and effective. We have demonstrated state-of-the-art performance on both the CIFAR and ILSVRC2012 image classification datasets, with affordable cost (a single machine in one day). In the future, we would like to incorporate hardware features for network co-design, since the actual running speed of the same network may vary greatly across different hardware because of cache size, memory bandwidth, etc. We believe our proposed DSO-NAS opens a new direction for pursuing such objectives, and could push AutoML a step further toward everyone's use.
\section{Introduction \& summary of results} Lately, there has been an increasing interest in combining linear and dependent types \cite{schreiber2014quantization}, \cite{krishnaswami}, \cite{vakar14}, \cite{nothing}. The idea is that such a theory would inherit the higher-order nature of dependent types, while maintaining a careful account of how assumptions are used in a derivation. It is not completely clear, however, what the synthesis should look like, since in dependent type theory, variables may appear in both terms and types, but linear type theory only allows each variable to appear freely exactly once. Here, we take an approach inspired by \cite{krishnaswami} and \cite{vakar14}, in which we distinguish between non-linear, dependent types (which we call \textit{cartesian}) and linear types, and circumvent the issue by only allowing cartesian terms to appear in types (both cartesian and linear). The theory splits contexts into two parts, divided by a semicolon, where the first part contains cartesian assumptions, for which weakening and contraction are admissible, while the second part contains linear assumptions, for which only exchange is allowed. We introduce two new type formers, $\sqcap_{x : A}B$ and $\sqsubset_{x : A}B$, akin to $\Pi$ and $\Sigma$, but where the dependent type $B$ (and therefore the resulting construct) is linear. The traditional $!$ modality is deconstructed as a comonad arising from the adjoint pair $L \dashv M$, where $L$ is a functor (or modality) sending cartesian types to linear ones, and $M$ sends linear types to cartesian ones. We have $\Pi_{x : A}B_M \cong (\sqcap_{x :A}B)_M$ for linear $B$, and, assuming a few additional rules, a linear isomorphism $(\Sigma_{x :A}C)_L \cong \sqsubset_{x :A}C_L$ for cartesian $C$. Compared to ordinary dependent type theory, we get additional elimination and computation rules for both $\Sigma$- and $Id$-types when eliminating into a linear type. We postulate the existence of two universes, $L$ and $U$, containing codes of linear and cartesian types, respectively, assumed to be closed under all type formers. We develop categorical semantics for the theory by defining a model as a \textit{comprehension category} \cite{jacobs}, $\pi : \mathcal{T} \to \mathcal{C}^\to$, equipped with a \textit{split symmetric monoidal fibration} $q : \mathcal{L} \to \mathcal{C}$ over the same base. A split symmetric monoidal fibration has just enough structure to make the fibers $\mathcal{L}_\Gamma$ over a context $\Gamma \in \mathcal{C}$ into symmetric monoidal categories, and reindexing functors into (strict) monoidal functors. The traditional linear type formers $\&, \oplus, 0, \top, \multimap$ correspond to the existence of binary products and coproducts, initial and terminal objects and internal homs in each fiber, such that these are preserved under reindexing. The new type formers $\sqcap_{x : A}B$ and $\sqsubset_{x :A}B$ correspond to right and left adjoints to the reindexing functor $\pi_A^* : \mathcal{L}_{\Gamma} \to \mathcal{L}_{\Gamma.A}$, while the modalities $L$ and $M$ give rise to a fiber adjunction between $\mathcal{L}$ and $\mathcal{T}$. The new rules for $\Sigma$ are automatically satisfied by the semantic interpretation of $\Sigma_A$ as a left adjoint to the reindexing functor $\pi_A^* : \mathcal{T}_\Gamma \to \mathcal{T}_{\Gamma.A}$. The new rules for $Id$-types impose an additional condition on the semantic interpretation of $Id$, which is always fulfilled if our identity types are extensional.
We consider two concrete models, the first being the \textit{families model}, in which a cartesian type consists of a family of sets indexed by its context set $\Gamma$, and a linear type in the context $\Gamma$ is a $\Gamma$-indexed family of objects in a given symmetric monoidal category $\mathcal{V}$. Examples of suitable $\mathcal{V}$ supporting all type formers present in our syntax are $\mathbf{AbGrp}$, $\mathbf{CGTop}_*$ and $\mathbf{Vect}_F$, i.e. the category of abelian groups, the category of compactly generated, pointed topological spaces and the category of vector spaces over a field $F$, respectively. Generalizing the families model, we get the \textit{diagrams model}, in which contexts are interpreted as groupoids, cartesian types over a groupoid $\Gamma$ are diagrams in $\mathbf{Gpd}$ over $\Gamma$, and linear types over $\Gamma$ are diagrams in a given symmetric monoidal category $\mathcal{V}$ over $\Gamma$. Just as the groupoid model \cite{hofmann1998} can be shown to support a univalent universe, we construct a linear analogue of the univalence axiom and show that it holds in the diagrams model if the adjunction $L \dashv M$ factors through sets. \section{Syntax}\label{syntax} As cartesian type formers, we use the standard $\Sigma$, $\Pi$, and identity type formers as well as the universe types $U$ and $L$ of cartesian and linear types, respectively. The purely linear part of our type theory contains all the type formers of intuitionistic linear logic: the additive connectives $\&,\oplus,\mathbf{0},\top$ and the multiplicatives $\otimes,\mathbf{1},\multimap$. In addition to these, we have the new type formers $\sqsubset$, $\sqcap$, which play a role analogous to that of $\Sigma$ and $\Pi$ in the cartesian setting. Finally, we have the two modalities, $M$ and $L$, which turn linear types into cartesian ones and vice versa. A detailed presentation of our syntax can be found in \cite{MLundfall}. For the familiar, ``purely'' dependent or linear type formers, our presentation offers no significant surprises, except for a couple of additional rules for $\Sigma$ and the identity type. Therefore, we focus on presenting the syntax for the new type formers $\sqsubset$, $\sqcap$ and the modalities $M$ and $L$. \subsection{Auxiliary elimination rules}\label{newElim} Besides the traditional rules for $\Pi$, $\Sigma$ and the identity type, we find that since we can now eliminate into linear types, we must introduce an extra elimination and computation rule for each of $\Sigma$ and the identity type. These additional rules are presented alongside the traditional elimination rules in Figure \ref{elimfig}.
\begin{figure} \fbox{ \begin{minipage}{.40\linewidth} \footnotesize \[ \inference[{\footnotesize$\Sigma$-E$_1$}]{\Gamma, t : \Sigma_{x : A}B \vdash C \type\\ \Gamma, x : A, y : B \vdash c : C[(x, y)/t]\\ \Gamma \vdash s : \Sigma_{x : A}B} {\Gamma \vdash \hat c[s] : C[s/t]} \]\\ \[ \inference[{\footnotesize =-E$_1$}]{\Gamma, x, y : A, p : x =_Ay \vdash C \type\\ \Gamma, z : A \vdash c: C[z/x, z/y, \text{refl}(z)/p]\\ \Gamma \vdash M, N : A\\ \Gamma \vdash P : M =_A N} {\Gamma \vdash R^{Id} : C[M/x, N/y, P/p]} \] \end{minipage} \begin{minipage}{.60\linewidth} \footnotesize \[ \inference[{\footnotesize $\Sigma$-E$_2$}]{\Gamma, t : \Sigma_{x : A}B \vdash C \linear\\ \Gamma, x : A, y : B; \Xi \vdash c : C[(x, y)/t]\\ \Gamma \vdash s : \Sigma_{x : A}B} {\Gamma; \Xi[pr_1(s)/x][pr_2(s)/y] \vdash \hat c[s] : C[s/t]} \]\\ \[ \inference[{\footnotesize =-E$_2$}]{\vdash \Gamma, x, y : A, p : x =_Ay; \Xi \text{ ctxt}\\ \Gamma, x, y : A, p : x =_Ay \vdash C \linear\\ \Gamma, z : A; \Xi[z/x, z/y, \text{refl}(z)/p] \vdash c: C[z/x, z/y, \text{refl}(z)/p]\\ \Gamma \vdash M, N : A\\ \Gamma \vdash P : M =_A N} {\Gamma; \Xi[M/x, N/y, P/p] \vdash R^{Id} : C[M/x, N/y, P/p]} \] \end{minipage} } \caption{Elimination rules for $\Sigma$ and $Id$} \label{elimfig} \end{figure} \subsection{The modalities $M$ and $L$}\label{syntLandM} We introduce the two modal operators $M$ and $L$, which transfer a linear type/term to its cartesian counterpart and vice versa. Semantically, this will establish a fiberwise monoidal adjunction between the categories of linear and cartesian types: \[ \begin{tikzcd} \mathcal{L}_{\Gamma} \ar[r, "M"{name=A, below}, bend right] & \mathcal{T}_{\Gamma} \ar[l, "L"{name=B, above}, bend right] \ar[from=A, to=B, symbol=\vdash] \end{tikzcd} \] where the exponential modality from traditional linear logic is understood as the comonad $! = LM$. The decomposition of the exponential into an adjunction goes back to at least \cite{benton1995mixed}, and is cast in an interesting new light in \cite{licata2017fibrational}, where it is seen as a particular case of a more general procedure of encoding structure in contexts. The rules for the operators $M$ and $L$ are presented in Figure \ref{MLfig}.
\begin{figure} \fbox{ \small \begin{minipage}{.6\linewidth} \[ \inference{\Gamma \vdash A \type}{\Gamma \vdash A_L \linear}[L-F] \] \[ \inference{\Gamma \vdash a : A}{\Gamma ; \cdot \vdash a_L : A_L}[L-I] \] \[ \inference{(\Gamma \vdash B \linear) \\ (\vdash \Gamma; \Xi' \text{ ctxt})\\ \Gamma; \Xi \vdash y : A_L \quad \Gamma, x : A; \Xi' \vdash t : B}{\Gamma; \Xi, \Xi' \vdash \text{let $x$ be $y$ in }t : B}[L-E] \] \[ \inference{\Gamma; \Xi \vdash \text{let $x$ be $s_L$ in }t : B}{\Gamma; \Xi \vdash \text{let $x$ be $s_L$ in }t \equiv t[s/x] : B}[L-C] \] \[ \inference{\Gamma; y : A_L, \Xi \vdash t : B \\ \Gamma; \Xi' \vdash a : A_L } {\Gamma; \Xi, \Xi' \vdash \text{let $x$ be $a$ in $t[x_L/y]$} \equiv t[a/y] : B}[L-U] \] \end{minipage} \begin{minipage}{.4\linewidth} \small \[ \inference{\Gamma \vdash B \linear}{\Gamma \vdash B_M \type}[M-F] \] \[ \inference{\Gamma ; \cdot \vdash b : B}{\Gamma \vdash \sigma(b) : B_M}[M-I] \] \[ \inference{ \Gamma \vdash t : B_M\\ }{\Gamma; \cdot \vdash \sigma^{-1}(t) : B \\}[M-E] \] \[ \inference{ \Gamma \vdash \sigma(b) : B_M\\ }{\Gamma; \cdot \vdash \sigma^{-1}(\sigma(b)) \equiv b : B \\}[M-C$_1$] \] \[ \inference{ \Gamma; \cdot \vdash \sigma^{-1}(t) : B\\ }{\Gamma \vdash \sigma(\sigma^{-1}(t)) \equiv t : B_M \\}[M-C$_2$] \] \end{minipage} } \caption{Typing rules for $M$ and $L$} \label{MLfig} \end{figure} The interpretation of $L$ and $M$ as an adjoint pair is already present at the syntactic level. We can show that they form instances of a Haskell-like \textbf{Functor} class, by constructing terms $\textup{\texttt{fmapM}} : (A \multimap B)_M \to A_M \to B_M$ and $\textup{\texttt{fmapL}} : L(A \to B) \multimap (LA \multimap LB)$ satisfying the functor laws. Furthermore, we can construct a ``counit'' $\epsilon : LM \implies 1$ satisfying the universal property of an adjunction (thanks to L-U). The syntactic formulation of the statement becomes: \begin{theorem}[$L \dashv M$] There is a term $\Gamma; \beta_1 : B_{LM} \vdash \epsilon_B: B$ with the following property:\\ For any term $\Gamma; y : A_L \vdash f : B$, there is a unique term $\Gamma, x : A \vdash g : B_{M}$ such that $\Gamma; y : A_L \vdash \epsilon_B[\text{let $x$ be $y$ in }g_L/\beta_1] \equiv f : B$. \end{theorem} Based on this knowledge we expect the right adjoint $M$ to preserve limits, and indeed we find an isomorphism $A_M \times B_M \cong (A \& B)_M$. We can now also reformulate some common results about the exponential modality using $! = LM$, such as $(A \& B)_{LM} \cong A_{LM} \otimes B_{LM}$.
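Anticipating the semantics of Section \ref{semantics}, the isomorphism $A_M \times B_M \cong (A \& B)_M$ is an instance of the familiar fact that right adjoints preserve products; we sketch the routine argument for orientation. For any cartesian $X$, the adjunction $L \dashv M$ and the universal property of $\&$ give natural bijections
\[
\mathcal{T}_\Gamma(X, (A \& B)_M) \cong \mathcal{L}_\Gamma(X_L, A \& B) \cong \mathcal{L}_\Gamma(X_L, A) \times \mathcal{L}_\Gamma(X_L, B) \cong \mathcal{T}_\Gamma(X, A_M) \times \mathcal{T}_\Gamma(X, B_M),
\]
so $(A \& B)_M$ has the universal property of the product $A_M \times B_M$, and the isomorphism follows by the Yoneda lemma.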
\subsection{$\sqcap$ and $\sqsubset$} Since we allow linear types to depend on terms of cartesian types, we can form new versions of the $\Pi$- and $\Sigma$-types, denoted $\sqcap$ and $\sqsubset$, respectively. The typing rules for these are presented in Figure \ref{clawhousefig}. \begin{figure} \fbox{ \begin{minipage}{.47\linewidth} \small \[ \inference{\Gamma \vdash A \type \quad \Gamma, x : A \vdash B \linear}{\Gamma \vdash \sqcap_{x : A}B \linear}[$\sqcap$-F] \]\\ \[ \inference{\vdash \Gamma; \Xi \text{ ctxt}\\ \Gamma, x : A; \Xi \vdash b : B } {\Gamma; \Xi \vdash \lambda x. b : \sqcap_{x: A}B}[$\sqcap$-I] \]\\ \[ \inference{\Gamma; \Xi \vdash t : \sqcap_{x : A}B \quad \Gamma \vdash a : A} {\Gamma; \Xi \vdash t(a) : B[a/x]}[$\sqcap$-E] \]\\ \[ \inference{\Gamma; \Xi \vdash \lambda x. b (a) : \sqcap_{x : A}B} {\Gamma; \Xi \vdash \lambda x. b (a) \equiv b[a/x] : B[a/x]}[$\sqcap$-C] \] \end{minipage} \begin{minipage}{.53\linewidth} \small \[ \inference{\Gamma \vdash A \type \quad \Gamma, x : A \vdash B \linear}{\Gamma \vdash \sqsubset_{x : A}B \linear}[$\sqsubset$-F] \]\\ \[ \inference{\Gamma \vdash s : A \quad \Gamma; \Xi \vdash b : B[s/x]}{\Gamma; \Xi \vdash (s, b) : \sqsubset_{x : A}B}[$\sqsubset$-I] \]\\ \[ \inference{\vdash \Gamma;\Xi' \text{ ctxt}\\ \Gamma, x : A \vdash C \linear\\ \Gamma; \Xi \vdash t : \sqsubset_{x : A}B \quad \Gamma, x : A; \Xi', y : B \vdash c : C}{\Gamma; \Xi, \Xi' \vdash \text{let $x, y$ be $t$ in c : C}}[$\sqsubset$-E] \]\\ \[ \inference{\Gamma; \Xi \vdash \text{let $x, y$ be $(s, t)$ in c : C}} {\Gamma; \Xi \vdash \text{let $x, y$ be $(s, t)$ in c} \equiv c[s/x][t/y] : C}[$\sqsubset$-C] \] \end{minipage} } \caption{Typing rules for $\sqsubset$ and $\sqcap$} \label{clawhousefig} \end{figure} The sense in which $\sqcap$ and $\sqsubset$ are ``linear analogues'' of $\Pi$ and $\Sigma$ can be formalized in the following way: \begin{proposition}\label{M-sqcap} For all $\Gamma \vdash A \type$ and $\Gamma, x : A \vdash B \linear$, there is an isomorphism: \[ \Pi_{x : A}B_M \cong (\sqcap_{x : A}B)_M \] \end{proposition} We would like to show a similar result relating $\Sigma$ and $\sqsubset$, but for this we need a couple of additional rules. First, we assume uniqueness rules for $\Sigma$ and $\sqsubset$, asserting that the elimination rule followed by the introduction rule is the identity. In other words, for any $p : \Sigma_{x : A}B$ and $q : \sqsubset_{x : C}D$, we have $(pr_1(p),pr_2(p)) \equiv p$ and $\text{let $x, y$ be $q$ in $(x, y)$} \equiv q$.\footnote{The former is provable as a propositional identity \cite[Corollary~2.7.3]{hott-book}. Perhaps it is possible to obtain a similar result for $\sqsubset$, using the ``surrogate equality'' described at the end of Section \ref{syntLandM}.} Second, we assume a kind of naturality rule for the $L$ modality: \[ \inference{ \Gamma; \Xi, y : B \vdash e : C\\ \Gamma, x : A; \Xi'\vdash u : B\\ \Gamma; \Xi'' \vdash t : A_L } { \Gamma; \Xi, \Xi', \Xi'' \vdash e[\text{let $x$ be $t$ in $u$}/y] \equiv \text{let $x$ be $t$ in $e[u/y]$} : C}[Nat$_L$] \] \begin{proposition}\label{L-subset} Assuming Nat$_L$ and the uniqueness rules for $\Sigma$ and $\sqsubset$, there is a linear isomorphism\footnote{Here a linear isomorphism, $A \cong B$, means a pair $f : A \multimap B$, $g : B \multimap A$ such that the composites are judgmentally equal to the identity. We discuss the weaker notion of linear equivalence in Section \ref{highermodel}.} \[ (\Sigma_{x :A}B)_L \cong \sqsubset_{x:A}B_L \] \end{proposition} As outlined in Section \ref{semtypform}, the semantic interpretation of the type formers $\Pi$, $\sqcap$ and $\Sigma$, $\sqsubset$ is as right and left adjoints to reindexing functors, respectively.
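Concretely, this adjoint description unfolds to natural isomorphisms of hom-sets (a standard reformulation of adjointness, recorded here for reference):
\[
\mathcal{T}_\Gamma(\Sigma_A B, \Xi) \cong \mathcal{T}_{\Gamma.A}(B, \pi_A^*\Xi), \qquad \mathcal{T}_{\Gamma.A}(\pi_A^*\Xi, B) \cong \mathcal{T}_\Gamma(\Xi, \Pi_A B),
\]
\[
\mathcal{L}_\Gamma(\sqsubset_A B, \Xi) \cong \mathcal{L}_{\Gamma.A}(B, \pi_A^*\Xi), \qquad \mathcal{L}_{\Gamma.A}(\pi_A^*\Xi, B) \cong \mathcal{L}_\Gamma(\Xi, \sqcap_A B).
\]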
Based on this interpretation we can understand these equivalence results through the diagram: \[ \begin{tikzcd}[row sep=huge, column sep=large] \mathcal{L}_{\Gamma.A} \ar[d] \ar[r, "M_{\Gamma.A}"{name=C, above}, bend right]& \mathcal{T}_{\Gamma.A} \ar[l, "L_{\Gamma.A}"{name=D, below}, bend right, swap] \ar[from=D, to=C, symbol=\dashv] \ar[d]\\ \mathcal{L}_\Gamma \ar[u, "\sqcap_A"{name=A}, bend right,swap] \ar[u, "\sqsubset_A"{name=B}, bend left] \ar[r, "M_{\Gamma}"{name=C, above}, bend right] \ar[from=B, to=A, symbol=\dashv,xshift=-5] \ar[from=B, to=A, symbol=\dashv,xshift=4] & \mathcal{T}_\Gamma \ar[l, "L_{\Gamma}"{name=D, below}, bend right,swap] \ar[u, "\Pi_A"{name=F}, bend right,swap] \ar[u, "\Sigma_A"{name=E}, bend left] \ar[from=D, to=C, symbol=\dashv] \ar[from=E, to=F, symbol=\dashv,xshift=-5] \ar[from=E, to=F, symbol=\dashv,xshift=4] \end{tikzcd} \] \section{Semantics}\label{semantics} \subsection{Structural semantic core} Our semantic exploration of linear dependent type theory begins with the notion of a model. For the cartesian fragment of our theory, we follow \cite{jacobs} and ask for a \textit{comprehension category}, $\pi : \mathcal{T} \to \mathcal{C}^\to$, where $\mathcal{C}$ is a category of contexts with a terminal object, and the fibers $\mathcal{T}_\Gamma$ contain the cartesian types over $\Gamma$. For the linear fragment of our theory, we would like a fibration $q : \mathcal{L} \to \mathcal{C}$ where each fiber $\mathcal{L}_\Gamma$ is a symmetric monoidal category and the reindexing functors are symmetric monoidal. This is captured in the notion of a \textit{(lax) monoidal fibration}: \begin{definition} A \textbf{lax monoidal fibration} \cite{zawadowski} is a fibration $p : E \to B$ along with \begin{enumerate} \item Two functors $\otimes : E \times_{B} E \to E$ and $I : B \to E$ fitting into the following diagram: \[ \begin{tikzcd} E \times_{B} E \ar[r, "\otimes"] \ar[rd] & E \ar[d, "p"] & B \ar[l, "I",swap] \ar[ld, "1_B"] \\ & B & \end{tikzcd} \] \item Three fibred natural isomorphisms $\alpha, \lambda$ and $\rho$ associated with the diagrams: \[ \begin{tikzcd} E \times_B E \times_B E \ar[r, "1_E \times_B \otimes"] \ar[d, "\otimes \times_B 1_E",swap] & E \times_B E \ar[d, "\otimes", swap] \\ E \times_B E \ar[r, "\otimes"] \ar[ru, "\alpha", Rightarrow, shorten <=10pt,shorten >=10pt] & E \end{tikzcd} \] and \[ \begin{tikzcd} B \times_B E \ar[r, "I \times_B 1_{E}"] \ar[rdd, "\pi_2",swap] & E \times_B E \ar[dd, "\otimes",swap] & E \times_B B \ar[ldd, "\pi_1"] \ar[l, "1_E \times I",swap] \\ \ar[r, "\lambda", xshift=10 pt, Rightarrow, shorten <=20pt, shorten >=20pt] & {}& \ar[l, "\rho", Rightarrow, xshift=-10pt, shorten <=20pt, shorten >=20pt,swap] \\ & E & \end{tikzcd} \] \item such that $\alpha$, $\lambda$ and $\rho$ satisfy the pentagon and triangle identities in each fiber, \item and for every $b \in B$, $\rho_{I_b} = \lambda_{I_b} : I_b \otimes I_b \to I_b$. \end{enumerate} \end{definition} To avoid any coherence problems, we require both the comprehension category and the monoidal fibration to be \textit{split}, i.e. the chosen reindexings compose strictly: $\mathrm{id}^* = \mathrm{id}$ and $(g \circ f)^* = f^* \circ g^*$.
\begin{definition} A \textbf{model for linear dependent type theory} consists of a split comprehension category $\pi : \mathcal{T} \to \mathcal{C}^\to$ and a split symmetric monoidal fibration $q : \mathcal{L} \to \mathcal{C}$, as illustrated in the following picture: \[ \begin{tikzcd} \mathcal{L} \ar[rd, "q"] & \mathcal{T} \ar[d, "p"] \ar[r, "\pi"] & \mathcal{C^\rightarrow} \ar[ld, "\text{cod}"]\\ & \mathcal{C} \end{tikzcd} \] \end{definition} where $\text{cod}$ denotes the codomain fibration functor. This provides the necessary machinery to interpret all the structural rules of our theory as well as the rules for $\otimes$ and $I$, by constructing an interpretation function $[[-]]$, which sends: \begin{itemize} \item Cartesian contexts $\Gamma$ to objects of $\mathcal{C}$, considered up to judgmental equality and renaming of bound variables. \item Linear contexts $\Xi = a_1 : A_1, a_2 : A_2, \dots a_n : A_N$ in $\Gamma$ to objects $[[\Xi]] = \bigotimes^n_{i = 1}[[A_i]]$ of $\mathcal{L}_{[[\Gamma]]}$. \item Cartesian types $A$ in $\Gamma$ to objects of $\mathcal{T}_{[[\Gamma]]}$. \item Linear types $B$ in $\Gamma$ to objects of $\mathcal{L}_{[[\Gamma]]}$. \item Cartesian terms $M : A$ in $\Gamma$ to sections of the projection morphism $\pi([[A]]) : [[\Gamma,A]] \to [[\Gamma]]$. \item Linear terms $b : B$ in $\Gamma; \Xi$ to morphisms $[[b]] : [[\Xi]] \to [[B]]$. \end{itemize} \subsection{Semantic type formers}\label{semtypform} Equipped with the baseline structure of a model in which we can interpret the structural rules of our theory, we formulate the conditions under which such models support various type formers. From now on, we will assume that the comprehension category comprising the core of our syntax is full, i.e. that the functor $\pi : \mathcal{T} \to \mathcal{C}^{\to}$ is full and faithful. This simplifies the semantic interpretation of many type formers. The interpretation of the purely linear type formers $\otimes, I, \multimap, \&, \oplus, \top$ and $0$ in symmetric monoidal categories is well known. See for instance \cite{mellies}. Notice that $\otimes$ and $I$ types are supported in any model. For a model to support the type formers $\multimap, \&, \oplus, \top$ and $0$, correspond to the condition that the fibers of $\mathcal{L}$ have weak versions of internal homs, binary products and coproducts, and terminal and initial object, and that these are stable under reindexing functors. \subsubsection{$\Pi$ and $\Sigma$} What it means for a model of linear dependent type theory to support $\Pi$-types is directly inherited from the standard, non-linear case; we require right adjoints to reindexing functors satisfying a Beck-Chevalley condition. As the rules $\Sigma$ contains one more eliminator than usual ($\Sigma$-E$_2$ from Section \ref{newElim}), one might wonder whether this poses additional clauses in the definition of the semantic type former. 
But as it turns out, the relevant condition will always hold in any model supporting $\Sigma$-types: \begin{definition}A model of LDTT \textbf{supports $\Sigma$-types} if it satisfies the following: \begin{enumerate} \item For all $A \in \mathcal{T}_{\Gamma}$, the induced functor $\pi_A^* : \mathcal{T}_{\Gamma} \to \mathcal{T}_{\Gamma.A}$ has a left adjoint, $\Sigma_A$, \item (Beck-Chevalley) such that for all pullbacks \[ \begin{tikzcd} \Gamma.E \ar[d, "\pi_E"] \ar[r, "q_{E, E'}"] & \Delta.E' \ar[d, "\pi_{E'}"] \\ \Gamma \ar[r, "f"] & \Delta \end{tikzcd} \] the natural transformation $\Sigma_Eq^* \to f^*\Sigma_{E'}$ is a natural isomorphism, and \item the induced map $pair: \Gamma.A.B \to \Gamma.\Sigma_AB$ is an isomorphism. \end{enumerate} \end{definition} We will denote the inverse of $pair$ by $(pr_1, pr_2)$, when it exists. This structure is sufficient to support the new elimination rule ($\Sigma$-E$_2$). \begin{theorem} If a model of LDTT supports $\Sigma$-types, then for every object $C \in \mathcal{L}_{\Gamma.\Sigma_AB}$, morphism $c : \Xi \to (pair_{A,B})^*C$ in $\mathcal{L}_{\Gamma.A.B}$ and section $s : \Gamma \to \Gamma.\Sigma_AB$, there exists a morphism $\hat c_s : s^*(pr_1, pr_2)^*\Xi \to s^*C$ such that given sections $a : \Gamma \to \Gamma.A$ and $b : \Gamma.A \to \Gamma.A.B$ we have $\hat c_{(a, b)} = a^*b^*c : a^*b^*\Xi \to a^*b^*C$. \begin{proof} Let $\hat c_s = s^*(pr_1,pr_2)^*c$. Given sections $a : \Gamma \to \Gamma.A$ and $b : \Gamma.A \to \Gamma.A.B$ we compose with $pair$ to get the section $(a, b) = pair \circ b \circ a : \Gamma \to \Gamma.\Sigma_AB$. Using splitness and $(pr_1,pr_2) \circ pair = 1$, we have: \[ \hat c_{(a,b)} = (a,b)^*(pr_1,pr_2)^*c = a^*b^*pair^*(pr_1,pr_2)^*c = a^*b^*c \] \end{proof} \end{theorem} \subsubsection{Identity types} The situation for Id-types requires a bit more care. If one wants to keep the theory intensional, we need to add condition (2) below to make sure that the semantic identity types satisfy the added elimination rule, =-E$_2$. \begin{definition}[Id-types]\label{idsemantic} A model of LDTT \textbf{supports Id-types} if, for all $A \in \mathcal{T}_{\Gamma}$, there exists an object $Id_A \in \mathcal{T}_{\Gamma.A.\pi_A^*A}$ and a morphism $r_A : \Gamma.A \to \Gamma.A.\pi_A^*A.Id_{A}$ such that $\pi_{Id_A} \circ r_A = v_A$, and: \begin{enumerate} \item For any commutative diagram: \[ \begin{tikzcd} \Gamma.A \ar[d, "r_A"] \ar[r] & \Delta.C \ar[d, "\pi_C"]\\ \Gamma.A.\pi_A^*A.Id_A \ar[r] & \Delta \end{tikzcd} \] there exists a lift $J: \Gamma.A.\pi_A^*A.Id_A \to \Delta.C$ making the two triangles commute. \item For any pair of objects $C, \Xi \in \mathcal{L}_{\Gamma.A.\pi_A^*A.Id}$, sections $M, N : \Gamma \to \Gamma.A$, $P : \Gamma \to \Gamma.M^*(N^+)^*Id_A$, and morphism $c : r_A^*\Xi \to r_A^*C$, there exists a morphism $\hat c_{[M,N,P]} : M^*(N^+)^*P^*\Xi \to M^*(N^+)^*P^*C$ such that $\hat c_{[M,M,\text{refl}]} = M^*c$. \end{enumerate} \end{definition} Notice that if our type theory has extensional id-types, in the sense that $a =_A b$ implies $a \equiv b$, then the second condition is always met. \subsubsection{$\sqcap$- and $\sqsubset$-types} The semantic type formers for the linear dependent $\sqcap$ and $\sqsubset$ are akin to those of $\Pi$ and $\Sigma$. They are given by adjoints to the functors between fibers of $\mathcal{L}$ induced by the projection maps in $\mathcal{C}$.
\begin{definition} A model of LDTT \textbf{supports $\sqcap$-types} if, for all $A \in \mathcal{T}_{\Gamma}$, the induced monoidal functor $\pi_A^* : \mathcal{L}_{\Gamma} \to \mathcal{L}_{\Gamma.A}$ has a monoidal right adjoint, $\sqcap_A$, satisfying the following Beck-Chevalley condition: For all pullback squares in $\mathcal{C}$ of the following form: \[ \begin{tikzcd} \Gamma.E \ar[d, "\pi_E"] \ar[r, "q_{E, E'}"] & \Delta.E' \ar[d, "\pi_{E'}"] \\ \Gamma \ar[r, "f"] & \Delta \end{tikzcd} \] the canonical natural transformation $f^*\sqcap_{E'} \to \sqcap_{E}q^*_{E, E'}$ is a natural isomorphism.\\ \end{definition} \begin{definition} It \textbf{supports $\sqsubset$-types} if, for all $A \in \mathcal{T}_{\Gamma}$, the functor $\pi_A^*$ has a monoidal left adjoint, $\sqsubset_A$, satisfying the following: \begin{enumerate} \item (Beck-Chevalley): For all pullback squares as above, the natural transformation $\sqsubset_Eq^* \to f^*\sqsubset_{E'}$ is a natural isomorphism. \item (Frobenius reciprocity): For all objects $\Xi \in \mathcal{L}_{\Gamma}$ and $B \in \mathcal{L}_{\Gamma.A}$, the canonical morphism $\sqsubset_A(\Xi\{\pi_A\} \otimes B) \to \Xi \otimes \sqsubset_AB$ is an isomorphism. \end{enumerate} \end{definition} \subsubsection{The operators $M$ and $L$} \begin{definition}\label{semanticML} A model of LDTT with unit \textbf{supports the operators $M$ and $L$} if there exist functors $M : \mathcal{L} \leftrightarrow \mathcal{T} : L$ which are cartesian with respect to the fibrations $p : \mathcal{T} \to \mathcal{C}$ and $q : \mathcal{L} \to \mathcal{C}$, such that \begin{itemize} \item $L \dashv M$ is a fibred adjunction, \item $L(1) \cong I$, \item and there is an isomorphism of hom-sets: \[ \mathcal{L}_{\Gamma.A}(\pi_A^*(\Xi'), \pi_A^*(B)) \cong \mathcal{L}_\Gamma(LA \otimes \Xi', B).\] \end{itemize} \end{definition} Recall that a fibred adjunction implies that there are natural isomorphisms making the following diagram commute: \[ \begin{tikzcd} \mathcal{L}_{\Gamma.A} \ar[r, "M_{\Gamma.A}", bend right] & \mathcal{T}_{\Gamma.A} \ar[l, "L_{\Gamma.A}", bend right] \\ \mathcal{L}_{\Gamma} \ar[r, "M_{\Gamma}", bend right] \ar[u, "\pi^*_A"] & \mathcal{T}_{\Gamma} \ar[l, "L_{\Gamma}", bend right] \ar[u, "\pi^*_A"] \\ \end{tikzcd} \] which from a syntactic perspective ensures that $M$ and $L$ commute with substitution. Note that the interpretation of a term $\Gamma \vdash \sigma(a) : A_M$ arises from the adjunction via $\sigma : \mathcal{L}_\Gamma(I, A) \cong \mathcal{L}_\Gamma(L(1), A) \cong \mathcal{T}_\Gamma(1, M(A))$. The final condition of the definition is what yields the elimination and uniqueness rules L-E and L-U, and while it might appear somewhat unnatural semantically, it does turn out to hold in a broad variety of models, due to the following result: \begin{theorem}In a model of LDTT that supports the $\multimap$ type former, any fibred adjunction $L \dashv M$ with $L(1) \cong I$ satisfies $\mathcal{L}_{\Gamma.A}(\pi^*_A(\Xi), \pi^*_A(B)) \cong \mathcal{L}_{\Gamma}(LA \otimes \Xi, B)$. \begin{proof} A model supporting internal homs must have reindexing functors which preserve these. That is, we have an isomorphism $\pi_A^*[\Xi, B] \cong [\pi_A^*\Xi, \pi^*_AB]$.
We get a chain of isomorphisms: \[ \begin{split} \mathcal{L}_{\Gamma}(LA \otimes \Xi, B) &\cong \mathcal{L}_{\Gamma}(LA, [\Xi, B]) \cong \mathcal{T}_{\Gamma}(A, M_{\Gamma}[\Xi, B]) \cong \mathcal{T}_{\Gamma.A}(1, \pi_A^*M_{\Gamma}[\Xi, B]) \\ &\cong \mathcal{T}_{\Gamma.A}(1, M_{\Gamma.A}\pi_A^*[\Xi, B]) \cong \mathcal{L}_{\Gamma.A}(L_{\Gamma.A}(1), \pi_A^*[\Xi, B]) \\ &\cong \mathcal{L}_{\Gamma.A}(I, \pi_A^*[\Xi, B]) \cong \mathcal{L}_{\Gamma.A}(I, [\pi_A^*\Xi, \pi_A^*B]) \cong \mathcal{L}_{\Gamma.A}(\pi_A^*\Xi, \pi_A^*B). \end{split} \] \end{proof} \end{theorem} \section{Diagram Model}\label{models} The main novelty of this paper is the diagram model of linear dependent type theory. This model extends the groupoid model of dependent type theory \cite{hofmann1998} to support linear types, while still maintaining a higher dimensional interpretation of the identity type. Most interestingly, perhaps, it provides a model in which we can support univalent universes, both for cartesian and linear types. The diagram model can be seen as a natural generalization of the set indexed families model described by \cite{vakar14}. We briefly recall the set indexed families model below as a useful comparison to the diagrams model. \subsection{Set indexed families} \begin{definition}[$Fam(\mathcal{C})$] For an arbitrary category $\mathcal{C}$, let $Fam(\mathcal{C})$ denote the category whose objects consist of pairs $(S, f)$ where $S$ is a set and $f$ is a function $f : S \to Ob(\mathcal{C})$. Morphisms of $Fam(\mathcal{C})$ are pairs $(u, \alpha) : (S, f) \to (S', g)$ where $u : S \to S'$ and $\alpha : S \to \text{Mor}(\mathcal{C})$ such that $\alpha(s) : f(s) \to g(u(s))$ for all $s \in S$. \end{definition} By projecting a family to its indexing set, we get a fibration $p : Fam(\mathcal{C}) \to \mathbf{Set}$, and a comprehension category by defining $\pi(S, f) = fst: \{(s, t) \; | \; s \in S, t : \top \to f(s)\} \to S$.\footnote{As long as $\mathcal{C}$ has a terminal object and the hom-sets $\mathcal{C}(\top,A)$ are small for all $A \in \mathcal{C}$.} Letting $\mathcal{C} = \mathbf{Set}$ thus gives us a (full, split) comprehension category, forming the cartesian part of our model. For the linear part, we can for any symmetric monoidal category $\mathcal{V}$ form a monoidal fibration by a simple pointwise construction (made explicit at the end of this subsection), giving us the following picture: \[ \begin{tikzcd} Fam(\mathcal{V}) \ar[rd, "q"] & Fam(\mathbf{Set}) \ar[d, "p"] \ar[r, "\pi"] & \mathbf{Set}^\to \ar[ld, "cod"] \\ & \mathbf{Set} \end{tikzcd} \] In this setting, most type formers are given by simple pointwise constructions which are preserved under reindexing. It turns out that the families model supports the type formers $\otimes, I, \multimap, \oplus, 0, \&$, and $\top$ if $\mathcal{V}$ is a monoidal category which is closed, has binary coproducts, an initial object, binary products and a terminal object, respectively. It supports $\sqcap$-types if $\mathcal{V}$ has small products, and $\sqsubset$ if $\mathcal{V}$ has small coproducts that distribute over $\otimes$ (Frobenius reciprocity). The families model of course also supports $\Pi$- and $\Sigma$-types, and since its identity types are extensional, the extra condition posed on our semantic identity types poses no additional difficulty. Whenever $\mathcal{V}$ is a concrete category with a free/forgetful adjunction $F \dashv U$, this adjunction will induce a fiber adjunction between the corresponding fibrations, which supports the operators $M$ and $L$ (as long as $F(\mathbf{1}) \cong I$).
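To make the pointwise construction explicit: in the fiber over a set $S$, the tensor and unit of $Fam(\mathcal{V})$ are given index-wise (a routine unfolding of the definitions):
\[
(S, f) \otimes (S, g) = \big(S,\; s \mapsto f(s) \otimes_{\mathcal{V}} g(s)\big), \qquad I_S = \big(S,\; s \mapsto I_{\mathcal{V}}\big),
\]
and reindexing along a function $u : S' \to S$ acts by precomposition, $u^*(S, f) = (S', f \circ u)$, which is strictly monoidal.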
\subsection{Diagrams in monoidal categories} For any category $\mathcal{C}$, there is a fibration $dom : \diag(\mathcal{C}) \to \mathbf{Cat}$, where $\diag(\mathcal{C})$ refers to the category of diagrams in $\mathcal{C}$, i.e. consisting of objects $J : \mathcal{A} \to \mathcal{C}$, where morphisms between $J : \mathcal{A} \to \mathcal{C}$ and $J' :\mathcal{B} \to \mathcal{C}$ are functors $F : \mathcal{A} \to \mathcal{B}$ equipped with a natural transformation $\eta: J \implies J' \circ F$. In other words, the fibers of $\diag(\mathcal{C})$ are functor categories, which we write $[\Gamma, \mathcal{C}]$, for any small category $\Gamma$. Any functor $F : \mathcal{A} \to \mathcal{B}$ in the base induces a canonical lift $F^* : [\mathcal{B}, \mathcal{C}] \to [\mathcal{A},\mathcal{C}]$, simply given by precomposition. When $\mathcal{C}$ has a terminal object $\top$ such that the collections $\mathcal{C}(\top, A)$ are small for any $A \in \mathcal{C}$, we form a comprehension category: \[ \begin{tikzcd} \diag(\mathcal{C}) \ar[d, "dom"] \ar[r,"\pi"] & \mathbf{Cat}^\to \ar[ld, "cod"] \\ \mathbf{Cat} \end{tikzcd} \] where the functor $\pi$ sends a diagram $A : \Gamma \to \mathcal{C}$ to the \textbf{Grothendieck construction} for $A$, i.e. the category whose objects are pairs $(\gamma, t_\gamma)$ where $\gamma \in \Gamma$ and $t_\gamma : \top \to A(\gamma)$. Morphisms $(\gamma, t_\gamma) \to (\gamma', t'_{\gamma'})$ consist of morphisms $u : \gamma \to \gamma'$ such that $A(u) \circ t_\gamma = t'_{\gamma'}$.\footnote{If $\mathcal{C}$ is a 2-category, this can be weakened so that morphisms $(\gamma, t_\gamma) \to (\gamma', t'_{\gamma'})$ are pairs $(u, \alpha)$, where $u : \gamma \to \gamma'$ and $\alpha$ is a 2-cell $\alpha : A(u) \circ t_\gamma \implies t'_{\gamma'}$.} When $\mathcal{C}$ is any symmetric monoidal category $\mathcal{V}$, there is an obvious symmetric monoidal structure on each fiber $[\Gamma, \mathcal{V}]$, given pointwise. Restricting the base of the fibration to groupoids instead of categories, and setting $\mathcal{C} = \mathbf{Gpd}$, we get a model of linear dependent type theory which expands the groupoid model of Hofmann and Streicher \cite{hofmann1998}: \[ \begin{tikzcd} \diag(\mathcal{V}) \ar[rd, "dom"] & \diag(\mathbf{Gpd}) \ar[d, "dom"] \ar[r, "\pi"] & \mathbf{Gpd}^\to \ar[ld, "cod"] \\ & \mathbf{Gpd} \end{tikzcd} \] Since $\top$ is the groupoid $\mathbf{1}$ consisting of a single object, we will equate the functor $t_\gamma : 1 \to A(\gamma)$ with an object $a_\gamma$ of $A(\gamma)$, and the natural transformation $\alpha : A(u) \circ t_\gamma \implies t'_{\gamma'}$ with a morphism $\alpha_\gamma : A(u)(a_\gamma) \to a'_{\gamma'}$. As shown in \cite{hofmann1998}, this model supports the $\Pi$ and $\Sigma$ type formers, and provides an interesting interpretation of the identity type $Id_A$ as the arrow category of $A$.
This construction satisfies the additional requirement in our definition of semantic identity types: \begin{theorem}Given a $\Gamma$-indexed groupoid $A$, diagrams $C, \Xi \in [\Gamma.A^\to, \mathcal{V}] \cong [\Gamma.A.\pi_A^*A.Id_A, \mathcal{V}]$, sections $M, N : \Gamma \to \Gamma.A$, $P : \Gamma \to \Gamma.M^*(N^+)^*Id_A$ and a natural transformation $c : \Xi \circ r_A \implies C\circ r_A$, there exists a natural transformation $\hat c_{[M,N,P]} : \Xi \circ P^+ \circ N^+ \circ M \implies C \circ P^+ \circ N^+ \circ M$ such that $\hat c_{[M,M,\text{refl}]} = c \circ M$.\footnote{Where the sections $N^+ : \Gamma.A \to \Gamma.A.\pi_A^*A$ and $P^+ : \Gamma.A.\pi_A^*A \to \Gamma.A.\pi_A^*A.Id_A$ are weakenings of $N$ and $P$, i.e. functors ignoring the additional arguments.} \begin{proof} The key point to observe is that there is always an isomorphism $(\gamma, P_\gamma : M_\gamma \to N_\gamma) \cong (\gamma, 1_{M_\gamma} : M_\gamma \to M_\gamma)$ given by the commutative diagram: \[ \begin{tikzcd} (\gamma, M_\gamma) \ar[d, "P"] \ar[r, "1_M"] & (\gamma, M_\gamma) \ar[d, "1_M"]\\ (\gamma, N_\gamma) \ar[r, "P^{-1}"] & (\gamma, M_\gamma) \end{tikzcd} \] forming a collection of isomorphisms in $\Gamma.A^\to$ giving rise to a natural isomorphism $\phi : r_A \circ M \implies P^+ \circ N^+ \circ M$. We define $\hat c_{[M,N,P]}$ as the composite: \[ \Xi \circ P^+ \circ N^+ \circ M \xrightarrow{\Xi_{\phi}} \Xi \circ r_A \circ M \xrightarrow{cM} C \circ r_A \circ M \xrightarrow{C_{\phi^{-1}}} C \circ P^+ \circ N^+ \circ M \] To see that the computation rule holds, we only need to notice that when $M \equiv N$ and $P = \text{refl}(M)$, $\phi$ is the identity natural transformation. \end{proof} \end{theorem} As in the families model, limits and colimits are constructed pointwise, and preserved by precomposition, so the model supports $\&$, $\top$, $\oplus$, $0$, if $\mathcal{V}$ has binary products, a terminal object, binary coproducts and an initial object, respectively. When it comes to $\multimap$, we utilize the following result: \begin{theorem}\label{diaghoms}If $\mathcal{V}$ has internal homs and is complete, $[\mathcal{C},\mathcal{V}]$ also has internal homs, defined for $F, G \in [\mathcal{C}, \mathcal{V}]$ by the end: \[ [F, G] := \int_{x \in \mathcal{C}}[Fx, Gx] \] \end{theorem} These are preserved under reindexing, implying that the diagrams model supports $\multimap$ if $\mathcal{V}$ is monoidal closed and complete. \begin{definition} For any functor $p : \mathcal{A} \to \mathcal{B}$ in the base, a left or right adjoint to the induced functor $p^* : [\mathcal{B}, \mathcal{V}] \to [\mathcal{A}, \mathcal{V}]$ is called a \textbf{left or right Kan extension} along $p$. \end{definition} We recall the following fact about Kan extensions: \begin{theorem}\label{kanlimits} Left (right) Kan extensions along $p : \mathcal{A} \to \mathcal{B}$ between two arbitrary small categories $\mathcal{A}$ and $\mathcal{B}$ exist if and only if $\mathcal{V}$ has all colimits (limits). \end{theorem} The result above ensures the existence of left and right adjoints to reindexing functors in the diagrams model as long as $\mathcal{V}$ is cocomplete or complete, respectively. Since our reindexing functors are given by precomposition, they will always satisfy the Beck-Chevalley condition. Again, in order to support $M$ and $L$, we can lift an adjunction between $\mathcal{V}$ and $\mathbf{Gpd}$ to a fiber adjunction between the respective diagram categories.
Therefore, for a diagrams model which supports $\multimap$ to also support $M$ and $L$, it suffices to display an adjunction \[ \begin{tikzcd} \mathcal{V} \ar[r, "M_0"{name=D, below}, bend right] & \ar[l, "L_0"{name=C, above}, bend right] \ar[from=C, to=D, symbol=\dashv] \mathbf{Gpd} \end{tikzcd} \] such that $L_0(\mathbf{1}) \cong I$. \begin{remark}\label{factorsets} When the functor $\mathcal{V}(I, -) : \mathcal{V} \to \mathbf{Set}$ has a left adjoint $F$, we get an adjunction between $\diag(\mathcal{V})$ and $\diag(\mathbf{Gpd})$, induced by: \[ \begin{tikzcd} \mathcal{V} \ar[r, "{\mathcal{V}(I, -)}"{name=D, below}, bend right] & \ar[l, "F"{name=C, above}, bend right] \ar[from=C, to=D, symbol=\dashv] \mathbf{Set} \ar[r, "\delta"{name=B, below}, bend right] & \mathbf{Gpd} \ar[l, "\pi_0"{name=A, above}, bend right] \ar[from=A, to=B, symbol=\dashv] \end{tikzcd} \] where $\pi_0$ is the functor sending a groupoid to its set of connected components. \end{remark} \begin{theorem}\label{M-faith}There are models in which $M$ is not faithful. \begin{proof} Let $\mathcal{V}$ be $\mathbf{Gpd}$, so that $L = \delta \pi_0$ and $M = \delta\, \mathbf{Gpd}(1, -)$. This induces a fiber adjunction $L \dashv M$ where $L(1) = 1$, but $M$ is not faithful. \end{proof} \end{theorem} \subsubsection{Universes in the diagrams model} To support universes, assuming one inaccessible cardinal allows us to shift our perspective from the category of small groupoids, $\mathbf{Gpd}$, to the category $\mathbf{GPD}$ of all groupoids. Among the objects of $\mathbf{GPD}$ we find the core (i.e. maximal sub-groupoid) of $\mathbf{Gpd}$ as well as $\mathcal{V}^{core}$. This allows us to define our cartesian and linear universes in any context $\Gamma$ as the functors: \[ \begin{split} \mathbb{U} : \Gamma \to \mathbf{GPD}\\ \mathbb{L} : \Gamma \to \mathbf{GPD} \end{split} \] which are constant at $\mathbf{Gpd}^{core}$ and $\mathcal{V}^{core}$, respectively. Any section $s : \Gamma \to \Gamma.\mathbb{U}$ determines a functor $\hat s : \Gamma \to \mathbf{Gpd}$, which we compose with the embedding $\mathbf{Gpd} \to \mathbf{GPD}$ to get an interpretation of $El(s)$. Similarly, we get from each section $s : \Gamma \to \Gamma.\mathbb{L}$ a functor $El(s) : \Gamma \to \mathcal{V}$. It is easily seen that defining the linear universe via the core of $\mathcal{V}$ gives rise to the following interesting property, hinting at the possibility of a linear univalence axiom: \begin{proposition}For a linear universe defined as above via $\mathcal{V}^{core}$, and two sections $s, t : \Gamma \to \Gamma.\mathbb{L}$, an isomorphism $\alpha : El(t) \cong El(s)$ gives rise to a section $p : \Gamma \to \Gamma.Id_{\mathbb{L}}\{s\}\{t\}$. \end{proposition} \subsection{Univalence in linear dependent types}\label{highermodel} A key feature of the groupoid model is that it provides a model of dependent type theory in which there may be nontrivial terms of the identity type. A natural question to ask is whether this higher dimensional feature of type theory can be extended to the linear dependent setting.
In particular, we might wish for a linear analogue of the univalence axiom to hold: \[ \inference{ \Gamma \vdash A : \mathbb{L}\\ \Gamma \vdash B : \mathbb{L}\\ \Gamma; \cdot \vdash f : El(A) \multimap El(B)\\ \Gamma; \cdot \vdash g : El(B) \multimap El(A)\\ \Gamma; \cdot \vdash h : El(B) \multimap El(A)\\ \Gamma \vdash p : \sigma(g \circ f) =_{(El(A) \multimap El(A))_M} \sigma(id_{El(A)})\\ \Gamma \vdash q : \sigma(f \circ h) =_{(El(B) \multimap El(B))_M} \sigma(id_{El(B)})\\ }{\Gamma \vdash ua(f) : A =_{\mathbb{L}} B}[L-ua-I] \] To define the corresponding computation rule, we will make use of a linear version of \texttt{transport}, which is easily definable through identity elimination. Given $\Gamma, x : C \vdash D \linear$ and an identity $p : a =_C b$, we get a function: \[ p^* : D[a/x] \multimap D[b/x] \] which we call the linear \textbf{transport along $p$}. This function has an inverse and thus yields for any $q : A =_{\mathbb{L}} B$ a linear equivalence $El(A) \cong El(B)$. The computation rules for the univalence axiom assert that the process of creating equivalences from identities is inverse to univalence: \[ \inference{ \Gamma \vdash ua(f)^* : El(A) \multimap El(B)} {\Gamma \vdash ua(f)^* \equiv f : El(A) \multimap El(B)} [L-ua-C$_1$] \] \[ \inference{ \Gamma \vdash ua(p^*) : A =_\mathbb{L} B} {\Gamma \vdash ua(p^*) \equiv p : A =_{\mathbb{L}} B} [L-ua-C$_2$] \] The semantic interpretation of the procedure turning an identity into an equivalence becomes the following: \begin{lemma}In the diagram model, equivalences $El(A) \cong El(B)$ are in one-to-one correspondence with identities $p : A =_{\mathbb{L}} B$. \begin{proof}A section $p : \Gamma \to \Gamma.{\mathcal{V}^{core}}^{\to}$ defines for every morphism $\alpha : \gamma \to \gamma' \in \Gamma$ a naturality square: \[ \begin{tikzcd} A(\gamma) \ar[r,"p(\gamma)"] \ar[d,"A(\alpha)"] & B(\gamma) \ar[d,"B(\alpha)"]\\ A(\gamma') \ar[r, "p(\gamma')"] & B(\gamma') \end{tikzcd} \] where $p(\gamma)$ and $p(\gamma')$ are isomorphisms. Conversely, every natural isomorphism $El(A) \cong El(B)$ clearly defines such a section. \end{proof} \end{lemma} \begin{theorem} When $M$ factors through $\mathbf{Set}$ as in Remark \ref{factorsets}, the linear univalence axiom holds in the diagram model. That is, given the following data: \begin{itemize} \item sections $A, B : \Gamma \to \Gamma.{\mathbb{L}}$, \item morphisms $f : I \to [El(A), El(B)]$ and $g, h : I \to [El(B),El(A)]$ in $[\Gamma, \mathcal{V}]$, \item a section $p : \Gamma \to \Gamma.(M(g f))^*(M(id_A))^*Id_{(M[El(A),El(A)])}$, \item and a section $q : \Gamma \to \Gamma.(M(f h))^*(M(id_B))^*Id_{(M[El(B),El(B)])}$, \end{itemize} there is a natural isomorphism $El(A) \cong El(B)$. \begin{proof} The section $p$ selects for every $\gamma \in \Gamma$ an isomorphism between $\sigma(gf)$ and $\sigma(id_A)$ in $M[El(A),El(A)]$. But since $M$ factors through sets, $M[El(A), El(A)]$ is a discrete groupoid, so $\sigma(gf)$ and $\sigma(id_A)$ must be identical, and the same is true for $\sigma(fh)$ and $\sigma(id_B)$. Transporting back through the isomorphisms $[\Gamma,\mathbf{GPD}](1,M[El(A),El(A)]) \cong [\Gamma, \mathcal{V}](I, [El(A),El(A)]) \cong [\Gamma,\mathcal{V}](El(A),El(A))$ we find $gf = 1_{El(A)}$, and similarly $fh = 1_{El(B)}$. \end{proof} \end{theorem} By the lemma above, such equivalences $El(A) \cong El(B)$ correspond to identities $p : A =_{\mathbb{L}} B$, demonstrating that the linear univalence axiom holds in the diagrams model as long as $M$ factors through sets.
\subsubsection{Examples} To summarize, these are the conditions imposed on $\mathcal{V}$ in order for the diagram model to support all of the type formers of our theory (including a universe of linear types satisfying the univalence axiom): \begin{itemize} \item A bicomplete symmetric monoidal closed category $\mathcal{V}$. \item An adjunction $L \dashv M$ between $\mathcal{V}$ and $\mathbf{Set}$, such that $L\{*\}$ is isomorphic to the unit of the monoidal structure of $\mathcal{V}$. \end{itemize} Some concrete choices of $\mathcal{V}$ that fulfill these conditions are: \begin{itemize} \item The category $\mathbf{AbGrp}$ of abelian groups, with the monoidal structure given by the tensor product of abelian groups. Here $L \dashv M$ arises from the free/forgetful adjunction for abelian groups. \item More generally, for any commutative ring $R$, the category $R$-$\mathbf{Mod}$ of modules over $R$, with the free functor/forgetful functor adjunction. \item The category $\mathbf{CGTop}_*$ of pointed, compactly generated topological spaces, with the smash product as monoidal structure. The functor $M$ here forgets both the base point and the topology; its left adjoint equips a set with the discrete topology and adjoins a disjoint base point. The unit of $\mathbf{CGTop}_*$ is the two-point discrete space $S^0$, which is precisely the image of the one-point set under this left adjoint. \end{itemize} \subsection{Discussion} Although the diagram model supports the univalence axiom, we are forced to truncate any higher dimensional structure by factoring $M$ through $\mathbf{Set}$, just as the groupoid model supports univalence only for a universe containing discrete groupoids. From the perspective of homotopy type theory, we may think of the set-indexed families model as a 0-dimensional model of linear dependent type theory and the diagram model as a 1-dimensional one. We conclude the paper by sketching what a 2-dimensional model might look like: As outlined in \cite{smcat}, there is a symmetric monoidal structure on $\mathbf{SMCat}$, the category of small symmetric monoidal categories, symmetric monoidal functors and monoidal natural transformations.\footnote{Technically, the structure on $\mathbf{SMCat}$ is not quite symmetric monoidal, as the associators, unitors and symmetry functors are only invertible up to higher homotopy. However, if one applies these homotopies whenever necessary, one does get a model of linear dependent type theory.} \begin{definition} Let the \textbf{2-dimensional model of LDTT} be given by the diagrams model where $\mathcal{V}$ is the 2-category of small symmetric monoidal categories, symmetric monoidal functors and monoidal natural transformations: \[ \begin{tikzcd} \diag(\mathbf{SMCat}) \ar[rd, "dom"] & \diag(\mathbf{Gpd}) \ar[r, "\pi"] \ar[d, "dom"] & \mathbf{Gpd}^\to \ar[ld, "cod"]\\ & \mathbf{Gpd} \end{tikzcd} \] \end{definition} For two symmetric monoidal categories $\mathcal{A}$ and $\mathcal{B}$, the (monoidal) functor category $[\mathcal{A}, \mathcal{B}]$ between them carries a natural monoidal structure \cite{smcat}, which gives $\mathbf{SMCat}$ a monoidal closed structure. Since $\mathbf{SMCat}$ is complete, with limits inherited from $\mathbf{Cat}$ and equipped with a pointwise monoidal structure, we have support for $\sqcap$ and $\&$, and Theorem \ref{diaghoms} gives us that this model supports the $\multimap$ type former.\footnote{Note, however, that we do not have all coproducts in $\mathbf{SMCat}$.
Therefore, we cannot support $\oplus$ or $\sqsubset$. An alternative to be explored is the category $\mathbf{Mult}$, of multicategories, which is symmetric monoidal closed, complete and cocomplete \cite{elmendorf2009permutative}.} There is a natural candidate for the adjunction $L \dashv M$, based on the composite: \[ \begin{tikzcd} \mathbf{SMCat} \ar[r, "U"{name=D, below}, bend right] & \ar[l, "F"{name=C, above}, bend right] \ar[from=C, to=D, symbol=\dashv] \mathbf{Cat} \ar[r, "\text{core}"{name=B, below}, bend right] & \mathbf{Gpd} \ar[l, "U"{name=A, above}, bend right] \ar[from=A, to=B, symbol=\dashv] \end{tikzcd} \] through which one should be able to construct a univalent universe containing nontrivial 1-dimensional linear types. Eventually, one would like to go all the way up and construct an $\infty$-dimensional formulation of linear dependent type theory. It has been speculated that models of a higher dimensional linear dependent type theory can be expressed through stable homotopy type theory \cite{schreiber2014quantization}, although it is unclear what the syntax for such a theory looks like. \newpage
\section{Introduction} Diagnostic assessments are created with the goal of making classification-based decisions about respondents' possession of multiple latent traits, also known as attributes. Researchers have brought a number of tools to bear on the problem of diagnostic classification, including multidimensional IRT, factor analysis, the rule-space method, the attribute hierarchy method, clustering methods, and cognitive diagnosis models (CDMs); for a recent review, see \citeA{Rupp}. CDMs are multidimensional latent variable models with a vector of binary latent variables representing mastery of a finite set of skills whose analysis results in a probabilistic attribute profile; this makes them well-suited to diagnostic classification. Well-known models include the Deterministic Input, Noisy ``And'' Gate (DINA) model, the Deterministic Input, Noisy ``Or'' Gate (DINO) model, the Noisy Inputs, Deterministic ``And'' Gate (NIDA) model, the Noisy Inputs, Deterministic ``Or'' Gate (NIDO) model, and the Conjunctive Reparameterized Unified Model (C-RUM), among others \cite{Rupp, Haertel, Junker, dela, Maris, Templin, TatsuokaC, Templin2006, dela2008}. The DINA model is one of the best known and widely used CDMs. Underlying the model is the assumption that, before slipping and guessing come into play, a respondent must have mastered all attributes required by a particular item (as specified by a loading matrix known as the Q-matrix) in order to answer that item correctly. Thus it is a conjunctive, non-compensatory model, well-suited to educational assessments in areas such as mathematics where correct answers are obtained by correctly employing all of an item's required skills together. The DINA model has been frequently employed in the analysis of assessments, including the widely analyzed fraction subtraction data set of \citeA{Tatsuoka1990} (see \citeNP{dela2009, dela, delaDoug08, THD06, deCarlo2011, HensonTemplin09}). However, even after many refinements to the methodology there are still some persistent issues. In the fraction subtraction data, for example, respondents who answer all items incorrectly are often classified as having most of the skills \cite{deCarlo2011}. Classification issues of this type can result from model misspecification, but they can also be the unavoidable consequence of the structure of the assessment. Specifically in the DINA model, attributes that appear solely in conjunction with other attributes are problematic \cite{deCarlo2011}. This is due to an issue with attribute identifiability, which has long been known \cite{Tatsuoka1991,DiBello,deCarlo2011}, but tends to be ignored in practice. This paper gives a formal treatment of the identifiability issue of the DINA model and related CDMs. Since an assessment with fully identifiable attributes is often unavailable, we include guidelines for classification under non-identifiability and a consistent measure for the extent of non-identifiability. This allows classification error control and assessment evaluation in terms of identifiability. The paper begins by reviewing some basic concepts, including the DINA model and its variants, in Section~\ref{sec:background}. We introduce the issue of identifiability for Q-matrix based assessments in Section~\ref{sec:prob}. In Section~\ref{sec:part}, we explain the use of equivalence classes and partitions to fully describe the structure of the attribute profile space, in terms of identifiability; an algorithm to generate the partition, given any Q-matrix, is included.
Partitioning allows consistent estimation of the proportion of individuals in each group of equivalent attribute profiles, as explained in Section~\ref{sec:est}. These results are extended to individual attributes via \emph{marginal identifiability} in Section~\ref{sec:single}. In fact, the consistent estimation of the proportion of the population for which each attribute is marginally identifiable leads to a reliable measure of exam quality (with respect to identifiability). We also create a decision rule for respondent classification which controls misclassification probabilities in Section~\ref{sec:class}. Section~\ref{sec:exts} examines the implications of these methods for several variants of the DINA, in addition to another Q-matrix based CDM, the DINO model. Finally, results derived from both simulation and \citeauthor{Tatsuoka1990}'s fractions data set are reported in Section~\ref{sec:results}. \section{Background}\label{sec:background} Throughout the paper, we will be using some standard concepts from the study of CDMs. Some specific terminology and notations are listed below. \begin{description} \item[Attributes] are the respondent's (unobserved) mastery of certain skills. If we suppose that there are $N$ respondents and $K$ attributes, let the matrix of attributes be $A = (\alpha_{ik})$, where $\alpha_{ik} \in \{0,1\}$ indicates the presence or absence of the $k$-th attribute in the $i$-th respondent. An \emph{attribute profile} $\boldsymbol{\alpha} = (\alpha_1,\ldots, \alpha_K)^\top$ is the vector of all attributes; an individual respondent $i$ will have attribute profile $\boldsymbol{\alpha}^i$ such that $\alpha^i_k = \alpha_{ik}$. \item[Responses] are the respondent's binary responses to items. Given $N$ respondents and $J$ items, the responses can be written as an $N\times J$ matrix $X = (X_{ij})$, where $X_{ij} \in \{0,1\}$ is the response of the $i$-th respondent to the $j$-th item. The $i$-th respondent's responses will be denoted by the vector $\mbox{$\mathbf X$}^i$, where the $j$-th element $X^i_j = X_{ij}$ for all $i,j$. \item[The Q-matrix] is the link between the items and their attribute requirements. It is a $J\times K$ matrix $Q = (q_{jk})$, where for each $j,k$, $q_{jk} \in \{0,1\}$ indicates whether the $j$-th item requires the $k$-th attribute. From the Q-matrix we can extract the attribute requirements of an item $j$ as the vector $\mbox{$\mathbf q$}^j$, where the $k$-th element $q^j_k = q_{jk}$ for all $j,k$. \end{description} \subsection{The DINA Model} This paper focuses on the DINA model, one of the most widely used CDMs. Under the DINA model, given an attribute profile $\boldsymbol{\alpha}$ and a Q-matrix $Q$, we can further define the quantity $$\xi_j(Q,\boldsymbol{\alpha}) = \prod_{k=1}^K (\alpha_k)^{q_{jk}} = \textbf{1}(\alpha_k \geq q_{jk}: k = 1, \ldots, K),$$ which indicates whether a respondent with attribute profile $\boldsymbol{\alpha}$ possesses all the attributes required for item $j$. If we suppose no uncertainty in the response, then a respondent $i$ with attribute profile $\boldsymbol{\alpha}$ will have responses $X_{ij} = \xi_j(Q,\boldsymbol{\alpha})$ for $j=1,\ldots, J$. Thus, the vector $\boldsymbol{\xi} = (\xi_1,\ldots, \xi_J)^\top$ is known as the \emph{ideal response vector}. In the DINA model, uncertainty is incorporated at the item level. With each item $j = 1, \ldots, J$, we associate a slipping parameter $s_j = P(X_j = 0|\xi_j = 1)$ and a guessing parameter $g_j = P(X_j = 1|\xi_j = 0)$.
Each $X_j$ is Bernoulli with success probability $(1-s_j)^{\xi_j}g_j^{1-\xi_j}$. Thus, the probability of a particular response vector $\mbox{$\mathbf x$}$ given the ideal response vector $\boldsymbol{\xi}$ is \begin{align}\label{eqDINA} P(\mbox{$\mathbf x$}|\boldsymbol{\xi},\mbox{$\mathbf s$},\mbox{$\mathbf g$}) &= \prod_{j=1}^J [(1-s_j)^{\xi_j} g_j^{1-\xi_j}]^{x_j} [1-(1-s_j)^{\xi_j} g_j^{1-\xi_j}]^{1-x_j}\nonumber\\ &= \prod_{j=1}^J (1-s_j)^{\xi_jx_j} g_j^{(1-\xi_j)x_j}s_j^{\xi_j(1-x_j)}(1- g_j)^{(1-\xi_j)(1-x_j)} \end{align} In addition to $\mbox{$\mathbf s$}$ and $\mbox{$\mathbf g$}$, the response distribution also depends on $\boldsymbol{\nu} = (\nu_{\boldsymbol{\alpha}})_{\boldsymbol{\alpha}\in\{0,1\}^K}$, the proportion of individuals possessing each attribute profile. Generally, diagnostic classification is based on the posterior $p(\boldsymbol{\alpha}|\mbox{$\mathbf x$})$, which is calculated using, and can be very sensitive to, the prior $\boldsymbol{\nu}$. When $\mbox{$\mathbf s$}$, $\mbox{$\mathbf g$}$, and $\boldsymbol{\nu}$ are unknown, these parameters must be simultaneously estimated. \subsection{Variants of the DINA Model}\label{sec:DINAvar} Several variants of the DINA can be constructed by restricting $\boldsymbol{\nu}$ to some lower-dimensional subspace. For example, assuming independence among the attributes so that $$\nu_{\boldsymbol{\alpha}} = p(\boldsymbol{\alpha}) = \prod_{k=1}^K p(\alpha_k)$$ reduces the $(2^K-1)$-dimensional parameter space to a $K$-dimensional one. This restriction is referred to as the independent DINA (ind-DINA) from here on. It is convenient to model each $\alpha_k$ with a logistic link, so that $$p(\alpha_k) = \exp(\alpha_kb_k)/[1+\exp(b_k)],$$ where $b_k$ denotes the attribute's `difficulty.' Another alternative is the higher-order DINA (HO-DINA) model \cite{dela, THTR2008}. This model assumes that the probability of possessing a skill is dependent on a continuous skill factor $\theta$ following the standard normal distribution, so that $$\nu_{\boldsymbol{\alpha}} = p(\boldsymbol{\alpha}) = \int_\theta p(\boldsymbol{\alpha}|\theta)p(\theta) d\theta.$$ Each individual attribute is assumed to be conditionally independent given $\theta$, so that $$p(\boldsymbol{\alpha}|\theta) = \prod_{k=1}^K p(\alpha_k|\theta).$$ Finally, the individual probabilities $p(\alpha_k|\theta)$ can be modeled with a logistic link, $$p(\alpha_k|\theta) = \exp(\alpha_k(b_k + a_k\theta))/[1+\exp(b_k+a_k\theta)],$$ where $b_k$ denotes the attribute's `difficulty,' and $a_k$ is the attribute discrimination parameter. It is also possible to fit a restricted version of this model, for which all the $a_k$ must be equal, as in \citeA{dela}. This is referred to as the restricted higher order DINA (RHO-DINA) model \cite{deCarlo2011}. \section{Identifiability Issues in the DINA}\label{sec:prob} Diagnostic assessments are meant to provide detailed information about respondents' possession of a variety of traits. Preferably, a well-designed exam will be able to provide information about each trait for every respondent. However, recovering information about the latent variables from a `0' response may be difficult; in comparison to a `1' response, which suggests that a respondent is more likely to possess each attribute associated with that item, a `0' response may indicate the failure to master only one or several of the required attributes.
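Before turning to concrete examples, it may help to see the data-generating process of \eqref{eqDINA} in code. The following is a minimal numpy sketch (function and variable names are ours, not part of any established package):
\begin{verbatim}
# Hedged sketch of the DINA data-generating process (names ours).
import numpy as np

rng = np.random.default_rng(1)

def simulate_dina(Q, alphas, s, g):
    """Draw responses X (N x J) for profiles alphas (N x K) under a
    Q-matrix Q (J x K) with slipping s and guessing g (length J)."""
    Q, alphas = np.asarray(Q), np.asarray(alphas)
    # Ideal responses: xi_ij = 1 iff profile i meets all requirements
    # of item j, i.e. alpha_k >= q_jk for every attribute k
    xi = np.all(alphas[:, None, :] >= Q[None, :, :], axis=2)
    # Itemwise success probability (1 - s_j)^xi * g_j^(1 - xi)
    p = np.where(xi, 1.0 - np.asarray(s), np.asarray(g))
    return (rng.random(p.shape) < p).astype(int)
\end{verbatim}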
Consider the following two simple Q-matrices for the DINA model: \begin{equation} Q_1 = \begin{pmatrix}1&0\\0&1\end{pmatrix},\ Q_2 = \begin{pmatrix}1&0\\1&1\end{pmatrix}. \end{equation} In assessments based on the Q-matrix $Q_1$, a correct response to each item generally indicates a higher probability that the respondent possesses the corresponding attribute, while an incorrect response indicates a lower probability of the same. However, with $Q_2$, an incorrect response to the second item only implies that at least one of the attributes is probably missing. In fact, given that a student does not possess Attribute 1, Item 2 provides no information about his or her mastery of Attribute 2, and so respondents with attribute profiles $(0,0)$ and $(0,1)$ have statistically identical responses. Thus, the assessment as a whole is incapable of differentiating between the two profiles, and any classification decision between them will solely be a reflection of the prior information. A slightly more complicated situation appears if we add a third attribute to the example above. Consider an assessment following the DINA model with Q-matrix $Q_3$, where \begin{equation} Q_3 = \begin{pmatrix}1&0 & 0 \\1&1 & 0 \\ 0 & 1 & 1\end{pmatrix}. \label{eq:Q3} \end{equation} The attribute requirements of the first two items match those of the items corresponding to $Q_2$. Now, however, the proportion of individuals for whom Attribute 2 is not identifiable is smaller. Of those who do not possess Attribute 1, some will possess Attribute 3. For these respondents, Attribute 2 is identifiable because of differing response distributions on Item 3. However, response distributions for those with attribute profiles $(0,1,0)$ and $(0,0,0)$ are still indistinguishable. Thus, although the assessment provides no information about Attribute 2 for a smaller part of the population, the issue has not been completely resolved. \section{Partitioning the Attribute Profile Space}\label{sec:part} We begin with an intuitive criterion for deciding whether an assessment has the ability to differentiate between two attribute profiles. \begin{definition}\label{def:sep} Two attribute profiles are \emph{separable} if they lead to different response distributions. \end{definition} The differing response distributions of separable attribute profiles imply that the data will favor one profile or the other; there is some differential effect on the likelihood and thus the posterior. Profiles that are not separable are statistically identical, with equivalent likelihood functions, making any differences in their posteriors simply artifacts of the prior. Determining whether attribute profiles are separable can be done without the full response distribution; in fact, only the ideal responses $\boldsymbol{\xi}(Q,\boldsymbol{\alpha})$ are necessary. \begin{proposition}\label{thmIdeal} Given a Q-matrix $Q$ and slipping and guessing parameters $\mbox{$\mathbf s$}$ and $\mbox{$\mathbf g$}$, two attribute profiles $\boldsymbol{\alpha}^1$ and $\boldsymbol{\alpha}^2$ can be separated if and only if they produce ideal response vectors $\boldsymbol{\xi}^1 = \boldsymbol{\xi}(Q,\boldsymbol{\alpha}^1)$ and $\boldsymbol{\xi}^2 = \boldsymbol{\xi}(Q,\boldsymbol{\alpha}^2)$ such that for some $j \in \{1,\ldots, J\}$, $\boldsymbol{\xi}^1_j \neq \boldsymbol{\xi}^2_j$ and $1-s_j \neq g_j$. \end{proposition} Throughout the rest of this paper we assume that $1-s_j \neq g_j$ for each $j = 1,\ldots, J$, which simplifies Proposition \ref{thmIdeal} into Corollary \ref{corrIdeal}.
Should such an item indeed be present, then it has no discriminating power and may be omitted. \begin{corollary}\label{corrIdeal} If every item $j$ has different success probabilities given $\xi_j = 1$ or given $\xi_j = 0$, i.e.\ $1-s_j \neq g_j$ for $j = 1,\ldots, J$, then two attribute profiles can be separated if and only if they produce different ideal response vectors. \end{corollary} Lastly, it is also of interest whether an attribute profile can be separated from all other attribute profiles, and is thus identifiable. This definition of identifiability will be tied to the general statistical concept in Section~\ref{sec:est}. \begin{definition} An attribute profile $\boldsymbol{\alpha}$ is \emph{identifiable} when it can be separated from any other attribute profile $\boldsymbol{\alpha}'\neq \boldsymbol{\alpha}$. \end{definition} \subsection{Complete Separation of Attribute Profiles} The first step in understanding the identifiability issue is determining under what circumstances all attribute profiles are identifiable. This depends on the Q-matrix, which is called complete when it leads to full identifiability \cite{Chiu}. Formally, we have the following definition: \begin{definition} Under a \emph{complete} Q-matrix, all attribute profiles are identifiable, i.e.\ $\boldsymbol{\xi}(Q,\boldsymbol{\alpha}) \neq \boldsymbol{\xi}(Q,\boldsymbol{\alpha}')$ iff $\boldsymbol{\alpha} \neq \boldsymbol{\alpha}'$. \end{definition} The requirements for completeness have long been known \cite{Tatsuoka1991,DiBello,Chiu}. In essence, the assessment must contain at least one item devoted solely to each attribute. In terms of the Q-matrix, this means that for each $k \in \{1,\ldots, K\}$, there should be at least one row with an entry of `1' solely in the $k$-th position. \begin{proposition}\label{prop:complete} Let $R_Q$ be the set of row vectors of Q-matrix $Q$. Then $Q$ is \emph{complete} iff $\{e_k: k = 1,\ldots, K\} \subset R_Q$, where $e_k$ is a vector such that the $k$-th element is one and all other elements are zero. \end{proposition} \subsection{Partial Separation of Attribute Profiles} While a complete Q-matrix is necessary for full identifiability, many of the Q-matrices used in practice are unfortunately incomplete. In fact, creating assessments with complete Q-matrices is oftentimes infeasible, and requiring a complete matrix for analysis would make models like the DINA model highly impractical. This makes the partition, a standard mathematical construct, an essential tool in accurately and systematically describing the structure of nonidentifiability in the DINA model. Partitions are formed from equivalence relations, which have the following requirements: \begin{definition} The relation `$a \sim b$' is an \emph{equivalence relation} if it is \begin{itemize} \item reflexive: $a \sim a$ \item symmetric: $a \sim b$ iff $b \sim a$ \item transitive: If $a\sim b$ and $b \sim c$, then $a \sim c$. \end{itemize} \end{definition} \begin{proposition}\label{prop:equiv} Let `$\sim$' denote the binary relation `cannot be separated,' where $\boldsymbol{\alpha}^1 \sim \boldsymbol{\alpha}^2$ if and only if $\boldsymbol{\xi}(Q,\boldsymbol{\alpha}^1) = \boldsymbol{\xi}(Q,\boldsymbol{\alpha}^2)$. Then `$\sim$' is an equivalence relation. \end{proposition} Putting profiles into groups, known as equivalence classes, based on an equivalence relation results in a partition; in this case, any two attribute profiles in the same equivalence class cannot be separated, while any two in different classes can be.
We denote a particular equivalence class by $[\boldsymbol{\alpha}]$, where $\boldsymbol{\alpha}$ may be any attribute profile in the class; literally, $[\boldsymbol{\alpha}]$ can be read as ``the set of attribute profiles equivalent to $\boldsymbol{\alpha}$.'' The simplest way of determining the partition would be to calculate the ideal response vector of each of the $2^K$ attribute profiles and sort them lexicographically. This runs quickly in $\mathcal{O}(JK\cdot 2^K)$ time (refer to Table~\ref{tab:alg} for the step-by-step algorithm). For an alternative algorithm using Boolean algebra, see \citeA{Tatsuoka1991}. Note that our algorithm results in equivalence classes labeled by their smallest member, which shall be called the \emph{minimal representative}. The minimal representative has additional meaning as the attribute requirements of the corresponding ideal response vector and is therefore the most preferable member for labeling. \begin{table}[ht] \begin{center} \caption{Algorithm for partitioning an attribute profile space } \label{tab:alg} \begin{tabular}{rp{3.75in}} \toprule Step & Procedure\\ \midrule Input: & A $J\times K$ Q-matrix $Q$. \\ \noalign{\medskip} (0) & (optional) Remove items with duplicate attribute requirements\\ (1) & List all $2^K$ attribute profiles $\boldsymbol{\alpha}$.\\ (2) & Find the ideal response vector $\boldsymbol{\xi}(Q,\boldsymbol{\alpha})$ for each $\boldsymbol{\alpha}$.\\ (3) & Do a lexicographic (alphabetic) sort of the ideal response vectors.\\ (4) & Check whether each successive profile has the same ideal response vector as the previous profile. If not, $\boldsymbol{\alpha}$ is the first member of a new equivalence class $[\boldsymbol{\alpha}]$. Else, $\boldsymbol{\alpha}$ is part of the current equivalence class.\\ \noalign{\medskip} Output: & A list of equivalence classes $[\boldsymbol{\alpha}]$ and their members.\\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[ht] \begin{center} \caption{Generating the partition associated with the Q-matrix $Q_3$.} \label{tab:ex} \begin{tabular}{cccccccccc} \toprule $Q_3$&&$\boldsymbol{\alpha}$ & $\boldsymbol{\xi}(Q_3,\boldsymbol{\alpha})$&& $\boldsymbol{\alpha}$ & $\boldsymbol{\xi}(Q_3,\boldsymbol{\alpha})$ &&$[\boldsymbol{\alpha}]$\\ \cmidrule(lr){1-1} \cmidrule(lr){3-4} \cmidrule(lr){6-7} \cmidrule(lr){9-9} \multirow{8}{*}{$\begin{pmatrix}1&0&0\\1&1&0\\0&1&1\end{pmatrix}$} & \multirow{8}{*}{$\underrightarrow{\text{ (1),(2)}}$}&000&000&\multirow{8}{*}{$\underrightarrow{(3)}$}&000&000&\multirow{8}{*}{$\underrightarrow{\text{(4)}}$}&[000]\\ &&100&100&&010&000\\ &&010&000&&001&000\\ &&001&000&&011&001&&[011]\\ &&110&110&&100&100&&[100]\\ &&101&100&&101&100\\ &&011&001&&110&110&&[110]\\ &&111&111&&111&111&&[111]\\ \bottomrule \end{tabular} \flushleft Steps from Table~\ref{tab:alg} labeled (1), (2), (3), and (4).\end{center} \end{table} As seen in Table~\ref{tab:ex}, performing the algorithm on the $3\times 3$ Q-matrix $Q_3$ from \eqref{eq:Q3} results in five different equivalence classes, each of which is labeled by its minimal representative: $[000] = \{000,010,001\}$, $[011] = \{011\}$, $[100] = \{100,101\}$, $[110] = \{110\}$, and $[111] = \{111\}$. Note that since the bracket notation may be read as `the equivalence class containing,' it is possible to change the labeling of each equivalence class by choosing any other member as the titular profile: $[000]$, $[010]$, and $[001]$ all refer to the same equivalence class, for example.
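A direct transcription of the algorithm of Table~\ref{tab:alg} in Python might look as follows (a sketch under our own naming; dictionary grouping replaces the explicit lexicographic sort, with the same output):
\begin{verbatim}
# Sketch of the partitioning algorithm described above (names ours).
from itertools import product

def ideal_response(Q, alpha):
    """xi_j = 1 iff alpha meets every attribute requirement of item j."""
    return tuple(int(all(a >= q for a, q in zip(alpha, row))) for row in Q)

def partition(Q):
    """Group all 2^K attribute profiles by their ideal response vector."""
    K = len(Q[0])
    classes = {}
    for alpha in product((0, 1), repeat=K):          # steps (1)-(2)
        classes.setdefault(ideal_response(Q, alpha), []).append(alpha)
    # steps (3)-(4): label each class by its minimal representative
    return {min(members): sorted(members) for members in classes.values()}

Q3 = [(1, 0, 0), (1, 1, 0), (0, 1, 1)]
print(partition(Q3))
# -> five classes: [000] = {000, 001, 010}, [011], [100] = {100, 101},
#    [110], [111], matching the partition derived in the text
\end{verbatim}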
\section{Consistent Estimation}\label{sec:est} We now consider the problem of parameter estimation, specifically that of $\nu_{\boldsymbol{\alpha}}$, the proportion of the population possessing each attribute profile $\boldsymbol{\alpha}$. Unless $\boldsymbol{\nu}$ is assumed known, its consistent estimation has important consequences for respondent classification and exam validity. Unfortunately, when an assessment's Q-matrix is incomplete, it is impossible to consistently estimate $\boldsymbol{\nu}$. For each equivalence class $[\boldsymbol{\alpha}]$, let $\nu_{[\boldsymbol{\alpha}]}$ be the proportion of the population possessing an attribute profile within that equivalence class. Then, \begin{equation}\label{eq:priorClass} \nu_{[\boldsymbol{\alpha}]} = \sum_{\boldsymbol{\alpha}'\in[\boldsymbol{\alpha}]}\nu_{\boldsymbol{\alpha}'}. \end{equation} The probability of observing any particular set of data depends only on $\nu_{[\boldsymbol{\alpha}]}$, since the probability of any response depends only on equivalence class membership, not on the respondent's possession of a specific profile. With an incomplete Q-matrix it is possible to observe populations with different distributions $\boldsymbol{\nu}^1 \not\equiv \boldsymbol{\nu}^2$ over the attribute profile space that have identical distributions over the equivalence classes $[\boldsymbol{\alpha}]$, i.e., $\nu^1_{[\boldsymbol{\alpha}]} = \nu^2_{[\boldsymbol{\alpha}]}$ for all $\boldsymbol{\alpha} \in \{0,1\}^K$, and thus identical response distributions. The phenomenon where different parameter values lead to identical response distributions is generally known as non-identifiability, and it destroys the ability of likelihood-based estimation methods to achieve consistency. While consistent estimation of $\nu_{\boldsymbol{\alpha}}$ cannot be achieved, it is possible to consistently estimate the proportion of individuals within each equivalence class $[\boldsymbol{\alpha}]$. \begin{theorem}\label{thm:consistent} Suppose an assessment follows the DINA model, with known Q-matrix $Q$ and item parameters $\mbox{$\mathbf s$}$ and $\mbox{$\mathbf g$}$. Let $\nu_{[\boldsymbol{\alpha}]}$, representing the proportion of the population possessing an attribute profile $\boldsymbol{\alpha}' \in [\boldsymbol{\alpha}]$, be defined as in \eqref{eq:priorClass}, and let the population parameter $\boldsymbol{\nu}$ be the vector of all $\nu_{[\boldsymbol{\alpha}]}$. We may write its likelihood as $$L(\boldsymbol{\nu}) = p(X|\boldsymbol{\nu}) = \prod_{i=1}^N p(\mbox{$\mathbf x$}^i|\boldsymbol{\nu}) = \prod_{i=1}^N \sum_{[\boldsymbol{\alpha}]}p(\mbox{$\mathbf x$}^i|[\boldsymbol{\alpha}])\nu_{[\boldsymbol{\alpha}]}.$$ Then the maximum likelihood estimate $\hat\boldsymbol{\nu}$ of $\boldsymbol{\nu}$ is consistent as $N\rightarrow \infty$. \end{theorem} Consistent estimation of the $\nu_{[\boldsymbol{\alpha}]}$ is an important result, underpinning the results of the following sections. To emphasize the differences in parameter space and procedure, work based on equivalence classes $[\boldsymbol{\alpha}]$ rather than profiles $\boldsymbol{\alpha}$ will from here on be referred to under the name of the Non-Identifiability ADjusted DINA (NIAD-DINA) model. \section{Marginal Identifiability}\label{sec:single} We now wish to extend the concept of identifiability to individual attributes.
This is motivated by the fact that, though the presence of multiple profiles in the same equivalence class signals non-identifiability, some individual attributes may still be identifiable within the class. To illustrate, consider the Q-matrix $Q_3$ from \eqref{eq:Q3} and one of its equivalence classes, $[000] = \{000,010,001\}$. If a profile $\boldsymbol{\alpha} \in [000]$, then its first component $\alpha_1 = 0$, but the values of $\alpha_2$ and $\alpha_3$ are uncertain. Thus, posterior weight $p([000]|\mbox{$\mathbf x$})$ on this class counts as positive evidence that $\alpha_1 = 0$, but does not help in deciding $\alpha_2$ or $\alpha_3$. This observation motivates the following definition: \begin{definition} An attribute is \emph{marginally identifiable} within an equivalence class when either all members of that class possess that attribute or none of them do. \end{definition} Define the marginal identifiability indicator $\delta_{[\boldsymbol{\alpha}],k}$ as follows: \begin{equation}\label{eq: identifiable} \delta_{[\boldsymbol{\alpha}],k} = \prod_{\boldsymbol{\alpha}' \in [\boldsymbol{\alpha}]} \alpha'_k + \prod_{\boldsymbol{\alpha}'\in[\boldsymbol{\alpha}]} (1-\alpha'_k). \end{equation} Then, $\delta_{[\boldsymbol{\alpha}],k} = 1$ when Attribute $k$ is marginally identifiable within equivalence class $[\boldsymbol{\alpha}]$. Posterior weight on a class $[\boldsymbol{\alpha}]$ only provides information about the $k$-th attribute when $\delta_{[\boldsymbol{\alpha}],k} = 1$; otherwise, there is no information beyond the prior. \subsection{Exam Quality: the Marginal Identifiability Rate}\label{sec:eval} Since non-identifiability is frequently unavoidable with Q-matrix based CDMs, it is important to measure its extent. For a more nuanced view, this is done on a marginal basis. Given the proportion $\nu_{\boldsymbol{\alpha}}$ of each attribute profile $\boldsymbol{\alpha}$, the proportion of the population for which the $k$-th attribute is marginally identifiable can be quantified by $\zeta_k$, as follows: \begin{equation}\label{eq:zeta} \zeta_k = \sum_{\{\boldsymbol{\alpha}: \delta_{[\boldsymbol{\alpha}],k} = 1\}}\nu_{\boldsymbol{\alpha}}. \end{equation} Let $\boldsymbol{\zeta}$ be the vector of all $\zeta_k$. Then $\boldsymbol{\zeta}$ is the proportion of the population for which each attribute is marginally identifiable, i.e., the marginal identifiability rate. Oftentimes $\nu_{\boldsymbol{\alpha}}$, and thus $\boldsymbol{\zeta}$, is unknown. Under the conditions of Theorem~\ref{thm:consistent}, $\boldsymbol{\zeta}$ can be consistently estimated by its maximum likelihood estimator $\hat\boldsymbol{\zeta}$. \begin{proposition} \label{prop:marg} Suppose an assessment follows the DINA model, with known Q-matrix $Q$ and item parameters $\mbox{$\mathbf s$}$ and $\mbox{$\mathbf g$}$. Let $\hat\nu_{[\boldsymbol{\alpha}]}$ be the maximum likelihood estimate of $\nu_{[\boldsymbol{\alpha}]}$. Then the estimator \begin{equation}\label{eq:propID_hat} \hat\zeta_k = \sum_{\{[\boldsymbol{\alpha}]: \delta_{[\boldsymbol{\alpha}],k} = 1\}}\hat\nu_{[\boldsymbol{\alpha}]},\quad k = 1,\ldots, K, \end{equation} is consistent as $N\rightarrow \infty$. \end{proposition} The consistency of $\hat\boldsymbol{\zeta}$ is a direct consequence of the consistency of $\hat\nu_{[\boldsymbol{\alpha}]}$ in Theorem~\ref{thm:consistent}. We thus obtain a very reasonable measure of exam quality, in terms of the proportion of the population for which each attribute is marginally identifiable.
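Continuing the sketch given earlier (names ours), the indicators $\delta_{[\boldsymbol{\alpha}],k}$ of \eqref{eq: identifiable} and the rates $\zeta_k$ of \eqref{eq:zeta} can be computed directly from the partition:
\begin{verbatim}
# Sketch (names ours): marginal identifiability indicators and zeta,
# reusing partition() from the earlier sketch.
def marginal_indicators(members, K):
    """delta_k = 1 iff attribute k is constant across the class,
    i.e. all members possess it or none of them do."""
    return tuple(int(all(m[k] == members[0][k] for m in members))
                 for k in range(K))

def zeta(Q, nu):
    """Marginal identifiability rates; nu maps each profile alpha
    (a 0/1 tuple) to its population proportion nu_alpha."""
    K = len(Q[0])
    z = [0.0] * K
    for members in partition(Q).values():
        delta = marginal_indicators(members, K)
        for k in range(K):
            if delta[k]:
                z[k] += sum(nu.get(m, 0.0) for m in members)
    return z
\end{verbatim}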
\section{Classification}\label{sec:class} Non-identifiability has potentially serious effects on respondent classification. Classification is generally conducted based on the posterior distribution $p(\boldsymbol{\alpha}|\mbox{$\mathbf x$}) \propto p(\mbox{$\mathbf x$}|\boldsymbol{\alpha}) p(\boldsymbol{\alpha})$. Recall that profiles in the same equivalence class have the same likelihood. Thus, the posterior will simply be a reflection of the prior, without any added information from the data \cite{deCarlo2011}. In fact, within an equivalence class the posteriors are proportional to the priors. Given a prior $p(\boldsymbol{\alpha})$, for any $\boldsymbol{\alpha}^1,\boldsymbol{\alpha}^2 \in [\boldsymbol{\alpha}]$, $$ \frac{p(\boldsymbol{\alpha}^1|\mbox{$\mathbf x$})}{p(\boldsymbol{\alpha}^2|\mbox{$\mathbf x$})} = \frac{p(\mbox{$\mathbf x$}|\boldsymbol{\alpha}^1)p(\boldsymbol{\alpha}^1)}{p(\mbox{$\mathbf x$}|\boldsymbol{\alpha}^2)p(\boldsymbol{\alpha}^2)} = \frac{p(\mbox{$\mathbf x$}|[\boldsymbol{\alpha}])p(\boldsymbol{\alpha}^1)}{p(\mbox{$\mathbf x$}|[\boldsymbol{\alpha}])p(\boldsymbol{\alpha}^2)} = \frac{p(\boldsymbol{\alpha}^1)}{p(\boldsymbol{\alpha}^2)}. $$ Posteriors are often calculated by maximizing the marginal likelihood $L(\boldsymbol{\nu},\mbox{$\mathbf s$},\mbox{$\mathbf g$})$ via the E-M algorithm \cite{Haertel,dela2009, Rupp}. Then, since all vectors $\boldsymbol{\nu}$ with identical weights on each class $\nu_{[\boldsymbol{\alpha}]}$ have identical likelihoods, any $\boldsymbol{\nu}$ that attains the maximizing $\nu_{[\boldsymbol{\alpha}]}$ may result. The values chosen are determined by the starting values, which have little validity for classification. When the posterior is sensitive to the prior, it is important to work with $p([\boldsymbol{\alpha}])$, which can be estimated consistently, rather than $p(\boldsymbol{\alpha})$. Thus classification here will be conducted based on $p([\boldsymbol{\alpha}]|\mbox{$\mathbf x$}) \propto p(\mbox{$\mathbf x$}|[\boldsymbol{\alpha}])p([\boldsymbol{\alpha}])$ instead of the usual posterior. This calculation does not require a separate fitting of the model, since \begin{equation} \label{eq:postClass} p([\boldsymbol{\alpha}]|\mbox{$\mathbf x$}) = \frac{p(\mbox{$\mathbf x$}|[\boldsymbol{\alpha}])\nu_{[\boldsymbol{\alpha}]} }{p(\mbox{$\mathbf x$})} = \frac{p(\mbox{$\mathbf x$}|[\boldsymbol{\alpha}])\sum_{\boldsymbol{\alpha}'\in[\boldsymbol{\alpha}]}\nu_{\boldsymbol{\alpha}'}}{p(\mbox{$\mathbf x$})} = \sum_{\boldsymbol{\alpha}'\in[\boldsymbol{\alpha}]}p(\boldsymbol{\alpha}'|\mbox{$\mathbf x$}) \end{equation} From this posterior, we then define \begin{eqnarray} p_k^{\min}(\mbox{$\mathbf x$}) & = & \sum_{[\boldsymbol{\alpha}]:\alpha_k=1,\delta_{[\boldsymbol{\alpha}],k}=1} p([\boldsymbol{\alpha}]|\mbox{$\mathbf x$}),\label{eq:pmin} \\ p_k^{\max}(\mbox{$\mathbf x$}) & = & p_k^{\min}(\mbox{$\mathbf x$}) +\sum_{[\boldsymbol{\alpha}]:\delta_{[\boldsymbol{\alpha}],k}=0} p([\boldsymbol{\alpha}]|\mbox{$\mathbf x$}),\label{eq:pmax} \end{eqnarray} where $\delta_{[\boldsymbol{\alpha}],k}$ is the marginal identifiability indicator defined in \eqref{eq: identifiable}. Classification follows from the fact that, depending upon the specific hyperprior on $\boldsymbol{\nu}$ or starting point of the E-M algorithm, the DINA model may produce marginal posterior probabilities of mastery $p(\alpha_k=1|\mbox{$\mathbf x$})$ anywhere in the range $[p_k^{\min}(\mbox{$\mathbf x$}), p_k^{\max}(\mbox{$\mathbf x$})]$.
Thus, it is only appropriate to conclude that $\alpha_k = 1$ when $p_k^{\min}(\mbox{$\mathbf x$})$ is high, or that $\alpha_k = 0$ when $p_k^{\max}(\mbox{$\mathbf x$})$ is low. A natural cutoff for both is 0.5, but it may be adjusted as necessary. This classification method, from here on referred to as the NIAD-DINA classification algorithm, accounts for both uncertainty in the prior and uncertainty caused by slipping and guessing. It is summarized in Table~\ref{tab:algClass}. \begin{table}[ht] \begin{center} \caption{NIAD-DINA classification algorithm} \label{tab:algClass} \begin{tabular}{rp{3in}c} \toprule Step&Procedure &q.v.\\ \midrule Input: & Q-matrix $Q = (q_{jk})_{J\times K}$, data $X = (x_{ij})_{N\times J}$.\\ \noalign{\medskip} (1) & Fit the model to produce $p(\boldsymbol{\alpha}|\mbox{$\mathbf x$})$.\\ (2) & Partition the attribute profile space. & Table~\ref{tab:alg}\\ (3) & Calculate the marginal identifiability vector $\delta_{[\boldsymbol{\alpha}]}$. & \eqref{eq: identifiable}\\ (4) & Sum posteriors $p(\boldsymbol{\alpha}|\mbox{$\mathbf x$})$ to obtain $p([\boldsymbol{\alpha}]|\mbox{$\mathbf x$})$. & \eqref{eq:postClass}\\ (5) & Calculate $p_k^{\min}(\mbox{$\mathbf x$})$ and $p_k^{\max}(\mbox{$\mathbf x$})$ for every $k,\mbox{$\mathbf x$}$. & \eqref{eq:pmin}, \eqref{eq:pmax}\\ (6) & Classify: \\ & \hspace{.25in}If $p_k^{\min}>0.5$, then $\hat\alpha_k = 1$. \\ &\hspace{.25in}If $p_k^{\max} < 0.5$, then $\hat\alpha_k = 0$. \\ &\hspace{.25in}Else, $\hat\alpha_k = *$ (unclassified).\\ \noalign{\medskip} Output: & Classifications $\hat\alpha^i_k \in \{0,1,*\}$ for all $i,k$.\\ \bottomrule \end{tabular} \end{center} \end{table} \section{Extensions}\label{sec:exts} \subsection{Variants of the DINA} The methodology of partitioning in Section \ref{sec:part} applies to any model where the presence of a difference in the ideal response pattern fully determines the presence of a difference in the likelihood function. Included among these models are all variants of the DINA model listed in Section~\ref{sec:DINAvar}. Since these variants are, in essence, a restriction on the space of $\boldsymbol{\nu}$, the consistency result for $\nu_{[\boldsymbol{\alpha}]}$ in Section~\ref{sec:est}, along with all the following results, holds when the model is in fact correct. However, if the true $\nu_{[\boldsymbol{\alpha}]}$ do not fall under the set of values consistent with the restriction on the parameter space, then even estimates of $\nu_{[\boldsymbol{\alpha}]}$ will no longer be consistent. Thus, large differences in the $\hat\nu_{[\boldsymbol{\alpha}]}$ calculated under restricted models from those calculated under the NIAD-DINA model are symptomatic of model misspecification, and may imply that the DINA variant chosen is overly restrictive on the prior. Goodness-of-fit measures such as the AIC and BIC will reflect lack of fit appropriately if the saturated model has the correct number of parameters $2J + L$, where $L$ is the number of equivalence classes, rather than $2J + 2^K$. \subsection{The DINO Model} The DINO model also specifies item and attribute relationships using a Q-matrix, but the ideal responses are calculated as $$\xi_j(Q,\boldsymbol{\alpha}) = 1- \prod_{k=1}^K (1-\alpha_k)^{q_{jk}} = \textbf{1}(\alpha_k = q_{jk}=1\text{ for some }k).$$ As in the DINA model, the response probabilities are functions of item parameters $s_j = P(X_j = 0|\xi_j = 1)$ and $g_j = P(X_j = 1|\xi_j = 0)$.
Under the DINA model, ideal responses are \emph{correct} when the respondent possesses all required attributes; under the DINO model, ideal responses are \emph{incorrect} when the respondent possesses \emph{none} of the required attributes. Thus, for responses $X$ following the DINO model, the reverse responses $1-X$ follow the DINA model (with a reversed interpretation of the attribute profile vectors). This dualism implies that all the results of this paper apply to the DINO model. \section{Results}\label{sec:results} \subsection{Simulation Results} We first demonstrate the procedures on simulated data. Responses are generated for $N = 5000$ respondents taking an assessment with $J = 6$ items measuring $K = 3$ distinct attributes. The respondents' mastery or nonmastery of the measured attributes is randomly generated according to the probability $p_{sim}(\boldsymbol{\alpha})$ of each profile $\boldsymbol{\alpha} \in\{0,1\}^3$, as listed in Table~\ref{tab:p}. \begin{table}[ht] \centering \caption{Population proportions of each attribute profile} \label{tab:p} \begin{tabular}{lcccccccc} \toprule &\multicolumn{8}{c}{ $\boldsymbol{\alpha}$ }\\ \cmidrule(lr){2-9} & 000 & 001 & 010 & 011 & 100 & 101 & 110 & 111 \\ \midrule $p_{sim}$ & 0.27 & 0.00 & 0.01 & 0.04 & 0.10 & 0.16 & 0.20 & 0.21\\ \bottomrule \end{tabular} \end{table} The responses themselves follow the DINA model according to the Q-matrix $Q_{sim}$ with slipping $s_{sim}$ and guessing $g_{sim}$ as shown in Table~\ref{tab:simParam}. \begin{table}[ht]\footnotesize \centering \caption{Q-matrix, slipping, and guessing for simulated data.} \begin{tabular}{cccc} \toprule Item ($j$) & Attribute vector ($\mbox{$\mathbf q$}^j$) &Slipping ($s_j$) &Guessing ($g_j$) \\ \midrule 1. & 100 & 0.14 & 0.10 \\ 2. & 110 & 0.12 & 0.15 \\ 3. & 011 & 0.18 & 0.18 \\ 4. & 100 & 0.17 & 0.18 \\ 5. & 110 & 0.08 & 0.06 \\ 6. & 011 & 0.05 & 0.06 \\ \bottomrule \end{tabular} \label{tab:simParam} \end{table} The Q-matrix $Q_{sim}$ is incomplete, and the resulting instability in the posterior becomes clear once the data is fitted multiple times. As an example, the posterior probabilities of each attribute profile given the zero response vector $\mbox{$\mathbf 0$} = (0,0,\ldots, 0)$ are summarized in Table~\ref{tab:simPost0_profiles}. \begin{table}[ht] \centering \caption{Posterior probabilities given zero correct responses, $p(\boldsymbol{\alpha}|\mbox{$\mathbf x$} = \mbox{$\mathbf 0$})$} \begin{tabular}{llcccccccc}\toprule &&\multicolumn{8}{c}{ $\boldsymbol{\alpha}$ }\\ \cmidrule(lr){3-10} && 000 & 001 & 010 & 011 & 100 & 101 & 110 & 111 \\ \midrule \multicolumn{2}{l}{truth} \\ && 0.91 & 0.02 & 0.05 & 0.00 & 0.01 & 0.01 & 0.00 & 0.00\\ \multicolumn{2}{l}{minimums} \\ & DINA & 0.01 & 0.02 & 0.03 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00\\ & HO-DINA & 0.13 & 0.09 & 0.03 & 0.00 & 0.01 & 0.01 & 0.00 & 0.00\\ &RHO-DINA & 0.55 & 0.11 & 0.29 & 0.00 & 0.02 & 0.01 & 0.00 & 0.00\\ &ind-DINA & 0.29 & 0.24 & 0.37 & 0.00 & 0.02 & 0.02 & 0.00 & 0.00\\ \multicolumn{2}{l}{maximums} \\ &DINA & 0.71 & 0.86 & 0.56 & 0.00 & 0.02 & 0.03 & 0.00 & 0.00\\ &HO-DINA & 0.62 & 0.81 & 0.71 & 0.00 & 0.02 & 0.02 & 0.00 & 0.00\\ &RHO-DINA & 0.58 & 0.13 & 0.30 & 0.00 & 0.02 & 0.01 & 0.00 & 0.00\\ &ind-DINA & 0.31 & 0.28 & 0.40 & 0.01 & 0.03 & 0.02 & 0.00 & 0.00\\ \noalign{\medskip} \bottomrule \end{tabular} \flushleft Note: Minimum and maximum values of the posterior $p(\boldsymbol{\alpha}|\mbox{$\mathbf x$} = \mbox{$\mathbf 0$})$, as generated over ten runs of the (random start) E-M algorithm.
\label{tab:simPost0_profiles} \end{table} Here, the DINA, HO-DINA, and RHO-DINA are overparameterized and produce a wide range of results for profiles $000$, $001$, and $010$. The slight variability in the ind-DINA estimates is a numerical artifact. While the ind-DINA does not suffer from nonidentifiability, it still does not give accurate estimates in this case since the model assumptions are incorrect. Partitioning the attribute profile space as directed by Table~\ref{tab:alg} produces the five equivalence classes listed in Table~\ref{tab:simPart}, two of which have multiple members. The table also reports the marginal identifiability vector $\delta_{[\boldsymbol{\alpha}]}$ for each class. Note that since Items 1 and 4 are devoted to Attribute 1, it is always marginally identifiable and $\delta_{[\boldsymbol{\alpha}],1}\equiv 1$. It is also clear that nonidentifiability most seriously affects Attribute 3, which is marginally non-identifiable for members of both [000] and [100]. Finally, Table~\ref{tab:simPart} also reports E-M estimates of the proportion of respondents in each class under the DINA and several variants, along with the true proportion. Note the accuracy of the DINA estimates, which are consistent, and the inaccuracy of the ind-DINA estimates due to model misfit. \begin{table}[ht] \centering \caption{Equivalence classes, along with their class sizes, true and maximum likelihood probabilities, and marginal identifiability vectors.} \begin{tabular}{cccccccc}\toprule &&\multicolumn{5}{c} {$\nu_{[\boldsymbol{\alpha}]}$}&\\ \cmidrule(lr){3-7} $[\boldsymbol{\alpha}]$ & Size & True & DINA & HO-DINA & RHO-DINA & ind-DINA &$\delta_{[\boldsymbol{\alpha}]}$\\ \midrule $[000]$ & 3 & 0.29 & 0.30 & 0.29 & 0.29 & 0.22 & 100\\ $[100]$ & 2 & 0.26 & 0.26 & 0.27 & 0.26 & 0.31 & 110\\ $[011]$ & 1 & 0.04 & 0.04 & 0.04 & 0.04 & 0.08 & 111\\ $[110]$ & 1 & 0.20 & 0.20 & 0.19 & 0.20 & 0.20 & 111\\ $[111]$ & 1 & 0.21 & 0.21 & 0.21 & 0.21 & 0.18 & 111\\ \bottomrule \end{tabular} \label{tab:simPart} \end{table} We now consider variability in the marginal posterior probabilities $p(\alpha_k = 1|\mbox{$\mathbf x$})$. Table~\ref{tab:simPost0_alphaK} gives the sample range of $p(\alpha_k = 1|\mbox{$\mathbf x$})$ after ten runs of the E-M algorithm, in addition to the theoretical range. Note the large theoretical ranges for $p(\alpha_k=1|\mbox{$\mathbf x$} = \mbox{$\mathbf 0$}), k = 2,3$. \begin{table}[ht] \centering \caption{Variability in $p(\alpha_k=1|\mbox{$\mathbf x$}=\mbox{$\mathbf 0$})$, the marginal posterior given the zero response vector.} \begin{tabular}{lccc}\toprule &\multicolumn{3}{c}{ $k$ }\\ \cmidrule(lr){2-4} & 1 & 2 & 3\\ \midrule sample min & 0.03 & 0.03 & 0.04\\ $p_k^{\min}(\mbox{$\mathbf 0$})$ & 0.03 & 0.00 & 0.00\\ \noalign{\medskip} sample max & 0.03 & 0.56 & 0.89\\ $p_k^{\max}(\mbox{$\mathbf 0$})$ & 0.03 & 0.97 & 1.00\\ \bottomrule \end{tabular} \flushleft Note: Probabilities calculated by fitting the DINA model over ten runs of the E-M algorithm with random starts. \label{tab:simPost0_alphaK} \end{table} Classification was conducted on a marginal basis, based on $p(\alpha_k=1|\mbox{$\mathbf x$})$, under each of the models. In addition, NIAD-DINA classification was performed (see Table~\ref{tab:algClass}). Marginal misclassification rates $p(\hat\alpha_k \neq \alpha_k)$ are compared in Table~\ref{tab:simMisclass}. Note that NIAD-DINA classification results in unclassified individuals; for example, $\hat\boldsymbol{\alpha} = (0**)$ for those with the zero response vector.
This rate is also listed in Table~\ref{tab:simMisclass}. The DINA and HO-DINA are overparameterized and the misclassification rate for Attribute 3 may reach over 40\% in both models. The ind-DINA also performs poorly, but due to an overly restricted parameter space rather than nonidentifiability. Adjusting classification under the DINA to account for nonidentifiability according to the method described in Section~\ref{sec:class} solves both these issues. It may leave a large proportion of individuals unclassified, but this is a necessary consequence of the assessment design. Classifying these individuals would require further assumptions beyond the model. \begin{table}[h] \centering \caption{Marginal misclassification rates under a variety of models.} \begin{tabular}{cccccc}\toprule &\multicolumn{5}{c}{ Model }\\ \cmidrule(l){2-6} k & DINA & HO-DINA & RHO-DINA & ind-DINA& NIAD-DINA\\ \midrule 1 & 0.07 & 0.07 & 0.07 & 0.09 & 0.07 (0.00)\\ 2 & 0.07-0.32 & 0.07-0.32 & 0.07 & 0.26 & 0.04 (0.32)\\ 3 & 0.19-0.44 & 0.19-0.43 & 0.20 & 0.21 & 0.04 (0.56)\\ \bottomrule \end{tabular} \flushleft Note: Range over 10 runs reported for overparameterized models. All cut-offs equal to 0.5. The proportion of respondents left unclassified under the NIAD-DINA is displayed within parentheses. \label{tab:simMisclass} \end{table} In addition to controlling misclassification errors, we may also evaluate the quality of the assessment by measuring the marginal identifiability rate $\boldsymbol{\zeta}$ defined in \eqref{eq:zeta}. Table~\ref{tab:simID} shows both true and estimated values for $\boldsymbol{\zeta}$. Note once again that nonidentifiability affects Attribute 3 more severely than it does Attribute 2. In addition, estimates are generally accurate, except in the case of the ind-DINA, which suffers from lack of fit. \begin{table}[h] \centering \caption{True and estimated values for $\boldsymbol{\zeta}$, marginal identifiability rate.} \begin{tabular}{cccccc}\toprule &&\multicolumn{4}{c}{ Model }\\ \cmidrule(l){3-6} k & true & DINA & HO-DINA & RHO-DINA & ind-DINA\\ \midrule 1 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00\\ 2 & 0.71 & 0.70 & 0.71 & 0.71 & 0.78\\ 3 & 0.44 & 0.45 & 0.45 & 0.45 & 0.47\\ \bottomrule \end{tabular} \label{tab:simID} \end{table} In terms of model selection, reducing the number of parameters for the DINA model to $2J+L$ from the original $2J + 2^K$ reduces the comparative advantage of the restricted models. In Table~\ref{tab:simAIC}, the AIC value of the RHO-DINA barely edges out that of the DINA with identifiability adjustment. \begin{table}[h] \centering \caption{AIC and BIC for the DINA, RHO-DINA, and ind-DINA.} \begin{tabular}{lccc}\toprule & parameters & AIC & BIC\\ \midrule NIAD-DINA & 17 & 32862.8 & 32973.6 \\ RHO-DINA & 16 &32861.2 & 32965.5 \\ ind-DINA & 15 & 32995.9 & 33093.6 \\ \bottomrule \end{tabular} \label{tab:simAIC} \end{table} \subsection{A Fractions Data Example} We now turn to the widely analyzed fraction subtraction data set of \citeA{Tatsuoka1990}. It is composed of the twenty items listed in Table~\ref{tab:frac_items}. \begin{table}[h] \caption{Items from the fraction subtraction data set \protect\cite{Tatsuoka1990}.} \begin{tabular}{cccccc} \toprule No. & Item & No. & Item & No. & Item\\ \midrule \,1. & $\nicefrac{5}{3}-\nicefrac{3}{4}$ & \,8. & $\nicefrac{2}{3}-\nicefrac{2}{3}$ & 15. & $2 - \nicefrac{1}{3}$ \\ \,2. & $\nicefrac{3}{4} - \nicefrac{3}{8}$ & \,9. & $3\,\nicefrac{7}{8}-2$ & 16. & $4\,\nicefrac{5}{7} - 1\,\nicefrac{4}{7}$\\ \,3.
& $\nicefrac{5}{6}-\nicefrac{1}{9}$ & 10. & $4\,\nicefrac{4}{12} - 2\,\nicefrac{7}{12}$ & 17. & $7\,\nicefrac{3}{5} - \nicefrac{4}{5}$\\ \,4. & $3\,\nicefrac{1}{2}-2\,\nicefrac{3}{2}$ & 11. & $4\,\nicefrac{1}{3}-2\,\nicefrac{4}{3}$ & 18. & $4\,\nicefrac{1}{10} - 2\,\nicefrac{8}{10}$\\ \,5. & $4\,\nicefrac{3}{5} - 3\,\nicefrac{4}{10}$ & 12. & $\nicefrac{11}{8}-\nicefrac{1}{8} $& 19. & $4 - 1\,\nicefrac{4}{3}$\\ \,6. & $\nicefrac{6}{7}-\nicefrac{4}{7}$ & 13. & $3\,\nicefrac{3}{8}-2\,\nicefrac{5}{6}$ & 20. & $4\,\nicefrac{1}{3} - 1\,\nicefrac{5}{3}$\\ \,7. & $3-2\,\nicefrac{1}{5}$ & 14. & $3\,\nicefrac{4}{5}-3\,\nicefrac{2}{5}$ \\ \bottomrule \end{tabular} \label{tab:frac_items} \end{table} The Q-matrix in Table~\ref{tab:Qfrac} comes from \citeA{dela}, and specifies the following eight attributes: $\alpha_1 = $ convert a whole number to a fraction; $\alpha_2 = $ separate a whole number from a fraction; $\alpha_3 = $ simplify before subtracting; $\alpha_4 = $ find a common denominator; $\alpha_5 = $ borrow from whole number part; $\alpha_6 = $ column borrow to subtract the second numerator from the first; $\alpha_7 = $ subtract numerators; and $\alpha_8 = $ reduce answers to simplest form. \begin{table}[ht]\footnotesize \centering \caption{Q-matrix from \protect\citeA{dela}.} \begin{tabular}{cccccc} \toprule Item & Attributes ($\mbox{$\mathbf q$}^j$) &Item &Attributes ($\mbox{$\mathbf q$}^j$) &Item &Attributes ($\mbox{$\mathbf q$}^j$) \\ \midrule \ 1. & 00010110 & \ 8. & 00000010 & 15. & 10000010 \\ \ 2. & 00010010 & \ 9. & 01000000 & 16. & 01000010\\ \ 3. & 00010010 & 10. & 01001011 & 17. & 01001010\\ \ 4. & 01101010 & 11. & 01001010 & 18. & 01001110\\ \ 5. & 01010011 & 12. & 00000011 & 19. & 11101010\\ \ 6. & 00000010 & 13. & 01011010 & 20. & 01101010\\ \ 7. & 11000010 & 14. & 01000010 \\ \bottomrule \end{tabular} \label{tab:Qfrac} \end{table} As pointed out by \citeA{deCarlo2011}, this assessment exemplifies the identifiability issues of the DINA model. While Attributes 2 and 7 have items dedicated solely to them, all other attributes appear only in combination. In fact, Attribute 3 appears only in conjunction with Attributes 2, 5, and 7 (Items 4, 19, and 20). Attribute 7 is required for all items except one, making it difficult to draw conclusions about other attributes when it has not been mastered. Table~\ref{tab:frac_post0} displays the marginal posterior probabilities of mastery for each attribute, given the zero response vector. The posterior displayed for the DINA is just one possible output of the E-M algorithm for this data; meanwhile, note the high probabilities of mastery under the ind-DINA model. Common sense dictates that something is out of place when the analysis states that students with a score of zero cannot subtract numerators, but can do everything else, from finding a common denominator to borrowing to reducing to simplest form.
\begin{table}[!ht] \begin{center} \caption{Marginal posterior probabilities of mastery given the zero response vector, $p(\alpha_k=1|\mbox{$\mathbf x$}=\mbox{$\mathbf 0$})$} \begin{tabular}{lcccccccc} \toprule &\multicolumn{8}{c}{k}\\ \cmidrule(l){2-9} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \midrule DINA & 0.50 & 0.08 & 0.50 & 0.52 & 0.53 & 0.41 & 0.00 & 0.59\\ HO-DINA & 0.00 & 0.13 & 0.31 & 0.05 & 0.02 & 0.30 & 0.00 & 0.25\\ RHO-DINA & 0.02 & 0.13 & 0.12 & 0.05 & 0.02 & 0.25 & 0.00 & 0.18\\ ind-DINA & 0.74 & 0.86 & 0.96 & 0.86 & 0.75 & 0.98 & 0.00 & 0.94\\ \bottomrule \end{tabular} \label{tab:frac_post0} \end{center} \end{table} \begin{table}[!ht] \centering \caption{Multiple-member equivalence classes, along with their class sizes, maximum likelihood probabilities, and marginal identifiability vectors.} \begin{tabular}{ccccccc}\toprule &&\multicolumn{4}{c} {$\nu_{[\boldsymbol{\alpha}]}$}&\\ \cmidrule(lr){3-6} $[\boldsymbol{\alpha}]$ & Size & DINA & HO-DINA & RHO-DINA & ind-DINA &$\delta_{[\boldsymbol{\alpha}]}$\\ \midrule $[00000000]$ & 64 & 0.15 & 0.12 & 0.12 & 0.02 & 01000010\\ $[01000000]$ & 64 & 0.04 & 0.06 & 0.06 & 0.31 & 01000010\\ $[00000010]$ & 8 & 0.01 & 0.02 & 0.03 & 0.00 & 11010011\\ $[10000010]$ & 8 & 0.00 & 0.00 & 0.00 & 0.00 & 11010011\\ $[00000011]$ & 8 & 0.02 & 0.03 & 0.02 & 0.00 & 11010011\\ $[10000011]$ & 8 & 0.00 & 0.00 & 0.00 & 0.00 & 11010011\\ $[01000010]$ & 4 & 0.03 & 0.04 & 0.04 & 0.00 & 11011011\\ $[01000011]$ & 4 & 0.11 & 0.09 & 0.08 & 0.00 & 11011011\\ $[11000010]$ & 4 & 0.00 & 0.00 & 0.00 & 0.00 & 11011011\\ $[11000011]$ & 4 & 0.01 & 0.01 & 0.02 & 0.01 & 11011011\\ $[00010010]$ & 4 & 0.00 & 0.00 & 0.00 & 0.00 & 11010111\\ $[10010010]$ & 4 & 0.00 & 0.00 & 0.00 & 0.00 & 11010111\\ $[00010011]$ & 4 & 0.00 & 0.00 & 0.00 & 0.00 & 11010111\\ $[10010011]$ & 4 & 0.00 & 0.00 & 0.00 & 0.00 & 11010111\\ $[00010110]$ & 4 & 0.02 & 0.00 & 0.00 & 0.00 & 11010111\\ $[10010110]$ & 4 & 0.00 & 0.00 & 0.00 & 0.00 & 11010111\\ $[00010111]$ & 4 & 0.00 & 0.01 & 0.01 & 0.01 & 11010111\\ $[10010111]$ & 4 & 0.00 & 0.00 & 0.00 & 0.03 & 11010111\\ $[01010010]$ & 2 & 0.00 & 0.00 & 0.00 & 0.00 & 11011111\\ $[11010010]$ & 2 & 0.00 & 0.00 & 0.00 & 0.00 & 11011111\\ $[01010011]$ & 2 & 0.01 & 0.01 & 0.00 & 0.00 & 11011111\\ $[11010011]$ & 2 & 0.00 & 0.00 & 0.00 & 0.00 & 11011111\\ $[01010110]$ & 2 & 0.00 & 0.01 & 0.01 & 0.00 & 11011111\\ $[11010110]$ & 2 & 0.00 & 0.00 & 0.00 & 0.01 & 11011111\\ $[01010111]$ & 2 & 0.05 & 0.06 & 0.06 & 0.03 & 11011111\\ $[11010111]$ & 2 & 0.06 & 0.06 & 0.06 & 0.09 & 11011111\\ \bottomrule \end{tabular} \label{table:fracParts} \end{table} With eight attributes in the Q-matrix, there are a total of 256 possible attribute profiles. They can be divided into just 58 different equivalence classes by the partitioning algorithm, 32 of them containing a single identifiable element. The 26 multiple-profile equivalence classes are listed in Table~\ref{table:fracParts}, which also displays their class sizes, maximum likelihood probabilities, and marginal identifiability vectors. Within these multiple-profile equivalence classes, Attributes 2 and 7 are always marginally identifiable, while Attribute 3 is never so; this is natural considering our previous observations about the Q-matrix. Profiles within the largest classes contain many zeroes, since under the DINA model non-identifiability affects a particular attribute only for respondents who do not possess other attributes used in combination with that attribute.
Also note that the ind-DINA shows signs of model misspecification, since its estimates $\hat\nu_{[\boldsymbol{\alpha}]}$ deviate strongly from the estimates derived from the other models. Table~\ref{table:fracEval} shows the estimated marginal identifiability rates, $\hat\boldsymbol{\zeta}$. At the low end, $\hat\zeta_3 = 0.48$, bringing into question the ability of this assessment to measure mastery of Attribute 3. Attribute 6 does only slightly better, with $\hat\zeta_6 = 0.64$. Note that Attribute 6 is only utilized in Items 1 and 18; in both cases it appears in conjunction with at least two other attributes. \begin{table}[!ht] \begin{center} \caption{Estimated marginal identifiability rates $\hat\zeta_k$.} \begin{tabular}{lcccccccc} \toprule &\multicolumn{8}{c}{k}\\ \cmidrule(l){2-9} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \midrule DINA & 0.81 & 1.00 & 0.48 & 0.81 & 0.75 & 0.64 & 1.00 & 0.81\\ HO-DINA & 0.82 & 1.00 & 0.47 & 0.82 & 0.75 & 0.64 & 1.00 & 0.82\\ RHO-DINA & 0.82 & 1.00 & 0.48 & 0.82 & 0.75 & 0.63 & 1.00 & 0.82\\ ind-DINA & 0.66 & 1.00 & 0.47 & 0.66 & 0.62 & 0.64 & 1.00 & 0.66\\ \bottomrule \end{tabular} \label{table:fracEval} \end{center} \end{table} Finally, it is useful to consider how the reduction of the parameter space for the DINA model, based on identifiability, affects model selection by AIC and BIC. In particular, the AIC no longer prefers the ind-DINA to the full DINA model once the reduced parameter space has been applied. The BIC, which will generally choose sparser models than the AIC, still reports lower values for the ind-DINA, but the comparison is much tighter. \begin{table}[!ht] \centering \caption{AIC and BIC for the DINA, RHO-DINA, and ind-DINA.} \begin{tabular}{lccc}\toprule & parameters & AIC & BIC\\ \midrule DINA & 296 & 9397.0 & 10665.2 \\ NIAD-DINA & \ 98 & 9001.0 & \ 9420.9 \\ HO-DINA & \ 56 & 8959.7 & \ 9199.6 \\ RHO-DINA & \ 49 & 8961.9 & \ 9171.9 \\ ind-DINA & \ 48 & 9208.3 & \ 9413.9 \\ \bottomrule \end{tabular} \label{tab:fracAIC} \end{table} \section{Discussion} In general, it is difficult to obtain a complete Q-matrix. Oftentimes, due to the demands of practicality, assessments must involve items that require a combination of skills. Using the tools discussed in this paper, it is possible to determine the extent to which nonidentifiability affects classification and estimation under the DINA model. Marginal identifiability rates $\boldsymbol{\zeta}$, which can be estimated consistently, provide an overall measure of the extent of non-identifiability; meanwhile, NIAD-DINA classification takes marginal identifiability into consideration in order to control classification errors that are otherwise quite sensitive to the prior information. The results here suggest that when designing items to test a particular attribute, if using a combination of skills is unavoidable, it is best to combine that attribute with basic attributes mastered by a large proportion of the population. After all, it is only impossible to recover information about a particular attribute when the respondent does not possess one or more of the other attributes tested by the same item. Many of the methods currently in use may resolve identifiability issues by enforcing restrictions on the attribute profile space. Variants of the DINA such as the ind-DINA, HO-DINA, and RHO-DINA accomplish this by specifying a structure and a prior on the probabilities $p(\boldsymbol{\alpha})=\nu_{\boldsymbol{\alpha}}$.
Although this may eliminate non-identifiability and create a unique global maximum for the likelihood, model misspecification becomes a risk. Thus, careful comparison of these variants to the NIAD-DINA is important. \vspace{\fill}\pagebreak
\section{Introduction} One of the most important aspects in the study of correlated systems is the effect of spatial inhomogeneity \cite{belitz}. There have recently been several examples of such systems where these inhomogeneities can occur intrinsically via quenched disorder in the system, or can occur spontaneously. For example, impurities can be introduced into a superconductor by irradiation or chemical substitution. On the other hand, holes in the cuprate superconductors or magnetic or charge ordered domains in manganites spontaneously arrange in geometric patterns like stripes or a checkerboard at certain fillings \cite{mcelroy,mook,hanaguri,tran,renner}. Using scanning tunneling microscopy, evidence for electronic inhomogeneity has been reported in the high-$T_c$ superconductor $Bi_2Sr_2CaCu_2O_{8+x}$ by McElroy et al. \cite{mcelroy}. Most of the high-$T_c$ materials are ceramic in nature and inhomogeneities are present in even the best prepared samples. This inhomogeneity is manifested as spatial variations in both the local density of states and the superconducting energy gap \cite{millis}. There have been a number of theoretical attempts to understand the effect of quenched disorder in superconductors \cite{dagotto, ghosal}. In the context of repulsive models, both for weak-coupling models and their strong-coupling counterparts, considerable numerical work has been done with inhomogeneities \cite{machida, kato}. In such cases stripes and checkerboard patterns have been reported \cite{white, voita}; the presence of d-wave superconductivity, however, is less convincing. Enhancement of $T_c$ due to inhomogeneities in the weak-coupling regime is demonstrated by Martin et al. \cite{martin}. Aryanpour et al. \cite{dagotto} studied an s-wave superconductor with quenched disorder starting from a negative-U Hubbard model using a mean-field theory. The disorder enters in their model through a random choice of two values of the attractive interaction (bimodal distribution) at different sites. Quite interestingly, it was shown that below a certain value of the average attraction, the zero temperature superconducting gap is larger than that of the homogeneous superconductor with the same (uniform) attraction. Ghosal et al. \cite{ghosal} use an s-wave superconductor and look for the effect of disorder using a mean-field treatment. Both these calculations observe strong effects of disorder on the superconducting order parameters. In certain specific cases, lower-dimensional orders, like stripe or checkerboard order \cite{tran,emery}, have been found \cite{dagotto} in the numerical calculations. One possible application is to manipulate the degree of superconductivity and the transition temperature through a varying degree of inhomogeneity. Another possibility is the study of the coupling of two adjacent systems, where superconductivity in either could be controlled by the degree of disorder in them. Such conditions are realized in many tunneling devices. We consider a similar situation with the local attractive interaction taken to be completely random, which, we believe, is more realistic than using a bimodal distribution. We take a system of two such disordered, superconducting blocks connected by a channel. This model conforms to various actual situations in tunnel junctions and its study is quite interesting. We look into the problem of disordered superconductivity and compare the results with homogeneous systems.
Using mean-field exact diagonalization techniques, we work out the case of random disorder (as opposed to forcing specific geometrical patterns like stripes \cite{dagotto} from the outset) on a two-dimensional square lattice, and also for a system of two such blocks with a channel connecting them. We compute various microscopic parameters such as the local pairing amplitude $\Delta_i$, the average electron density $n_i$, the local chemical potential $\mu_i$ and the density of states, for different fillings and various average interactions. We also look for any emergent one-dimensional pattern in real space. The effect of coupling two such systems is studied in detail. \section{Model and calculation} Our starting point is the attractive Hubbard model given by \begin{equation} H\ =\ -t \sum_{<ij> \sigma} \big( c^{\dag}_{i \sigma} c_{j \sigma}\ +\ c^{\dag}_{j \sigma} c_{i \sigma} \big)\ -\ \mu \sum_{i \sigma} c^{\dag}_{i \sigma} c_{i \sigma}\ -\ \sum_{i} |U_{i}|\hat{n}_{i \uparrow} \hat{n}_{i \downarrow} \end{equation} \noindent where $t$ is the hopping amplitude, $\mu$ is the chemical potential and $U_i$ is the local attractive interaction between fermions of opposite spins residing on the same lattice site $i$. $c^\dag_{i\sigma}$ and $c_{i\sigma}$ are creation and destruction operators for an electron with spin $\sigma$ on site $i$; $\hat{n}_{i\uparrow}$ and $\hat{n}_{i\downarrow}$ are the number operators at site $i$ for spin up and down. Superconductivity and other charge instabilities in the homogeneous case have been studied in great detail \cite{at1,at2} in the past. At half filling the superconducting state is degenerate with a charge density wave state, the two being connected by a pseudospin rotation. In order to study the effect of disorder, an inhomogeneous calculation is necessary. We use the Bogoliubov-de Gennes (BdG) mean-field approximation and replace the local electron correlation by the local superconducting pairing amplitude $\Delta_i = \langle c_{i\uparrow} c_{i\downarrow} \rangle$ at site $i$ (only s-wave pairing is considered here) and the local densities $\langle n_{i\sigma}\rangle = \langle c^\dag_{i\sigma} c_{i\sigma}\rangle$. Assuming $\langle n_{i \uparrow}\rangle =\langle n_{i \downarrow}\rangle =\langle n_i\rangle /2$ we get \begin{equation} {\cal{H}}_{eff}=-t\sum_{<ij>\sigma}\big(c^{\dag}_{i\sigma}c_{j\sigma} + c^{\dag}_{j\sigma}c_{i\sigma}\big) - \sum_{i\sigma}{\tilde\mu_i}\, c^{\dag}_{i\sigma}c_{i\sigma} - \sum_{i} \big|U_{i}\big|\big[\Delta_{i}c^{\dag}_{i\uparrow}c^{\dag}_{i\downarrow}+\Delta^{*}_{i}c_{i\downarrow}c_{i\uparrow}\big], \end{equation} \noindent where $\tilde\mu_i=\mu+ |U_i|\langle n_i\rangle /2$ is a site-dependent Hartree shift with $\langle n_i\rangle =\sum_{\sigma} \langle n_{i\sigma}\rangle$. All energies are expressed in units of $t$, i.e., $t = 1$. We choose an initial value of the chemical potential $\mu$ and use different random configurations of $U_i$ to realize the inhomogeneous superconductor. We then compare its tendency towards superconductivity with that of the homogeneous system, i.e., $U_i=U$ for all $i$. For the inhomogeneous case we take $U_i$ to be uniformly random between two specified values. In all these cases we diagonalize the Hamiltonian self-consistently: at every iteration we compute the averages at every site, recalculate the Hamiltonian, and diagonalize again. When the iterations converge we obtain the self-consistent result and then compute the values of the physical quantities.
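As an illustration of this self-consistency cycle, the following minimal sketch (in Python) diagonalizes the BdG Hamiltonian in Nambu space on a small periodic lattice and iterates the gap and density equations to convergence at $T=0$. The lattice size, initial guesses, tolerance and random seed are assumptions chosen for the example, and sign conventions for the pairing field are glossed over; it is a sketch of the scheme, not the code used for the results below.
\begin{verbatim}
# Minimal self-consistent BdG sketch for the disordered attractive
# Hubbard model (T = 0, s-wave pairing).  Lattice size, initial guesses,
# tolerance and the random seed are illustrative assumptions.
import numpy as np

Ls = 12                          # linear lattice size (assumed)
N = Ls * Ls
t, mu = 1.0, 0.0                 # energies in units of t
rng = np.random.default_rng(0)
U = rng.uniform(0.0, 4.0, N)     # |U_i| uniformly random in [0, 4]

# nearest-neighbour hopping with periodic boundary conditions
K = np.zeros((N, N))
for x in range(Ls):
    for y in range(Ls):
        i = x * Ls + y
        for j in (((x + 1) % Ls) * Ls + y, x * Ls + (y + 1) % Ls):
            K[i, j] = K[j, i] = -t

delta = 0.1 * np.ones(N)         # initial guess for Delta_i
n_avg = np.ones(N)               # initial guess for <n_i>

for sweep in range(500):
    mu_t = mu + U * n_avg / 2.0               # site-dependent Hartree shift
    h = K - np.diag(mu_t)
    D = np.diag(U * delta)                    # pairing field |U_i| Delta_i
    H = np.block([[h, D], [D, -h]])           # BdG matrix in Nambu space
    E, W = np.linalg.eigh(H)
    u, v = W[:N, E > 0], W[N:, E > 0]         # positive-energy quasiparticles
    delta_new = np.einsum('in,in->i', u, v)   # Delta_i = sum_n u_n(i) v_n(i)
    n_new = 2.0 * np.einsum('in,in->i', v, v) # <n_i> = 2 sum_n |v_n(i)|^2
    if np.max(np.abs(delta_new - delta)) < 1e-6:
        break
    delta, n_avg = delta_new, n_new

print(f"<Delta> = {delta.mean():.4f}, <n> = {n_avg.mean():.4f}")
\end{verbatim}
In practice a linear mixing of successive $\Delta_i$ iterates is often needed for stable convergence at weak coupling, and $\mu$ must be retuned between runs to reach the desired filling.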
\section{Results and discussion} We take a square lattice of finite size and set up the BdG Hamiltonian. On reaching self-consistency, the average electron density, i.e., the average filling, is calculated. This is repeated for many initial values of $\mu$, and the results are inverted to find the necessary physical quantities at a fixed $\langle n\rangle$, as is customary in the grand canonical ensemble (the local inhomogeneity and self-consistent interactions forbid fixing the average electron density directly). We note that ideally one would expect only topological order in two dimensions; however, this does not apply to mean-field order. The variation of $\Delta_i$ with $U_i$ is shown in Fig. 1. As expected, the sites with large attraction have a larger superconducting amplitude. It is interesting to note that although there are sites with strictly zero or very small $|U_i|$, there are no sites with zero $\Delta_i$. This is not unexpected, as a large local fluctuation of the order parameter is quite unfavourable and energetically costly in terms of the loss of interaction energy. Such findings have also been reported earlier \cite{dagotto}, but we do not find any value of $\langle U \rangle$ for which the disordered system has a larger average gap than the uniform case. In the case of a bimodal distribution of $U_i$, though, the proximity effect seems to be much stronger and the disordered system has a larger gap than the uniform one below a certain $\langle U \rangle$. We also observe that, unless a specific geometry of the disorder is quenched into the lattice as in Ref. \cite{dagotto}, a stripe-like state does not organize self-consistently. We believe that in a situation where the disorder is annealed and randomly distributed, such highly anisotropic states are quite unlikely in the present model. \begin{figure}[ht] \begin{center} \mbox{\epsfig{file=fig1a.eps,width=1.75in,angle=-90}} \mbox{\epsfig{file=fig1b.eps,width=1.75in,angle=-90}} \mbox{\epsfig{file=fig1c.eps,width=1.75in,angle=-90}} \mbox{\epsfig{file=fig1dn.eps,width=1.5in,angle=-90}} \mbox{\epsfig{file=fig1e.eps,width=1.75in,angle=-90}} \end{center} \caption{(Colour online) (a) Variation of the local pairing amplitude $\Delta_i$ with the interaction $U_i$, distributed uniformly in the range 0--4. (b) Comparison of the variation of $\langle\Delta\rangle$ with $\langle U\rangle$ between the uniform and disordered cases at $\langle n\rangle=1$, $\langle n\rangle=0.8$ and $\langle n\rangle=0.6$. (c) Variation of $\langle\Delta_{uniform}\rangle/\langle\Delta_{disorder}\rangle$ with $\langle U\rangle$. (d) Variation of the average local pairing amplitude $\langle\Delta\rangle$ with $\langle n\rangle$. (e) Superconducting regions for $U=0$--$2$. The red islands represent superconducting regions with large amplitude (see text).} \end{figure} Fig. 1a shows that $\Delta_i$ increases with $U_i$ and saturates at large $U_i$; there is a broad distribution of $\Delta_i$ for lower values of $U_i$. Fluctuations in the pairing amplitude $\Delta_i$ are indeed larger in the weak-coupling region. The variation of the average $\langle\Delta\rangle= \frac{1}{N}\sum_i \Delta_i$ with the average $\langle U\rangle$ gives a good indication of their relationship and can be compared with the uniform case. The comparison in Fig. 1b shows an enhancement of the average superconducting pairing amplitude $\langle \Delta \rangle$ with $\langle U \rangle$ which, as expected, saturates at higher values of the attraction.
The value of the average pairing amplitude is higher for the uniform case than for the disordered case for all values of $\langle n\rangle=1.0$, $0.8$ and $0.6$, contrary to the bimodal distribution. Fig. 1c shows the ratio as a function of the average interaction; it rises sharply at low $U$ and always stays above one. This implies that random disorder inhibits long-range superconducting order. Fig. 1d shows the usual bell-shaped curve of $\langle\Delta\rangle$ versus $\langle n\rangle$ with a maximum at $\langle n\rangle=1$. In order to glean the real-space picture of the superconducting regions, we mark the sites where the value of $\Delta_i$ is greater than the corresponding value in the homogeneous case (Fig. 1e). Since all sites have non-zero $\Delta_i$, this prescription allows us to locate preferred patterns, if any, in real space. We do not see any such pattern; the regions of strong $\Delta_i$ are quite randomly distributed, forming patches. The formation of islands in turn reduces the overall observed $T_c$ (which is determined by the phase coupling across such regions) in a real system, and the corresponding superfluid stiffness. Plotting the frequency distribution of $\Delta_i$ gives a good account of the effects of disorder. The peak in Fig. 2a is clearly above the mean value, and with higher disorder the peak shifts towards the right. We further calculate the pair correlation function (Fig. 2b), defined as $c(r_i-r_j)=\langle\Delta_i\, \Delta_j\rangle$. Disorder averaging is done as usual to restore translational invariance. The correlation function drops sharply at short distances but saturates at large distances, indicating a true (mean-field) long-range order. The correlation functions for four ranges of $U$ are shown in Fig. 2b; they all merge onto a single curve (except for small fluctuations due to finite size) when normalized. \begin{figure}[ht] \begin{center} \mbox{\epsfig{file=fig2a.eps,width=1.75in,angle=-90}} \mbox{\epsfig{file=fig2b.eps,width=1.75in,angle=-90}} \end{center} \caption{(a) Frequency distribution P($\Delta$) of $\Delta$ for $U=0$--$2$. (b) Plot of the correlation function for four $U$ ranges: 0--2, 0--4, 0--6 and 0--8.} \end{figure} \begin{figure}[ht] \begin{center} \mbox{\epsfig{file=fig3a.eps,width=2in,angle=-90}} \end{center} \caption{Density of states for $U=0$--$2$ and $U=0$--$4$; the two sets of curves are offset vertically for clarity.} \end{figure} The presence of a superconducting gap is another indication of true long-range order \cite{ghosal}. It is, therefore, very important to look at the density of states (DOS) of the system. As mentioned above, there are no sites with zero pairing amplitude. This is reflected in the DOS, where the gap at zero energy is clearly seen in Fig. 3, while the very sharp peaks (divergences in a homogeneous superconductor) are broadened into two symmetrical regions with a broad distribution of states at higher and lower energies. The persistence of the energy gap even in highly disordered systems clearly indicates that s-wave superconductivity is possible even in the presence of large disorder. \begin{figure}[ht] \begin{center} \epsfig{file=fig4a.eps,width=1.75in,angle=-90} \epsfig{file=fig4b.eps,width=1.75in,angle=-90} \epsfig{file=fig4c.eps,width=1.75in,angle=-90} \epsfig{file=fig4d.eps,width=1.75in,angle=-90} \epsfig{file=fig4e.eps,width=1.9in,angle=-90} \epsfig{file=fig4f.eps,width=1.9in,angle=90} \end{center} \caption{(a) Variation of $\langle\Delta\rangle$ with $\langle U\rangle$ for the individual blocks of the coupled system, for both the disordered and the uniform case, compared with the single-block system.
(b) $\langle \Delta\rangle_2$ as a function of $\langle U\rangle_2$ for $\langle n\rangle_2=1$, compared with superconductivity in a single system. (c) $\langle \Delta\rangle$ as a function of $\langle U\rangle$ with $\langle n\rangle =1$ for the whole system. (d) Variation of $\Gamma= \langle \Delta\rangle_2 - \langle \Delta\rangle_1$ with $q= \langle U\rangle_2 -\langle U\rangle_1$ at $\langle n\rangle =1$. Here the legend $l$ refers to the length of the channel and $y$ to its width. (e) Variation of $\nu= \langle n\rangle_2 - \langle n\rangle_1$ with $q$ while $\langle n\rangle =1$. (f) Two superconducting blocks connected by a channel.} \end{figure} To understand the effect of coupling two disordered superconductors, we take two such superconductors, join them via a narrow channel (Fig. 4f), and observe the effects due to their proximity to each other. We find that, contrary to common expectation, the width and length of the channel do not have any significant effect on the superconductivity in either system (Fig. 4d,e). More interestingly, results from the comparative variation of $\langle \Delta\rangle$, $\langle U\rangle$ and $\langle n\rangle$ of the two blocks (Fig. 4a-e) do not show any clear influence of the channel. For example, we observed the variation of $\langle\Delta\rangle$ of one superconductor with $\langle U\rangle$ in both the uniform and disordered cases and compared it with the corresponding results for the single-block system obtained earlier. The results are symmetric with respect to an interchange of systems 1 and 2. It is clearly seen from Fig. 4a that the average superconducting pairing amplitudes in the blocks of the coupled system follow identically the pattern of a single-block system in both the disordered and uniform regimes. The channel was kept homogeneous. The results do not change if the channel is maintained homogeneous but non-superconducting ($U_i=0$ for all $i$ in the channel). This unexpected behaviour, we believe, is due to the formation of the islands of low pairing amplitude discussed earlier (Fig. 1e). These regions localize the electrons within each block, since it would be energetically unfavourable for the electrons to percolate through the channel. This inhibits correlations between the blocks, effectively making them behave as independent systems. The average superconducting pairing amplitude in the second system increases at the expense of $\langle \Delta_1\rangle$ (Fig. 4b), since the enhanced attraction in system 2 provides extra stabilization energy in that region. Note that the variation of $\langle\Delta_2\rangle$ with $\langle U_2\rangle$ is also unaffected (Fig. 4b) by the presence of system 1 and is very similar to that for a single system. The variation of the overall average pairing amplitude $\langle \Delta_1 + \Delta_2 \rangle$ with the total average interaction at fixed overall density is shown in Fig. 4c. The combined system behaves like an isolated single system and, as expected, shows the typical rise and saturation behaviour seen in a single system. Fig. 4d demonstrates clearly how the average order parameters in the two systems are correlated with the corresponding interactions: the larger the attraction, the larger the average $\langle \Delta \rangle$. As $\langle U \rangle$ increases in either system, the density of electrons there tends to increase because of the extra stabilization energy available in that region. Fig. 4e shows this tendency, with $\langle n_i \rangle$ in each superconductor increasing rapidly with the average attractive interaction there.
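For concreteness, a hypothetical sketch of how the two-block-plus-channel geometry of Fig. 4f can be encoded is shown below; the block size and the channel length $l$ and width $y$ are illustrative values, and the resulting $U_i$ array is what would be fed into a BdG loop like the one sketched earlier.
\begin{verbatim}
# Hypothetical encoding of the two-block-plus-channel geometry of Fig. 4f.
# Block size, channel length l and width y are illustrative assumptions.
import numpy as np

Lb, l, y = 16, 4, 2                  # block size, channel length and width
rng = np.random.default_rng(1)

nx, ny = 2 * Lb + l, Lb              # bounding box of the whole device
active = np.zeros((nx, ny), dtype=bool)
active[:Lb, :] = True                # block 1
active[Lb + l:, :] = True            # block 2
c0 = (Lb - y) // 2
active[Lb:Lb + l, c0:c0 + y] = True  # connecting channel

# random attraction in the blocks ...
U = np.where(active, rng.uniform(0.0, 2.0, (nx, ny)), 0.0)
U[Lb:Lb + l, :] = 0.0                # ... homogeneous, non-superconducting
                                     # channel (U_i = 0 for all i in it)

sites = np.argwhere(active)          # active sites to wire into the BdG loop
print(f"{len(sites)} active sites")
\end{verbatim}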
\section{Conclusion} In conclusion, we have studied inhomogeneous s-wave superconductors using the Bogoliubov-de Gennes mean-field theory. Superconductivity is suppressed below the homogeneous weak-coupling value due to disorder, though the proximity effect is strong, with a non-zero pairing amplitude at all sites (even those with $U_i=0$). The frequency distribution $p(\Delta)$ shows a peak towards higher values of $\Delta$. The gap in the density of states persists even in the highly disordered case, lending support to the existence of strongly disordered inhomogeneous superconductivity. When two such blocks of inhomogeneous superconductors are connected via a channel, there is no appreciable effect of one on the other: the individual blocks are not affected significantly by the block on the other side of the channel. We argue that this is because the fluctuating local order parameters remain pinned to their individual values, thereby preventing significant correlation between the blocks. \section{Acknowledgment} BC acknowledges a junior research fellowship from the Council of Scientific and Industrial Research, India. \newpage
\section{Introduction} It has long been believed, and pre-explosion progenitor detections have proven in nearly a dozen cases, that type Ib/c and II supernovae are produced in the collapse of massive stars. Although we have yet to observe a pre-explosion gamma-ray burst (GRB) progenitor, the evidence that many long-duration GRBs are also produced via massive star collapse has grown since their discovery in 1973 \citep[for reviews, see][]{woosleyBloom06,fryerEA07b}. The engine at the heart of these long-duration GRBs is believed to be powered either by a rapidly-accreting black hole or a rapidly-spinning magnetar \citep{woosley93,woosleyBloom06,barkovKomissarov11}, but the exact progenitor of these engines is still unknown. More than a dozen scenarios have been proposed in which a massive star or tightly-coupled binary system can collapse to form such engines \citep{fryerWoosley98,fryerEA99b,fryerEA07b,zhangFryer01}. The simplest progenitor is the classic collapsar, a massive star that sheds its hydrogen envelope by strong winds and violent mass eruptions (akin to luminous blue variable outbursts) prior to the collapse of its core \citep{woosley93}. Additionally, a host of massive star models exist, a subset of which invoke binary merger events. Like the violent mass ejection in single massive stars, these merger events eject shells of material into the immediate surroundings before the GRB outburst. Both single-star and binary mass ejections can occur just a few thousand years prior to stellar collapse. In the case of the helium merger model, we expect the merger-driven mass ejection to occur less than a few years before the launch of the GRB jet. The jet must plow through this shell as it is producing the gamma-ray emission we observe. The structure of the circumburst medium (CSM) should imprint signatures on the afterglow that identify the mode of collapse \citep{meszaros02, woosley11}, providing a mechanism that can be used to better understand the progenitors of these cosmic explosions. Analytical models of relativistic jets in both uniform circumburst densities and free-streaming wind profiles have yielded light curves that are in reasonable agreement with observations \citep{bergerEA00, yostEA03, curranEA11, priceEA02}. Recently, collisions of the jet with more complicated structures have been examined, such as wind-termination shocks \citep{ramirez-RuizEA01, ramirez-RuizEA05, daiLu02, nakarGranot07}, shocks due to collisions between stellar winds from nearby stars \citep{mimicaGiannios11}, clumps \citep{ramirez-RuizEA05}, and magnetic shocks \citep{yostEA03}. \citet{ramirez-RuizEA01, ramirez-RuizEA05} and \citet{daiLu02} found that sharp features in circumburst densities due to clumps and shocks create discernible features in GRB light curves, although \citet{nakarGranot07}, who consider the dynamics of the reverse shock and assume that the jet remains relativistic after encountering a jump in density, do not. \citet{mimicaGiannios11} found gradual shallowing in the light curve when the jet passes through a shock formed when stellar winds from the progenitor and a neighbor collide. Flares are often observed in GRB light curves within the first few thousand seconds, notably in GRB 081029 \citep{nardiniEA11}, GRB 071112C \citep{huangEA12a}, GRB 050502B \citep{falconeEA06}, GRB 060607A \citep{ziaeepourEA08}, and GRB 050421 \citep{godetEA06}.
It is generally accepted that these flares are a product of late-time injections of energy into the system by the GRB's central engine, but this is not the only possible scenario. When the jet from a GRB encounters a shell produced by a stellar progenitor, it experiences an abrupt increase in the medium density. GRB light curves are much more sensitive to a decrease in medium density than to an increase \citep{yostEA03}, meaning that an abrupt increase in density of order a few will leave almost no discernible imprint on the observed light curve. In contrast, our hydrodynamical models can produce sudden enhancements in the density of five orders of magnitude or more at the trailing edge of the shell (Fig.\ \ref{figure1_windBubbles}), which is sufficient to produce bright flares despite the weak dependence of the light curve on an increase in density. Winds, outbursts and ionizing UV radiation from GRB progenitors all disperse the molecular cloud cores that created them and create far more complex ambient morphologies for the jet than those considered in afterglow studies to date \citep[but note][]{fryerEA06,whalenEA08b}. In this paper, we model such environments for a variety of GRB scenarios with ZEUS-MP. Our 1D models span a wide variety of winds and outbursts with complex gas chemistry and cooling that capture the true structure of the circumburst medium. We have also developed a semi-analytical approach, based on previous work by \citet{panaitescuKumar00}, \citet{huangEA99}, and \citet{peer12}, for computing GRB light curves in any general density profile, not just the uniform media, winds and simple density jumps considered in previous studies. We apply this new method to compute light curves for relativistic jets propagating through circumburst shells, and examine the imprint of these shells on the light curves in order to determine if they constrain the mode of collapse. In $\S \, 2$ we discuss the ZEUS-MP code, how it is used to simulate the environments of long-duration GRBs, and our grid of wind models. We review the results of our shell ejection calculations in $\S \, 3$. In $\S \, 4$ we describe how our new analytical models of GRB jets are applied to hydrodynamical profiles from these simulations to compute light curves for a variety of energies. We calculate GRB light curves in our grid of shell profiles, which correspond to a variety of collapse scenarios, and determine if specific light curve features constrain the nature of the progenitor. In $\S \, 5$ we conclude. \section{Numerical Method} \subsection{ZEUS-MP} ZEUS-MP is a massively-parallel astrophysical hydrodynamics code that solves nonequilibrium H and He gas chemistry and photon-conserving ionizing UV radiation transport together with Eulerian fluid dynamics in a self-consistent manner \citep{whalenNorman06, whalenNorman08b, whalenNorman08a}.
The hydrodynamics equations are \vspace{0.1in} \begin{eqnarray} \frac{\partial \rho}{\partial t} & = & - \nabla \: \cdotp \; (\rho {\bf v}) \\ \frac{\partial \rho v_{i}}{\partial t} & = & - \nabla \: \cdotp \; (\rho v_{i} {\bf v}) \: - \: \nabla p \: - \: \rho \nabla \Phi \: - \: \nabla \cdotp {\bf Q} \\ \frac{\partial e}{\partial t} & = & - \nabla \: \cdotp \; (e {\bf v}) \: - \: p\nabla \: \cdotp \: {\bf v} \: - \: \bf{Q} : \nabla {\bf v}, \end{eqnarray} \vspace{0.05in} \noindent where $\rho$, $e$, and the $v_{i}$ are the gas density, internal energy density, and velocity of each zone, and $p = (\gamma-1)\, e$ and {\bf{Q}} are the gas pressure and the von Neumann-Richtmyer artificial viscosity tensor. We evolve mass fractions for H, H$^{+}$, He, He$^{+}$, He$^{2+}$, H$^{-}$, H$^{+}_{2}$, H$_{2}$, and e$^{-}$ with nine additional continuity equations (the species are assumed to have the same velocities) and the nonequilibrium rate equations of \citet{anninosEA97} \vspace{0.05in} \begin{equation} \frac{\partial \rho_{i}}{\partial t} = - \nabla \: \cdotp \; (\rho_{i} {\bf v}) + \sum_{j}\sum_{k} {\beta}_{jk}(T){\rho}_{j}{\rho}_{k} + \sum_{j} {\kappa} _{j}{\rho}_{j}, \label{eqn:network}\vspace{0.05in} \end{equation} where ${\beta}_{jk}$ is the rate coefficient for the reaction between species $j$ and $k$ that creates (+) or destroys (-) species $i$, and the ${\kappa}_{j}$ are the radiative reaction rates. Microphysical cooling and heating are calculated with operator-split isochoric updates to the gas energy density that are performed every time the reaction network is solved: \vspace{0.05in} \begin{equation} {\dot{e}}_{gas} = \Gamma - \Lambda. \label{eqn: egas} \vspace{0.05in} \end{equation} Here, $\Gamma$ is the photoionization heating rate for all species over all photon energies and $\Lambda$ is the sum of all cooling rates. We include collisional excitation and ionization cooling by H and He, recombination cooling, H$_2$ cooling, and bremsstrahlung cooling, with our nonequilibrium reaction network providing the species mass fractions needed to accurately calculate these collisional cooling processes. We also calculate fine-structure cooling due to C, O, N, Si and Fe using the \citet{dalgarnoMcCray72} cooling curves, generalized to arbitrary elemental abundances. We exclude cooling by dust. Fluid flow, gas heating and cooling, and H and He chemistry can occur on highly disparate timescales whose relative magnitudes can vary widely throughout the course of a calculation. The many chemical reaction timescales can be consolidated into a single chemistry time step defined as \begin{equation} t_{chem} = 0.1 \, \displaystyle\frac{n_{e}}{{\dot{n}}_{e}}, \label{eqn: t_chem} \end{equation} which is formulated to ensure that the fastest reaction operating at any place or time on the grid determines the maximum time by which the reaction network may be accurately advanced. The timescale on which the gas heats or cools is given by \vspace{0.1in} \begin{equation} t_{h/c} = 0.1 \, \displaystyle\frac{e_{gas}}{{\dot{e}}_{gas}}. \label{eqn: t_hc} \end{equation} To evolve each physical process on its respective timescale without restricting the entire algorithm to the shortest one, we subcycle the reaction network and energy equation over the time step on which we update the hydrodynamics equations.
We first compute the minimum $t_{chem}$ and $t_{h/c}$ for the entire grid and then perform consecutive updates of species mass fractions and gas energy densities over the smaller of these two times until the lesser of $t_{h/c}$ and $t_{CFL}$ is reached, where $t_{CFL}$ is the Courant time. At this point a full update of the hydrodynamics equations is performed and the cycle repeats. The prefactor of 0.1 in equations \ref{eqn: t_chem} and \ref{eqn: t_hc} guarantees that mass fractions never change by more than 10\% when the reaction network is solved and that gas energies do not change by more than 10\% over a time step. This prevents catastrophic runaway cooling in gas at high densities and metallicities by either H$_2$ or fine-structure cooling, such as in the dense shells swept up by strong winds or mass ejections. \subsection{Problem Setup} To model mass ejections in ZEUS-MP, we treat stellar winds and outbursts as time-dependent inflows at the inner boundary of a 1D spherical grid with 32,000 zones. The gas is assigned H and He mass fractions of 0.76 and 0.24, respectively, and a metallicity $Z =$ 0.1 $Z_{\odot}$. The mesh extends from 10$^{-4}$ pc to 0.3 pc with outflow conditions on the outer boundary. The inflow is imposed at the inner boundary in the form of a time-varying density and velocity: \vspace{0.05in} \begin{equation} \rho \, = \, \frac{\dot{m}}{4 \pi {r_{ib}}^2 v_{w}}, \vspace{0.05in} \end{equation} where $r_{ib}$ is the radius of the inner boundary and $v_{w}$ is the wind velocity. Outbursts are modeled by increasing $\dot{m}$ and lowering $v_{w}$. Because stellar winds clear out gas from the vicinity of the star prior to any outbursts, we initialize the grid with a free-streaming density and velocity profile \vspace{0.05in} \begin{equation} \rho(r) \, = \, \frac{\dot{m}}{4 \pi r^2 v_{w}}, \vspace{0.05in} \end{equation} where the wind velocity is assumed to be constant. The temperature of the initial profile is set to 100 K. We launch the outburst at the beginning of the simulation. The grid is domain decomposed into 8 tiles, with 4000 mesh zones per tile and one tile per processor. We neglect the effect of ionizing radiation from the star on the structure of the dense shell. This treatment is approximate, given that the progenitor illuminates the flow over its entire lifetime and that its luminosity evolves over this period. However, the heat deposited in the wind by photoionizations is small in comparison to its bulk kinetic energy and is unlikely to alter the properties of the flow in the proximity of the GRB. \subsection{Grid of Shell Models} For collapsars and He mergers we consider three mass-loss rates $\dot{m}_w = 10^{-6}, 10^{-5}$, and $10^{-4}$ M$_{\odot}$/yr and outbursts $\dot{m}_b = 10^{-2}$ M$_{\odot}$/yr lasting for 10 yr and 100 yr that correspond to total shell masses of 0.1 and 1.0 M$_{\odot}$, respectively. We take the velocities of the fast wind and slow shell to be 2000 km/s and 200 km/s, respectively, and in each model use the given $\dot{m}_w$ to initialize the density across the entire grid, assuming that the star sheds mass at the same rate before and after the outburst. In general, larger shell masses are expected for He mergers but both kinds of progenitors can exhibit light, moderate and heavy winds, so there is some degeneracy across our grid of models.
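One model from this grid, the $\dot{m}_w = 10^{-5}$ M$_{\odot}$/yr wind with a 100 yr, $\dot{m}_b = 10^{-2}$ M$_{\odot}$/yr outburst, can be encoded as in the following schematic sketch; the parameter values follow the text, but the interface to the actual ZEUS-MP boundary routines is simplified.
\begin{verbatim}
# Sketch of the inner-boundary wind/outburst prescription.  Parameter
# values follow the text; the coupling to ZEUS-MP itself is schematic.
import numpy as np

pc, yr, Msun = 3.086e18, 3.156e7, 1.989e33   # cgs conversions

mdot_w, v_w = 1e-5 * Msun / yr, 2000e5       # quiescent wind: 2000 km/s
mdot_b, v_b = 1e-2 * Msun / yr, 200e5        # outburst: 200 km/s for 100 yr
t_burst = 100 * yr

r = np.logspace(np.log10(1e-4 * pc), np.log10(0.3 * pc), 32000)
rho0 = mdot_w / (4.0 * np.pi * r**2 * v_w)   # initial free-streaming profile

def inner_boundary(t):
    """Density and velocity imposed at r_ib at time t (outburst at t = 0)."""
    mdot, v = (mdot_b, v_b) if t < t_burst else (mdot_w, v_w)
    return mdot / (4.0 * np.pi * r[0]**2 * v), v
\end{verbatim}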
In addition to these collapsar and He merger models, we also consider He-He mergers by simulating the loss of the common envelope with a single massive outburst $\dot{m}_b$ = 10 M$_{\odot}$/yr that lasts for one year and has a velocity of 200 km/s. \subsection{Wind Bubble Test} Before calculating circumburst environments for a GRB progenitor, we first model the bubble blown by its fast wind during the life of the star. This bubble is the primary circumstellar structure formed by the star and grows to radii of 20 - 30 pc before outbursts alter its interior on scales of a few tenths of a parsec late in the life of the star. Since the gamma-ray spectra and afterglow of the GRB are primarily governed by the interaction of the jet with its surroundings out to $\sim$ 0.1 pc and 10 pc, respectively, this primary shell does not affect the observational signature of the GRB at early times. However, we simulate this bubble first to verify that for reasonable choices of ambient density it is indeed at least 10 pc from the star when the star dies and so does not immediately impact the afterglow. Also, the effect of local density and radiative cooling on the structure and kinematics of shells in general is more easily seen with this first bubble than with the more complicated structures created by interactions between slow outbursts and fast winds just before the death of the star. The wind bubble has the classic two-shock structure first described by \citet{castorEA75} and \citet{weaverEA77}. As we show in Fig.~\ref{figure1_windBubbles}, one shock forms at the interface between the emergent wind and the surrounding gas as it is swept up at speeds greatly exceeding the sound speed of the gas. As gas accumulates on the bubble, the shock detaches from the wind and moves ahead of it, forming an intervening shell of dense postshock gas. At the same time, the expansion of the bubble evacuates a cavity into which the wind freely streams. Since the shell moves more slowly than the wind, a termination shock also forms where the wind piles up against the inner surface of the shell. If gas in the shell can radiatively cool, it flattens into a cold, dense structure that is prone to fragmentation into clumps. We first performed four tests with steady winds of $\dot{m}_w$ = 10$^{-5}$ M$_{\odot}$/yr and $v_w$ = 1000 km/s in uniform densities $n= $ 10, 100, 1000 and 1.8 $\times$ 10$^4$ cm$^{-3}$ to investigate how local densities govern the radius of the bubble at intermediate times. For simplicity, these calculations were done with no chemistry or radiative cooling. As we show in the left panel of Fig.~\ref{figure1_windBubbles}, ambient density governs only how far the bubble is driven from the star, not the profile of the free-streaming region in the immediate vicinity of the star, which is determined only by $\dot{m}$ and $v_{w}$ (and fluctuations thereof). We find that radiative cooling has a dramatic effect on the structure of the primary shell but no influence on the flow up to the shell, as we show in the right panel of Fig.~\ref{figure1_windBubbles} for the same wind and $n =$ 100 cm$^{-3}$. The three plots show the structure of the bubble with no cooling, H$_2$ cooling, and fine-structure cooling due to metals at $Z = $ 0.1 $Z_{\odot}$. The cooling flattens the shell into a cold dense structure and radiates away some of its thermal energy, slightly retarding its advance. The free-streaming zone is again unaffected because the wind velocity ensures that the termination shock is at least 0.6 pc from the star by the end of the simulation.
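These bubble radii can be cross-checked against the classic adiabatic wind-bubble solution of \citet{weaverEA77}, $R(t) \simeq 0.76\,(L_w/\rho_0)^{1/5}\,t^{3/5}$ with mechanical luminosity $L_w = \frac{1}{2}\dot{m}_w v_w^2$; the short sketch below evaluates it for the test wind above, with the stellar lifetime an assumed fiducial value.
\begin{verbatim}
# Hedged cross-check of the bubble sizes using the adiabatic solution of
# Weaver et al. (1977); the 1 Myr lifetime is an assumed fiducial value.
import numpy as np

pc, yr, Msun, mH = 3.086e18, 3.156e7, 1.989e33, 1.673e-24

mdot = 1e-5 * Msun / yr            # mass-loss rate in g/s
v_w = 1000e5                       # 1000 km/s in cm/s
L_w = 0.5 * mdot * v_w**2          # mechanical wind luminosity

for n in (10.0, 100.0, 1000.0):    # ambient densities in cm^-3
    rho0 = n * mH
    t = 1e6 * yr
    R = 0.76 * (L_w / rho0)**0.2 * t**0.6
    print(f"n = {n:6.0f} cm^-3 : R = {R / pc:5.1f} pc")
\end{verbatim}
For the $n =$ 100 cm$^{-3}$ case this gives $R$ of order 15 pc after 1 Myr, consistent with the 10 - 30 pc bubble radii quoted above.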
As the right panel of Fig.~\ref{figure1_windBubbles} shows, chemistry and cooling will clearly cause any subsidiary shells ejected by the progenitor at late times to be much thinner and denser, with potentially important consequences for the propagation of the jet. \begin{figure*} \plottwo{fig1a.eps}{fig1b.eps} \epsscale{2.0} \caption{Left panel: wind-blown bubbles at 6.25 $\times$ 10$^4$ yr for $\dot{m}_w = 10^{-5}$ M$_{ \odot}$/yr and $v_w = 10^3$ km/s for four fiducial ambient densities, 10, 100, 1000 and 1.8 $\times$ 10$^4$ cm$^{-3}$. These profiles include no radiative cooling. As can be seen in the plots, densities within $\sim$ 1 pc of the star depend only on $\dot{m}$ and $v_w$ for $n \lesssim$ 100 cm$^{-3}$. Right panel: structure of the shell at 1.25 $\times$ 10$^4$ yr in an ambient density of 100 cm$^{-3}$ with no cooling, H$_2$ cooling and fine structure cooling by C, O, N, Si and Fe at $Z =$ 0.1 $Z_{ \odot}$ (from the Dalgarno--McCray, or DM, cooling curves). Radiative cooling flattens the shell plowed up by the wind into a cold dense structure, with no effect on the free-streaming region in the vicinity of the star. Note that efficient cooling in the shell also radiates away some of its thermal energy and slows its advance.} \vspace{0.1in} \label{figure1_windBubbles} \end{figure*} \section{Circumburst Density Profiles of Collapsars and He Mergers} \begin{figure}[t] \centering \begin{tabular}{c} \epsfig{file=fig2a.eps, width=0.5\linewidth,clip=} \\ \epsfig{file=fig2b.eps, width=0.5\linewidth,clip=} \\ \end{tabular} \caption{Density profiles (a) and temperature profiles (b) for a 100 yr outburst with $\dot{m}_b =$ 10$^{-2}$ M$_{\odot}$/yr in a stellar wind with $\dot{m}_w =$ 10$^{-5}$ M$_{\odot}$/yr. Black: 120 yr; red: 600 yr; blue: 1000 yr. \label{figure2_exampleProfiles}} \end{figure} We now consider the more complicated structures in the immediate vicinity of the star at the time of the GRB. We examine a fiducial case from our grid of shell models, the 100 yr outburst in a stellar wind with $\dot{m}_w = $ 10$^{-5}$ M$_{\odot}$/yr, whose density and temperature profiles we show at 120, 600 and 1000 yr in Fig.~\ref{figure2_exampleProfiles}. As the shell emerges from the star, it promptly cools to extremely low temperatures, as we show in panel (b) of Fig.~\ref{figure2_exampleProfiles} at 120 yr. This happens because fine-structure cooling timescales are less than a year in the dense 0.1 \ensuremath{Z_\odot}\ shell. At the same time, the fast wind detaches from and races ahead of the shell, as we show in panel (a) of Fig.~\ref{figure2_exampleProfiles}. This creates a zone of rarefaction into which the shell freely streams, as shown by its roughly $r^{-2}$ density profile. As the wind pulls away from the shell, the abrupt adiabatic expansion causes temperatures at its inner edge to drop sharply. The expansion of the rarefied region likewise causes its temperatures to fall to $\sim$ 1 K. In reality, gas temperatures cannot fall below the cosmic microwave background temperature at this epoch, $T_{CMB} = 2.73 (1+z)$ K, but this is not included in our simulations. We also ignore any heating of the gas due to other external processes such as cosmic rays or photons from the progenitor itself.
Although these processes could conceivably alter the density profile in the regions of free-streaming wind if those regions were to become ionized, the temperature of the shell is set by the conversion of the free-streaming wind's kinetic energy into thermal energy at the shell's inner edge (as well as through radiative cooling throughout its interior). Any modifications of the shell structure due to heating by other processes are negligible. The density dip at the beginning of the rarefaction region and the density jump at the rear edge of the detached wind are complementary and due to mass conservation. When the fast wind breaks away from the shell, a thin shell of dense gas from its outer layers breaks off with it, leaving a thin layer whose density is even lower than that of the rarefied region and which remains a persistent feature of the flow out to $\sim$ 500 yr. A transient structure with multiple strong density jumps results: the emerging massive shell and the inner edge of the rapidly receding wind separated by an intervening low-density region. The shell soon begins to plow up the low-density gas, as can be seen in the density bump just beyond its leading edge. After the shell has been fully ejected (and the fast wind has exited the grid), it soon evolves into a wind bubble as the fast wind piling up at its inner surface forms a termination shock that detaches and recedes from it in the frame of the shell. The termination shock heats the gas piling up at the inner surface of the shell to $\sim$ 10$^8$ K while the shell itself remains cold due to fine-structure cooling. Densities in the postshock gas between the termination shock and the shell become roughly uniform because sound speeds in this shocked region are $\sim$ 1100 km/s and it is nearly isothermal, so pressure gradients in this region arising from any initial density gradients across it are erased by acoustic waves on timescales that are short in comparison to the expansion times of the shell. Temperatures in the shell are lower than those of canonical wind bubbles because the ambient density is too low to heat the shell and activate H and He line cooling, in contrast to Fig.~\ref{figure1_windBubbles}. Strong bremsstrahlung x-ray flux from shocked gas at the inner surface of the shell likely ionizes it to some degree, but probably does not otherwise alter its properties because, at these photon energies, most of the kinetic energy of the photoelectrons goes into secondary ionizations rather than heat \citep[e.g.][]{shullvanSteenberg85,ricottiEA01,ricottiEA02,ricottiEA05}. By 760 yr the shell has swept up enough low-density gas to form a shock, which soon separates from the shell and advances beyond it as shown in the density and temperature plots at 1000 yr. This feature is temporary: the shell eventually subsumes the secondary shock as it expands. A free-streaming region forms behind the termination shock and extends to $\sim$ 0.1 pc by 1000 yr. Part of the reason the shell drives a shock at intermediate times is that it accelerates as it is pushed by the fast wind and as it expands into low densities that continue to fall over time, as we show in all three density plots. A velocity gradient develops across the shell as its inner and outer surfaces accelerate to 220 km/s and 280 km/s respectively over 1000 yr. Radiative cooling flattens the ejected shell as it expands into the surrounding medium. Its width is 0.02 pc at 100 yr but decreases essentially to the resolution limit of the grid by 1000 yr.
This is not unexpected: when a dense shell driven by winds or swept up by a supernova can radiatively cool, its width usually drops to a few mesh zones, with the number of zones being determined by the numerical viscosity of the hydrodynamic algorithm. All seven models exhibit the same general evolution, with the only variations being in the thickness of the shell over time, its peak densities, and its location on the grid at the time of the GRB. Depending on when the explosion occurs, the jet will encounter three or four density jumps, with jump ratios of up to 10$^{10}$. The thickness and average density of the shell vary from 0.02 to 0.0001 pc and 10$^{5}$ - 10$^{9}$ cm$^{-3}$ over 1000 yr. All our models of massive shell ejection follow the evolutionary sequence we have just described, with the only differences being in the magnitudes of the density jumps, the mass and thickness of the shell at a given time, and the positions of the regions on the grid. \section{Afterglow Light Curves} We now discuss the general method we have devised for computing light curves for relativistic jets in the complicated wind structures we have modeled in ZEUS-MP. \subsection{Blast Wave Hydrodynamics} In the canonical fireball model, gamma-ray bursts are modelled as initially highly relativistic, adiabatic jets that propagate outward into an ambient medium. We assume that the jet expands adiabatically, i.e., that only a very small fraction of the total burst energy is available to the electrons to be radiated away, and that all of the circumburst material in the jet's path is swept up. In the following discussion, primed quantities refer to the reference frame that is comoving with the jet, unprimed quantities with no subscript refer to the reference frame in which the ISM is at rest, and quantities with the subscript $\earth$ refer to the reference frame of an Earthbound observer. Energy conservation yields a formula for the evolution of the jet as it propagates through the external medium. In the case where the jet expands adiabatically, \begin{equation} \frac{d\Gamma}{dm} = -\frac{\hat{\gamma}\left(\Gamma^2 - 1\right) - \left(\hat{\gamma} - 1\right)\Gamma\beta^2}{M_\text{ej} + m\left[2\hat{\gamma}\Gamma - \left(\hat{\gamma} - 1\right)\left(1 + \Gamma^{-2}\right)\right]}, \label{GammaEquation} \end{equation} \noindent where $\Gamma$ is the Lorentz factor of the jet, $M_\text{ej}$ is the initial mass of the jet ejecta, $m$ is the total mass that has been swept up by the jet, $\beta = \left(1-\Gamma^{-2}\right)^{1/2}$ is the normalized bulk velocity, and $\hat{\gamma}$ is the adiabatic index \citep{peer12}. The high resolution of our simulations ($10^{-4}\ \text{pc}$) allows us to take the density to be constant across each mesh zone. The total mass swept up by the jet by the time it reaches grid point $n$ is then approximately \begin{equation} M(r) = \frac{4}{3}\pi\left(\rho_1 r_1^3 + \sum_{i=2}^{n}{\rho_i\left(r_i^3 - r_{i-1}^3\right)} \right), \end{equation} where $r_i$ and $\rho_i$ are the radius and density of the $i$th grid point, respectively. The time $t_\text{obs}$ at which a photon emitted at the shock boundary reaches an observer along the line of sight can be calculated by integrating equation 12 of \citet{huangEA99}: \begin{equation} t = \frac{1}{c}\int{\frac{dr}{\beta\Gamma\left(\Gamma + \sqrt{\Gamma^2 - 1}\right)}}\, \label{tObsEquation} \end{equation} \noindent where $\beta = v/c$ is the normalized jet velocity and $c$ is the speed of light.
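A minimal sketch of this procedure is shown below: equation \ref{GammaEquation} is integrated zone by zone through a tabulated density profile, and equation \ref{tObsEquation} accumulates the observer time. The profile used here is an assumed free-streaming $r^{-2}$ wind rather than an actual simulation output, and the burst energy, initial Lorentz factor and fixed adiabatic index are fiducial assumptions.
\begin{verbatim}
# Sketch: integrate dGamma/dm through a tabulated density profile and map
# radius to observer time.  The r^-2 profile, E_iso, Gamma_0 and a fixed
# adiabatic index are illustrative assumptions; a Gamma-dependent index
# interpolating to 5/3 in the Newtonian limit would be more accurate.
import numpy as np

c, mp, pc = 2.998e10, 1.673e-24, 3.086e18

r = np.logspace(np.log10(1e-4 * pc), np.log10(0.3 * pc), 32000)
n = 1e3 * (r / r[0])**-2              # number density, cm^-3 (assumed wind)
rho = n * mp

E_iso, Gamma0 = 1e53, 300.0           # fiducial burst energy and Gamma_0
M_ej = E_iso / (Gamma0 * c**2)        # initial ejecta mass
gh = 4.0 / 3.0                        # adiabatic index (relativistic value)

Gamma, m, t_obs = Gamma0, 0.0, 0.0
for i in range(1, len(r)):
    dr = r[i] - r[i - 1]
    dm = 4.0 * np.pi * r[i]**2 * rho[i] * dr   # mass swept up in this zone
    beta2 = 1.0 - Gamma**-2
    num = gh * (Gamma**2 - 1.0) - (gh - 1.0) * Gamma * beta2
    den = M_ej + m * (2.0 * gh * Gamma - (gh - 1.0) * (1.0 + Gamma**-2))
    Gamma = max(Gamma - num / den * dm, 1.0 + 1e-12)
    m += dm
    beta = np.sqrt(1.0 - Gamma**-2)
    t_obs += dr / (c * beta * Gamma * (Gamma + np.sqrt(Gamma**2 - 1.0)))
\end{verbatim}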
Many GRBs are thought to produce collimated jets rather than isotropic outflows. The center of a relativistic jet of half-opening angle $\theta_j$ is not in causal contact with its edge until $t_\text{obs} = t_\text{jet}$, defined as the time at which $\Gamma \simeq 1/\theta_j$. The jet evolves in a manner identical to the isotropic case until $t_\text{obs} = t_\text{jet}$, at which point the jet experiences rapid lateral expansion. The increase in external mass being swept up by the jet after $t_\text{obs} = t_\text{jet}$ causes the jet to decelerate at an increasing rate. Additionally, the afterglow's decreasing luminosity is no longer partially offset by an increase in the size of the emitting region as seen by the observer, leading to a break in the light curve. The overall evolution of the jet's angular size is \begin{equation} \frac{d\theta_j}{dt} = \frac{c'_s\left(\Gamma + \sqrt{\Gamma^2 - 1}\right)}{r}, \end{equation} where $c'_s$ is the comoving sound speed and $r$ is the radius of the jet \citep{huangEA00}. \subsection{The Injection Break} If we assume that a constant fraction $\epsilon_B$ of the total fireball energy is stored in magnetic fields, then the equipartition magnetic field strength at the shock boundary is \citep[e.g.][]{panaitescuKumar00}: \begin{equation} \frac{B'^2}{8\pi} = 4\epsilon_Bm_pc^2n(r)(\Gamma-1)\left(\Gamma + \frac{3}{4}\right), \end{equation} \noindent where $m_p$ is the proton mass and $n(r)$ is the number density of the medium at radius $r$. The electrons that are injected into the shock are assumed to have a power-law energy distribution $N(\gamma) \propto \gamma^{-p}$ with a minimum Lorentz factor $\gamma_m$. Electrons with a Lorentz factor $\gamma_e$ emit synchrotron radiation at a characteristic frequency \citep{rybickiLightman79}: \begin{equation} \nu(\gamma_e) = \Gamma\gamma_e^2\frac{q_eB}{2\pi m_ec}. \label{characteristicFrequencyEquation} \end{equation} \noindent The injection break, $\nu_m$, corresponds to the characteristic frequency at which the electrons having the minimum Lorentz factor radiate. The minimum Lorentz factor is \citep{sariEA98} \begin{equation} \gamma_m = \left(\frac{p - 2}{p - 1}\right)\frac{m_p}{m_e}\epsilon_e\left(\Gamma - 1\right), \label{minLorentzFactorEquation} \end{equation} \noindent where $m_e$ is the electron mass. Note that equation \ref{minLorentzFactorEquation} is only valid in the relativistic limit. Substituting equation \ref{minLorentzFactorEquation} into equation \ref{characteristicFrequencyEquation} will therefore only yield $\nu_m$ as long as $\Gamma \gtrsim 2.0$. \citet{frailEA00} show that, for $\Gamma \lesssim 2.0$, the injection break frequency follows the simple relation $\nu_m \propto t^{-3}$. \subsection{The Cooling Break} Relativistic electrons in the shock cool radiatively through inverse Compton (IC) scattering and synchrotron emission on a co-moving frame timescale \begin{equation} t'_\text{rad}(\gamma) = \frac{6\pi}{Y+1}\frac{m_ec}{\sigma_e\gamma B'^2}, \label{remnantAgeEquation} \end{equation} \noindent where $Y$ is the Compton parameter and $\sigma_e$ is the Thomson scattering cross section \citep{panaitescuKumar00}. An electron with Lorentz factor $\gamma_c$ cools radiatively on a timescale equal to the current age of the remnant. Solving Eqn.
\ref{remnantAgeEquation} for $\gamma_c$, we find that the Lorentz factor of electrons that cool on a timescale equal to the comoving-frame age of the remnant is \begin{equation} \gamma_c = \frac{6\pi m_ec}{B'^2\sigma_e(Y + 1)t'}. \label{coolingBreakEquation} \end{equation} \subsubsection{Fast-Cooling Electrons} Electrons in the GRB jet can cool by adiabatic expansion of the gas or by emission of radiation. When the cooling timescale for electrons with Lorentz factor $\gamma_m$ is less than the age of the jet ($\nu_c < \nu_m$, where $\nu_c$ is the frequency of the cooling break), the electrons in the jet lose a significant portion of their energy through emission of radiation and are said to be radiative, or fast-cooling. Conversely, if the cooling timescale is greater than the age of the jet ($\nu_c > \nu_m$), the electrons do not lose significant energy to radiation and are said to be adiabatic, or slow-cooling. To calculate the Compton parameter, $Y$, we account for only a single upscattering of the synchrotron photons. If the injected electrons are fast-cooling and the frequency of the absorption break satisfies $\nu_a < \text{min}(\nu_m, \nu_c)$, then $Y$ can be approximated as \citep{panaitescuMeszaros00} \begin{equation} Y_r = \gamma_\text{m}\gamma_\text{c}\tau'_\text{e}, \label{radiativeComptonParameterEquation1} \end{equation} \noindent where a constant of order unity has been ignored and $\tau'_e$ is the optical depth to electron scattering, given by \begin{equation} \tau'_e = \frac{\sigma_eM(r)}{4\pi m_\text{p}r'^2}. \end{equation} \noindent The medium becomes optically thick to synchrotron self-absorption at the absorption break frequency $\nu_a$. When both the injection break and the cooling break lie in the optically thick regime, $Y$ becomes \begin{equation} Y_r = Y_* = \tau'_e\left(C_2^{2-p}\gamma_c^7\gamma_m^{7(p-1)}\right)^{1/(p+5)}, \end{equation} \noindent where $C_2 \equiv 5q_e\tau'_e/\sigma_eB'$ \citep{panaitescuMeszaros00}. \subsubsection{Slow-Cooling Electrons} If the electrons are slow-cooling, then $Y$ becomes \begin{equation} Y_a = \tau'_\text{e}\gamma_\text{m}^{p-1}\gamma_\text{c}^{3-p}, \end{equation} \noindent as long as $\nu_a < \text{min}(\nu_m, \nu_c)$ \citep{panaitescuMeszaros00} and $1 < p < 3$. Once again we have ignored a constant of order unity. If both the injection and cooling breaks lie in the part of the spectrum that is optically thick to synchrotron self-absorption, then $Y$ is identical to the corresponding fast-cooling case and \begin{equation} Y_a = Y_*. \end{equation}
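Collecting the formulas above, the sketch below evaluates the injection and cooling break frequencies for a given jet state; the microphysical parameters $\epsilon_e$, $\epsilon_B$ and $p$ are fiducial assumptions, and the Compton parameter is set to zero for simplicity.
\begin{verbatim}
# Sketch of the injection and cooling breaks for a given jet state.
# epsilon_e, epsilon_B, p and the sample (Gamma, n, t') are assumptions.
import numpy as np

c, mp, me = 2.998e10, 1.673e-24, 9.109e-28
qe, sigma_T = 4.803e-10, 6.652e-25

def break_frequencies(Gamma, n, t_com, eps_e=0.1, eps_B=0.01, p=2.5, Y=0.0):
    # equipartition field at the shock boundary
    B = np.sqrt(32.0 * np.pi * eps_B * mp * c**2 * n
                * (Gamma - 1.0) * (Gamma + 0.75))
    # minimum injected Lorentz factor (relativistic limit)
    g_m = (p - 2.0) / (p - 1.0) * (mp / me) * eps_e * (Gamma - 1.0)
    # electrons that cool on the comoving age of the remnant
    g_c = 6.0 * np.pi * me * c / (sigma_T * (Y + 1.0) * B**2 * t_com)
    nu = lambda g: Gamma * g**2 * qe * B / (2.0 * np.pi * me * c)
    return nu(g_m), nu(g_c)

nu_m, nu_c = break_frequencies(Gamma=100.0, n=1.0, t_com=1.0e4)
print(f"nu_m = {nu_m:.2e} Hz, nu_c = {nu_c:.2e} Hz")
\end{verbatim}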
\subsection{The Absorption Break} At lower frequencies, the medium through which the jet propagates becomes optically thick to synchrotron self-absorption. The result is a transition to an $F_\nu \propto \nu^2$ drop-off in the flux below some absorption break frequency $\nu_a$ at which the optical depth to self-absorption is $\tau_\text{ab} = 1$. The frequency of the absorption break depends on the electron cooling regime (fast or slow) and on the order and values of both the injection and cooling breaks. In the fast-cooling regime, \citet{panaitescuMeszaros00} find that \begin{equation} \nu'_\text{a, fast-cooling} = \begin{cases} C_2^{3/10}\gamma_c^{-1/2} , & \gamma_a < \gamma_c < \gamma_m \\ \left(C_2\gamma_c\right)^{1/6} , & \gamma_c < \gamma_a < \gamma_m \\ \left(C_2\gamma_c\gamma_m^{p-1}\right)^{1/(p+5)}, & \gamma_c < \gamma_m < \gamma_a, \\ \end{cases} \end{equation} \noindent whereas in the case where the electrons are slow-cooling \begin{equation} \nu'_\text{a, slow-cooling} = \begin{cases} C_2^{3/10}\gamma_m^{-1/2} , & \gamma_a < \gamma_m < \gamma_c \\ \left(C_2\gamma_m^{p-1}\right)^{1/(p-4)} , & \gamma_m < \gamma_a < \gamma_c \\ \left(C_2\gamma_m^{p-1}\gamma_c\right)^{1/(p+5)}, & \gamma_m < \gamma_c < \gamma_a. \\ \end{cases} \end{equation} \subsection{Light Curves} In order to produce light curves, we must first find the time dependence of $\Gamma(r)$, $n(r)$, and $M(r)$. Equation \ref{GammaEquation} can be solved numerically for $\Gamma(r)$, and equation \ref{tObsEquation} can then be used to relate the observer time $t_\text{obs}$ to the jet position $r$, allowing us to rewrite the equations defining the three break frequencies in terms of $t_\text{obs}$, $\Gamma(t_\text{obs})$, $n(t_\text{obs})$, and $M(t_\text{obs})$. Given the three break frequencies and the peak flux density, analytical light curves can then be calculated that are valid from the radio to the $\gamma$-ray regions of the spectrum. If $\nu_a < \text{min}(\nu_m, \nu_c)$, then the peak flux density $F_{\nu,\earth}^\text{max}$ occurs at the injection break if $\nu_\text{m} < \nu_\text{c}$ and at the cooling break if $\nu_\text{m} > \nu_\text{c}$: \begin{equation} F_{\nu,\earth}^\text{max} = \frac{\sqrt{3}\phi_\text{p}}{4\pi D^2}\frac{e^3}{m_\text{e}c^2}\frac{\Gamma B'M(r)}{m_\text{p}}, \end{equation} \noindent where $\phi_\text{p}$ is a factor calculated by \citet{wijersGalama99} that depends on the value of $p$, and $D = (1+z)^{-1/2}D_l$, where $D_l$ is the luminosity distance to the source \citep{panaitescuKumar00}. The flux at any frequency $\nu$ (ignoring relativistic beaming and the spherical nature of the emitting region) has been derived by \citet{sariEA98} and \citet{panaitescuKumar00}. \subsubsection{Fast-Cooling Electrons} When the electrons are in the fast-cooling regime, the peak flux density occurs at the cooling break as long as $\nu_a < \nu_c$: \begin{equation} F_{\nu, \earth} = F_{\nu,\earth}^\text{max} \begin{cases} (\nu/\nu_a)^2(\nu_a/\nu_c)^{1/3}, & \nu < \nu_a \\ (\nu/\nu_c)^{1/3}, & \nu_a < \nu < \nu_c \\ (\nu/\nu_c)^{-1/2}, & \nu_c < \nu < \nu_m \\ (\nu/\nu_m)^{-p/2}(\nu_m/\nu_c)^{-1/2}, & \nu_m < \nu. \end{cases} \label{fluxEquation1} \end{equation} \noindent If the medium is optically thick to synchrotron self-absorption at the cooling break frequency, then the maximum flux moves to the absorption break frequency. Between the absorption break and the cooling break $F_\nu \propto \nu^{5/2}$, becoming $\propto \nu^{2}$ below the cooling break: \begin{equation} F_{\nu, \earth} = F_{\nu,\earth}^\text{max} \begin{cases} (\nu/\nu_c)^2(\nu_c/\nu_a)^{5/2}, & \nu < \nu_c \\ (\nu/\nu_a)^{5/2}, & \nu_c < \nu < \nu_a \\ (\nu/\nu_a)^{-1/2}, & \nu_a < \nu < \nu_m \\ (\nu/\nu_m)^{-p/2}(\nu_m/\nu_a)^{-1/2}, & \nu_m < \nu. \end{cases} \end{equation}
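For reference, the first fast-cooling case above (equation \ref{fluxEquation1}, valid for $\nu_a < \nu_c < \nu_m$) transcribes directly into a piecewise function; the break frequencies and $F_{\nu,\earth}^\text{max}$ are assumed to be supplied by the jet-evolution and break-frequency calculations sketched earlier.
\begin{verbatim}
# Piecewise fast-cooling spectrum for nu_a < nu_c < nu_m (a direct
# transcription of the case above); inputs come from the earlier sketches.
def flux_fast_cooling(nu, nu_a, nu_c, nu_m, F_max, p=2.5):
    if nu < nu_a:
        return F_max * (nu / nu_a)**2 * (nu_a / nu_c)**(1.0 / 3.0)
    if nu < nu_c:
        return F_max * (nu / nu_c)**(1.0 / 3.0)
    if nu < nu_m:
        return F_max * (nu / nu_c)**-0.5
    return F_max * (nu / nu_m)**(-p / 2.0) * (nu_m / nu_c)**-0.5
\end{verbatim}
The remaining spectral orderings are transcribed in the same way, with the light curve at a fixed band assembled by evaluating the appropriate case at each $t_\text{obs}$.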
In the canonical afterglow models that assume a uniform density environment, the cooling break and the injection break move to lower frequencies with time. Eventually, both the cooling break and the injection break can lie below the absorption break, but far too late in the evolution of the burst to be relevant to anything but the radio afterglow, and long after the time at which the electrons in the jet have transitioned to the slow-cooling regime. In our more realistic density profile models, the extremely high density encountered by the jet as it passes through the thick shell causes it to abruptly transition from highly relativistic to Newtonian expansion. The decrease in $\Gamma$ leads to a sharp drop in the injection break frequency, while the increased medium density leads to a larger magnetic field strength, which in turn causes a drop in the cooling break frequency. The result is that the absorption break frequency can be several orders of magnitude higher than the cooling and injection break frequencies as the jet traverses the thick shell. Multiple transitions between fast and slow electron cooling can also occur. In the vicinity of the thick shell, when $\nu_a > \nu_m$ and the electrons are in the fast-cooling regime: \begin{equation} F_{\nu, \earth} = F_{\nu,\earth}^\text{max} \begin{cases} (\nu/\nu_c)^2(\nu_c/\nu_a)^{5/2}, & \nu < \nu_c \\ (\nu/\nu_a)^{5/2}, & \nu_c < \nu < \nu_a \\ (\nu/\nu_a)^{-p/2}, & \nu_a < \nu. \\ \end{cases} \label{fluxEquation3} \end{equation} \subsubsection{Slow-Cooling Electrons} Our models yield the same flux as the canonical wind models until the jet encounters the shocked wind that has piled up behind the thick ejecta shell. If it does not encounter the shocked wind in the first few hours, the electrons in the shock transition to the slow-cooling regime, with $\nu_a \ll \nu_m$ and \begin{equation} F_{\nu, \earth} = F_{\nu,\earth}^\text{max} \begin{cases} (\nu/\nu_a)^2(\nu_a/\nu_m)^{1/3}, & \nu < \nu_a \\ (\nu/\nu_m)^{1/3}, & \nu_a < \nu < \nu_m \\ (\nu/\nu_m)^{-(p-1)/2}, & \nu_m < \nu < \nu_c \\ (\nu/\nu_c)^{-p/2}(\nu_c/\nu_m)^{-(p-1)/2}, & \nu_c < \nu. \end{cases} \end{equation} \noindent In the case where $\nu_m < \nu_a < \nu_c$, \begin{equation} F_{\nu, \earth} = F_{\nu,\earth}^\text{max} \begin{cases} (\nu/\nu_m)^2(\nu_m/\nu_a)^{5/2}, & \nu < \nu_m \\ (\nu/\nu_a)^{5/2}, & \nu_m < \nu < \nu_a \\ (\nu/\nu_a)^{-(p-1)/2}, & \nu_a < \nu < \nu_c \\ (\nu/\nu_c)^{-p/2}(\nu_c/\nu_a)^{-(p-1)/2}, & \nu_c < \nu. \end{cases} \end{equation} As noted earlier, as the jet passes through the thick shell, it can experience multiple transitions between fast and slow electron cooling. When the electrons are in the slow-cooling regime and $\nu_a > \nu_c$, \begin{equation} F_{\nu, \earth} = F_{\nu,\earth}^\text{max} \begin{cases} (\nu/\nu_m)^2(\nu_m/\nu_a)^{5/2}, & \nu < \nu_m \\ (\nu/\nu_a)^{5/2}, & \nu_m < \nu < \nu_a \\ (\nu/\nu_a)^{-p/2}, & \nu_a < \nu. \end{cases} \label{fluxEquation6} \end{equation} \subsection{Spherical Emission and Beaming} The spherical nature of GRB afterglow emission can substantially alter the observed light curve \citep{fenimoreEA96}. The burst ejecta is initially ultra-relativistic, with $100 \lesssim \Gamma \lesssim 1000$, meaning that radiation emitted along the observer's line of sight will be beamed toward the observer more than radiation emitted away from the line of sight. There will also be a delay in the arrival time of photons emitted from regions of the jet that lie away from the observer's line of sight, since these regions are further away from the observer \citep{fenimoreEA96}.
The overall effect will be to increase the total observed flux at early times, and to make the light curve broader and smoother. \begin{figure}[t] \centering \begin{tabular}{c} \epsfig{file=fig3a.eps, width=0.5\linewidth,clip=} \\ \epsfig{file=fig3b.eps, width=0.5\linewidth,clip=} \\ \epsfig{file=fig3c.eps, width=0.5\linewidth,clip=} \\ \end{tabular} \caption{Light curves and break frequencies for a GRB in a dense shell. Panel (a): densities encountered by the jet over time; panel (b): synchrotron light curves; panel (c): break frequencies. Region V, a second region of unshocked wind, is not shown here because the fast detached wind has exited the grid. \label{figure3_lightCurves}} \end{figure} \subsection{Imprint of Dense Shells on GRB Light Curves} The shell and wind enclosing the GRB can be partitioned into four distinct regions, which we show in Fig.~\ref{figure3_lightCurves}a for a progenitor with a 0.1 M$_{\odot}$ shell ejected 100 yr before the burst and mass loss rates of 10$^{-6}$ M$_{\odot}$/yr before and after the ejection of the shell. A fifth region, not shown in Fig.\ \ref{figure3_lightCurves}, also exists, but is located at such large radii at the time of the GRB that the jet does not encounter it until very late times. Even so, this region is discussed below for completeness. We now examine the imprint of each region on the GRB light curves and break frequencies, which we show in Figures~\ref{figure3_lightCurves}b and \ref{figure3_lightCurves}c. \subsubsection{Region I -- Unshocked Wind} Region I is the unshocked wind blown by the progenitor after the ejection of the shell and prior to the burst. Not surprisingly, the light curve in this region is what would be expected for a $\rho(r) \propto r^{-2}$ wind profile (Fig.~\ref{figure3_lightCurves}b). The jet electrons are initially in the fast-cooling regime, and the absorption break frequency is generally higher than the cooling break frequency for the first few seconds, as is evident in Fig.~\ref{figure3_lightCurves}c. \subsubsection{Region II -- Shocked Wind} Region II is the shocked wind that has piled up behind the shell, where the density jumps by about an order of magnitude and transitions from an $r^{-2}$ density profile to a nearly flat one (Fig.~\ref{figure3_lightCurves}a). Although the increased density has little effect on the injection break frequency, the cooling break and the absorption break each change abruptly by approximately an order of magnitude (Fig.~\ref{figure3_lightCurves}c). There is a moderate change in the flux at all frequencies, though the magnitude of the change at each frequency, and whether it is an increase or a decrease, depends upon the values of the break frequencies. Because region II's imprint on the light curve is modest at best even when the emitting region's spherical nature is ignored, it is likely that the inclusion of spherical emission would eliminate the imprint entirely. This is because the jet is still highly relativistic when it enters region II, and the observer does not see the radiation that is emitted when the jet enters region II all at once. The result is that the small bump in the light curve that occurs at the interface between regions I and II would be smoothed out, and region II would have essentially no effect on the light curve.
Depending on the progenitor's mass-loss rate, the mass of the shell, and the delay between the expulsion of the shell and the burst, the jet can arrive at this region anywhere from several minutes after the burst (in the limit of low mass-loss rates, high shell masses, and short delays between the shell ejection and the burst) to a year (in the limit of high mass-loss rates, low shell masses, and long delays). In Fig.~\ref{figure4_ShellB004} we show how the time between shell ejection and the burst governs when the jet reaches region II, and hence when the light curves and breaks would diverge from those expected for $r^{-2}$ wind profiles or uniform density fields. In Fig.~\ref{figure5_modelComparison}a we show how mass-loss rates and shell masses impact these arrival times. \subsubsection{Region III -- Dense Shell} Region III is the dense shell ejected by the progenitor. The jet collides with it between about an hour and several years after the burst, depending upon the progenitor's wind mass-loss rate, the mass of the shell, and the delay between shell ejection and the burst (Figs.~\ref{figure4_ShellB004} \& \ref{figure5_modelComparison}a). When the jet crosses into region III it abruptly becomes non-relativistic because of the large density jump there, which can be up to ten orders of magnitude (Fig.~\ref{figure5_modelComparison}a). A bright, highly relativistic reverse shock almost certainly forms, which we have neglected for simplicity. The transit time through the shell depends on its mass and the time between its ejection and the burst. Massive shells decelerate the jet more than less massive ones, increasing the time that the jet is within the shell. The time between the ejection and the burst governs the degree to which radiative cooling flattens the shell into a thin, cold, dense structure, shortening the time the jet is inside the shell. Both factors cause the crossing time to vary from as little as a day to as much as several hundred days (Fig.~\ref{figure5_modelComparison}a). The shell leaves a clear imprint on the spectrum of the jet. Upon collision, there is a sharp drop in the cooling and injection break frequencies and an increase in the absorption break frequency. This causes an abrupt increase in the flux at all frequencies (Fig.~\ref{figure5_modelComparison}b). Inside the shell, the cooling break frequency rises and the magnetic field strength falls as the jet decelerates, and the injection break evolves to lower frequencies. The delay in the arrival time of photons emitted away from the observer's line of sight will certainly decrease the imprint of the circumstellar shell on the light curve. At X-ray frequencies and higher, where the imprint of the shell is least dramatic, there may be little if any increase in the flux at the time that the jet enters the shell, though there should be a break in the light curve at that time, after which the flux will begin to decay at a slower rate. The jet becomes non-relativistic soon after it enters the circumstellar shell, greatly diminishing the importance of the spherical nature of the emitting region. As the jet traverses the shell, the observed flux must therefore approach the value that it would have if spherical emission were ignored. The maximum flux $F_\nu$ occurs at the frequency $\nu_{\rm{shell}}^{\rm{max}} = \rm{min}(\nu_m, \nu_c)$ in the circumstellar shell if $\nu_a < \rm{min}(\nu_m, \nu_c)$, and at $\nu_{\rm{shell}}^{\rm{max}} = \nu_a$ otherwise.
The increase in flux is largest for frequencies below $\nu_{\rm{shell}}^{\rm{max}}$ (Fig.~\ref{figure5_modelComparison}), and the shell's imprint on the light curve is consequently largest there. Even including the effect of spherical emission, a significant rebrightening should still occur for frequencies below $\nu_{\rm{shell}}^{\rm{max}}$ (typically optical or infrared frequencies and below). While spherical emission will have a strong effect on the light curve at the time that the jet enters the shell, it will not significantly alter the light curve when the jet exits the shell because the jet is non-relativistic by then. The abrupt drop in the flux that occurs when the jet exits the shell should therefore remain clearly visible in the light curve at all frequencies. \subsubsection{Region IV -- Low-Density Cavity} Region IV is the extremely low-density cavity created when the fast wind beyond the shell detaches from and races ahead of it. The densities in this region are so low ($10^{-5} - 10^{-8}\ \text{cm}^{-3}$, Fig.~\ref{figure5_modelComparison}a) that the flux density drops at all wavelengths when the jet exits the shell (Figs.~\ref{figure3_lightCurves}b \& \ref{figure5_modelComparison}b). By this stage, the jet has become non-relativistic, and photons emitted away from the line of sight are no longer significantly delayed with respect to those emitted by the portion of the jet that moves directly toward the observer. The observer sees the entire leading edge of the jet reach region IV at nearly the same time, resulting in an abrupt drop in the flux of several orders of magnitude at all but radio wavelengths, where the drop is not quite as large. Because the jet does not sweep up much material in this region, its magnetic field strength and velocity taper off slowly. Consequently, the break frequencies are roughly constant (Fig.~\ref{figure3_lightCurves}c) with $\nu_m \ll \nu_a \ll \nu_c$, and the spectrum is fairly constant for tens to hundreds of days (Figs.~\ref{figure3_lightCurves}b \& \ref{figure5_modelComparison}b). The absorption break generally lies below 1 GHz, so the flux density becomes negligible for frequencies above the radio band (Fig.~\ref{figure3_lightCurves}b). \subsubsection{Region V -- Detached Wind} Eventually, the jet crosses the rarefied region and catches up to the wind that preceded the ejection of the shell. The detached wind, region V, exhibits a sudden density jump of several orders of magnitude followed by an $r^{-2}$ dropoff thereafter (black curve in Fig.~\ref{figure3_lightCurves}a). Consequently, another reverse shock may form at the interface between regions IV and V. Unlike the reverse shocks expected between regions I and II and between regions II and III, this one is Newtonian. We evolved the wind and the shell out to a radius of 0.3 pc, and only when the burst occurs within 100 years of shell ejection does the jet overtake the detached wind before it exits the grid. Our models predict that the jet will not reach region V for at least several years after the burst, which is why we do not show it in Fig.~\ref{figure3_lightCurves}a. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{fig4.eps} \caption{Density encountered by the jet as it passes through the shell as a function of observer time $t_{\rm obs}$ for shells ejected 100, 200, and 500 yr prior to the GRB (left to right). Here, the progenitor had a mass-loss rate of $10^{-6}$ M$_{\odot}$ yr$^{-1}$ and a hydrogen shell of mass 0.1 M$_{\odot}$.
These plots, and our calculations, implicitly include the slowing of the jet in the dense shell. \label{figure4_ShellB004}} \end{figure} \section{Discussion and Conclusions} Our calculations show that GRB light curves in dense shells exhibit clear departures from those in canonical winds and uniform densities. These features can broadly discriminate between classes of GRB collapse scenarios. Analytical models predict that light curves for bursts in uniform or $r^{-2}$ density profiles are piecewise power law segments separated by breaks. In contrast, light curves for GRBs in dense shells ejected by the progenitor (predicted for both single and binary He mergers) initially follow those for simpler environments, but deviate from them in most cases on timescales of a few hours to a few days. The first departure is a sudden change in the flux of about an order of magnitude when the jet enters the shocked wind piled up behind the dense shell. A second, much more significant departure occurs soon thereafter when the jet collides with the dense shell. The features discussed above are distinct signatures of a GRB in the dense shell associated with some collapse scenarios. The GRB 091127 light curve may provide evidence for a jet encountering a thick shell of material. \citet{filgasEA11} determined that the temporal evolution of the GRB 091127 cooling break frequency could only be consistent with the standard fireball model if the GRB occurred in a medium whose density rises as $r^{11}$. Such a sharp increase in density with radius is easily produced in a wind bubble environment at the trailing edge of the shell. Our work also provides an alternative mechanism for the bright flares that are sometimes seen in GRB light curves within the first few thousand seconds of the onset of the afterglow emission. GRB 081029 produced a flare in the optical band that began at $\sim0.035$ days, peaked at about 0.07 days after a total increase in brightness of 1.1 magnitudes, and then slowly dimmed until $\sim0.2$ days \citep{nardiniEA11}. The structure and duration of the GRB 081029 optical flare are somewhat similar to those of the feature in Fig.~\ref{figure3_lightCurves}b that peaks at 0.3 days. A less intense flare was observed in the GRB 071112C X-ray afterglow. The fact that no corresponding flare was detected in the optical was used by \citet{huangEA12a} to argue that different mechanisms were responsible for the time evolution of the X-ray and optical afterglows. The simple power law structure of the optical light curve was shown to be consistent with the external shock model, whereas late injection and internal shocks inside the jet itself were invoked to explain the excess X-ray emission. Our work shows that there can be prominent, chromatic features in the light curves due solely to variations in the circumburst medium. For example, Fig.~\ref{figure3_lightCurves}b clearly illustrates that the structure of the light curves varies with energy band, and that it is not necessary to invoke delayed energy injection or other effects to account for these differences. A similar case is that of the ``giant'' X-ray flare of GRB 050502B, wherein the photon count rate was seen to increase by a factor of 500 at $345\pm30$~s \citep{falconeEA06}. Like GRB 071112C, GRB 050502B was not observed to have an optical counterpart \citep{falconeEA06}, which has been used as evidence of a late-injection event.
Our models can produce flares of comparable magnitude and duration, and can also explain the broad plateau in the X-ray flux that lasts from the end of the flare up until about a day after the burst. A GRB jet that encounters a thick shell of circumburst material will produce a flare that will rapidly dim as the jet becomes non-relativistic. The flux then remains nearly constant until the jet emerges from the leading edge of the roughly uniform-density shell and encounters a sharp drop in the medium density. \begin{figure}[h] \centering \begin{tabular}{c} \epsfig{file=fig5a.eps, width=0.5\linewidth,clip=} \\ \epsfig{file=fig5b.eps, width=0.5\linewidth,clip=} \\ \end{tabular} \caption{Density profiles (a) and X-ray light curves (b) ($3\times10^8$ GHz, or 1.24 keV) for a GRB occurring 500 yr after ejection of the progenitor's hydrogen shell. Black: ${\dot{m}}_w = 10^{-6}$ M$_{\odot}$ yr$^{-1}$; blue: ${\dot{m}}_w = 10^{-5}$ M$_{\odot}$ yr$^{-1}$; red: ${\dot{m}}_w = 10^{-4}$ M$_{\odot}$ yr$^{-1}$. Solid lines: 0.1 M$_{\odot}$ shells; dotted lines: 1 M$_{\odot}$ shells. \label{figure5_modelComparison}} \end{figure} The unique imprint of dense shells on GRB light curves can be used to constrain their properties. As shown in Fig.~\ref{figure5_modelComparison}b, the afterglow flux is somewhat sensitive to the ambient density out to $\sim$ a day for a shell ejected 500 yr before the GRB. The light curve is also sensitive to the mass of the shell. The crossing time in a 0.1 M$_\odot$ shell is a day or less, after which there is a sharp drop in the flux. On the other hand, a 1.0 M$_\odot$ shell has crossing times of tens to hundreds of days, which creates a broad plateau in the light curve. The mass-loss rate of the progenitor and the time at which the jet reaches the shell are also manifest in the X-ray light curve (Fig.~\ref{figure5_modelComparison}b). These collectively constrain the mass of the shell and the properties of the wind before and after the ejection. It is difficult to identify exact GRB progenitors because many progenitors predict very similar mass ejections. The morphology and location of the dense shell at the time of the burst are determined by three factors: the delay time between shell ejection and the burst, the mass of the shell, and the wind mass-loss rate of the progenitor. In general, binary mass ejecta will be slower, but more massive, than stellar eruptions. The timing of a flare alone, however, does not provide a unique constraint (the position of the shell is a function of its velocity and the time between ejection and collapse). However, some progenitors predict specific structures. The helium-merger model, for example, predicts a massive shell very close to the exploding star. The first possible evidence of such a progenitor may be the recent ``Christmas'' burst~\citep{thoneEA11}. With detailed models, we may be able to place velocity and mass constraints on these shells. With such information, we will be able both to better understand massive star evolution and to constrain the progenitors of GRBs. In our models we have adopted some approximations and neglected some effects on our light curves. First, we neglect the spherical nature of the emitting region, which, if included, would undoubtedly result in a modification of our light curves \citep{fenimoreEA96}. We have investigated this effect, and the dominant result is that the light curve is broadened and becomes smoother, decreasing the imprint of the circumstellar shell.
For most of the jet's passage through the shell, however, the jet is non-relativistic, which will tend to diminish the effect. The shell's imprint on the light curve, though lessened, remains detectable, especially at frequencies where the imprint of the shell is largest when spherical emission is ignored. Reverse shocks might form at the interfaces between regions I and II, regions II and III, and regions IV and V, and could seriously affect the flux of the burst \citep[see e.g.][]{nakarGranot07} at those radii. Since magnetic fields in GRB jets are not well understood, it is also a simplification to assume that they are in equipartition. Finally, we present synchrotron light curves only. Inverse Compton scattering may become important after the jet emerges from the dense shell and could increase the flux at high frequencies and late times. That said, our semi-analytical method can broadly discriminate between progenitors of GRBs and compute approximate GRB light curves in general density fields. Consequently, it is applicable to many other GRB host environments besides uniform media and winds. Our modeling of the circumburst environment with the ZEUS-MP code shows a wide departure from the canonical power law density models, with sudden jumps in density of up to ten orders of magnitude that have a measurable effect on the light curve. Efforts are now underway to apply this method to model observational signatures and detection thresholds for Population III gamma-ray bursts in primordial H II regions at $z \sim$ 20 \citep{whalenEA04, kitayamaEA04, alvarezEA06, abelEA07, wiseAbel08b} and both Pop II and Pop III GRBs in primeval galaxies at $z \sim$ 10 \citep{wiseEA12} for potential successors to \textit{Swift}, such as the \textit{Joint Astrophysics Nascent Satellite} \citep[\textit{JANUS},][]{roming08,burrowsEA10} and \textit{Lobster}. Our method provides a means to obtain a qualitative understanding of gamma-ray burst light curves. For a more precise calculation, we must turn to computer simulations. Both special-relativistic magnetohydrodynamical simulations and particle-in-cell (PIC) calculations of GRB jets in circumburst media are now under development, and will soon reveal the light curves of these cosmological explosions in unprecedented detail. \acknowledgments RM was supported by LANL IGPP grant 10-150, and DW was supported by the Bruce and Astrid McWilliams Center for Cosmology at Carnegie Mellon University. Work at LANL was done under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. All ZEUS-MP simulations were performed with allocations from Institutional Computing (IC) on the Conejo cluster at LANL.
\nocite{alvarezEA06} \nocite{anninosEA97} \nocite{abelEA07} \nocite{barkovKomissarov11} \nocite{bergerEA00} \nocite{burrowsEA10} \nocite{castorEA75} \nocite{curranEA11} \nocite{daiLu02} \nocite{dalgarnoMcCray72} \nocite{falconeEA06} \nocite{fenimoreEA96} \nocite{filgasEA11} \nocite{frailEA00} \nocite{fryerEA99b} \nocite{fryerEA06} \nocite{fryerEA07b} \nocite{fryerWoosley98} \nocite{godetEA06} \nocite{huangEA99} \nocite{huangEA00} \nocite{huangEA12a} \nocite{kitayamaEA04} \nocite{meszaros02} \nocite{mimicaGiannios11} \nocite{moderskiEA00} \nocite{nakarGranot07} \nocite{nardiniEA11} \nocite{panaitescuKumar00} \nocite{panaitescuMeszaros00} \nocite{peer12} \nocite{priceEA02} \nocite{ramirez-RuizEA01} \nocite{ramirez-RuizEA05} \nocite{ricottiEA01} \nocite{ricottiEA02} \nocite{ricottiEA05} \nocite{roming08} \nocite{rybickiLightman79} \nocite{sariEA98} \nocite{shullvanSteenberg85} \nocite{thoneEA11} \nocite{weaverEA77} \nocite{whalenEA04} \nocite{whalenEA08b} \nocite{whalenNorman06} \nocite{whalenNorman08a} \nocite{whalenNorman08b} \nocite{wijersGalama99} \nocite{wiseAbel08b} \nocite{wiseEA12} \nocite{woosley93} \nocite{woosley11} \nocite{woosleyBloom06} \nocite{yostEA03} \nocite{zhangFryer01} \nocite{ziaeepourEA08} \bibliographystyle{apj}
\section{Goal} In order to investigate the properties of the thick disk and its interface with the thin disk we have compiled a catalogue of elemental abundances of O, Na, Mg, Al, Si, Ca, Ti, Ni, and Fe for 830 stars (Girard and Soubiran 2004). The classification of thin disk and thick disk stars has been performed on the basis of their (U,V,W) velocities. The two populations overlap greatly in metallicity, but at a given [Fe/H] the thick disk shows on average an enhancement of 0.07 dex in [$\alpha$/Fe] (Fig.1). In order to go further in this investigation we want to be able to measure [Fe/H] and [$\alpha$/Fe] from a large collection of spectra with an automatic procedure. \begin{figure}[h] \begin{center} \leavevmode \centerline{\epsfig{file=girardF2.ps,angle=90, width=8.0cm}} \end{center} \caption{[$\alpha$/Fe] vs [Fe/H].} \label{} \end{figure} \begin{figure}[h!] \begin{center} \leavevmode \centerline{\epsfig{file=girardF1.ps,angle=0, width=7.0cm}} \end{center} \caption{Comparison of [Fe/H] from the TGMET code with [Fe/H] from the catalogue of abundances. RMS = 0.11 dex.} \label{} \end{figure} \section{Tools and material} In this section we summarize the libraries and codes used for this investigation: \begin{itemize} \item The ELODIE library of 1962 spectra ($\lambda \lambda$390.6--681.1 nm, R=42000) of 1388 stars with measured Lick indices (Prugniel \& Soubiran 2004) and its intersection with the abundance catalogue: 449 spectra of 308 stars. \item The grid of synthetic spectra with three values of [$\alpha$/Fe] (Barbuy et al. 2003). \item The TGMET code: a minimum-distance algorithm to measure (Teff, log g, [Fe/H]) (Katz et al. 1998). \item The ETOILE code: a modified version of TGMET with determination of [$\alpha$/Fe] (D. Katz, priv. comm.). \\ \\ \end{itemize} \begin{figure}[h] \begin{center} \leavevmode \centerline{\epsfig{file=girardF4.ps,angle=90, width=8.0cm}} \end{center} \caption{[Fe/H] from ETOILE vs [Fe/H] from the catalogue. Modifying the input Teff of the code changes the recovered [Fe/H].} \label{} \end{figure} \begin{figure}[h!] \begin{center} \leavevmode \centerline{\epsfig{file=girardF3.ps,angle=90, width=8.0cm}} \end{center} \caption{[$\alpha$/Fe] from ETOILE vs [$\alpha$/Fe] from the catalogue.} \label{} \end{figure} TGMET relies on the least-squares comparison of an ELODIE spectrum of a target star to a library of ELODIE spectra of reference stars with well-determined atmospheric parameters.\\ ETOILE is a minimum-distance algorithm based on the perturbation method described in Cayrel et al. (1991). With this method, the reference library must sample the parameter space with regular steps. That is why synthetic spectra are used instead of empirical spectra.\\ We use the grid of synthetic spectra computed by Barbuy et al. (2003): $\lambda \lambda$460--560 nm, 4000 $\leq$ Teff $\leq$ 7000 K in steps of 250 K, 0.5 $\leq$ log g $\leq$ 5.0 in steps of 0.5, [Fe/H]: $-3.0$, $-2.5$, $-2.0$, $-1.5$, $-1.0$, $-0.5$, $-0.3$, $-0.2$, $-0.1$, 0.0 and +0.3, and [$\alpha$/Fe]: 0.0, +0.2 and +0.4. \\ A first step is to validate the grid, that is, to verify that computed and observed spectra with the same parameters match over the whole wavelength interval. \section{Results} A bootstrap method is used to test the performance of TGMET. Based on 449 spectra, TGMET is able to retrieve the atmospheric parameters with a typical accuracy of 134 K in Teff and 0.11 dex in [Fe/H] (Fig.2). The main limitation of TGMET is its empirical reference library, which does not sample the parameter space perfectly.
This limitation is overcome with the use of ETOILE and a grid of synthetic spectra. As a starting point, ETOILE uses the TGMET solution. Preliminary results from ETOILE suggest that the catalogue of abundances and the grid are not on the same temperature scale: metallicities are correctly recovered if a hotter temperature is given as input (Fig.3). [$\alpha$/Fe] is not yet correctly estimated (Fig.4). Possible causes are currently being investigated.
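For illustration, here is a minimal Python sketch of the minimum-distance matching that TGMET and ETOILE perform, assuming all spectra have been resampled onto a common wavelength grid; the library, noise level, and parameter values below are random placeholders, not ELODIE data:
\begin{verbatim}
import numpy as np

# Minimum-distance (least-squares) matching of a target spectrum against
# a reference library, in the spirit of TGMET/ETOILE.  Placeholder data.
rng = np.random.default_rng(0)
n_pix, n_ref = 500, 50

library = rng.normal(1.0, 0.05, size=(n_ref, n_pix))    # reference spectra
params = rng.uniform([4000.0, 0.5, -3.0],               # Teff, log g, [Fe/H]
                     [7000.0, 5.0, 0.3], size=(n_ref, 3))
target = library[17] + rng.normal(0.0, 0.01, size=n_pix)  # noisy copy of #17

# allow a free flux scaling a per reference: minimize ||target - a*ref||^2
a = (library @ target) / np.einsum('ij,ij->i', library, library)
dist = np.sum((target[None, :] - a[:, None] * library) ** 2, axis=1)

best = np.argmin(dist)
teff, logg, feh = params[best]
print(f"best match: reference {best}, "
      f"Teff = {teff:.0f} K, log g = {logg:.2f}, [Fe/H] = {feh:.2f}")
\end{verbatim}
In ETOILE the same distance minimization is performed against the regular synthetic grid, refining the TGMET starting solution with the perturbation method of Cayrel et al. (1991).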
\section{Introduction} Serezha Natanzon was a very original scientist. He taught us a lot about his beloved Hurwitz numbers and the related world of interesting special functions. One of these lessons was about the $Q$ Schur functions, which, in his opinion, had to be applied to the counting of holomorphic coverings with spin structures (spin Hurwitz numbers), and they really were \cite{MMNspin}. Once one knows about the $Q$ Schur functions, it becomes immediately clear that they are applicable to the Kontsevich model \cite{Kon}, because they have just the needed properties: they do not depend on even times, are labeled by special Young diagrams, satisfy the BKP equations, and behave nicely under the action of the Virasoro algebra \cite{VirJ}. Since the $Q$ Schur functions are related to characters (of the Sergeev group \cite{Serg,Sergrev}), it comes as no surprise that they are exactly what is necessary to formulate the {\it superintegrability} property \cite{super,MMcharchar} $<character>\,\sim character$ of the Kontsevich model \cite{DIZ,MMkon}; moreover, as usual in this case, the coefficients on the r.h.s. are made from the same $Q$ Schur functions at special delta-loci. At least in this case, this follows from magnificent factorization formulas for the $Q$ Schur characters \cite{Alex,MMNO}. Knowing all this, one is tempted to move towards the generalized Kontsevich model \cite{GKM,UFN3} (GKM), with a monomial potential $X^{n+1}$ to begin with. The first obvious question is what the relevant $Q^{(n)}$ functions are, which were just Schur's $Q=Q^{(2)}$ for $n=2$. In this paper, we suggest as a plausible candidate the Hall-Littlewood polynomials at the $n$-th root of unity, which is a straightforward generalization of {\it one of} the many definitions at $n=2$. We explain that at least some of the needed properties are captured by this suggestion. But some are not, and Serezha is no longer here to teach us how to resolve these problems. They remain for future work. \section{Generalized $Q$ Schur polynomials} In this section, we briefly list the main properties of a generalization of the $Q$ Schur polynomials from $n=2$ to $n>2$. To this end, we need a couple of new quantities: for the Young diagram $\Delta=\{\delta_1\geq \delta_2\geq\ldots\geq\delta_{l_\Delta}>0\} = \{1^{m_1},2^{m_2},\ldots\}$, we define the standard symmetry factor \begin{eqnarray} z_\Delta:=\prod_{a=1}^\infty m_a! \cdot a^{m_a} \end{eqnarray} and two less conventional $n$-dependent quantities \begin{eqnarray} L_\Delta:=\prod_{a=1}^\infty \prod_{k=1}^{m_a}\beta_k \end{eqnarray} and \begin{eqnarray} \beta_\Delta:=\prod_{i=1}^{l_\Delta}\beta_{\delta_i} \end{eqnarray} where \begin{eqnarray} \beta_k^{(n)}:=1-\omega^k_n = 1 - e^{\frac{2\pi ik}{n}} \end{eqnarray} Thus $L_\Delta$ vanishes when the diagram $\Delta$ has $n$ (or more) lines of the same length. For instance, at $n=2$ one has $\beta_k^{(2)}=1-(-1)^k$, which equals 2 for odd $k$ and vanishes for even $k$, so that $L_\Delta\neq 0$ exactly for strict partitions, as familiar from the $Q$ Schur case. Roots of unity of degree dictated by the power of the monomial potential in the GKM are a new twist in the QFT-Galois relation, with far-reaching applications. \bigskip The properties of our new functions $Q_R^{(n)}\{p\}$ are: \begin{itemize} \item They are equal to special values of the Hall-Littlewood polynomials \cite{Mac} at $t=\omega_{n}$, the primitive $n$-th root of unity: \begin{eqnarray} Q^{(n)}_R:= \sqrt{L_R}\cdot\hbox{Mac}_R(q=0, t = e^{2i\pi/n}) \end{eqnarray} where $\hbox{Mac}_R$ denotes the Macdonald polynomial. Because of the factor $\sqrt{L_R}$, $Q^{(n)}_R$ is non-zero only for Young diagrams $R$ which have no more than $n-1$ lines of the same length.
We denote this set of diagrams by $S_nP$, and its subset of diagrams of size $|R|=m$, by $S_nP(m)$. \item Hereafter we deal with the symmetric functions $Q^{(n)}_R$ of variables $x_i$ as (graded) polynomials in the power sums $p_k:=\sum_ix_i^k$, which we call time-variables (they are proportional to the times of the integrable hierarchy associated with the GKM). The polynomials $Q^{(n)}_R\{p\}$ are independent of the time-variables $p_{kn}$. This property is preserved by an arbitrary rescaling of the time-variables, but we do not apply this rescaling, and use the choice associated with the Cauchy formula in the form (\ref{QCauchy}) below. \item They form a closed algebra, \begin{eqnarray} Q^{(n)}_{R_1}\{p\}\cdot Q^{(n)}_{R_2}\{p\} = \!\!\!\sum_{{R \in R_1\otimes R_2}\atop{R\in {\tiny S_nP}}}\!\!\! N_{R_1,R_2}^R Q^{(n)}_R\{p\} \end{eqnarray} i.e. the Littlewood-Richardson coefficients of the Macdonald polynomials vanish when $q=0$, $t=\omega_{n}$, $R_1,R_2\in S_nP$ and $R\notin S_nP$. Like in the case of $n=2$, the $\hbox{Mac}_R(q=0,t=\omega_{n})$ themselves do not vanish for $R\notin S_nP$, and then they can also depend on the times $p_{nk}$; thus the set of $Q^{(n)}_R\{p\}$ is a {\it sub-set} of that of the Hall-Littlewood polynomials, and it is a non-trivial fact that it is a {\it sub-ring}. \item The Frobenius formula for the generalized $Q$ Schur polynomials is \begin{eqnarray}\label{FQ} Q_R^{(n)}\{p_k\}=\sum_{\Delta\in O_nP} {\Psi^{(n)}_R(\Delta)\over z_\Delta}p_\Delta \end{eqnarray} On the l.h.s., $R\in S_nP$. The set $O_nP$ on the r.h.s. consists of all diagrams with line lengths not divisible by $n$, and this formula reflects a remarkable one-to-one correspondence between $O_nP$ and $S_nP$: these sets have the same sizes, and the map is non-degenerate, as follows from the orthogonality relations: \begin{eqnarray}\label{QOR} \sum_{\Delta\in {\footnotesize O_nP}}{\Psi_R^{(n)}(\Delta)\Psi_{R'}^{(n)}(\Delta)\over \beta_\Delta z_\Delta}=\delta_{RR'}, \ \ \ \ \ \ \sum_{R\in {\footnotesize S_nP}}{\Psi_R^{(n)}(\Delta)\Psi_{R}^{(n)}(\Delta') }= \beta_\Delta z_\Delta \delta_{\Delta\Delta'} \end{eqnarray} Hence, one can construct an inverse map \begin{eqnarray}\label{FQi} p_\Delta =\sum_{R\in S_nP}{\Psi^{(n)}_R(\Delta)Q_R^{(n)}\{p_k\}\over \beta_\Delta} \end{eqnarray} \item In the scalar product \begin{eqnarray} \Big< p_{k} \Big| p_{l}\Big> = {k\over\beta_k} \cdot \delta_{k,l} \end{eqnarray} the $Q$-functions are orthogonal: \begin{eqnarray} \Big< Q_R^{(n)}\Big| Q_{R'}^{(n)} \Big> = ||Q_R^{(n)}||^2\cdot\delta_{R,R'} \end{eqnarray} with \begin{eqnarray} || Q_R^{(n)}||^2 = 1 \end{eqnarray} \item As usual, one can introduce the skew $Q$ Schur functions $Q^{(n)}_{R/P}$ defined by \begin{eqnarray} Q^{(n)}_{R}\{p+p'\}=\sum_{P\in {\footnotesize S_nP}}Q^{(n)}_{R/P}\{p\}Q^{(n)}_P\{p'\} \end{eqnarray} They are given by \begin{eqnarray} Q^{(n)}_{R/P}\{p\}=\sum_{S\in {\footnotesize S_nP}}{\cal N}_{PS}^{R}Q^{(n)}_S\{p\} \end{eqnarray} \item The $Q$ Schur polynomials satisfy the Cauchy formula, \begin{eqnarray}\label{QCauchy} \sum_{R\in {\footnotesize S_nP}} Q^{(n)}_{R}\{p\}\cdot Q^{(n)}_{R}\{{\rm Tr}\, X^k\} = \exp\left(\sum_{k=1} \frac{\beta_k^{(n)}\, p_{k}\, {\rm Tr}\, X^{k}}{k}\right) \end{eqnarray} Since $\beta_k^{(n)}$ vanishes whenever $k$ is divisible by $n$, the r.h.s. is independent of all $p_{kn}$. We write the second set of times in Miwa variables, $p_k'={\rm Tr}\, X^k$, because we need this form of the Cauchy formula in the consideration of correlators below.
\item The Virasoro and $W$ algebras act rather simply on the generalized $Q$ Schur polynomials, see sec.\ref{VW}. \item The $Q$ polynomials themselves are {\it not} $\tau$-functions of the KP hierarchy or of its reductions (like the KdV and Boussinesq ones). Instead, for $n=2$, they satisfy the BKP hierarchy \cite{You,O2003,MMNspin}, which does not yet have any direct counterpart for $n>2$. \item The main difficulty at this stage is that there is as yet no formula for $Q_R^{(n)}$ per se, without reference to the Hall-Littlewood and Macdonald polynomials. Indeed, the ordinary Schur polynomials at $n=1$ have a determinant representation \cite{Mac}, or can be realized as an average over charged fermions (see a review in \cite{JM}); the $Q$ Schur polynomials at $n=2$ have a Pfaffian representation instead of the determinant one (see, e.g., \cite[Eq.(74)]{MMNspin}), or can be realized as an average over neutral fermions \cite{DJKM,JM,You,O2003,MMNO}; what happens for $n>2$ is as yet unclear. One could expect some expressions in terms of parafermions, which would generalize the reduction from charged to neutral fermions in the $n=2$ case. \end{itemize} \section{Monomial GKM \label{GKMdef}} \subsection{Properties of GKM \cite{GKM,UFN3}} The monomial generalized Kontsevich model is defined by the $N\times N$ Hermitian matrix integral \cite{GKM} \begin{eqnarray}\label{GKMint} Z^{(n)}(L) :={\cal N}(L)\cdot \int \exp\left(- {{\rm Tr}\, X^{n+1}\over n+1}+{\rm Tr}\, L^{n} X\right) dX \end{eqnarray} $Z^{(n)}(L)$ depends only on the eigenvalues of the background matrix field $L$, and, with a proper choice of the normalization factor ${\cal N}(L)$, it can be treated as a formal series either in positive or in negative powers of $L$ \cite{GKMU}. In fact, $Z^{(n)}(L)$ is a {\it symmetric} function of the eigenvalues $\lambda_i^{\pm 1}$ of the external matrix $L$, and, hence, can be considered as a function of the power sums, or ``time-variables'', $p_k^{\pm} := {\rm tr}\, L^{\pm k}$. These two cases require proper (different) choices of the normalization factors and are referred to as the {\it character} and {\it Kontsevich} phases \cite{GKMU}. In this paper, we are interested in the more sophisticated Kontsevich phase, and in what follows we omit the superscript ``$-$'': $p_k:=p^-_k$. The potential in the exponent has an extremum at $X=L$, and, in the Kontsevich phase, one expands around it in inverse powers of $L$. In this phase, one has to choose the normalization factor \begin{eqnarray} {\cal N}(L):= {\displaystyle{\exp\left(-{n\over n+1}{\rm Tr}\, L^{n+1}\right)}\over \displaystyle{\int \exp\left(-\frac{1}{2}\sum_{a+b=n-1} {\rm Tr}\, L^a X L^b X\right) dX}} \end{eqnarray} This ensures that $Z^{(n)}(L)$, which depends on the eigenvalues $\lambda_i$ of the matrix $L$, can be understood as a formal power series in $\lambda_i^{-1}$, and, in fact, is a power series in $p_k:={\rm Tr}\, L^{-k}$ \cite{GKM}. It possesses more advanced definitions as a $D$-module and/or a peculiarly reduced KP $\tau$-function. Namely, \begin{itemize} \item{ For a given $n$, the partition function $Z^{(n)}(L)$ is actually independent of $p_{kn}$; this explains the choice of notation for the potential: what matters is usually not the potential $X^{n+1}$ but its derivative $X^n$. } \item{ $Z^{(n)}(L)$ as a (symmetric) function of $\lambda_i$ is a $\tau$-function of the KP hierarchy in Miwa variables, i.e. satisfies the bilinear difference Hirota equations and can be expressed as a determinant.
$Z^{(n)}(L)$ as a function of the power sums $p_k/k$ is a $\tau$-function of the KP hierarchy in the ordinary higher time variables (hence the name ``time-variables'' for $p_k$), and satisfies the bilinear differential Hirota equations. Moreover, for a given $n$, it is actually an $n$-{\it reduction} of the KP hierarchy, say, the KdV hierarchy for $n=2$, or the Boussinesq hierarchy for $n=3$. } \item{ $Z^{(n)}(L)$ satisfies the Ward identities \cite{MMM91,GN}. When rewritten in terms of $p^-_k$, these constraints form Borel subalgebras of the Virasoro and $W$-algebras, $\hat W^{(p)}_mZ (L)= 0$ with $2\leq p\leq n$, $m\ge 1-p$ \cite{GKM,Mikh}. In the character phase, the Ward identities in terms of $p^+_k$ are rather the $\tilde W$-constraints \cite{GKMU}. } \item{The lowest of these constraints, $\hat L_{-1}Z^{(n)}(L)=\hat W^{(2)}_{-1}Z^{(n)}(L)=0$, called the string equation, along with the integrable hierarchy equations generates the whole set of the Ward identities. } \end{itemize} We see that this list has some parallels with the list of properties of the $Q$-functions in the previous section. Thus it comes as no surprise that \begin{itemize} \item{$Z^{(n)}(L)$ in the Kontsevich phase can be expanded in the functions $Q^{(n)}\{p\}$.} \end{itemize} This {\it character expansion} is the subject of the present paper. We will see that it is not yet as powerful as in the case of $n=2$ \cite{MMkon}; still, the generalization to $n>2$ clearly exists. \subsection{The GKM propagator} One can calculate the GKM integral (\ref{GKMint}) perturbatively. To this end, one has to expand around the extremum of the potential at $X=L$, i.e. to shift $X=L+Y$, and deal with the integral \begin{eqnarray} Z^{(n)}(L)= \nonumber\\ =\int \exp \left(-{\rm Tr}\, \frac{(L+Y)^{n+1} - L^{n+1}}{n+1} +L^{n}Y +\frac{1}{2}\sum_{a+b=n-1} {\rm Tr}\, L^a Y L^b Y\right) \exp\left(-\frac{1}{2}\sum_{a+b=n-1} {\rm Tr}\, L^a Y L^b Y\right) dY \label{GKMkph} \end{eqnarray} expanding the first exponential and evaluating the resulting Gaussian integral. The measure in this integral is defined so that $\left<1\right>=1$. Thus, we define the correlation function by the Gaussian integral \begin{eqnarray} \left< \ldots \right> \ := \int \ldots \exp\left(-\frac{1}{2}\sum_{a+b=n-1} {\rm Tr}\, L^a Y L^b Y\right) dY \end{eqnarray} and first evaluate the propagator. In terms of the eigenvalues $\lambda_i$ of $L$, the propagator is \begin{eqnarray} \left<Y_{ij}Y_{kl}\right>_n=\frac{ \delta_{il}\delta_{jk}} {\!\!\!\!\displaystyle{\sum_{a+b=n-1}} \lambda_i^a \lambda_j^b\ } \label{prop} \end{eqnarray} When this does not lead to confusion, in what follows we omit the index $n$ in the notation of the average, but one should remember that the propagator depends on $n$ and has grading level, i.e. the power in $L^{-1}$, equal to $n-1$. \subsection{Correlation functions} Correlation functions with the propagator (\ref{prop}) have complicated denominators and often cannot be expressed in terms of the time-variables \begin{eqnarray} p_k={\rm tr}\, L^{-k} = \sum_{i=1}^N \lambda_i^{-k} \end{eqnarray} From this perspective, it looks like a miracle that there are many exceptions: plenty of {\it admissible} correlators exist, i.e. those expressible through the time-variables. In particular, as we already pointed out, an important result from the theory of GKM \cite{GKM} is that $Z^{(n)}(L)$ in the Kontsevich phase actually {\it is} a power series in the time variables. Thus we understand that at least the correlators which come from the perturbative expansion of the GKM are admissible.
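For orientation: in the cubic Kontsevich case $n=2$, the sum in the denominator of (\ref{prop}) contains just the two terms with $a+b=1$, and the propagator takes the familiar form \begin{eqnarray} \left<Y_{ij}Y_{kl}\right>_2=\frac{\delta_{il}\delta_{jk}}{\lambda_i+\lambda_j} \end{eqnarray} Each propagator thus carries grading level $n-1=1$ in $L^{-1}$, and the averages of powers of ${\rm Tr}\, Y^3$ built from it turn out to be polynomials in the $p_k$, as the explicit examples below illustrate.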
In fact, expanding the exponential in (\ref{GKMkph}), one obtains rather sophisticated averages \begin{eqnarray} \left< \left({\rm Tr}\, \frac{(L+Y)^{n+1} - L^{n+1}}{n+1} - L^{n}Y -\frac{1}{2}\sum_{a+b=n-1} {\rm Tr}\, L^a Y L^b Y\right)^m \right>_{\! n} \end{eqnarray} and they should depend on $\lambda_i$ only through $p_k$. For the ordinary Kontsevich model with $n+1=3$, these correlators are just $\left< \left({\rm Tr}\, Y^3\right)^{2m} \right>$, but already in the quartic case, $n+1=4$, they contain $L$: powers of $ {\rm Tr}\, Y^4$ should be combined with those of ${\rm Tr}\, LY^3$. These correlators do not exhaust all the admissible correlators, but, as a first step, we concentrate on this special set in this paper. \subsection{Character expansion} Now we are going to study the perturbative expansion of (\ref{GKMkph}) as a function of the time-variables. A natural full basis in the set of such functions is provided by {\it characters}, for instance, by the Schur functions $\chi_R\{p\}$, which form a set labeled by the Young diagrams with a natural grading by the size of these diagrams $|R|$: \begin{eqnarray} Z^{(n)}(L) = \sum_R C_R\chi_R\{p\} \end{eqnarray} The question is what the coefficients $C_R$ are. Since $Z^{(n)}(L)$ is a KP $\tau$-function, they satisfy the Pl\"ucker relations. But actually, from the theory of GKM \cite{GKM} we know more: for a given $n$, it is {\it independent} of all $p_{kn}$ and is a $\tau$-function of the (appropriately reduced) KP hierarchy. For example, at $n+1=3$, it is a $\tau$-function of the KdV hierarchy, which depends only on the odd time-variables $p_{2k+1}$. This means that $\chi_R\{p\}$ is actually not the most adequate basis, because this type of reduction looks complicated in it. This is clear already from the case of $n=2$, where the coefficients $C_R$ are quite involved \cite{Zh,BY}. From what we already know from sec.2, it is clear that a much better basis for a given $n$ is formed by the $Q$-functions $Q^{(n)}_R$ with $R\in S_nP$. Thus, more precisely, our interest is in \begin{eqnarray} \boxed{ Z^{(n)}(L) = \sum_{R \in S_nP} C^{(n)}_R \,Q_R^{(n)}\{p\} } \label{GKMQexpan} \end{eqnarray} As we demonstrated in \cite{MMkon}, the coefficients $C_R^{(2)}$ in this basis are very simple and natural in the case of $n=2$, in contrast with the coefficients of the Schur expansion. \paragraph{Remark.} It could look appealing to extract $Q^{(n)}$ at a special delta-locus, $Q_R^{(n)}\{\delta_{k,n+1}\}$, from the coefficients $C_R^{(n)}$: \begin{eqnarray} Z^{(n)}(L) \stackrel{?}{=} \sum_{R \in S_nP} c^{(n)}_R\cdot Q_R^{(n)}\{\delta_{k,n+1}\}\,Q_R^{(n)}\{p\} \label{Znc} \end{eqnarray} as we did in \cite{MMkon} for $n=2$. This may seem natural because, applying the Cauchy identity to the original integral (\ref{GKMint}), one can conclude that \begin{eqnarray} e^{-\frac{1}{n+1}\,{\rm Tr}\, X^{n+1}} = \exp\left(-\sum_k \frac{1}{k}\,{\rm Tr}\, X^k \cdot \delta_{k,n+1}\right) = \sum_R (-1)^{|R|}\,Q^{(n)}_{R^\vee}\{{\rm Tr}\, X^k\} \cdot Q^{(n)}_R\{\delta_{k,n+1}\} \end{eqnarray} Of course, this is far from being a reliable argument, and it is not a big surprise that things are not so simple. As we will see shortly, the expansion (\ref{Znc}) is actually not possible for $n>2$. \subsection{A reminder from \cite{MMkon}: $n=2$} For $n+1=3$ the relevant averages involve only $L$-independent operators: $\sum_{m=0}^\infty \frac{1}{(2m)!\cdot 3^{2m}}\left< ({\rm Tr}\, Y^3)^{2m} \right>$.
For example, for the first two terms \begin{eqnarray} \frac{1}{2!\cdot 3^2}\left< ({\rm Tr}\, Y^3)^{2} \right> = \frac{1}{48}\cdot\left(p_3+4p_1^3\right) = \frac{1}{48}\cdot\left(Q^{(2)}_{[2,1]}\{p_k\}-\frac{5\sqrt{2}}{2}\,Q^{(2)}_{[3]}\{p_k\} \right) =\nonumber \\ = \frac{1}{32}\cdot\left(5 \,Q^{(2)}_{[3]}\{\delta_{k,3}\}Q^{(2)}_{[3]}\{p_k\} - Q^{(2)}_{[2,1]}\{\delta_{k,3}\}Q^{(2)}_{[2,1]}\{p_k\}\right) \end{eqnarray} and \begin{eqnarray} \frac{1}{4!\cdot 3^4}\left< ({\rm Tr}\, Y^3)^{4} \right> = \frac{1}{9\cdot 512}\cdot\left(144p_5p_1 + 25p_3^2+200p_3p_1^3+16p_1^6\right) =\ \ \ \ \ \nonumber \\ =- \frac{5}{9\cdot 512}\cdot\left( \frac{77\sqrt{2}}{2}\,Q^{(2)}_{[6]}\{p_k\} - 7\,Q^{(2)}_{[5,1]}\{p_k\} + 7\,Q^{(2)}_{[4,2]}\{p_k\} + \sqrt{2}\,Q^{(2)}_{[1,2,3]}\{p_k\} \right) = \end{eqnarray} \vspace{-0.4cm} \begin{eqnarray} = \frac{5}{1024}\cdot\left( 77 \,Q^{(2)}_{[6]}\{\delta_{k,3}\}Q^{(2)}_{[6]}\{p_k\} - 7\,Q^{(2)}_{[5,1]}\{\delta_{k,3}\}Q^{(2)}_{[5,1]}\{p_k\} - 7\,Q^{(2)}_{[4,2]}\{\delta_{k,3}\}Q^{(2)}_{[4,2]}\{p_k\} - Q^{(2)}_{[1,2,3]}\{\delta_{k,3}\}Q^{(2)}_{[1,2,3]}\{p_k\} \right) \nonumber \end{eqnarray} We can note that the coefficient in front of $Q^{(2)}_{[1,2,3]}\{\delta_{k,3}\}Q^{(2)}_{[1,2,3]}$ is exactly the product of those in front of $Q^{(2)}_{[3]}\{\delta_{k,3}\}Q^{(2)}_{[3]}\{p_k\}$ and $Q^{(2)}_{[2,1]}\{\delta_{k,3}\}Q^{(2)}_{[2,1]}\{p_k\}$, i.e. $\,-\frac{ 5}{1024} = \frac{ 5}{32} \cdot \left(-\frac{1}{32}\right)$. This is a manifestation of the general property \cite{Alex} that the coefficient in front of $Q^{(2)}_{R}\{\delta_{k,3}\}Q^{(2)}_{R}\{p_k\}$ in the character expansion of the cubic Kontsevich partition function $Z^{(2)}(L)$ factorizes: \begin{eqnarray} {\rm coeff}_{Q^{(2)}_{R}\{\delta_{k,3}\}Q^{(2)}_{R}\{p_k\}}\Big(Z^{(2)}(L)\Big) \ =\ \prod_{i=1}^{l_R} f^{(2)}(R_i) \label{n2fact} \end{eqnarray} It is actually equal to \cite{MMkon} \begin{eqnarray} {\rm coeff}_{Q^{(2)}_{R}\{\delta_{k,3}\}Q^{(2)}_{R}\{p_k\}}\Big(Z^{(2)}(L)\Big) \ =\ {1\over 2^{5|R|/3}}\cdot\frac{Q_{2R}^{(2)}\{\delta_{k,3}\}}{Q_{R}^{(2)}\{\delta_{k,3}\}} \frac{Q_{R}^{(2)}\{\delta_{k,1}\}}{Q_{2R}^{(2)}\{\delta_{k,1}\}} \end{eqnarray} which has exactly such a factorization property due to the elegant factorization identities of \cite{MMNO}, see there for details. Here $2R$ means the Young diagram obtained from $R$ by doubling its line lengths. Thus, one finally obtains \begin{eqnarray} Z^{(2)}(L)=\sum_{R\in S_2P}{1\over 2^{5|R|/3}}\cdot {Q_R\{\delta_{k,1}\}\over Q_{2R}\{\delta_{k,1}\}}\cdot Q_R\{p_k\}Q_{2R}\{\delta_{k,3}\} \end{eqnarray} \subsection{The basic example: $n=3$} After recalling the already known situation at $n=2$, we now make the first step into {\it terra incognita} at $n>2$. For $n+1=4$, \begin{eqnarray} \left< {\rm Tr}\, Y^4\right> = \left<Y_{ij} Y_{jk} Y_{kl} Y_{li}\right> = 2P_{ij,jk}P_{kl,li} + P_{ij,kl}P_{jk,li} = \nonumber \\ = \frac{2\delta_{ik}}{(\lambda^2_i+\lambda_i\lambda_j+\lambda_j^2)(\lambda_k^2+\lambda_k\lambda_l+\lambda_l^2)} + \frac{\delta_{i,j,k,l}}{(\lambda^2_i+\lambda_i\lambda_j+\lambda_j^2)(\lambda_j^2+\lambda_j\lambda_k+\lambda_k^2)} = \nonumber \\ = \sum_{ijl}\frac{2 }{(\lambda^2_i+\lambda_i\lambda_j+\lambda_j^2)(\lambda_i^2+\lambda_i\lambda_l+\lambda_l^2)} + \sum_i \frac{1}{9\lambda_i^4} \end{eqnarray} This correlator is {\it not} expressed through the time variables. However, there is another term of the same {\it grading}, i.e.
of the same degree in $L^{-1}$, with two extra powers of $L$ in the operator compensated by those in the extra propagator: \ $\left<\Big({\rm Tr}\, L Y^3\Big)^2\right>$. When it is added to $\left<{\rm Tr}\, Y^4\right>$ with an appropriate coefficient, the sum gets expressed through the time variables: \begin{eqnarray} -\frac{1}{4}\left(\Big< {\rm Tr}\, Y^4\Big> + 2\left<\Big({\rm Tr}\, L Y^3\Big)^2\right>\right) \ =\ {p_4+6p_1^2p_2 \over 36} = \label{n3first} \end{eqnarray} $$ = {1\over 36}\left(\beta_1^{-1/2}\Big(7Q^{(3)}_{[4]}-\sqrt{3}Q^{(3)}_{[2,1,1]}\Big) -(2\sqrt{3}+i) \Big(Q^{(3)}_{[2,2]} -iQ^{(3)}_{[3,1]}\Big)\right) ={\beta_2\over 27} \sum_{\stackrel{R\in S_3P}{|R|=4\ }}c_R\cdot Q^{(3)}_R\{\delta_{k,4}\} Q^{(3)}_{R}\{p_k\} $$ \vspace{-.5cm} \noindent with \begin{eqnarray} c_{[4]}=7,\ \ \ \ \ \ c_{[2,2]}=c_{[3,1]}=1-2i\sqrt{3},\ \ \ \ \ \ c_{[2,1,1]}=-1 \label{n3firstC} \end{eqnarray} Note that the diagram $[1,1,1,1]$, which does not belong to $S_3P$, is indeed missing on the r.h.s. Similarly, in the next order \begin{eqnarray} \frac{1}{2!\cdot 4^2}\,\left( \left< \Big({\rm Tr}\, Y^4\Big)^2\right>-4\left<\Big({\rm Tr}\, L Y^3\Big)^2\cdot{\rm Tr}\, Y^4\right>+{4\over 3}\left<\Big({\rm Tr}\, L Y^3\Big)^4\right>\right) =\nonumber\\ ={1\over 32\cdot 81}\cdot\left(96p_7p_1+96p_5p_1^3+13p_4^2+156p_4p_2p_1^2-12p_2^4+ 36p_2^2p_1^4\right) \label{n3second} \end{eqnarray} {\footnotesize $$ \hspace{-.7cm} ={1\over 32\cdot 81}\left({5\cdot 7\cdot 11\over \sqrt{\beta_1}}Q^{(3)}_{[8]}+ 5\cdot 7(1+4i\sqrt{3})Q^{(3)}_{[7,1]}-{5\cdot 7\over 2}(3\sqrt{3}i+13)Q^{(3)}_{[6,2]}- 5\beta_2\sqrt{3\beta_1}Q^{(3)}_{[6,1,1]}+{5\cdot 7\over 2}(11-5\sqrt{3}i)Q^{(3)}_{[5,3]}+3\cdot 5\sqrt{\beta_1}Q^{(3)}_{[5,2,1]}+ \right. $$ $$ \hspace{-.7cm} \left.+{5\cdot 7\cdot 11\over 3\sqrt{3}}\beta_2^2Q^{(3)}_{[4,4]}+{5\sqrt{\beta_1}\over 2\sqrt{3}} (23i+11\sqrt{3})\Big(Q^{(3)}_{[4,3,1]}-iQ^{(3)}_{[4,2,2]}\Big)-{2\cdot 7\over\sqrt{3}}\beta_2Q^{(3)}_{[4,2,1,1]}-{5\over\sqrt{3}}\beta_2^2\sqrt{\beta_1}Q^{(3)}_{[3,3,2]} -{2\cdot 13\over 3}\beta_1^2\Big(Q^{(3)}_{[3,3,1,1]}+iQ^{(3)}_{[3,2,2,1]}\Big) \right) = $$ } \begin{eqnarray} =\left({\beta_2\over 27}\right)^2 \sum_{\stackrel{R\in S_3P}{|R|=8\ }}c_R\cdot Q^{(3)}_R\{\delta_{k,4}\} Q^{(3)}_{R}\{p_k\} \end{eqnarray} \vspace{-0.7cm} \noindent with \begin{eqnarray} c_{[8]}=c_{[4,4]}=5\cdot 7\cdot 11 ,\ \ \ \ \ \ c_{[7,1]}=c_{[6,2]}=c_{[5,3]}=-5\cdot 7\cdot(1+4i\sqrt{3}),\ \ \ \ \ \ c_{[6,1,1]}=c_{[5,2,1]}=c_{[3,3,2]}=-15, \nonumber\\ \ \ \ \ \ \ c_{[4,3,1]}=c_{[4,2,2]}=-\frac{5(37-8i\sqrt{3})}{7},\ \ \ \ \ \ c_{[4,2,1,1]}=-7, \ \ \ \ \ \ c_{[3,3,1,1]}=c_{[3,2,2,1]}=13 \ \ \ \ \ \ \ \ \ \ \ \end{eqnarray} \bigskip At $n=3$, one should not expect relations like (\ref{n2fact}), already because they do not respect the selection rule for partitions from $S_3P$: say, $[2,2]\in S_3P$, but $[2,2,2,2]\notin S_3P$. Still, one can observe some interesting relations, which resemble the corollaries of (\ref{n2fact}): \begin{eqnarray} c_{[3,3,1,1]}=c_{[3,2,2,1]}= |c_{[2,2]}|^2= |c_{[3,1]}|^2 \nonumber \\ c_{[4,2,1,1]} = c_{[4]}\cdot c_{[2,1,1]} \nonumber \\ \frac{c_{[4,3,1]}}{c_{[4]}c_{[3,1]}} = \frac{c_{[4,2,2]}}{c_{[4]}c_{[2,2]}} \end{eqnarray} however, say, \begin{eqnarray} c_{[4,4]}\neq c_{[4]}^2 \end{eqnarray} We see from the above formulas that extracting $Q^{(3)}\{\delta_{k,4}\}$ from the coefficients simplifies them a little, but the remaining pieces do not have any nice enough properties, e.g. do not factorize in the spirit of \cite{Alex,MMNO}, as they did for $n=2$.
Worse than that, already in the next order, some of the $Q^{(3)}_R\{\delta_{k,4}\}=0$, though the corresponding contribution from the diagram $R$ is non-vanishing: hence such an extraction is simply impossible in the general situation. The first diagrams with this property for $n=3$ appear at level twelve: $[5,5,1,1]$, $[5,3,2,1,1]$ and $[6,2,2,1,1]$. In more detail, \begin{eqnarray} \frac{1}{3!\cdot 4^3}\,\left( \left< \Big({\rm Tr}\, Y^4\Big)^3\right>-6\left<\Big({\rm Tr}\, L Y^3\Big)^2\cdot\Big({\rm Tr}\, Y^4\Big)^2\right> +4\left<\Big({\rm Tr}\, L Y^3\Big)^4\cdot{\rm Tr}\, Y^4\right> - {8\over 15}\left<\Big({\rm Tr}\, L Y^3\Big)^6\right>\right) =\nonumber\\ = -\frac{5p_8p_2^2}{324}+\frac{5p_8p_1^4}{324}+\frac{7p_{10}p_1^2}{162} +\frac{25p_7p_4p_1}{972} +\frac{5p_7p_2p_1^3}{162} -\frac{p_5^2p_2}{162} +\frac{25p_5p_4p_1^3}{972}-\frac{p_5p_2^3p_1}{81} +\frac{p_5p_2p_1^5}{162} +\frac{325p_4^3}{279936} - \nonumber\\ -\frac{25 p_4 p_2^4}{7776} +\frac{25p_4p_2^2p_1^4}{2592} -\frac{p_2^5p_1^2}{1296}+\frac{325p_4^2p_2p_1^2}{15552} +\frac{p_2^3p_1^6}{1296} \ \ = \ \ \left({\beta_2\over 27}\right)^3\!\!\!\! \sum_{\stackrel{R\in S_3P}{|R|=12\ }} C_R\cdot Q^{(3)}_{R}\{p_k\} \label{n3third} \end{eqnarray} Wherever possible, we present the much simpler and more ``symmetric'' expressions for $c_R$, defined from $C_R = c_R\cdot Q^{(3)}_R\{\delta_{k,4}\}$: \begin{eqnarray} c_{[12]}=c_{[8,4]} = 5\cdot 7\cdot 11\cdot 103, \ \ \ \ \ \ \ \ \ \ \ \nonumber\\ c_{[11,1]} = c_{[10,2]} = c_{[9,3]} = -5\cdot 7\cdot 11\cdot(59+54i\sqrt{3}), \ \ \ \ \ \ \ c_{[7,5]} = c_{[6,6]} = 5(2423-36\cdot 16 i\sqrt{3}), \ \ \ \ \nonumber \\ c_{[10, 1, 1]}=c_{[9, 2, 1]}= c_{[3,3, 2, 2, 1,1]}=175, \ \ \ \ \ \ c_{[8,3,1]}=c_{[8,2,2]}=175(3i\sqrt{3}-8), \ \ \ \ c_{[7,4,1]}=35(15i\sqrt{3}-58), \ \ \ \ \nonumber \\ c_{[7,3,2]}=-5(187+6i\sqrt{3}), \ \ \ \ c_{[6, 5, 1]}=-5(355-78i\sqrt{3}), \ \ \ \ c_{[6, 4, 2]}=-5\cdot\frac{2233-582i\sqrt{3}}{19} \ \ \ \ \nonumber\\ c_{[6,3,3]}=-5\cdot\frac{1345+354i\sqrt{3}}{7}, \ \ \ \ c_{[5, 5,2]}=25\cdot\frac{25 -12i\sqrt{3}}{7}, \ \ \ \ c_{[5, 4, 3]}=-25\cdot\frac{23+132i\sqrt{3}}{7}, \ \ \ \ \nonumber\\ c_{[8,2,1,1]} = c_{[4,4,2,1,1]}=-385, \ \ \ \ c_{[7,3,1,1]}=c_{[7,2,2,1]}=35(23-6i\sqrt{3}), \ \ \ \ c_{[6,4,1,1]}=c_{[5,4,2,1]}=5(333-10i\sqrt{3}), \ \ \ \ \nonumber \\ c_{[6,3,2,1]}=5(179-80i\sqrt{3}), \ \ \ \ c_{[5,3,2,2]}=c_{[5,3,3,1]}=5(113-70i\sqrt{3}), \ \ \ \ c_{[4,4,3,1]}= c_{[4,4,2,2]}=5\frac{323-466i\sqrt{3}}{7},\ \ \ \ \nonumber \end{eqnarray} \vspace{-0.5cm} \begin{eqnarray} c_{[4,3,3,2]}=-105, \ \ \ \ \ \ \ \ c_{[4,3,3,1,1]}=c_{[4,3,2,2,1]}=5(17+18i\sqrt{3}) \ \ \ \ \ \ \end{eqnarray} but in the above-mentioned cases, when $Q^{(3)}_R\{\delta_{k,4}\}=0$ for $R=[5,5,1,1],\ [5,3,2,1,1], \ [6,2,2,1,1]$, only the $C_R$'s make sense: \begin{eqnarray} C_{[5,5,1,1]} = -45\cdot\frac{5+59i\sqrt{3}}{64} \nonumber \\ C_{[5,3,2,1,1]}= 3^{1/4}\sqrt{2}\cdot 15\cdot\frac{19\sqrt{3}(1+i)-9(1-i)}{128} \nonumber \\ C_{[6,2,2,1,1]} = 3^{1/4}\sqrt{2}\cdot 15\cdot\frac{9(1+i) +19\sqrt{3}(1-i)}{128} \end{eqnarray} \section{Virasoro algebra action \label{VW}} The generators of the positive part ($m>0$) of the Virasoro algebra are \begin{eqnarray} \hat L_m^{(n)}:=\sum_{k=1}(k+nm)p_k{\partial\over\partial p_{k+nm}} +{1\over 2}\sum_{k=1}^{nm-1}k(nm-k){\partial^2\over\partial p_k\partial p_{nm-k}} \label{Vir} \end{eqnarray} This algebra acts on the linear space of symmetric functions and, moreover, leaves the subspace spanned by the $Q^{(n)}_R$ with $R\in S_nP$ intact, so that \begin{eqnarray} \hat L^{(n)}_m Q^{(n)}_R\{p\} = \sum_{R',\ |R'|=|R|-mn}
\xi^{(n,m)}_{R,R'}Q_{R'}^{(n)}\{p\} \end{eqnarray} e.g. \begin{eqnarray} \hat L^{(n)}_m Q^{(n)}_{[r]} = rQ^{(n)}_{[r-mn]} \label{viracr} \end{eqnarray} \subsection{$n=2$} For $n=2$, the action on $Q^{(2)}$ with the time-variables rescaled by $\sqrt{2}$ is known \cite{VirJ,MMNO}: \begin{eqnarray} \hat L_m^{(2)} Q^{(2)}_R\left\{\frac{p_k}{\sqrt{2}}\right\} = \sum_{i=1}^{l_R} \frac{ (-)^{\nu_i}(R_i- m)}{(\sqrt{2})^{\delta_{R_i,m}}} \cdot Q^{(2)}_{R-2m\epsilon_i}\left\{\frac{p_k}{\sqrt{2}}\right\} \label{Virac2resc} \end{eqnarray} where $R-2m\epsilon_i$ means that exactly the $i$-th line length is diminished: $R_i\longrightarrow R_i-2m$. This can make it shorter than some other lines and thus imply a reordering of the lines in the diagram to put them back into decreasing order; $\nu_i(R,m)$ is then the number of lines which the $i$-th one needs to jump over, e.g. $\hat L_2^{(2)} Q^{(2)}_{[6,5,3]}\left\{\frac{p_k}{\sqrt{2}}\right\} = (6-2)Q^{(2)}_{[5,3,2]}\left\{\frac{p_k}{\sqrt{2}}\right\} - (5-2)Q^{(2)}_{[6,3,1]}\left\{\frac{p_k}{\sqrt{2}}\right\}$ and $\hat L_3^{(2)} Q^{(2)}_{[7,6,3]}\left\{\frac{p_k}{\sqrt{2}}\right\} = (7-3)Q^{(2)}_{[6,3,1]}\left\{\frac{p_k}{\sqrt{2}}\right\} - \frac{(6-3)}{\sqrt{2}} Q^{(2)}_{[7,3]}\left\{\frac{p_k}{\sqrt{2}}\right\}$. If $R_i-2m=0$, then the line is simply omitted and the coefficient $1/\sqrt{2}$ appears. Formula (\ref{Virac2resc}) looks reasonably nice, but the expansion of the partition function $Z^{(2)}_{GKM}$ in the basis $Q^{(2)}_R\left\{\frac{p_k}{\sqrt{2}}\right\}$ is rather ugly. The expansion is nice in terms of $Q^{(2)}$ per se; instead, the Virasoro action on $Q^{(2)}$ per se is slightly more involved than (\ref{Virac2resc}): \begin{eqnarray} \hat L^{(2)}_m Q^{(2)}_{[r]} = rQ^{(2)}_{[r-2m]} \nonumber \\ \hat L^{(2)}_m Q^{(2)}_{[r,1]} = rQ^{(2)}_{[r-2m,1]} + \sqrt{2}Q^{(2)}_{[r+1-2m]} \nonumber \\ \hat L^{(2)}_m Q^{(2)}_{[r,2]} = rQ^{(2)}_{[r-2m,2]} + 2Q^{(2)}_{[r+1-2m,1]}, \ \ \ \ \ m\geq 2 \nonumber \\ \ldots \end{eqnarray} \subsection{Generic $n$} For generic $n$, we note that the Virasoro algebra acts in the simplest way on the slightly renormalized functions ${\cal Q}_{R}^{(n)}= \beta_1^{l_{_R}/2} Q_{R}^{(n)}$. The action of the operator $\hat L_1^{(n)}$ on ${\cal Q}_{R}^{(n)}$ is expanded into the $Q$ Schur functions at level $|R|-n$. The rule is that $\hat L_1^{(n)}{\cal Q}_{R}^{(n)}$ is spanned only by the Young diagrams $\check R=R-k_i\epsilon_i-k_j\epsilon_j$, $k_i+k_j=n$, where one of the $k_i$ can be zero. This means that, for instance, in the case of $n=3$, there can only be diagrams $R-3\epsilon_i$ or $R-2\epsilon_i-\epsilon_j$. Suppose that $\check R$ does not require re-ordering of the lines (i.e. the decreasing order is still preserved), and, moreover, the lines in all diagrams have different lengths (i.e. they are strict partitions).
Then, \begin{eqnarray}\label{L3} \hat L_1^{(3)}{\cal Q}_{R}^{(3)}=\sum_i R_i{\cal Q}_{R-3\epsilon_i}^{(3)}+ \beta_2^{(3)}\sum_{i>j}{\cal Q}_{R-2\epsilon_i-\epsilon_j}^{(3)}+ \beta_1^{(3)}\sum_{i>j}{\cal Q}_{R-\epsilon_i-2\epsilon_j}^{(3)} \end{eqnarray} Similarly, in the case of $n=4$, there are the possibilities $R-4\epsilon_i$, $R-3\epsilon_i-\epsilon_j$ and $R-2\epsilon_i-2\epsilon_j$, so that \begin{eqnarray} \hat L_1^{(4)}{\cal Q}_{R}^{(4)}=\sum_i R_i{\cal Q}_{R-4\epsilon_i}^{(4)}+ \beta_3^{(4)}\sum_{i>j}{\cal Q}_{R-3\epsilon_i-\epsilon_j}^{(4)}+ \beta_2^{(4)}\sum_{i> j}{\cal Q}_{R-2\epsilon_i-2\epsilon_j}^{(4)}+ \beta_1^{(4)}\sum_{i>j}{\cal Q}_{R-\epsilon_i-3\epsilon_j}^{(4)} \end{eqnarray} and generally \begin{eqnarray} \hat L_1^{(n)}{\cal Q}_{R}^{(n)}=\sum_i R_i{\cal Q}_{R-n\epsilon_i}^{(n)}+ \sum_{k=1}^{n-1}\beta_k^{(n)}\sum_{i>j}{\cal Q}_{R-k\epsilon_i-(n-k)\epsilon_j}^{(n)} \end{eqnarray} Moreover, this action is immediately extended to the action of the general $\hat L_m^{(n)}$. For instance, instead of (\ref{L3}), one now has \begin{eqnarray} \hat L_m^{(3)}{\cal Q}_{R}^{(3)}=\sum_i R_i{\cal Q}_{R-3m\epsilon_i}^{(3)}+ \beta_2^{(3)}\sum_{i>j}{\cal Q}_{R-2m\epsilon_i-m\epsilon_j}^{(3)}+ \beta_1^{(3)}\sum_{i<j}{\cal Q}_{R-2m\epsilon_i-m\epsilon_j}^{(3)} \end{eqnarray} and similarly for other $n$; the coefficients do not depend on $m$: \begin{eqnarray} \hat L_m^{(n)}{\cal Q}_{R}^{(n)}=\sum_i R_i{\cal Q}_{R-nm\epsilon_i}^{(n)}+ \sum_{k=1}^{n-1}\beta_k^{(n)}\sum_{i>j}{\cal Q}_{R-km\epsilon_i-(n-k)m\epsilon_j}^{(n)} \end{eqnarray} Thus, one gets \begin{eqnarray}\label{Lf} \hat L_m^{(n)}{\cal Q}_{[r]}^{(n)}=r{\cal Q}_{[r-nm]}^{(n)}\nonumber\\ \hat L_m^{(n)}{\cal Q}_{[r,k]}^{(n)}=r{\cal Q}_{[r-nm,k]}^{(n)}+\sum_{i=1}^{n-1}\beta^{(n)}_{n-i}{\cal Q}_{[r-nm+mi,k-mi]}^{(n)}+ k{\cal Q}_{[r,k-mn]}^{(n)}\ \ \ \ \ \ \ \hbox{for }r> nm+k\nonumber\\ \ldots \end{eqnarray} When the diagram has two lines of the same length, it acquires an additional factor of $\rho_1:=\sqrt{\beta_2^{(n)}/\beta_1^{(n)}}$. For instance, \begin{eqnarray} \hat L_m^{(n)}{\cal Q}_{[nm+k,k]}^{(n)}=(nm+k)\rho_1 {\cal Q}_{[k,k]}^{(n)}+ \sum_{i=1}^{n-1}\beta^{(n)}_{n-i}{\cal Q}_{[k+mi,k-mi]}^{(n)}+k{\cal Q}_{[k+mn,k-mn]}^{(n)} \end{eqnarray} Similarly, for three lines of the same length, there is a factor of $\sim\sqrt{\beta_3^{(n)}}$, etc. Finally, when the diagram on the r.h.s. of (\ref{Lf}) requires a re-ordering of lines, each permutation brings an additional factor of $\rho_2:=1-\beta_1^{(n)}$. For instance, \begin{eqnarray} \hat L_1^{(4)}{\cal Q}_{[7,4,1]}^{(4)}=(\beta_3^{(4)}+7\rho_2){\cal Q}_{[4,3,1]}^{(4)} +\beta_3^{(4)}\rho_1{\cal Q}_{[4,4]}^{(4)}+\beta_2^{(4)}{\cal Q}_{[5,2,1]}^{(4)} +\beta_1^{(4)}\rho_1{\cal Q}_{[6,1,1]}^{(4)}+(\beta_3^{(4)}+4\rho_2){\cal Q}_{[7,1]}^{(4)} \end{eqnarray} Note also that there is an exception to the rule that only two lines are made shorter on the r.h.s. of these expressions: there could emerge a diagram $R-\sum_i^lk_i\epsilon_i$ with $l>2$ if two lines on the r.h.s. become of zero length. For instance, \begin{eqnarray} \hat L_1^{(n)}{\cal Q}_{[n+1,2,1]}^{(n)}=\Big((n+1)\rho_1\rho_2+\beta_{n-1}^{(n)}\rho_1\Big){\cal Q}_{[2,1,1]}^{(n)}+\beta_{n-1}^{(n)}\rho_1{\cal Q}_{[2,2]}^{(n)}+\beta_{n-2}^{(n)}\rho_2{\cal Q}_{[3,1]}^{(n)}+\underline{\big(\beta_{n-1}^{(n)}\big)^2{\cal Q}_{[4]}^{(n)}} \end{eqnarray} where the underlined term comes from $R-\epsilon_1-2\epsilon_2-\epsilon_3$.
\subsection{Virasoro constraints for GKM} The partition function of the GKM is annihilated \cite{GKM,MMM91,GN,Mikh} by the action of the Virasoro algebra \cite{FKN} \begin{eqnarray}\label{Vcon} \left(\hat L^{(n)}_m - \gamma_n (n+1+nm) \frac{\partial}{\partial p_{n+1+mn}}+{n^2-1\over 24}\delta_{m,0}+ {\delta_{m,-1}\over 2}\sum_{k=1}^{n-1}p_kp_{n-k}\right)\sum_R c_R^{(n)}Q_R^{(n)}\{p\} = 0, \ \ \ \ \ \ m\ge -1 \end{eqnarray} Here $\gamma_n$ is a constant that can be made arbitrary by a rescaling of the external matrix $L$ in (\ref{GKMint}). This results in a simple factor of $(const)^{|R|}$ in the summand in this formula. Our choice in this paper corresponds to $\gamma_n=n$. Note that the first two constraints are rather trivial: both $\hat L^{(n)}_0$ and $\hat L^{(n)}_{-1}$ give rise to linear partial differential equations, and can be explicitly solved (see, e.g., \cite{AMMP}). At $m>0$ one obtains \begin{eqnarray} \sum_{R',\ |R'|=|R|+mn} \xi_{R',R}^{(n,m)}c_{R'}^{(n)} \ \ = \! \! \sum_{R'',\ |R''|=|R|+mn+n+1} n\zeta_{R'',R}^{(n,n+1+mn)}c_{R''}^{(n)} \label{virconmat} \end{eqnarray} where $\zeta$ is the matrix describing the action of the derivative $$ r\frac{\partial}{\partial p_{r}} Q_R^{(n)}\{p\} = \sum_{R'\in S_{n}P(\nu)} \zeta^{(n,r)}_{R,R'}Q_{R'}^{(n)}\{p\} $$ that is, \begin{eqnarray} r\frac{\partial}{\partial p_{r}} Q_R^{(n)}\{p\} = \sum_{R'\in S_nP(r)}\Psi^{(n)}_{R'}([r])\cdot Q_{R/R'}^{(n)}\{p_k\} =\nonumber\\ =\sum_{{R'\in S_n P(\nu)}\atop{\Delta\in O_n P(\nu)}} {\Psi^{(n)}_R(\Delta+r)\Psi^{(n)}_{R'}(\Delta)\over \beta_\Delta z_\Delta}\cdot Q_{R'}^{(n)}\{p_k\} =\sum_{\Delta\in O_n P(\nu)}{\Psi^{(n)}_R(\Delta+r)\over z_\Delta}\cdot p_\Delta \end{eqnarray} where $\nu:=|R|-r$ and $\Delta+r$ denotes the Young diagram with a line of length $r$ added. E.g. \begin{eqnarray} r\frac{\partial}{\partial p_r} Q^{(n)}_{[l]} = {\beta^{(n)}_r\over \sqrt{\beta^{(n)}_1}}Q^{(n)}_{[l]/[r]}=\beta^{(n)}_r Q^{(n)}_{[l-r]} \label{difacr} \end{eqnarray} since $\Psi^{(n)}_{[r]}([r])=\beta^{(n)}_r/\sqrt{\beta^{(n)}_1}$. Thus, \begin{eqnarray} \zeta^{(n,r)}_{R,R'}=\sum_{\Delta\in O_n P(\nu)} {\Psi^{(n)}_R(\Delta+r)\Psi^{(n)}_{R'}(\Delta)\over \beta_\Delta z_\Delta} \end{eqnarray} is not truly simple. Note that $\hat L$ and the $p$-derivative have different gradings; thus the Virasoro constraints relate coefficients $c_R$ with different sizes $|R|$. For $m>0$, the Virasoro action is not injective; thus (\ref{viracr}) and (\ref{difacr}) are not enough to check any constraint, and one needs bigger pieces of the matrices $\xi$ and $\zeta$. For $n=2$, the constraints with $m\geq -1$ are enough to fix the partition function completely (up to a common factor), while, for $n>2$, one needs to add similar $W$-constraints up to $W^{(n)}$, which can be studied in a similar way: \begin{eqnarray} \hat W^{(p|n)}_m\cdot\sum_R c_R^{(n)}Q_R^{(n)}\{p\} = 0, \ \ \ \ \ \ m\ge 1-p,\ \ p=2,\ldots,n \end{eqnarray} Here $W^{(p|n)}$ denotes the $W$ algebra of spin $p$: $p=2$ corresponds to the Virasoro algebra, etc. For instance, in the case of $n=3$, one has to add to the Virasoro constraints (\ref{Vcon}) the $W^{(3|3)}$-algebra constraints.
The $W^{(3|n)}$ algebra at generic $n$ looks like \cite{FKN,GKM,Mikh}: \begin{eqnarray} \hat W^{(3|n)}_m:=\sum_{k,l\geq 1}(k+l+nm)P_kP_l{\partial\over\partial p_{k+l+nm}} +\sum_{k\geq 1}\sum_{l=1}^{nm+k-1}l(nm+k-l)P_k{\partial^2\over\partial p_l\partial p_{nm+k-l}}+\nonumber\\ +{1\over 3}\sum_{k=1}^{mn-2}\sum_{l=1}^{mn-k-1}kl(mn-k-l){\partial^3\over\partial p_l\partial p_k\partial p_{mn-k-l}}+ {1\over 3}\sum_{k=1}^{-mn-2}\sum_{l=1}^{-mn-k-1}P_kP_lP_{-mn-k-l} \end{eqnarray} where $P_k:=p_k-n\delta_{n+1,k}$ at $k>0$, and $P_k=0$ otherwise. \section{Conclusion} To conclude, we described an interesting set of functions $Q^{(n)}$, which, in many respects, generalize the $Q$ Schur functions $Q^{(2)}$, and provide a promising basis to expand the partition function of GKM in the Kontsevich phase: \begin{eqnarray} \int \exp\left(\frac{{\rm Tr}\, X^{n+1}}{n+1} + {\rm Tr}\, L^NX\right) dX \sim \sum_{R\in S_nP} C_R^{(n)} \cdot Q_R\{{\rm Tr}\, L^{-k}\} \end{eqnarray} This basis is distinguished by the selection rules: independence of the $p_{kn}$ and of diagrams beyond $S_nP$, and, perhaps, also by integrability and Virasoro/W properties of $Q^{(n)}$, which still need to be carefully formulated. However, unlike the $n=2$ case, the coefficients $C_R^{(n)}$ are not properly identified and interpreted, largely because of the computational difficulties with the $Q^{(n)}$ functions. In the $n=2$ case, these coefficients have a very nice form \cite{MMkon} $C_R^{(2)} = \frac{Q_{2R}\{\delta_{k,3}\}Q_{R}\{\delta_{k,1}\}}{Q_{2R}\{\delta_{k,1}\}}$ which has a profound combinatorial explanation \cite{Alex,MMNO}, and can be related to the superintegrability property of the cubic Kontsevich model. At the moment, it is an open problem whether {\it superintegrability}, so defined, persists for the GKM at $n>2$. We discuss this question and the related issue of the classification of correlators in the GKM elsewhere. \section*{Acknowledgements} We are indebted to Sasha Alexandrov, John Harnad and Sasha Orlov for numerous comments on \cite{MMkon}, which largely stimulated our further work in this direction. This work was supported by the Russian Science Foundation (Grant No.20-12-00195). \section*{Appendix} In this Appendix, we illustrate our consideration by the first terms of the $Q$-expansion of the GKM partition function in the $n=4$ case. The partition function of the quintic GKM is defined as \begin{eqnarray} Z^{(4)} = \left<\exp\left( -\frac{1}{5}{\rm Tr}\, Y^5 - {\rm Tr}\, LY^4 - {\rm Tr}\, \Big(L^2 Y^3 + LYLY^2\Big)\right)\right>_{\! 4} := \left<\exp\Big(-V_5 - V_{1|4} - V_{2|3}\Big)\right>_{\! 4} \end{eqnarray} We introduced here a convenient notation $V_{a|b}$ for the terms containing $a$ matrices $L$ and $b$ matrices $Y$ under the trace. Then the grading level, i.e. the power of an average $\prod_m V_{a_m|b_m}$ in $L^{-1}$, is equal to $\sum_m\left(\frac{3b_m}{2}-a_m\right)$. The lowest grade terms in the expansion of the partition function are: \paragraph{Grading 5.} \begin{eqnarray} \left<-V_{1|4}+\frac{1}{2}V_{2|3}^2\right>_{\! 4} = \left< - {\rm Tr}\, LY^4 + \frac{1}{2}\left( {\rm Tr}\, \Big(L^2 Y^3 + LYLY^2\Big)\right)^2\right>_{\!
4} =\nonumber \\ = \frac{1}{32}\left(\frac{9}{\sqrt{\beta_1^{(4)}}}Q_{[5]}^{(4)} -3(1-2i)Q_{[4,1]}^{(4)}-3(2+i)Q_{[3,2]}^{(4)} -(1+4i)\sqrt{2}Q_{[3,1,1]}^{(4)}+(4-i)\sqrt{2}Q_{[2,2,1]}^{(4)} +\beta_3^{(4)}\sqrt{1-i}Q_{[2,1,1,1]}^{(4)} \right) = \nonumber \end{eqnarray} \begin{eqnarray} = \frac{p_5+4p_3p_1^3+4p_2^2p_1}{32} \ = \ \frac{5\beta_3^{(4)}}{64}\sum_{\stackrel{R\in S_4P}{|R|=5\ }}c_R\cdot Q^{(4)}_R\{\delta_{k,5}\} Q^{(4)}_{R}\{p_k\} \ \ \ \ \ \ \ \ \ \nonumber\\ \nonumber \\ c_{[5]}=9,\ \ \ \ \ \ c_{[4,1]}=c_{[3,2]}=3(1-2i),\ \ \ \ \ \ c_{[3,1,1]}=c_{[2,2,1]}=-(1+4i), \ \ \ \ \ \ \ c_{[2,1,1,1]}=-1 \ \ \ \ \ \ \ \ \ \ \end{eqnarray} \paragraph{Grading 10.} \begin{eqnarray} \left<V_5V_{2|3}+ \frac{1}{2}V_{1|4}^2 -\frac{1}{2}V_{1|4}V_{2|3}^2+\frac{1}{24}V_{2|3}^4\right>_{\! 4} = \left< \frac{1}{5} {\rm Tr}\, Y^5 \cdot {\rm Tr}\, \Big(L^2 Y^3 + LYLY^2\Big) +\frac{1}{2}\left( {\rm Tr}\, LY^4\right)^2 - \right.\nonumber \\ \left. -\frac{1}{2} {\rm Tr}\, LY^4\left( {\rm Tr}\, \Big(L^2 Y^3 + LYLY^2\Big)\right)^2 +\frac{1}{24}\left( {\rm Tr}\, \Big(L^2 Y^3 + LYLY^2\Big)\right)^4 \right>_{\! 4} = \left(\frac{5\beta_3^{(4)}}{64}\right)^2\sum_{\stackrel{R\in S_4P}{|R|=10\ }}c_R\cdot Q^{(4)}_R\{\delta_{k,5}\} Q^{(4)}_{R}\{p_k\}\nonumber \end{eqnarray} {\small \begin{eqnarray} c_{[10]} = c_{[5,5]} = 3^2\cdot 7^2, \ \ \ \ c_{[9,1]} = c_{[8,2]} = c_{[7,3]}=c_{[6,4]} = -9(3+52i), \nonumber \\ c_{[8,1,1]} = c_{[7,2,1]} = c_{[6,3,1]} = c_{[6,2,2]} = -3(65+56i), \ \ \ \ c_{[5,4,1]}=c_{[5,3,2]} = -3(15+56i), \ \ \ \ c_{[4,4,2]}=c_{[4,3,3]} = \frac{3(11-148i)}{5}, \nonumber \\ c_{[7,1,1,1]} = c_{[6,2,1,1]} = -3, \ \ \ \ c_{[5,3,1,1]}=c_{[5,2,2,1]}= -\frac{3(47-14i)}{5}, \ \ \ \ c_{[4,4,1,1]} = -3(5-8i), \nonumber \\ c_{[4,3,2,1]} = -3(3-8i), \ \ \ \ \ \ \ \ c_{[4,2,2,2]} = c_{[3,3,3,1]}=-\frac{3(9-32i)}{5}, \ \ \ \ \ \ c_{[3,3,2,2]} = 1, \nonumber \\ c_{[3, 2, 2, 2, 1]} =3(7-6i), \ \ \ \ \ \ \ c_{[4, 2, 2, 1, 1]}= c_{[4, 3, 1, 1, 1]}= 3(7+6i), \ \ \ \ \ \ \ c_{[3, 3, 2, 1, 1]} = 39,\ \ \ \ \ \ \ c_{[5, 2, 1, 1, 1]} = -9\nonumber \end{eqnarray} } Similarly to the $n=3$ case, $Q^{(4)}_R\{\delta_{k,5}\}=0$ for $R=[3,2,2,1,1,1]$, and only $C_R$ makes sense for this diagram: \begin{eqnarray} C_{[3,2,2,1,1,1]}=-{3^2\over 2^9}\sqrt{2}\beta_3^{(4)} \end{eqnarray} Note that $R=[3,2,2,1,1,1]$ is the only Young diagram out of $S_4P(10)$ that contains 6 lines. \paragraph{Grading 15.} The new phenomenon here is that the $L$-independent observable first contributes only at the third ($k=3$) of the relevant levels $5k$: \begin{eqnarray} \left<\frac{1}{2}V_5^2 -V_5V_{1|4}V_{2|3} -\frac{1}{6}V_{1|4}^3 +\frac{1}{4}V_{1|4}^2V_{2|3}^2 + \frac{1}{6}V_5V_{2|3}^3 -\frac{1}{24}V_{1|4}V_{2|3}^4+\frac{1}{720}V_{2|3}^6 \right>_{\! 4} \end{eqnarray}
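As a cross-check of the grading assignment $\sum_m\left(\frac{3b_m}{2}-a_m\right)$ stated above (an added arithmetic check), each observable carries $V_5\mapsto \frac{15}{2}$, $V_{1|4}\mapsto 5$, $V_{2|3}\mapsto \frac{5}{2}$, and every term in the Grading 15 combination indeed has the same level:
\begin{eqnarray}
\frac{1}{2}V_5^2:\ 2\cdot\frac{15}{2}=15\,,\qquad
V_5V_{1|4}V_{2|3}:\ \frac{15}{2}+5+\frac{5}{2}=15\,,\qquad
\frac{1}{720}V_{2|3}^6:\ 6\cdot\frac{5}{2}=15\,,\nonumber
\end{eqnarray}
and similarly for the remaining terms, as well as for the Grading 5 and Grading 10 combinations.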
\section{Introduction} \label{sec:1} Using Einstein's equivalence principle in the study of wave function reduction was introduced by Penrose under the title of "gravitization of quantum mechanics" \cite{RefP1,RefP2,RefP3}. But the concept of gravitization of quantum mechanics can already be seen in De Broglie's works, where the quantum motion of matter is equivalent to a conformal transformation of the metric. I first make a quick reference to the subject of gravitization or geometrization of quantum mechanics. Then, I will return to the main topic of this paper, which is the study of wave function reduction by using the equivalence principle of general relativity in the context of Bohmian quantum theory.\par In contrast to what is common among physicists as the "quantization of gravity", there is another point of view, called the "gravitization (geometrization) of quantum mechanics". It means bringing quantum mechanics closer to the principles of general relativity \cite{RefP1,RefP2,RefP3,RefShoja}. We know that the theory of gravity is a geometric, intuitive, and visualizable local theory. But in quantum mechanics, we encounter non-classical and non-local behaviors such as entanglement, the superposition of states, etc. In particular, in usual quantum mechanics, the concepts of trajectory and pre-measurement quantities disappear, and the physical quantities are defined as operators in Hilbert space. From the first days of quantum mechanics, some physicists tended to present a more realistic view of it. One of the approaches in which an attempt is made to have a realistic description of quantum mechanics is the causal quantum theory of De Broglie--Bohm. Although, in this approach, the wave function is represented in the configuration space, to some extent it allows us to have a visualizable deterministic description \cite{RefB,RefI,RefU,RefH,RefCush}. But, as Bohm himself said, this is not the last word \cite{RefCB}. Rather, it shows that a causal interpretation of quantum mechanics is not impossible. The first sparks of the concept of geometrization or gravitization of quantum mechanics come from the introduction of the quantum potential by De Broglie \cite{Refdeb}. The quantum potential is responsible for the quantum behavior of matter. For example, the relativistic quantum Hamilton-Jacobi equation of a spinless particle is \begin{equation}\label{hj} \eta^{\mu \nu}\partial_{\mu} S \partial_{\nu} S = m^2 (1+Q)=\mathcal{M}^2 \end{equation} This can be converted to the equation \begin{equation}\label{chj} \tilde{g}^{\mu\nu}\partial_{\mu} S \partial_{\nu} S=\tilde{g}^{\mu\nu}p_{\mu} p_{\nu} =m^2 \end{equation} through the conformal transformation \begin{equation}\label{c} \tilde{g}_{\mu\nu}=\Omega^2 \eta_{\mu \nu}=(1+Q)\eta_{\mu \nu}. \end{equation} Here, $Q$ and $p_\mu$ are the relativistic quantum potential and the four-momentum of the particle, respectively, which are given by the relations \begin{equation} Q=\frac{\hbar^2}{m^2}\frac{\Box R}{R} = \frac{\hbar^2}{m^2}\frac{\Box \sqrt{\rho}}{\sqrt{\rho}} \end{equation} and \begin{equation} p_\mu = \partial_\mu S. \end{equation} The functions $S$ and $R$ are the action and the wave amplitude associated with the particle, respectively. The polar form of the wave function is represented as \begin{equation} \psi=R \exp\left(\frac{iS}{\hbar}\right)=\sqrt{\rho}\exp\left(\frac{iS}{\hbar}\right). \end{equation}
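As a one-line consistency check (an added step), note that (\ref{c}) implies $\tilde{g}^{\mu\nu}=\Omega^{-2}\eta^{\mu\nu}$, so that (\ref{hj}) indeed turns into (\ref{chj}):
\begin{equation}
\tilde{g}^{\mu\nu}\partial_{\mu} S \,\partial_{\nu} S=\frac{1}{1+Q}\,\eta^{\mu\nu}\partial_{\mu} S\, \partial_{\nu} S=\frac{m^2(1+Q)}{1+Q}=m^2.
\end{equation}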
Equations (\ref{hj}), (\ref{chj}) and (\ref{c}) show that the quantum motion of a particle in the spacetime with metric $\eta_{\mu\nu}$ and the modified mass $\mathcal{M}$ is equivalent to the motion of a particle in the spacetime with metric $\tilde{g}_{\mu\nu}$ and the mass $m$ \cite{RefShoja,RefH,Refdeb,RefCar}. This shows that a causal description of quantum mechanics leads to a non-classical and non-local modification of the spacetime metric through a conformal transformation. In other words, quantum mechanics gets a geometric or gravitational flavor. Hence, it is possible to have a deterministic look at the quantum evolution of the spacetime metric. But this does not mean that gravity must be quantized through the usual quantization procedure. According to Refs \cite{RefShoja,RefCar}, the generalization of relation (\ref{c}) is \begin{equation}\label{c2} \tilde{g}_{\mu\nu}=\Omega^2g_{\mu \nu}=(1+Q)g_{\mu\nu} \end{equation} where $g_{\mu\nu}$ is not necessarily the Minkowski metric of flat spacetime. In this situation, the quantum potential of the particle becomes \begin{equation} Q=\frac{\hbar^2}{m^2}\frac{\nabla^\mu \nabla_\mu R}{R}=\frac{\hbar^2}{m^2}\frac{\nabla^\mu \nabla_\mu \sqrt{\rho}}{\sqrt{\rho}}. \end{equation} Applying the conformal transformation (\ref{c2}) to the famous Hilbert-Einstein action in the presence of the matter action, and performing the variations, the modified Einstein equations for $\tilde{g}_{\mu\nu}$ are given by \begin{align}\label{modg} \Omega^2\tilde{G}_{\mu\nu}-\left(\tilde{g}_{\mu\nu}\tilde{\Box}-\tilde{\nabla}_\mu \tilde{\nabla}_\nu \right)\Omega^2 -6\tilde{\nabla}_\mu \Omega \tilde{\nabla}_\nu \Omega +3\tilde{g}_{\mu\nu}\tilde{\nabla}_\alpha \Omega \tilde{\nabla}^\alpha \Omega\nonumber \\ +\frac{16\pi G}{m}\rho\Omega^2 \tilde{\nabla}_\mu S \tilde{\nabla}_\nu S -\frac{8\pi G}{m}\rho\Omega^2\tilde{g}_{\mu\nu} \tilde{\nabla}_\alpha S \tilde{\nabla}^\alpha S + 8\pi G m \rho \Omega^4 \tilde{g}_{\mu\nu}\nonumber \\ +\frac{8\pi G \hbar^2}{m^2}\left(\tilde{\nabla}_\mu \sqrt{\rho} \tilde{\nabla}_\nu (\frac{\lambda}{\sqrt{\rho}})+\tilde{\nabla}_\nu \sqrt{\rho} \tilde{\nabla}_\mu (\frac{\lambda}{\sqrt{\rho}})\right)-\frac{8\pi G \hbar^2}{m^2}\tilde{g}_{\mu\nu}\tilde{\nabla}_\alpha \left(\lambda \frac{\tilde{\nabla}^\alpha \sqrt{\rho}}{\sqrt{\rho}} \right) &=0 \end{align} Here, $\lambda$ is a Lagrange multiplier. This equation can be investigated to study the back-reaction effects of the quantum potential on the background metric \cite{RefShoja,RefCar}. In other words, the quantum evolution of the metric can be studied through the above equation for different types of matter, without the need for the quantization of gravity in the usual sense. Indeed, the concept of "gravitization of quantum mechanics" is really appropriate for this approach. Of course, solving such a complicated nonlinear equation is really difficult without considering specific constraints or symmetries. But this approach can be considered as a new look at the problem of quantum gravity. Although Bohmian quantum mechanics provides a deterministic description of the quantum world, the main problems are still unresolved. For example, the nature of the wave function, the origin of the quantum potential, the problem of non-locality and other related topics remain unanswered.
One of the applications of relation (\ref{modg}) in studying the quantum evolution of the Friedmann–Lemaître–Robertson–Walker metric can be seen in Ref \cite{RefGr}.\par Now, I deal with the concept of "gravitization of quantum mechanics" in the problem of wave function reduction. Although this concept was studied most prominently by Penrose, the first serious work using gravitational concepts to study the wave function reduction was done by Karolyhazy \cite{RefK1,RefK2,RefK3}. The next outstanding works were done by Diosi and Penrose \cite{RefD1,RefD2,RefD3,RefP1,RefP2,RefP3}. In Ref \cite{RefP3}, an attempt has been made to bring quantum mechanics closer to the principles of general relativity. The measurement paradox is studied in such a way that the principles of equivalence and general covariance are preserved. In this approach, the linearity of quantum mechanics is broken through the measurement processes or the effect of gravity. According to the universality of quantum mechanics, we expect that an object can be in a superposition of different position states. But we do not observe such a thing for macroscopic objects. In gravity-induced approaches, the reason for the breaking of the superposition of the quantum states of the particle at different locations is the self-gravity of the particle. Here, the self-gravity is justified through the concept of uncertainty in quantum mechanics. We know that a particle is detected around the point $\mathbf{x}$ in the configuration space of the particle with the probability density $\rho (\mathbf{x},t)=\vert \psi \vert ^2$ at time $t$. Thus, the distribution of a point particle in the configuration space seems like an extended mass distribution. Hence, the definition of self-gravity is possible for a point particle in a quantum mechanical sense. \par In Ref \cite{RefP3}, the issue has been thoroughly investigated. There, it has been argued that if we use the principle of equivalence, the superposition of the quantum states of the particle at different locations, in the presence of gravity, is not stable. Consequently, it decays to a stable state. The lifetime of the superposition is \begin{equation}\label{lt} \tau \approx \frac{\hbar}{\Delta E_G} \end{equation} where $\Delta E_G$ is the uncertainty in the gravitational self-energy of the mass distributions of the two stationary states. \par The gravity-induced wave function reduction is also studied through the Schr\"{o}dinger-Newton equation. Its Bohmian version has been studied in Refs \cite{RefRGG1,RefRGG2,RefRGG3}. A Gaussian wave packet with the initial width $\sigma_0$ spreads out as time passes (quantum mechanical behavior). To have a stationary wave packet, the particle mass must be equal to a specific critical value to provide the required self-gravity to inhibit the dispersion of the wave packet. The Schr\"{o}dinger-Newton equation for a single particle with the distribution $\rho = \vert \psi(\mathbf{x},t) \vert ^2$ is: \begin{equation}\label{sn} i\hbar \frac{\partial\psi(\mathbf{x},t)}{\partial t}=\left(-\frac{\hbar^2}{2m}\nabla^2 -Gm^2 \int \frac{\vert \psi(\mathbf{x}^\prime,t)\vert^2}{\vert \mathbf{x}^\prime -\mathbf{x} \vert} d^3 x^\prime\right) \psi(\mathbf{x},t). \end{equation} Minimizing the Hamiltonian functional of the Schr\"{o}dinger-Newton equation for a stationary wave packet gives a relation between the critical mass of the particle and the characteristic width of its associated stationary wave packet \cite{RefD1}.
In other words, the value of $\sigma_0$ for which the wave packet remains stationary is determined by the value of the particle mass and is equal to: \begin{equation}\label{Dio} \sigma_{0} = \frac{\hbar^2}{Gm^3}. \end{equation} The width of the wave packet on the left hand side of this equation is related to the objective quantities on the right hand side. This enables us to determine the characteristic width of the wave packet objectively. By using relation (\ref{Dio}), a critical mass for the transition from the quantum world to the classical world is defined. It is given by \begin{equation}\label{mc} m_c=\left(\frac{\hbar^2}{G\sigma_0 }\right)^{\frac{1}{3}}. \end{equation} Given a fixed $\sigma_0$, particles with masses greater than the critical mass exhibit more macroscopic behavior, and for particles with masses less than the critical mass, the micro-behaviors increase. To study the different classifications of particle motion in this context, see Ref \cite{RefRGG3}. Naturally, in a more realistic view, we must consider an object with a definite size. For an object with ordinary matter density, the critical mass is of the order of the Planck mass $m_p \approx 10^{-8}\,kg$. See Refs \cite{RefK1,RefBassi}. The Refs \cite{RefGu,Refm1} and \cite{Refde} are suggested for studying the gravitational reduction of the wave function in the framework of usual quantum mechanics. \par In the following, I investigate the equivalence principle in the context of Bohmian quantum mechanics, and I shall show how it can be used to study the wave function reduction in the framework of Bohmian quantum mechanics. \section{Einstein equivalence principle in Bohmian quantum mechanics} In classical mechanics, the weak equivalence principle of general relativity (WEP) is expressed in different ways, all of which are basically the same. In the following, I shall argue that not all of these statements are equivalent in Bohmian quantum mechanics. Three of the most famous statements of WEP in classical physics are as follows.\\ \textbf{The first statement}: \textit{Inertial mass is equivalent to passive gravitational mass: $m_i=m_g$}.\\ \textbf{The second statement}: \textit{The behavior of a freely-falling test particle is universal: $\mathbf{a}=-\mathbf{g}$}.\\ \textbf{The third statement}: \textit{In small enough regions of spacetime, the motion of freely-falling bodies is the same in a gravitational field and a uniformly accelerated frame}.\par Before starting the discussion, let me make a point about the inertial and gravitational masses. The inertial mass $m_i$, in the first statement, has a universal character, because it is defined in terms of the resistance to momentum change by other forces, and it does not matter what kind of force is exerted. On the other hand, $m_g$ is a quantity specific to the gravitational force. It can be thought of as a gravitational property or "gravitational charge". In this work, I assume that the gravitational property $m_g$ is equal to the universal quantity $m_i$. In other words, the extension of the first statement to quantum mechanics is unobjectionable.\par Now let's look at the second statement of WEP in Bohmian quantum mechanics.
The quantum version of Newton's second law in Bohmian quantum mechanics in the gravitational potential $U=m\mathbf{g}\cdot \mathbf{x}$ is: \begin{equation}\label{d1} \frac{d}{dt}(m\dot{\mathbf{x}})=-m\mathbf{g}-\nabla Q \vert_{\mathbf{x}=\mathbf{x}(t)} \end{equation} which leads to the equation \begin{equation}\label{d2} \ddot{\mathbf{x}}=-\mathbf{g}-\frac{1}{m}\nabla Q \vert_{\mathbf{x}=\mathbf{x}(t)} \end{equation} The meaning of $\mathbf{x}=\mathbf{x}(t)$ is that the particle moves on that trajectory of the ensemble whose position is specified by $\mathbf{x}(t)$. Equation (\ref{d2}) shows the violation of the second statement explicitly, even with the equality of the gravitational and inertial masses. Therefore, the first and second statements are not equivalent in Bohmian quantum mechanics. In general, the quantum potential $Q$ depends on the mass of the particle. Thus, the violation of the second statement in (\ref{d2}) is not only due to the $\frac{1}{m}$ coefficient. I shall investigate relation (\ref{d2}) to estimate the critical mass of the particle which moves under the effect of its own gravity. \par Checking the validity of the third statement has been widely done by some authors \cite{RefP1,RefNa}. Consider a non-relativistic falling object in a homogeneous gravitational field $\mathbf{g}$ with the coordinate system $(\mathbf{x},t)$. The Schr\"{o}dinger equation for this system is: \begin{equation}\label{ns} i\hbar \frac{\partial\psi(\mathbf{x},t)}{\partial t}=-\frac{\hbar^2}{2m}\nabla^2 \psi -m\mathbf{x}\cdot \mathbf{g}\psi(\mathbf{x},t) \end{equation} According to the third statement, we can change the coordinate system by using the transformations \begin{equation}\label{trans} \begin{aligned} \mathbf{x}^{\prime}&=\mathbf{x}-\frac{1}{2}\mathbf{g}t^2 \\ t^{\prime}&=t \end{aligned} \end{equation} to get the Schr\"{o}dinger equation in the accelerated frame with acceleration $\mathbf{g}$. On the other hand, according to the third statement, an observer in this accelerated frame feels no sense of gravity. For such an observer, the Schr\"{o}dinger equation takes the form of the free Schr\"{o}dinger equation, which is given by \begin{equation}\label{rs} i\hbar \frac{\partial\Psi}{\partial t^{\prime}}=-\frac{\hbar^2}{2m}\nabla^2 \Psi \end{equation} where $\Psi$ is the wave function of the freely-falling particle in this frame. The functions $\Psi$ and $\psi$ are called the "Einsteinian" and "Newtonian" wave functions \cite{RefP3}. To establish the third statement of WEP, the two wave functions $\psi$ and $\Psi$ must be related in the form \begin{equation}\label{wr} \Psi(\mathbf{x}^{\prime},t^{\prime})=\exp\left(\frac{im}{\hbar}\left(\frac{g^2 t^3}{6}-\mathbf{x}\cdot \mathbf{g} t\right)\right)\psi(\mathbf{x},t) \end{equation} or \begin{equation}\label{wr2} \psi(\mathbf{x},t)=\exp\left(\frac{im}{\hbar}\left(\frac{g^2 t^{\prime 3}}{3}+\mathbf{x}^{\prime}\cdot \mathbf{g} t^{\prime}\right)\right)\Psi(\mathbf{x}^{\prime},t^{\prime}). \end{equation} The term $\frac{m g^2 t^3}{\hbar}$, nonlinear in $t$, in the above relations has an essential role in the wave function reduction, which has been clarified in Ref \cite{RefP3}. There, it has been argued that the nonlinear term in $\exp\left(\frac{im}{\hbar}\left(\frac{g^2 t^{\prime 3}}{3}+\mathbf{x}^{\prime}\cdot \mathbf{g} t^{\prime}\right)\right)$ breaks down the notion of positive frequency, and this leads to the existence of different vacua.
Since the superposition of different vacua is illegal, such a superposition must be reduced to a stationary state during the decay time $\tau \approx \frac{\hbar}{\Delta E_G}$. According to the Unruh effect, a pure vacuum state of a quantum field for an inertial observer is seen as a mixed state by an accelerated observer. The accelerated observer detects particles in his/her vacuum with the temperature $T=\frac{\hbar a}{2\pi c k_B}$, where $a$ denotes the acceleration of the observer \cite{RefPad,RefCm,RefUW}. This shows that there is the same physical concept behind both phenomena, which manifests as a transformation from a pure quantum state to a mixture of states. \par In the next section, I shall show how one can obtain the critical mass for the boundary between the quantum and classical worlds by applying the equivalence principle to the particle motion in Bohmian quantum mechanics. In addition, I shall determine the reduction time and define a temperature which can be seen as the quantity corresponding to the Unruh temperature. \section{The motion of particle and wave function reduction} Now, I want to investigate the problem of wave function reduction in Bohmian quantum mechanics for a particle moving in its own gravity. First, I review the quantum motion of a particle in an external homogeneous gravitational field. The motion of a particle in its own gravity can be studied by the same equations in a short-time estimation. In a short-time estimation, the self-gravitational field of the particle and the width of its associated wave packet are approximately constant \cite{RefRGG3}. By using the contents of Ref \cite{RefH}, a particle falling in a constant gravitational field with zero initial velocity is guided by a Gaussian wave packet which is given by: \begin{equation}\label{g1} \psi(\mathbf{x},t)= \left(2\pi \sigma_0^2\left(1+\frac{i\hbar t}{2m\sigma_0^2}\right)^2\right)^{-\frac{3}{4}} \exp\left\lbrace -\frac{(\mathbf{x}+\frac{1}{2}\mathbf{g}t^2)^2}{4\sigma_0^2\left(1+\frac{i\hbar t}{2m\sigma_0^2}\right)} +\frac{im}{\hbar}\left(-\mathbf{g}\cdot \mathbf{x}\,t -\frac{1}{6} \mathbf{g}^2t^3\right)\right\rbrace \end{equation} The amplitude and the phase of the wave packet are described by \begin{equation}\label{amp} R=(2\pi \sigma^2)^{-\frac{3}{4}} \exp \left\lbrace -\frac{(\mathbf{x}+\frac{1}{2}\mathbf{g}t^2)^2}{4\sigma^2} \right\rbrace \end{equation} and \begin{equation}\label{phase} S=-\frac{3\hbar}{2}\arctan \left(\frac{\hbar t}{2m\sigma_0^2}\right)-m\mathbf{g}\cdot \mathbf{x}t -\frac{1}{6}m\mathbf{g}^2 t^3 +\frac{\hbar^2 t}{8m\sigma_0^2 \sigma^2}\left(\mathbf{x}+\frac{1}{2}\mathbf{g}t^2 \right)^2 \end{equation} respectively. \\ Here, $\sigma$ is the root mean square width of the packet at time $t$, and it is given by: \begin{equation} \sigma=\sigma_0 \sqrt{1+\frac{\hbar^2 t^2}{4m^2\sigma_0^4}} \end{equation} The width $\sigma$ of the wave packet in a homogeneous gravitational field is the same as the width of the free wave packet \cite{RefH}.
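For orientation (an added arithmetic remark), this width formula implies a characteristic spreading time $t_{*}=2m\sigma_0^2/\hbar$:
\begin{equation}
\sigma(t_{*})=\sigma_0 \sqrt{1+\frac{\hbar^2 t_{*}^2}{4m^2\sigma_0^4}}=\sqrt{2}\,\sigma_0\,,
\end{equation}
so heavier particles or wider packets spread more slowly; this is precisely the quantum mechanical dispersion that the self-gravity must overcome.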
The trajectory of the particle in the ensemble is given by \begin{equation}\label{tra} \mathbf{x}(t)=-\frac{1}{2}\mathbf{g}t^2 + \mathbf{x}_0 \sqrt{1+\frac{\hbar^2 t^2}{4m^2\sigma_0^4}}=-\frac{1}{2}\mathbf{g}t^2 +\mathbf{x}_0 \frac{\sigma}{\sigma_0} \end{equation} The quantum potential of the particle is: \begin{equation}\label{pot1} Q=-\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}=\frac{\hbar^2}{4m\sigma^2}\left[3-\frac{(\mathbf{x}+\frac{1}{2}\mathbf{g}t^2)^2}{2\sigma^2}\right] \end{equation} It can be shown that the acceleration of the particle is given by: \begin{equation}\label{ac2} \ddot{\mathbf{x}}=-\mathbf{g}+\frac{\hbar^2 \mathbf{x}_0}{4m^2\sigma_0\sigma^3} \end{equation} I shall use equation (\ref{ac2}) to study the equivalence principle in Bohmian quantum mechanics and, consequently, the wave function reduction. It may be thought that by setting the second part of the above equation equal to zero, the classical equation $\ddot{\mathbf{x}}=-\mathbf{g}$ is obtained again. This is true, but it does not give any objective criterion like relation (\ref{Dio}). \par Consider a freely-falling particle in a constant gravitational field along the $-z$ axis, which in classical mechanics obeys the equation: \begin{equation}\label{gg} \ddot{z}=-g. \end{equation} Now, I change to a reference frame with coordinate $z^{\prime}$ and constant acceleration $a$ along the $-z$ axis. The required transformations are \begin{equation}\label{at} \begin{aligned} z^{\prime} &= z+\frac{1}{2}at^2 \\ t&=t^{\prime} \end{aligned} \end{equation} By using these transformations, relation (\ref{gg}) transforms to \begin{equation}\label{a3} \ddot{z}^{\prime}=a-g. \end{equation} Then, if the acceleration of the frame is equal to the gravitational acceleration, we expect to have a free particle ($\ddot{z}^{\prime}=0$). This is what we expect locally according to the principle of equivalence. In other words, an observer in the $(z^{\prime},t^{\prime})$ frame feels no gravitational force. If we represent the free or Einsteinian wave function in the accelerated frame by $\phi(z^{\prime},t^{\prime})$, the relation between the Newtonian wave function $\psi(z,t)$ and its Einsteinian counterpart $\phi(z^{\prime},t^{\prime})$ will be described by a unitary transformation of the form: \begin{equation} \psi(z,t)=\phi(z^{\prime},t^{\prime})\exp\left(-\frac{imgt}{\hbar}\left(z+\frac{gt^2}{6}\right)\right) \end{equation} For details, see Ref \cite{RefNa}. This transformation gives no dynamical information in usual quantum mechanics. But in Bohmian quantum mechanics, it gives useful information about the quantum motion of the particle. In Bohmian quantum mechanics, the possibility of defining trajectories enables us to discuss the equivalence principle of general relativity in quantum mechanics clearly, while, in usual quantum mechanics, we are only limited to the Schr\"{o}dinger equation and its evolution. To clarify the issue, let's take a look at the formula (\ref{ac2}).\par The quantum acceleration vanishes only for a plane wave. But in relation (\ref{ac2}), we are always faced with a quantum acceleration, because, in a finite region of spacetime, especially in the quantum world, we cannot actually have a plane wave ($\sigma_0 \longrightarrow \infty$). But this is not the whole story. Here, there is a very interesting and subtle point, and that is that the quantum force is a type of inertial force \cite{RefShoja,RefMach,RefVig}.
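As a consistency check (an added step), the trajectory (\ref{tra}) solves the Bohmian guidance equation $\dot{\mathbf{x}}=\nabla S/m$ with the phase (\ref{phase}): using $\mathbf{x}+\frac{1}{2}\mathbf{g}t^2=\mathbf{x}_0\,\sigma/\sigma_0$ and $\dot{\sigma}=\hbar^2 t/(4m^2\sigma_0^2\sigma)$, one finds
\begin{equation}
\frac{\nabla S}{m}=-\mathbf{g}t+\frac{\hbar^2 t}{4m^2\sigma_0^2 \sigma^2}\left(\mathbf{x}+\frac{1}{2}\mathbf{g}t^2\right)=-\mathbf{g}t+\mathbf{x}_0\frac{\dot{\sigma}}{\sigma_0}=\dot{\mathbf{x}}(t)\,,
\end{equation}
and differentiating once more reproduces the acceleration (\ref{ac2}).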
In other words, when we consider the quantum force on a particle, it means that we are examining the particle dynamics in a non-inertial frame. In fact, the quantum force is different from the usual forces in physics, such as the electric force between two charges, the friction force, etc. In addition, some authors believe that the inertial forces do not obey Newton's third law \cite{RefIn}. Interestingly, this is one of the characteristics of the Bohmian force. Now, to have free motion in this accelerated frame, we must equate the quantum acceleration with the gravitational acceleration\footnote{Remember the equation $\ddot{z^\prime}=0$ for a free motion in the previous paragraph.}. Note that in small regions of spacetime and in a short-time estimation, the averages of the quantum acceleration and the gravitational acceleration can be considered constant. Therefore, the transformations (\ref{at}) can be used. When we impose this condition on the averages of the gravitational and quantum forces, we get the Diosi formula (\ref{Dio}), which is a criterion for the transition from the quantum world to the classical world. In other words, to establish the equivalence principle in the quantum domain, the wave function of the particle must necessarily be reduced. To obtain the averages of the quantum and gravitational accelerations, we can use the relations \begin{equation}\label{abar} \bar{a}_q=\frac{\bar{f}_q}{m}=\frac{1}{m} \int_0^{\infty} \rho(r) \nabla Q dv \end{equation} and \begin{equation}\label{gbar} \bar{g}=\frac{f_g}{m}=\frac{1}{m} \int_0^{\infty} \rho(r) \nabla U dv. \end{equation} Here, $\rho=\psi^{*} \psi=R^2$, and the amplitude of the wave function in the short-time estimation ($\sigma \approx \sigma_0$) is $R(r)= (2\pi \sigma_0^2)^{-\frac{3}{4}}e^{-\frac{r^2}{4\sigma_0^2}}$ in spherical coordinates. Here, $Q$ and $U$ are the quantum potential and the magnitude of the self-gravitational energy of the particle, which are given by \begin{equation}\label{sq} Q=-\frac{\hbar^2}{2m}\frac{\nabla^2 R}{R}= -\frac{\hbar^2}{2mR}\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial R}{\partial r}\right)=\frac{\hbar^2}{8m\sigma_0^4}\left(6\sigma_0^2-r^2\right) \end{equation} and \begin{equation}\label{pot2} U= \int_{0}^{r} \frac{Gm^2}{r^{\prime}}\rho(r^{\prime}) dv^{\prime}=\sqrt{\frac{2}{\pi}}\frac{Gm^2}{\sigma_0}\left(1-e^{-\frac{r^2}{2\sigma_0^2}}\right). \end{equation} The volume element is $dv^{\prime}=4\pi r^{\prime 2} dr^{\prime}$. For details, see \cite{RefRGG3}. In this estimation, the average values of the gravitational acceleration and the quantum acceleration, (\ref{gbar}) and (\ref{abar}), become: \begin{equation}\label{bg} \bar{g}=\frac{Gm}{\sigma_0^2} \end{equation} and \begin{equation}\label{bg2} \bar{a}_q=\frac{\hbar^2 }{4m^2 \sigma_0^3}. \end{equation} Here, $\sigma_0$ is the average radius of the distribution, i.e., the initial width of the wave packet. Now, to have an average free motion in the accelerated frame with acceleration $\bar{a}_q$, we must have: \begin{equation}\label{av1} \bar{\ddot{x}}= \bar{g}-\bar{a}_q= 0, \end{equation} which results in: \begin{equation}\label{dio2} \sigma_0 \approx \frac{\hbar^2}{Gm^3}. \end{equation} The interesting point is that this relation is obtained through the investigation of the principle of equivalence in Bohmian mechanics and the concept of trajectories.
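For transparency (an added step), the arithmetic behind (\ref{av1}) is simply
\begin{equation}
\frac{Gm}{\sigma_0^2}=\frac{\hbar^2}{4m^2 \sigma_0^3}\ \ \Longrightarrow\ \ \sigma_0=\frac{\hbar^2}{4Gm^3}\approx \frac{\hbar^2}{Gm^3}\,,
\end{equation}
where the numerical factor of order unity is dropped, in line with the short-time estimation used throughout; this is relation (\ref{dio2}).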
\par In Ref \cite{RefRGG3}, it has been shown that for masses greater than the critical mass, the gravitational force overcomes the quantum force, and in a gravity-dominant regime, the radial motion of the particle in spherical coordinates is described by the equation \begin{equation}\label{eqg} r(t)=r(0) -\frac{1}{2}\frac{Gm}{\sigma_0^2}t^2 \end{equation} This relation can be used to obtain the time it takes for the particle to fall from $r(0)=\sigma_0$ to $r(\tau)=0$. It is given by \begin{equation}\label{tt} \tau= \left(\frac{\sigma_0^3}{Gm}\right)^{\frac{1}{2}}. \end{equation} In Ref \cite{RefRGG3}, it has been shown that relation (\ref{tt}) is equal to the collapse time which has been obtained by some authors in usual quantum mechanics. Also, it has been shown that it is equal to the decay time (\ref{lt}) of Penrose. Now, by using the criterion (\ref{Dio}), relation (\ref{tt}) can be written in a completely objective form, which is given by \begin{equation}\label{tt2} \tau= \frac{\hbar^3}{G^2m^5}. \end{equation} According to these investigations and discussions, we can argue that "\textit{In small enough regions of spacetime, the motion of a freely-falling particle is the same in a gravitational field and a uniformly accelerated frame, if the wave function of the particle is reduced}". This can be introduced as the extension of WEP in the framework of Bohmian quantum mechanics. \section{Einsteinian and Newtonian observers in Bohmian quantum mechanics} In Ref \cite{RefP3}, the reduction time of the wave function has been related to the uncertainty in the self-gravitational energy of the particle. Now, I want to examine the difference of the particle energy between the Einsteinian and Newtonian observers (frames) and use the uncertainty relation to get an estimation of the reduction time, as a new look at the issue.\par In Ref \cite{RefNa}, it has been shown that the solution of the Schr\"{o}dinger equation in the accelerated frame with the Newtonian wave function $\psi(z^{\prime},t^{\prime})$ is related to the free or Einsteinian solution $\phi(z^{\prime} ,t^{\prime})$ through the relation \begin{equation}\label{w} \psi(z^{\prime},t^{\prime})=\phi(z^{\prime},t^{\prime}) e^{ \frac{i}{\hbar}(-mgz^{\prime}t^{\prime}+\frac{1}{3}mg^2 t^{\prime 3})} \end{equation} The Einsteinian wave function can be represented by a free wave function in the form \begin{equation} \phi(z^{\prime},t^{\prime}) =\phi_0 e^{\frac{i}{\hbar}S(z^{\prime},t^{\prime})} \end{equation} Now, equation (\ref{w}) takes the form \begin{equation}\label{w2} \psi(z^{\prime},t^{\prime})=\phi_0\, e^{\frac{iS(z^{\prime},t^{\prime})}{\hbar}+\frac{i}{\hbar}(-mgz^{\prime}t^{\prime}+\frac{1}{3}mg^2 t^{\prime 3}) } \end{equation} where $S(z^{\prime},t^{\prime})$ is the free or Einsteinian phase of the wave function.
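As a quick check (an added step), the phase in (\ref{w}) is the standard one: using $z^{\prime}=z+\frac{1}{2}gt^{\prime 2}$ and $t^{\prime}=t$, i.e., (\ref{at}) with $a=g$, the exponent becomes
\begin{equation}
-mgz^{\prime}t^{\prime}+\frac{1}{3}mg^2 t^{\prime 3}=-mg\left(z+\frac{1}{2}gt^2\right)t+\frac{1}{3}mg^2t^3=-mgt\left(z+\frac{gt^2}{6}\right),
\end{equation}
which is exactly the phase of the unitary transformation written after (\ref{a3}).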
The Newtonian phase of the wave function is \begin{equation}\label{pha} S^{\prime}(z^{\prime},t^{\prime})=S(z^{\prime},t^{\prime})-mgz^{\prime}t^{\prime}+\frac{1}{3}mg^2 t^{\prime 3} \end{equation} The energy of the particle is \begin{equation}\label{en1} E^{\prime}=-\frac{\partial S^{\prime}(z^{\prime},t^{\prime})}{\partial t^{\prime}}=-\frac{\partial S(z^{\prime},t^{\prime})}{\partial t^{\prime}} +mgz^{\prime}-mg^2t^{\prime 2} \end{equation} Now, by using $z^{\prime}=-\frac{1}{2}gt^{\prime 2}$ and $t^{\prime}=t$, equation (\ref{en1}) becomes \begin{equation}\label{en2} E^{\prime}=-\frac{\partial S(z^{\prime},t^{\prime})}{\partial t^{\prime}} -\frac{3}{2} mg^2t^{ 2}=E-\frac{3}{2} mg^2t^{ 2} \end{equation} The energy of the particle for the Newtonian observer is $E^{\prime}$, while for the Einsteinian observer it is $E$. Relation (\ref{en2}) shows that the energy is not conserved and varies with time. In other words, the difference in the particle energy between the observers is \begin{equation}\label{del} \mathcal{E}(t) =E^{\prime}-E=-\frac{3}{2} mg^2t^{ 2} \end{equation} It is clear that in the absence of a gravitational field ($\mathbf{g}=0$), or in a flat spacetime, $\mathcal{E}=0$ and the Newtonian observer is the same as the Einsteinian one. This energy difference arises due to the establishment of the equivalence principle.\par Now, I return to the wave packet and its Gaussian distribution. I assume that at $t=0$, the position of the particle is $r(0)=\sigma_0$, and at $t=\tau$, the particle is at $r=0$. During the reduction time $\tau$, a sphere with the radius $\sigma_0$ is formed in the configuration space. The uncertainty in energy at $t=\tau$ must satisfy the relation \begin{equation} \vert\mathcal{E}\vert_{t=\tau}\, \tau \approx \hbar \end{equation} which, by using relation (\ref{del}), leads to \begin{equation}\label{1a} mg^2 \tau ^3 \approx \hbar \end{equation} Now, substituting relations (\ref{bg}) and (\ref{dio2}) into relation (\ref{1a}), i.e., using $g=Gm/\sigma_0^2=G^3m^7/\hbar^4$ and hence $\tau^3=\hbar/(mg^2)=\hbar^9/(G^6m^{15})$, gives: \begin{equation} \tau=\frac{\hbar^3}{G^2 m^5} \end{equation} which is the same as relation (\ref{tt2}). \par As I mentioned before, in Ref \cite{RefP3} it has been argued that the nonlinear term $\frac{m g^2 t^3}{\hbar}$ in the unitary transformation between the Einsteinian and Newtonian wave functions is related to the different vacua and the Unruh effect. In quantum field theory in curved spacetime, an accelerated observer detects a gas of field particles in the field vacuum and attributes a temperature to the system, which is a statistical system, i.e., an ensemble of mixed states, with the Unruh temperature. On the other hand, during the process of wave function reduction, a pure quantum state (the state of the particle) transforms into an ensemble of mixed states. Now, I want to define a quantity similar to the Unruh temperature for the wave function reduction, in the context of Bohmian quantum mechanics. In Ref \cite{RefP3}, it has been pointed out that there is no thermal effect in the issue of wave function reduction. But there are concepts similar to what is in quantum field theory. Now, we shall see that in the framework of Bohmian quantum mechanics, it is possible to define such a temperature systematically. In fact, the origin of such a temperature is not due to the kinetic energy of the particle. It refers to the particle energy difference between the Einsteinian and Newtonian observers. If we divide this energy difference among the degrees of freedom of the particle, we shall see that a temperature can be attributed to the system.
As the self-gravity for a single particle is defined through the probability distribution in configuration space, such a temperature can also be defined through the particle distribution in configuration space. Now, let us make an estimation of such a temperature.\par As has been done in Ref \cite{RefRGG3}, I consider a Gaussian wave packet in spherical coordinates with the single degree of freedom $r$ in the short-time estimation. When the particle falls from $r = \sigma_0$ to the center of the distribution ($r=0$), due to its own gravity, a volume $\frac{4}{3}\pi \sigma_0^3$ can be considered in the configuration space of the particle. The mean square velocity of the particle in the ensemble of trajectories is \begin{equation} \bar{u^2}(\tau)=\int u^2 \rho dv =\left(g^2t^2\vert_{t=\tau}\right)\int_{0}^{\sigma_0} \rho dv \approx g^2\tau^2 \end{equation} where $u=gt$ is the velocity of the particle when it moves on the $i$-th trajectory of the ensemble in its own gravity in the gravity-dominant regime \cite{RefRGG3}. In the short-time estimation, the distribution $\rho=R^2=\psi^{*}\psi$ is time-independent and $g=\bar{g}$, as has been shown in Ref \cite{RefRGG3}. Also, in this approximation, the integral $\int_{0}^{\sigma_0} \rho dv=\int_{0}^{\sigma_0} \rho 4\pi r^2 dr$ has a constant finite value, which I ignore because it has no effect on the final result\footnote{It can be easily seen that the exact value of $\bar{u^2}(\tau)$ is equal to $\displaystyle \frac{g^{2} \tau^{2} \left(\mathrm{erf}(\frac{\sqrt{2}}{2}) \sqrt{\pi}-{\mathrm e}^{-\frac{1}{2}} \sqrt{2}\right)}{\sqrt{\pi}}$.}. Now, by using the conditions $r(0)=\sigma_0$ and $r(\tau)=0$, relation (\ref{eqg}) gives the reduction time in the form \begin{equation}\label{tt12} \tau^2 = 2 \frac{\sigma_0}{g} \end{equation} which helps us to rewrite the mean square velocity of the particle in the form \begin{equation}\label{a2} \bar{u^2}(\tau)=2g\sigma_0 \end{equation} In fact, this is the mean square velocity of the particle during the reduction time. Thus, relation (\ref{del}) can be written in the form \begin{equation} \vert \mathcal{E}(t=\tau)\vert = \frac{3}{2} m \bar{u^2} \end{equation} which relates the particle energy difference between the two (free and accelerated) observers to the velocity of the particle in the ensemble.\par As I mentioned before, during the reduction time $\tau$, a spherically symmetric Gaussian distribution with the radius $\sigma_0$ is formed in configuration space. We can look at this distribution as an ideal gas composed of identical particles. Now, by using the relation $\bar{u^2}=\frac{k_BT}{m}$ for a system with one degree of freedom (radial motion toward the center of the distribution in spherical coordinates), the kinetic energy due to the quantum effect of gravity is related to the ensemble temperature $T$. In other words, we have \begin{equation}\label{tem1} \frac{k_BT}{m}=2g\sigma_0 \end{equation} On the other hand, it has been shown that for ordinary densities of matter, the critical mass for the transition from the quantum to the classical world is of the order of the Planck mass ($10^{-8}\,kg$), for which the reduction of the wave function occurs \cite{RefK1,RefBassi}. As we know, for a particle or object with the Planck mass, the Schwarzschild radius of the object and its Compton wavelength are the same \cite{RefLa}. If we substitute the Planck mass into relation (\ref{Dio}), the value of $\sigma_0$ becomes about $10^{-35}\,m$, which is of the order of the Planck length.
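As a rough numerical check (an added estimate; with the SI values $\hbar \approx 1.05\times 10^{-34}\,J\,s$, $G \approx 6.67\times 10^{-11}\,m^3 kg^{-1} s^{-2}$ and $m_p \approx 2.18\times 10^{-8}\,kg$),
\begin{equation}
\sigma_0=\frac{\hbar^2}{Gm_p^3}\approx \frac{(1.05\times 10^{-34})^2}{6.67\times 10^{-11}\times (2.18\times 10^{-8})^3}\,m \approx 1.6\times 10^{-35}\,m\,,
\end{equation}
which is indeed of the order of the Planck length $\ell_p \approx 1.6\times 10^{-35}\,m$.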
This shows that the reduction of the wave function occurs where the effects of quantum gravity are important. According to these arguments, I take the matter distribution radius (the characteristic width of the wave packet) approximately equal to the Compton wavelength of the particle, i.e. $\sigma_0 \sim \lambda_C = \frac{\hbar}{m}$. Now, relation (\ref{tem1}) gives: \begin{equation}\label{u1} T = \frac{\hbar g}{k_B} \end{equation} which is approximately equal to the Unruh temperature in quantum field theory in curved spacetime\footnote{If we write the Compton wavelength in the form $\lambda_C = \frac{\hbar}{mc}$, we will have $T = \frac{\hbar g}{k_B c}$. In the non-relativistic regime, we can consider $c \longrightarrow \infty$. Then, the Unruh temperature does not have a significant value in the non-relativistic limit.}. It may be appropriate to call it the "reduction temperature". This relation is derived for a particle whose mass is of the order of the Planck mass and whose wave packet width is close to the Schwarzschild radius. Then, I substitute the relations $\sigma_0 = 2Gm$ and $g=\frac{Gm}{\sigma_0^2}$ into relation (\ref{u1}). This gives the reduction temperature in the form \begin{equation}\label{ha} T = \frac{\hbar}{k_B Gm} \end{equation} which is similar to the Hawking temperature\footnote{Because of the Gaussian form of the matter distribution, relations (\ref{u1}) and (\ref{ha}) are obtained by approximation. For example, the value of the gravitational acceleration for the Gaussian distribution is $g=2\sqrt{\frac{2}{\pi}}\frac{Gm}{\sigma_0^2}$, whereas I simply set it equal to the value $\frac{Gm}{\sigma_0^2}$.}. These results are not unlikely, because both in quantum field theory in curved spacetime and in the gravity-induced wave function reduction, acceleration or gravity transforms the pure quantum state into a mixture of states, which can be seen as a thermodynamic system with a specific temperature. \par Relations (\ref{u1}) and (\ref{ha}) were obtained through the concept of geometrization or gravitization of quantum mechanics in the Bohmian context. Naturally, such a derivation, which is based on the study of the quantum motion of the particle, is not possible in the framework of standard quantum mechanics. But what is more important and more mysterious is the hidden underlying physical concept of these phenomena, which appears as a transition from a pure quantum state to a mixture of states. If we give primacy to geometry, the equivalence principle and determinism, then the thermal behavior of the vacuum field suggests that quantum systems are classical statistical systems. But the hidden variables are not known to us, so we cannot have a complete deterministic description of the system. This study highlights the problem of hidden variables. It seems that, to have a better ontological understanding of the quantum world, the issue of hidden variables should be taken into consideration. \section{Conclusion} The results of this research are as follows.\\ Since the structure of Bohmian quantum mechanics can be related to geometric and gravitational concepts (remember the conformal transformation (\ref{c})), the study of the gravity-induced wave function reduction through the concept of the equivalence principle was done in a clear way. The reduction time of the wave function and the critical mass for the transition from the quantum world to the classical world were obtained by applying the principle of equivalence to the quantum motion of the particle.
The use of the equivalence principle in the study of the wave function reduction has not been done before in the Bohmian framework. Such interpretations and results are not possible in usual quantum mechanics. It was shown that the third statement of the weak equivalence principle must be extended in the quantum domain. The temperature mentioned in Ref \cite{RefP3} was extracted systematically. Here, it was argued that such a temperature is due to the particle energy difference between the Einsteinian and Newtonian observers (different vacua in quantum field theory in curved spacetime). It is a concept like the Unruh temperature in quantum field theory in curved spacetime. This is not unlikely, because in both cases gravity or acceleration transforms a pure quantum state into a mixture of states, i.e., into a statistical system with a specific temperature. We never say a thermodynamic system is not deterministic. Rather, we say it is fundamentally a deterministic system. But, because of our ignorance of all the initial conditions or information of the system, we use statistical analysis. This argument can also be valid for a quantum system. Therefore, for an ontological understanding of the quantum world, we must pay more attention to the issue of hidden variables.
\section{Introduction} In this paper, we are interested in the following Gagliardo--Nirenberg inequality: \\ For every $0\leq \alpha_1<\alpha_2$, and for $1\leq p_1, p_2, q \leq \infty$, there holds \begin{equation}\label{-10} \|f\|_{\dot{W}^{\alpha_1, p_1}} \lesssim \| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{q} } \|f \|^\frac{\alpha_1}{\alpha_2}_{\dot{W}^{\alpha_2, p_2}} \,, \end{equation} where $$\frac{1}{p_1} = \frac{1}{q} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2} \frac{\alpha_1}{\alpha_2} \,,$$ and $\dot{W}^{\alpha,p}(\RR^n)$ denotes the homogeneous Sobolev space (see its definition in Section 2). \\ It is known that inequalities of this type play an important role in the analysis of PDEs. When $\alpha_i$, $i=1,2$, are nonnegative integers, \eqref{-10} was obtained independently by Gagliardo \cite{Gag} and Nirenberg \cite{Nir}. After that, inequalities of this type have been studied by many authors in \cite{Brezis1, Brezis2, Brezis3, CDDD, Dao1,DaoLamLu1,DaoLamLu2,Le, LuWheeden,Lu2, Miyazaki,MeRi2003, Van}, and the references cited therein. \\ The case $q=\infty$ can be considered as a limiting case of \eqref{-10}, i.e.: \begin{equation}\label{-12} \|D^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{L^\infty} \|D^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,, \quad \forall f\in L^\infty(\RR^n)\cap \dot{W}^{\alpha_2,p_2}(\RR^n)\,, \end{equation} with $p_1=\frac{p_2\alpha_2}{\alpha_1}$. Obviously, this inequality fails if $\alpha_1=0$. \\ A partial improvement of \eqref{-12} in terms of the {\rm BMO} space was obtained by Meyer and Rivi\`ere \cite{MeRi2003}: \begin{equation}\label{-13} \|D f\|^2_{L^{4}} \lesssim \|f\|_{ \rm{BMO} } \| D^2 f\|_{ L^2 } \,, \end{equation} for all $f\in {\rm BMO}(\mathbb{R}^n) \cap W^{2,2}(\mathbb{R}^n)$. Thanks to \eqref{-13}, the authors proved a regularity result for a class of stationary Yang--Mills fields in high dimension. \\ After that, \eqref{-13} was extended to higher derivatives by the authors in \cite{Strz,Miyazaki}. Precisely, there holds \begin{equation}\label{-14} \|D^{\alpha_1} f\|_{L^{p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}} _{{\rm BMO}} \|D^{\alpha_2} f \|^{\frac{\alpha_1}{\alpha_2}}_{L^{p_2}} \,, \end{equation} for all $f\in {\rm BMO}(\RR^n) \cap W^{\alpha_2,p_2}(\mathbb{R}^n)$, $p_2>1$. \\ Recently, the author et al. \cite{DaoLamLu1} improved \eqref{-14} by means of the homogeneous Besov spaces. For convenience, we recall the result here. \begin{theorem}[see Theorem 1.2, \cite{DaoLamLu1}] \label{TheC} \sl Let $m, k$ be integers with $1\leq k<m$. For every $s\geq 0$, let $f \in \mathcal{S}'(\mathbb{R}^n)$ be such that $ D^m f\in L^{p}(\mathbb{R}^n)$, $1\leq p<\infty$; and $f\in\dot{B}^{-s}(\mathbb{R}^n)$. Then, we have $D^k f\in L^r(\mathbb{R}^n)$, $r=p \left( \frac{m+s}{k+s} \right)$, and \begin{equation}\label{-15} \|D^k f\|_{L^r} \lesssim \|f\|^{\frac{m-k}{m+s}}_{\dot{B}^{-s}} \left\|D^m f \right\|^\frac{k+s}{m+s}_{L^p} \,, \end{equation} where we denote $\dot{B}^{\sigma} = \dot{B}^{\sigma,\infty}_{\infty}$, $\sigma\in\mathbb{R}$ (see the definition of Besov spaces in Section 2). \end{theorem} \begin{remark} Obviously, \eqref{-15} is stronger than \eqref{-14} when $s=0$ since ${\rm BMO}(\mathbb{R}^n) \hookrightarrow \dot{B}^{0}(\mathbb{R}^n)$. We emphasize that \eqref{-15} is still true for $k=0$ when $s>0$. \end{remark} \begin{remark} In studying the space ${\rm BV}(\RR^2)$, A.
Cohen et al. \cite{CDPX} proved \eqref{-15} for the case $k=0, m=p=1, s=n-1, r=\frac{n}{n-1}$ by using wavelet decompositions (see \cite{Le} for the case $k=0, m=1, p\geq 1, r=p\big(\frac{1+s}{s}\big)$, with $s>0$). \end{remark} Inequality \eqref{-10} in terms of fractional Sobolev spaces has been investigated by the authors in \cite{Brezis1, Brezis2, Brezis3,Van} and the references therein. Surprisingly, there is a border line for the limiting case of the Gagliardo--Nirenberg type inequality. In \cite{Brezis1}, Brezis--Mironescu proved that the following inequality \begin{align}\label{-16} \|f\|_{W^{\alpha_1,p_1}}\lesssim \|f\|^\theta_{W^{\alpha,p}} \|f\|^{1-\theta}_{W^{\alpha_2,p_2}} \,, \end{align} with $\alpha_1=\theta \alpha +(1-\theta)\alpha_2$, $\frac{1}{p_1}=\frac{\theta}{p}+\frac{1-\theta}{p_2}$, and $\theta\in(0,1)$, holds if and only if \begin{equation}\label{special-cond} \alpha-\frac{1}{p}< \alpha_2-\frac{1}{p_2} \,.\end{equation} As a consequence of this result, the following inequality \[ \|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|_{L^\infty} \|D f\|_{L^1} \] fails whenever $0<\alpha_1<1$. \\ We note that the limiting case of \eqref{-16} reads as: \begin{equation}\label{-17} \|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|_{L^\infty}\|f\|_{\dot{W}^{\alpha_2,p_2}} \,, \end{equation} where $\alpha_1<\alpha_2$, and $\alpha_1 p_1=\alpha_2 p_2$. \\ When $\alpha_2<1$, Brezis--Mironescu improved \eqref{-17} by means of ${\rm BMO}(\RR^n)$, using the Littlewood--Paley decomposition. Very recently, Van Schaftingen \cite{Van} studied \eqref{-17} for the case $\alpha_2=1$ on a convex open set $\Omega\subset \RR^n$ satisfying a certain condition. In particular, he proved that \begin{equation}\label{-20} \|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{{\rm BMO}} \|D f\|^{\alpha_1}_{L^{p_2}} \end{equation} where $0<\alpha_1<1$, $p_1\alpha_1=p_2$, $p_2>1$. \\ Inspired by the above results, we would like to study \eqref{-10} by means of fractional Sobolev spaces and Besov spaces. Moreover, we also improve the limiting cases \eqref{-17}, \eqref{-20} in terms of $\dot{B}^0(\RR^n)$. \subsection*{Main results} Our first result improves \eqref{-10} by using fractional Sobolev spaces and homogeneous Besov spaces. \begin{theorem}\label{Mainthe} Let $\sigma>0$, and $0\leq \alpha_1<\alpha_2<\infty$. Let $1\leq p_1, p_2 \leq \infty$ be such that $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, and $p_2(\alpha_2+\sigma)>1$. If $f\in \dot{B}^{-\sigma}(\RR^n) \cap \dot{W}^{\alpha_2,p_2}(\RR^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\RR^n)$. Moreover, there is a positive constant $C=C(n,\alpha_1,\alpha_2,p_2, \sigma)$ such that \begin{equation}\label{-3} \|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,. \end{equation} \end{theorem} \begin{remark} Note that \eqref{-3} is not true for the limiting case $\sigma= \alpha_1=0$, $p_1=\infty$, even if \eqref{special-cond} holds, i.e., $\alpha_2-\frac{1}{p_2}>0$. Indeed, if it were the case, then \eqref{-3} would become \[ \|f\|_{L^{\infty}} \lesssim \|f\|_{\dot{B}^{0}} \,. \] Obviously, this inequality cannot hold since $L^\infty(\RR^n)\hookrightarrow {\rm BMO}(\RR^n)\hookrightarrow \dot{B}^0(\RR^n)$. \end{remark} However, if $\alpha_1$ is positive, then \eqref{-3} holds true with $\sigma=0$. This assertion is in the following theorem.
\begin{theorem}\label{Mainthe1} Let $\alpha_2> \alpha_1>0$, and let $1\leq p_1, p_2\leq \infty$ be such that $p_1=\frac{\alpha_2 p_2}{\alpha_1}$, and $\alpha_2 p_2>1$. If $f\in \dot{B}^{0}(\RR^n) \cap \dot{W}^{\alpha_2,p_2}(\RR^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\RR^n)$. Moreover, we have \begin{equation}\label{4.1} \|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,. \end{equation} \end{theorem} Our paper is organized as follows. In the next section, we provide the definitions of fractional Sobolev spaces and homogeneous Besov spaces. Section 3 is devoted to the proofs of Theorems \ref{Mainthe} and \ref{Mainthe1}. Moreover, we also obtain the homogeneous version of \eqref{-16} with an elementary proof, see Lemma \ref{Lem-Hom-Sobolev}. Finally, we prove $\|f\|_{\dot{W}^{s,p}}\approx \|f\|_{\dot{B}^{s,p}}$ for $0<s<1$, $1\leq p<\infty$, in the Appendix. \section{Definitions and preliminary results} \subsection{Fractional Sobolev spaces} \begin{definition}\label{Def-frac-Sob} For any $0<\alpha<1$, and for $1\leq p<\infty$, we denote by $\dot{W}^{\alpha,p}(\RR^n)$ (resp. $W^{\alpha,p}(\RR^n)$) the homogeneous fractional Sobolev space (resp. the inhomogeneous fractional Sobolev space), endowed with the semi-norm: \[ \|f\|_{\dot{W}^{\alpha,p}} = \left(\displaystyle \int_{\RR^n} \int_{\RR^n} \frac{|f(x+h)-f(x)|^p}{|h|^{n+\alpha p}} dhdx \right)^{\frac{1}{p}} \,, \] and the norm \[ \|f\|_{W^{\alpha,p}} = \left(\|f\|^p_{L^p} + \|f\|^p_{\dot{W}^{\alpha,p}} \right)^\frac{1}{p}\,, \] respectively. \end{definition} When $\alpha\geq 1$, we can define the higher order fractional Sobolev space as follows: \\ Denote by $\floor{\alpha}$ the integer part of $\alpha$. Then, we define \[ \|f\|_{\dot{W}^{\alpha,p}} =\left\{ \begin{array}{cl} &\|D^{\floor{\alpha}} f\|_{L^p} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+, \vspace*{0.1in}\\ & \|D^{\floor{\alpha}}f\|_{\dot{W}^{\alpha-\floor{\alpha},p}} ,\quad\text{otherwise}\,. \end{array}\right. \] In addition, we also define \[ \|f\|_{W^{\alpha,p}} =\left\{ \begin{array}{cl} &\left( \|f\|^p_{L^p} + \|D^{\alpha} f\|^p_{L^p} \right)^{\frac{1}{p}} ,\quad \text{if }\, \alpha\in\mathbb{Z}^+, \\ & \left( \|f\|^p_{W^{\floor{\alpha},p}} + \|D^{\floor{\alpha}} f\|^p_{\dot{W}^{\alpha-\floor{\alpha},p}} \right)^{\frac{1}{p}} ,\quad\text{otherwise}\,. \end{array}\right. \] \subsection*{Notation} Throughout the paper, we accept the notation $\dot{W}^{\alpha,\infty}(\RR^n)=\dot{C}^{\alpha}(\RR^n)$, $\alpha\in(0,1)$; and $\dot{W}^{0,p}(\RR^n)=L^p(\RR^n)$, $1\leq p\leq \infty.$ \\ In addition, we always denote by $C$ a constant which may change from line to line. Moreover, the notation $C(\alpha, p,n)$ means that $C$ merely depends on $\alpha, p, n$. Next, we write $A \lesssim B$ if there exists a constant $c > 0$ such that $A \leq cB$. And we write $A \approx B$ iff $A \lesssim B \lesssim A$. \subsection{Besov spaces} To define the homogeneous Besov spaces, we recall the Littlewood--Paley decomposition (see \cite{Triebel}). Let $\phi_j(x)$ be the inverse Fourier transform of the $j$-th component of the dyadic decomposition, i.e., $\widehat{\phi_j}(\xi)=\hat{\phi}(2^{-j}\xi)$, where $$\sum_{j\in \mathbb{Z}} \hat{\phi}(2^{-j} \xi ) =1 \quad \text{for all } \xi\neq 0 \,,$$ and $ {\rm supp}( \hat{\phi})\subset \left\{ \frac{1}{2} < |\xi| < 2 \right\}$. \\ Next, let us put $$ \mathcal{Z}(\mathbb{R}^n) = \left\{ f \in \mathcal{S}(\mathbb{R}^n): D^\alpha \hat{f}(0) = 0 \text{ for every multi-index } \alpha \in \mathbb{N}^n \right\} \,,$$ where $\mathcal{S}(\mathbb{R}^n)$ is the Schwartz space as usual.
\begin{definition}\label{Def1} For every $s\in\mathbb{R}$, and for every $1\leq p, q\leq \infty$, the homogeneous Besov space is defined by $$\dot{B}^s_{p,q} =\left\{ f\in \mathcal{Z}'(\mathbb{R}^n): \|f\|_{\dot{B}^s_{p,q}} <\infty \right\} \,,$$ with $$ \|f\|_{\dot{B}^s_{p,q}} = \left\{ \begin{array}{cl} &\left( \displaystyle \sum_{j\in\mathbb{Z}} 2^{jsq} \|\phi_j * f\|^q_{L^p} \right)^\frac{1}{q}\,, \text{ if }\, 1\leq q<\infty, \vspace{0.1in} \\ & \displaystyle\sup_{ j \in\mathbb{Z} } \left\{ 2^{js} \|\phi_j * f\|_{L^p} \right\} \,, \text{ if }\, q=\infty \,. \end{array} \right. $$ When $p=q=\infty$, we write $\dot{B}^s_{\infty,\infty}=\dot{B}^s$ for short. \end{definition} The following characterization of $\dot{B}^{s}_{\infty,\infty}$ is useful for our proof below. \begin{theorem}[see Theorem 4, p. 164, \cite{Peetre}]\label{ThePeetre} Let $\big\{\varphi_\varepsilon\big\}_\varepsilon$ be a family of functions such that \[\left\{ \begin{array}{cl} &{\rm supp}(\varphi_\varepsilon)\subset B(0,\varepsilon) , \quad \big\{ \frac{1}{2\varepsilon}\leq |\xi|\leq \frac{2}{\varepsilon} \big\}\subset \big\{\widehat{\varphi_\varepsilon}(\xi) \not=0 \big\} , \vspace*{0.1in}\\ &\int_{\RR^n} x^\gamma \varphi_\varepsilon (x)\, dx =0 \,\text{ for all multi-indices }\, |\gamma|<k, \text{ where $k$ is a given integer}, \vspace*{0.1in}\\ & \big|D^\gamma \varphi_\varepsilon(x)\big| \leq C \varepsilon^{-(n+|\gamma|)}\, \text{ for every multi-index } \gamma\,. \end{array} \right.\] Assume $s<k$. Then, we have \[ f\in \dot{B}^s(\RR^n) \Leftrightarrow \sup_{\varepsilon>0} \left\{\varepsilon^{-s} \|\varphi_\varepsilon * f\|_{L^\infty} \right\} < \infty \,. \] \end{theorem} We end this section by recalling the following result (see \cite{DaoLamLu1}). \begin{proposition}[Lifting operator]\label{Pro1} Let $s\in\mathbb{R}$, and let $\gamma$ be a multi-index. Then, $\partial^\gamma$ maps $\dot{B}^s(\RR^n) \rightarrow \dot{B}^{s-|\gamma|}(\RR^n)$. \end{proposition} \section{Proof of the Theorems} \subsection{Proof of Theorem \ref{Mainthe}} We first prove Theorem \ref{Mainthe} for the case $0\leq \alpha_1<\alpha_2\leq 1$. After that, we treat the remaining case $\alpha_2>1$. \\ {\bf i) Step 1: $0\leq \alpha_1<\alpha_2 \leq 1$.} We divide our argument into the following cases. \\ {\bf a) The case $p_1=p_2=\infty$, $0< \alpha_1<\alpha_2 <1$.} Then, \eqref{-3} becomes \begin{equation}\label{1.0} \|f\|_{\dot{C}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{C}^{\alpha_2}} \,. \end{equation} To prove \eqref{1.0}, we use the characterization of the homogeneous Besov space $\dot{B}^{s}$ in Theorem \ref{ThePeetre}, together with the fact that $\dot{B}^s(\RR^n)$ coincides with $\dot{C}^s(\RR^n)$ for $s\in(0,1)$ (see \cite{Grevholm}). \\ Then, let us recall the family $\{\varphi_\varepsilon\}_{\varepsilon>0}$ from Theorem \ref{ThePeetre}. \\ For $\delta>0$, we write \begin{align}\label{2.-1} \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} &=\varepsilon^{\alpha_2-\alpha_1} \varepsilon^{-\alpha_2}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon<\delta\big\}}+ \varepsilon^{-(\alpha_1+\sigma)} \varepsilon^{\sigma}\|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon\geq \delta\big\}} \\ &\leq \delta^{\alpha_2-\alpha_1} \|f\|_{\dot{B}^{\alpha_2}} +\delta^{-(\alpha_1+\sigma)} \|f\|_{\dot{B}^{-\sigma}} \,.
\nonumber \end{align} Minimizing the right hand side of the last inequality with respect to $\delta$ yields \begin{align*} \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty}\lesssim \|f\|_{\dot{B}^{-\sigma}}^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}} \|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}\,. \end{align*} Since the last inequality holds for every $\varepsilon>0$, we obtain \eqref{1.0}. \begin{remark}\label{Rem2} It is not difficult to observe that the above proof also adapts to the following two cases: \begin{enumerate} \item[$\bullet$] $\alpha_1=0$, $\alpha_2<1$, $\sigma>0$. Then, we have \begin{equation}\label{2.-3} \|f\|_{L^\infty} \lesssim \|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|_{\dot{B}^{\alpha_2}}^{\frac{\sigma}{\alpha_2+\sigma}}\,. \end{equation} \item[$\bullet $] $0<\alpha_1<\alpha_2<1$, $\sigma=0$. Then, we have \begin{equation}\label{2.-2} \|f\|_{\dot{B}^{\alpha_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}} \|f\|_{\dot{B}^{\alpha_2}}^{\frac{\alpha_1}{\alpha_2}}\,. \end{equation} \end{enumerate} The latter is Theorem \ref{Mainthe1} for $p_i=\infty$, $i=1, 2$. \end{remark} To end part {\bf a)}, it remains to prove \eqref{-3} for the case $\alpha_2=1$. That is, \begin{equation}\label{2.-4} \|f\|_{\dot{B}^{\alpha_1}} \lesssim \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}} \|Df\|_{L^\infty}^{\frac{\alpha_1+\sigma}{1+\sigma}}\,. \end{equation} The proof is similar to the one of \eqref{1.0}. Hence, it suffices to prove that \begin{equation}\label{2.-5} \varepsilon^{-\alpha_1} \|\varphi_\varepsilon * f\|_{L^\infty} \mathbf{1}_{\big\{\varepsilon<\delta\big\}} \lesssim \delta^{1-\alpha_1} \|Df\|_{L^\infty} \,. \end{equation} Indeed, using the vanishing moments of $\varphi_\varepsilon$ and the mean value theorem yields \begin{align*} \big|\varphi_\varepsilon * f(x)\big| &=\big| \int_{B(0,\varepsilon)} (f(x)-f(x-y)) \varphi_\varepsilon (y)\, dy \big| \\ &\leq \int_{B(0,\varepsilon)} \|Df\|_{L^\infty} |y| |\varphi_\varepsilon (y)|\, dy \leq \varepsilon \|\varphi_\varepsilon\|_{L^1} \|Df\|_{L^\infty} \lesssim \varepsilon \|Df\|_{L^\infty} \,. \end{align*} Thus, \eqref{2.-5} follows easily. \\ By repeating the proof of \eqref{2.-1}, we obtain \eqref{2.-4}. \\ {\bf b) The case $p_i<\infty, \,i=1,2$.} Then, the proof follows by way of the following lemmas. \begin{lemma}\label{Lem10} Let $0<\alpha< 1$, and $1\leq p<\infty$. For every $s>0$, if $f\in \dot{B}^{-s}(\mathbb{R}^n)\cap \dot{W}^{\alpha,p}(\RR^n)$, then there exists a positive constant $C=C(s,\alpha,p)$ such that \begin{equation}\label{1.1} |f(x)| \leq C \| f \|_{\dot{B}^{-s}}^\frac{\alpha}{s+\alpha} \big[ \mathbf{G}_{\alpha,p}(f)(x)\big]^{\frac{s}{s+\alpha}}\,, \quad \text{for } x\in\mathbb{R}^n\,, \end{equation} with $$\mathbf{G}_{\alpha,p}(f)(x)= \displaystyle\sup_{\varepsilon>0} \left(\fint_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{\alpha p}} dy \right)^\frac{1}{p} \,.$$ \end{lemma} \begin{remark}\label{Rem6} When $\alpha=1$, then \eqref{1.1} becomes \begin{equation}\label{1.1b} |f(x)|\leq C\|f\|_{\dot{B}^{-s}}^\frac{1}{s+1} \big[\mathbf{M}(|Df|)(x)\big]^{\frac{s}{s+1}}\,, \quad \text{for } x\in\mathbb{R}^n\,. \end{equation} This inequality was obtained by the authors in \cite{DaoLamLu1}. As a result, we get \begin{equation}\label{1.1a} \|f\|_{L^{p_1}} \lesssim \| f \|_{\dot{B}^{-s}}^\frac{1}{s+1} \|Df\|^{\frac{s}{s+1}}_{L^{p_2}}\,, \end{equation} with $p_1=p_2\big(\frac{s+1}{s}\big)$, $p_2\geq 1$.
\\ This is also Theorem \ref{Mainthe} when $\alpha_1=0$, $\alpha_2=1$, $s=\sigma>0$. \end{remark} \begin{remark}\label{Rem4} Obviously, for $1\leq p<\infty$ we have $\|\mathbf{G}_{\alpha,p}(f)\|_{L^p}\lesssim \|f\|_{\dot{W}^{\alpha,p}}$, and $\mathbf{G}_{\alpha,1}(f)(x)\leq \mathbf{G}_{\alpha,p}(f)(x)$ for $x\in\RR^n$. \\ Next, applying Lemma \ref{Lem10} with $s=\sigma$, $\alpha=\alpha_2$, $p=p_2$, and taking the $L^{p_1}$-norm of \eqref{1.1} yield \[ \|f\|_{L^{p_1}} \lesssim \|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \left( \int \big|\mathbf{G}_{\alpha_2,p_2}(f)(x)\big|^{\frac{\sigma p_1}{\sigma+\alpha_2}} \, dx \right)^{1/p_1} \leq \|f\|_{\dot{B}^{-\sigma}}^\frac{\alpha_2}{\sigma+\alpha_2} \|f\|^{\frac{\sigma}{\sigma+\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,, \] with $p_1= p_2 \big(\frac{\sigma+\alpha_2}{\sigma}\big)$. \\ Hence, we obtain Theorem \ref{Mainthe} for the case $\alpha_1=0$. \end{remark} \begin{proof}[Proof of Lemma \ref{Lem10}] Let us recall the family $\{\varphi_\varepsilon\}_{\varepsilon>0}$ above. Then, we have from the triangle inequality that \begin{align*} |f(x)| \leq | \varphi_\varepsilon * f(x)| + | f(x)-\varphi_\varepsilon * f(x)| =:\mathbf{I}_1+ \mathbf{I}_2 \,. \end{align*} We first estimate $\mathbf{I}_1$ in terms of $\dot{B}^{-s}$. Thanks to Theorem \ref{ThePeetre}, we get \begin{align}\label{1.2} \mathbf{I}_1 = \varepsilon^{-s} \varepsilon^{s} | \varphi_\varepsilon * f(x)| \leq C \varepsilon^{-s} \| f \|_{\dot{B}^{-s}} \,. \end{align} For $\mathbf{I}_2$, applying H\"older's inequality yields \begin{align}\label{1.3} \mathbf{I}_2 &\leq \int_{B(0,\varepsilon)} |f(x)-f(x-y)| |\varphi_\varepsilon (y)| \, dy = \varepsilon^{\frac{n}{p}+\alpha} \int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|}{\varepsilon^{\frac{n}{p}+\alpha}} |\varphi_\varepsilon (y)| \, dy \nonumber \\ &\leq \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{p'}} \left(\int_{B(0,\varepsilon)} \frac{|f(x)-f(x-y)|^p}{\varepsilon^{n+\alpha p}} dy \right)^\frac{1}{p} \nonumber \\ &\lesssim \varepsilon^{\frac{n}{p}+\alpha} \|\varphi_\varepsilon\|_{L^{\infty}} \big|B(0,\varepsilon)\big|^{\frac{1}{p'}} \mathbf{G}_{\alpha,p}(f)(x) \lesssim \varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(f)(x) \,. \end{align} Note that the last inequality follows from the fact that $\|\varphi_\varepsilon\|_{L^{\infty}}\leq C \varepsilon^{-n}$. \\ By combining \eqref{1.2} and \eqref{1.3}, we obtain \[|f(x)|\leq C\left(\varepsilon^{-s} \| f \|_{\dot{B}^{-s}} +\varepsilon^{\alpha} \mathbf{G}_{\alpha,p}(f)(x)\right) \,. \] Since the indicated inequality holds for every $\varepsilon>0$, minimizing its right hand side with respect to $\varepsilon$ yields the desired result. \\ This completes the proof of Lemma \ref{Lem10}. \end{proof} Next, we have the following lemma. \begin{lemma}\label{Lem11} Let $0<\alpha_1<\alpha_2< 1$. Let $1\leq p_1, p_2 <\infty$, and $r>1$ be such that \begin{equation}\label{1.6} \frac{1}{p_1}= \frac{1}{r} \left(1-\frac{\alpha_1}{\alpha_2}\right) + \frac{1}{p_2}\frac{\alpha_1}{\alpha_2} \,. \end{equation} If $f\in L^r(\RR^n)\cap \dot{W}^{\alpha_2,p_2}(\RR^n)$, then $f\in \dot{W}^{\alpha_1,p_1}(\RR^n)$. In addition, there exists a constant $C=C(\alpha_1,\alpha_2,p_1,p_2,n)>0$ such that \begin{equation}\label{1.8} \|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C \| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} } \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,.
\end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{Lem11}] For any set $\Omega$ in $\mathbb{R}^n$, let us denote $\fint_\Omega f(x) \, dx = \frac{1}{|\Omega|} \int_{\Omega} f(x) \, dx$. \\ For any $x, z\in\mathbb{R}^n$, we have from the triangle inequality and a change of variables that \begin{align*} \big|f(x+z)-f(x)\big|&\leq \big|f(x+z) - \fint_{B(x,|z|)} f(y)\,dy \big|+\big|f(x) - \fint_{B(x,|z|)} f(y)\,dy \big| \\ &\leq \fint_{B(x,|z|)} \big|f(x+z) -f(y) \big| \, dy+ \fint_{B(x,|z|)} \big|f(x)-f(y) \big| \, dy \\ &\leq C(n) \left( \fint_{B(0,2|z|)} \big|f(x+z) -f(x+z+y) \big| \, dy+ \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy \right)\,. \end{align*} With the last inequality noted, and by using a change of variables, we get \begin{align}\label{1.10} \int\int \frac{|f(x+z)-f(x)|^{p_1}}{|z|^{n+\alpha_1 p_1}} dzdx \lesssim \int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}}\,. \end{align} Next, for every $p\geq 1$ we show that \begin{align}\label{1.13} \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p}(f)(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}}. \end{align} Thanks to Remark \ref{Rem4}, it suffices to show that \eqref{1.13} holds for $p=1$. \\ Indeed, we have \begin{align}\label{1.14} \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} &= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|^{\alpha_2}} \, dy\right)^{p_1} \frac{|z|^{\alpha_2 p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber \\ &\lesssim \left[ \mathbf{G}_{\alpha_2,1}(f)(x)\right]^{p_1} \int_{\{|z|<t\}} \frac{1}{|z|^{n+(\alpha_1-\alpha_2) p_1}} dz \nonumber \\ &\lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(f)(x)\right]^{p_1}\,. \end{align} On the other hand, it is not difficult to observe that \begin{align}\label{1.15} \int_{|z|\geq t} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} &\lesssim \big[\mathbf{M}(f)(x)\big]^{p_1} \left( \int_{|z|\geq t}\frac{dz}{|z|^{n+\alpha_1 p_1}} \right) \nonumber \\ &\lesssim t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,. \end{align} From \eqref{1.14} and \eqref{1.15}, we obtain \begin{align*} \int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(\alpha_2-\alpha_1) p_1} \left[ \mathbf{G}_{\alpha_2,1}(f)(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,. \end{align*} Minimizing the right hand side of the last inequality with respect to $t$ yields \eqref{1.13}. \\ Then, it follows from \eqref{1.13} that \[ \int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \int \big[\mathbf{M}(f)(x)\big]^{\frac{(\alpha_2-\alpha_1)p_1}{\alpha_2}} \left[\mathbf{G}_{\alpha_2,p_2}(f)(x) \right]^{\frac{\alpha_1 p_1}{\alpha_2}} dx\,. \] Note that $\alpha_2 p_2>\alpha_1 p_1$, and $r=\frac{p_1p_2(\alpha_2-\alpha_1)}{\alpha_2p_2-\alpha_1p_1}$, see \eqref{1.6}.
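For the reader's convenience, we record the elementary computation behind this expression for $r$; it is nothing but \eqref{1.6} solved for $r$: \[ \frac{1}{r}\Big(1-\frac{\alpha_1}{\alpha_2}\Big) = \frac{1}{p_1}-\frac{\alpha_1}{\alpha_2 p_2} = \frac{\alpha_2 p_2-\alpha_1 p_1}{\alpha_2\, p_1 p_2} \,, \qquad \text{whence} \qquad r=\frac{p_1p_2(\alpha_2-\alpha_1)}{\alpha_2 p_2-\alpha_1 p_1}\,. \]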
Then, applying H\"older's inequality with $\big((\frac{\alpha_2p_2}{\alpha_1p_1})^\prime, \frac{\alpha_2p_2}{\alpha_1p_1}\big)$ to the right hand side of the last inequaltiy yields \begin{align*} \int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} &\lesssim \big\|\mathbf{M}(f)\big\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|\mathbf{G}_{\alpha_2,p_2}\|^{\frac{\alpha_1p_1}{\alpha_2}}_{L^{p_2}} \,. \end{align*} Thanks to Remark \ref{Rem4}, and by the fact that $\mathbf{M}$ maps $L^r(\mathbb{R}^n)$ into $L^r(\mathbb{R}^n)$ $r>1$, we deduce from the last inequality that \begin{align}\label{1.18} \int\int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dzdx}{|z|^{n+\alpha_1 p_1}} \lesssim \|f\|^{\frac{(\alpha_2-\alpha_1) p_1}{\alpha_2}}_{L^r} \|f\|^{\frac{\alpha_1 p_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,. \end{align} Combining \eqref{1.10} and \eqref{1.18} yields \eqref{1.8}. \\ Hence, we obtain Lemma \ref{Lem11}. \end{proof} Now, we can apply Lemma \ref{Lem10} and Lemma \ref{Lem11} alternatively to get Theorem \ref{Mainthe} for the case $0<\alpha_2<1$. Indeed, we apply \eqref{1.1} to $s=\sigma$, $\alpha=\alpha_2$, $p=p_2$. Then, \begin{align}\label{1.20} \|f\|_{L^q} & \lesssim \|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}^{\frac{\sigma}{\alpha_2+\sigma}}_{\alpha_2,p_2}\big\|_{L^q} = \|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|\mathbf{G}_{\alpha_2,p_2}\big\|^{\frac{\sigma}{\alpha_2+\sigma}}_{L^{p_2}} \leq \|f\|^{\frac{\alpha_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,, \end{align} with $q=p_2\big(\frac{\alpha_2+\sigma}{\sigma}\big)$. \\ Since $p_1=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$, then it follows from \eqref{1.6} that $r=q>1$. \\ Next, applying Lemma \ref{Lem11} yields \begin{equation}\label{1.19} \|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \| f \|^{1-\frac{\alpha_1}{\alpha_2}}_{ L^{r} } \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \|f\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\,. \end{equation} Hence, we obtain Theorem \ref{Mainthe} for the case $0\leq \alpha_1< \alpha_2<1$, $p_i<\infty$, $i=1,2$. \\ To end {\bf Step 1}, it remains to study the case $\alpha_2=1$, i.e: \begin{equation} \label{1.22a} \|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}} \|Df\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}} \,. \end{equation} This can be done if we show that \begin{equation}\label{1.21} \|f\|_{\dot{W}^{\alpha_1,p_1}} \leq C \| f \|^{1-\alpha_1}_{ L^{r} } \|Df\|^{\alpha_1}_{L^{p_2}} \,, \end{equation} with $1\leq r<\infty$, $\frac{1}{p_1}= \frac{1-\alpha_1}{r} + \frac{\alpha_1}{p_2}$. \\ Indeed, a combination of \eqref{1.21} and \eqref{1.1a} implies that \begin{align*} \|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{1-\alpha_1}_{L^r} \|Df\|^{\alpha_1}_{L^{p_2}} \lesssim \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}} \|D f\|^{\frac{\sigma(1-\alpha_1)}{1+\sigma}}_{L^{p_2}} \|Df\|^{\alpha_1}_{L^{p_2}} = \|f\|^{\frac{1-\alpha_1}{1+\sigma}}_{\dot{B}^{-\sigma}} \|D f\|^{\frac{\alpha_1+\sigma}{1+\sigma}}_{L^{p_2}}\,. \end{align*} Note that $p_1=p_2\big( \frac{1+\sigma}{\alpha_1+\sigma}\big)$, and $r=p_2\big( \frac{1+\sigma}{\sigma}\big)$. \\ Hence, we obtain Theorem \ref{Mainthe} when $\alpha_2=1$. 
\\ Now, it remains to prove \eqref{1.21}. We note that \eqref{1.21} was proved for $p_2=1$ (see, e.g., \cite{Brezis3, CDPX}). In fact, one can modify the proofs in \cite{Brezis3, CDPX} in order to obtain \eqref{1.21} for the case $1<p_2<\infty$. However, for completeness, we give the proof of \eqref{1.21} for $1<p_2<\infty$. \\ To obtain the result, we prove a version of \eqref{1.13} in terms of $\mathbf{M}(|Df|)(x)$ instead of $\mathbf{G}_{1,p}(f)(x)$. Precisely, we show that \begin{align}\label{1.23} \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim \big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1} \end{align} for $x\in\RR^n$. \\ Indeed, it follows from the mean value theorem and a change of variables that \begin{align*} \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy &\lesssim \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|y|} \, dy \\ &= \fint_{B(0,2|z|)} \frac{\big|\int^1_0 D f(x+\tau y) \cdot y\, d\tau\big| }{|y|} \, dy \\ &\leq \int^1_0\fint_{B(x,2\tau|z|)} | D f(\zeta) | \, d\zeta d\tau \leq \int^1_0 \mathbf{M}(|Df|)(x) \, d\tau = \mathbf{M}(|Df|)(x) \,. \end{align*} Thus, \begin{align}\label{1.24} \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} &= \int_{\{|z|<t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x)-f(x+y) \big|}{|z|} \, dy\right)^{p_1} \frac{|z|^{p_1} dz}{|z|^{n+\alpha_1 p_1}} \nonumber \\ &\lesssim \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \int_{\{|z|<t\}} |z|^{-n+(1-\alpha_1)p_1} \, dz \nonumber \\ &\lesssim t^{(1-\alpha_1)p_1} \big[\mathbf{M}(|Df|)(x) \big]^{p_1} \,. \end{align} From \eqref{1.24} and \eqref{1.15}, we obtain \begin{align}\label{1.25} \int \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{(1-\alpha_1) p_1} \left[ \mathbf{M}(|Df|)(x)\right]^{p_1} +t^{-\alpha_1 p_1}\big[\mathbf{M}(f)(x)\big]^{p_1} \,. \end{align} Hence, \eqref{1.23} follows by minimizing the right hand side of \eqref{1.25} with respect to $t$. If $p_2>1$, then we apply H\"older's inequality in \eqref{1.23} in order to get \begin{align*} \|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz dx}{|z|^{n+\alpha_1 p_1}} \\ &\lesssim \int\big[\mathbf{M}(f)(x)\big]^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{\alpha_1 p_1} dx \\ &\leq \|\mathbf{M}(f)\|^{(1-\alpha_1)p_1}_{L^r} \|\mathbf{M}(|Df|)\|^{\alpha_1 p_1}_{L^{p_2}} \\ &\lesssim \|f\|^{(1-\alpha_1)p_1}_{L^r} \|Df\|^{\alpha_1 p_1}_{L^{p_2}}\,, \end{align*} where $r=p_2\big(\frac{1+\sigma}{\sigma}\big)>1$. Note that the last inequality follows from the $L^{p}$-boundedness of $\mathbf{M}$, $p>1$. Thus, we get \eqref{1.21}. \\ This completes {\bf Step 1}. \\ {\bf ii) Step 2.} Now, we prove Theorem \ref{Mainthe} for the remaining case $\alpha_2>1$. To begin, let us write $\alpha_i=\floor{\alpha_i}+s_i$, $i=1,2$. Then, we divide the proof into the following cases.
\\ {\bf a) The case $\floor{\alpha_2}=\floor{\alpha_1}$:} By applying Theorem \ref{Mainthe} to $D^{\floor{\alpha_1}} f$ with $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$, and by Proposition \ref{Pro1}, we obtain \begin{align*} \big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\| D^{\floor{\alpha_1}} f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} \\ &\lesssim \big\| f \big\|^{\frac{s_2-s_1}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{s_2+\sigma+\floor{\alpha_1}}}_{\dot{W}^{s_2,p_2}} = \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,, \end{align*} with $p_1=p_2\big(\frac{s_2+\sigma_{{\rm new}}}{s_1+\sigma_{{\rm new}}}\big)=p_2\big(\frac{\alpha_2+\sigma}{\alpha_1+\sigma}\big)$. \\ Hence, we get the conclusion in this case. \\ {\bf b) The case $\floor{\alpha_2}>\floor{\alpha_1}$:} If $s_2>0$, then we can apply Theorem \ref{Mainthe} to $D^{\floor{\alpha_2}} f$ with $\sigma_{{\rm new}}=\sigma+\floor{\alpha_2}$. Therefore, \begin{align}\label{1.30} \big\| D^{\floor{\alpha_2}} f \big\|_{L^q} \lesssim \big\|D^{\floor{\alpha_2}} f \big\|^{\frac{s_2}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{B}^{-(\sigma+\floor{\alpha_2})}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\sigma+\floor{\alpha_2}}{s_2+\sigma+\floor{\alpha_2}}}_{\dot{W}^{s_2,p_2}} \lesssim \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\|f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,, \end{align} with $q=p_2\big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big)$. Again, the last inequality follows from the lifting property in Proposition \ref{Pro1}. \\ Next, applying Theorem \ref{Mainthe} to $D^{\floor{\alpha_1}} f$ with $\sigma_{{\rm new}}=\sigma+\floor{\alpha_1}$ yields \begin{align}\label{1.31} \big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} = \big\|D^{\floor{\alpha_1}} f \big\|_{\dot{W}^{s_1,p_1}} &\lesssim \big\|D^{\floor{\alpha_1}} f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-(\sigma+\floor{\alpha_1})}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \nonumber \\ &\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_1}+1} f \big\|^{\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}}_{L^{q_1}} \,, \end{align} with $q_1= p_1\big(\frac{s_1+\sigma+\floor{\alpha_1}}{1+\sigma+\floor{\alpha_1}}\big)$. \\ If $\floor{\alpha_2}=\floor{\alpha_1}+1$, then observe that $q=q_1$. Thus, we deduce from \eqref{1.30} and \eqref{1.31} that \begin{align*} \big\|f\big\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}= \big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,. \end{align*} This yields \eqref{-3}.
\\ Note that $\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}+\frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)}=\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}$ since $\floor{\alpha_2}=\floor{\alpha_1}+1$. \\ If $\floor{\alpha_2}>\floor{\alpha_1}+1$, then we apply \cite[Theorem 1.2]{DaoLamLu1} with $k=\floor{\alpha_1}+1$, $m=\floor{\alpha_2}$. Thus, \begin{align}\label{1.32} \big\| D^{\floor{\alpha_1}+1} f \big\|_{L^{q_1}}\lesssim \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\,, \end{align} with $q_2=q_1 \big(\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}\big)$. \\ Combining \eqref{1.31} and \eqref{1.32} yields \begin{align}\label{1.33} \big\|f \big\|_{\dot{W}^{\alpha_1,p_1}} &\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}}}_{\dot{B}^{-\sigma}} \left( \big\|f\big\|^{\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}}_{\dot{B}^{-\sigma}} \big\|D^{\floor{\alpha_2}} f \big\|^{\frac{\floor{\alpha_1}+1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}}\right)^{\frac{\alpha_1+\sigma}{1+\floor{\alpha_1}+\sigma}} \nonumber \\ &= \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}} \big\| D^{\floor{\alpha_2}} f \big\|^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}}_{L^{q_2}} \,. \end{align} Observe that $q=q_2=p_2 \big(\frac{\alpha_2+\sigma}{\floor{\alpha_2}+\sigma}\big) $. Thus, it follows from \eqref{1.33} and \eqref{1.30} that \begin{align*} \big\| f \big\|_{\dot{W}^{\alpha_1,p_1}} &\lesssim \big\| f \big\|^{\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)}_{\dot{B}^{-\sigma}} \left( \big\| f \big\|^{\frac{s_2}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\floor{\alpha_2}+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}}\right)^{\frac{\alpha_1+\sigma}{\floor{\alpha_2}+\sigma}} \\ &=\big\| f \big\|^{\frac{\alpha_2-\alpha_1}{\alpha_2+\sigma}}_{\dot{B}^{-\sigma}} \big\| f \big\|^{\frac{\alpha_1+\sigma}{\alpha_2+\sigma}}_{\dot{W}^{\alpha_2,p_2}} \,. \end{align*} A straightforward computation shows that \[\frac{1-s_1}{1+\sigma+\floor{\alpha_1}} + \big(\frac{\floor{\alpha_2}-\floor{\alpha_1}-1}{\floor{\alpha_2}+\sigma}\big)\big( \frac{\alpha_1+\sigma}{\floor{\alpha_1}+1+\sigma}\big)+ \frac{s_2(\alpha_1+\sigma)}{(\alpha_2+\sigma)(\floor{\alpha_2}+\sigma)} = \frac{\alpha_2-\alpha_1}{\alpha_2+\sigma} \,.\] This completes the proof of Theorem \ref{Mainthe} for $s_2>0$. \\ The proof of the case $s_2=0$ can be done similarly. We leave the details to the reader. \\ Hence, we complete the proof of Theorem \ref{Mainthe}. \subsection{Proof of Theorem \ref{Mainthe1}} To begin, let us recall the notation $\alpha_i=\floor{\alpha_i}+s_i$, $i=1, 2$. Then, we divide the proof into the following two cases. \\ {\bf i) The case $p_1=p_2=\infty$.} If $0<\alpha_1<\alpha_2<1$, then \eqref{4.1} becomes \begin{equation}\label{2.0} \|f\|_{\dot{C}^{\alpha_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,. \end{equation} Inequality \eqref{2.0} can be obtained easily from the proof of \eqref{1.0} with $\sigma=0$.
Then, we leave the details to the reader. \\ If $0<\alpha_1<1\leq \alpha_2$, and $\alpha_2$ is an integer, then \eqref{4.1} reads as \begin{equation}\label{2.0a} \|f\|_{\dot{C}^{\alpha_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,. \end{equation} To obtain \eqref{2.0a}, we utilize the vanishing moments of $\varphi_\varepsilon$ in Theorem \ref{ThePeetre}. In fact, let us fix $k>\alpha_2$. Then, it follows from Taylor's theorem that \begin{align}\label{2.0c} |\varphi_\varepsilon * f(x)| &=\left|\int\big( f(x-y)-f(x)\big) \varphi_\varepsilon(y)\, dy \right| \nonumber \\ &=\left| \int \left(\sum_{|\gamma|< \alpha_2} \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma + \sum_{|\gamma|= \alpha_2} \frac{D^\gamma f(\zeta)}{\gamma!} (-y)^\gamma \right) \varphi_\varepsilon(y)\,dy \right| \nonumber \\ &=\left| \int \sum_{|\gamma|= \alpha_2} \frac{D^{\gamma} f(\zeta)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right| \end{align} for some $\zeta$ on the segment joining $x$ and $x-y$. Note that $$ \int \frac{D^\gamma f(x)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy=0$$ for every multi-index $|\gamma|<k$, thanks to the vanishing moments of $\varphi_\varepsilon$. \\ Hence, we get from \eqref{2.0c} that \[ |\varphi_\varepsilon * f(x)| \lesssim \|\nabla^{\alpha_2} f\|_{L^\infty} \int_{B(0,\varepsilon)} |y|^{\alpha_2} |\varphi_\varepsilon(y)|\,dy\lesssim \varepsilon^{\alpha_2} \|\nabla^{\alpha_2} f\|_{L^\infty}\,.\] Inserting the last inequality into \eqref{2.-1} yields \[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|\nabla^{\alpha_2} f\|_{L^\infty} +\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,. \] By minimizing the right hand side of the indicated inequality, we get \[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|\nabla^{\alpha_2} f\|^{\frac{\alpha_1}{\alpha_2}}_{L^\infty} \,. \] This implies \eqref{2.0a}. \\ If $0<\alpha_1<1\leq \alpha_2$, and $\alpha_2$ is not an integer, then \eqref{4.1} reads as \begin{equation}\label{2.0b} \|f\|_{\dot{C}^{\alpha_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \left\|\nabla^{\floor{\alpha_2}} f\right\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,. \end{equation} To obtain \eqref{2.0b}, we apply \eqref{2.0c} with $\floor{\alpha_2}$ in place of $\alpha_2$. Thus, \begin{align*} |\varphi_\varepsilon * f(x)| &=\left| \int \sum_{|\gamma|=\floor{\alpha_2}} \frac{D^{\gamma} f(\zeta)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right| \\ &=\left| \int \sum_{|\gamma|= \floor{\alpha_2}} \frac{D^{\gamma} f(\zeta) - D^{\gamma} f(x)}{\gamma!} (-y)^\gamma \varphi_\varepsilon(y)\,dy \right| \\ &\lesssim \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int |x-\zeta|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy \\ &\leq\|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \int_{B(0,\varepsilon)} |y|^{s_2} |y|^{\floor{\alpha_2}} |\varphi_\varepsilon(y)|\,dy \lesssim \varepsilon^{\alpha_2} \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} \,. \end{align*} Thus, \[ \varepsilon^{-\alpha_1}\|\varphi_\varepsilon * f\|_{L^\infty} \lesssim \delta^{\alpha_2-\alpha_1} \|D^{\floor{\alpha_2}} f\|_{\dot{C}^{s_2}} +\delta^{-\alpha_1} \|f\|_{\dot{B}^{0}} \,. \] By the same argument as in the proof of \eqref{2.0a}, we also obtain \eqref{2.0b}. \\ In conclusion, Theorem \ref{Mainthe1} has been proved for the case $0<\alpha_1<1$, $p_1=p_2=\infty$.
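Before turning to the remaining cases, let us record an elementary consistency check, included only as an illustration: the exponents in \eqref{4.1} are forced by scaling. For $f_\lambda(x):=f(\lambda x)$, $\lambda>0$, a change of variables gives \[ \|f_\lambda\|_{\dot{W}^{\alpha,p}} = \lambda^{\alpha-\frac{n}{p}}\, \|f\|_{\dot{W}^{\alpha,p}} \,, \qquad \|f_\lambda\|_{\dot{B}^{0}} = \|f\|_{\dot{B}^{0}} \,. \] Hence the right hand side of \eqref{4.1} scales like $\lambda^{\frac{\alpha_1}{\alpha_2}(\alpha_2-\frac{n}{p_2})}=\lambda^{\alpha_1-\frac{n}{p_1}}$, by the relation $p_1=\frac{\alpha_2 p_2}{\alpha_1}$, which is exactly the scaling of the left hand side.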
\\ Now, if $\alpha_1\geq 1$, then \eqref{4.1} becomes \begin{equation}\label{2.0d} \| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|D^{\floor{\alpha_2}} f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} \,. \end{equation} Again, we note that $\|.\|_{\dot{C}^{s_i}}$ is replaced by $\|.\|_{L^\infty}$ whenever $s_i=0$, $i=1,2$. \\ To obtain \eqref{2.0d}, we apply Theorem \ref{Mainthe} to $f_{{\rm new}}=D^{\floor{\alpha_1}} f$ with $\sigma =\floor{\alpha_1}$. \\ Hence, it follows from Proposition \ref{Pro1} that \begin{align*} \|f\|_{\dot{C}^{\alpha_1}} =\| D^{\floor{\alpha_1}} f \|_{\dot{C}^{s_1}} &\lesssim \big\|D^{\floor{\alpha_1}} f\big\|^{\frac{\alpha_2-\floor{\alpha_1}-s_1}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{B}^{-\floor{\alpha_1}}} \big\|D^{\floor{\alpha_1}} f\big\|^{\frac{s_1+\sigma}{\alpha_2-\floor{\alpha_1}+\sigma}}_{\dot{C}^{\alpha_2-\floor{\alpha_1}}} \\ &\lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}} \big\|D^{\floor{\alpha_2}}f\big\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{s_2}} = \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^{0}} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{C}^{\alpha_2}} \,. \end{align*} This completes the proof of Theorem \ref{Mainthe1} for the case $p_1=p_2=\infty$. \\ {\bf ii) The case $p_i<\infty, i=1,2$.} We first consider the case $0<\alpha_1<1$. \\ {\bf a)} If $\alpha_2\in(\alpha_1, 1)$, then we utilize the equivalence $\|\cdot\|_{\dot{W}^{s,p}} \approx \|\cdot\|_{\dot{B}^{s}_{p,p}}$ for $s\in(0,1)$, $p\geq 1$, see Proposition \ref{Pro-cha} in the Appendix. Therefore, \eqref{4.1} is equivalent to the following inequality \begin{equation}\label{2.1} \|f\|_{\dot{B}^{\alpha_1}_{p_1,p_1}} \lesssim \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^{0}}\|f\|^\frac{\alpha_1}{\alpha_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \,. \end{equation} Note that $\alpha_1 p_1 = \alpha_2 p_2$. Hence, \begin{align}\label{2.2} 2^{j \alpha_1 p_1} \|f*\phi_j\|_{L^{p_1}}^{p_1} \leq 2^{j \alpha_2 p_2} \|f*\phi_j\|_{L^{p_2}}^{p_2} \|f*\phi_j\|_{L^{\infty}}^{p_1-p_2} \leq 2^{j \alpha_2 p_2} \|f*\phi_j\|_{L^{p_2}}^{p_2} \|f\|_{\dot{B}^0}^{p_1-p_2} \,. \end{align} This implies that \[ \|f\|^{p_1}_{\dot{B}^{\alpha_1}_{p_1,p_1}} \leq \|f\|^{p_1-p_2}_{\dot{B}^{0}} \|f\|^{p_2}_{\dot{B}^{\alpha_2}_{p_2,p_2}} \,, \] which is \eqref{2.1}. \\ {\bf b)} If $\alpha_2= 1$, then we show that \begin{equation}\label{2.3} \|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim\|f\|^{1-\alpha_1}_{\dot{B}^{0}} \|Df\|^{\alpha_1}_{L^{p_2}}\,. \end{equation} To obtain \eqref{2.3}, we prove the homogeneous version of \eqref{-16}. \begin{lemma}\label{Lem-Hom-Sobolev} Let $0<\alpha_0<\alpha_1 <\alpha_2\leq 1$, and let $p_0, p_2\geq 1$ be such that $\alpha_0 -\frac{1}{p_0}<\alpha_2-\frac{1}{p_2}$, and \[ \frac{1}{p_1} = \frac{\theta}{p_0} + \frac{1-\theta}{p_2} ,\quad \theta= \frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0} \,. \] Then, we have \begin{equation}\label{2.4} \|f\|_{\dot{W}^{\alpha_1,p_1}} \lesssim \|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0} }_{\dot{W}^{\alpha_0,p_0}} \|f\|^{\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0}}_{\dot{W}^{\alpha_2,p_2}},\quad \forall f\in \dot{W}^{\alpha_0,p_0} (\RR^n) \cap \dot{W}^{\alpha_2,p_2}(\RR^n) \,. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{Lem-Hom-Sobolev}] The proof is quite similar to that of Lemma \ref{Lem10}. Indeed, the proof follows by way of the following result.
\\ If $f\in \dot{W}^{\alpha_0,p_0} (\RR^n) \cap \dot{W}^{\alpha_2,p_2}(\RR^n)$, then the following estimates hold true: \begin{align}\label{2.5a} \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x) \big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1} \end{align} provided that $\alpha_2<1$, and \begin{align}\label{2.5} \int \left( \fint_{B(0,2|z|)} \big|f(x)-f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim\big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1} \end{align} if $\alpha_2=1$. \\ The proof of \eqref{2.5a} (resp. \eqref{2.5}) can be done similarly to that of \eqref{1.13} (resp. \eqref{1.23}). Therefore, we only need to replace $\mathbf{M}(f)(x)$ by $\mathbf{G}_{\alpha_0,p_0}(f)(x)$ in \eqref{1.13} (resp. \eqref{1.23}). \\ In fact, we have from H\"{o}lder's inequality \begin{align}\label{2.6} \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} &\leq \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big|^{p_0} \, dy\right)^{\frac{p_1}{p_0}} \frac{dz}{|z|^{n+\alpha_1 p_1}} \nonumber \\ &= \int_{\{|z|\geq t\}} \left( \fint_{B(0,2|z|)} \frac{\big|f(x) - f(x+y) \big|^{p_0}}{|z|^{\alpha_0 p_0}} \, dy\right)^{\frac{p_1}{p_0}} \frac{|z|^{\alpha_0 p_1}dz}{|z|^{n+\alpha_1 p_1}} \nonumber \\ &\lesssim \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \int_{\{|z|\geq t\}} |z|^{-n-(\alpha_1-\alpha_0)p_1} \, dz\nonumber \\ &\lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1} \,. \end{align} If $\alpha_2<1$, then it follows from \eqref{2.6} and \eqref{1.14} that \begin{align*} \int_{\RR^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(\alpha_2-\alpha_1)p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{p_1} \,. \end{align*} Thus, \eqref{2.5a} follows by minimizing the right hand side of the indicated inequality with respect to $t$. \\ Next, applying H\"older's inequality in \eqref{2.5a} with the pair $\big(\frac{p_0(\alpha_2-\alpha_0)}{p_1(\alpha_2-\alpha_1)},\frac{p_2(\alpha_2-\alpha_0)}{p_1(\alpha_1-\alpha_0)}\big)$ yields \begin{align*} \|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\RR^n} \int_{\RR^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz\,dx}{|z|^{n+\alpha_1 p_1}} \\ &\lesssim \int \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1} \big[\mathbf{G}_{\alpha_2,p_2}(f)(x)\big]^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1} dx \\ &\leq \big\| \mathbf{G}_{\alpha_0,p_0}(f) \big\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{L^{p_0}} \big\| \mathbf{G}_{\alpha_2,p_2}(f) \big\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{L^{p_2}} \\ &\leq \|f\|^{(\frac{\alpha_2-\alpha_1}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_0,p_0}} \|f\|^{(\frac{\alpha_1-\alpha_0}{\alpha_2-\alpha_0})p_1}_{\dot{W}^{\alpha_2,p_2}} \,. \end{align*} Note that the last inequality is obtained by Remark \ref{Rem4}. Hence, we get \eqref{2.4} for $\alpha_2<1$.
\\ If $\alpha_2=1$, then it follows from \eqref{2.6} and \eqref{1.24} that \begin{align*} \int_{\RR^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz}{|z|^{n+\alpha_1 p_1}} \lesssim t^{-(\alpha_1-\alpha_0)p_1} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{p_1}+ t^{(1-\alpha_1)p_1} \left[\mathbf{M}(|Df|)(x) \right]^{p_1} \end{align*} which implies \eqref{2.5}. \\ By applying H\"older's inequality with the pair $\big(\frac{p_0(1-\alpha_0)}{p_1(1-\alpha_1)},\frac{p_2(1-\alpha_0)}{p_1(\alpha_1-\alpha_0)}\big)$, we obtain \begin{align*} \|f\|^{p_1}_{\dot{W}^{\alpha_1,p_1}} &\lesssim \int_{\RR^n} \int_{\RR^n} \left( \fint_{B(0,2|z|)} \big|f(x) - f(x+y) \big| \, dy\right)^{p_1} \frac{dz\,dx}{|z|^{n+\alpha_1 p_1}} \\ &\lesssim \int_{\RR^n} \big[\mathbf{G}_{\alpha_0,p_0}(f)(x)\big]^{(\frac{1-\alpha_1}{1-\alpha_0})p_1} \left[\mathbf{M}(|Df|)(x) \right]^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1} dx \\ &\leq \big\|\mathbf{G}_{\alpha_0,p_0}(f)\big\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{L^{p_0}} \big\|\mathbf{M}(|Df|) \big\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}} \\ &\lesssim \|f\|^{(\frac{1-\alpha_1}{1-\alpha_0})p_1}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{(\frac{\alpha_1-\alpha_0}{1-\alpha_0})p_1}_{L^{p_2}} \,. \end{align*} This yields \eqref{2.4} for $\alpha_2=1$. \\ This completes the proof of Lemma \ref{Lem-Hom-Sobolev}. \end{proof} Now, we apply Lemma \ref{Lem-Hom-Sobolev} with $\alpha_2=1$ in order to obtain \begin{align*} \|f\|_{\dot{W}^{\alpha_1, p_1}}\lesssim \|f\|^{\frac{1-\alpha_1}{1-\alpha_0}}_{\dot{W}^{\alpha_0, p_0}} \|Df\|^{\frac{\alpha_1-\alpha_0}{1-\alpha_0}}_{L^{p_2}} \,, \end{align*} where $\alpha_0, p_0$ are chosen as in Lemma \ref{Lem-Hom-Sobolev}. \\ After that, we have from \eqref{2.1} that \[ \|f\|_{\dot{W}^{\alpha_0, p_0}} \lesssim \|f\|^{1-\frac{\alpha_0}{\alpha_1}}_{\dot{B}^0} \|f\|^{\frac{\alpha_0}{\alpha_1}}_{\dot{W}^{\alpha_1, p_1}} \,.\] Combining the last two inequalities yields the desired result. \\ {\bf The case $\alpha_2>1$.} \\ If $\alpha_2$ is not an integer, then we apply Theorem \ref{Mainthe} with $\sigma=\floor{\alpha_2}$ to get \begin{align}\label{2.7} \| D^{\floor{\alpha_2}} f \|_{L^q} \lesssim \| D^{\floor{\alpha_2}} f \|^{\frac{s_2}{\alpha_2}}_{\dot{B}^{-\floor{\alpha_2}}} \|D^{\floor{\alpha_2}} f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{s_2,p_2}} \lesssim \|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,, \end{align} with $q=p_2\frac{\alpha_2}{\floor{\alpha_2}}$. Recall that $\alpha_2=\floor{\alpha_2}+s_2$. If $\floor{\alpha_2}=1$, then it follows from \eqref{2.3} and the last inequality that \begin{align*} \|f\|_{\dot{W}^{\alpha_1,p_1}}\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^q} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left( \|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\alpha_1}=\|f\|^{\frac{\alpha_2-\alpha_1}{\alpha_2}}_{\dot{B}^0}\|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,, \end{align*} with $q=\alpha_1 p_1=\alpha_2 p_2$ since $\floor{\alpha_2}=1$. \\ This yields \eqref{4.1} when $\floor{\alpha_2}=1$. If $\floor{\alpha_2}>1$, then we can apply Theorem \ref{TheC} in order to get \begin{align*} \|Df\|_{L^{q_1}} \lesssim \|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \,, \end{align*} with $q_1=\alpha_1 p_1$, and $q_2 = \frac{q_1}{\floor{\alpha_2}} = \frac{\alpha_2 p_2}{\floor{\alpha_2}}$.
\\ A combination of the last inequality, \eqref{2.7}, and \eqref{2.3} implies that \begin{align*} \|f\|_{\dot{W}^{\alpha_1,p_1}} &\lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \|Df\|^{\alpha_1}_{L^{q_1}} \lesssim \|f\|^{1-\alpha_1}_{\dot{B}^0} \left(\|f\|^{\frac{\floor{\alpha_2}-1}{\floor{\alpha_2}}}_{\dot{B}^0} \big\|D^{\floor{\alpha_2}}f \big\|^{\frac{1}{\floor{\alpha_2}}}_{L^{q_2}} \right)^{\alpha_1} \\ & \lesssim\|f\|^{1-\frac{\alpha_1}{\floor{\alpha_2}}}_{\dot{B}^0} \left(\|f\|^{\frac{s_2}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\floor{\alpha_2}}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \right)^{\frac{\alpha_1}{\floor{\alpha_2}}} = \|f\|^{1-\frac{\alpha_1}{\alpha_2}}_{\dot{B}^0} \|f\|^{\frac{\alpha_1}{\alpha_2}}_{\dot{W}^{\alpha_2,p_2}} \,. \end{align*} Hence, we obtain \eqref{4.1} when $\floor{\alpha_2}>1$. \\ The case where $\alpha_2>1$ is an integer can be treated similarly to the above. We leave the details to the reader. \section{Appendix} \begin{proposition}\label{Pro-cha} Let $0<\alpha<1$, and $1\leq p<\infty$. Then, the following equivalence holds true: \begin{equation}\label{5.1} \|f\|_{\dot{W}^{\alpha,p}} \approx \|f\|_{\dot{B}^{\alpha}_{p,p}} ,\quad \forall f\in \mathcal{S}(\RR^n) \,. \end{equation} \end{proposition} \begin{proof}[Proof of Proposition \ref{Pro-cha}] To obtain the result, we follow the proof by Grevholm \cite{Grevholm}. \\ First of all, for any $s\in(0,1)$, $1\leq p<\infty$, it is known that (see, e.g., \cite{Leoni, Triebel}) \[ \|f\|_{\dot{W}^{s,p}} \approx \left( \sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+s p}} \right)^{1/p} ,\quad \forall f\in W^{s,p}(\RR^n)\,, \] where $\Delta_{te_k} f (x) = f(x+te_k)-f(x)$, and $e_k$ is the $k$-th vector of the canonical basis in $\RR^n$, $k=1,\dots,n$. \\ Thanks to this result, \eqref{5.1} is equivalent to the following equivalence \begin{align}\label{5.1a} \sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \approx \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,. \end{align} Then, we first show that \begin{equation} \label{5.1c} \sum_{k=1}^n \int^\infty_0 \big\|\Delta_{te_k} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \lesssim \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,. \end{equation} It suffices to prove that \begin{equation} \label{5.1b} \int^\infty_0 \big\|\Delta_{te_1} f\big\|^p_{L^p}\frac{dt}{t^{1+\alpha p}} \lesssim \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \,. \end{equation} Indeed, let $\varphi\in\mathcal{S}(\RR^n)$ be such that ${\rm supp}(\hat{\varphi})\subset \big\{ \frac{1}{2}< |\xi|< 2 \big\}$, $\hat{\varphi}(\xi) \not=0$ in $\big\{ \frac{1}{2}< |\xi|< 2 \big\}$, $\varphi_j(x)= 2^{-jn}\varphi(2^{-j}x)$ for $j\in\mathbb{Z}$, and $\displaystyle\sum_{j\in\mathbb{Z}} \hat{\varphi_j}(\xi) =1$ for $\xi\not=0$. \\ Next, let us set $$\widehat{\psi}_j(\xi) = \big(e^{it\xi_1}-1\big) \widehat{\varphi}_j(\xi)\,, \quad \xi=(\xi_1,...,\xi_n) \,.$$ Note that for any $g\in\mathcal{S}(\RR^n)$, $$\mathcal{F}^{-1}\big\{(e^{it\xi_1}-1) \widehat{g}\big\} = g(x+te_1)- g(x) \,,$$ where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform. \\ Since ${\rm supp}(\widehat{\varphi}_j) \cap {\rm supp}(\widehat{\varphi}_l) =\emptyset$ whenever $|l-j|\geq 2$, we have \begin{align}\label{5.2} \psi_j * f &= \psi_j * \left(\sum_{i\in\mathbb{Z}} \varphi_i \right) * f = \psi_j * \big(\varphi_{j-1} + \varphi_j +\varphi_{j+1} \big) * f \,.
\end{align} Applying Young's inequality yields \begin{align}\label{5.3} \| \psi_j * \varphi_{j} * f \|_{L^p}& \leq \| \psi_j \|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber \\ &= \big\| \mathcal{F}^{-1}\big\{ (e^{it\xi_1}-1) \widehat{\varphi}_j(\xi) \big\}\big\|_{L^1} \| \varphi_{j} * f \|_{L^p} \nonumber \\ &= \| \varphi_j(.+te_1)-\varphi_{j}(.)\|_{L^1} \| \varphi_{j} * f \|_{L^p} \leq C\| \varphi_{j} * f \|_{L^p} \,, \end{align} where $C=C_\varphi$ is independent of $j$. \\ On the other hand, we observe that \begin{align*} \big|\varphi_j(x+te_1)-\varphi_{j}(x)\big|&=\big| \int^1_0 D\varphi_{j} (x + \tau t e_1) \cdot te_1 \, d\tau \big| \\ &\leq t\int^1_0 \big|D\varphi_{j} (x + \tau t e_1) \big| \, d\tau = t 2^{-j} 2^{-jn} \int^1_0 \big|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big| \, d\tau \,. \end{align*} Therefore, \begin{align}\label{5.5} \| \varphi_j(.+te_1)-\varphi_{j}(.)\|_{L^1} &\leq t 2^{-j} 2^{-jn} \int^1_0 \big\|D\varphi \big( 2^{-j}( x + \tau t e_1)\big) \big\|_{L^1} \, d\tau \nonumber \\ & = t 2^{-j} \int^1_0 \|D\varphi\|_{L^1} \, d\tau = C(\varphi) \, t 2^{-j} \,. \end{align} Combining \eqref{5.2}, \eqref{5.3} and \eqref{5.5} yields \begin{equation}\label{5.6} \| \psi_j * f \|_{L^p} \lesssim \min\{1,t2^{-j}\} \left( \| \varphi_{j-1} * f \|_{L^p} + \| \varphi_{j} * f \|_{L^p} + \| \varphi_{j+1} * f \|_{L^p} \right) ,\quad j\in\mathbb{Z}\,. \end{equation} Now, recall that\, $f(x+te_1)-f(x) = \displaystyle\sum_{j\in\mathbb{Z}} \psi_j * f(x)$ in $\mathcal{S}^\prime(\RR^n)$. Then, we deduce from \eqref{5.6} that \begin{align*} \int^\infty_0 \int_{\RR^n} \frac{|f(x+te_1)-f(x)|^p}{t^{1+\alpha p}} \, dx dt &= \int_{0}^{\infty} \big\| \sum_{j\in \mathbb{Z}} \psi_j * f\big\|_{L^p}^p \frac{dt}{t^{1+\alpha p}} \\ &\lesssim \sum_{k\in\mathbb{Z}} \int^{2^k}_{2^{k-1}} \sum_{j\in \mathbb{Z}} \min\{1,t^p 2^{-jp}\} \| \varphi_j * f\|_{L^p}^p \frac{dt}{t^{1+\alpha p}} \\ &\lesssim \sum_{k\in\mathbb{Z}} 2^{-k\alpha p} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} \| \varphi_j * f\|_{L^p}^p \\ &= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{1,2^{(k-j)p}\} 2^{-(k-j)\alpha p} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right] \\ &= \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} \min\{2^{-(k-j)\alpha p},2^{(k-j)(1-\alpha)p}\} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right] \\ &\leq \sum_{k\in\mathbb{Z}} \sum_{j\in \mathbb{Z}} 2^{-|k-j|\delta} \left[ 2^{-j\alpha p} \|\varphi_j * f\|_{L^p}^p \right],\quad \delta=\min\{\alpha p , (1-\alpha)p\} \\ &\leq C_\delta \sum_{k\in\mathbb{Z}} \left[ 2^{-k\alpha p} \|\varphi_k * f\|_{L^p}^p \right] = C_\delta \|f\|_{\dot{B}^{\alpha}_{p,p}}^p \,. \end{align*} Similarly, we also obtain \[ \int^\infty_0 \int_{\RR^n} \frac{|f(x+te_k)-f(x)|^p}{t^{1+\alpha p}} \, dx dt \lesssim \|f\|_{\dot{B}^{\alpha}_{p,p}}^p ,\quad k=2,\dots,n \,.\] This yields \eqref{5.1c}. \\ For the converse, let $\{\varphi_j\}_{j\in\mathbb{Z}}$ be the sequence above. By following \cite[page 246]{Grevholm}, we can construct a function $\psi\in\mathcal{S}(\RR^n)$ such that $\hat{\psi}(\xi) =1$ on $\{1/2 \leq|\xi|\leq 2\}$, and $\widehat{\psi}=\displaystyle\sum^n_{k=1} \widehat{h}^{k}$, where the $h^{k}\in\mathcal{S}(\RR^n)$ satisfy \begin{align}\label{5.8a} \sup_{t\in(2^{j-1}, 2^j)} \big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{L^1} \leq C , \quad k=1,\dots,n \,, \end{align} where $h^k_j(x) = 2^{-jn}h^k(2^{-j}x)$, and the constant $C>0$ is independent of $k, j$.
Actually, we only need \eqref{5.8a} to hold with $\big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{\mathcal{M}}$ in place of $\big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{L^1}$, where $\mathcal{M}$ is the space of bounded measures on $\RR^n$, and $\|\mu\|_{\mathcal{M}}$ is the total variation of $\mu$. \\ Next, from the construction of the functions $h^k$, $k=1,\dots,n$, there exists a universal constant $C_1>0$ such that \begin{align*} \big\|\mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} \big\|_{L^1} \leq C_1 \big\|\frac{\widehat{h}^k_j(\xi)}{e^{it\xi_k}-1} \big\|_{L^1}\,. \end{align*} With the last inequality noted, we deduce from \eqref{5.8a} that \begin{align}\label{5.8c} \sup_{t\in(2^{j-1}, 2^j)} \big\|\mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} \big\|_{L^1} \leq C C_1 , \quad k=1,\dots,n \,. \end{align} Now, observe that $$h^k_j *f = \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} * \Delta_{te_k} f \,.$$ Thus, it follows from the triangle inequality, \eqref{5.8c}, and Young's inequality that \begin{align}\label{5.9} \big\|\psi_j* f\big\|_{L^p} &= \big\|\sum_{k=1}^n h^k_j *f \big\|_{L^p} \leq \sum_{k=1}^n \big\| h^k_j *f \big\|_{L^p} = \sum_{k=1}^n\big\| \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} * \Delta_{te_k} f \big\|_{L^p} \nonumber \\ &\leq \sum_{k=1}^n\big\| \mathcal{F}^{-1}\big\{\frac{\widehat{h^k_j}(\xi)}{e^{it\xi_k}-1}\big\} \big\|_{L^1} \big\| \Delta_{te_k} f \big\|_{L^p} \nonumber \\ &\lesssim \sum_{k=1}^n \big\| \Delta_{te_k} f \big\|_{L^p} \,, \quad \text{for all } t\in(2^{j-1},2^j)\,. \end{align} On the other hand, it is clear that $\hat{\psi}(\xi) \hat{\varphi} (\xi)=\hat{\varphi} (\xi)$ since ${\rm supp}(\hat{\varphi})\subset \{1/2 \leq|\xi|\leq 2\}$. \\ Hence, we obtain from \eqref{5.9} that \begin{align*} \| \varphi_{j} *f \|^p_{L^p} = \| \varphi_{j} * \psi_j * f \|^p_{L^p} \leq \| \varphi_{j} \|^p_{L^1} \|\psi_j * f\|^p_{L^p} \lesssim \sum_{k=1}^n \big\|\Delta_{te_k} f \big\|^p_{L^p} \end{align*} for all $t\in (2^{j-1},2^j)$. \\ Thus, \begin{align*} \sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \| \varphi_{j} *f \|^p_{L^p} &\lesssim \sum_{j\in\mathbb{Z}} 2^{-j\alpha p} \sum_{k=1}^n \fint^{2^j}_{2^{j-1}} \big\|\Delta_{te_k} f \big\|^p_{L^p} \,dt \lesssim \sum_{k=1}^n \int^\infty_0 \|\Delta_{te_k} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}} \end{align*} which yields \[ \|f\|^p_{\dot{B}^{\alpha}_{p,p}} \lesssim \sum_{k=1}^n \int^\infty_0 \|\Delta_{te_k} f\|^p_{L^p} \frac{dt}{t^{1+\alpha p}} \,. \] This completes the proof of Proposition \ref{Pro-cha}. \end{proof} \bigskip \textbf{Acknowledgement.} The research is funded by the University of Economics Ho Chi Minh City, Vietnam.
\section{Introduction} \label{Motivation} The thermodynamic and dynamical properties of particles in restricted geometries are of great interest. They have been extensively studied in the context of porous media \cite{Given,Konig,Evans,Smit}, transport through narrow channels such as carbon nanotubes \cite{Hummer,Allen} and pores in biological membranes \cite{Hille}, as well as in numerous other systems \cite{Gelb}. Perhaps the simplest and most convenient theoretical approach for studying fluids in cavities models the fluid by hard spheres. This approach has been applied in a large number of studies (see, for example, \cite{Kamenetskiy,Kim,Mon2000,Kofke,Klafter}). Recent studies of a dilute gas of hard disks in a narrow two dimensional channel have shown that the system exhibits a singularity in the pressure at a channel width equal to twice the diameter of the disks \cite{FMP04,Bowles}. This is a consequence of the fact that the volume of the phase space available to the disks exhibits a singularity at this width. In particular, it has been found that for a channel with periodic boundary conditions, the transverse component of the pressure exhibits a $3/2$ singularity, while the longitudinal component exhibits a $5/2$ singularity \cite{FMP04}. In the case of a channel with reflecting boundaries, weaker singularities for both pressure components were found \cite{Bowles}. In this paper we extend these studies to consider a gas of ``soft'' disks in a narrow channel at low density. We consider several classes of two body potentials, with both periodic and reflecting channel boundaries. Our analysis shows that the pressure components are singular at some channel widths whenever the interaction potential between two disks, $u(r)$, is singular at some distance $r_0$. In particular, in the case of periodic boundary conditions and for potentials which are discontinuous at some $r_0$, the dependence of the transverse component of the pressure on the channel width exhibits a $1/2$ singularity, while the longitudinal component exhibits a $3/2$ singularity at some channel widths, which are simply related to $r_0$. The singularities become weaker for interaction potentials which are continuous, but the pressure components still display a power law singularity related to $r_0$. The singularities in the case of reflecting boundary conditions are found to be weaker than those corresponding to periodic boundaries. Although we have not analyzed narrow channels in three dimensions, we expect similar phenomena to take place there as well. The nature of the singularities in the pressure in three dimensions is expected to be different from that of the two dimensional case. In the following sections we study several classes of two body potentials $u(r)$ for both periodic and reflecting boundary conditions. In Section \ref{formulation} we present the general formulation of the tools used in this study, namely the Mayer cluster expansion for gases at low densities and molecular dynamics simulations. In Section \ref{Periodic-boundaries} we analyze the case of a channel with periodic boundary conditions, for one-step and two-step potentials. We also study a smooth potential which vanishes with a power law at a critical distance. In Section \ref{reflecting-boundaries} we study a channel with reflecting boundaries for the cases of soft disks and soft disks with a hard core. Finally, a brief summary is given in Section \ref{conclusions}.
\section{General Formulation} \label{formulation} We consider $N$ disks of diameter $d$ and mass $m$, interacting via a two body potential $u(r)$ at temperature $T$. The disks are restricted to move in a channel of length $L_x$ and width $L_y$ with $L_y \ll L_x$. In this study we analyze the pressure components of this gas using a virial expansion to second order in the density. We also carry out molecular dynamics simulations of this system. In the present study the channel width, $L_y$, is taken to be finite throughout the calculation. Therefore the free energy is not extensive in $L_y$ and, thus, Euler's relation does not hold, namely, $-PV \neq E-TS-\mu N$. Thus, the pressure has to be calculated by taking the appropriate derivative of the free energy. In the grand canonical ensemble the free energy of a system of $N$ disks is given by \begin{eqnarray} F(T,V,N) &=& -kT\ln{{\mathcal L}(T,V,z)} + kTN\ln{z}\nonumber \;,\\ N &=& z{\frac{\partial}{\partial z}}\ln{{\mathcal L}(T,V,z)} \;. \label{free-energy} \end{eqnarray} Here ${\mathcal L}$ is the grand partition sum, $z$ is the fugacity, $k$ is the Boltzmann constant and $V=L_xL_y$. The pressure components are evaluated by taking the appropriate derivatives of the free energy \begin{eqnarray} P_{xx}V &=&-L_x {\frac{\partial F}{\partial L_x}} \nonumber \\ P_{yy}V &=&-L_y {\frac{\partial F}{\partial L_y}} \nonumber \\ P &=& {\frac{1}{2}}(P_{xx} + P_{yy}) \;. \label{pressure-components} \end{eqnarray} To second order in the fugacity $z$, the Mayer expansion yields \begin{eqnarray} \ln{\mathcal L} &=& {\frac{V}{\lambda^2}}(b_1 z + b_2 z^2)\nonumber\\ {\frac{1}{v}}\equiv {\frac{N}{V}} &=& {\frac{1}{\lambda^2}}(b_1 z + 2b_2 z^2)\;, \label{Mayer-expansion} \end{eqnarray} where $\lambda=h/\sqrt{2 \pi mkT}$ is the thermal wavelength and $h$ is Planck's constant. The coefficients of the expansion satisfy \begin{eqnarray} b_1 &=& 1\nonumber \\ b_2 &=& {\frac{1}{2\lambda^2}} q(L_y) \;, \end{eqnarray} with \begin{equation} q(L_y)=\int f_{12} d^2{r_{12}} ~ . \label{q} \end{equation} Here, $f_{12} = e^{-\beta u(r_{12})}-1$ is the Mayer function, and $\beta = 1/kT$. Using Eqs. (\ref{free-energy},\ref{Mayer-expansion}) we find that to order $1/v$ the free energy is given by \begin{equation} \frac{F}{kTN} = -1-{\frac{q(L_y)}{2v}} -\ln{v} +2\ln{\lambda} \quad , \end{equation} from which the components of the pressure tensor are obtained: \begin{eqnarray} \frac {P_{xx}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v} \,,\label{pxxgeneral}\\ \frac {P_{yy}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v}+\frac{L_y}{2v} \frac{d q(L_y)}{d L_y} \;. \label{pyygeneral} \end{eqnarray} All theoretical considerations are augmented by computer simulations. For our two-dimensional systems, the temperature $T$ is computed from \begin{equation} \langle K \rangle = \left(N - \frac{g}{2}\right) kT, \label{kinetic} \end{equation} where $K$ is the kinetic energy and the brackets denote a time average. Here, $g$ is the number of macroscopic conservation laws, which differs between the periodic boundaries of Section \ref{Periodic-boundaries} ($g = 3$; the total momentum is conserved and is taken to vanish) and the reflecting boundaries used in Section \ref{reflecting-boundaries} ($g = 1$). The diagonal elements of the pressure tensor, $P_{\alpha \alpha}$ for $\alpha \in \{ x,y\}$, are evaluated from the virial theorem, \begin{equation} P_{\alpha \alpha} V = \langle K \rangle + W_{\alpha \alpha} .
\label{virialtheorem} \end{equation} For impulsive interactions \cite{Rapaport}, the potential contribution, $W_{\alpha \alpha}$, is given by \begin{equation} W_{\alpha \alpha} = \frac{1}{ \tau } \sum_{c} r_{\alpha,ij}^{(c)} \Delta v_{\alpha,i}^{(c)} \; . \label{VTcoll} \end{equation} Here, the sum is over all collisional events $c$, which instantaneously change the potential energy during the averaging time $\tau$, $ r_{\alpha,ij}^{(c)} \equiv r_{\alpha,i}^{(c)} - r_{\alpha,j}^{(c)}$ is the $\alpha$-component of the separation vector of the two particles involved in the event, and $\Delta v_{\alpha,i}^{(c)}$ denotes the velocity change for particle $i$ parallel to $\alpha$ due to that event (the velocity change for particle $j$ being just the opposite). For the continuous potentials of Section \ref{PLP}, the potential contribution becomes \begin{equation} W_{\alpha \alpha} = \frac{1}{\tau} \int_0^{\tau} d t \sum_i\sum_{j>i} r_{\alpha,ij} f_{\alpha,ij} \;, \label{VTcont} \end{equation} where $f_{\alpha,ij}$ is the $\alpha$-component of the force exerted on particle $i$ by particle $j$. In all figures comparing experimental with theoretical pressures, the experimental dots represent $W_{\alpha \alpha} / \langle K \rangle$. The theoretical smooth curves represent $(P_{\alpha \alpha} v / k T) - 1$, where the temperature required for this computation is taken from Eq. (\ref{kinetic}). As usual, temperature units are used, for which Boltzmann's constant $k$ is unity. Because of the equivalence of the canonical and microcanonical ensembles, the experimental and theoretical pressures should agree up to a term of order $1/N$. To make this correction insignificant, at least 60 particles, or even 120 in many cases, were used for the simulations. An event-driven algorithm is used \cite{Rapaport,AT} for the discontinuous potentials with periodic (Sec. \ref{Periodic-boundaries}) or reflecting (Sec. \ref{reflecting-boundaries}) boundary conditions, for which instantaneous potential energy changes and boundary crossings of a particle are considered as events. For the continuous power-law potentials of Sec. \ref{PLP} a hybrid code is used, which will be described there in more detail. \section{Narrow channels with periodic boundary conditions} \label{Periodic-boundaries} \subsection{Positive step potential} We proceed by considering the pressure in the case of a two-body step potential \begin{equation} u(r)= \left\{ \begin{array}{ll} u & \qquad r \leq d \\ 0 & \qquad r>d \,, \end{array} \right. \label{potential1} \end{equation} where $u>0$ is a constant (in our previous paper \cite{FMP04} we considered the hard-disk case $u=\infty$). The case $u<0$ is pathological, since the particles may collapse to form a cluster, as long as there is no repulsive interaction at short distances. The case of a two-step potential with repulsion at short distances and attraction at larger distances will be considered in the following subsection. Here we limit ourselves to the repulsive potential case $u>0$. To evaluate the pressure we associate with each particle an interaction disk of radius $d$ centered at its position. Two particles $i$ and $j$ interact with each other if the center of $j$ is within the interaction disk of $i$, and {\em vice versa}. In the case $L_y > 2d$ the interaction disk of a particle (a disk of radius $d$) and that of its image resulting from the periodic boundary conditions in the $y$ direction do not overlap. 
Hence the integral (\ref{q}) simply yields \begin{equation} q(L_y)=\pi d^2 (e^{-\beta u}-1) \qquad \mbox{for} \qquad L_y>2d \,. \label{qwide} \end{equation} For $L_y <2d$ we note that the area of overlap between the interaction disk of a particle and that of its image translated in the $y$ direction is given by \begin{equation} S(\vartheta)=d^2(\pi -2 \vartheta - \sin{2 \vartheta})\:, \label{area} \end{equation} where $\vartheta$ satisfies (see Fig.~(\ref{Fig1})) \begin{equation} L_y=2d\sin\vartheta\,. \label{L_y} \end{equation} In this case the integral (\ref{q}) yields \begin{equation} q(L_y)=(\pi d^2 - 2 S)(e^{-\beta u}-1)+ S(e^{-2\beta u}-1)\;. \label{qnarrow} \end{equation} \begin{figure} \centering {\includegraphics[width=8cm]{Fig_1_channel.eps}} \caption{The arrangement of the interaction disks for $L_y < 2d$ with periodic boundary conditions. The overlap area of the two disks is $S$.} \label{Fig1} \end{figure} Using this result for $q(L_y)$, and noting that \begin{equation} \frac{dS}{dL_y}=-\sqrt{4d^2-L_y^2}~\quad , \end{equation} it is straightforward to derive the expressions for the pressure. We find that to order $1/v$ and for $L_y>2d$ one has $P_{xx}=P_{yy}=P$ with \begin{equation} \frac{Pv}{kT}=1 - \frac{q}{2v}\,, \end{equation} where $q$ is independent of $L_y$ and is given by (\ref{qwide}). On the other hand, for $L_y<2d$ one finds \begin{eqnarray} \frac {P_{xx}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v} \,,\label{pxxwide}\\ \frac {P_{yy}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v}-\frac{L_y}{2v}\sqrt{4d^2-L_y^2}\left(1-e^{-\beta u}\right)^2 \;, \label{pyywide}\\ \frac{Pv}{kT} &=& 1 - \frac{q{(L_y)}}{2v}-\frac{L_y}{4v}\sqrt{4d^2-L_y^2}\left(1-e^{-\beta u}\right)^2 \:, \label{pwide} \end{eqnarray} where $q(L_y)$ is given by (\ref{qnarrow}). It is evident that $P_{yy}$ exhibits a square-root singularity at $L_y=2d$, as in the case of hard disks \cite{FMP04}. This singularity originates from the term $dq(L_y)/dL_y$ in (\ref{pyygeneral}). On the other hand, $P_{xx}$ exhibits a weaker singularity, with a singular term which vanishes as $(2d-L_y)^{3/2}$. The reason is that, unlike for the $P_{yy}$ component, here the singularity originates from $q(L_y)$ and not from its derivative. Clearly, the pressure $P$, which is the average of the two components, exhibits a square-root singularity like the more singular $P_{yy}$ component. \begin{figure} \centering {\includegraphics[width=10cm]{Fig_2a_channel.ps}\\ \vspace{-5mm} \includegraphics[width=10cm]{Fig_2b_channel.ps}\\ \vspace{-5mm} \includegraphics[width=10cm]{Fig_2c_channel.ps}} \vspace{-8mm} \caption{ Comparison of the channel-width dependence of the theoretical pressures (lines) with numerical simulations (points) for a system of 60 disks interacting with the step potential of Eq. (\ref{potential1}). The density is kept constant, $N/V = 0.01$. The potential varies from $u=0.5$ (top) to $u=2$ (bottom). The singularities appear at $L_y = 2$ and $L_y = 1$ as explained in the main text. Reduced units are used, for which $d$ and the total energy per particle, $E/N$, are unity. } \label{Fig2} \end{figure} In Fig. \ref{Fig2} the theoretical expressions for the singularity at $L_y = 2d = 2$ are compared to computer simulation results for various potential step sizes $u$ as indicated by the labels. Reduced units are used for which the particle diameter $d$ and the total energy per particle, $E/N$, are unity. 
The energy $E/N$ is almost exclusively kinetic in nature, with a time-averaged kinetic temperature $T=0.986$ for $u=0.5$ (top), $T=0.994$ for $u=1.0$ (middle), and $T=0.996$ for $u=2$ (bottom figure). These temperatures vary slightly, but insignificantly, with the channel width $L_y$. The density is kept constant, $N/V = 0.01$. As for the case of hard disks ($u=\infty$) treated already in Ref. \cite{FMP04}, the agreement between theory and simulation results is very satisfactory. \begin{figure} \centering {\includegraphics[width=10cm]{Fig_3_channel.eps}} \caption{The arrangement of the interaction disks for $L_y < d$ with periodic boundary conditions. The overlap area of the disk and its next-nearest neighbor in the $y$ direction is $S_0$. } \label{Fig.4} \end{figure} As the width of the channel decreases further, we expect more singularities to take place at $L_y=d,~2d/3,~d/2,\dots$, which result from the overlap of an interaction disk with those of further neighboring disks in the $y$ direction. Let us analyze, for example, the second singularity in the pressure curve, which takes place at $L_y=d$. For $d/2 < L_y < d$ the disk of a particle has some overlap with the images of its nearest and next-nearest neighbors which result from the periodic boundary conditions in the $y$ direction. In order to evaluate the overlap integral (\ref{q}) we note that (see Fig. \ref{Fig.4}) \begin{equation} L_y=d \sin\vartheta_0\,. \label{L_y<1} \end{equation} The overlap area between an interaction disk of a particle and that of its next-nearest neighbor in the $y$ direction, $S_0$, is given by \begin{equation} S_0=d^2(\pi -2 \vartheta_0 - \sin{2 \vartheta_0})\,. \end{equation} The overlap integral is thus expressed as \begin{equation} q(L_y)=(\pi d^2-2S + S_0)(e^{-\beta u}-1) +(S-2S_0)(e^{-2\beta u}-1) +S_0(e^{-3\beta u}-1) \;. \end{equation} Using this expression for the overlap integral, the pressure components can readily be calculated to yield \begin{eqnarray} \frac {P_{xx}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v} \; \label{pxxnarrow} \\ \frac {P_{yy}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v}-\frac{L_y}{2v}\sqrt{4d^2-L_y^2}\;(1-e^{-\beta u})^2-\frac{2 L_y}{v}\sqrt{d^2-L_y^2}\; e^{-\beta u} (1-e^{-\beta u})^2 \; \label{pyynarrow} \\ P & = & \frac{1}{2} ( P_{xx} + P_{yy}) \;. \label{pnarrow} \end{eqnarray} As is also shown in Figure \ref{Fig2}, these expressions for the singularity at $L_y = d = 1$ compare very well with simulation results. As before, reduced units are used for which $d$ and $E/N$ are unity. Note that as long as $u>0$ the coefficient of the singular term $\sqrt{d^2-L_y^2}~$ is positive, resulting in a positive compressibility just below $L_y=d$. \subsection{Two-step potential} In order to analyze the case of disks with an attractive potential, one has to add a repulsive interaction at short distances to prevent the collapse of the system into a macroscopic cluster. We thus consider in this section a two-step potential \begin{equation} u(r)= \left\{ \begin{array}{ll} u_1 & \qquad r \leq d \;,\\ u_2 & \qquad d < r<D \;, \\ 0 & \qquad r \geq D \;, \end{array} \right. \end{equation} where $u_1>0$ represents a repulsive interaction, $u_2$ may be either positive or negative, and $D>d$ is the outer radius of $u_2$. To evaluate the pressure we associate with each particle two concentric interaction disks, one with radius $d$ and the other with radius $D$. Two particles interact with each other only if the center of the second particle lies within an interaction disk of the first. 
It is easy to see that the degree of overlap of the disks of a particle with those of its nearest neighbor image resulting from the periodic boundary conditions in the $y$ direction is singular at $L_y=2D, \; d+D$, and $2d$. Thus, the pressure curve is expected to be singular at these three values of the channel width. We now analyze the pressure curve in more detail and consider first the upper singularity at $L_y=2D$. For $L_y > 2D$ the disks of a particle and those of its image do not overlap. Thus the integral (\ref{q}) yields \begin{equation} q(L_y)=\pi (D^2-d^2)(e^{-\beta u_2}-1) + \pi d^2 (e^{-\beta u_1}-1) \qquad \mbox{for} \qquad L_y>2D \,, \label{q>2D} \end{equation} and $q$ is independent of $L_y$. As in the case of a single step potential, one finds that to leading order in $1/v$ the pressure tensor is isotropic, $P_{xx}=P_{yy}=P$, with \begin{equation} \frac{Pv}{kT}=1 - \frac{q}{2v}\;. \label{rgt2d} \end{equation} For $d+D \leq L_y \leq 2D$, however, the outer disks of a particle and its periodic image overlap. As in the case of the single step potential, the overlap area $S$ is given by \begin{equation} S=D^2(\pi -2 \vartheta - \sin{2 \vartheta})\,, \end{equation} where $\vartheta$ satisfies (see Fig.~(\ref{Fig1})) \begin{equation} L_y=2D\sin\vartheta\,. \label{L_yD} \end{equation} The resulting overlap integral (\ref{q}) for $d+D < L_y < 2D$ is given by \begin{equation} q(L_y)=(\pi D^2-\pi d^2-2S)(e^{-\beta u_2}-1) + \pi d^2 (e^{-\beta u_1}-1) + S(e^{-2\beta u_2}-1) \,. \label{1+D<q<2D} \end{equation} The pressure tensor in this regime is thus found to be \begin{eqnarray} \frac {P_{xx}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v} \,, \label{swpxx}\\ \frac {P_{yy}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v}-\frac{L_y}{2v}\sqrt{4D^2-L_y^2}(1-e^{-\beta u_2})^2 \,, \label{swpyy}\\ \frac {Pv}{kT} &=& 1 - \frac{q{(L_y)}}{2v}-\frac{L_y}{4v}\sqrt{4D^2-L_y^2}(1-e^{-\beta u_2})^2 \,, \label{swp} \end{eqnarray} where $q(L_y)$ is given by (\ref{1+D<q<2D}). As in the case of a single step potential, $P_{yy}$ exhibits a square-root singularity, while $P_{xx}$ behaves more smoothly, with a weaker $3/2$ power behavior at the transition. The compressibility below the transition is negative. \begin{figure} \centering {\includegraphics[width=10cm]{Fig_4_channel.eps}} \caption{The arrangement of the interaction disks for the case of a two-step potential and for $2d \le L_y \le d+D$ with periodic boundary conditions. The overlap area of the outer disks of two nearest neighbors in the $y$ direction is $S$, while the overlap area of the outer disk with the inner one of its nearest neighbor in the $y$ direction is $S_1$. The angles $\vartheta_1$ and $\vartheta_2$ are indicated.} \label{Fig.5} \end{figure} Finally, we consider the regime $2d \leq L_y \leq d+D$. In this case the outer disk of a particle partially overlaps not only with the outer disk of its periodic image but also with the inner one (see Fig. (\ref{Fig.5})). The overlap area $S_1$ between the outer and the inner disks is given by \begin{equation} S_1=\frac{1}{2}d^2(\pi-2\vartheta_1)-d^2\sin\vartheta_1\cos\vartheta_1+\frac{1}{2} D^2(\pi-2\vartheta_2)-D^2\sin\vartheta_2\cos\vartheta_2 \end{equation} where $\vartheta_1$ and $\vartheta_2$ satisfy (see Fig. 
(\ref{Fig.5})) \begin{equation} L_y=d \sin\vartheta_1 + D\sin\vartheta_2 \,,\qquad \mbox{and} \qquad d\cos\vartheta_1=D\cos\vartheta_2\,, \label{sumrule} \end{equation} and, hence, \begin{equation} \sin\vartheta_1 = \frac{1}{2dL_y} (L_y^2 -D^2 + d^2) , \qquad \sin\vartheta_2 = \frac{1}{2DL_y}(L_y^2 +D^2 - d^2)\,. \label{sin12} \end{equation} It is straightforward to express the overlap integral (\ref{q}) in terms of the overlap areas $S$ and $S_1$ as \begin{eqnarray} q(L_y)&=&(\pi D^2-\pi d^2-2S+2S_1)(e^{-\beta u_2}-1) + (\pi d^2-2S_1)(e^{-\beta u_1}-1)\nonumber \\ &+& (S-2S_1)(e^{-2\beta u_2}-1) +2S_1(e^{-\beta(u_1+u_2)}-1) \qquad \mbox{for} \qquad 2d\leq L_y \leq d+D \,. \label{2<q<1+D} \end{eqnarray} According to Eqs. (\ref{pxxgeneral}) and (\ref{pyygeneral}), the pressure components may be expressed as \begin{eqnarray} \frac {P_{xx}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v} \,,\\ \frac {P_{yy}v}{kT} &=& 1 - \frac{q(L_y)}{2v}+\frac{L_y}{2v}(1-e^{-\beta u_2})^2 \frac{dS}{dL_y} + \frac{L_y}{v} (e^{-\beta u_2} - e^{-\beta u_1})(1 - e^{-\beta u_2}) \frac{dS_1}{dL_y} \,, \end{eqnarray} where \begin{equation} \frac{dS_1}{dL_y} = - 2d\cos(\vartheta_1)=-2D\cos(\vartheta_2)\,. \label{dS1dL} \end{equation} Using Eqs. (\ref{sumrule}) and (\ref{sin12}), the channel-width dependence of the pressure tensor components for $2d < L_y < d+D$ is finally obtained, \begin{eqnarray} \frac {P_{xx}v}{kT} &=& 1 - \frac{q{(L_y)}}{2v} \,, \label{swpxxfin}\\ \frac {P_{yy}v}{kT} &=& 1 - \frac{q(L_y)}{2v}-\frac{L_y}{2v}\sqrt{4D^2-L_y^2}(1-e^{-\beta u_2})^2 -\frac{1}{v} (e^{-\beta u_2} - e^{-\beta u_1})(1 - e^{-\beta u_2}) \nonumber \\ &&\times (2L_y^2D^2+2L_y^2d^2+2D^2d^2-L_y^4-D^4-d^4)^{1/2} \,, \label{swpyyfin} \end{eqnarray} where $q(L_y)$ is given by (\ref{2<q<1+D}). In order to establish the nature of the singularity at the transition point $ d + D$, we expand $dS_1/dL_y$ in Eq. (\ref{dS1dL}) in terms of the small dimensionless offset $\epsilon = (d + D - L_y)/d$. Introducing the small angles \begin{equation} \vartheta_1=\frac{\pi}{2} - \delta\vartheta_1 \,,\qquad \vartheta_2=\frac{\pi}{2} -\delta\vartheta_2\,, \end{equation} which, according to Eq. (\ref{sumrule}), are related to $\epsilon$ by \begin{equation} (\delta\vartheta_1)^2=\frac{2D}{(d+D)} \epsilon \,,\qquad (\delta\vartheta_2)^2=\frac{2d^2}{D(d+D)} \epsilon\,, \end{equation} we finally obtain \begin{equation} \frac{dS_1}{dL_y} = - d \sqrt{ \frac{8 D}{d+D}}\, \sqrt{\epsilon} \;. \end{equation} It is readily seen that $dS_1/dL_y$ and, hence, $P_{yy}$, exhibit a square-root singularity. As in the case of a single step potential, the singularity in $P_{xx}$ originates from $S_1$ rather than from its derivative. Hence $P_{xx}$ exhibits a weaker $3/2$ singularity at $L_y=D+d$. It is interesting to note that, depending on the values of the interaction parameters $u_1$ and $u_2$, the compressibility just below the transition could be either positive or negative. \begin{figure} \centering {\includegraphics[width=10cm]{Fig_5a_channel.ps}\\ \vspace{-3mm} \includegraphics[width=10cm]{Fig_5b_channel.ps}}\\ \vspace{-3mm} \caption{Channel-width dependence of the pressures for the two-step potential case with the following set of parameters: $d = 1$, $D = 1.5$, $N = 60$, $N/V = 0.01$ and $E/N=1$. The top panel corresponds to a potential with two positive steps, $u_1 = 2$ and $u_2 = 1$, whereas the lower panel is for an attractive outer shell with $u_1 = 2$ and $u_2 = - 1$. } \label{Fig6} \end{figure} In Fig. \ref{Fig6} we compare the respective theoretical expressions -- Eq. 
(\ref{rgt2d}) for $L_y > 2D$, Eqs. (\ref{swpxx} - \ref{swp}) for $d+D < L_y < 2D$, and Eqs. (\ref{swpxxfin}, \ref{swpyyfin}) for $2d < L_y < d+D$ -- to computer simulation results (dots) for $N=60$ particles in a narrow channel of width $L_y$ with periodic boundaries both in $x$ and $y$ directions. We use reduced units for which $E/N$ and $d$ are unity. The outer diameter is $D = 1.5$. In the top panel, results for a two-step potential with $u_1 = 2$ and $u_2 = 1$ are shown. The lower panel corresponds to a potential with a repulsive core and an attractive outer shell, $u_1 = 2$ and $u_2 = - 1$. The agreement is very good in all cases. A similar analysis may be carried out near the third singularity, which takes place at $L_y=2d$. \subsection{Power-law potential} \label{PLP} \begin{figure} \centering { \includegraphics[width=10cm]{Fig_6_channel.ps}} \vspace{-3mm} \caption{Potentials for $A = 10$ (smooth curves) and $A=100$ (dashed curves) for various values of $\kappa$ as indicated by the labels. } \label{pot} \end{figure} We now consider a soft potential which vanishes (continuously) at the disk boundary, \begin{equation} u(r)= \left\{ \begin{array}{ll} A\left(1-\left(\frac{r}{d} \right)^2 \right)^\kappa & \qquad r \leq d \\ 0 & \qquad r>d \end{array} \right. \label{contin} \end{equation} where the two parameters $A > 0$ and $\kappa > 0$ are constants. In Fig. \ref{pot} we show a few such potentials for $A = 10 E_0$ (smooth lines) and $A = 100 E_0$ (dashed lines) for various $\kappa$ as indicated by the labels. Here $E_0 \equiv E/N = \left[ \sum_i p_i^2/ 2m + \sum_i \sum_{j>i} u(r_{ij})\right] /N $ is the {\em total} energy per particle. For our numerical work we use reduced units, for which the diameter $d$, the particle mass $m$, and $E_0$ are unity. Let us analyze the nature of the singularity of the pressure components slightly below $2 d$. The singularity arises from integrating the Mayer function $f$ in the overlap region of the two disks. For $\epsilon \equiv (2d -L_y)/d \ll 1$, the function $f$ is small in this region and may thus be expanded in powers of $u(r)$. To second order in $u$, $f(r)\simeq -\beta u(r) + (1/2) (\beta u(r))^2$. The singularity in the integral (\ref{q}) arises from the non-linear term in $f$, which is of the order $\epsilon^{2\kappa}$ in the overlap region. Since according to (\ref{area}) the area of this region scales as $\epsilon^{3/2}$ for small $\epsilon$, the singular contribution to the integral (\ref{q}), and hence to $P_{xx}$, scales as $\epsilon^{2\kappa+3/2}$. On the other hand, the pressure $P$ and its $P_{yy}$ component scale as $\epsilon^{2\kappa+1/2}$. Thus, for small enough $\epsilon$ we expect \begin{equation} P = c_1 \left[ 1 - c_2 (2d - L_y)^{2 \kappa+1/2} \right] \; , \label{pfit} \end{equation} and similarly for $P_{yy}$, where $c_1, c_2$ are constants. In the scaling form for $P_{xx}$, the exponent is $2\kappa +3/2$, and the singularity is weaker. To test this scaling form, we carried out numerical simulations of the model. In selecting the parameters $A$ and $\kappa$ most appropriate for numerical simulations, one should take into account two competing trends. On the one hand, the singular part of the pressure is expected to be more pronounced for large amplitude $A$ and small exponent $\kappa$. On the other hand, as we argue below, the channel-width interval where the scaling form (\ref{pfit}) is expected to hold is larger for small $A$ and large $\kappa$. Thus, in order to observe the scaling behavior one has to choose intermediate values of these two parameters. 
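The predicted exponent can also be checked directly, independently of the simulations. For potentials which vanish beyond $r=d$, and for $d < L_y < 2d$ (where only the nearest image contributes), the difference $q(L_y)-q(2d)$ reduces exactly to the integral of the product of Mayer functions, $f(r)f(r')$, over the lens-shaped region in which a disk overlaps its nearest periodic image (here $r$ and $r'$ denote the distances to a particle and to its image, respectively). The following Python sketch (our own illustration, not part of the simulation codes used in this work; the values $A=10$, $\kappa=2$ and $\beta=1$ are assumptions chosen for the example) integrates this product numerically and extracts the scaling exponent:

\begin{verbatim}
import numpy as np

d, A, kappa, beta = 1.0, 10.0, 2.0, 1.0   # assumed parameters (reduced units)

def u(r):
    # power-law pair potential u(r) = A (1 - (r/d)^2)^kappa for r < d
    return A * np.maximum(1.0 - (r / d) ** 2, 0.0) ** kappa

def mayer(r):
    return np.exp(-beta * u(r)) - 1.0

def delta_q(Ly, n=1500):
    # singular part q(Ly) - q(2d): integral of f(r) f(r') over the lens
    # where the interaction disk overlaps its nearest periodic image
    xm = np.sqrt(d ** 2 - (Ly / 2.0) ** 2)       # half-width of the lens
    x = np.linspace(-xm, xm, n)
    y = np.linspace(Ly - d, d, n)                # vertical extent of the lens
    X, Y = np.meshgrid(x, y)
    r0 = np.hypot(X, Y)                          # distance to the particle
    r1 = np.hypot(X, Y - Ly)                     # distance to its image
    g = np.where((r0 < d) & (r1 < d), mayer(r0) * mayer(r1), 0.0)
    return g.sum() * (x[1] - x[0]) * (y[1] - y[0])

eps = np.array([0.01, 0.02, 0.04, 0.08])
dq = np.array([delta_q(d * (2.0 - e)) for e in eps])
print(np.polyfit(np.log(eps), np.log(dq), 1)[0])  # -> 2*kappa + 3/2 = 5.5
\end{verbatim}

The fitted slope approaches $2\kappa+3/2$ as $\epsilon\to 0$, in agreement with the argument given above; the corresponding exponent $2\kappa+1/2$ for $P_{yy}$ then follows by one differentiation with respect to $L_y$.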
To estimate the scaling interval $L_{y,\min} < L_y < 2d$ over which the scaling form (\ref{pfit}) is expected to hold, we note that during a typical collision two particles penetrate each other up to a depth $\delta = d - r_0$, where $r_0$ is estimated from $u(r_0) = E_0$, $ r_0 = d \sqrt{ 1 - (E_0/A)^{1/\kappa}} $. The expansion of the Mayer function to second order in $u$ fails if the third-order term starts to contribute more than, say, 10 \%. This failure only happens for particle separations smaller than $r_0$, the typical separation at maximum penetration, and, hence, for atypically energetic collisions. For typical energies and penetrations, particles will pass each other in the channel and contribute to the pressure scaling if the thermally possible penetration depth $\delta$ exceeds the interaction-disk overlap $d \epsilon$ due to the periodic boundaries. The upper bound for $\epsilon$ is thus estimated to be $\epsilon_{\max} = \delta/d = (d- r_0)/d$, and the minimum channel width for which scaling is expected to hold becomes $L_{y,\min} = 2d - d\epsilon_{\max} = d + r_0$. Thus, the scaling interval decreases with $A$ and increases with $\kappa$. We find that $\kappa =2$ and $A= 10$ are a suitable choice, which gives a reasonable scaling range, and we consider this case first. Note that the choice $\kappa > 1$ also offers the slight numerical advantage that the particle force is continuous and vanishes at $r = d$. The simulation results for the pressures with potential parameters $ A = 10 E_0$ and $ \kappa = 2$ are shown by the dots in the bottom panel of Fig. \ref{pressure_e400}. In the simulation we used 20 particles at a density $N/V = 0.01$. \begin{figure} \centering {\includegraphics[width=10cm]{Fig_7top_channel.ps}}\\ \vspace{-6mm} {\includegraphics[width=10cm]{Fig_7middle_channel.ps}}\\ \vspace{-6mm} {\includegraphics[width=10cm]{Fig_7bottom_channel.ps}} \vspace{-8mm} \caption{Simulation results for the pressures as a function of the channel width $L_y$ with periodic boundaries. The potential parameter $A$ varies from $A = 400$ (top) to $A = 100$ (middle) and $A=10$ (bottom), and $\kappa = 2$. In this figure $N/V = 0.01$, and $N = 20$. The shaded areas indicate (very conservative) estimates of the scaling regimes. The smooth lines are a fit of Eq. (\ref{pfit}) for $P$ and $P_{yy}$ to the data in the shaded regime. Reduced units are used for which $d$ and $E_0$ are unity.} \label{pressure_e400} \end{figure} The estimated scaling interval, $L_{y,\min} \approx 1.83 d < L_y < 2d $, is indicated by the shaded area. The smooth lines are a fit of Eq. (\ref{pfit}) to the numerical data points for $P$ and $P_{yy}$ in that range. It shows that our estimate is rather conservative, since the fits represent the data points reasonably well in a slightly wider interval $1.76 d \le L_y \le 2 d$. The kinetic energy per particle is about $0.998 E_0$ and varies only marginally with $L_y$. The scaling is more convincingly demonstrated in Fig. \ref{scalcon}, where \begin{figure} \centering { \includegraphics[width=10cm]{Fig_8_channel.ps}} \vspace{-3mm} \caption{Scaling of the transverse pressure $P_{yy}$ for the power-law potential model with $A = 10$ and $\kappa = 2$ below the critical channel width $L_y = 2$. Here, $\epsilon = (2 - L_y)$. The slope of the straight line corresponds to the theoretically expected scaling, $2\kappa + 1/2 = 4.5$. Reduced units are used as explained in the main text. 
} \label{scalcon} \end{figure} the $\epsilon$-dependence of the singular part, $\Delta \frac{PV}{\langle K \rangle} \equiv \left[ \frac{PV}{\langle K \rangle} \right] - \left[ \frac{PV}{\langle K \rangle} \right]_{\epsilon = 0} $, for $P_{yy}$ is shown. The straight line indicates the expected scaling with the power 4.5. For $\epsilon > 0.07$, corresponding to $L_y < 1.93\, d$, the scaling breaks down as expected. Next, we consider a potential with $A=100 E_0, \; \kappa = 2$, which is shown in Fig. \ref{pot}, and which resembles more realistic repulsive potentials. The channel-width dependence of the pressures is shown in the middle panel of Fig. \ref{pressure_e400}. The estimated scaling range is much narrower than before, $L_{y,\min} = 1.95 d$, and is indicated by the shaded area. The smooth lines are fits of Eq. (\ref{pfit}) to the data points for $P$ and $P_{yy}$ in that interval and confirm that our scaling-range estimate is rather conservative. The kinetic energy per particle is around $0.999 E_0$ in this case, and varies marginally with $L_y$. Finally, we consider the limiting case of a rather steep potential such as for $A = 400 E_0$ and $ \kappa = 2$ (much steeper than the $A = 100$, $\kappa = 2$ curve in Fig. \ref{pot}), which already resembles that of hard disks and, therefore, should give a pressure variation with $L_y$ similar to that found in Ref. \cite{FMP04}. The results are shown in the top panel of Fig. \ref{pressure_e400}. The average kinetic energy per particle is $0.9997 E_0$. For $L_y < 1.95 d$ the pressure curves are indeed very similar to those of a hard-disk gas of 20 disks at the same density and at unit kinetic energy per particle, as is shown in Fig. 3 of Ref. \cite{FMP04}. Differences appear for channel widths very close to $2 d$, which are due to the expected scaling. The estimated scaling range is very narrow, $L_{y,\min} = 1.975 d$, as indicated by the shaded area in the top panel of Fig. \ref{pressure_e400}. But a fit of Eq. (\ref{pfit}) in the range $1.96 d\le L_y \le 2 d$ represents the data points for $P$ and $P_{yy}$ reasonably well, as is shown by the smooth lines. Before closing this section, we provide some details about the molecular dynamics simulations. They were carried out with a hybrid code combining the advantages of the event-driven algorithm for hard particles during the forceless streaming stage with the simplicity of a time-stepping integration scheme during the collision of two or more particles. The beginning of each pair collision was determined as in the event-driven algorithms of the previous sections. During the collisions the equations of motion were integrated with a fourth-order Runge-Kutta scheme. The end of each pair collision was determined by interpolation with a spatial uncertainty of less than 10$^{-8}$ reduced units. The moment the last interacting particles separate, another streaming move is initiated. This method is particularly suited for low densities. It even allows one to follow the trajectory accurately for models with discontinuous forces. Periodic boundaries are used. In most cases a trajectory was followed for two million reduced time units $\sqrt{md^2/E_0}$. \section{Channels with reflecting boundary conditions} \label{reflecting-boundaries} \subsection{Soft disks: single step potential} In this section we calculate the pressure components of soft disks in a narrow rectangular box with elastic reflecting boundary conditions in the $y$ direction. 
Since we are interested in the narrow-channel limit, where the length of the box is much larger than the width, the system is not sensitive to the boundary conditions in the $x$ direction. For simplicity we take periodic boundary conditions in this direction. We consider disks of diameter $d$, which interact with each other via the step potential of Eq. (\ref{potential1}), but with an additional $\delta$-function at the center of the particles: \begin{equation} u(r)= \left\{ \begin{array}{ll} \delta(r) + u & \qquad r \leq d \;,\\ 0 & \qquad r>d \;. \end{array} \right. \label{single-step-potential} \end{equation} This $\delta$ function does not affect the particle-particle interactions, but it is responsible for the elastic reflections from the boundary, which confine the disk centers to the volume $V = L_x L_y$, where $L_y$ is referred to as the channel width. As a result of the reflecting boundary conditions, the system is no longer translationally invariant, and the overlap integral (\ref{q}) corresponding to the second virial coefficient is replaced by \begin{equation} q(L_y)=\frac{1}{V}\int f_{12}\, d^2{r_{1}}\,d^2{r_{2}} ~ . \label{q_reflecting} \end{equation} As was done in the case of periodic boundary conditions, with each particle we associate an interaction disk of radius $d$. Two particles interact with each other only if the center of a particle is within the interaction disk of the other. In the case $L_y>d$ the overlap integral is given by (see the top panel of Fig. \ref{overlap_box2}) \begin{equation} q(L_y)=\frac{1}{L_y}\left[\pi d^2 L_y - 2\int_0^d S(\vartheta,d)dy \right](e^{-\beta u}-1)~, \end{equation} where $S(\vartheta,d)$ is the segment of a circle of radius $d$ corresponding to a central angle $2\vartheta$, \begin{equation} S(\vartheta,d)=d^2h(\vartheta)~, \qquad h(\vartheta)=\vartheta -\sin\vartheta \cos\vartheta ~,\qquad \mbox{and} \qquad \cos \vartheta =\frac{y}{d} ~. \label{S-y} \end{equation} Evaluating the integral one obtains \begin{equation} q(L_y)=\frac{1}{L_y}\left( \pi d^2 L_y -\frac{4}{3}d^3 \right)(e^{-\beta u}-1)~. \label{q-reflecting>} \end{equation} \begin{figure} \centering {\includegraphics[width=8cm]{Fig_9a_channel.eps}}\\ \vspace{3mm} {\includegraphics[width=8cm]{Fig_9b_channel.eps}} \vspace{-3mm} \caption{Interaction disks for the step potential in Eq. (\ref{potential1}) for $L_y > d$ (top) and $L_y < d$ (bottom) in a channel with reflecting boundary conditions.} \label{overlap_box2} \end{figure} For $L_y<d$ the overlap integral is given by (see the bottom panel of Fig. \ref{overlap_box2}) \begin{equation} q(L_y)=\frac{1}{L_y}\left[\pi d^2 L_y - 2\int_0^{L_y} S(\vartheta)dy \right](e^{-\beta u}-1)~, \end{equation} where, as in the previous case, $S(\vartheta)$ and $y$ are defined by Eq. (\ref{S-y}). The integral can be readily evaluated to yield \begin{equation} q(L_y)=\frac{1}{L_y}\left[ \pi d^2 L_y -2d^3\left(\frac{2}{3}+\vartheta_0 \cos \vartheta_0 - \sin \vartheta_0 +\frac{1}{3} \sin^3 \vartheta_0 \right) \right](e^{-\beta u}-1)~, \label{q-reflecting<} \end{equation} with \begin{equation} \cos \vartheta_0 = \frac{L_y}{d} \;. \end{equation} The pressure components may be evaluated by using Eqs. (\ref{q-reflecting>},\ref{q-reflecting<}) in the general expressions (\ref{pxxgeneral},\ref{pyygeneral}). This analysis demonstrates that the pressure components exhibit a singularity at a channel width $L_y=d$. 
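For concreteness, these expressions are easily evaluated numerically. The following Python sketch (again our own illustration, with the values $u=1$, $\beta=1$ and $v=100$ assumed for the example) computes the overlap integral of Eqs. (\ref{q-reflecting>},\ref{q-reflecting<}) and the resulting pressure components from the general expressions (\ref{pxxgeneral},\ref{pyygeneral}), taking the derivative $dq/dL_y$ numerically:

\begin{verbatim}
import numpy as np

d, u0, beta, v = 1.0, 1.0, 1.0, 100.0   # assumed parameters (reduced units)

def q(Ly):
    # overlap integral for reflecting boundaries; the two branches join
    # with continuous value and slope at Ly = d
    fu = np.exp(-beta * u0) - 1.0
    if Ly >= d:
        return (np.pi * d**2 * Ly - 4.0 * d**3 / 3.0) * fu / Ly
    th0 = np.arccos(Ly / d)
    g = 2.0/3.0 + th0*np.cos(th0) - np.sin(th0) + np.sin(th0)**3 / 3.0
    return (np.pi * d**2 * Ly - 2.0 * d**3 * g) * fu / Ly

def pressures(Ly, h=1.0e-6):
    # P_xx v / kT and P_yy v / kT to second order in 1/v
    dq = (q(Ly + h) - q(Ly - h)) / (2.0 * h)   # numerical dq/dLy
    pxx = 1.0 - q(Ly) / (2.0 * v)
    pyy = pxx + Ly * dq / (2.0 * v)
    return pxx, pyy

for Ly in (0.8, 0.9, 0.99, 1.01, 1.1, 1.5):
    print(Ly, *pressures(Ly))
\end{verbatim}

Plotting these curves against $L_y$ reproduces the kink structure of the smooth theoretical lines in Fig. \ref{Fig8} below (up to the different parameter values used there).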
The nature of the singularity is obtained by expanding the overlap integral in $1-L_y/d \equiv \epsilon$ for small $\epsilon>0$. In this limit one has $\vartheta_0 \simeq \sqrt{2\epsilon}$, and the expansion of Eq. (\ref{q-reflecting<}) yields a singular contribution to the overlap integral proportional to $\vartheta_0^5$. Thus, for small $\epsilon$ the singular part of the overlap integral $\delta q(L_y)$ satisfies \begin{equation} \delta q(L_y)\propto \epsilon^{5/2}~. \end{equation} As a result, $P_{xx}$ exhibits a $5/2$ singularity at $L_y=d$, and its third derivative with respect to $L_y$ diverges as $L_y$ approaches $d$ from below. On the other hand, $P_{yy}$ exhibits a stronger, $3/2$, singularity, as it is related to the derivative of the overlap integral with respect to $L_y$. \begin{figure} \centering { \includegraphics[width=10cm]{Fig_10_channel.ps}} \vspace{-3mm} \caption{Channel-width dependence of the pressures for $N=60$ particles, which interact with each other with the weak repulsive step potential of Eq. (\ref{potential1}), where $ u = 1$. Reflecting boundary conditions are used as described in the main text. The particle diameter $d = 1.5$ in the reduced units applied, and the energy per particle, $E/N$, is unity. Keeping the particle density constant, $N/V = 0.01$, the temperature varies slightly with $L_y$. } \label{Fig8} \end{figure} A comparison between theoretical and simulation results is provided in Fig. \ref{Fig8} for a {\em positive} step potential. We use reduced units, in which the particle diameter $d$ is 1.5, and for which the total energy per particle, $E/N \equiv E_0$, is unity. In these units, we choose for the potential $u = 1$. In the simulation we studied $N=120$ particles enclosed in a box with reflecting boundaries both parallel and perpendicular to the channel axis, such that the centers of the particles are confined to the volume $V = L_x L_y$ (and the particle disks to the volume $(L_x + d)(L_y+d)$). Varying the channel width $L_y$, the density is kept constant, $N/V = 0.01$. The singular width, $d$, is indicated by the vertical line. The agreement between the points from the simulation and the theoretical smooth lines corresponding to $P_{xx}, P_{yy}$ and $P = (P_{xx} + P_{yy})/2$ is nearly perfect. To demonstrate the scaling directly, we plot in Fig. \ref{scalsteppot} the computer simulation results for the singular pressure contributions of $P_{xx}$ and $P_{yy}$ below the singular channel width $L_y = d$. To do this, we note that the non-singular contribution to the overlap integral Eq. (\ref{q-reflecting<}) for $L_y < d$ is given by Eq. (\ref{q-reflecting>}), also evaluated at $L_y < d$. The corresponding non-singular (NS) pressure contributions then follow from Eqs. (\ref{pxxgeneral}) and (\ref{pyygeneral}) with $q(L_y)$ taken from Eq. (\ref{q-reflecting>}). If this non-singular part is subtracted from the pressures determined by the simulations, a plot of $$ Z_{\alpha \alpha} \equiv \frac{W_{\alpha \alpha}} {\langle K \rangle} - \left[ \frac{P_{\alpha \alpha} v}{kT} -1 \right]_{\mbox{NS}} \;, \qquad \alpha \in \{x,y\} $$ as a function of the distance from the singularity, $\epsilon = (d - L_y)/d$, reveals the expected scaling for the $xx$ and $yy$ pressure components. This is demonstrated by the straight lines in the log-log plot of Fig. \ref{scalsteppot}, which are fully consistent with the expected scaling, $3/2$ for $P_{yy}$ and $5/2$ for $P_{xx}$. 
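The same exponents can be read off directly from the analytic expressions, by subtracting the continuation of Eq. (\ref{q-reflecting>}) from Eq. (\ref{q-reflecting<}) below $L_y=d$. A minimal Python check (our own illustration; $u=1$ and $\beta=1$ are assumed):

\begin{verbatim}
import numpy as np

d, beta, u0 = 1.0, 1.0, 1.0
fu = np.exp(-beta * u0) - 1.0

def dq_sing(eps):
    # delta q = q_<(Ly) - [q_>(Ly) continued below Ly = d]
    Ly = d * (1.0 - eps)
    th0 = np.arccos(Ly / d)
    g = 2.0/3.0 + th0*np.cos(th0) - np.sin(th0) + np.sin(th0)**3/3.0
    return (4.0*d**3/3.0 - 2.0*d**3*g) * fu / Ly

eps = np.array([1e-4, 1e-3, 1e-2])
print(np.polyfit(np.log(eps), np.log(np.abs(dq_sing(eps))), 1)[0])  # ~ 5/2

# the Ly-derivative, which enters P_yy, carries the stronger singularity
h = 1e-7
der = [(dq_sing(e - h) - dq_sing(e + h)) / (2.0*h*d) for e in eps]
print(np.polyfit(np.log(eps), np.log(np.abs(der)), 1)[0])           # ~ 3/2
\end{verbatim}

Both fitted exponents agree with the scaling demonstrated in Fig. \ref{scalsteppot}.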
\begin{figure} \centering { \includegraphics[width=10cm]{Fig_11_channel.ps}} \vspace{-3mm} \caption{Channel-width scaling of the singular pressure contributions, which are obtained by subtracting the respective non-singular contributions from the computer simulation results for the pressures (points). The straight lines indicate the theoretically expected scaling. For details we refer to the main text. } \label{scalsteppot} \end{figure} It is interesting to note that a system with a purely attractive step potential, $u < 0$, is thermally unstable and tends to form clusters. The maximum entropy state consists of a single cluster of overlapping disks, floating in the gas of the remaining particles. As a consequence, the specific heat is negative \cite{Thirring,Hertel,TNP}, and the temperature is increased due to the conversion of potential energy into kinetic energy. Such a property may only arise in the microcanonical ensemble and is familiar for gravitational systems. However, the attracting force need not be of long range \cite{PNT90}. A negative specific heat may even be observed for quantum-mechanical Coulomb systems \cite{TNP} and in experiments on nuclear fragmentation \cite{agostino} and atomic clusters \cite{schmidt,maerk}. In a preliminary study, we have observed the clustering for $ u = -1$, but we do not consider this case in more detail, because the times for reaching thermodynamic equilibrium are excessively long. \subsection{Soft disks with a hard core: two-step potential} \label{soft_disks_hard_core} In this section we consider particles which interact with a two-step potential \begin{equation} u(r)= \left\{ \begin{array}{ll} \infty & \qquad r \leq d \;,\\ u & \qquad d < r<D \;,\\ 0 & \qquad r \geq D \;. \end{array} \right. \label{2step_potential}\end{equation} The soft potential $u$ may be either attractive or repulsive. The disks interact with the walls of the channel only through the hard core of diameter $d$ (in the previous section this wall interaction was represented by the $\delta$-function at the particle center). As before, we assume reflecting boundary conditions in the $y$ direction, as indicated in Figs. \ref{overlap_box3} and \ref{overlap_box4}, and periodic boundary conditions in the $x$ direction. Thus, the volume accessible to the centers of the particles is given by $V = (L_y - d) L_x$. \begin{figure} \vspace{-4cm} \centering { \includegraphics[width=12cm]{Fig_12_channel.eps}} \vspace{-6.5cm} \caption{ The interaction disks of a particle with a two-step potential in the case of reflecting boundaries. The inner disk has diameter $2d$ and the outer disk $2D$. The boundaries of the channel are indicated by bold lines. The thin lines are at a distance $d/2$ from the respective boundary; they represent the limits which the centers of the disks cannot cross. The vertical coordinate of the disk, $y$, and the angles $\phi$ and $\vartheta$ are indicated.} \label{overlap_box3} \end{figure} \begin{figure} \vspace{-6cm} \centering { \includegraphics[width=12cm]{Fig_13_channel.eps}} \vspace{-6.5cm} \caption{The interaction disks of a particle with a two-step potential in the case of reflecting boundaries for $L_y<2d$, where the angles $\phi_0$ and $\vartheta_0$ are defined. In the intermediate regime $2d<L_y<D+d$, the smaller disk does not intersect the boundary of the channel, and $\vartheta_0$ becomes $0$.} \label{overlap_box4} \end{figure} In calculating the overlap integral, one should distinguish between three regimes. 
\begin{itemize} \item For $L_y>D+d$ the integral may be expressed as (see Fig. (\ref{overlap_box3})) \begin{eqnarray} q(L_y)&=& \pi (D^2-d^2) (e^{-\beta u}-1) -\pi d^2 \nonumber \\ &-& \frac{2}{L_y-d}\left[D^2 \int_0^Dh(\phi)dy -d^2\int_0^d h(\vartheta)dy \right](e^{-\beta u}-1) \nonumber \\ &+& \frac{2}{L_y-d}d^2 \int_0^d h(\vartheta)dy ~, \end{eqnarray} where the function $h$ is given in Eq. (\ref{S-y}), and the angles $\phi$ and $\vartheta$ are related to $y$ via (see Fig. (\ref{overlap_box3})) \begin{equation} y=d \cos\vartheta = D \cos \phi ~. \end{equation} Evaluating the integrals, one obtains \begin{equation} q(L_y)=(\pi D^2 -\frac{4}{3} \frac{D^3}{L_y-d})(e^{-\beta u}-1) - (\pi d^2 -\frac{4}{3} \frac{d^3}{L_y-d})e^{-\beta u} ~. \label{q-reflecting_hard_soft>} \end{equation} \item For $2d<L_y<D+d$ the overlap integral becomes \begin{eqnarray} q(L_y)&=& \pi (D^2-d^2) (e^{-\beta u}-1) -\pi d^2 \nonumber \\ &-& \frac{2}{L_y-d}\left[D^2 \int_0^{L_y-d}h(\phi)dy -d^2\int_0^d h(\vartheta)dy \right](e^{-\beta u}-1) \nonumber \\ &+& \frac{2}{L_y-d}d^2 \int_0^d h(\vartheta)dy ~. \end{eqnarray} The integrals are readily evaluated to yield \begin{eqnarray} q(L_y)&=& (\pi D^2 -\frac{2D^3}{L_y-d} g(\phi_0))(e^{-\beta u}-1) \nonumber \\ &-& (\pi d^2 -\frac{4}{3} \frac{d^3}{L_y-d})e^{-\beta u} ~, \label{q-reflecting_hard_soft_mid} \end{eqnarray} where \begin{equation} g(\alpha)=\frac{2}{3} +\alpha\cos \alpha - \sin \alpha + \frac{1}{3} \sin^3 \alpha ~, \end{equation} and (see Fig. (\ref{overlap_box4})) \begin{equation} \cos \phi_0 = \frac{L_y-d}{D} ~. \label{phi0} \end{equation} \item For $L_y < 2d$ the overlap integral is \begin{eqnarray} q(L_y)&=& \pi (D^2-d^2) (e^{-\beta u}-1) -\pi d^2 \nonumber \\ &-& \frac{2}{L_y-d}\left[D^2 \int_0^{L_y-d}h(\phi)dy -d^2\int_0^{L_y-d} h(\vartheta)dy \right](e^{-\beta u}-1) \nonumber \\ &+& \frac{2}{L_y-d}d^2 \int_0^{L_y-d} h(\vartheta)dy ~. \end{eqnarray} It yields \begin{eqnarray} q(L_y)&=& (\pi D^2 -\frac{2D^3}{L_y-d} g(\phi_0))(e^{-\beta u}-1) \nonumber \\ &-& (\pi d^2 -\frac{2d^3}{L_y-d} g(\vartheta_0)) e^{-\beta u} ~, \end{eqnarray} where $\phi_0$ is given by Eq. (\ref{phi0}), and \begin{equation} \cos \vartheta_0 =\frac{L_y-d}{d} ~, \end{equation} (see Fig. (\ref{overlap_box4})). \end{itemize} The nature of the singularities of the pressure components at $L_y=D+d$ and $L_y=2d$ can be analyzed as before. It is straightforward to show that at both points the $P_{yy}$ component exhibits a $3/2$ singularity, while the $P_{xx}$ component exhibits a $5/2$ singularity. \begin{figure} \centering { \includegraphics[width=10cm]{Fig_14_channel.ps}} \vspace{-3mm} \caption{ Channel-width dependence of the potential-generated pressures for the interaction potential (\ref{2step_potential}) with $d=1$, $D=1.5$ and $u=1$. The dots are computer simulation results for $N=60$ particles at a density $N/V = 0.01$. The smooth lines are obtained from the theoretical overlap integrals of Sec. \ref{soft_disks_hard_core}. The singular channel widths are indicated by vertical lines. Reduced units are used, for which the particle mass $m$ and the energy per particle, $E/N$, are unity. } \label{Fig15} \end{figure} \begin{figure} \centering { \includegraphics[width=10cm]{Fig_15_channel.ps}} \vspace{-3mm} \caption{Channel-width dependence of the potential-generated pressures for the interaction potential (\ref{2step_potential}) with $d=1$, $D=1.5$ and $u=-1$. The dots are computer simulation results for $N=60$ particles at a density $N/V = 0.01$. 
The smooth lines are obtained from the theoretical overlap integrals of Sec. \ref{soft_disks_hard_core}. The singular channel widths are indicated by vertical lines. Reduced units are used, for which the particle mass $m$ and the energy per particle, $E/N$, are unity. } \label{Fig16} \end{figure} Comparisons of these results with computer simulations for $N = 60$ particles of equal mass $m$ are provided in Fig. \ref{Fig15} for the potential (\ref{2step_potential}) with a positive step $u = 1$, and in Fig. \ref{Fig16} for the case of a negative step potential, $u = -1 $. All quantities are given in reduced units, for which the particle mass $m$, the hard-core diameter, $d$, and the total energy per particle, $E_0 = E/N$, are unity. The outer diameter is taken to be $D=1.5$. The singular points at $L_y = d + D = 2.5$ and $L_y = 2d = 2$ are marked by the vertical lines. The density is $N/V = 0.01$. For the computation of the theoretical pressures resulting in the smooth lines of Figs. \ref{Fig15} and \ref{Fig16}, the slight variation of the kinetic energy and, hence, of the temperature with the channel width was taken into account. The agreement between the theoretical expressions and the computer simulation results for the potential part of the pressures is very satisfactory. \section{Summary} \label{conclusions} In this paper we studied the pressure tensor of a system of disks moving in a narrow two-dimensional channel, with either periodic or reflecting boundary conditions. We considered the low-density regime using the Mayer cluster expansion, and tested the validity of the expansion by molecular dynamics simulations. It is found that whenever the two-body interaction potential between disks, $u(r)$, exhibits a singularity at some distance $r_0$, the pressure tensor exhibits a singularity as a function of the channel width, at one or more widths which are simply related to $r_0$. By studying several classes of interaction potentials, some rather general conclusions regarding the singularities of the pressure tensor can be reached. In the case of periodic boundary conditions, singularities take place at channel widths $L_y = 2 r_0/n$ with $n=1,2, \dots$~. For potentials which exhibit a discontinuity at $r_0$, the transverse pressure, $P_{yy}$, exhibits a $1/2$ singularity while the longitudinal component, $P_{xx}$, exhibits a weaker $3/2$ singularity. For potentials which are continuous at $r_0$, and whose singular part vanishes as $(r_0-r)^\kappa$, the transverse pressure exhibits a $2\kappa +1/2$ singularity while the singularity of the longitudinal pressure is $2\kappa +3/2$. Although these results have been demonstrated for specific interaction potentials $u(r)$, they are rather general, as they are related only to the nature of the singularity of the potential. In the case of reflecting boundary conditions the pressure tensor exhibits a singularity at $L_y=r_0$. The singularity is weaker than that of the case of periodic boundary conditions. In particular, it was found that for a potential which is discontinuous at $r_0$, the transverse component of the pressure exhibits a $3/2$ singularity, while the longitudinal component exhibits a weaker $5/2$ singularity. \section{Appendix} \label{Appendix} There are a few minor misprints in some of the equations in the paper on hard disks \cite{FMP04} which are corrected below. These misprints do not affect any of the expressions for the pressure derived in that paper, or any of the numerical results. \noindent In particular, Equation (3) of Ref. 
\cite{FMP04} should read \begin{equation} s \equiv \frac{S}{N}= \ln \left(v-\frac{q(L_y)}{2}\right), \nonumber \label{entropycorr} \end{equation} and Equation (13) in \cite{FMP04} should become \begin{eqnarray} \frac{P_{xx}v}{kT} & = & L_x \left(\frac{\partial s}{\partial L_x}\right)_{L_y} ,\nonumber \\ \frac{P_{yy}v}{kT} & = & L_y \left(\frac{\partial s}{\partial L_y}\right)_{L_x} . \nonumber \end{eqnarray} Another misprint concerns the definition of ${\bf v}_c$ in the expression for the virial in Eq. (8). ${\bf v}_c$ is the velocity change of a particle $i$ taking part in a binary collision $c$. \section{Acknowledgments} HAP is grateful for the hospitality accorded to him at the Weizmann Institute of Science. Support from the Austrian Science Foundation (FWF), grant P18798, the Minerva Foundation with funding from the Federal German Ministry for Education and Research and the Albert Einstein Minerva Center for Theoretical Physics is gratefully acknowledged. A part of this work was carried out at the Erwin Schr\"odinger Institute in Vienna on the occasion of a workshop in June 2008.
\section{Introduction} One of the most important problems to solve, in the realisation of quantum algorithms in hardware, is how to map operations onto the architecture. Scalable architectures for quantum computers are not expected to have all-to-all qubit connectivity: if we describe the pairs of qubits which may interact directly by the edges of a graph (or ``network'') $G$ whose nodes are qubit labels, then $G$ will not contain edges between all pairs of nodes. This raises the question of how best to realise two-qubit operations on data stored on pairs of qubits $a,b \in G$ which are not adjacent in $G$. One solution is to swap qubit states through the network until they are on adjacent nodes~\cite{Cowtan-etal-2019,me,me2}. An alternative, which is possible when not all qubits in the hardware platform are used to store data, is to distribute entanglement between qubits $a', b' \in G$ which are adjacent to $a$ and $b$ respectively. This allows a gate between $a$ and $b$ to be performed by teleportation~\cite{newref1}. Which approach is the more practical will depend on whether it is economical to leave some number of qubits free to use as auxiliary space, but also on how much noise the state is subject to as a result. The question of which approach will lead to more accumulated noise will be determined in part by how long it takes to realise the chosen approach, in total, over all operations to be performed in a given algorithm. To reduce the time taken in distributing entanglement for two-qubit operations, we consider how entangled states may be distributed between multiple pairs in parallel. A direct approach may result in crossing paths in the network $G$, forcing the entangled pairs to be distributed in sequence rather than in parallel. Crossing paths for transmissions across a network are potentially an issue in conventional networks as well. In that setting, one solution to this problem is \emph{network coding}, in which independent signals in a network may share bandwidth by allowing intermediate nodes to combine their signals in appropriate ways to distribute complete information about each signal across the network. (A simple illustrative example of this, the ``butterfly network'', is shown in Fig.~\ref{f01}.) This motivates the idea of using similar concepts to realise entanglement distribution between multiple pairs of qubits in parallel. Previous work~\cite{Leung2006,Kobayashi2009,Kobayashi2011,Satoh2012} has shown that when a classical binary linear network code exists for the ``multiple unicast'' problem (the problem of sending signals between $k$ pairs of sources and targets) on a classical network, there exists a quantum network code to distribute Bell states between each source--target pair in a quantum network of the same connectivity. However, these results suppose that each ``node'' is a small device, hosting multiple qubits and able to perform arbitrary transformations on them before transmitting onward ``messages'' through the network. This does not reflect the architecture of many hardware projects to realise quantum computers, in which the ``nodes'' are single qubits, and edges are pairs which may be acted on by a quantum operation (such as a CNOT) rather than a directed communications link~\cite{IBM1,GoogleQC,Rigetti,Intel,IBM2}. 
\begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{fig1.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{(a) Example of network coding over the Butterfly network for input bitstreams ``A'' and ``B'' -- nodes either perform a modulo-2 sum of the incoming bitstreams (when labelled $\oplus$) or fan out the single incoming bitstream otherwise; (b) the Butterfly shown as a (topologically identical) $2 \times 3$ grid, with node order shown by the labelled indices. As the Butterfly network provides a useful illustrative example for many of the results presented in this paper, this ordering is defined and used consistently throughout the paper (for example, for qubit ordering).}} \label{f01} \end{figure} In this article, we describe techniques to translate linear network coding protocols on a directed graph $G$, to circuits --- called here ``QLNC circuits'' --- which involve only preparation of $\ket{0}$ and $\ket{+}$ states, CNOT gates along edges of $G$, unitary Pauli gates (possibly conditioned on classical information, which is communicated without constraints), and measurements of the observables $X$ and $Z$. Our techniques extend also to the multiple multicast problem, serving to distribute Bell and GHZ states across such a network $G$. We show that QLNC circuits allow us to distribute entanglement in a circuit whose quantum depth can be bounded from above by simple properties of the architecture network $G$, leading to a modest-sized constant for reasonable choices of $G$ (\emph{e.g.},~9 for a square lattice provided no receiver node has four in-coming links).~\label{discn:introSquareLatticeBound} In particular, the depth is independent of the number of qubit pairs to be entangled, the distance between the nodes in any of the pairs, or the total number of other qubits involved. In addition to this constant quantum depth, there is the cost of computing the classical controls for some of the quantum operations, which is at worst logarithmic in the number of qubits involved. These are lower circuit depths than can be achieved by realising two-qubit operations by routing~\cite{Cowtan-etal-2019,KMvdG-2019}. Furthermore, while our results are in some ways similar to what can be achieved with graph states (as described by Hahn~\emph{et al.}~\cite{Hahn}), our techniques are somewhat more versatile and also easier to analyse. We make these comparisons more precise in Section~\ref{sec:compare}. As well as describing how network codes can be used to distribute entanglement, in a setting where the nodes in the network represent individual qubits which may interact in pairs along the network, we also note two features of QLNC circuits that make them more versatile than classical linear network coding protocols: \begin{itemize} \item QLNC circuits can be used to simulate a classical linear network code ``out of order''. (Indeed, this is required for our main result, which simulates a linear network code in a depth which may be smaller than the length of the longest transmitter--receiver path in the classical network.) \item Entanglement swapping allows for QLNC circuits to perform entanglement distribution tasks that \emph{do not} correspond to classical linear network coding protocols --- that is, for networks $G$ in which the corresponding linear network coding problem has no solution. \end{itemize} These results hold as a result of using the (unconstrained) classical control to allow a QLNC circuit to simulate a classical linear network code, on a network with more edges than $G$. 
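The role played by entanglement swapping, and by the classically controlled Pauli corrections, can be made concrete with a small statevector calculation. The following Python sketch is our own illustration (not code from the works cited above); the Bell-basis convention $|B_{xz}\rangle = (Z^z X^x \otimes \mathbf{1})\,|\Phi^+\rangle$ is an assumption of the example. It verifies that measuring the middle two qubits of a pair of Bell states in the Bell basis, and applying a Pauli correction conditioned on the (classical) outcome, leaves the two outer qubits in the state $|\Phi^+\rangle$:

\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
PHI = np.array([1., 0., 0., 1.]) / np.sqrt(2.0)    # Bell state |Phi+>

def pauli(x, z):
    # Z^z X^x for x, z in {0, 1}
    return np.linalg.matrix_power(Z, z) @ np.linalg.matrix_power(X, x)

def bell(x, z):
    # Bell basis state |B_xz> = (Z^z X^x tensor I)|Phi+>, as a 2x2 array
    return (np.kron(pauli(x, z), I2) @ PHI).reshape(2, 2)

# qubit order (a, m1, m2, b): Bell pairs shared on (a, m1) and (m2, b)
psi = np.kron(PHI, PHI).reshape(2, 2, 2, 2)

for x in (0, 1):
    for z in (0, 1):
        # project the middle pair (m1, m2) onto the Bell outcome (x, z);
        # M[a, b] is the unnormalised post-measurement state of (a, b)
        M = np.einsum('aijb,ij->ab', psi, bell(x, z).conj())
        M /= np.linalg.norm(M)
        M = M @ pauli(x, z).T      # classically controlled fix-up on b
        assert np.allclose(M.reshape(4), PHI)
\end{verbatim}

Each of the four outcomes occurs with probability $1/4$, and in every case the conditional Pauli correction recovers $|\Phi^+\rangle$ on the outer pair; this is the elementary step which allows QLNC circuits to go beyond classical linear network codes on $G$.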
Our analysis of QLNC circuits involves a simple computational formalism, which may be of independent interest. The formalism is similar to classical network coding in its representation of data with time, and allows the easy use of classical network coding results and intuitions to reason about entanglement distribution circuits. While QLNC circuits are stabiliser circuits, and can be efficiently simulated using the stabiliser formalism, QLNC circuits do not require the full power of the stabiliser formalism to simulate. This allows us to reason about them more efficiently than is possible even with the stabiliser formalism. This yields at least a factor $2$ improvement in space and time requirements, and achieves $O(n)$ complexity (without using sparse matrix techniques) to simulate protocols which only involve superpositions of $O(1)$ standard basis states. These techniques can also be applied to network codes on qudits of prime dimension. The remainder of the paper is organised as follows. In Section~\ref{prev} we review existing literature on classical and quantum network coding. In Section~\ref{prelim} we introduce the QLNC formalism, and present the main results described above. In Section~\ref{qubitsection} we give the generalisation to qudit systems of prime dimension $d$. In Section~\ref{comp} we discuss the computational complexity of simulating circuits using the QLNC formalism, as well as that of discovering linear network codes. Finally, in Section~\ref{app1}, we include a detailed proof of Theorem~\ref{mainthm1}, which demonstrates the way in which a QLNC circuit may be regarded as realising a linear network code on an extended network $G' \supseteq G$. \section{Preliminaries} \label{prev} We begin by reviewing the literature on classical and quantum network coding, and by giving an overview of techniques to help the realisation of two-qubit operations in limited architectures. \subsection{Classical network coding} \begin{figure}[!t] \centering \includegraphics[width=0.29\linewidth]{Fig1a.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Another example of network coding, on a $4\times 3$ grid with three bitstreams ``A'', ``B'' and ``C''.}} \label{f01a} \end{figure} Network coding, as a means to increase information flow in mesh networks beyond what can be achieved by routing alone, was conceptualised by Ahlswede~\textit{et al.}~\cite{Ahlswede2000}. Rather than simply re-transmit one or more incoming information signals on different onward channels, a network coding protocol allows each node in the network to compute some function of its incoming signals (\emph{e.g.},~the \texttt{xor} of bitstreams from different incoming links) and to transmit the outcome, in principle ``encoding'' the signals. The standard example of network coding, providing a simple and clear illustration of the underlying principle, is the Butterfly network (Fig.~\ref{f01}), which enables simultaneous transmission between the diagonally opposite corners. Fig.~\ref{f01a} illustrates a more elaborate network which solves a slightly more complicated signal transmission problem. These examples, which represent a proof of principle of the benefits of network coding, both use binary \textit{linear} network coding -- that is, each node can encode its inputs by performing modulo-2 sums. 
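To make the encoding of Fig.~\ref{f01} completely explicit, the butterfly protocol can be written out in a few lines of Python (a sketch added here for illustration; the node names are ours):

\begin{verbatim}
def butterfly(a: int, b: int):
    # One round of linear network coding on the butterfly network:
    # sources s1, s2 hold bits a, b; each receiver sits diagonally
    # opposite its source, and the central edge is used only once.
    m1 = a ^ b    # centre node: modulo-2 sum of both incoming bits
    m2 = m1       # bottleneck edge forwards the single encoded bit
    r1 = b ^ m2   # receiver hearing b directly decodes a = b + (a + b)
    r2 = a ^ m2   # receiver hearing a directly decodes b = a + (a + b)
    return r1, r2

assert all(butterfly(a, b) == (a, b) for a in (0, 1) for b in (0, 1))
\end{verbatim}

The same pattern (encode at the bottleneck, decode at the receivers using locally available streams) underlies the grid example of Fig.~\ref{f01a}.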
Binary linear network coding provides the basis for the Clifford group QLNCs we address in this paper; however, it is worth noting that much of the classical literature considers a more general setting in which the network nodes can encode the input data-streams by performing modulo-$r$ summations (for $r \geqslant 3$) and / or nonlinear functions. Additionally, these examples are concerned with only one type of network coding task, namely the \textit{multiple unicast} problem (also known as the $k$-pairs problem), in which some number $k \geqslant 1$ of transmitter nodes each send a different information stream to a single, distinct receiver node. Other problems for which one may consider network coding protocols are the \emph{multicast} and \emph{broadcast} problems (in which a single source node sends the same information to some subset of~the~nodes\,/\,all~nodes in the network respectively), and the \textit{multiple multicast} problem (in which multiple transmitters send different information streams to different subsets of the other nodes). The advantage of network coding is most pronounced in the case that the network $G$ has edges which are all directed (as illustrated in the examples of Figs.~\ref{f01} and~\ref{f01a}). In the case of directed networks, it is always possible to contrive situations in which network coding can yield an unbounded increase in information throughput (for a $k$-pairs example see Fig.~\ref{f025}). \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{fig21.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{An example of a directed network in which network coding can yield an arbitrary speed-up in the $k$-pairs setting. The network is a directed graph, consisting of transmitters on the left-hand side, and receivers on the right-hand side. Each receiver is paired with the transmitter directly to its left (as shown by the indexed ``t''s and ``r''s). The network consists of two components: a bipartite graph between the transmitters and receivers, with direct links t$_i$-r$_i$ missing, shown in (a); and all of the transmitters connected to all of the receivers through a single directed link, shown in (b). Clearly, without network coding all of the transmitter-receiver pairs would have to share the link in (b), and the links in (a) would be useless; with network coding, however, each of the transmitters can broadcast its bitstream to each output, the left-most of the central nodes in (b) can perform a modulo-2 sum of all its inputs and forward the result, and the right-most of the central nodes in (b) simply broadcasts this to each receiver. It follows that each receiver receives four bitstreams -- the modulo-2 sum of all the transmissions, via the central nodes, and the bitstreams from all transmitters other than its pair -- and can thus perform a modulo-2 sum to resolve the bitstream from its paired transmitter. That is, for example, r$_1$ receives B, C and D directly from t$_2$, t$_3$ and t$_4$ respectively, as well as A$\oplus$B$\oplus$C$\oplus$D from the central nodes in (b), and can thus perform the modulo-2 sum of all its inputs A$\oplus$B$\oplus$C$\oplus$D$\oplus$B$\oplus$C$\oplus$D=A, as required. It can easily be appreciated that this construction extends to any number of transmitter-receiver pairs.}} \label{f025} \end{figure} However, in many practical contexts, the available communication channels are bidirectional. For such networks, it is often not clear that network coding will yield any benefits at all. 
For the broadcast setting, it has been proven that there is no benefit to the application of network coding over standard routing~\cite{Li2004a}. For the task of transmitting long information streams in undirected networks, techniques other than network coding are also competitive. For instance, \emph{fractional routing} involves dividing a single bitstream into parts and forwarding these along different routes, storing them locally in between rounds of use of the network. Fig.~\ref{f03} illustrates how fractional routing can achieve the same asymptotic throughput as network coding in the Butterfly network. The \textit{multiple unicast conjecture} posits that there is no benefit to the application of network coding over standard routing for multiple unicast channels, if fractional routing is possible \cite{Li2004b}. While the multiple unicast conjecture remains unproven, the improvement possible by using network coding has been upper-bounded by typically small factors for various restricted multiple unicast settings \cite{Cai2015}. This sets the tone for the other settings considered: an upper bound of two on the factor improvement over routing achievable by applying network coding has been proven for the multicast and multiple multicast settings \cite{Li2009}. Table~\ref{tab1} summarises the benefits of network coding in various settings. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig31.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Demonstration of achieving the same throughput on the Butterfly network as network coding, by using fractional routing instead. This is achieved by splitting each of the bitstreams ``A'' and ``B'' into halves, and forwarding half on each link, as shown in (a). That is (for example), let A consist of two bits $a_1$ and $a_2$, and likewise let B consist of two bits $b_1$ and $b_2$. In the first time interval the two bits of A are forwarded on different routes, as shown in (b), and then likewise for the bits of B, shown in (c). Thus time-sharing is used to achieve the fractional routing, and A and B can each forward two bits in a total of two time intervals, which corresponds to the same bit-rate as is achieved using network coding, as shown in Fig.~\ref{f01}.}} \label{f03} \end{figure} \begin{table}[t] \bigskip \centering \begin{tabular}{ l c c c c } & \textbf{Broadcast} & \textbf{Multicast} & \textbf{Multiple multicast} & \textbf{Multiple unicast}\\ \hline\hline \textbf{Directed} & $\infty$ & $\infty$ & $\infty$ & $\infty$ \\ \textbf{Undirected} & 1 & $\leq 2$ & $\leq 2$ & 1 (conjectured)\\ \end{tabular} \captionsetup{width=0.95\linewidth} \caption{\small{Maximum factor increase in information throughput using network coding, for various network and information transfer types.}} \label{tab1} \end{table} \subsection{Quantum network coding} The concept of network coding has been adapted to quantum state transmission~\cite{Leung2006}, and then to entanglement swapping~\cite{Kobayashi2009, Kobayashi2011, Satoh2012} in quantum communications networks. Because of the limitation imposed by the no-cloning theorem, the $k$-pairs problem (or for entanglement swapping, the problem of distributing entanglement between $k$ different transmitter--receiver pairs) is typically the problem studied in the quantum case. It has been shown that in any situation in which a classical network code for multiple unicast exists, there is also a quantum network code for entanglement swapping~\cite{Leung2006, Kobayashi2009, Kobayashi2011}.
These results include quantum generalisations of both linear and non-linear network codes. It is with the former that we are concerned in this article, and Satoh~\textit{et al.}\ provide a very good visual demonstration of the correspondence between classical and quantum linear network coding for the case of the Butterfly graph~\cite{Satoh2012}. In the case of ``classically assisted'' quantum linear network coding, in which classical communication is less constrained than quantum communication, de Beaudrap and Roetteler~\cite{deBeaudrap2014} show how quantum network coding can be described as an instance of measurement-based quantum computation, involving $X$ observable measurements to remove correlations between the input states and the states of qubits (or qudits) at interior nodes. One feature common to these existing works is that they consider quantum networks which have essentially the same form as classical telecommunications networks: nodes which have more than one qubit of internal memory (on which operations have negligible latency), which are connected to each other by channels with significant latency. This model is appropriate for entanglement distribution in quantum communication networks, but for entanglement distribution in quantum computers it may be relevant to consider a finer-scale model in which each node is itself a single qubit. Note that in this setting, fractional routing is made more complicated by the inability to store and transmit information without resetting the state of the qubit, making the multiple unicast conjecture less pertinent. (In the case that the ``information stream'' consists of a single Bell state between each of the $k$ transmitter/receiver pairs, fractional coding loses its meaning entirely.) \subsection{Other approaches to realise two-qubit operations in limited architectures} \label{sec:compare} While we consider the problem of distributing entanglement in limited quantum architectures, this is not the only approach to the problem of realising two-qubit operations between remote qubit pairs. We consider other approaches to this problem below. \subsubsection{Realising two-qubit operations via mapping/routing} One way in which two-qubit operations can be realised between qubits is simply by moving the data stored by these qubits to adjacent nodes, \emph{e.g.},~using logical SWAP operations to exchange the data held by adjacent qubits. We may then consider the way that such a circuit of SWAP gates (or several such exchanges of qubits) can be decomposed into more primitive gates~\cite{ZPW-2018,Cowtan-etal-2019}. More generally, we may consider how to decompose a single ``long-distance'' operation (such as a CNOT) between remote qubits into primitive gates acting on single qubits or on adjacent pairs of qubits~\cite{KMvdG-2019}. These mapping/routing results are applicable to the NISQ setting, \emph{i.e.},~the near-term prospect of hardware platforms in which all or nearly all of the qubits will store data which ideally is not to be lost or disturbed, owing to the scarcity of memory resources. They give rise to unitary circuits, whose depth must scale at least linearly with the distance between the pair of qubits on which we want to perform a two-qubit operation. However, it seems plausible that the parity-map techniques of Ref.~\cite{KMvdG-2019} may be interpreted in terms of linear network codes.
This might allow their techniques for finding suitable CNOT circuits (in the NISQ setting) to be combined with our techniques for distributing entanglement (in a setting where memory is less scarce). \subsubsection{Sequential distribution of Bell pairs} Our approach is to consider how multiple Bell pairs may be distributed through a quantum hardware platform in spite of ``bottlenecks'' in the network of the architecture, in a way that is independent of the distance between the qubits to be entangled. Note that individual Bell pairs can also be distributed in constant depth, by taking advantage of the concept of entanglement swapping (a~concept which implicitly underlies our techniques as well). In (otherwise idealised) quantum hardware with parallelisable two-qubit interactions limited to a connected, undirected network $G$, we may distribute entanglement between any pair of qubits $q$ and $q'$ by first preparing a long chain of entangled qubits, and ``measuring out'' all intermediate qubits (essentially using what we call ``qubit termination'' below), in constant time. It suffices to consider a chain $q_0, q_1, \ldots, q_\ell$ of qubits with $q$ and $q'$ as endpoints, and to perform the following: \begin{enumerate}[itemsep=0ex] \item Prepare each $q_j$ for $j$ even in the state $\ket{+}$, and the remaining qubits in the state $\ket{0}$. \item Perform a CNOT from qubit $q_j$ to qubit $q_{j{-}1}$ for each even $j > 0$. \item Perform a CNOT from qubit $q_j$ to qubit $q_{j{+}1}$ for each even $j < \ell$. \item Measure the $X$ observable on each $q_j$ for $0 \!<\! j \!<\! \ell$ even (recording the outcome $s_j = \pm 1$); and measure the $Z$ observable on each $q_j$ for $0 \!<\! j \!<\! \ell$ odd (recording the outcome $z_j \in \{0,1\}$). \item If $z_1 \oplus z_3 \oplus \cdots = 1$, perform an $X$ operation on either $q_0$ or $q_\ell$ (not both); and if $\prod_j s_j = -1$, perform a $Z$ operation on either $q_0$ or $q_\ell$ (not both). \end{enumerate} The parities $\bigoplus_j z_j$ and $\prod_j s_j$ can each be evaluated by a simple (classical) circuit of depth $O(\log \ell)$, and only determine the final single-qubit Pauli corrections, which fix which of the four Bell states has been prepared on $\{q,q'\}$ and allow it to be corrected to $\ket{\Phi^+}$; the rest of the procedure is evidently realisable by a quantum circuit with a small depth, independent of $\ell$. To distribute Bell states between $k$ pairs of qubits, it clearly suffices to perform the above procedure $k$ times in sequence, independently of whether the chains involved cross one another. (Furthermore, any pairs of qubits whose chains do not cross in $G$ can be processed in parallel.) As the final corrections can be performed in parallel, the total depth of this procedure is then at most $4k+1$, regardless of the distance between the nodes or the size of $G$. One of our main results (Theorem~\ref{thm:constdepth} on page~\pageref{thm:constdepth}) is to demonstrate conditions under which we may use a QLNC circuit to simulate a classical linear network coding protocol, in ``essentially constant'' depth --- that is, using a number of rounds of quantum operations which is independent of the size of the network, or the distance between transmitters and receivers. Thus, for sufficiently large $k$, our techniques will distribute the same entangled states in parallel, with a lower depth of quantum operations than distributing the same entanglement sequentially.
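As a concrete check of the chain procedure above, the following self-contained Python sketch (our own illustrative code, using a brute-force statevector simulation; none of the function names come from any referenced implementation) carries out the five steps on small chains and verifies that the endpoints are always left in the state $\ket{\Phi^+}$:
\begin{verbatim}
import numpy as np

def apply_h(state, n, q):
    new, mask = state.copy(), 1 << (n - 1 - q)
    for i in range(2 ** n):
        if not i & mask:
            a, b = state[i], state[i | mask]
            new[i], new[i | mask] = (a + b) / 2 ** 0.5, (a - b) / 2 ** 0.5
    return new

def apply_cnot(state, n, c, t):
    new, cm, tm = state.copy(), 1 << (n - 1 - c), 1 << (n - 1 - t)
    for i in range(2 ** n):
        if i & cm:
            new[i] = state[i ^ tm]
    return new

def apply_pauli(state, n, q, pauli):
    mask = 1 << (n - 1 - q)
    if pauli == 'X':
        return np.array([state[i ^ mask] for i in range(2 ** n)])
    return np.array([-a if i & mask else a for i, a in enumerate(state)])

def measure_z(state, n, q, rng):
    mask = 1 << (n - 1 - q)
    p1 = sum(abs(state[i]) ** 2 for i in range(2 ** n) if i & mask)
    out = int(rng.random() < p1)
    post = np.array([a if bool(i & mask) == bool(out) else 0.0
                     for i, a in enumerate(state)])
    return out, post / np.linalg.norm(post)

def chain_bell(l, rng):                    # qubits q_0 .. q_l
    n = l + 1
    state = np.zeros(2 ** n); state[0] = 1.0
    for j in range(0, n, 2):               # step 1: even qubits in |+>
        state = apply_h(state, n, j)
    for j in range(2, n, 2):               # step 2: CNOT q_j -> q_{j-1}
        state = apply_cnot(state, n, j, j - 1)
    for j in range(0, l, 2):               # step 3: CNOT q_j -> q_{j+1}
        state = apply_cnot(state, n, j, j + 1)
    sx = sz = 0
    for j in range(1, l):                  # step 4: measure interior
        if j % 2 == 0:                     # X measurement = H, Z, H
            state = apply_h(state, n, j)
            out, state = measure_z(state, n, j, rng)
            state = apply_h(state, n, j)
            sx ^= out
        else:                              # Z measurement, outcome kept
            out, state = measure_z(state, n, j, rng)
            sz ^= out
    if sz:                                 # step 5: Pauli corrections
        state = apply_pauli(state, n, l, 'X')
    if sx:
        state = apply_pauli(state, n, 0, 'Z')
    return state

rng = np.random.default_rng(1)
for l in (2, 3, 4, 5):
    n = l + 1
    state = chain_bell(l, rng)
    for j in range(2, l, 2):               # map leftover |+/-> to |0/1>
        state = apply_h(state, n, j)
    nz = sorted((i, a) for i, a in enumerate(state) if abs(a) > 1e-9)
    (i0, a0), (i1, a1) = nz                # exactly two basis states,
    assert np.isclose(a0, a1)              # with equal amplitudes, and
    assert (i0 >> (n - 1), i0 & 1) == (0, 0)   # endpoints 00 ...
    assert (i1 >> (n - 1), i1 & 1) == (1, 1)   # ... and 11: |Phi+>
\end{verbatim}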
\subsubsection{Distribution of entanglement via graph states} Our techniques yield results that are in some ways similar to results involving graph states~\cite{graphstate}. We describe some of these here. In the work by de~Beaudrap and Roetteler~\cite{deBeaudrap2014}, linear network codes give rise to measurement-based procedures involving graph states (which differ from, but are in some cases very similar to, the coding network itself). The connection to measurement-based quantum computing informed our results, and in particular our techniques feature both measurements and the depth-reduction for which measurement-based computing is known. However, as our results rest upon unitary operations performed on a network in which each node is a single qubit, the results of Ref.~\cite{deBeaudrap2014} do not directly apply. More intriguingly, Hahn~\textit{et al.}~\cite{Hahn} have shown how entanglement can be ``routed'' from an initial graph state, using transformations of graph states by local complementations. Graph states can be prepared in depth equal to the edge-chromatic number of the graph (\emph{i.e.},~as with our results, with depth independent of the distances between the qubits involved). In this sense they represent a better-known way to address the problem of shallow-depth multi-party entanglement distribution in restricted architectures. Our results differ from those of Hahn~\textit{et al.}~\cite{Hahn} in that we are able to avoid using the sophisticated technique of local complementation of graph states, instead reducing the problem of entanglement distribution to the somewhat more easily grasped subject of linear network coding, which has also been well-studied in the context of information technologies. There are also entanglement distribution tasks which cannot be achieved by local transformations of graph states, but which can be achieved through our techniques: see Section~\ref{sec:gphState-separatingExample}. \section{Quantum Linear Network Coding circuits} \label{prelim} In this Section, we describe techniques to distribute entanglement in architectures where the pairs of qubits which can interact are restricted to some graph $G$. Our results involve stabiliser circuits which in a sense simulate a linear network coding protocol on $G$ in order to distribute entanglement, given that the ``nodes'' are single qubits and the ``channels'' consist simply of whether or not a CNOT operation is applied. For this reason, we call these circuits \emph{quantum linear network coding circuits} --- or henceforth, QLNC circuits. We demonstrate below how to simulate a particular classical linear network code using a QLNC circuit, and how doing so can be used to distribute Bell states in parallel by reducing this task to the $k$-pairs problem. More generally, we show that the same techniques may be used to distribute GHZ states of various sizes by reducing this task to the multiple multicast problem. We also demonstrate the way in which QLNC circuits allow us to find solutions which somewhat extend what can be achieved by reduction to the $k$-pairs or multiple multicast problems. To aid this exposition, we introduce a formalism to describe the effect of QLNC circuits as a class of quantum circuits, independent of the application of entanglement distribution. \subsection{A first sketch of QLNC circuits} Consider a network $G$ with $k$ transmitters $T = \{t_1, \ldots, t_k\}$ and $k$ receivers $R = \{r_1, \ldots, r_k\}$, where we wish to distribute a Bell pair $\ket{\Phi^+}$ between each pair $(t_j, r_j)$, $1 \le j \le k$.
The simplest application of our techniques is to reduce this problem to the existence of a linear network coding solution to the corresponding $k$-pairs problem on $G$, which we may describe by a subgraph $G'$ (omitting edges not required by the protocol) whose edges are given directions by the coding protocol.\footnote{% Note that this is not an easy problem in general: see Section~\ref{comp}. } In particular, our results apply to linear network codes in which every node with output channels sends the same message (consisting of the sum modulo~2 of its inputs) on each of its output channels. We suppose that classical information may be transmitted freely, without being constrained to the network. While there will be non-trivial costs associated with communicating and computing with classical information, it is reasonable to suppose that the control system(s) governing the quantum architecture can perform such tasks, without being subject to the restrictions involved in the interactions between qubits. \subsubsection{Directly simulating classical linear network codes} Given a linear network code as above, to send a standard basis state from each transmitter to its respective receiver would be straightforward, using a circuit of CNOT gates to simulate the network code. It would suffice to simply initialise all qubits to $\ket 0$, and at each node, compute the message that the node should transmit by using CNOT gates (oriented along the directed edge) to compute the parity of its incoming message(s) at the corresponding qubit. Fig.~\ref{f001a} illustrates this in the case of the Butterfly network. \begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{fig41.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Sending computational basis states $x_1$ and $x_2$ over a butterfly network in which each vertex is a qubit, and each edge is a CNOT gate, as shown in (a) -- the order in which the CNOT gates are performed is given in the circuit, shown in (b). The grey numbers shown next to vertices in (a) are vertex indices, such that 1--6 run from upper-most to lower-most in (b).}} \label{f001a} \end{figure} Transmitting Bell pairs requires additional operations: if the qubits at the transmitter nodes do not initially start in the standard basis, the procedure described above will yield states in which the transmitters and receivers are entangled with the intermediate nodes. Following Refs.~\cite{Kobayashi2009, Kobayashi2011,deBeaudrap2014}, we adapt classical network coding protocols by preparing the transmitter states in the $\ket{+}$ state (conceived of as a uniform superposition over standard basis states), and performing $X$ observable measurements (\emph{i.e.},~measurements in the $\ket{\pm}$ or ``Fourier'' basis) to disentangle the intermediary qubits while leaving them in (joint) superpositions of the standard basis. This elaborated procedure is illustrated in Fig.~\ref{f001}. These measurements yield outcomes $\pm 1$. The $+1$ outcome represents a successful disentangling operation, erasing any local distinctions between possible standard basis states without introducing any relative phases. The $-1$ outcome represents a disentangling operation requiring further work, as a relative phase has been introduced between the possible standard basis states locally.
By conceiving of the state of the qubit as being the parity of some (undetermined) bit-values originating at the transmitters, one may show that it is possible to correct the induced phase by performing $Z$ operations on an (easily determined) subset of the transmitters or receivers. We refer to this procedure, of measuring a qubit with the $X$ observable and performing appropriate $Z$ corrections, as \emph{termination} of the qubit. By considering the state of the qubits in Fig.~\ref{f001}(b) after the Hadamard gates simply as a superposition $\tfrac{1}{2} \sum_{a,b} \ket{a,0,b,0,0,0}$ for $a,b \in \{0,1\}$, it is easy to show that the final state after the measurements and classically controlled $Z$ operations is $\tfrac{1}{2} \sum_{a,b} \ket{a,\!\:\cdot\!\;,b,b,\!\:\cdot\!\;,a} = \ket{\Phi^+}_{1,6} \ket{\Phi^+}_{3,4}$, using dots as place-holders for the measured qubits $2$ and $5$. \begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{fig4abxbasis.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Example of performing the Butterfly protocol with a single qubit at each node: (a) shows the order of edges; (b) shows the quantum circuit diagram -- note that the final two layers, consisting of Hadamard gates and measurements on qubits 2 and 5, and classically controlled Pauli-$Z$ gates on the other four qubits, are necessary for the ``termination'' of qubits 2 and 5, which do not appear in the final desired entangled state. We discuss the general process of termination in full in due course. Nodes are indexed as in Fig.~\ref{f001a}.}} \label{f001} \end{figure} \subsubsection{Simulating classical linear network codes ``out of order''} \label{outoforder} For the application of distributing entanglement, QLNC circuits may simulate linear network coding protocols in other ways than sequential evaluation. As a fixed entangled state represents a non-local correlation rather than information as such, it suffices to perform operations which establish the necessary correlations between the involved parties. This principle applies to the simulation of the network coding protocol itself, as well as to the eventual outcome of the entanglement distribution procedure. For instance: the role of a node with exactly one output channel in our setting is to establish (for each possible standard basis state) a specific correlation between the parities of the qubits of the nodes which are adjacent to it: specifically, that the total parity should be zero. These correlations may be established without simulating the transmissions of the classical network code in their usual order. Fig.~\ref{f1} illustrates a simple example of how a QLNC circuit may simulate a classical network protocol (again on the Butterfly network), performing the operations ``out of order''. \begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{fig5ab.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{The Butterfly performed out of order, as illustrated graphically in (a), with the measurement of qubit 2 performed immediately prior to the classical control; (b) shows the corresponding quantum circuit, and exhibits a good example of the termination process, as described in detail later on.
Nodes are indexed as in Fig.~\ref{f001a}.}} \label{f1} \end{figure} In this case, the correlation between the values of the qubits $1$, $3$, and $5$ (that their projections onto the standard basis should have even total parity, before the disentangling measurement on $5$) is established by attempting to project qubit $2$ onto the state $\ket{0}$, via a $Z$ observable measurement. In the case that the outcome is instead $\ket{1}$, we must correct any receiver nodes which would be affected by this, by performing (classically conditioned) $X$ operations (represented by the doubled lines, and performed at the sixth time-step). Again, by considering the state of the qubits in Fig.~\ref{f1}(b) after the Hadamard gates simply as a superposition $\smash{\tfrac{1}{2\sqrt 2}} \sum_{a,b,z} \ket{a,z,b,0,0,0}$ for $a,b,z \in \{0,1\}$, it is easy to show that the state immediately prior to the measurement of qubit $2$ is $\smash{\tfrac{1}{2\sqrt 2}} \sum_{a,b,z} \ket{a,(z{\oplus}a{\oplus}b),b,(a{\oplus}z),z,(b{\oplus}z)}$, and that projecting qubit $2$ onto the state $\ket{0}$ projects onto those terms for which $z = a \oplus b$. (Projection onto $\ket{1}$ projects onto those terms for which $z \oplus 1 = a \oplus b$, and we may correct for this simply by performing an $X$ operation on each receiver whose state depends on the index $z$.) It is then easy to verify, as with Fig.~\ref{f001}, that the resulting circuit prepares the state $\ket{\Phi^+}_{1,6} \ket{\Phi^+}_{3,4}$. One insight is that the freedom to communicate classical information outside of the network allows QLNC circuits to represent a linear network code on a larger network than the network $G$ which governs the two-qubit interactions --- with the qubits as nodes, and both the CNOT~gates\,/\,classically controlled $X$ gates as directed edges. We will formalise this insight in Section~\ref{main}. \subsubsection{A separation between QLNC circuits and local transformations of graph states} \label{sec:gphState-separatingExample} There are entanglement distribution tasks which can be achieved using QLNC circuits, but which cannot be achieved using local transformations of graph states. Fig.~\ref{novfigref} demonstrates a QLNC circuit on a simple network, whose effect is to prepare a four-qubit GHZ state on the nodes of degree~1. \begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{novfigniel.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{An example of an entanglement distribution task separating QLNC circuits from local transformations of graph states. The qubits are numbered 1,2,3,4 (left to right) along the top row, and 5, 6 (left to right) along the bottom row. Qubit 2 is terminated and qubit 3 is measured (followed by the required classical correction), leaving a four-qubit GHZ state.}} \label{novfigref} \end{figure} An exhaustive search of the local complementation orbit, including measurements, revealed that the four-qubit GHZ state could not be reached by local Clifford operations and measurements if a graph state were prepared over the same graph. (We provide the code for this exhaustive search~\cite{ourcode}, which was written specifically for this example but could in principle be adapted for any single network.)
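For reference, the elementary move explored by such a search is easy to state; the following Python sketch (our own illustration, and far simpler than the search code of Ref.~\cite{ourcode}) implements local complementation of a graph about a chosen vertex:
\begin{verbatim}
import itertools

def local_complement(edges, v):
    """Toggle every edge among the neighbours of v.

    edges: a set of frozensets {a, b} representing an undirected
    graph; returns the edge set of the locally complemented graph."""
    nbrs = {u for e in edges if v in e for u in e if u != v}
    toggle = {frozenset(pair)
              for pair in itertools.combinations(sorted(nbrs), 2)}
    return edges ^ toggle      # symmetric difference toggles edges

# Example: complementing a triangle about vertex 0 removes edge {1,2}.
triangle = {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})}
assert local_complement(triangle, 0) == {frozenset({0, 1}),
                                         frozenset({0, 2})}
\end{verbatim}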
While we do not make any formal claim to this effect, the existence of this example leads us to believe that our techniques may yield solutions for entanglement distribution in larger-scale networks and for a variety of entanglement distribution tasks, where it may be difficult or impossible to find a procedure to do so by manipulation of graph states. \subsection{The QLNC formalism} \label{main} Our main objective is to demonstrate how to simulate a classical linear network code to solve a multiple multicast problem on a network $G$, using a QLNC circuit of constant depth, to distribute Bell states and GHZ states between disjoint subsets of qubits located at the nodes of $G$, where $G$ describes the interaction graph of some hardware platform. To do so, it will be helpful to introduce a simulation technique (which we call the ``QLNC formalism'') to describe the evolution of a set of qubits in a QLNC circuit. QLNC circuits are stabiliser circuits, by construction. Indeed, as the only operations which they involve are preparations of $\ket{0}$ and $\ket{+}$ states, CNOT gates, $X$ and $Z$ observable measurements, and unitary $X$ and $Z$ gates conditioned on measurement outcomes, they do not even generate the Clifford group. For this reason, one might consider using the stabiliser formalism to simulate a QLNC circuit. (Indeed, the QLNC formalism described below uses operations similar to those of the stabiliser formalism, in that they involve transformations and row-reduction of matrices over $\mathbb Z_2$.) The QLNC formalism differs from the stabiliser formalism by expressing states implicitly as superpositions over standard basis states, essentially as a special case of the representation of Dehaene and de Moor~\cite[Theorem 5~(ii)]{dehaene}. This renders certain features of the correlations between states immediately obvious: \emph{e.g.},~not even a small amount of pre-processing (such as that required by the stabiliser formalism) is needed to identify the state of any single qubit which factorises from the rest. This alternative representation more transparently represents the qualities of the state which are important to simulate network coding: for this reason, it proves to be a somewhat more efficient method than the stabiliser formalism for this purpose. \subsubsection{Parity formula states} In the QLNC formalism, the global state is represented by an assignment of a Boolean formula $f_j(a)$, where $a = (a_1,a_2,\ldots,a_N)$, to each qubit $1 \le j \le n$ in the network $G$. We call each formula $f_j(a)$ a \emph{node formula} or a \emph{qubit formula}. Here, \begin{equation} f_j(a) \,=\, c_{j,0} + c_{j,1} a_1 + \cdots + c_{j,N} a_N\,, \end{equation} for some explicit coefficients $c_{j,0},\!\; c_{j,1},\!\; \ldots,\!\; c_{j,N} \in \{0,1\}$, and where addition is taken modulo $2$ (\emph{i.e.},~each function $f_j(a)$ computes the parity of $c_{j,0}$ and some given subset of its arguments). These arguments consist of some number of formal indeterminates $a_1, \ldots, a_N$, which we may interpret as variables which may take Boolean values but where those values are as yet undetermined. We require that, together with the vector $\mathbf e_0 = {[1\:\:0\:\:\cdots\:\:0]\trans}$, the vectors $\{{\mathbf c}_1, {\mathbf c}_2, \ldots, {\mathbf c}_n\} \subseteq \mathbb Z_2^{N+1}$ for ${{\mathbf c}_j \!\!\;= [c_{j,0}\:\:c_{j,1}\:\:\cdots\:\:c_{j,N}]\trans}$ span a set of $2^{N+1}$ vectors (\emph{i.e.},~all of $\mathbb Z_2^{N+1}$). In particular, each indeterminate $a_h$ must occur in \emph{some} qubit formula $f_j(a)$.
The state also has an associated phase formula $\phi(a)$ of the form \begin{equation} \phi(a) \,=\, p_{0} + p_{1} a_1 + \cdots + p_{N} a_N\,, \end{equation} for some coefficients $p_0, \ldots, p_N \in \mathbb{Z}_2$. Given such a phase formula $\phi$ and node-formulae $f_1, f_2, \ldots, f_n$ for a network $G$ of $n$ nodes, the global state of the system is given by \begin{equation} \label{eqn:parityFormulaExpan} \frac{1}{\sqrt{2^N}} \! \sum_{x \in \{0,1\}^N} \!\! (-1)^{\phi(x)}\, \ket{f_1(x)} \otimes \ket{f_2(x)} \otimes \cdots \otimes \ket{f_n(x)} \end{equation}~\\[-2ex] where $x = (x_1, x_2, \ldots, x_N)$. That is: the phase formula $\phi(a)$ and node-formulae $f_j(a)$ stand for an explicit superposition over the standard basis, ranging over all possible substitutions of Boolean strings $x \in \{0,1\}^N$ for the indeterminates $a_1, \ldots, a_N$, and where in particular $\phi(a)$ determines the relative phases. \begin{defn} A \emph{parity formula state} is an $n$-qubit state for $n \ge 1$ as expressed in \eqref{eqn:parityFormulaExpan}, where $\phi$ and $f_j$ for $1 \le j \le n$ are (not necessarily homogeneous) linear functions of $N \ge 0$ indeterminates, and where the functions $f_j(a)$ together with the constant function $e_0(a) = 1$ span a set of $2^{N+1}$ functions. \end{defn} It will be convenient to consider a representation of parity formula states in terms of an $(N+1) \times (n+1)$ matrix $C$ and a separate column vector $\mathbf p$, where $\mathbf p = {[p_0\:\:p_1\:\:\cdots\:\:p_N]\trans}$, and where the columns of $C$ (indexed from $0$ to $n$) consist of the vector $\mathbf e_0$ and the columns $\mathbf c_1, \ldots , \mathbf c_{n}$. \begin{defn} A parity function matrix $C$ for an $n$-qubit state is an $(N{+}1)\times(n{+}1)$ matrix for some $N \ge 0$, of the form $C = \bigl[\mathbf e_0\:\:\mathbf c_1\:\:\cdots\:\:\mathbf c_{n}\bigr]$ of rank $N+1$. A parity function tableau is a matrix $T = \bigl[\:\! C\,\big\vert\, \mathbf p \:\!\bigr]$ consisting of a parity function matrix $C$ and a phase vector $\mathbf p$. \end{defn} Two distinct parity function tableaus $T = \bigl[\:\! C\,\big\vert\, \mathbf p \:\!\bigr]$ and $T' = \bigl[\:\! C'\,\big\vert\, \mathbf p' \:\!\bigr]$ may represent the same state, if $T' = Q T$ for some $(N{+}1) \times (N{+}1)$ invertible matrix $Q$. Such a transformation $Q$ represents a change of variables, in the summation expression of the state as described in \eqref{eqn:parityFormulaExpan}, leaving the overall sum invariant. Note that such a matrix must satisfy $Q \mathbf e_0 = \mathbf e_0$: this corresponds to the fact that no change of variables can affect the value of constants. Conversely, any invertible $(N{+}1) \times (N{+}1)$ matrix $Q$ which preserves the vector $\mathbf e_0 \in \mathbb Z_2^{N+1}$\!, may be used to transform a parity function tableau $T$ to an equivalent tableau (representing the same state) by left-multiplication. In our application to QLNC circuits for a given qubit interaction network $G$, we may use an alternative representation, in which we write the qubit functions $f_j(a)$ next to the nodes corresponding to each qubit $j$ in the diagram of $G$. For instance, the state illustrated in Fig.~\ref{f4} is the state $\ket{+}_1 \ket{+}_3 \ket{\mathrm{GHZ}_4}_{2,4,5,6}$ (with a phase function of zero). This will prove practical when the objective is to demonstrate the effect of operations within a particular network $G$.
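For concreteness, the following Python sketch (our own illustration; the class and method names are ours, and for simplicity it assumes each qubit is prepared at most once) stores a parity function tableau as the matrix $C$ and phase vector $\mathbf p$ over $\mathbb Z_2$, anticipating the column-operation rules for $X$, $Z$ and CNOT gates given in the next subsection, and expands it by brute force into the superposition of \eqref{eqn:parityFormulaExpan}:
\begin{verbatim}
import numpy as np

class ParityTableau:
    """Parity function tableau [C|p] over Z_2: column 0 of C is e_0,
    column j >= 1 holds the coefficients of the qubit formula f_j."""

    def __init__(self, n_qubits):
        self.n, self.N = n_qubits, 0
        self.C = np.zeros((1, n_qubits + 1), dtype=np.uint8)
        self.C[0, 0] = 1                   # all qubits start in |0>
        self.p = np.zeros(1, dtype=np.uint8)

    def prepare_plus(self, k):             # qubit k in |+>: f_k = a_new
        self.C = np.vstack([self.C, np.zeros_like(self.C[0])])
        self.p = np.append(self.p, 0)
        self.C[:, k] = 0
        self.C[-1, k] = 1
        self.N += 1

    def x(self, k):                        # f_k <- 1 + f_k
        self.C[:, k] ^= self.C[:, 0]

    def z(self, k):                        # phi <- phi + f_k
        self.p = self.p ^ self.C[:, k]

    def cnot(self, k, l):                  # f_l <- f_l + f_k
        self.C[:, l] ^= self.C[:, k]

    def statevector(self):                 # brute-force expansion
        amp = np.zeros(2 ** self.n)
        for bits in range(2 ** self.N):
            x = np.array([1] + [(bits >> i) & 1 for i in range(self.N)])
            vals = (self.C[:, 1:].astype(int).T @ x) % 2
            index = int("".join(map(str, vals)), 2)
            amp[index] += (-1) ** (int(self.p.astype(int) @ x) % 2)
        return amp / np.linalg.norm(amp)

# Two qubits: prepare qubit 1 in |+> and apply CNOT from 1 to 2.
t = ParityTableau(2)
t.prepare_plus(1)
t.cnot(1, 2)
print(t.statevector())      # [0.707, 0, 0, 0.707], i.e. |Phi+>
\end{verbatim}
In the QLNC formalism proper one never performs this exponential expansion, of course; it is shown here only to connect the tableau to the state which it denotes.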
\subsubsection{QLNC operations on parity formula states} We now consider how each of the transformations which are admitted in QLNC circuits may be simulated through transformations of parity function tableaus. \paragraph{Simulating unitary gates.} The effect of the unitary transformations CNOT, $X$, and $Z$ on parity formula states is easy to describe as transformations of their representations, by simply reasoning about the representation of the state as a superposition over the standard basis: \begin{enumerate}[label=(\textit{\roman*})] \item The effect of an $X$ operation on qubit $k$ is to update $f_k(a) \gets 1 + f_k(a)$; \item The effect of a $Z$ operation on qubit $k$ is to update $\phi(a) \gets \phi(a) + f_k(a)$; \item The effect of a CNOT operation with control $k$ and target $\ell$, is to update $f_\ell(a) \gets f_\ell(a) + f_k(a)$. \end{enumerate} It is easy to verify that these transformations correspond to elementary column transformations of the parity function tableau ${\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$. Specifically --- indexing the columns of $C$ from $0$ --- these operations may be realised respectively by adding the zeroth column of $C$ to the $k\textsuperscript{th}$ column, adding the $k\textsuperscript{th}$ column of $C$ to $\mathbf p$, and adding the $k\textsuperscript{th}$ column of $C$ to the $\ell\textsuperscript{th}$ column. Note that these operations all preserve the rank of $C$. \paragraph{Simulating projective measurements.} The way in which we may represent measurements by transformations of a parity formula tableau is somewhat more complex, due to the possibility of state collapse. To simplify the description of an $X$ or $Z$ observable measurement on a qubit $k$, we first perform a change of variables --- specifically, by putting the block matrix $T = {\bigl[\:\! C \:\!\big\vert\:\! \mathbf p \:\!\bigr]}$ in a reduced row-echelon form in which either column $k$ is a pivot column, or column $k$ is co-linear with $\mathbf e_0$ (so that $f_k(a)$ is a constant). Suppose (without loss of generality) that ${\bigl[\:\! C \:\!\big\vert\:\! \mathbf p\:\!\bigr]}$ is already in such a reduced row echelon form, in which case either $f_k(a) = c_{k,0}$ is a constant function, or $f_k(a) = a_g$ for a single indeterminate indexed by $1 \le g \le N$; in the latter case, exactly one row of $C$ contains a $1$ in the $k\textsuperscript{th}$ column. Having put the parity function tableau into reduced row-echelon form of this kind, we may then describe an $X$ or $Z$ observable measurement on qubit $k$, as follows. \begin{enumerate}[label=(\textit{\roman*}), start=4] \item For an $X$ measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:XmeasProd} If $f_k(a) = a_g$ for an indeterminate which does not occur in any other qubit formula $f_j(a)$ --- \emph{i.e.},~if there is a single $1$ in the $g\textsuperscript{th}$ row of $C$ --- then the state is unchanged by the measurement, and the measurement outcome is $s = (-1)^{p_g}$. \item \label{item:XmeasEnt} Otherwise, let $a_{N\!\!\:{+}\!\!\:1}$ be a new indeterminate (represented in $C$ by adding a new row at the bottom), and choose a measurement outcome $s = \pm 1$ uniformly at random. If $f_k(a)$ is constant prior to the measurement, then let $\Delta$ be the $(N{+}2)$-dimensional vector with a single $1$ in the final row; otherwise, let $\Delta$ be the $(N{+}2)$-dimensional column-vector with exactly two $1$s, in rows $g$ and $N{+}1$ (counting from~$0$).
We then add $\Delta$ to the $k\textsuperscript{th}$ column of $C$, and (in the case that $s = -1$) to $\mathbf p$ as well. \end{enumerate} \end{enumerate} \begin{itemize} \item \textbf{Analysis of the state transformation.~~} In case~\ref{item:XmeasProd}, the state of qubit $k$ can be factored out of the sum, so that the state is either $\ket{+}$ (if $\phi$ lacks any $a_g$ term) or $\ket{-}$ (if $\phi$ contains an $a_g$ term), so that the measurement does not affect the state and the outcome is predetermined. Otherwise, in case~\ref{item:XmeasEnt}, qubit $k$ is maximally entangled with the rest of the system: the state has a Schmidt decomposition $\tfrac{1}{\sqrt 2} \ket{0}_k \ket{A_0} + \tfrac{1}{\sqrt 2} \ket{1}_k \ket{A_1}$, where $\ket{A_b}$ in each case is the state-vector on the qubits apart from $k$ in the case that $a_g := b$ (possibly including a phase factor that depends on $a_g$). It follows that the outcome of the $X$ observable measurement is uniformly random, and that the state $\ket{A_\pm}$ of all of the other qubits will be in tensor product with $k$ after measurement. A straightforward calculation shows that $\ket{A_+} = \tfrac{1}{\sqrt 2} \sum_{x_g} \ket{\smash{A_{x_g}}}$ and $\ket{A_-} = \tfrac{1}{\sqrt 2} \sum_{x_g} (-1)^{x_g} \ket{\smash{A_{x_g}}}$; these are the states described by simply omitting the $k\textsuperscript{th}$ column of $C$, and (in the case of $\ket{A_-}$) adding an extra $a_g$ term to the phase function. To represent the post-measurement state, it suffices to introduce a new indeterminate $a_{N+1}$ to represent the independent superposition on qubit $k$; for the post-measurement state $\ket{-}_k$, we also must add $a_{N+1}$ to the phase function. \item \textbf{On the rank of the resulting parity function matrix.~~} Note, above, that in case~\ref{item:XmeasProd} there is no change in the tableau, and thus no change in the rank of $C$. In case~\ref{item:XmeasEnt}, we must consider two sub-cases: one where $f_k(a) = c_{k,0}$ before the measurement, and one where $f_k(a) = a_{g}$ before the measurement. In either case, we add one row, in which the only non-zero entry is in column $k$. In the former case, we add one row and add a coefficient $1$ in column $k$ in that bottom row. This increases both the number of rows and the rank. In the latter case, we consider the operations performed at column $k$ in two steps: first setting the coefficient at row $g$ to zero, then setting the coefficient in the new row $N+1$ to one. Setting the coefficient at row $g$ to zero does not decrease the rank: the column $k$ can no longer be a pivot column, but prior to the first step, the $k\textsuperscript{th}$ column is a pivot column, and we may alternatively select any other column in which the $g\textsuperscript{th}$ row is set to $1$, as (by construction) these columns do not contain a pivot position for any other row. Thus, setting the $g\textsuperscript{th}$ coefficient of the $k\textsuperscript{th}$ column to zero does not decrease the rank; and again, adding a row in which only the $k\textsuperscript{th}$ column has a $1$ increases both the rank and the number of rows. Thus, this operation maintains the property of $C$ having a rank equal to the number of its rows. \end{itemize} \begin{enumerate}[label=(\textit{\roman*}), start=5] \item For a $Z$ measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:ZmeasProd} If $f_k(a) = c$ is a constant function, then the measurement leaves the state unchanged, and the measurement outcome is $(-1)^c$.
\item \label{item:ZmeasEnt} Otherwise, we select a measurement outcome $s = (-1)^b$ for a bit $b \in \{0,1\}$ chosen uniformly at random. Let $\Delta = b \mathbf e_0 + \mathbf c_k$. Add $\Delta$ to all columns of $T = {\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$ which contain a $1$ in the $g\textsuperscript{th}$ row (including the $k\textsuperscript{th}$ column itself), and remove the row $g$ entirely from the tableau. \end{enumerate} \end{enumerate} \begin{itemize} \item \textbf{Analysis of the state transformation.~~} In case~\ref{item:ZmeasProd}, it is obvious that qubit $k$ is in a fixed state: the outcome will be $+1$ if it is in the state $\ket{0}$, and $-1$ if it is in the state $\ket{1}$. Otherwise, in case~\ref{item:ZmeasEnt}, the state of the system can again be described as a superposition $\tfrac{1}{\sqrt 2} \ket{0}_k \ket{A_0} + \tfrac{1}{\sqrt 2} \ket{1}_k \ket{A_1}$, albeit where it is possible in principle that $\ket{A_0} = \pm \ket{A_1}$. We may simulate the assignment of the $k\textsuperscript{th}$ qubit to $b$ by quotienting out all of the functions $f_j(a)$ and the phase function $\phi(a)$ by the relation $a_g + b = 0$. We may do this in effect by adding the column vector $\Delta$ defined above to all columns with a non-zero coefficient in the row $g$, thereby obtaining a tableau in which the $g\textsuperscript{th}$ row is empty. This corresponds to a state in which the variable $a_g$ no longer plays any role; together with the updated normalisation after measurement, we may represent this by removing row $g$. \item \textbf{On the rank of the resulting parity function matrix.~~} Note, above, that in case~\ref{item:ZmeasProd} there is no change in the tableau, and thus no change in the rank of $C$. In case~\ref{item:ZmeasEnt}, we may without loss of generality suppose that the $k\textsuperscript{th}$ column is the last column to which we add $\Delta$. In each case, the vector is added to a non-pivot column, in which case this cannot decrease the rank; nor will it increase the rank, as it only sets coefficients to $0$ in rows which already have pivot positions. These column additions preserve the property of being a reduced row-echelon form. The final addition of $\Delta$ does decrease the rank by $1$, as it turns the $g\textsuperscript{th}$ row from a non-zero row-vector (in a reduced echelon form) to a zero row. Thus the rank of the parity function matrix $C$ decreases by $1$; as removing row $g$ from the tableau reduces the number of rows by $1$, this operation maintains the property of $C$ having a rank equal to the number of its rows. \end{itemize} From the above, we see that we may represent QLNC operations by simple transformations of a parity function tableau $T = {\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$, which in particular preserve the invariant that the rank of the parity function matrix $C$ is equal to the number of its rows.
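As an illustration of rule (\textit{v})(b), the following Python sketch (our own code) performs the tableau update for a $Z$ measurement of a qubit $k$, assuming the tableau has already been brought to the reduced form in which $f_k(a) = a_g$; the trivial case (\textit{v})(a) is omitted:
\begin{verbatim}
import numpy as np

def z_measure(C, p, k, rng):
    """Z-measure qubit k on a tableau [C|p] over GF(2), where
    column k is assumed to be the unit vector for some row g."""
    g = int(np.flatnonzero(C[:, k])[0])   # the row with f_k(a) = a_g
    b = int(rng.integers(2))              # uniformly random outcome
    delta = C[:, k].copy()                # Delta = b*e_0 + c_k ...
    delta[0] ^= b                         # ... over GF(2)
    hit = np.flatnonzero(C[g, :])         # columns with a 1 in row g
    C, p = C.copy(), p.copy()
    for j in hit:
        C[:, j] ^= delta
    if p[g]:                              # the phase column, likewise
        p ^= delta
    return b, np.delete(C, g, axis=0), np.delete(p, g)

# Measuring one qubit of a Bell pair collapses both to the outcome:
C = np.array([[1, 0, 0], [0, 1, 1]], dtype=np.uint8)  # f_1 = f_2 = a_1
p = np.zeros(2, dtype=np.uint8)
b, C2, p2 = z_measure(C, p, 1, np.random.default_rng(7))
assert (C2 == np.array([[1, b, b]], dtype=np.uint8)).all()
\end{verbatim}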
\paragraph{Simulating destructive measurements and qubit preparations.} One might reasonably wish to regard some measurements as being \emph{destructive}, \emph{i.e.},~as not leaving any post-measurement state. We may simulate this by simply removing from $C$ the column corresponding to the destructively measured qubit, and removing from the entire tableau any row in which (after the column removal) the matrix $C$ is entirely zero. Conversely, one may simulate the preparation of a fresh qubit in a standard basis state $\ket{b}$ for $b \in \{0,1\}$, by inserting a new column into $C$ with the value $b \mathbf e_0$. To instead simulate the introduction of a fresh qubit in the state $\tfrac{1}{\sqrt 2} \sum_{x'} (-1)^{bx'} \ket{x'}$ for $b \in \{0,1\}$, one may insert a new row into the tableau (at the bottom of both $C$ and $\mathbf p$) which is entirely zero, then set the new coefficient of $\mathbf p$ in this row to $b$, and then insert a new column into $C$ which has only a single $1$, in the final row. \vspace*{-1ex} \paragraph{Terminology.} For the sake of definiteness, ``the QLNC formalism'' will refer below to the techniques described above for transforming parity function tableaus (or some equivalent representation), as a means to simulate stabiliser circuits composed of this limited set of operations. \subsubsection{Depicting and simulating QLNC circuits} \label{sim} Having defined the QLNC formalism, we now demonstrate how it may be used to simulate QLNC circuits. In this context, we will prefer to represent the parity function states diagrammatically rather than as a matrix --- and to represent it together with a visual representation of the transformations to be performed. \begin{enumerate} \item Each vertex is a qubit $j$, with an associated formula $f_j(a)$, for some symbols $a_1, \ldots, a_N$. The initial formula for each qubit is generally very simple: each qubit prepared in the $\ket{+}$ state is assigned the formula $f_j(a) = a_j$ for a unique formal symbol $a_j$, and each qubit initialised in the $\ket{0}$ state is assigned the formula $f_j(a) = 0$. \item Pauli $X$ gates on a qubit $k$, which are classically conditioned by the outcome of a $Z$-observable measurement of a different qubit $j$, are represented as doubled lines with an orientation from $j$ to $k$. Coherently controlled CNOT gates are drawn along edges of the network $G$. \item One or more qubits may also be simultaneously ``terminated'', in which case they are measured with the $X$ observable. The outcome may then be used to control Pauli $Z$ operations to cancel out the relative phases which arise as a result of any $-1$ measurement outcomes. \item There is a time-ordering in which the operations represented by the edges are performed. In simple QLNC circuits, this is represented by a single integer at each edge, and an integer inside each node to be terminated. (Two edges which meet at a common vertex, and which are not both classically controlled $X$ gates, must be performed at different times, and thus must be assigned different numbers. Also, no edge can have the same number as the termination of a node to which it is incident. Otherwise, there are no constraints.) More generally, it will be reasonable to consider QLNC circuits in which edges are used some constant number of times, \emph{e.g.}~up to two times; we would then label edges by a list (or set) of those times at which they are used, and the times of operations involving a common vertex must be disjoint (again, unless those operations are all classically controlled $X$ gates). \end{enumerate} \paragraph{Remarks on termination.} It may not be immediately obvious that the claim made about termination --- that any relative phases induced by obtaining measurement outcomes of $-1$ from $X$ observable measurements can be ``undone'', leaving a state which is a uniform superposition over some set of standard basis states (\emph{i.e.},~with no relative phases at all) --- is in fact true. In the case of a QLNC circuit which (successfully) simulates a classical linear network code, this may be more plausible to the reader.
In fact, we make a stronger claim: \begin{lemma} For any state $\ket{\psi}$ given by a parity function tableau $T = {\bigl[\:\! C \:\!\big\vert\!\: \mathbf p \!\:\bigr]}$ on $n$ qubits, it is possible (in time dominated by Gaussian elimination on $T$) to find a subset $S \subset \{1,\ldots,n\}$ of qubits, such that $\ket{\psi'} = Z^{\otimes S} \ket{\psi}$ has a parity function tableau $T' = {\bigl[\:\! C \:\!\big\vert\!\: \mathbf 0 \!\:\bigr]}$. \end{lemma} \noindent We prove this result here, to demonstrate that ``termination'' is a well-defined operation in principle. \begin{proof} Let $Q$ be an invertible linear transformation for which $Q T = \bigl[\mathbf e_0 \:\: \tilde{\mathbf c}_1 \:\: \tilde{\mathbf c}_2 \:\: \cdots \:\: \tilde{\mathbf c}_n \:\: \tilde{\mathbf p}\bigr]$ is in reduced row-echelon form, and let $\tilde f_j$ be the qubit function corresponding to column $\tilde{\mathbf c}_j$. Then, for every formal indeterminate $a_g$, there is a qubit $k_g \in \{1,\ldots,n\}$ for which $\tilde f_{k_g} = a_g$. Let $J$ be the set of rows $j \ge 1$ for which $\tilde p_j = 1$ (the coefficient $\tilde p_0$ contributes only an irrelevant global phase), and let $S = \{ k_j \mid j \in J \}$. Then the effect of $Z^{\otimes S}$ is to map $\tilde{\mathbf p} \mapsto \mathbf 0$. This may be represented by a transformation $R$ for which $QTR = \bigl[\mathbf e_0 \:\: \tilde{\mathbf c}_1 \:\: \tilde{\mathbf c}_2 \:\: \cdots \:\: \tilde{\mathbf c}_n \:\: {\mathbf 0}\bigr]$, which is a parity function tableau for a state without relative phases over the standard basis. (Indeed, it follows that the final column of $TR$ is also $\mathbf 0$, so that simulating $Z^{\otimes S}$ on the original tableau removes all relative phases without committing to the change of variables described by $Q$.) \end{proof} As a corollary, it follows that for a parity formula state, we can induce whatever relative phase we like, of the form $(-1)^{\phi(x)}$ for any linear function $\phi$ of the indeterminates. We may use this to justify the notion of ``terminating'' one qubit independently of any others, and ``undoing'' any change to the phase function which occurs as a result of obtaining a $-1$ outcome. The specific choice of qubits on which to perform the $Z$ operations may not be unique, but it suffices for our purposes that such a set can always be found efficiently. The way one might use the QLNC formalism to simulate a particular QLNC circuit is illustrated in Fig.~\ref{f4}. This example distributes two Bell states across a rectangular grid, by simulating the classical Butterfly network protocol with some ``out-of-order'' evaluations. To compensate for the out-of-order evaluation, classically controlled $X$ operations are required upon the measurement of one of the qubits: this is in effect a coding operation using a link outside of $G$, relying on our architectural assumption that classical information can be communicated more freely than quantum information.
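The construction in the proof of the Lemma is straightforward to mechanise; the following Python sketch (our own code) row-reduces the tableau $[\:\!C\,\vert\,\mathbf p\:\!]$ over GF(2) and reads off a suitable set $S$ of qubits from the pivot columns of the rows whose reduced phase bit is $1$:
\begin{verbatim}
import numpy as np

def z_correction_set(C, p):
    """Return qubit indices S such that Z on S clears the phases.
    C: (N+1) x (n+1) over GF(2) with column 0 = e_0; p: length N+1.
    Qubit j corresponds to column j of C, for 1 <= j <= n."""
    T = np.concatenate([C, p.reshape(-1, 1)], axis=1) % 2
    rows, cols = T.shape
    pivots, r = {}, 0
    for c in range(cols - 1):             # bring [C|p] into RREF
        pr = next((i for i in range(r, rows) if T[i, c]), None)
        if pr is None:
            continue
        T[[r, pr]] = T[[pr, r]]
        for i in range(rows):
            if i != r and T[i, c]:
                T[i] ^= T[r]
        pivots[r] = c
        r += 1
    # Rows with pivot column >= 1 correspond to indeterminates a_g;
    # their pivot column is the qubit k_g with reduced formula a_g.
    return [pivots[i] for i in range(r) if T[i, -1] and pivots[i] >= 1]

# |Phi-> = (|00> - |11>)/sqrt(2) has phase formula a_1, so a Z on
# the pivot qubit (here, qubit 1) removes the relative phase.
C = np.array([[1, 0, 0], [0, 1, 1]], dtype=np.uint8)
p = np.array([0, 1], dtype=np.uint8)
assert z_correction_set(C, p) == [1]
\end{verbatim}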
\begin{figure}[!t] \centering \includegraphics[width=0.88\linewidth]{fig9.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Example of the out-of-order Butterfly: (a) the order of edges, slightly different from, but with an equivalent quantum circuit to, that given in Fig.~\ref{f1}; (b) the initial labelling of the qubits; (c) the labels after edges ``1''; (d) the labels after edges ``2''; (e) the labels after edges ``3''; (f) the labels after the classical control (edges ``4'') and the terminations (the fifth layer of operations, denoted by the nodes labelled ``5'').}} \label{f4} \end{figure} \subsection{Using the QLNC formalism to design entanglement distribution circuits} \label{example} As already noted, the purpose of developing the QLNC formalism is to enable the use of classical linear network codes as a basis to design entanglement distribution quantum circuits. We begin by noting that there are situations in which QLNC circuits can distribute entanglement which do not correspond to linear network codes. \subsubsection{Shallow QLNC circuits for entanglement distribution} The correspondence between classical and quantum linear network codes suggests a number of ways in which QLNC circuits can be used to distribute entanglement. In this section we detail one such application, prompted by our desire to use classical linear network coding results and intuitions to distribute entanglement \textit{efficiently} (i.e., with as few layers of quantum operations as possible). We consider the following scenario. Let there be a classical binary linear network code over a network, connecting $k$ transmitter-receiver pairs; and let that network consist of three types of nodes: \begin{itemize} \item Transmitter nodes --- for which each incident edge is outbound (i.e., a directed edge with direction away from the transmitter), and the transmitter broadcasts its bitstream on all incident edges. \item Relay nodes --- that have an arbitrary number of input and output edges, and whose operation is to broadcast the modulo-2 sum of all incoming bitstreams on all of the output edges. \item Receiver nodes --- for which each incident edge is inbound, and whose operation is to perform the modulo-2 sum of all of the incoming bitstreams, which yields the desired bitstream (i.e., that transmitted by the corresponding paired transmitter). \end{itemize} With the three types of nodes (graph vertices) defined in this way, we can prove an important result about the required depth of layers of quantum operations, when the qubit interaction graph is again $G=(V,E)$. \begin{thm} \label{thm:constdepth} If a multiple multicast (including multiple unicast as a special case) classical binary linear network code exists over a network $G$, from a set of transmitter nodes $T= \{t_1 , \cdots , t_N\}$ with in-degree $0$ to a corresponding set of receiver nodes $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ with out-degree $0$, then there is a QLNC circuit whose \textup{CNOT} operations are located along the edges of $G$ and which distributes $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states between corresponding transmitter/receiver node sets. Moreover, this circuit has quantum depth $\le {2 (\chi{-}1) (\delta{+}2) + 1}$, where $\delta$ is the largest in/out degree of any vertex in $G$, and $\chi$ is the chromatic number of $G$. \end{thm} \begin{remark} It is in general NP-hard to compute the vertex-chromatic number $\chi$ of a network. However, in many realistic cases it will be easy to compute $\chi$.
For instance, bipartite networks (such as tilings by squares or hexagons, or connected subgraphs of these) have $\chi = 2$ by definition. In any more complex network $G$, we may alternatively substitute $\chi$ with the number of colours of any proper vertex-colouring that one may find. For instance, any graph has a $\deg(G)+1$ vertex-colouring which can be found in polynomial time~\cite{greedy}. Furthermore, we have $\chi \le 4$ in planar architectures (\emph{i.e.},~for which $G$ is a planar graph) by the Four Colour Theorem~\cite{fourcolour}, and an explicit four-colouring can also be found in polynomial time~\cite{fourcolourp}. \end{remark} \begin{remark} The role of $\delta$ in Theorem~\ref{thm:constdepth} is in relation to the number of colours of an edge-colouring $\gamma$ of $G$, such that no two edges with the same colour label leave a common vertex or enter a common vertex. (We call such a colouring a ``proper directed-edge colouring''.) If we transform $G$ into a graph $\tilde G$, in which each vertex $q$ is replaced with a vertex $q_i$ (inheriting only the in-bound edges of $q$) and a vertex $q_o$ (inheriting only the out-bound edges of $q$), then $\delta$ is the maximum degree of $\tilde G$, and $\gamma$ corresponds to a proper edge-colouring of $\tilde G$. By Vizing's theorem \cite{Vizing1964}, the edge-chromatic number of $\tilde G$ is at most $\delta + 1$, and an edge-colouring with $\delta + 1$ colours can be found efficiently. (An edge-colouring of $\tilde G$ must have at least $\delta$ colours; and it may be easy to find an edge-colouring of this kind, \emph{e.g.},~if $G$ arises from a lattice in the plane. If one may find such a colouring, the bound above improves to ${2 (\chi{-}1) (\delta{+}1) + 1}$. For the square lattice, with $\chi = 2$ and with $\delta=3$ if no vertex has four in-edges, this yields the bound of $9$ described on page~\pageref{discn:introSquareLatticeBound}.) \end{remark} \begin{proof} Let $c: (V \cup E) \to \mathbb N$ be a colouring of the nodes and edges, such that $c$ provides a proper colouring $1, 2, \ldots, A$ to the nodes of $G$, and also a proper directed-edge colouring $1, 2, \ldots, B \le \delta+1$ to the edges of $G$. Consider the following procedure: \begin{enumerate} \item Initialise all of the qubits: each qubit $q$ is initialised in the state $\ket{0}$ if it has no outgoing edges, or if it has some neighbour $p$ by an in-bound edge (that is, there is a directed edge from $p$ to $q$) for which $c(p) < c(q)$; and is initialised in the state $\ket{+}$ otherwise. (In the QLNC formalism, we associate a formal indeterminate $a_q$ with each qubit $q$ initialised in the $\ket{+}$ state.) \item For each non-receiver node $q$ with $c(q) = 1$ in parallel, perform the following procedure: \begin{itemize} \item For each $1 \le j \le B$, perform a CNOT operation on any edge $e$ with $c(e) = j$ leaving $q$. (In the QLNC formalism, this adds $a_q$ to the formula $f_v(a)$ for the node $v$ at the end of $e$.) \end{itemize} \item For each $2 \le h \le A \!-\! 1$, perform the following operations in parallel on non-receiver nodes $q$ with $c(q) = h$: \begin{enumerate}[label=\alph*.] \item For each $1 \le j \le B$, perform a CNOT operation on any edge $e$ with $c(e) = j$ leaving $q$. \item If $f_q(a) \ne a_q$ (\emph{i.e.},~$q$ was a target of some CNOT or Pauli $X$ operation before this round): \begin{enumerate}[label=(\textit{\roman*})] \item Terminate the qubit $q$, by performing an $X$ observable measurement.
\item If the outcome of the preceding measurement is $-1$, perform $Z$ operations on an appropriate set of qubits, and a $Z$ operation on $q$ to transform the post-measurement state of $q$ from $\ket{-}$ to $\ket{+}$. (If any qubit $v$ has been selected to be subject to a $Z$ operation by multiple qubits $q$ with $c(q) = h$, we perform $Z$ on $v$ if and only if the number of such qubits $q$ is odd.) \item If $q$ has any neighbours $p$ by in-bound edges, such that $c(p) > c(q)$, then for each $1 \le j \le B$, perform any CNOT operations on edges $e$ with $c(e) = j$, which are outgoing from $q$. (In the QLNC formalism, this adds $a_q$ to the node-formula $f_v(a)$ for the node $v$ at the end of $e$.) \end{enumerate} \end{enumerate} \item For each non-receiver node $q$ with $c(q) = A$ in parallel, perform the following procedure: \begin{itemize} \item For each $1 \le j \le B$, perform a CNOT operation on any edge $e$ with $c(e) = j$ leaving $q$. \end{itemize} \item For each relay qubit $q$: if $c(q) < A$, perform a $Z$-observable measurement on $q$, obtaining an outcome $y_q \in \{0,1\}$. Otherwise, terminate $q$ (\emph{i.e.}, measure $q$ with the $X$ observable and perform appropriate $Z$ corrections). \item For receiver nodes $q$ and for the relay qubits $q$ for which $c(q) < A$, recursively define the \emph{delayed signal correction} $w_q$ as the sum mod $2$ of $(y_r \oplus w_r)$, for $r$ ranging over all neighbours of $q$ via incoming links for which $c(r) < A$ (or zero, if there are no such nodes $r$).% \footnote{% This recursive definition terminates in the relay nodes with no such incoming links. Note that the outcomes $y_r$ on which $w_q$ depends may be determined in advance for any receiver node $q$; from this the value of $w_q$ may be computed in logarithmic depth from that formula using standard parity-computation techniques. } Defining $w_q$ in this way, perform a Pauli-$X$ gate on each receiver node $q$ for which $w_q = 1$. (If such a qubit is subject to a $Z$ correction as a part of Step~5, we perform a $Y$ operation instead of the $X$ and $Z$ operations.) \end{enumerate} We compute the quantum depth of this procedure as follows. Step~1 has depth $1$. Both Steps~2 and~4 have depth at most $B$. Step~3 is a loop with $A {\!\:-\!\:} 2$ iterations, in which part~(a) has depth at most $B$, and part~(b) has depth at most $B {\!\:+\!\:} 2$. Steps~5 and~6 together have depth at most $2$. Together, the depth is then $1 + B + (A\!\:{-}\!\:2)(2B\!\:{+}\!\:2) + B + 2 = 2(A\!\:{-}\!\:1)(B\!\:{+}\!\:1) + 1 \le 2(\chi\!\:{-}\!\:1)(\delta\!\:{+}\!\:2) + 1$. Fig.~\ref{appf1} shows a sketch of why this procedure works. In effect, we wish for ``information'' (more precisely: correlation of values of a qubit in the standard basis, when taken in superposition) to be transmitted through each relay node, from each of its sources (summing the signals from these sources modulo 2) to the qubits on each of its outward links. Some of this information may accumulate at a given relay node $q$ before round $c(q)$, in which case it is explicitly passed on through a CNOT. The rest accumulates at $q$ after round $c(q)$, and also after the node $q$ has communicated a formal indeterminate $a_q$ on each of its outgoing links.
If we may collapse the state in such a way as to assign to $a_q$ the modulo~2 sum of the remaining signals from its incoming links (accumulated after round $c(q)$), this collapse will complete the transmission of the information from the inbound links of $q$ through to the outbound links. \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{newfig.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Example of a relay node, whose operation in the linear network code is to forward $a \oplus b \oplus c \oplus d$ to all three outgoing edges. If the vertex colouring is such that the turn of this vertex is after incoming symbols $a$ and $b$ have arrived, but before $c$ and $d$ have, then the procedure continues as follows: in (a) $a \oplus b$ (i.e., the current label of the vertex) is forwarded to all outgoing edges; (b) the qubit is terminated and set to zero; (c) the qubit is set to the $\ket{+}$ state and given the new label $\gamma$, which is then forwarded to all of the outgoing edges, so that over the two rounds of forwarding $a \oplus b \oplus \gamma$ has been forwarded. The qubit then waits for the remainder of the process to complete, after which all edges will have been performed, so its label will be $c \oplus d \oplus \gamma$; this can then be measured and corrected such that $c\oplus d = \gamma$, which means that $a \oplus b \oplus c \oplus d$ has been forwarded as required.}} \label{appf1} \end{figure} More formally, consider the node formulae which result from this procedure. \begin{itemize} \item For each relay node $q$, let $Z_p(a)$ denote the boolean formula which is transmitted to it on an incoming link from a node $p$ for which $c(p) < c(q)$. We will then have $Z_p(a) = a_p \oplus E_p(a)$, where $E_p(a)$ is the modulo~2 sum of the corresponding functions $Z_r(a)$ for nodes with edges towards $p$ such that $c(r) < c(p)$. \item The formula which is stored at qubit $p$ just prior to its measurement in Step~5 is the formula $Y_p(a) = a_p \oplus L_p(a)$, where $L_p(a)$ is the modulo~2 sum of $Y_r(a)$ for nodes $r$ with links inwards to $p$ such that $c(r) > c(p)$. (An outcome $y_p$ represents the value of $Y_p(a)$ which is produced by a standard basis measurement of $p$.) \end{itemize} If in Step~5 we measure qubit $p$ and collapse it to the state $\ket{0}$, we in effect condition on the situation that $a_p = L_p(a)$. (In the event that we obtain $\ket{1}$, we perform corrections which allow us to simulate having obtained $\ket{0}$ instead.) This produces an acyclic graph of formula substitutions, from the node-formulae of the transmitters to the node-formulae of the receivers. By induction on the substitution depth (\emph{i.e.},~the distance of relay nodes from any receiver node), we may show that performing the necessary substitutions in the formula for $Z_p(a)$ yields the information which, in the classical linear protocol, $p$ would transmit on its outgoing links. It follows that the parity function computed at each receiver node is the function $a_t$ (for its corresponding transmitter node $t$) that is computed in the classical linear network code. \end{proof} In the protocol above, each relay is measured twice (\emph{i.e.},~for the termination, and then at the end to resolve the extra formal label introduced). For this reason, it is necessary to strictly separate transmitters, receivers and relays.
However, this setting is not too restrictive, and corresponds to examples of classical linear network codes such as we see in Figs.~\ref{f01} and~\ref{f01a}. Note that while Steps 2, 3a, 3b(\textit{iii}), and 4 of our protocol iterate through all edge colours $1 \leq j \leq B$, the only edge-colours that contribute to the depth are those associated to edges which leave some \emph{vertex} of the colour $1 \le h \le A$ being considered in the given step. Thus the bound above will often be loose, and in fact it may be possible to find better realisations using a suitably tuned directed edge-colouring of $G$. However, our result holds regardless of which edge-colouring one uses, so long as it uses at most $\delta+1$ edge-colours, which again may easily be found~\cite{Vizing1964}. \subsubsection{Example of QLNC solution involving entanglement swapping, for which no classical linear network coding solution exists} \label{eswap} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig81.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{An example of a composite network with a QLNC circuit but no (classical) linear network code. Note that this composite network corresponds to three connected copies of the network in Fig.~\ref{f025}; here we draw the part of the graph in Fig.~\ref{f025}(b) as the two bottom-most nodes of each component.}} \label{appf2} \end{figure} When using linear network codes to design QLNCs, in which we distribute entangled pairs, we are free to assign which half of each desired Bell state corresponds to the transmitter in the linear network code and which half represents the receiver. However, while we have the freedom to decide which is the transmitter and which is the receiver, it may be the case that deciding the transmitter / receiver assignment for one Bell state fixes that for the rest. For example, if we consider the corresponding QLNC to the linear network code shown in Fig.~\ref{f025}, we can see that, even if we allow the links to be bi-directional, we must have one column of transmitters and one column of receivers. That is, we cannot find a linear network code for the case where some of the left-hand nodes are receivers and some are transmitters. This principle allows us to construct composite networks, in which some data must flow through multiple networks such that there is no linear network code. This is the case shown in Fig.~\ref{appf2}(a), composed of three copies of the network shown in Fig.~\ref{f025} with some extra links, in which each pair of letters is to be connected. Even if we are free to assign which of each pair of letters is the transmitter and the receiver, and also the direction of each link, we still cannot find a linear network code. This can be seen by considering the non-circular coloured nodes, which correspond to data which must flow through two of the component networks. Using the linear network code of Fig.~\ref{f025}, we can connect the pairs ``$cc$'' and ``$dd$'', as well as propagating the left-hand occurrences of ``$a$'' and ``$b$'' forward from the left-hand column of vertices to the second from left. The left-hand occurrence of $a$ can now be forwarded via the intermediate blue square node, and the same left-to-right linear network code can be performed on the middle of the three component graphs, followed by forwarding ``$e$'' through the intermediate red hexagonal node to the right-hand of the three component graphs.
Once again, we perform the same left-to-right linear network code on the right-hand of the three graphs, which means that we have now connected all of the pairs of letters, with the exception of $b$. In the case of $b$, each of the two $b$s has been forwarded as if it were at a transmitter, and they are connected by a common receiver --- the top-most node, which is a green diamond with a thick black ring. Obviously, this is not a linear network code, as we have not connected pairs of letters as if one were a transmitter and the other a receiver (i.e., by a continuous data-flow from transmitter to receiver). However, we \textit{can} find a QLNC circuit, as routing each letter towards a common receiver (the black-ringed green diamond node) can yield the desired Bell state by entanglement swapping in the black-ringed node, as shown in Fig.~\ref{appf2}(b). A similar argument can be made for there not being a linear network code even if the component linear network codes are run right-to-left, in which case the black-ringed node would look like a single transmitter forwarding data to two receivers (the nodes marked $b$). This situation can also be implemented as a QLNC circuit (\emph{i.e.}, if the black-ringed node is terminated at the end), and it too does not correspond to any linear network code with the transmitter-receiver pairs as designated by the symbols in Fig.~\ref{appf2}. \subsubsection{Classical-quantum linear network codes} In Section~\ref{outoforder} we saw how the network codes in QLNC circuits can be performed ``out of order'' in some cases, and in Section~\ref{eswap} we gave an example of the use of entanglement swapping to implement a linear network code as if two transmitters are routing towards a common receiver. These are two instances of a general principle: \emph{together, the} CNOT \emph{operations and classical control must form a linear network code}. That is, we consider the following situation: \begin{enumerate} \item We have $n$ qubits connected in a network $G = (V,E)$, in which each edge means that a single CNOT (in either direction) is allowed. \item We allow classically-controlled $X$ gates (conditioned on measurement outcomes). It is convenient to represent this possibility of a classical control conditioned on the measurement outcome of a different vertex as a completely connected graph $K_n$ on the same set of vertices (where $n = \lvert V(G) \rvert$). That is, each edge represents the possibility of performing a classically controlled Pauli-$X$ gate. \end{enumerate} These coherently- and classically-controlled operations represent two of four primitives that we allow, the others being: \begin{enumerate} \setcounter{enumi}{2} \item Initialisation of qubits in the $\ket{+}$ or $\ket{0}$ state. \item Termination of the qubits according to the process described in Section~\ref{sim}. \end{enumerate} \begin{thm} \label{mainthm1} Suppose that a multiple multicast (including multiple unicast as a special case) classical binary linear network code exists over some subgraph of the graph $G'= G \cup K_n$, sending a unit-rate bitstream, where each edge of the graph is a unit-rate bi-directional edge (but not allowing fractional routing).
Suppose that this code has a set of transmitting source vertices $T' = \{t_1, \ldots, t_{N'}\}$ for some $N' > 0$, where the first $N < N'$ of these have corresponding receiver sets $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$ (with the remaining transmitters $t_{N{+}1}, \ldots, t_{N'}$ having signals which are not necessarily actually received by any ``receiver'' nodes). Suppose further that \textup{(a)}~the information transmitted on the edges of $K_n$ from any single node, and \textup{(b)}~the information transmitted by the nodes $t_1$ through $t_N$, are linearly independent of each other. Then by simulating this linear network code by a QLNC circuit, with \textup{CNOT} operations restricted to the same graph $G$ and classically-controlled Pauli operations oriented along the other edges, the resulting protocol generates a product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states, where each $\ket{\Phi^+}$ or $\ket{\mathrm{GHZ}}$ is over each of the sets $\{ t_j, r_{j,1} , \cdots , r_{j,n_j} \}$ for all $1 \le j \le N$. \end{thm} \begin{proof}[Proof (sketch)] The core of the proof is showing that the QLNC formalism described correctly keeps track of the quantum state, which follows from the formalism description in Section~\ref{main}. We provide an explicit proof of the Theorem in Section~\ref{app1}, which explains why general QLNC circuits of this form achieve the desired result, and also serves to give a detailed walk-through illustrating precisely how the QLNC formalism (including terminations) correctly simulates QLNC circuits in practice. \end{proof} An important special case occurs when the linear network code only requires edges in the graph $G$. \begin{corollary} \label{corr} Suppose that a multiple multicast (including multiple unicast as a special case) classical binary linear network code exists over some subgraph of the graph $G$, sending a unit-rate bitstream, where each edge of the graph is a unit-rate bi-directional edge (but not allowing fractional routing). Suppose that this code has a set of transmitting source vertices $T' = \{t_1, \ldots, t_{N'}\}$ for some $N' > 0$, where the first $N < N'$ of these have corresponding receiver sets $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$ (with the remaining transmitters $t_{N{+}1}, \ldots, t_{N'}$ having signals which are not necessarily actually received by any ``receiver'' nodes). Then by simulating this linear network code by a QLNC circuit, with \textup{CNOT} operations restricted to the same graph $G$, the resulting protocol generates a product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states, where each $\ket{\Phi^+}$ or $\ket{\mathrm{GHZ}}$ is over each of the sets $\{ t_j, r_{j,1} , \cdots , r_{j,n_j} \}$ for all $1 \le j \le N$. Moreover, this can be achieved using only three of the primitives: initialisation, \textup{CNOT} and termination. \end{corollary} \begin{proof} This corollary simply selects the QLNC solutions which have no classical control (apart from in the terminations). \end{proof} \section{Generalisation to qudits of prime dimension} \label{qubitsection} Classical network coding is not restricted to information streams consisting of individual bits. Indeed, it is common in the literature to consider signals consisting of elements from finite fields in general, including the fields $\mathbb Z_d$ for $d$ a prime~\cite{LNCnew1,LNCnew2}. As most proposed quantum hardware platforms involve operations on qubits, our work has focused mainly on linear network codes over $\mathbb Z_2$.
However, our techniques work equally well over qudits of any prime dimension $d$, using generalisations of Clifford group operations. \subsection{Generalising the QLNC formalism} On a Hilbert space of dimension $d$, label the standard basis states by $\ket{0}$, $\ket{1}$, \ldots, $\ket{d{-}1}$. Let $X_d$ and $Z_d$ be unitary operators satisfying $X_d \ket{a} = \ket{a{+}1 ~(\mathrm{mod}~d)}$ and $Z_d \ket{a} = \omega^a \ket{a}$ for $a \in \{0,1,\ldots,d{-}1\}$, where $\omega = \exp(2\pi i/d)$. These operators are the basis of a definition of the generalised Pauli group on qudits of dimension $d$~\cite{GottesmanQudits,Appleby2005}. The set of unitaries which preserves this extended Pauli group under conjugation corresponds to the Clifford group, and the effects of those operators on eigenstates of the $X_d$ and $Z_d$ operators can be simulated using an extension of the stabiliser formalism~\cite{GottesmanQudits,deBeaudrap2013}. For the special case of $d \ge 2$ a prime, we may define an extension of the QLNC formalism to qudits of dimension $d$ by identifying the operations of the generalised Clifford group which correspond to the operations of the qubit QLNC formalism: \begin{itemize} \item Preparation of the states $\ket{0}$ and $\ket{+_d} = \tfrac{1}{\sqrt d}\bigl(\ket{0} + \ket{1} + \cdots + \ket{d{-}1} \bigr)$; \item Performing (possibly classically controlled) $X_d$ and $Z_d$ operations on any qudit; \item Measuring qudits in the eigenbasis of the $Z_d$ operator or the $X_d$ operator; \item Addition operations $\mathrm{Add}_d$ on pairs of qudits which are connected in the network $G$, whose effect on standard basis states is $\mathrm{Add}_d \ket{x} \ket{y} = \ket{x}\ket{y + x ~(\mathrm{mod}~ d)}$. \end{itemize} We call circuits composed of these operations, for a fixed $d \ge 2$, ``qudit QLNC circuits''. These operations allow one to prepare states which are the analogue of parity function states, which one might call ``linear function states'', which have the form \begin{equation} \label{eqn:parityFormulaExpana} \frac{1}{\sqrt{d^N}} \! \sum_{x \in \mathbb Z_d^N} \!\! \omega^{\phi(x)}\, \ket{f_1(x)} \otimes \ket{f_2(x)} \otimes \cdots \otimes \ket{f_n(x)} \end{equation}~\\[-2ex] for linear functions $f_k$ and $\phi$, and where $0 \le N \le n$. We may represent $n$-qudit linear function states by $(N{+}1)\times(n{+}2)$ ``linear function tableaus'' $T = {\bigl[\:\! C \:\!\big\vert\!\: \mathbf p \:\!\bigr]}$, which represent the linear function state by specifying the coefficients of the functions $f_k$ and $\phi$ in the same way as in the qubit case. It is easy to show that preparation of the states $\ket{0}$ and $\ket{+_d}$ may be represented by the same column/row insertion steps, and the effects of the unitaries $X_d$, $Z_d$, and $\mathrm{Add}_d$ may be simulated in the same way through elementary column operations. (Indeed, one may use the same $\{0,1\}$-matrices in each case, albeit with coefficients modulo $d$.) The procedures to simulate measurements are similar to the case $d = 2$, but must be described without relying (for instance) on $1$ being the only non-zero element. It also becomes more helpful to describe the measurement outcomes as some element $s \in \mathbb Z_d$, representing a measurement of the $\omega^s$ eigenstate either of $X_d$ or of $Z_d$. As before, we put the tableau into reduced row echelon form, making the $k\textsuperscript{th}$ column (counting from 0) a pivot column if possible, where $k$ is the qudit to be measured.
\begin{itemize} \item For an $X_d$-eigenbasis measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:XmeasProda} If $f_k(a) = a_g$ for an indeterminate which does not occur in any other qudit formula $f_j(a)$ --- \emph{i.e.},~if there is a single $1$ in the $g\textsuperscript{th}$ row of $C$ --- then the state is unchanged by the measurement, and the measurement outcome is $s = p_g$. \item \label{item:XmeasEnta} Otherwise, let $z_{N\!\!\:{+}\!\!\:1}$ be a new indeterminate (represented in $C$ by adding a new row at the bottom), and choose a measurement outcome $s \in \mathbb Z_d$ uniformly at random. If $f_k(a)$ is constant prior to measurement, let $\Delta$ be the $(N{+}2)$-dimensional column vector with $1$ in the final row, and zero elsewhere; otherwise, if $f_k(a) = a_g$, let $\Delta$ be the $(N{+}2)$-dimensional column-vector with $-1$ in row $g$ and $1$ in row $N{+}1$ (counting from~$0$), and zero elsewhere. We then add $\Delta$ to the $k\textsuperscript{th}$ column of $C$, and subtract $s\Delta$ from $\mathbf p$. \end{enumerate} \item For a $Z_d$-eigenbasis measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:ZmeasProda} If $f_k(a) = c$ is a constant function, then the measurement leaves the state unchanged, and the measurement outcome is $c$. \item \label{item:ZmeasEnta} Otherwise, we select a measurement outcome $s \in \mathbb Z_d$ uniformly at random. Let $g$ be the row containing the pivot of the $k\textsuperscript{th}$ column, and let $\Delta = s \mathbf e_0 - \mathbf c_k$. For any column $j$ of $T = {\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$ which contains a non-zero coefficient $r_j \ne 0$ in the $g\textsuperscript{th}$ row (including the $k\textsuperscript{th}$ column itself), add $r_j \Delta$ to column $j$; then remove the row $g$ entirely from the tableau. \end{enumerate} \end{itemize} The analysis for these operations is similar to that of the case $d = 2$. This allows us to simulate qudit QLNC circuits. Finally, note that the property of parity function tableaus, that their ``parity function matrix'' $C$ has full rank, also holds for linear function tableaus for any $d$ prime, as these properties only depend on the fact that these matrices are defined over a field (a property on which we have also relied in considering reduced row echelon forms when simulating measurements). As a result, those results (such as ``termination'' of qubits being well-defined) which rely on such rank properties are also true in the QLNC formalism on qudits of prime dimension. It seems to us likely that, with some elaboration, the QLNC formalism may be extended to arbitrary dimensions $d \ge 2$. However, such an elaboration must carefully accommodate the fact that not all non-zero coefficients are invertible modulo $d$. \subsection{Entanglement distribution using the qudit QLNC formalism} For qudits of prime dimension $d$, the natural analogues of Bell states and GHZ states are the states \begin{equation} \begin{aligned} \ket{\Phi_d^+} \,&=\, \frac{1}{\sqrt d} \,\sum_{x = 0}^{d-1} \,\ket{x}\ket{x}; \qquad\qquad \ket{\mathrm{GHZ}_{d,n}} \,=\, \frac{1}{\sqrt d}\, \sum_{x=0}^{d-1}\, \underbrace{\ket{x}\ket{x}\cdots\ket{x}}_{\substack{\text{$n$ tensor} \\[0.25ex] \text{factors}}}\;. \end{aligned} \end{equation} These are evidently linear function states on qudits of dimension $d$.
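To make the qudit primitives concrete, the following is a minimal numpy sketch (our own illustration; the variable and function names are ours, not from any library discussed here) which checks that preparing $\ket{+_d}\ket{0}$ and applying $\mathrm{Add}_d$, exactly as a qudit QLNC circuit would along a network edge, produces $\ket{\Phi_d^+}$:
\begin{verbatim}
# Minimal numpy sketch (illustrative only) of the qudit primitives above,
# checking that Add_d applied to |+_d>|0> yields |Phi_d^+>.
import numpy as np

d = 3                                   # any prime dimension
omega = np.exp(2j * np.pi / d)

# Generalised Paulis: X_d|a> = |a+1 mod d>, Z_d|a> = omega^a |a>.
X = np.roll(np.eye(d), 1, axis=0)
Z = np.diag(omega ** np.arange(d))

# Add_d|x>|y> = |x>|y + x mod d>, built as a permutation matrix.
Add = np.zeros((d * d, d * d))
for x in range(d):
    for y in range(d):
        Add[x * d + (y + x) % d, x * d + y] = 1

plus = np.ones(d) / np.sqrt(d)          # |+_d>
zero = np.eye(d)[0]                     # |0>

state = Add @ np.kron(plus, zero)
phi_plus = sum(np.kron(np.eye(d)[x], np.eye(d)[x]) for x in range(d)) / np.sqrt(d)
assert np.allclose(state, phi_plus)

# Weyl commutation relation of the generalised Pauli group: Z_d X_d = omega X_d Z_d.
assert np.allclose(Z @ X, omega * X @ Z)
\end{verbatim}
The tableau updates and measurement rules described above can be checked against brute-force simulations of this kind in the same way.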
As all of the QLNC formalism for qubits (including the notion of qubit termination) generalises in an appropriate way to qudits --- albeit possibly with a constant factor $d{-}1$ overhead, to realise some power of the $\mathrm{Add}_d$, $Z_d$, or $X_d$ operations --- we obtain the following results: \begin{corollary}[to Theorem~\ref{thm:constdepth}] Let $d$ be prime. If a multiple multicast (including multiple unicast as a special case) classical $\mathbb Z_d$ linear network code exists over a network $G$, from a set of transmitter nodes $T= \{t_1 , \cdots , t_N\}$ with in-degree $0$ to a corresponding set of receiver nodes $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ with out-degree $0$, then there is a QLNC circuit whose \textup{CNOT} operations are located along the edges of $G$, and which distributes $\ket{\Phi^+_d}$ and $\ket{\mathrm{GHZ}_{d,n_j}}$ states between corresponding transmitter/receiver node sets. Moreover, this circuit has depth at most ${2(d{-}1)\bigl[(\delta{+}2) (\chi{-}1) + 1\bigr]}$ time-steps, where $\delta$ is the largest in/out degree of any vertex in $G$, and $\chi$ is the chromatic number of $G$. \end{corollary} \begin{corollary}[to Theorem~\ref{mainthm1}] Let $d$ be prime. Suppose that a multiple multicast (including multiple unicast as a special case) classical $\mathbb Z_d$ linear network code exists over some subgraph of the graph $G'= G \cup K_n$, sending a unit-rate stream, where each edge of the graph is a unit-rate bi-directional edge (but not allowing fractional routing). Suppose that this code has a set of transmitting source vertices $T' = \{t_1, \ldots, t_{N'}\}$ for some $N' > 0$, where the first $N < N'$ of these have corresponding receiver sets $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$ (with the remaining transmitters $t_{N{+}1}, \ldots, t_{N'}$ having signals which are not necessarily actually received by any ``receiver'' nodes). Suppose further that \textup{(a)}~the information transmitted on the edges of $K_n$ from any single node, and \textup{(b)}~the information transmitted by the nodes $t_1$ through $t_N$, are linearly independent of each other. Then by simulating this linear network code by a QLNC circuit, with \textup{CNOT} operations restricted to the same graph $G$ and classically-controlled Pauli operations oriented along the other edges, the resulting protocol generates a product of $\ket{\Phi^+_d}$ and $\ket{\mathrm{GHZ}_{d,n_j}}$ states, where each $\ket{\Phi^+_d}$ or $\ket{\mathrm{GHZ}_{d,n_j}}$ is over each of the sets $\{ t_j, r_{j,1} , \cdots , r_{j,n_j} \}$ for all $1 \le j \le N$. \end{corollary} The proofs of these statements are identical to the case $d = 2$, applying the extension of the QLNC formalism to $d > 2$ dimensional qudits. \section{Remarks on computational complexity} \label{comp} We now consider the computational complexity of the QLNC formalism, and also remark on the complexity of finding linear network codes. \subsection{Comparison of the QLNC formalism to the stabiliser formalism} Recall that a parity function tableau on $n$ qubits is a matrix of size $(N{+}1)\times(n{+}2)$, where $0 \le N \le n$ is some number of indeterminates involved in the expression of the state. As every parity function tableau has the same first column, the amount of information can be bounded above by $(N{+}1) \times (n{+}1)$ bits.
By allocating enough space for an $(n{+}1)\times(n{+}1)$ matrix, and by maintaining lists to record which rows and columns in this space are actually occupied, we suppose that the data structure used for the matrix allows for $O(1)$ time row and column insertion and removal, apart from the time required to actually initialise the entries of new rows or columns. Several of the QLNC circuit operations may be represented by very simple operations or transformations on the tableau: \begin{itemize} \item Preparation of a fresh qubit involves introducing a new row and a new column, which involves $O(n + N)$ time to initialise. \item Performing a single CNOT, $X$, or $Z$ gate involves an elementary column operation, which requires $O(N) \subseteq O(n)$ time to perform. \end{itemize} Other operations are somewhat more involved: \begin{itemize} \item Performing measurements --- destructive or otherwise --- involves first putting the parity function tableau into a reduced row echelon form, which requires $O(N^2 n)$ time. This dominates the run-time required for the remaining operations: \begin{itemize} \item For an $X$ measurement, the subsequent operations may involve adding a new row, which takes $O(n + N)$ time; and adding a vector of size $O(N)$ to two columns, which takes $O(N)$ time. \item For a $Z$ measurement, the subsequent operations may involve adding a column vector of size $O(N)$ to $O(n)$ columns, and removing a row and a column, which all together takes $O(Nn)$ time. \end{itemize} \item Terminating a qubit requires a measurement, and also an appropriate set of qubits on which to perform $Z$ operations. Finding the latter also involves putting the tableau in reduced row echelon form, and $O(N)$ further work to determine an appropriate correction set; thus this also takes time $O(N^2 n)$. \end{itemize} A natural comparison to make is with the stabiliser formalism~\cite{Aaronson2004}. This also requires $O(n^2)$ space, with little room for improvement beyond techniques to represent sparse matrices. Preparation of a fresh qubit in the stabiliser formalism similarly involves extending the matrix, and takes $O(n)$ time; realising a CNOT, $X$, or $Z$ operation takes time $O(n)$. Using na\"{\i}ve techniques, simulating a measurement in the stabiliser formalism may take time $O(n^3)$, involving Gaussian elimination; this may be avoided using ``destabiliser'' methods~\cite{Aaronson2004}, at the cost of doubling the size of the tableau, so that measurements take $O(n^2)$ time. In the worst case where $N \in \Theta(n)$, the run-time bounds for the QLNC formalism are then worse than those involving ``destabiliser'' methods; but for any circuit in which there is a bound $N \in o(n^{1/2})$, we obtain a better than constant factor improvement in the complexity of measurement, and a better than quadratic improvement in the complexity of simulating CNOT, $X$, and $Z$ gates. In summary, the computational advantage of the QLNC formalism, when simulating QLNC circuits, is the ability to take advantage of the potential for the parity function tableau to occupy space ${}\! \ll n^2$, in which case the operations required to transform it are less computationally costly. Even in the worst case, the fact that parity function tableaus have size $n^2 + O(n)$, rather than size $2n^2 + O(n)$, will also yield a mild improvement in performance for unitary operations (while incurring a performance hit for measurements).
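As an illustration of why the gate simulations are so cheap, the following is a simplified Python sketch (our own; it tracks only the parity formulae $f_k$, storing each as a bitmask over the indeterminates, and omits the phase column and measurements) in which a CNOT is a single $O(N)$ XOR, independent of $n$:
\begin{verbatim}
# Simplified sketch of parity-formula tracking (phases and measurements omitted).
# Qubit k holds |f_k(a)>, stored as (bitmask over indeterminates, constant bit).

class ParityState:
    def __init__(self, n):
        self.n_indet = 0
        self.f = [(0, 0)] * n

    def prep_plus(self, k):      # |+>: assign a fresh indeterminate a_g
        g = self.n_indet
        self.n_indet += 1
        self.f[k] = (1 << g, 0)
        return g

    def prep_zero(self, k):      # |0>: the constant-0 formula
        self.f[k] = (0, 0)

    def x(self, k):              # Pauli X flips the constant term of f_k
        mask, c = self.f[k]
        self.f[k] = (mask, c ^ 1)

    def cnot(self, ctrl, tgt):   # f_tgt <- f_tgt + f_ctrl (mod 2): one XOR
        cm, cc = self.f[ctrl]
        tm, tc = self.f[tgt]
        self.f[tgt] = (tm ^ cm, tc ^ cc)

# Distributing a GHZ state along the path 0 - 1 - 2 - 3:
s = ParityState(4)
g = s.prep_plus(0)
for k in (1, 2, 3):
    s.prep_zero(k)
    s.cnot(k - 1, k)
assert all(s.f[k] == (1 << g, 0) for k in range(4))  # all qubits carry a_g
\end{verbatim}
The tableau of the text stores the same data column-by-column, together with the phase vector $\mathbf p$; the $O(N^2 n)$ cost of measurements comes from the row reduction which this sketch omits.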
\subsection{On the complexity of finding QLNC circuits for entanglement distribution problems} Here, we consider the complexity of \emph{finding} a QLNC to perform a particular entanglement distribution task in a given network $G$. It is clear that when a linear network code for the classical $k$-pairs (or multiple multicast) problem exists in a particular network $G$, we may easily convert this to a constant depth QLNC circuit to solve the corresponding entanglement distribution problem on a quantum platform with the same interaction topology $G$ (with the mild restriction that nodes are either transmitters or receivers or relays, as previously discussed). However, it is not always easy to find such a linear network code. Lehman and Lehman \cite{Lehman2004} show that deciding whether a network code exists is NP-hard in general. As Kobayashi \textit{et al.} \cite{Kobayashi2009} note, the $k$-pair problem is thus itself NP-hard, as all network coding can be reduced to an instance of the $k$-pair problem~\cite{Dougherty2006}. Given that finding network codes is a hard problem in general, it is reasonable to ask whether reducing the problem of entanglement distribution to the problem of finding linear network codes is of any practical advantage. One answer to this is that the problem of classical network coding has already received significant attention (\emph{e.g.}, \cite{Dougherty2006, LNCnov1, LNCnov2, LNCnov3, LNCnov4}), and thus such a reduction enables existing results and understanding to be transferred to the problem of entanglement distribution. Furthermore, the existing proof of NP-hardness appears to require a somewhat specialised network architecture. (To us, this seems to mirror the situation with the bounds on the depth of the ``constant-depth'' QLNC circuits described in Theorem~\ref{thm:constdepth}: while the bound depends on parameters such as the vertex-chromatic number which are NP-hard to compute in general, in many practical examples they may be computed very easily.) Finally, as we allow unconstrained classical control in QLNCs (\emph{i.e.},~the classical control could be thought of as being transmitted through a completely connected graph, as in Section~\ref{eswap}), we should expect it to be easier to find a QLNC for a network $G$, and perhaps to sometimes find a QLNC for entanglement distribution where there is no solution to the corresponding classical linear network coding problem. In any case, as our results more generally allow an edge to be used more than once, it is not clear whether we should expect the problem of finding QLNC solutions to entanglement distribution to be similar to that of solving the $k$-pairs problem. The complexity of this problem is open; though from our results, it is clear that it cannot be worse than NP-hard. We conjecture that such QLNC solutions can in fact be found in polynomial time. \section{Proof of Theorem~\ref{mainthm1}} \label{app1} Finally, we give here a more thorough presentation of the proof of Theorem~\ref{mainthm1}. In particular, we adopt a more concrete presentation in the hope of describing in some detail the transformations of the states involved. Let there be $n$ qubits, ordered such that the first $n_1$ are prepared in the $\ket{+}$ state and the remaining $n_2 = n - n_1$ are prepared in the $\ket{0}$ state.
The QLNC circuits described consist of four primitives: initialisation (i.e., preparation of qubits in the $\ket{+}$ or $\ket{0}$ state, as stated directly above); CNOT gates; measurements which can classically control Pauli-$X$ gates on other qubits; and termination operations. Firstly, we note that the principle of deferred measurement can be used to express an equivalent circuit with the classically controlled $X$ gates replaced by CNOT gates and deferred measurement on the control qubit, as shown in Fig.~\ref{f004}(a) and (b). Secondly, we address a generalised version of the circuit in question, as shown in Fig.~\ref{f004}(c). The remainder of the proof proceeds as follows: firstly, we relate the state of this generalised circuit after the network of CNOT gates to the actual circuit we want to express; secondly, we prove by induction that the state is correctly simulated by the QLNC formalism as the individual CNOT gates are executed; thirdly we show that the termination process does indeed remove qubits as required, without disturbing the rest of the state; and finally we show that the desired product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states is realised if and only if the measurements do not reveal information thereabout, and that the Gaussian elimination procedure described is necessary and sufficient to verify this.\\ \begin{figure}[!t] \centering \includegraphics[width=0.58\textwidth]{fig12.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Illustration of equivalent circuits used in the proof of Theorem~\ref{mainthm1}; the three parallel vertical lines of descending size (i.e., a rotated `earth' symbol, as used in electrical engineering) denote termination: (a) shows the actual quantum circuit, consisting of qubits initialised in the $\ket{+}$ and $\ket{0}$ states, a network of $CNOT$ gates, and measurements classically controlling Pauli-$X$ gates, and some terminations; (b) shows the same circuit, but now with deferred measurement (such that the classically controlled Pauli-$X$ gates can be represented as $CNOT$ gates); and (c) shows the circuit with additional ancilla qubits entangled with the qubits initialised in the $\ket{+}$ state, as is required in the proof.}} \label{f004} \end{figure} \indent Fig.~\ref{f004} illustrates a general instance of the circuit, in which $U$, in Fig.~\ref{f004}(a), is a block consisting of CNOT gates and classically controlled Pauli-$X$ gates; in Fig.~\ref{f004}(b) the principle of deferred measurement is used to draw an equivalent circuit, with CNOT gates replacing the classically controlled Pauli-$X$ gates in a block now labelled $\tilde{U}$, and with measurements deferred until the end of the circuit. This allows us to write down the state directly after $\tilde{U}$: \begin{align} \ket{\psi_C} & = \tilde{U}(\ket{+}^{\otimes n_1} \ket{0}^{\otimes n_2} ) \nonumber \\ \label{app1eq10} & = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \tilde{U}(\ket{i} \ket{0}^{\otimes n_2} ), \end{align} where $i$ is a binary number. Later in the analysis we use $\mathbf{i}$ to denote the binary vector corresponding to the binary number $i$ (i.e., the $j^{th}$ element of $\mathbf{i}$ is the $j^{th}$ digit of $i$), and we use each of $\ket{i}$ and $\ket{\mathbf{i}}$ to denote the corresponding $n_q$-qubit quantum state, where $n_q$ is the number of digits in $i$ (and therefore the number of elements in $\mathbf{i}$). In the analysis, it is helpful to consider the circuit in Fig.~\ref{f004}(c), in which $n_1$ ancilla qubits are prepended to the state.
Each ancilla is initialised in the $\ket{0}$ state, and then is the target of a CNOT by one of the qubits initialised in the $\ket{+}$ states (that is, a different one of these qubits controls the CNOT for each of the ancillas). This allows the state before $\tilde{U}$ to be expressed: \begin{equation} \label{app1eq15} \ket{\tilde{\psi}_B} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{i}\ket{i} \ket{0}^{\otimes n_2} , \end{equation} which in turn allows us to express the state after $\tilde{U}$: \begin{equation} \label{app1eq20} \ket{\tilde{\psi}_C} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{i} \tilde{U}(\ket{i} \ket{0}^{\otimes n_2} ). \end{equation} These extra ancillas have been introduced to make it easier to keep track of the state, and we later rely on the correspondence between (\ref{app1eq10}) and (\ref{app1eq20}) to show that these additional ancillas are indeed just an analytical device and do not affect the validity of the simulation of the actual circuit in the formalism.\\ \indent We can now introduce the QLNC formalism in vectorised form. We order the vector such that the first $n_1$ elements correspond to the qubits initialised in the $\ket{+}$ state, which are therefore labelled with a unique symbol in the initialisation process. Furthermore, the effect of each of these performing a CNOT on (a distinct) one of the ancillas is to copy the label to the ancilla, therefore it is convenient to think of the ancillas as having been labelled. Let these labels be $a_1 \cdots a_{n_1}$, and the actual qubits be labelled $q_1 \cdots q_n$ (which in general will be sums over the terms $a_1 \cdots a_{n_1}$). Stacking these up into vectors, we have $\mathbf{a} = [a_1 , \cdots , a_{n_1}]^T$ and $\mathbf{q} = [q_1 , \cdots , q_{n}]^T$, such that: \begin{equation} \label{app1eq30} \mathbf{q} = \mathrm{L} \mathbf{a}, \end{equation} where $\mathrm{L}$ is an $n \times n_1$ binary matrix which keeps track of how the labels of the various qubits are related to the ancilla labels, i.e., initially $\mathrm{L} = [\mathbbm{1} | \mathbf{0}]^T$. In the QLNC formalism, the operation of a CNOT with the $j^{th}$ qubit controlling the $k^{th}$ qubit is that the $j^{th}$ row of $\mathrm{L}$ is added to the $k^{th}$ row (modulo-2), that is $\mathrm{L}_{k,*} \leftarrow \mathrm{L}_{k,*} + \mathrm{L}_{j,*}$ (here `$*$' means the entirety of that row). Moving on to the network of CNOT gates (including those which have been included by the deferred measurement equivalence), we prove by induction that the quantum state is in the form: \begin{equation} \label{app1eq40} \ket{\tilde{\psi}_{BC}} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{i} \ket{\mathrm{L}\mathbf{i}} , \end{equation} where $\ket{\tilde{\psi}_{BC}}$ is the quantum state at an arbitrary point between $\ket{\tilde{\psi}_{B}}$ and $\ket{\tilde{\psi}_{C}}$ (i.e., within the block of CNOT gates, $\tilde{U}$).\\ \indent For the base case of the inductive proof, we observe that the initial definition of $\mathrm{L}$ (i.e., in the text below (\ref{app1eq30})) corresponds to this form for the initial state in (\ref{app1eq20}). Turning to how the quantum state is changed by a CNOT gate, to simplify the notation (and without loss of generality) we re-order the qubits (and therefore the rows of $\mathrm{L}$ and $\mathbf{q}$) such that the first qubit is the control, and the second the target. Before the CNOT we have: \begin{align} \ket{\tilde{\psi}_{BC}} = & \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i\, s.t.
\mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{00} \ket{\psi_i'} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{01} \ket{\psi_i''} \right. \nonumber \\ \label{app1eq50} & \left. \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{10} \ket{\psi_i'''} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{11} \ket{\psi_i''''} \right), \end{align} where $s.t.$ means `such that', and $\ket{\psi_i'}$, $\ket{\psi_i''}$, $\ket{\psi_i'''}$ and $\ket{\psi_i''''}$ represent the remainder of the quantum state in each term, which is not required for this analysis. After performing a CNOT gate on the first two qubits we have: \begin{align} & \left(\textnormal{CNOT} \otimes {\mathbbm{1}}_{n-2}\right)\ket{\tilde{\psi}_{BC}} \nonumber \\ & \,\,\,\,\, = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{00} \ket{\psi_i'} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{01} \ket{\psi_i''} \right. \nonumber \\ & \,\,\,\,\, \left. \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{11} \ket{\psi_i'''} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{10} \ket{\psi_i''''} \right) \nonumber \\ & \,\,\,\,\, = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 0 } \ket{i} \ket{00} \ket{\psi_i'} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 1 } \ket{i} \ket{01} \ket{\psi_i''} \right. \nonumber \\ \label{app1eq60} & \,\,\,\,\, \left. \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 1 } \ket{i} \ket{11} \ket{\psi_i'''} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 0 } \ket{i} \ket{10} \ket{\psi_i''''} \right), \end{align} which we can see is consistent with the operation of a CNOT where the first qubit controls the second in the QLNC formalism, i.e., the assignment $\mathrm{L}_{2,*} \leftarrow \mathrm{L}_{2,*} + \mathrm{L}_{1,*}$, thereby completing the inductive proof.
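As a concrete check of this induction step, the following is a small numpy sketch (our own illustration, using the conventions of (\ref{app1eq40}); all function names are ours) which verifies, for a random $\mathrm{L}$, that a CNOT on the actual qubits produces exactly the state obtained by the row update $\mathrm{L}_{k,*} \leftarrow \mathrm{L}_{k,*} + \mathrm{L}_{j,*}$:
\begin{verbatim}
# Brute-force check (illustrative) that a CNOT q_j -> q_k on the state
# |psi> = 2^{-n1/2} sum_i |i>|L i mod 2> equals the row update L_k <- L_k + L_j.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n1, n = 3, 4                        # indeterminates a_1..a_{n1}, qubits q_1..q_n
L = rng.integers(0, 2, size=(n, n1))

def state_from(L):
    psi = np.zeros(2 ** (n1 + n))
    for i in itertools.product([0, 1], repeat=n1):
        bits = list(i) + list(L @ np.array(i) % 2)
        psi[int("".join(map(str, bits)), 2)] += 1
    return psi / np.sqrt(2 ** n1)

def cnot(psi, ctrl, tgt, total):
    out = np.zeros_like(psi)
    for idx in range(len(psi)):     # CNOT permutes the standard basis states
        bits = [(idx >> (total - 1 - b)) & 1 for b in range(total)]
        if bits[ctrl]:
            bits[tgt] ^= 1
        out[int("".join(map(str, bits)), 2)] = psi[idx]
    return out

j, k = 0, 2                         # CNOT with q_1 controlling q_3
after_gate = cnot(state_from(L), n1 + j, n1 + k, n1 + n)
L2 = L.copy()
L2[k, :] = (L2[k, :] + L2[j, :]) % 2
assert np.allclose(after_gate, state_from(L2))
\end{verbatim}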
It is worth observing that, while our proposed formalism was conceptualised from the starting point of classical network codes (as emphasised in Corollary~\ref{corr}), the manner in which the state is tracked bears some resemblance to the quadratic representation of the stabiliser formalism as described by Dehaene and de Moor \cite{dehaene}.\\ \indent $\ket{\tilde{\psi}_C}$ is simply $\ket{\tilde{\psi}_{BC}}$ after all of the CNOT gates in $\tilde{U}$ have been executed, and using the correspondence between (\ref{app1eq10}) and (\ref{app1eq20}) allows us to express $\ket{\psi_C}$ from (\ref{app1eq40}): \begin{equation} \label{app1eq61} \ket{\psi_{C}} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{\mathrm{L}\mathbf{i}} . \end{equation} \indent The next step in the circuit is the termination of any qubit which is left such that its label is the sum of two or more symbols, and indeed of any other qubits with a single-symbol label if desired. In the termination process the goal is, for any given post-measurement state, that the corresponding qubits should be measured out in such a way that the superposition over the remaining qubits is the same whichever state was measured. That is, if the state is $\ket{0}\ket{\phi} + \ket{1}\ket{\tilde{\phi}}$, termination and removal of the first qubit should leave the state $\ket{\phi} + \ket{\tilde{\phi}}$ (up to normalisation). This can be achieved in one of three ways: firstly, if the qubit to be terminated has a label which can be expressed exactly as a sum of qubits that are measured, then it can be measured out directly, as no additional information will be learned by doing so (in reality this measurement will have already taken place, although for the analysis this is treated as a deferred measurement; this does not affect the validity of measuring it out directly). Conversely, in the case where the label of the qubit to be terminated is linearly independent of all of the other qubit labels, then it can also be measured out, as this will not reveal any information about the entangled superposition of interest. To see this, without loss of generality we re-order the qubits (including the ancilla qubits) such that the first qubit is to be terminated, from which we can express the state: \begin{equation} \label{app1eq75} \ket{\psi_{CD}}= \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{\mathrm{L}_{1,*}\mathbf{i}}\ket{ \mathrm{L}_{2:n,*}\mathbf{i}}, \end{equation} and thus we can see that because $\mathrm{L}_{1,*}$ is linearly independent of all of the other rows of $\mathrm{L}$, measuring it out will not collapse any other terms in the superposition.\\ \indent So we move on to the third option for termination, where the qubit to be terminated can be expressed as a sum of qubit labels, of which at least some have not been measured. Once again, for simplicity of exposition and without loss of generality, we consider that it is the first qubit, labelled $q_1$, that is to be terminated. To see how the termination process works, first let us write the linear expression of $q_1$ in terms of the other qubit labels: $q_1 = \mathbf{r}^T\mathbf{q}_{2:n}$, where $\mathbf{r}$ is a binary vector that selects the other qubits whose labels sum to $q_1$. We now express $\mathbf{r} = \mathbf{r}_a + \mathbf{r}_b$, such that $\mathbf{r}_a$ corresponds to qubits that are measured, and $\mathbf{r}_b$ corresponds to qubits that are not measured.
Thus we can re-express (\ref{app1eq61}), noting that $\mathbf{q}_{2:n} = \mathrm{L}_{2:n,*}\mathbf{a}$, from (\ref{app1eq30}): \begin{equation} \label{app1eq90} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i \, s.t. \, (\mathbf{r}_a + \mathbf{r}_b )^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, (\mathbf{r}_a + \mathbf{r}_b)^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} Taking first the case where the existing measurements are such that $\mathbf{r}_a^T \mathrm{L}_{2:n,*}\mathbf{i}=0$, (\ref{app1eq90}) becomes: \begin{equation} \label{app1eq95} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} Next, we treat the $X$ observable measurement in the equivalent form of a Hadamard gate, followed by a $Z$ observable (computational basis) measurement. Thus, the Hadamard gate transforms (\ref{app1eq95}) to: \begin{equation} \label{app1eq100} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} (\ket{0}+\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} (\ket{0}-\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} The state is then measured, and so we must address each of the cases where we measure $0$ or $1$. In the former we can see that the state collapses to: \begin{equation} \label{app1eq100a} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right), \end{equation} with the terminated qubit still included. If instead we measure $1$, we get: \begin{equation} \label{app1eq100b} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} - \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} Now, by definition, zero is measured when there are an even number of ones in $\mathbf{r}_b \odot (\mathrm{L}_{2:n,*} \mathbf{i})$ and one is measured when there are an odd number of ones therein (where $\odot$ means element-wise multiplication). Therefore, applying a Pauli-$Z$ (phase) gate to each qubit which corresponds to a one in $\mathbf{r}$ guarantees the correct adjustment, and this is exactly what is prescribed in the termination process. Thus, after the correction (\ref{app1eq100b}) becomes: \begin{equation} \label{app1eq100bb} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t.
\, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} \indent Turning now to the alternative situation, where the existing measurements are such that $\mathbf{r}_a^T \mathrm{L}_{2:n,*}\mathbf{i}=1$, (\ref{app1eq90}) becomes: \begin{equation} \label{app1eq95a} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right), \end{equation} which the Hadamard gate transforms to: \begin{equation} \label{app1eq100n} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} (\ket{0}-\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} (\ket{0}+\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} The state is then measured, and so we must again address each of the cases where we measure $0$ or $1$. In the former we can see that the state again collapses to: \begin{equation} \label{app1eq100an} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right), \end{equation} with the terminated qubit still included. If instead we measure $1$, we get: \begin{equation} \label{app1eq100bn} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( -\sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} Applying the same correction as before, we get: \begin{equation} \label{app1eq100bbn} \ket{\psi_{CD}} = -\frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} So we can see that, regardless of the previous measurement outcomes and the outcome of the $X$-basis measurement of the qubit being terminated, we get the same quantum state up to an unobservable global phase.\\ \indent Following the layer of terminations, we can express the state as: \begin{equation} \label{app1eq110} \ket{\psi_{D}} = \frac{1}{\sqrt{2^{n'_1}}}\sum_{i=0}^{2^{n_1'}-1} \ket{\mathrm{L}\mathbf{i}} , \end{equation} where we have omitted any qubits that have been measured out, and any labels that are no longer present in any qubit label. Thus rows in $\mathrm{L}$ will have either exactly one, or more than one, element equal to one (and the rest equal to zero, as it is a binary matrix). We know that all rows of $\mathrm{L}$ with multiple elements equal to one are measured (i.e., otherwise they would have been terminated), and in general some rows with exactly one element equal to 1 may be measured too. To verify that none of these measurements imparts information that would collapse the superposition of interest, we follow the same rationale as that described around (\ref{app1eq75}).
Specifically, we construct an $n_M \times n_1$ matrix $\mathrm{M}$ (where $n_M$ is the number of measurements), such that each row corresponds to one measurement. For example, if we have five symbols in total, $a_1 \cdots a_5$, and we measure a qubit labelled $a_1 \oplus a_4$, the corresponding row of $\mathrm{M}$ would be $[1,0,0,1,0]$. We now re-order the columns of $\mathrm{M}$ such that the first $n_r$ correspond to symbols that are not present in the final entangled state, and perform Gaussian elimination such that the matrix is in row echelon form; let this transformed version of $\mathrm{M}$ be denoted $\mathrm{M}'$. A necessary and sufficient condition for the measurements not to have imparted any information that collapses the final entangled state is that each row which is not all zeros should have at least one element equal to one in the first $n_r$ columns. An example of $\mathrm{M}'$ is shown in (\ref{mateq}); the necessary and sufficient condition essentially means that the label of each measured qubit includes at least one unique symbol, not present in any other label (either those of other measured qubits, or the labels of the qubits that compose the final state), and thus, by the same reasoning given in and around (\ref{app1eq75}), the final state will not be collapsed by these measurements. \begin{equation} \label{mateq} \mathrm{M}' = \left. \left[ \begin{array}{c c c c c c} \bovermat{$n_r$ cols}{1 & \cdots & & & & \cdots} \\ 0 & 1 & & & & \cdots\\ \vdots & \ddots & 1 & & & \cdots \\ & & & 1 & & \cdots \\ & & & & \ddots & \cdots \\ \end{array} \right] \right\} \text{\scriptsize{$n_M$ rows}} \normalsize . \end{equation} \indent Having performed these measurements, and verified the condition of not imparting information that collapses the superposition, we have the final state: \begin{equation} \label{app1eq120} \ket{\psi_{E}} = \frac{1}{\sqrt{2^{n''_1}}}\sum_{i=0}^{2^{n_1''}-1} \ket{\mathrm{L}\mathbf{i}} , \end{equation} where each row of $\mathrm{L}$ has exactly one element equal to one. Rows of $\mathrm{L}$ whose element equal to one is in the same column will be labelled with the same single symbol (i.e., according to the definition in (\ref{app1eq30})), and so we can see that this corresponds to the product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states as specified, completing the proof. \section{Summary} In this article, we consider the problem of entanglement distribution in quantum architectures with constraints on the interactions between pairs of qubits, described by a network $G$. We describe how this problem may be fruitfully reduced to solving the $k$-pairs problem through linear network coding, on the same network $G$; and we describe how such codes may be simulated to achieve entanglement distribution using a shallow circuit, independent of the size of $G$ or the distance over which the entanglement is to be distributed. We also present several novel observations about realising linear network codes through stabiliser circuits. For the purposes of realising operations on practical quantum architectures, it will be of interest both to reduce the depth of circuits to distribute entanglement, and to efficiently discover protocols to do so. However, it will also be important to address issues which we have not considered here, such as the fidelity of the distributed entanglement with respect to some known Bell state.
We do not expect the quality of such Bell states to be independent of the distance over which such entangled states are distributed, or the number of Bell states which are distributed in parallel using QLNC circuits. Nevertheless, it may be feasible to consider techniques to mitigate what noise may be present. We hope that it may be possible to do so while incorporating the apparent benefits that QLNC circuits theoretically provide in the noiseless case. \section*{Acknowledgements} This work was supported by a Networked Quantum Information Technologies Hub Industrial Partnership Project Grant. The authors also thank Earl Campbell, Steve Brierley and the team at Riverlane for their encouragement and the general discussions that helped to shape this paper. \section{Introduction} One of the most important problems to solve, in the realisation of quantum algorithms in hardware, is how to map operations onto the architecture. Scalable architectures for quantum computers are not expected to have all-to-all qubit connectivity: if we describe the pairs of qubits which may interact directly by the edges of a graph (or ``network'') $G$ whose nodes are qubit labels, then $G$ will not contain edges between all pairs of nodes. This raises the question of how best to realise two-qubit operations on data stored on pairs of qubits $a,b$ of $G$ which are not adjacent in $G$. One solution is to swap qubit states through the network until they are on adjacent nodes~\cite{Cowtan-etal-2019,me,me2}. An alternative, which is possible when not all qubits in the architecture are being used to store data, is to distribute entanglement between qubits $a', b'$ of $G$ which are adjacent to $a$ and $b$ respectively. This allows a gate between $a$ and $b$ to be performed by teleportation~\cite{newref1}. Which approach is the more practical will depend on whether it is economical to leave some number of qubits free to use as auxiliary space, but also on how much noise the state is subject to as a result. The question of which approach will lead to more accumulated noise will be determined in part by how long it takes to realise the chosen approach, in total, over all operations to be performed in a given algorithm. To reduce the time taken in distributing entanglement for two-qubit operations, we may consider how entangled states may be distributed between multiple pairs in parallel. A direct approach may result in crossing paths in the network $G$, forcing the entangled pairs to be distributed in sequence rather than in parallel. Crossing paths for transmissions are also potentially an issue in conventional networks. In that setting, one solution to this problem is \emph{network coding}, in which independent signals in a network may share bandwidth by allowing intermediate nodes to combine their signals in appropriate ways to distribute complete information about each signal across the network. (A simple illustrative example of this, the ``butterfly network'', is shown in Fig.~\ref{f01}.) This motivates the idea of using network coding to realise entanglement distribution between multiple pairs of qubits in parallel using similar concepts.
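To illustrate the mechanism, the following is a toy Python sketch (our own, following the behaviour sketched in Fig.~\ref{f01}; the function name and decoding arrangement are ours) of the butterfly network code, in which the single bottleneck edge carries the modulo-2 sum of the two streams, and each receiver decodes the stream it does not hear directly:
\begin{verbatim}
# Toy sketch of the butterfly network code (illustrative only).
# Streams A and B share the bottleneck edge, which carries A XOR B.
def butterfly(a: int, b: int):
    mid = a ^ b              # bottleneck node forwards the modulo-2 sum
    left = (a, mid ^ a)      # left receiver hears A directly, decodes B
    right = (b, mid ^ b)     # right receiver hears B directly, decodes A
    return left, right

for a in (0, 1):
    for b in (0, 1):
        assert butterfly(a, b) == ((a, b), (b, a))
\end{verbatim}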
Previous work~\cite{Leung2006,Kobayashi2009,Kobayashi2011,Satoh2012} has shown that when a classical binary linear network code exists for the ``multiple unicast'' problem (the problem of sending signals between $k$ pairs of sources and targets) on a classical network, there exists a quantum network code to distribute Bell states between each source--target pair in a quantum network of the same connectivity. However, these results suppose that each `node' is a small device, hosting multiple qubits and able to perform arbitrary transformations on them before transmitting onward ``messages'' through the network. This does not reflect the architecture of many hardware projects to realise quantum computers, in which the `nodes' are single qubits, and edges are pairs which may be acted on by a quantum operation (such as a CNOT) rather than a directed communications link~\cite{IBM1,GoogleQC,Rigetti,Intel,IBM2}. \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{fig1.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{(a) Example of network coding over the Butterfly network for input bitstreams ``A'' and ``B'' -- nodes either perform a modulo-2 sum of the incoming bitstreams (when labelled $\oplus$) or fanout the single incoming bitstream otherwise; (b) the Butterfly shown as a (topologically identical) $2 \times 3$ grid, with node order shown by the labelled indices -- as the Butterfly network provides a useful illustrative example for many of the results presented in this paper, this ordering is defined and used consistently throughout the paper (for example for qubit ordering).}} \label{f01} \end{figure} In this article, we describe techniques to translate linear network coding protocols on a directed graph $G$, to circuits --- called here ``QLNC circuits'' --- which involve only preparation of $\ket{0}$ and $\ket{+}$ states, CNOT gates along edges of $G$, unitary Pauli gates (possibly conditioned on classical information, which is communicated without constraints), and measurements of the observables $X$ and $Z$. Our techniques extend also to the multiple multicast problem, serving to distribute Bell and GHZ states across such a network $G$. We show that QLNC circuits allow us to distribute entanglement in a circuit whose quantum depth can be bounded from above by simple properties of the architecture network $G$, leading to a modest-sized constant for reasonable choices of $G$ (\emph{e.g.},~12 for a square lattice provided no receiver node has four in-coming links).~\label{discn:introSquareLatticeBound} In particular, the depth is independent of the number of qubit pairs to be entangled, the distance between the nodes in any of the pairs, or the total number of other qubits involved. In addition to this constant quantum depth, there is a dependency on computing classical controls for some of the quantum operations, which is at worst logarithmic in the number of qubits involved. These are lower circuit depths than can be achieved by realising two-qubit operations by routing~\cite{Cowtan-etal-2019,KMvdG-2019}. Furthermore, while our results are in some ways similar to what can be achieved with graph states (as described by Hahn~\emph{et al.}~\cite{Hahn}), our techniques are somewhat more versatile and also easier to analyse. We make these comparisons more precise in Section~\ref{sec:compare}. 
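As an aside on the classical-control cost just mentioned: the classical data conditioning the Pauli corrections in our circuits are parities of measurement outcomes, and the parity of $n$ bits can be computed in a logarithmic number of parallel combining layers. The following small Python sketch (our own illustration; the function name is ours) evaluates such a parity with a balanced XOR tree:
\begin{verbatim}
# Parity of a list of measurement outcomes via a balanced XOR tree,
# using O(log n) layers of pairwise XORs evaluated in parallel.

def parity_tree(bits):
    layers = 0
    while len(bits) > 1:
        # combine disjoint pairs; a leftover odd element passes through
        bits = [bits[i] ^ bits[i + 1] if i + 1 < len(bits) else bits[i]
                for i in range(0, len(bits), 2)]
        layers += 1
    return bits[0], layers

outcomes = [1, 0, 1, 1, 0, 1, 0, 1]
p, depth = parity_tree(outcomes)
assert p == sum(outcomes) % 2 and depth == 3   # 3 = log2(8) layers
\end{verbatim}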
As well as describing how network codes can be used to distribute entanglement, in a setting where the nodes in the network represent individual qubits which may interact in pairs along the network, we also note two features of QLNC circuits that make them more versatile than classical linear network coding protocols: \begin{itemize} \item QLNC circuits can be used to simulate a classical linear network code ``out of order''. (Indeed, this is required for our main result, which simulates a linear network code in a depth which may be smaller than the length of the longest transmitter--receiver path in the classical network.) \item Entanglement swapping allows for QLNC circuits to perform entanglement distribution tasks that \emph{do not} correspond to classical linear network coding protocols --- that is, for networks $G$ in which the corresponding linear network coding problem has no solution. \end{itemize} These features arise from using the (unconstrained) classical control to allow a QLNC circuit to simulate a classical linear network code, on a network with more edges than $G$. Our analysis of QLNC circuits involves a simple computational formalism, which may be of independent interest. The formalism is similar to classical network coding in the way that it represents data evolving with time, and allows the easy use of classical network coding results and intuitions to reason about entanglement distribution circuits. While QLNC circuits are stabiliser circuits, and can be efficiently simulated using the stabiliser formalism, they do not require the full power of the stabiliser formalism to simulate. This allows us to reason about them more efficiently than is possible even with the stabiliser formalism: it yields at least a factor $2$ improvement in space and time requirements, and achieves $O(n)$ complexity (without using sparse matrix techniques) to simulate protocols which only involve superpositions of $O(1)$ standard basis states. These techniques can also be applied to network codes on qudits of prime dimension. The remainder of the paper is organised as follows. In Section~\ref{prev} we review existing literature on classical and quantum network coding. In Section~\ref{prelim} we introduce the QLNC formalism, and present the main results described above. In Section~\ref{qubitsection} we give the generalisation to qudit systems of prime dimension $d$. In Section~\ref{comp} we discuss the computational complexity of simulating circuits using the QLNC formalism, as well as that of discovering linear network codes. Finally, in Section~\ref{app1}, we include a detailed proof of Theorem~\ref{mainthm1}, which demonstrates the way in which a QLNC circuit may be regarded as realising a linear network code on an extended network $G' \supseteq G$. \section{Preliminaries} \label{prev} We begin by reviewing the literature on classical and quantum network coding, and giving an overview of techniques to help realise two-qubit operations in limited architectures. \subsection{Classical network coding} \begin{figure}[!t] \centering \includegraphics[width=0.29\linewidth]{Fig1a.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Another example of network coding, on a $4\times 3$ grid with three bitstreams ``A'', ``B'' and ``C''.}} \label{f01a} \end{figure} Network coding, as a means to increase information flow in mesh networks beyond what can be achieved by routing alone, was conceptualised by Ahlswede~\textit{et al.}~\cite{Ahlswede2000}. 
Rather than simply re-transmit one or more incoming information signals on different onward channels, a network coding protocol allows the nodes in the network to compute some function of their signals (\emph{e.g.},~the \texttt{xor} of bit streams from different incoming links) and to transmit the outcome, in principle ``encoding'' the signals. The standard example of network coding, providing a simple and clear illustration of the underlying principle, is the Butterfly network (Fig.~\ref{f01}), which enables simultaneous transmission between the diagonally opposite corners. Fig.~\ref{f01a} illustrates a more elaborate network which solves a slightly more complicated signal transmission problem. These examples, which represent a proof of principle of the benefits of network coding, both use binary \textit{linear} network coding -- that is, each node can encode its inputs by performing modulo-2 sums. Binary linear network coding provides the basis for the Clifford-group QLNC circuits we address in this paper; however, it is worth noting that much of the classical literature considers a more general setting in which the network nodes can encode the input data-streams by performing modulo-$r$ summations (for $r \geqslant 3$) and/or nonlinear functions. Additionally, these examples are concerned with only one type of network coding task, namely the \textit{multiple unicast} problem (also known as the $k$-pairs problem), in which some number $k \geqslant 1$ of transmitter nodes each send a different information stream to a single, distinct receiver node. Other problems for which one may consider network coding protocols are the \emph{multicast} and \emph{broadcast} problems (in which a single source node sends the same information to some subset of~the~nodes\,/\,all~nodes in the network respectively), and the \textit{multiple multicast} problem (in which multiple transmitters send different information streams to different subsets of the other nodes). The advantage of network coding is most important in the case that the network $G$ has edges which are all directed (as illustrated in the examples of Figs.~\ref{f01} and~\ref{f01a}). In the case of directed networks, it is always possible to contrive situations in which network coding can yield an unbounded increase in information throughput (for a $k$-pairs example see Fig.~\ref{f025}). \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{fig21.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{An example of a directed network in which network coding can yield an arbitrary speed-up in the $k$-pairs setting. The network is a directed graph, consisting of transmitters on the left-hand side, and receivers on the right-hand side. Each receiver is paired with the transmitter horizontally to its left (as shown by the indexed ``t''s and ``r''s). The network consists of two components: a bipartite graph between the transmitters and receivers, with the direct links t$_i$-r$_i$ missing, shown in (a); and all of the transmitters connected to all of the receivers through a single directed link, shown in (b). Clearly, without network coding all of the transmitter-receiver pairs would have to share the link in (b), and the links in (a) would be useless. With network coding, however, each of the transmitters can broadcast its bitstream to each output, the left-most of the central nodes in (b) can perform a modulo-2 sum of all its inputs and forward the result, and the right-most of the central nodes in (b) simply broadcasts this to each receiver. 
It follows that each receiver receives four bitstreams -- the modulo-2 sum of all the transmissions, via the central nodes, and the bitstreams from all transmitters other than its pair -- and can thus perform a modulo-2 sum to resolve the bitstream from its paired transmitter. That is, for example, r$_1$ receives B, C and D directly from t$_2$, t$_3$ and t$_4$ respectively, as well as A$\oplus$B$\oplus$C$\oplus$D from the central nodes in (b), and can thus perform the modulo-2 sum of all its inputs A$\oplus$B$\oplus$C$\oplus$D$\oplus$B$\oplus$C$\oplus$D=A, as required. It can easily be appreciated that this construction extends to any number of transmitter-receiver pairs.}} \label{f025} \end{figure} However, in many practical contexts, the available communication channels are bidirectional. For such networks, it is often not clear that network coding will yield any benefits at all. For the broadcast setting, it has been proven that there is no benefit to the application of network coding over standard routing~\cite{Li2004a}. For tasks of transmitting long information streams in undirected networks, techniques other than network coding appear to be competitive. For instance, \emph{fractional routing} involves dividing up a single bitstream and forwarding the parts along different routes, storing them locally in between rounds of use of the network. Fig.~\ref{f03} illustrates how fractional routing can achieve the same asymptotic throughput as network coding in the Butterfly network. The \textit{multiple unicast conjecture} posits that there is no benefit to the application of network coding over standard routing for multiple unicast channels, if fractional routing is possible \cite{Li2004b}. While the conjecture remains unproven, the improvement possible by using network coding has been upper-bounded to typically low factors for various restricted multiple unicast settings \cite{Cai2015}. This rather sets the tone for the other settings considered: an upper bound of two on the factor improvement over routing achievable by network coding has been proven for the multicast and multiple multicast settings \cite{Li2009}. Table~\ref{tab1} summarises the benefits of network coding in various settings. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig31.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Demonstration of achieving the same throughput on the Butterfly network as network coding, by using fractional routing instead. This is achieved by splitting each of the bitstreams ``A'' and ``B'' into halves, and forwarding half on each link, as shown in (a). That is (for example) let A consist of two bits $a_1$ and $a_2$ and likewise B two bits $b_1$ and $b_2$. In the first time interval the two bits of A are forwarded on different routes, as shown in (b), and then likewise for the bits of B, shown in (c). 
Thus time-sharing is used to achieve the fractional routing, and A and B can each forward two bits in a total of two time intervals, which corresponds to the same bit-rate as is achieved using network coding, as shown in Fig.~\ref{f01}.}} \label{f03} \end{figure} \begin{table}[t] \bigskip \centering \begin{tabular}{ l c c c c} & \textbf{Broadcast} & \textbf{Multicast} & \textbf{Multiple multicast} & \textbf{Multiple unicast}\\ \hline\hline \textbf{Directed} & $\infty$ & $\infty$ & $\infty$ & $\infty$ \\ \textbf{Undirected} & 1 & $\leq 2$ & $\leq 2$ & 1 (conjectured)\\ \end{tabular} \captionsetup{width=0.95\linewidth} \caption{\small{Maximum factor increase in information throughput using network coding, for various network and information transfer types.}} \label{tab1} \end{table} \subsection{Quantum network coding} The concept of network coding has been adapted to quantum state transmission~\cite{Leung2006}, and then to entanglement swapping~\cite{Kobayashi2009, Kobayashi2011, Satoh2012} in quantum communications networks. Because of the limitation imposed by the no-cloning theorem, the $k$-pairs problem (or for entanglement swapping, the problem of distributing entanglement between $k$ different transmitter--receiver pairs) is typically the problem studied in the quantum case. It has been shown that in any situation in which a classical network code for multiple unicast exists, there is also a quantum network code for entanglement swapping~\cite{Leung2006, Kobayashi2009, Kobayashi2011}. These results include quantum generalisations of both linear and non-linear network codes. It is with the former that we are concerned in this article, and Satoh~\textit{et al.} provide a very good visual demonstration of the correspondence between classical and quantum linear network coding for the case of the Butterfly graph~\cite{Satoh2012}. In the case of ``classically assisted'' quantum linear network coding, in which classical communication is less constrained than quantum communication, de Beaudrap and Roetteler~\cite{deBeaudrap2014} show how quantum network coding can be described as an instance of measurement-based quantum computation involving $X$ observable measurements to remove correlations between the input states and the states of qubits (or qudits) at interior nodes. One feature which is common to these existing pieces of research is that they consider quantum networks which are in the same essential form as classical telecommunications networks: nodes which have more than one qubit of internal memory (with negligible-latency operations), which are connected to each other by channels with significant latency. This model is appropriate for entanglement distribution in quantum communication networks, but for entanglement distribution in quantum computers it may be relevant to consider a finer-scale model in which each node is itself a single qubit. Note that in this setting, fractional routing is made more complicated by the inability to store and transmit information without resetting the state of the qubit, making it less plausible for the multiple unicast problem. (In the case that the ``information stream'' consists of a single Bell state between each of the $k$ transmitter/receiver pairs, fractional routing loses its meaning entirely.) 
\subsection{Other approaches to realise two-qubit operations in limited architectures} \label{sec:compare} While we consider the problem of distributing entanglement in limited quantum architectures, this is not the only approach to the problem of realising two-qubit operations between remote qubit pairs. We consider other approaches to this problem below. \subsubsection{Realising two-qubit operations via mapping/routing} One way in which two-qubit operations can be realised between qubits is simply by moving the data stored by these qubits to adjacent nodes, \emph{e.g.},~using logical SWAP operations to exchange the data held by adjacent qubits. We may then consider the way that such a circuit of SWAP gates (or several such exchanges of qubits) can be decomposed into more primitive gates~\cite{ZPW-2018,Cowtan-etal-2019}. More generally, we may consider how to decompose a single ``long-distance'' operation (such as a CNOT) between remote qubits, into primitive gates acting on single qubits and on pairs of adjacent qubits~\cite{KMvdG-2019}. These results are applicable to the NISQ setting, \emph{i.e.},~the near-term prospect of hardware platforms in which all or nearly all of the qubits will store data which ideally is not to be lost or disturbed, owing to the scarcity of memory resources. They give rise to unitary circuits, whose depth must scale at least with the distance between the pair of qubits on which we want to perform a two-qubit operation. It seems plausible that the parity-map techniques of Ref.~\cite{KMvdG-2019} will in some cases yield something that could be interpreted in terms of linear network codes; this may allow their techniques for finding suitable CNOT circuits in the NISQ setting to be combined with our techniques for distributing entanglement in a setting where memory is less scarce. \subsubsection{Sequential distribution of Bell pairs} Our approach is to consider how multiple Bell pairs may be distributed through a quantum hardware platform in spite of ``bottlenecks'' in the network of the architecture, in a way that is independent of the distance between the qubits to be entangled. Note that individual Bell pairs can also be distributed in constant depth, by taking advantage of entanglement swapping (a~concept which implicitly underlies our techniques as well). In (otherwise idealised) quantum hardware with parallelisable two-qubit interactions limited to a connected, undirected network $G$, we may distribute entanglement between any pair of qubits $q$ and $q'$ by first preparing a long chain of entangled qubits, and ``measuring out'' all intermediate qubits (essentially using what we call ``qubit termination'' below), in constant time. It suffices to consider a chain $q_0, q_1, \ldots, q_\ell$ of qubits with $q$ and $q'$ as endpoints, and to perform the following steps (a small simulation sketch of this procedure is given at the end of this Section): \begin{enumerate}[itemsep=0ex] \item Prepare every $q_j$ for $j$ even in the state $\ket{+}$, and the remaining qubits in the state $\ket{0}$. \item Perform a CNOT from qubit $q_j$ to qubit $q_{j{-}1}$ for each even $j > 0$. \item Perform a CNOT from qubit $q_j$ to qubit $q_{j{+}1}$ for each even $j < \ell$. \item Measure the $X$ observable on each $q_j$ for $0 \!<\! j \!<\! \ell$ even (recording the outcome $s_j = \pm 1$); and measure the $Z$ observable on each $q_j$ for $j \!<\! \ell$ odd (recording the outcome $t_j = \pm 1$). \item If $\prod_j s_j = -1$, perform a $Z$ operation on either $q_0$ or $q_\ell$ (not both); and if $\prod_j t_j = -1$, perform an $X$ operation on either $q_0$ or $q_\ell$ (not both). 
\end{enumerate} The values of the products $\prod_j s_j$ and $\prod_j t_j$ can be evaluated by simple circuits of depth $O(\log \ell)$, and only determine the final single-qubit corrections which ensure that the state prepared on $\{q,q'\}$ is $\ket{\Phi^+}$ rather than another Bell state; the rest of the procedure is evidently realisable by a quantum circuit with a small depth, independent of $\ell$. To distribute Bell states between $k$ pairs of qubits, it clearly suffices to perform the above procedure $k$ times in sequence, independently of whether the chains involved cross one another. (Furthermore, any pairs of qubits whose chains do not cross in $G$ can be processed in parallel.) As the final corrections can be performed in parallel, the total depth of this procedure is then at most $4k+1$, regardless of the distance between the nodes or the size of $G$. One of our main results (Theorem~\ref{thm:constdepth} on page~\pageref{thm:constdepth}) is to demonstrate conditions under which we may use a QLNC circuit to simulate a classical linear network coding protocol, in ``essentially constant'' depth --- that is, independent of the size of the network or the distance between transmitters and receivers. Thus, for sufficiently large $k$, our techniques will distribute the same entangled states in parallel, with a lower depth of quantum operations than distributing the same entanglement sequentially. \subsubsection{Distribution of entanglement via graph states} Our techniques yield results that are in some ways similar to results involving graph states~\cite{graphstate}. We describe some of these here. In the work by de~Beaudrap and Roetteler~\cite{deBeaudrap2014}, linear network codes give rise to measurement-based procedures involving graph states (which differ from, but are in some cases very similar to, the coding network itself). The connection to measurement-based quantum computing informed our results, and in particular our techniques feature both measurements and the depth-reduction for which measurement-based computing is known. However, as our results rest upon unitary operations performed on a network in which each node is a single qubit, the results of Ref.~\cite{deBeaudrap2014} do not directly apply. More intriguingly, Hahn~\textit{et al.}~\cite{Hahn} have shown how entanglement can be ``routed'' from an initial graph state using transformations of graph states by local complementations. Graph states can be prepared in depth equal to the edge-chromatic number of the graph (\emph{i.e.},~as with our results, with depth independent of the distances between the qubits involved). In this sense they represent a better-known way to address the problem of shallow-depth multi-party entanglement distribution in restricted architectures. Our results differ from those of Hahn~\textit{et al.}~\cite{Hahn} in that we are able to avoid using the sophisticated technique of local complementation of graph states, instead reducing the problem of entanglement distribution to the somewhat more easily grasped subject of linear network coding, which has also been well-studied in the context of information technologies. There are also entanglement distribution tasks which cannot be achieved by local transformations of graph states, but which can be achieved through our techniques: see Section~\ref{sec:gphState-separatingExample}. 
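As promised above, the following self-contained Python/numpy sketch (our own illustrative code; the helpers \texttt{bit}, \texttt{apply\_1q}, \texttt{cnot} and \texttt{measure} are names of our own choosing, not library functions) simulates the sequential chain procedure for $\ell = 4$, and checks that after the corrections the endpoint qubits are left exactly in the state $\ket{\Phi^+}$:
\begin{verbatim}
# Statevector simulation of the chain procedure (ell = 4): prepare |+> on
# even qubits and |0> on odd qubits, apply the CNOTs, measure out the
# interior qubits, and apply the X/Z corrections to one endpoint.
import numpy as np

n, dim = 5, 2 ** 5                       # qubits q_0 .. q_4
rng = np.random.default_rng(7)

def bit(x, j):                           # bit of qubit j in basis index x
    return (x >> (n - 1 - j)) & 1        # qubit 0 = most significant bit

def apply_1q(psi, U, j):                 # apply 1-qubit gate U to qubit j
    out = np.zeros_like(psi)
    for x in range(dim):
        for bp in (0, 1):
            y = (x & ~(1 << (n - 1 - j))) | (bp << (n - 1 - j))
            out[y] += U[bp, bit(x, j)] * psi[x]
    return out

def cnot(psi, c, t):                     # CNOT with control c, target t
    out = np.zeros_like(psi)
    for x in range(dim):
        out[x ^ (bit(x, c) << (n - 1 - t))] += psi[x]
    return out

H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def measure(psi, j, basis):              # projective measurement -> +-1
    if basis == 'X':                     # rotate the X basis onto Z
        psi = apply_1q(psi, H, j)
    p0 = sum(abs(psi[x])**2 for x in range(dim) if bit(x, j) == 0)
    m = 0 if rng.random() < p0 else 1
    psi = np.array([psi[x] if bit(x, j) == m else 0. for x in range(dim)])
    psi /= np.linalg.norm(psi)
    if basis == 'X':                     # rotate back
        psi = apply_1q(psi, H, j)
    return psi, (-1) ** m

# step 1: |+> on even qubits, |0> on odd qubits
psi = np.zeros(dim)
for x in range(dim):
    if all(bit(x, j) == 0 for j in (1, 3)):
        psi[x] = 1.
psi /= np.linalg.norm(psi)

# steps 2 and 3: CNOTs from each even qubit onto its neighbours
for j in (2, 4): psi = cnot(psi, j, j - 1)
for j in (0, 2): psi = cnot(psi, j, j + 1)

# step 4: X measurement (even interior), Z measurements (odd interior)
s = t = 1
psi, o = measure(psi, 2, 'X'); s *= o
for j in (1, 3): psi, o = measure(psi, j, 'Z'); t *= o

# step 5: Pauli corrections on one endpoint (here q_0)
if s == -1: psi = apply_1q(psi, np.diag([1., -1.]), 0)              # Z
if t == -1: psi = apply_1q(psi, np.array([[0., 1.], [1., 0.]]), 0)  # X

# verify: the reduced state of (q_0, q_4) is exactly |Phi+>
T5 = psi.reshape([2] * n)
rho = np.einsum('aijkb,cijkd->abcd', T5, T5.conj()).reshape(4, 4)
phi = np.array([1., 0., 0., 1.]) / np.sqrt(2)
assert np.isclose(np.real(phi @ rho @ phi), 1.0)
print("q_0 and q_4 hold |Phi+>")
\end{verbatim}
The assertion holds for every random choice of measurement outcomes, which also illustrates why both products of outcomes are needed: the $Z$ outcomes determine an $X$-type correction, and the $X$ outcomes a $Z$-type correction.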
\section{Quantum Linear Network Coding circuits} \label{prelim} In this Section, we describe techniques to distribute entanglement in architectures where the pairs of qubits which can interact are restricted to some graph $G$. Our results involve stabiliser circuits which in a sense simulate a linear network coding protocol on $G$ in order to distribute entanglement, given that the ``nodes'' are single qubits and the ``channels'' consist just of whether or not a CNOT operation is applied. For this reason, we call these circuits \emph{quantum linear network coding circuits} --- or henceforth, QLNC circuits. We demonstrate below how to simulate a particular classical linear network code using a QLNC circuit, and how doing so can be used to distribute Bell states in parallel by reducing this task to the $k$-pairs problem. More generally, we show that the same techniques may be used to distribute GHZ states of various sizes by reducing this task to the multiple multicast problem. We also demonstrate the way in which QLNC circuits allow us to find solutions which somewhat extend what can be achieved by reduction to the $k$-pairs or multiple multicast problems. To aid this exposition, we introduce a formalism to describe the effect of QLNC circuits as a class of quantum circuits, independent of the application of entanglement distribution. \subsection{A first sketch of QLNC circuits} Consider a network $G$ with $k$ transmitters $T = \{t_1, \ldots, t_k\}$ and $k$ receivers $R = \{r_1, \ldots, r_k\}$, where we wish to distribute a Bell pair $\ket{\Phi^+}$ between each pair $(t_j, r_j)$, $1 \le j \le k$. The simplest application of our techniques is to reduce this problem to the existence of a linear network coding solution to the corresponding $k$ pairs problem on $G$, which we may describe by a subgraph $G'$ (omitting edges not required by the protocol) whose edges are given directions by the coding protocol.\footnote{% Note that this is not an easy problem in general: see Section~\ref{comp}. } In particular, our results apply to linear network codes in which every node with output channels sends the same message (consisting of the sum modulo~2 of its inputs) on each of its output channels. We suppose that classical information may be transmitted freely, without being constrained to the network. While there will be non-trivial costs associated with communicating and computing with classical information, it is reasonable to suppose that the control system(s) governing the quantum architecture can perform such tasks, without being subject to the restrictions involved in the interactions between qubits. \subsubsection{Directly simulating classical linear network codes} Given a linear network code as above, to send a standard basis state from each transmitter to its respective receiver would be straightforward, using a circuit of CNOT gates to simulate the network code. It would suffice to simply initialise all qubits to $\ket 0$, and at each node, compute the message that the node should transmit by using CNOT gates (oriented along the directed edge) to compute the parity of its incoming message(s) at the corresponding qubit. Fig.~\ref{f001a} illustrates this in the case of the Butterfly network. 
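This basis-state simulation can be mimicked classically, since each CNOT simply XORs the control qubit's bit into the target. The following Python sketch (our own illustration; the edge list is our reading of the butterfly layout with transmitters $1$ and $3$, intermediate nodes $2$ and $5$, and receivers $4$ and $6$, consistent with the final state described below for Fig.~\ref{f001}) carries this out:
\begin{verbatim}
# Basis-state simulation of the butterfly network code realised by CNOTs:
# each qubit is a classical bit, and a CNOT along a directed edge XORs the
# control bit into the target bit.

EDGES = [(1, 2), (3, 2),   # node 2 accumulates x1 + x2 (mod 2)
         (2, 5),           # forward the sum to node 5
         (1, 4), (5, 4),   # receiver 4 decodes x2 = x1 + (x1 + x2)
         (3, 6), (5, 6)]   # receiver 6 decodes x1 = x2 + (x1 + x2)

def send(x1, x2):
    q = {j: 0 for j in range(1, 7)}  # all six qubits initialised to |0>
    q[1], q[3] = x1, x2              # transmitters hold |x1> and |x2>
    for c, t in EDGES:               # CNOT = XOR control into target
        q[t] ^= q[c]
    return q

for x1 in (0, 1):
    for x2 in (0, 1):
        q = send(x1, x2)
        assert (q[6], q[4]) == (x1, x2)  # paired receivers get x1 and x2
\end{verbatim}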
\begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{fig41.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Sending computational basis states $x_1$ and $x_2$ over a butterfly network in which each vertex is a qubit, and each edge is a CNOT gate, shown in (a) -- the order in which the CNOT gates are performed is given in the circuit, shown in (b).}} \label{f001a} \end{figure} Transmitting Bell pairs requires additional operations: if the qubits at the transmitter nodes do not initially start in the standard basis, the procedure described above will yield states in which the transmitters and receivers are entangled with the intermediate nodes. The elaborated procedure is illustrated in Fig.~\ref{f001}. Following Refs.~\cite{Kobayashi2009, Kobayashi2011,deBeaudrap2014}, we adapt classical network coding protocols by preparing the transmitter qubits in the $\ket{+}$ state (conceived of as a uniform superposition over standard basis states), and performing $X$ observable measurements (\emph{i.e.},~measurements in the $\ket{\pm}$ or ``Fourier'' basis) to disentangle the intermediary qubits while leaving them in (joint) superpositions of the standard basis. These measurements yield outcomes $\pm 1$. The $+1$ outcome represents a successful disentangling operation, erasing any local distinctions between possible standard basis states without introducing any relative phases. The $-1$ outcome represents a disentangling operation requiring further work, as a relative phase has been introduced between the possible standard basis states locally. By conceiving of the state of the qubit as being the parity of some (undetermined) bit-values originating at the transmitters, one may show that it is possible to correct the induced phase by performing $Z$ operations on an (easily determined) subset of the transmitters or receivers. We refer to this procedure, of measuring a qubit with the $X$ observable and performing appropriate $Z$ corrections, as \emph{termination} of the qubit. By considering the state of the qubits in Fig.~\ref{f001}(b) after the Hadamard gates simply as a superposition $\tfrac{1}{2} \sum_{a,b} \ket{a,0,b,0,0,0}$ for $a,b \in \{0,1\}$, it is easy to show that the final state after the measurements and classically controlled $Z$ operations is $\tfrac{1}{2} \sum_{a,b} \ket{a,\!\:\cdot\!\;,b,b,\!\:\cdot\!\;,a} = \ket{\Phi^+}_{1,6} \ket{\Phi^+}_{3,4}$, using dots as place-holders for the measured qubits $2$ and $5$. \begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{fig4abxbasis.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Example of performing the Butterfly protocol with a single qubit at each node: (a) shows the order of edges; (b) shows the quantum circuit diagram -- note that the final two layers, consisting of Hadamard gates and measurements on qubits 2 and 5, and classically controlled Pauli-$Z$ gates on the other four qubits, are necessary for the `termination' of qubits 2 and 5, which do not appear in the final desired entangled state. We discuss the general process of termination in full in due course.}} \label{f001} \end{figure} \subsubsection{Simulating classical linear network codes ``out of order''} \label{outoforder} For the application of distributing entanglement, QLNC circuits may simulate linear network coding protocols in other ways than sequential evaluation. 
As a fixed entangled state represents a non-local correlation rather than information as such, it suffices to perform operations which establish the necessary correlations between the involved parties. This principle applies to the simulation of the network coding protocol itself, as well as to the eventual outcome of the entanglement distribution procedure. For instance, the role of a node with exactly one output channel in our setting is to establish (for each possible standard basis state) a specific correlation between the parities of the qubits at the nodes which are adjacent to it: namely, that the total parity should be zero. These correlations may be established without simulating the transmissions of the classical network code in their usual order. Fig.~\ref{f1} illustrates a mild example of how a QLNC circuit may simulate a classical network protocol (again on the Butterfly network), performing the operations ``out of order''. \begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{fig5ab.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{The Butterfly performed out of order, as illustrated graphically in (a), with the measurement of qubit 2 performed immediately prior to the classical control; (b) shows the corresponding quantum circuit, and exhibits a good example of the termination process, as described in detail later on.}} \label{f1} \end{figure} In this case, the correlation between the values of the qubits $1$, $3$, and $5$ (that their projections onto the standard basis should have even total parity, before the disentangling measurement on $5$) is established by attempting to project the qubit $2$ onto the state $\ket{0}$, via a $Z$ observable measurement. In the case that the outcome is instead $\ket{1}$, we must correct any receiver nodes which would be affected by this, by performing (classically conditioned) $X$ operations (represented by the doubled lines, and performed at the sixth time-step). Again, by considering the state of the qubits in Fig.~\ref{f1}(b) after the Hadamard gates simply as a superposition $\smash{\tfrac{1}{2\sqrt 2}} \sum_{a,b,z} \ket{a,z,b,0,0,0}$ for $a,b,z \in \{0,1\}$, it is easy to show that the state immediately prior to the measurement of qubit $2$ is $\smash{\tfrac{1}{2\sqrt 2}} \sum_{a,b,z} \ket{a,(z{\oplus}a{\oplus}b),b,(a{\oplus}z),z,(b{\oplus}z)}$, and that projecting qubit $2$ onto the state $\ket{0}$ projects onto those terms for which $z = a \oplus b$. (Projection onto $\ket{1}$ projects onto those terms for which $z \oplus 1 = a \oplus b$, and we may correct for this simply by performing an $X$ operation on each receiver whose state depends on the index $z$.) It is then easy to verify, as with Fig.~\ref{f001}, that the resulting circuit prepares the state $\ket{\Phi^+}_{1,6} \ket{\Phi^+}_{3,4}$. One insight is that the freedom to communicate classical information outside of the network allows QLNC circuits to represent a linear network code on a larger network than the network $G$ which governs the two-qubit interactions --- with the qubits as nodes, and both the CNOT~gates\,/\, classically controlled $X$ gates as directed edges. We will formalise this insight in Section~\ref{main}. \subsubsection{A separation between QLNC circuits and local transformations of graph states} \label{sec:gphState-separatingExample} There are entanglement distribution tasks which can be achieved using QLNC circuits, but which cannot be achieved using local transformations of graph states. 
Fig.~\ref{novfigref} demonstrates a QLNC circuit on a simple network, whose effect is to prepare a four-qubit GHZ state on the nodes of degree~1. \begin{figure}[!t] \centering \includegraphics[width=0.66\linewidth]{novfigniel.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{An example of an entanglement distribution task separating QLNC circuits from local transformations of graph states. The qubits are numbered 1,2,3,4 (left to right) along the top row, and 5, 6 (left to right) along the bottom row. Qubit 2 is terminated and qubit 3 is measured (followed by the required classical correction), leaving a four-qubit GHZ state.}} \label{novfigref} \end{figure} An exhaustive search of the local complementation orbit, including measurements, revealed that the four-qubit GHZ state could not be reached by local Clifford operations and measurements if a graph state were prepared over the same graph. (We provide the code for this exhaustive search~\cite{ourcode}, which was written specifically for this example but could in principle be adapted for any single network.) While we do not make any formal claim to this effect, the existence of this example leads us to believe that our techniques may yield solutions for entanglement distribution in larger-scale networks and for a variety of entanglement distribution tasks, where it may be difficult or impossible to find a procedure to do so by manipulation of graph states. \subsection{The QLNC formalism} \label{main} Our main objective is to demonstrate how to simulate a classical linear network code to solve a multiple multicast problem on a network $G$, using a QLNC circuit of constant depth, to distribute Bell states and GHZ states between disjoint subsets of qubits located at the nodes of an architecture whose interaction graph is the same network $G$. To do so, it will be helpful to introduce a simulation technique (which we call the ``QLNC formalism'') to describe the evolution of a set of qubits in a QLNC circuit. QLNC circuits are stabiliser circuits, by construction. Indeed, as the only operations which they involve are preparations of $\ket{0}$ and $\ket{+}$ states, CNOT gates, $X$ and $Z$ observable measurements, and unitary $X$ and $Z$ gates conditioned on measurement outcomes, they do not even generate the Clifford group. For this reason, one might consider using the stabiliser formalism to simulate a QLNC circuit. (Indeed, the QLNC formalism described below uses operations similar to those of the stabiliser formalism, in that they involve transformations and row-reduction of matrices over $\mathbb Z_2$.) The QLNC formalism differs from the stabiliser formalism by expressing states implicitly as superpositions over standard basis states, essentially as a special case of that of Dehaene and de Moor~\cite[Theorem 5~(ii)]{dehaene}. This renders certain features of the correlations between states immediately obvious: \emph{e.g.},~not even a small amount of pre-processing (such as that required by the stabiliser formalism) is needed to establish the state of any single qubit which factorises from the rest. This alternative representation more transparently represents the qualities of the state which are important to simulate network coding: for this reason, it proves to be a somewhat more efficient method than the stabiliser formalism for this purpose. 
\subsubsection{Parity formula states} In the QLNC formalism, the global state is represented by an assignment of boolean formulae $f_j(a)$, where $a = (a_1,a_2,\ldots,a_N)$, to each qubit $1 \le j \le n$ in the network $G$. We call each formula $f_j(a)$ a \emph{node formula} or a \emph{qubit formula}. Here, \begin{equation} f_j(a) \,=\, c_{j,0} + c_{j,1} a_1 + \cdots + c_{j,N} a_N\,, \end{equation} for some explicit coefficients $c_{j,0},\!\; c_{j,1},\!\; \ldots,\!\; c_{j,N} \in \{0,1\}$, and where addition is taken modulo $2$ (\emph{i.e.},~each function $f_j(a)$ computes the parity of $c_{j,0}$ and some given subset of its arguments). These arguments consist of some number of formal indeterminates $a_1, \ldots, a_N$, which we may interpret as variables which may take Boolean values but where those values are as yet undetermined. We require that, together with the vector $\mathbf e_0 = {[1\:\:0\:\:\cdots\:\:0]\trans}$, the vectors $\{{\mathbf c}_1, {\mathbf c}_2, \ldots, {\mathbf c}_n\} \subseteq \mathbb Z_2^{N+1}$ for ${{\mathbf c}_j \!\!\;= [c_{j,0}\:\:c_{j,1}\:\:\cdots\:\:c_{j,N}]\trans}$ span a set of $2^{N+1}$ vectors. In particular, each indeterminate $a_h$ must occur in \emph{some} qubit formula $f_j(a)$. The state also has an associated phase formula $\phi(a)$ of the form \begin{equation} \phi(a) \,=\, p_{0} + p_{1} a_1 + \cdots + p_{N} a_N\,. \end{equation} Given such a phase formula $\phi$ and node-formulas $f_1, f_2, \ldots, f_n$ for a network $G$ of $n$ nodes, the global state of the system is given by \begin{equation} \label{eqn:parityFormulaExpan} \frac{1}{\sqrt{2^N}} \! \sum_{x \in \{0,1\}^N} \!\! (-1)^{\phi(x)}\, \ket{f_1(x)} \otimes \ket{f_2(x)} \otimes \cdots \otimes \ket{f_n(x)} \end{equation}~\\[-2ex] where $x = (x_1, x_2, \ldots, x_N)$. That is: the phase formula $\phi(a)$ and node-formulae $f_j(a)$ stand for an explicit superposition over the standard basis, ranging over all possible substitutions of Boolean strings $x \in \{0,1\}^N$ for the indeterminates $a_1, \ldots, a_N$, and where in particular $\phi(a)$ determines the relative phases. \begin{defn} A \emph{parity formula state} is an $n$-qubit state for $n \ge 1$ as expressed in \eqref{eqn:parityFormulaExpan}, where $\phi$ and $f_j$ for $1 \le j \le n$ are (not necessarily homogeneous) linear functions of $N \ge 0$ indeterminates, and where the functions $f_j(a)$ together with the constant function $e_0(a) = 1$ span a set of $2^{N+1}$ functions. \end{defn} It will be convenient to consider a representation of parity formula states in terms of an $(N+1) \times (n+1)$ matrix $C$ and a separate column vector $\mathbf p$, where $\mathbf p = {[p_0\:\:p_1\:\:\cdots\:\:p_N]\trans}$, and where the columns of $C$ (indexed from $0$ to $n$) consist of the vector $\mathbf e_0$ and the columns $\mathbf c_1, \ldots , \mathbf c_{n}$. \begin{defn} A parity function matrix $C$ for an $n$-qubit state is an $(N{+}1)\times(n{+}1)$ matrix for some $N \ge 0$, of the form $C = \bigl[\mathbf e_0\:\:\mathbf c_1\:\:\cdots\:\:\mathbf c_{n}\bigr]$ of rank $N+1$. A parity function tableau is a matrix $T = \bigl[\:\! C\,\big\vert\, \mathbf p \:\!\bigr]$ consisting of a parity function matrix $C$ and a phase vector $\mathbf p$. \end{defn} Two distinct parity function tableaus $T = \bigl[\:\! C\,\big\vert\, \mathbf p \:\!\bigr]$ and $T' = \bigl[\:\! C'\,\big\vert\, \mathbf p' \:\!\bigr]$ may represent the same state, if $T' = Q T$ for some $(N{+}1) \times (N{+}1)$ invertible matrix $Q$. 
Such a transformation $Q$ represents a change of variables, in the summation expression of the state as described in \eqref{eqn:parityFormulaExpan}, leaving the overall sum invariant. Note that such a matrix must satisfy $Q \mathbf e_0 = \mathbf e_0$: this corresponds to the fact that no change of variables can affect the value of constants. Conversely, any invertible $(N{+}1) \times (N{+}1)$ matrix $Q$ which preserves the vector $\mathbf e_0 \in \mathbb Z_2^{N+1}$\!, may be used to transform a parity function tableau $T$ to an equivalent tableau (representing the same state) by left-multiplication. In our application to QLNC circuits for a given qubit interaction network $G$, we may use an alternative representation, in which we write the qubit functions $f_j(a)$ next to the nodes corresponding to each qubit $j$ in the diagram of $G$. For instance, the state illustrated in Fig.~\ref{f4} is the state $\ket{+}_1 \ket{+}_3 \ket{\mathrm{GHZ}_4}_{2,4,5,6}$ (with a phase function of zero). This will prove practical when the objective is to demonstrate the effect of operations within a particular network $G$. \subsubsection{QLNC operations on parity formula states} We now consider how each of the transformations which are admitted in QLNC circuits may be simulated through transformations of parity function tableaus. \paragraph{Simulating unitary gates.} The effects of the unitary transformations CNOT, $X$, and $Z$ on parity formula states are easy to describe as transformations of their representations, by simply reasoning about the representation of the state as a superposition over the standard basis: \begin{enumerate}[label=(\textit{\roman*})] \item The effect of an $X$ operation on qubit $k$ is to update $f_k(a) \gets 1 + f_k(a)$; \item The effect of a $Z$ operation on qubit $k$ is to update $\phi(a) \gets \phi(a) + f_k(a)$; \item The effect of a CNOT operation with control $k$ and target $\ell$, is to update $f_\ell(a) \gets f_\ell(a) + f_k(a)$. \end{enumerate} It is easy to verify that these transformations correspond to elementary column transformations of the parity function tableau ${\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$. Specifically --- indexing the columns of $C$ from $0$ --- these operations may be realised respectively by adding the zeroth column of $C$ to the $k\textsuperscript{th}$ column, adding the $k\textsuperscript{th}$ column of $C$ to $\mathbf p$, and adding the $k\textsuperscript{th}$ column of $C$ to the $\ell\textsuperscript{th}$ column. Note that these operations all preserve the rank of $C$. \paragraph{Simulating projective measurements.} The way in which we may represent measurements by transformations of a parity formula tableau is somewhat more complex, due to the possibility of state collapse. To simplify the description of an $X$ or $Z$ observable measurement on a qubit $k$, we first perform a change of variables --- specifically, by putting the block matrix $T = {\bigl[\:\! C \:\!\big\vert\:\! \mathbf p \:\!\bigr]}$ in a reduced row-echelon form in which either column $k$ is a pivot column, or column $k$ is co-linear with $\mathbf e_0$ (so that $f_k(a)$ is a constant). Suppose (without loss of generality) that ${\bigl[\:\! C \:\!\big\vert\:\! \mathbf p\:\!\bigr]}$ is already in such a reduced row-echelon form, in which case either $f_k(a) = c_{k,0}$ is a constant function, or $f_k(a) = a_g$ for a single indeterminate indexed by $1 \le g \le N$; in the latter case, exactly one row of $C$ contains a $1$ in the $k\textsuperscript{th}$ column. 
Having put the parity function tableau into reduced row-echelon form of this kind, we may then describe an $X$ or $Z$ observable measurement on qubit $k$, as follows. \begin{enumerate}[label=(\textit{\roman*}), start=4] \item For an $X$ measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:XmeasProd} If $f_k(a) = a_g$ for an indeterminate which does not occur in any other qubit formula $f_j(a)$ --- \emph{i.e.},~if there is a single $1$ in the $g\textsuperscript{th}$ row of $C$ --- then the state is unchanged by the measurement, and the measurement outcome is $s = (-1)^{p_g}$. \item \label{item:XmeasEnt} Otherwise, let $a_{N\!\!\:{+}\!\!\:1}$ be a new indeterminate (represented in $C$ by adding a new row at the bottom), and choose a measurement outcome $s = \pm 1$ uniformly at random. If $f_k(a)$ is constant prior to the measurement, then let $\Delta$ be the $(N{+}2)$-dimensional vector with a single $1$ in the final row; otherwise, let $\Delta$ be the $(N{+}2)$-dimensional column-vector with exactly two $1$s, in rows $g$ and $N{+}1$ (counting from~$0$). We then add $\Delta$ to the $k\textsuperscript{th}$ column of $C$, and (in the case that $s = -1$) to $\mathbf p$ as well. \end{enumerate} \end{enumerate} \begin{itemize} \item \textbf{Analysis of the state transformation.~~} In case~\ref{item:XmeasProd}, the state of qubit $k$ can be factored out of the sum, so that the state is either $\ket{+}$ (if $\phi$ lacks any $a_g$ term) or $\ket{-}$ (if $\phi$ contains an $a_g$ term), so that the measurement does not affect the state and the outcome is predetermined. Otherwise, in case~\ref{item:XmeasEnt}, qubit $k$ is maximally entangled with the rest of the system: the state has a Schmidt decomposition $\tfrac{1}{\sqrt 2} \ket{0}_k \ket{A_0} + \tfrac{1}{\sqrt 2} \ket{1}_k \ket{A_1}$, where $\ket{A_b}$ in each case is the state-vector on the qubits apart from $k$ in the case that $a_g := b$ (possibly including a phase factor that depends on $a_g$). It follows that the outcome of the $X$ observable measurement is uniformly random, and that the state $\ket{A_\pm}$ of all of the other qubits will be in tensor product with $k$ after measurement. A straightforward calculation shows that $\ket{A_+} = \tfrac{1}{\sqrt 2} \sum_{x_g} \ket{\smash{A_{x_g}}}$ and $\ket{A_-} = \tfrac{1}{\sqrt 2} \sum_{x_g} (-1)^{x_g} \ket{\smash{A_{x_g}}}$; these are the states described by simply omitting the $k\textsuperscript{th}$ column of $C$, and (in the case of $\ket{A_-}$) adding an extra $a_g$ term to the phase function. To represent the post-measurement state, it suffices to introduce a new indeterminate $a_{N+1}$ to represent the independent superposition on qubit $k$; for the post-measurement state $\ket{-}_k$, we also must add $a_{N+1}$ to the phase function. \item \textbf{On the rank of the resulting parity function matrix.~~} Note, above, that in case~\ref{item:XmeasProd} there is no change in the tableau, and thus no change in the rank of $C$. In case~\ref{item:XmeasEnt}, we must consider two sub-cases: one where $f_k(a) = c_{k,0}$ before the measurement, and one where $f_k(a) = a_{g}$ before the measurement. In either case, we add one row, in which the only non-zero entry is in column $k$. In the former case, we add one row and add a coefficient $1$ in column $k$ in that bottom row. This increases both the number of rows and the rank. 
In the latter case, we consider the operations performed at column $k$ in two steps: first setting the coefficient at row $g$ to zero, then setting the coefficient in the new row $N+1$ to one. Setting the coefficient at row $g$ to zero does not decrease the rank, even though column $k$ can then no longer be a pivot column: prior to the first step, the $k\textsuperscript{th}$ column is a pivot column, but we may alternatively select any other column in which the $g\textsuperscript{th}$ row is set to $1$ (such a column exists, as in this case $a_g$ occurs in some other qubit formula), as (by construction) these columns do not contain a pivot position for any other row. Thus, setting the $g\textsuperscript{th}$ coefficient of the $k\textsuperscript{th}$ column to zero does not decrease the rank; and again, adding a row in which only the $k\textsuperscript{th}$ column has a $1$ increases both the rank and the number of rows. Thus, this operation maintains the property of $C$ having a rank equal to the number of its rows. \end{itemize} \begin{enumerate}[label=(\textit{\roman*}), start=5] \item For a $Z$ measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:ZmeasProd} If $f_k(a) = c$ is a constant function, then the measurement leaves the state unchanged, and the measurement outcome is $(-1)^c$. \item \label{item:ZmeasEnt} Otherwise, we select a measurement outcome $s = (-1)^b$ for a bit $b \in \{0,1\}$ chosen uniformly at random. Let $\Delta = b \mathbf e_0 + \mathbf c_k$. Add $\Delta$ to all columns of $T = {\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$ which contain a $1$ in the $g\textsuperscript{th}$ row (including the $k\textsuperscript{th}$ column itself), and remove the row $g$ entirely from the tableau. \end{enumerate} \end{enumerate} \begin{itemize} \item \textbf{Analysis of the state transformation.~~} In case~\ref{item:ZmeasProd}, it is obvious that qubit $k$ is in a fixed state: the outcome will be $+1$ if it is in the state $\ket{0}$, and $-1$ if it is in the state $\ket{1}$. Otherwise, in case~\ref{item:ZmeasEnt}, the state of the system can again be described as a superposition $\tfrac{1}{\sqrt 2} \ket{0}_k \ket{A_0} + \tfrac{1}{\sqrt 2} \ket{1}_k \ket{A_1}$, albeit where it is possible in principle that $\ket{A_0} = \pm \ket{A_1}$. We may simulate the assignment of the $k\textsuperscript{th}$ qubit to $b$ by quotienting out all of the functions $f_j(a)$ and the phase function $\phi(a)$ by the relation $a_g + b = 0$. We may do this in effect by adding the column vector $\Delta$ defined above to all columns with a non-zero coefficient in the row $g$, thereby obtaining a tableau in which the $g\textsuperscript{th}$ row is empty. This corresponds to a state in which the variable $a_g$ no longer plays any role; together with the updated normalisation after measurement, we may represent this by removing row $g$. \item \textbf{On the rank of the resulting parity function matrix.~~} Note, above, that in case~\ref{item:ZmeasProd} there is no change in the tableau, and thus no change in the rank of $C$. In case~\ref{item:ZmeasEnt}, we may without loss of generality suppose that the $k\textsuperscript{th}$ column is the last column to which we add $\Delta$. In each case, the vector is added to a non-pivot column, in which case this cannot decrease the rank; nor will it increase the rank, as it only sets coefficients to $0$ in rows which already have pivot positions. These column additions preserve the property of being a reduced row-echelon form. 
The final addition of $\Delta$ does decrease the rank by $1$, as it turns the $g\textsuperscript{th}$ row from a non-zero row-vector (in a reduced echelon form) to a zero row. Thus the rank of the parity function matrix $C$ decreases by $1$; as removing row $g$ from the tableau reduces the number of rows by $1$, this operation maintains the property of $C$ having a rank equal to the number of its rows. \end{itemize} From the above, we see that we may represent QLNC operations by simple transformations of a parity function tableau $T = {\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$, which in particular preserve the invariant that the rank of the parity function matrix $C$ is equal to the number of its rows. \paragraph{Simulating destructive measurements and qubit preparations.} One might reasonably wish to regard some measurements as being \emph{destructive}, \emph{i.e.},~as not leaving any post-measurement state. We may simulate this by simply removing from $C$ the column corresponding to the destructively measured qubit, and removing from the entire tableau any row for which (after the column removal) the matrix $C$ is entirely zero. Conversely, one may simulate the preparation of a fresh qubit in a standard basis state $\ket{b}$ for $b \in \{0,1\}$, by inserting a new column into $C$ with the value $b \mathbf e_0$. To instead simulate the introduction of a fresh qubit in the state $\tfrac{1}{\sqrt 2} \sum_{x'} (-1)^{bx'} \ket{x'}$ for $b \in \{0,1\}$, one may insert a new row into the tableau (at the bottom of both $C$ and $\mathbf p$) which is entirely zero, set the new coefficient of $\mathbf p$ in this row to $b$, and then insert a new column into $C$ which has only a single $1$ in the final row. \vspace*{-1ex} \paragraph{Terminology.} For the sake of definiteness, ``the QLNC formalism'' will refer below to the techniques described above to describe transformations of parity function tableaus (or some equivalent representation), as a means to simulate stabiliser circuits composed of this limited set of operations. \subsubsection{Depicting and simulating QLNC circuits} \label{sim} Having defined the QLNC formalism, we now demonstrate how it may be used to simulate QLNC circuits. In this context, we will prefer to represent the parity function states diagrammatically rather than as a matrix --- and to represent it together with a visual representation of the transformations to be performed. \begin{enumerate} \item Each vertex is a qubit $j$, with an associated formula $f_j(a)$, for some symbols $a_1, \ldots, a_N$. The initial formulae for the qubits are generally very simple: each qubit prepared in the $\ket{+}$ state is assigned the formula $f_j(a) = a_j$ for a unique formal symbol $a_j$, and each qubit initialised in the $\ket{0}$ state is assigned the formula $f_j(a) = 0$. \item Pauli $X$ gates on a qubit $k$, which are classically conditioned by the outcome of a $Z$-observable measurement of a different qubit $j$, are represented as doubled lines with an orientation from $j$ to $k$. Coherently controlled CNOT gates are drawn along edges of the network $G$. \item One or more qubits may also be simultaneously ``terminated'', in which case they are measured with the $X$ observable. The outcome may then be used to control Pauli $Z$ operations to cancel out the relative phases which arise as a result of any $-1$ measurement outcomes. \item There is a time-ordering in which the operations represented by the edges are performed. 
In simple QLNC circuits, this is represented by a single integer at each edge, and an integer inside each node to be terminated. (Two edges which meet at a common vertex, and which are not both classically controlled $X$ gates, must be performed at different times, and thus must be assigned different numbers. Also, no edge can have the same number as the termination of a node to which it is incident. Otherwise, there are no constraints.) More generally, it will be reasonable to consider QLNC circuits in which edges are used some constant number of times, \emph{e.g.}~up to two times; we would then label edges by a list (or set) of those times at which they are used, and the times of operations involving a common vertex must be disjoint (again, unless those operations are all classically controlled $X$ gates). \end{enumerate} \paragraph{Remarks on termination.} It may not be immediately obvious that the claim made about termination is true --- that any relative phases induced by obtaining measurement outcomes of $-1$ from $X$ observable measurements can be ``undone'', leaving a state which is a uniform superposition over some set of standard basis states (\emph{i.e.},~with no relative phases at all). In the case of a QLNC circuit which (successfully) simulates a classical linear network code, this may be more plausible to the reader. In fact, we make a stronger claim: \begin{lemma} For any state $\ket{\psi}$ given by a parity function tableau $T = {\bigl[\:\! C \:\!\big\vert\!\: \mathbf p \!\:\bigr]}$ on $n$ qubits, it is possible (in time dominated by Gaussian elimination on $T$) to find a subset $S \subset \{1,\ldots,n\}$ of qubits, such that $\ket{\psi'} = Z^{\otimes S} \ket{\psi}$ has a parity function tableau $T' = {\bigl[\:\! C \:\!\big\vert\!\: \mathbf 0 \!\:\bigr]}$. \end{lemma} \noindent We prove this result here, to demonstrate that ``termination'' is a well-defined operation in principle. \begin{proof} Let $Q$ be an invertible linear transformation for which $Q T = \bigl[\mathbf e_0 \:\: \tilde{\mathbf c}_1 \:\: \tilde{\mathbf c}_2 \:\: \cdots \:\: \tilde{\mathbf c}_n \:\: \tilde{\mathbf p}\bigr]$ is in reduced row-echelon form, and let $\tilde f_j$ be the qubit function corresponding to column $\tilde{\mathbf c}_j$. Then, for every formal indeterminate $a_g$, there is a qubit $k_g \in \{1,\ldots,n\}$ for which $\tilde f_{k_g} = a_g$. Let $J$ be the set of rows $j \ge 1$ for which $\tilde p_j = 1$, and let $S = \{ k_j \mid j \in J \}$. Then the effect of $Z^{\otimes S}$ is to map $\tilde p_j \mapsto 0$ for each $j \ge 1$; any remaining constant coefficient $\tilde p_0$ contributes only an unobservable global phase, and may be set to zero without changing the state, so that in effect $\tilde{\mathbf p} \mapsto \mathbf 0$. This may be represented by a transformation $R$ for which $QTR = \bigl[\mathbf e_0 \:\: \tilde{\mathbf c}_1 \:\: \tilde{\mathbf c}_2 \:\: \cdots \:\: \tilde{\mathbf c}_n \:\: {\mathbf 0}\bigr]$, which is a parity function tableau for a state without relative phases over the standard basis. (Indeed, it follows that the final column of $TR$ is also $\mathbf 0$, so that simulating $Z^{\otimes S}$ on the original tableau removes all relative phases without committing to the change of variables described by $Q$.) \end{proof} As a corollary, it follows that for a parity formula state, we can induce whatever relative phase we like, of the form $(-1)^{\phi(x)}$ for any linear function $\phi$ of the indeterminates. We may use this to justify the notion of ``terminating'' one qubit independently of any others, and ``undoing'' any change to the phase function which occurs as a result of obtaining a $-1$ outcome. 
The specific choice of qubits on which to perform the $Z$ operations may not be unique, but it suffices for our purposes that such a set can always be found efficiently. The way one might use the QLNC formalism to simulate a particular QLNC circuit is illustrated in Fig.~\ref{f4}. This example distributes two Bell states across a rectangular grid, by simulating the classical Butterfly network protocol with some ``out-of-order'' evaluations. To compensate for the out-of-order evaluation, classically controlled $X$ operations are required upon the measurement of one of the qubits: this is in effect a coding operation using a link outside of $G$, relying on the fact that classical information can be communicated more freely than quantum information can under our architectural assumptions. \begin{figure}[!t] \centering \includegraphics[width=0.88\linewidth]{fig9.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Example of the out-of-order Butterfly: (a) the order of edges, slightly different from, but with a quantum circuit equivalent to, that given in Fig.~\ref{f1}; (b) the initial labelling of the qubits; (c) the labels after edges ``1''; (d) the labels after edges ``2''; (e) the labels after edges ``3''; (f) the labels after the classical control (edges ``4'') and the terminations (the fifth layer of operations, denoted by the nodes labelled ``5'').}} \label{f4} \end{figure} \subsection{Using the QLNC formalism to design entanglement distribution circuits} \label{example} As already noted, the purpose of developing the QLNC formalism is to enable the use of classical linear network codes as a basis to design entanglement distribution quantum circuits. We begin by noting that there are situations in which QLNC circuits can distribute entanglement which do not correspond to linear network codes. \subsubsection{Shallow QLNC circuits for entanglement distribution} The classical-quantum linear network code result suggests a number of ways in which QLNC circuits can be used to distribute entanglement. In this section we detail one such application, prompted by our desire to use classical linear network coding results and intuitions to distribute entanglement \textit{efficiently} (i.e., with as few layers of quantum operations as possible). We consider the following scenario: Let there be a classical binary linear network code over a network, connecting $k$ transmitter-receiver pairs; and let that network consist of three types of nodes: \begin{itemize} \item Transmitter nodes --- for which each incident edge is outbound (i.e., a directed edge with direction away from the transmitter), and the transmitter broadcasts its bitstream on all incident edges. \item Relay nodes --- that have an arbitrary number of input and output edges, and whose operation is to broadcast the modulo-2 sum of all incoming bitstreams on all of the output edges. \item Receiver nodes --- for which each incident edge is inbound, and whose operation is to perform the modulo-2 sum of all of the incoming bitstreams, which yields the desired bitstream (i.e., that transmitted by the corresponding paired transmitter). \end{itemize} With the three types of nodes (graph vertices) defined in this way, we can prove an important result about the required depth of layers of quantum operations, when the qubit interaction graph is again $G=\{V,E\}$.
\begin{thm} \label{thm:constdepth} If a multiple multicast (including multiple unicast as a special case) classical binary linear network code exists over a network $G$, from a set of transmitter nodes $T= \{t_1 , \cdots , t_N\}$ with in-degree $0$ to a corresponding set of receiver nodes $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ with out-degree $0$, then there is a QLNC circuit whose \textup{CNOT} operations are located along the edges of $G$ and which distributes $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states between corresponding transmitter/receiver node sets. Moreover, this circuit has depth at most ${2 (\chi{-}1) (\delta{+}2) + 2}$ time-steps, where $\delta$ is the largest in/out degree of any vertex in $G$, and $\chi$ is the chromatic number of $G$. \end{thm} \begin{remark} It is in general NP-complete to compute the vertex-chromatic number $\chi$ of a network. However, in many realistic cases it will be easy to compute $\chi$. For instance, bipartite networks (such as tilings by squares or hexagons, or connected subgraphs of these) have $\chi = 2$ by definition. In any more complex network $G$, we may alternatively substitute $\chi$ with the number of colours of any proper vertex-colouring that one may find. For instance, in planar architectures (\emph{i.e.},~in which $G$ is a planar graph) we will have $\chi \le 4$ by the Four Colour Theorem~\cite{fourcolour}, and a four-colouring can be found in polynomial time~\cite{fourcolourp}; and every graph has a $\deg(G)+1$ vertex-colouring which can be found in polynomial time~\cite{greedy}. \end{remark} \begin{remark} The role of $\delta$ in Theorem~\ref{thm:constdepth} is in relation to the number of colours of an edge-colouring $\gamma$ of $G$, such that no two edges with the same colour label leave a common vertex or enter a common vertex. (We call such a colouring a ``proper directed-edge colouring''.) If we transform $G$ into a graph $\tilde G$, in which each vertex $q$ is replaced with a vertex $q_i$ (inheriting only the in-bound edges of $q$) and a vertex $q_o$ (inheriting only the out-bound edges of $q$), then $\delta$ is the maximum degree of $\tilde G$, and $\gamma$ corresponds to a proper edge-colouring of $\tilde G$. By Vizing's theorem \cite{Vizing1964}, the edge-chromatic number of $\tilde G$ is at most $\delta + 1$, and an edge-colouring with $\delta + 1$ colours can be found efficiently. (An edge-colouring of $\tilde G$ must have at least $\delta$ colours; and it may be easy to find an edge-colouring of this kind, \emph{e.g.},~if $G$ arises from a lattice in the plane. If one may find such a colouring, the bound above improves to ${2 (\chi{-}1) (\delta{+}1) + 2}$. For the square lattice, with $\chi = 2$ and with $\delta=3$ if no vertex has four in-edges, this yields the bound of $14$ described on page~\pageref{discn:introSquareLatticeBound}.) \end{remark} \begin{proof} Let $c: (V \cup E) \to \mathbb N$ be a colouring of the nodes and edges, such that $c$ provides a proper colouring $1, 2, \ldots, A$ to the nodes of $G$, and also a proper directed-edge colouring $1, 2, \ldots, B \le \delta+1$ to the edges of $G$. Consider the following procedure: \begin{enumerate} \item Initialise all of the qubits: each qubit $q$ is initialised in the state $\ket{0}$ if it has no outgoing edges, or if it has some neighbour $p$ along an in-bound edge for which $c(p) < c(q)$; it is initialised in the state $\ket{+}$ otherwise.
(In the QLNC formalism, we associate a formal indeterminate $a_q$ with each qubit $q$ initialised in the $\ket{+}$ state.) \item For each non-receiver node $q$ with $c(q) = 1$ in parallel, perform the following procedure: \begin{itemize} \item For each $1 \le j \le B$, perform a CNOT operation on any edge $e$ with $c(e) = j$ leaving $q$. (In the QLNC formalism, this adds $a_q$ to the formula $f_v(a)$ for the node $v$ at the end of $e$.) \end{itemize} \item For each $2 \le h \le A \!-\! 1$, perform the following operations in parallel on non-receiver nodes $q$ with $c(q) = h$: \begin{enumerate}[label=\alph*.] \item For each $1 \le j \le B$, perform a CNOT operation on any edge $e$ with $c(e) = j$ leaving $q$. \item If $f_q(a) \ne a_q$ (\emph{i.e.},~$q$ was a target of some CNOT or Pauli $X$ operation before this round): \begin{enumerate}[label=(\textit{\roman*})] \item Terminate the qubit $q$, by performing an $X$ observable measurement. \item If the outcome of the preceding measurement is $-1$, perform $Z$ operations on an appropriate set of qubits, and a $Z$ operation on $q$ to transform the post-measurement state of $q$ from $\ket{-}$ to $\ket{+}$. (If any qubit $v$ has been selected to be subject to a $Z$ operation by multiple qubits $q$ with $c(q) = h$, we perform $Z$ on $v$ if and only if the number of such qubits $q$ is odd.) \item If $q$ has any neighbours $p$ by in-bound edges, such that $c(p) < c(q)$, then for each $1 \le j \le B$, perform any CNOT operations on edges $e$ with $c(e) = j$, which are outgoing from $q$. (In the QLNC formalism, this adds $a_q$ to the node-formula $f_v(a)$ for the node $v$ at the end of $e$.) \end{enumerate} \end{enumerate} \item For each non-receiver node $q$ with $c(q) = A$ in parallel, perform the following procedure: \begin{itemize} \item For each $1 \le j \le B$, perform a CNOT operation on any edge $e$ with $c(e) = j$ leaving $q$. \end{itemize} \item For all relay qubits $q$: if $c(q) < A$, perform a $Z$ observable measurement on $q$; otherwise terminate $q$ (\emph{i.e.},~measure $q$ with the $X$ observable and perform appropriate $Z$ corrections). \item Perform classically controlled Pauli-$X$ gates on all out-edges according to the outcomes of the $Z$ observable measurements. (If any qubit $v$ has been selected to be subject to an $X$ operation by multiple relay qubits, we perform $X$ on $v$ if and only if the number of such qubits $q$ is odd.) \end{enumerate} The operations of Step~1 have depth $1$. Both steps~2 and~4 have depth at most $B$. Step~3 is a loop with $A {\!\:-\!\:} 2$ iterations, in which part~(a) has depth at most $B$, and part~(b) has depth at most $B {\!\:+\!\:} 2$. Step~5 has depth at most $2$, and step~6 has depth $1$. Together, the depth is then $1 + B + (A\!\:{-}\!\:2)(2B\!\:{+}\!\:2) + B + 2 + 1 = 2(A\!\:{-}\!\:1)(B\!\:{+}\!\:1) + 2 \le 2(\chi\!\:{-}\!\:1)(\delta\!\:{+}\!\:2) + 2$. Fig.~\ref{appf1} shows a sketch of why this procedure works. In effect, we wish for ``information'' (more precisely: correlation of values of a qubit in the standard basis, when taken in superposition) to be transmitted through each relay node, from each of its sources (with multiplicity taken modulo 2) to the qubits on each of its outward links. Some of this information may accumulate at a given relay node $q$ before round $c(q)$, in which case it is explicitly passed on through a CNOT. The rest accumulates at $q$ after round $c(q)$, and also after the node $q$ has communicated a formal indeterminate $a_q$ on each of its outgoing links.
If we may collapse the state in such a way as to assign to $a_q$ the modulo~2 sum of the remaining signals from its incoming links (accumulated after round $c(q)$), this collapse will complete the transmission of the information from the inbound links of $q$ through to the outbound links. \begin{figure}[!t] \centering \includegraphics[width=0.73\linewidth]{newfig.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Example of a relay node, whose operation in the linear network code is to forward $a \oplus b \oplus c \oplus d$ to all three outgoing edges. If the vertex colouring is such that the turn of this vertex is after incoming symbols $a$ and $b$ have arrived, but before $c$ and $d$ have, then the procedure continues as follows: in (a) $a \oplus b$ (i.e., the current label of the vertex) is forwarded to all outgoing edges; (b) the qubit is terminated and set to zero; (c) the qubit is set to the $\ket{+}$ state, and given the new label $\gamma$, which is then forwarded to all of the outgoing edges, so that over the two rounds of forwarding, $a \oplus b \oplus \gamma$ has been forwarded; the qubit then waits for the remainder of the process to complete, after which all edges will have been performed, so its label will now be $c \oplus d \oplus \gamma$, which can then be measured and corrected such that $c\oplus d = \gamma$, which then means that $a \oplus b \oplus c \oplus d$ has been forwarded as required.}} \label{appf1} \end{figure} More formally, consider the node formulae which result from this procedure. \begin{itemize} \item For each relay node $q$, let $Z_p(a)$ denote the boolean formula which is transmitted to it on an incoming link from a node $p$ for which $c(p) < c(q)$. We will then have $Z_p(a) = a_p \oplus E_p(a)$, where $E_p(a)$ is the modulo~2 sum of the corresponding functions $Z_r(a)$ for nodes $r$ with edges towards $p$ such that $c(r) < c(p)$. \item The formula which is stored at qubit $p$ just prior to its measurement in Step~5 is the formula $Y_p(a) = a_p \oplus L_p(a)$, where $L_p(a)$ is the modulo~2 sum of $Y_r(a)$ for nodes $r$ with links inwards to $p$ such that $c(r) > c(p)$. \end{itemize} If in Step~5 we measure qubit $p$ and collapse it to the state $\ket{0}$, we in effect condition on the situation that $a_p = L_p(a)$. (In the event that we obtain $\ket{1}$, we perform corrections which allow us to simulate having obtained $\ket{0}$ instead.) This produces an acyclic graph of formula substitutions, from the node-formulae of the transmitters to the node-formulae of the receivers. By induction on the substitution depth (\emph{i.e.},~the distance of relay nodes from any receiver node), we may show that performing the necessary substitutions in the formula for $Z_p(a)$ yields the information which, in the classical linear protocol, $p$ would transmit on its outgoing links. It follows that the parity function computed at each receiver node is the function $a_t$ (for its corresponding transmitter node $t$) that is computed in the classical linear network code. \end{proof} In the protocol above, each relay is measured twice (\emph{i.e.},~for the termination, and then at the end to resolve the extra formal label introduced). For this reason, it is necessary to strictly separate transmitters, receivers and relays. However, this setting is not too restrictive, and corresponds to examples of classical linear network codes such as we see in Figs.~\ref{f01} and~\ref{f01a}.
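To illustrate how the quantities $A$ and $B$ appearing in the depth bound of Theorem~\ref{thm:constdepth} might be obtained in practice, the following is a minimal sketch (pure Python; the helper names and the example graph are ours). Greedy colourings need not achieve $\chi$ or $\delta+1$ exactly, but any proper colourings suffice for the bound $2(A{-}1)(B{+}1)+2$ derived in the proof:
\begin{verbatim}
def greedy_vertex_colouring(vertices, neighbours):
    # neighbours: dict mapping each vertex to its (undirected) neighbours.
    colour = {}
    for v in vertices:
        used = {colour[u] for u in neighbours[v] if u in colour}
        colour[v] = min(c for c in range(len(vertices)) if c not in used)
    return colour

def greedy_edge_colouring(edges):
    # Proper directed-edge colouring: edges sharing a tail or sharing a
    # head receive distinct colours (only leaving/entering conflicts).
    colour = {}
    for (u, v) in edges:
        used = {colour[e] for e in colour if e[0] == u or e[1] == v}
        colour[(u, v)] = min(c for c in range(len(edges)) if c not in used)
    return colour

# Example: an oriented 4-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
A = 1 + max(greedy_vertex_colouring(range(4), neighbours).values())
B = 1 + max(greedy_edge_colouring(edges).values())
print(2 * (A - 1) * (B + 1) + 2)   # an upper bound on the circuit depth
\end{verbatim}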
Note that while Steps 2, 3a, 3b\textit{iii}, and 4 of our protocol iterate through all edge colours $1 \le j \le B$, the only edge-colours that contribute to the depth are those associated to edges which leave some \emph{vertex} of the colour $1 \le h \le A$ being considered in the given step. Thus the bound above will often be loose, and in fact it may be possible to find better realisations using a suitably tuned directed edge-colouring of $G$. However, our result holds regardless of which edge-colouring one uses, so long as it uses at most $\delta+1$ edge-colours; such a colouring may easily be found~\cite{Vizing1964}. \subsubsection{Example of QLNC solution involving entanglement swapping, for which no classical linear network coding solution exists} \label{eswap} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{fig81.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{An example of a composite network with a QLNC circuit but no (classical) linear network code. Note that this composite network corresponds to three connected copies of the network in Fig.~\ref{f025}; here we draw the part of the graph in Fig.~\ref{f025}(b) as the two bottom-most nodes of each component.}} \label{appf2} \end{figure} When using linear network codes to design QLNC circuits, in which we distribute entangled pairs, we are free to assign which half of each desired Bell state corresponds to the transmitter in the linear network code and which half represents the receiver. However, while we have the freedom to decide which is the transmitter and which is the receiver, it may be the case that deciding the transmitter/receiver assignment for one Bell state fixes that for the rest. For example, if we consider the QLNC circuit corresponding to the linear network code shown in Fig.~\ref{f025}, we can see that, even if we allow the links to be bi-directional, we must have one column of transmitters and one column of receivers. That is, we cannot find a linear network code for the case where some of the left-hand nodes are receivers and some are transmitters. This principle allows us to construct composite networks, in which some data must flow through multiple networks such that there is no linear network code. This is the case shown in Fig.~\ref{appf2}(a), composed of three copies of the network shown in Fig.~\ref{f025} with some extra links, in which each pair of letters is to be connected. Even if we are free to assign which of each pair of letters is the transmitter and the receiver, and also the direction of each link, we still cannot find a linear network code. This can be seen by considering the non-circular coloured nodes, which correspond to data which must flow through two of the component networks. Using the linear network code of Fig.~\ref{f025}, we can connect the pairs ``$cc$'' and ``$dd$'', as well as propagating the left-hand occurrences of ``$a$'' and ``$b$'' forward from the left-hand column of vertices to the second from left. The left-hand occurrence of $a$ can now be forwarded via the intermediate blue square node, and the same left-to-right linear network code can be performed on the middle of the three component graphs, and then again forwarding ``$e$'' through the intermediate red hexagonal node to the right-hand of the three component graphs. Once again, we perform the same left-to-right linear network code on the right-hand of the three graphs, which means that we have now connected all of the pairs of letters, with the exception of $b$.
In the case of $b$, each of the two $b$s has been forwarded as if it were at a transmitter, and they are connected by a common receiver --- the top-most node, which is a green diamond with a thick black ring. Obviously, this is not a linear network code, as we have not connected pairs of letters as if one were a transmitter and the other a receiver (i.e., by a continuous data-flow from transmitter to receiver); however, we \textit{can} find a QLNC circuit, as routing each letter towards a common receiver (the black-ringed green diamond node) can yield the desired Bell state by entanglement swapping at the black-ringed node, as shown in Fig.~\ref{appf2}(b). A similar argument can be made for there not being a linear network code even if the component linear network codes are run right-to-left, in which case the black-ringed node would look like a single transmitter forwarding data to two receivers (the nodes marked $b$). This situation can also be implemented as a QLNC circuit (\emph{i.e.},~if the black-ringed node is terminated at the end), and likewise does not correspond to any linear network code with the transmitter-receiver pairs as designated by the symbols in Fig.~\ref{appf2}. \subsubsection{Classical-quantum linear network codes} In Section~\ref{outoforder} we saw how the network codes in QLNC circuits can be performed ``out of order'' in some cases, and in Section~\ref{eswap} we gave an example of the use of entanglement swapping to implement a linear network code as if two transmitters were routing towards a common receiver. These are two instances of a general principle which we notice: \emph{together, the} CNOT \emph{operations and classical control must form a linear network code}. That is, we consider the following situation: \begin{enumerate} \item We have $n$ qubits connected in a network $G = \{V,E\}$, in which each edge means that a single CNOT (in either direction) is allowed. \item We allow classically-controlled $X$ gates (conditioned on measurement outcomes). It is convenient to consider this possibility of a classical control conditioned on the measurement outcome of a different vertex as a completely connected graph $K_n$ on the same set of vertices (where $n = \lvert V(G) \rvert$). That is, each edge represents the possibility of performing a classically controlled Pauli-$X$ gate. \end{enumerate} These coherently- and classically-controlled operations represent two of four primitives that we allow, the others being: \begin{enumerate} \setcounter{enumi}{2} \item Initialisation of qubits in the $\ket{+}$ or $\ket{0}$ state. \item Termination of the qubits according to the process described in Section~\ref{sim}. \end{enumerate} \begin{thm} \label{mainthm1} Suppose that a multiple multicast (including multiple unicast as a special case) classical binary linear network code exists over some subgraph of the graph $G'= G \cup K_n$, sending a unit rate bitstream, where each edge of the graph is a unit rate bi-directional edge (but not allowing fractional routing). Suppose that this code has a set of transmitting source vertices $T' = \{t_1, \ldots, t_{N'}\}$ for some $N' > 0$, where the first $N < N'$ of these have corresponding receiver sets $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$ (with the remaining transmitters $t_{N{+}1}, \ldots, t_{N'}$ having signals which are not necessarily actually received by any ``receiver'' nodes).
Suppose further that \textup{(a)}~the information transmitted on the edges of $K_n$ from any single node, and \textup{(b)}~the information transmitted by the nodes $t_1$ through $t_N$, are linearly independent of each other. Then by simulating this linear network code by a QLNC circuit, with \textup{CNOT} operations restricted to the same graph $G$ and classically-controlled Pauli operations oriented along the other edges, the resulting protocol generates a product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states, with one such state over each of the sets $\{ t_j, r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$. \end{thm} \begin{proof}[Proof (sketch)] The core of the proof is showing that the QLNC formalism described correctly keeps track of the quantum state, which follows from the formalism description in Section~\ref{main}. We provide an explicit proof of the Theorem in Section~\ref{app1}, which explains why general QLNC circuits of this form achieve the desired result, and also serves to give a detailed walk-through illustrating precisely how the QLNC formalism (including terminations) correctly simulates QLNC circuits in practice. \end{proof} An important special case occurs when the linear network code only requires edges in the graph $G$. \begin{corollary} \label{corr} Suppose that a multiple multicast (including multiple unicast as a special case) classical binary linear network code exists over some subgraph of the graph $G$, sending a unit rate bitstream, where each edge of the graph is a unit rate bi-directional edge (but not allowing fractional routing). Suppose that this code has a set of transmitting source vertices $T' = \{t_1, \ldots, t_{N'}\}$ for some $N' > 0$, where the first $N < N'$ of these have corresponding receiver sets $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$ (with the remaining transmitters $t_{N{+}1}, \ldots, t_{N'}$ having signals which are not necessarily actually received by any ``receiver'' nodes). Then by simulating this linear network code by a QLNC circuit, with \textup{CNOT} operations restricted to the same graph $G$, the resulting protocol generates a product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states, with one such state over each of the sets $\{ t_j, r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$. Moreover, this can be achieved using only three of the primitives: initialisation, \textup{CNOT} and termination. \end{corollary} \begin{proof} This corollary simply selects the QLNC solutions which have no classical control (apart from in the terminations). \end{proof} \section{Generalisation to qudits of prime dimension} \label{qubitsection} Classical network coding is not restricted to information streams consisting of individual bits. Indeed, it is common in the literature to consider signals consisting of elements from finite fields in general, including the fields $\mathbb Z_d$ for $d$ a prime~\cite{LNCnew1,LNCnew2}. As most proposed quantum hardware platforms involve operations on qubits, our work has focused mainly on linear network codes over $\mathbb Z_2$. However, our techniques work equally well over qudits of any prime dimension $d$, using generalisations of Clifford group operations. \subsection{Generalising the QLNC formalism} On a Hilbert space of dimension $d$, label the standard basis states by $\ket{0_d}$, $\ket{1_d}$, \ldots, $\ket{(d{-}1)_d}$.
Let $X_d$ and $Z_d$ be unitary operators satisfying $X_d \ket{a_d} = \ket{(a{+}1 \bmod d)_d}$ and $Z_d \ket{a_d} = \omega^a \ket{a_d}$ for $a \in \{0,1,\ldots,d{-}1\}$, where $\omega = \exp(2\pi i/d)$. These operators generate a generalised Pauli group on qudits of dimension $d$~\cite{GottesmanQudits,Appleby2005}. The set of unitaries which preserves this extended Pauli group under conjugation corresponds to the Clifford group, and the effects of those operators on eigenstates of the $X_d$ and $Z_d$ operators can be simulated using an extension of the stabiliser formalism~\cite{GottesmanQudits,deBeaudrap2013}. We may define an extension of the QLNC formalism to qudits of dimension $d$ by identifying the operations of the generalised Clifford group which correspond to the operations of the qubit QLNC formalism: \begin{itemize} \item Preparation of the states $\ket{0_d}$ and $\ket{+_d} = \tfrac{1}{\sqrt d}\bigl(\ket{0_d} + \ket{1_d} + \cdots + \ket{(d{-}1)_d} \bigr)$; \item Performing (possibly classically controlled) $X_d$ and $Z_d$ operations on any qudit; \item Measuring qudits in the eigenbasis of the $Z_d$ operator or the $X_d$ operator; \item Addition operations $\mathrm{Add}_d$ on pairs of qudits which are connected in the network $G$, whose effect on standard basis states is $\mathrm{Add}_d \ket{x} \ket{y} = \ket{x}\ket{y + x ~(\mathrm{mod}~ d)}$. \end{itemize} We call circuits composed of these operations ``qudit QLNC circuits'' for qudits of dimension $d$. These operations allow one to prepare states which are the analogues of parity function states, which one might call ``linear function states'', and which have the form \begin{equation} \label{eqn:parityFormulaExpana} \frac{1}{\sqrt{d^N}} \! \sum_{x \in \mathbb Z_d^N} \!\! \omega^{\phi(x)}\, \ket{f_1(x)} \otimes \ket{f_2(x)} \otimes \cdots \otimes \ket{f_n(x)} \end{equation}~\\[-2ex] for linear functions $f_k(a)$ and $\phi(a)$, and where $0 \le N \le n$. We may represent $n$-qudit linear function states by $(N{+}1)\times(n{+}2)$ ``linear function tableaus'' $T = {\bigl[\:\! C \:\!\big\vert\!\: \mathbf p \:\!\bigr]}$, which represent the linear function state by specifying the coefficients of the functions $f_k$ and $\phi$ in the same way as in the qubit case. It is easy to show that preparation of the states $\ket{0_d}$ and $\ket{+_d}$ may be represented by the same column/row insertion steps, and the effects of the unitaries $X_d$, $Z_d$, and $\mathrm{Add}_d$ may be simulated in the same way through elementary column operations. (Indeed, one may use the same $\{0,1\}$-matrices in each case, albeit with coefficients modulo $d$.) The procedures to simulate measurements are similar to the case $d = 2$, but must be described without relying (for instance) on $1$ being the only non-zero element. It also becomes more helpful to describe the measurement outcomes as some element $s \in \mathbb Z_d$, representing measurement of an eigenstate with eigenvalue $\omega^s$, either of $X_d$ or of $Z_d$. As before, we put the tableau into reduced row echelon form, making the $k\textsuperscript{th}$ column (counting from 0) a pivot column if possible, where $k$ is the qudit to be measured.
\begin{itemize} \item For an $X_d$-eigenbasis measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:XmeasProd} If $f_k(a) = a_g$ for an indeterminate which does not occur in any other qudit formula $f_j(a)$ --- \emph{i.e.},~if there is a single $1$ in the $g\textsuperscript{th}$ row of $C$ --- then the state is unchanged by the measurement, and the measurement outcome is $s = p_g$. \item \label{item:XmeasEnt} Otherwise, let $z_{N\!\!\:{+}\!\!\:1}$ be a new indeterminate (represented in $C$ by adding a new row at the bottom), and choose a measurement outcome $s \in \mathbb Z_d$ uniformly at random. If $f_k(a)$ is constant prior to measurement, let $\Delta$ be the $(N{+}2)$-dimensional column vector with $1$ in the final row, and zero elsewhere; otherwise, if $f_k(a) = a_g$, let $\Delta$ be the $(N{+}2)$-dimensional column-vector with $-1$ in row $g$ and $1$ in row $N{+}1$ (counting from~$0$), and zero elsewhere. We then add $\Delta$ to the $k\textsuperscript{th}$ column of $C$, and subtract $s\Delta$ from $\mathbf p$. \end{enumerate} \item For a $Z_d$-eigenbasis measurement: \begin{enumerate}[label=({\alph*})] \item \label{item:ZmeasProd} If $f_k(a) = c$ is a constant function, then the measurement leaves the state unchanged, and the measurement outcome is $c$. \item \label{item:ZmeasEnt} Otherwise, we select a measurement outcome $s \in \mathbb Z_d$ uniformly at random. Let $g$ be the pivot row of the $k\textsuperscript{th}$ column after the row reduction, and let $\Delta = s \mathbf e_0 - \mathbf c_k$. For any column $j$ of $T = {\bigl[\:\!C\:\!\big\vert\:\!\mathbf p\:\!\bigr]}$ which contains a non-zero coefficient $r_j \ne 0$ in the $g\textsuperscript{th}$ row (including the $k\textsuperscript{th}$ column itself), add $r_j \Delta$ to column $j$; then remove the row $g$ entirely from the tableau. \end{enumerate} \end{itemize} The analysis for these operations is similar to that of the case $d = 2$. This allows us to simulate qudit QLNC circuits. Finally, note that the property of parity function tableaus, that their ``parity function matrix'' $C$ has full rank, also holds for linear function tableaus for any $d$ prime, as these properties only depend on the fact that these matrices are defined over a field (a property on which we have also relied to consider reduced row echelon forms when simulating measurements). As a result, those results (such as ``termination'' of qudits being well-defined) which rely on such rank properties are also true in the QLNC formalism on qudits of prime dimension. \subsection{Entanglement distribution using the qudit QLNC formalism} For qudits of prime dimension $d$, the natural analogues of Bell states and GHZ states are the states \begin{equation} \begin{aligned} \ket{\Phi_d^+} \,&=\, \frac{1}{\sqrt d} \,\sum_{x = 0}^{d-1} \,\ket{x}\ket{x}; \qquad\qquad \ket{\mathrm{GHZ}_{d,n}} \,=\, \frac{1}{\sqrt d}\, \sum_{x=0}^{d-1}\, \underbrace{\ket{x}\ket{x}\cdots\ket{x}}_{\substack{\text{$n$ tensor} \\[0.25ex] \text{factors}}}\;. \end{aligned} \end{equation} These are evidently linear function states on qudits of dimension $d$. As all of the QLNC formalism for qubits (including the notion of qubit termination) generalises in an appropriate way to qudits --- albeit possibly with a constant factor $d{-}1$ overhead, to realise some power of the $\mathrm{Add}_d$, $Z_d$, or $X_d$ operations --- we obtain the following results: \begin{corollary}[to Theorem~\ref{thm:constdepth}] Let $d$ be prime.
If a multiple multicast (including multiple unicast as a special case) classical $\mathbb Z_d$ linear network code exists over a network $G$, from a set of transmitter nodes $T= \{t_1 , \cdots , t_N\}$ with in-degree $0$ to a corresponding set of receiver nodes $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ with out-degree $0$, then there is a QLNC circuit whose \textup{CNOT} operations are located along the edges of $G$ and which distributes $\ket{\Phi^+_d}$ and $\ket{\mathrm{GHZ}_{d,n_j}}$ states between corresponding transmitter/receiver node sets. Moreover, this circuit has depth at most ${2(d{-}1)(\delta{+}2) (\chi{-}1) + 2}$ time-steps, where $\delta$ is the largest in/out degree of any vertex in $G$, and $\chi$ is the chromatic number of $G$. \end{corollary} \begin{corollary}[to Theorem~\ref{mainthm1}] Let $d$ be prime. Suppose that a multiple multicast (including multiple unicast as a special case) classical $\mathbb Z_d$ linear network code exists over some subgraph of the graph $G'= G \cup K_n$, sending a unit rate stream, where each edge of the graph is a unit rate bi-directional edge (but not allowing fractional routing). Suppose that this code has a set of transmitting source vertices $T' = \{t_1, \ldots, t_{N'}\}$ for some $N' > 0$, where the first $N < N'$ of these have corresponding receiver sets $R_j = \{r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$ (with the remaining transmitters $t_{N{+}1}, \ldots, t_{N'}$ having signals which are not necessarily actually received by any ``receiver'' nodes). Suppose further that \textup{(a)}~the information transmitted on the edges of $K_n$ from any single node, and \textup{(b)}~the information transmitted by the nodes $t_1$ through $t_N$, are linearly independent of each other. Then by simulating this linear network code by a QLNC circuit, with \textup{CNOT} operations restricted to the same graph $G$ and classically-controlled Pauli operations oriented along the other edges, the resulting protocol generates a product of $\ket{\Phi^+_d}$ and $\ket{\mathrm{GHZ}_{d,n_j}}$ states, with one such state over each of the sets $\{ t_j, r_{j,1} , \cdots , r_{j,n_j} \}$ for $1 \le j \le N$. \end{corollary} The proofs of these statements are identical to the case $d = 2$, applying the extension of the QLNC formalism to qudits of dimension $d > 2$. \section{Remarks on computational complexity} \label{comp} We now consider the computational complexity of the QLNC formalism, and also remark on the complexity of finding linear network codes. \subsection{Comparison of the QLNC formalism to the stabiliser formalism} Recall that a parity function tableau on $n$ qubits is a matrix of size $(N{+}1)\times(n{+}2)$, where $0 \le N \le n$ is some number of indeterminates involved in the expression of the state. As every parity function tableau has the same first column, the amount of information can be bounded above by $(N{+}1) \times (n{+}1)$ bits. We suppose that the data structure used for the matrix allocates enough space for an $(n{+}1)\times(n{+}1)$ matrix, and maintains lists to record which rows and columns in this space are actually occupied; this allows for $O(1)$ time row and column insertion and removal, apart from the time required to actually initialise the entries of new rows or columns.
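As an illustration of this supposition (an assumption about how such a store might be arranged, not a prescription of any particular implementation), a minimal sketch in Python with \texttt{numpy} of a preallocated tableau store with occupancy bookkeeping might look as follows:
\begin{verbatim}
import numpy as np

class TableauStore:
    # Preallocated (n+1) x (n+1) space for the parity function matrix,
    # with lists recording which row/column slots are occupied.  Slot
    # reuse makes insertion O(1) apart from initialising the entries.
    def __init__(self, n):
        self.data = np.zeros((n + 1, n + 1), dtype=np.uint8)
        self.rows, self.cols = [], []
        self.free_rows = list(range(n + 1))
        self.free_cols = list(range(n + 1))

    def insert_row(self):
        r = self.free_rows.pop()
        self.data[r, self.cols] = 0     # O(n) initialisation only
        self.rows.append(r)
        return r

    def insert_col(self):
        c = self.free_cols.pop()
        self.data[self.rows, c] = 0
        self.cols.append(c)
        return c

    def remove_row(self, r):
        self.rows.remove(r)             # O(1) with a linked structure;
        self.free_rows.append(r)        # a plain list is used for brevity

    def remove_col(self, c):
        self.cols.remove(c)
        self.free_cols.append(c)
\end{verbatim}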
Several of the QLNC circuit operations may be represented by very simple operations or transformations on the tableau: \begin{itemize} \item Preparation of a fresh qubit involves introducing a new row and a new column, which involves $O(n + N)$ time to initialise. \item Performing a single CNOT, $X$, or $Z$ gate involves an elementary column operation, which requires $O(N) \subseteq O(n)$ time to perform. \end{itemize} Other operations are somewhat more involved: \begin{itemize} \item Performing measurements --- destructive or otherwise --- involves first putting the parity function tableau into a reduced row echelon form, which requires $O(N^2 n)$ time. This dominates the run-time required for the remaining operations: \begin{itemize} \item For an $X$ measurement, the subsequent operations may involve adding a new row, which takes $O(n + N)$ time; and adding a vector of size $O(N)$ to two columns, which takes $O(N)$ time. \item For a $Z$ measurement, the subsequent operations may involve adding a column vector of size $O(N)$ to $O(n)$ columns, and removing a row and a column, which all together takes $O(Nn)$ time. \end{itemize} \item Terminating a qubit requires a measurement, and also an appropriate set of qubits on which to perform $Z$ operations. Finding the latter also involves putting the tableau in reduced row echelon form, and $O(N)$ further work to determine an appropriate correction set; thus this also takes time $O(N^2 n)$. \end{itemize} A natural comparison to make is with the stabiliser formalism~\cite{Aaronson2004}. This also requires $O(n^2)$ space, with little room for improvement beyond techniques to represent sparse matrices. Preparation of a fresh qubit in the stabiliser formalism similarly involves extending the matrix, and takes $O(n)$ time; realising a CNOT, $X$, or $Z$ operation takes time $O(n)$; and simulating a measurement may similarly involve Gaussian elimination (if it happens that the outcome is deterministic), requiring $O(n^3)$ time. In the worst case where $N \in \Theta(n)$, the run-time bounds for the QLNC formalism then match those of the stabiliser formalism; but if (say) $N \in O(n^{1/4})$ for a given circuit, we obtain a quadratic improvement on the complexity of measurement, and a quartic improvement in the complexity of simulating CNOT, $X$, and $Z$ gates. The computational advantage of the QLNC formalism, when simulating QLNC circuits, is the ability to take advantage of the potential for the parity function tableau to occupy space ${}\! \ll n^2$, in which case the operations required to transform it are less computationally costly. Even in the worst case, the fact that parity function tableaus have size $n^2 + O(n)$, rather than size $2n^2 + O(n)$, will also yield a mild improvement in performance. \subsection{On the complexity of finding QLNC circuits for entanglement distribution problems} Here, we consider the complexity of \emph{finding} a QLNC circuit to perform a particular entanglement distribution task in a given network $G$. It is clear that when a linear network code for the classical $k$-pairs (or multiple multicast) problem exists in a particular network $G$, we may easily convert this to a constant-depth QLNC circuit to solve the corresponding entanglement distribution problem on a quantum architecture based on the same network $G$ (with the mild restriction that nodes are either transmitters or receivers or relays, as previously discussed). However, it is not always easy to find such a linear network code.
Lehman and Lehman \cite{Lehman2004} show that deciding whether a network code exists is NP-hard in general. As Kobayashi \textit{et al.}~\cite{Kobayashi2009} note, the $k$-pairs problem is thus itself NP-hard, as all network coding can be reduced to an instance of the $k$-pairs problem~\cite{Dougherty2006}. Given that finding network codes is a hard problem in general, it is reasonable to ask whether reducing the problem of entanglement distribution to the problem of finding linear network codes is of any practical advantage. One answer to this is that the problem of classical network coding has already received significant attention (\emph{e.g.}, \cite{Dougherty2006, LNCnov1, LNCnov2, LNCnov3, LNCnov4}), and thus such a reduction enables existing results and understanding to be transferred to the problem of entanglement distribution. Furthermore, the existing proof of NP-hardness appears to require a somewhat specialised network architecture in principle. (To us, this seems to mirror the situation with the bounds on the depth of the ``constant-depth'' QLNC circuits described in Theorem~\ref{thm:constdepth}: while the bound depends on parameters such as the vertex-chromatic number which are NP-hard to compute in general, in many practical examples they may be computed very easily.) Finally, as we allow unconstrained classical control in QLNC circuits (\emph{i.e.},~the classical control could be thought of as being transmitted through a completely connected graph, as in Section~\ref{eswap}), we should expect it to be easier to find a QLNC circuit for a network $G$, and perhaps to sometimes find a QLNC circuit for entanglement distribution where there is no solution to the corresponding classical linear network coding problem. In any case, as our results more generally allow an edge to be used more than once, it is not clear whether we should expect the problem of finding QLNC solutions to entanglement distribution to be similar to that of solving the $k$-pairs problem. The complexity of this problem is open, though from our results it is clear that it cannot be worse than NP-hard. We conjecture that such QLNC solutions can be found in polynomial time. \section{Proof of Theorem~\ref{mainthm1}} \label{app1} Finally, we give here a more thorough presentation of the proof of Theorem~\ref{mainthm1}. In particular, we adopt a more concrete presentation, in the hope of describing in some detail the transformations of the states involved. Let there be $n$ qubits, ordered such that the first $n_1$ are prepared in the $\ket{+}$ state and the remaining $n_2 = n - n_1$ are prepared in the $\ket{0}$ state. The QLNC circuits described consist of four primitives: initialisation (i.e., preparation of qubits in the $\ket{+}$ or $\ket{0}$ state, as stated directly above); CNOT gates; measurements which can classically control Pauli-$X$ gates on other qubits; and termination operations. Firstly, we note that the principle of deferred measurement can be used to express an equivalent circuit with the classically controlled $X$ gates replaced by CNOT gates and deferred measurement on the control qubit, as shown in Fig.~\ref{f004}(a) and (b); secondly, we address a generalised version of the circuit in question, as shown in Fig.~\ref{f004}(c).
The remainder of the proof proceeds as follows: firstly, we relate the state of this generalised circuit after the network of CNOT gates to the actual circuit we want to express; secondly, we prove by induction that the state is correctly simulated by the QLNC formalism as the individual CNOT gates are executed; thirdly, we show that the termination process does indeed remove qubits as required, without disturbing the rest of the state; and finally we show that the desired product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states is realised if and only if the measurements do not reveal information about it, and that the Gaussian elimination procedure described is necessary and sufficient to verify this.\\ \begin{figure}[!t] \centering \includegraphics[width=0.58\textwidth]{fig12.pdf} \captionsetup{width=0.95\linewidth} \caption{\small{Illustration of equivalent circuits used in the proof of Theorem~\ref{mainthm1}; the three parallel vertical lines of descending size (i.e., a rotated `earth' symbol, as used in electrical engineering) denote termination: (a) shows the actual quantum circuit, consisting of qubits initialised in the $\ket{+}$ and $\ket{0}$ states, a network of CNOT gates, measurements classically controlling Pauli-$X$ gates, and some terminations; (b) shows the same circuit, but now with deferred measurement (such that the classically controlled Pauli-$X$ gates can be represented as CNOT gates); and (c) shows the circuit with additional ancilla qubits entangled with the qubits initialised in the $\ket{+}$ state, as is required in the proof.}} \label{f004} \end{figure} \indent Fig.~\ref{f004} illustrates a general instance of the circuit, in which $U$, in Fig.~\ref{f004}(a), is a block consisting of CNOT gates and classically controlled Pauli-$X$ gates; in Fig.~\ref{f004}(b) the principle of deferred measurement is used to draw an equivalent circuit, with CNOT gates replacing the classically controlled Pauli-$X$ gates in a block now labelled $\tilde{U}$, and with measurements deferred until the end of the circuit. This allows us to write down the state directly after $\tilde{U}$: \begin{align} \ket{\psi_C} & = \tilde{U}(\ket{+}^{\otimes n_1} \ket{0}^{\otimes n_2} ) \nonumber \\ \label{app1eq10} & = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \tilde{U}(\ket{i} \ket{0}^{\otimes n_2} ), \end{align} where $i$ is a binary number. Later in the analysis we use $\mathbf{i}$ for the binary vector corresponding to the binary number $i$ (i.e., the $j^{th}$ element of $\mathbf{i}$ is the $j^{th}$ digit of $i$), and we use each of $\ket{i}$ and $\ket{\mathbf{i}}$ to denote the corresponding $n_q$-qubit quantum state, where $n_q$ is the number of digits in $i$ (and therefore the number of elements in $\mathbf{i}$). In the analysis, it is helpful to consider the circuit in Fig.~\ref{f004}(c), in which $n_1$ ancilla qubits are prepended to the state. Each ancilla is initialised in the $\ket{0}$ state, and then is the target of a CNOT by one of the qubits initialised in the $\ket{+}$ state (that is, a different one of these qubits controls the CNOT for each of the ancillas). This allows the state before $\tilde{U}$ to be expressed: \begin{equation} \label{app1eq15} \ket{\tilde{\psi}_B} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{i}\ket{i} \ket{0}^{\otimes n_2} , \end{equation} which in turn allows us to express the state after $\tilde{U}$: \begin{equation} \label{app1eq20} \ket{\tilde{\psi}_C} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{i} \tilde{U}(\ket{i} \ket{0}^{\otimes n_2} ).
\end{equation} These extra ancillas have been introduced to make it easier to keep track of the state, and we later rely on the correspondence between (\ref{app1eq10}) and (\ref{app1eq20}) to show that these additional ancillas are indeed just an analytical device and do not affect the validity of the simulation of the actual circuit in the formalism.\\ \indent We can now introduce the QLNC formalism in vectorised form. We order the vector such that the first $n_1$ elements correspond to the qubits initialised in the $\ket{+}$ state, which are therefore labelled with a unique symbol in the initialisation process. Furthermore, the effect of each of these performing a CNOT on (a distinct) one of the ancillas is to copy the label to the ancilla; it is therefore convenient to think of the ancillas as having been labelled. Let these labels be $a_1, \ldots, a_{n_1}$, and let the actual qubits be labelled $q_1, \ldots, q_n$ (which in general will be sums over the terms $a_1, \ldots, a_{n_1}$). Stacking these up into vectors, we have that $\mathbf{a} = [a_1 , \cdots , a_{n_1}]^T$ and $\mathbf{q} = [q_1 , \cdots , q_{n}]^T$, such that: \begin{equation} \label{app1eq30} \mathbf{q} = \mathrm{L} \mathbf{a}, \end{equation} where $\mathrm{L}$ is an $n \times n_1$ binary matrix which keeps track of how the labels of the various qubits are related to the ancilla labels, i.e., initially $\mathrm{L} = [\mathbbm{1} | \mathbf{0}]^T$. In the QLNC formalism, the operation of a CNOT with the $j^{th}$ qubit controlling the $k^{th}$ qubit is that the $j^{th}$ row of $\mathrm{L}$ is added to the $k^{th}$ row (modulo-2), that is $\mathrm{L}_{k,*} \leftarrow \mathrm{L}_{k,*} + \mathrm{L}_{j,*}$ (here `$*$' means the entirety of that row). Moving on to the network of CNOT gates (including those which have been included by the deferred measurement equivalence), we prove by induction that the quantum state is of the form: \begin{equation} \label{app1eq40} \ket{\tilde{\psi}_{BC}} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{i} \ket{\mathrm{L}\mathbf{i}} , \end{equation} where $\ket{\tilde{\psi}_{BC}}$ is the quantum state at an arbitrary point between $\ket{\tilde{\psi}_{B}}$ and $\ket{\tilde{\psi}_{C}}$ (i.e., within the block of CNOT gates, $\tilde{U}$).\\ \indent For the inductive proof, we observe that the initial definition of $\mathrm{L}$ (i.e., in the text below (\ref{app1eq30})) is of a format that corresponds to this definition, i.e., for the initial state in (\ref{app1eq20}). Turning to how the quantum state is changed by a CNOT gate, to simplify the notation (and without loss of generality) we re-order the qubits (and therefore the rows of $\mathrm{L}$ and $\mathbf{q}$) such that the first qubit is the control, and the second the target; before the CNOT we have: \begin{align} \ket{\tilde{\psi}_{BC}} = & \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{00} \ket{\psi_i'} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{01} \ket{\psi_i''} \right. \nonumber \\ \label{app1eq50} & \left. \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{10} \ket{\psi_i'''} + \sum_{i\, s.t.
\mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{11} \ket{\psi_i''''} \right), \end{align} where $s.t.$ means `such that', and $\ket{\psi_i'}$, $\ket{\psi_i''}$, $\ket{\psi_i'''}$ and $\ket{\psi_i''''}$ represent the remainder of the quantum state in each term, which is not required for this analysis. After performing a CNOT gate on the first two qubits we have: \begin{align} \left(\textnormal{CNOT} \otimes {\mathbbm{1}}_{n-2}\right)\ket{\tilde{\psi}_{BC}} = & \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{00} \ket{\psi_i'} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{01} \ket{\psi_i''} \right. \nonumber \\ & \left. \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 0 } \ket{i} \ket{11} \ket{\psi_i'''} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, \mathrm{L}_{2,*}\mathbf{i} = 1 } \ket{i} \ket{10} \ket{\psi_i''''} \right), \nonumber \\ = & \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 0 } \ket{i} \ket{00} \ket{\psi_i'} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 0 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 1 } \ket{i} \ket{01} \ket{\psi_i''} \right. \nonumber \\ \label{app1eq60} & \left. + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 1 } \ket{i} \ket{11} \ket{\psi_i'''} + \sum_{i\, s.t. \mathrm{L}_{1,*}\mathbf{i} = 1 , \, (\mathrm{L}_{1,*} + \mathrm{L}_{2,*})\mathbf{i} = 0 } \ket{i} \ket{10} \ket{\psi_i''''} \right), \end{align} which we can see is consistent with the operation of a CNOT where the first qubit controls the second in the QLNC formalism, i.e., the assignment $\mathrm{L}_{2,*} \leftarrow \mathrm{L}_{2,*} + \mathrm{L}_{1,*}$, thereby completing the inductive proof. It is worth observing that, while our proposed formalism was conceptualised from the starting point of classical network codes (as emphasised in Corollary~\ref{corr}), the manner in which the state is tracked bears some resemblance to the quadratic representation of the stabiliser formalism as described by Dehaene and de Moor \cite{dehaene}.\\ \indent $\ket{\tilde{\psi}_C}$ is simply $\ket{\tilde{\psi}_{BC}}$ after all of the CNOT gates in $\tilde{U}$ have been executed, and using the correspondence between (\ref{app1eq10}) and (\ref{app1eq20}) allows us to express $\ket{\psi_C}$ from (\ref{app1eq40}): \begin{equation} \label{app1eq61} \ket{\psi_{C}} = \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{\mathrm{L}\mathbf{i}} . \end{equation} \indent The next step in the circuit is the termination of any qubit which is left such that its label is the sum of two or more symbols, and indeed any other qubits with a single symbol label if desired. In the termination process the goal is, for any given post-measurement state, that the corresponding qubits should be measured out in such a way that the superposition of quantum states is the same for the rest of the qubits, whichever state was measured. That is, if the state is $\ket{0}\ket{\phi} + \ket{1}\ket{\tilde{\phi}}$, termination and removal of the first qubit should leave the state $\ket{\phi} + \ket{\tilde{\phi}}$.
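The case analysis that follows can be phrased in terms of rank computations over GF(2). As a rough sketch (in Python with \texttt{numpy}; the function names are ours), one might classify a qubit for termination as follows, matching the three cases described below:
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    # Rank of a binary matrix over GF(2), by Gaussian elimination.
    M = (np.array(M, dtype=np.uint8) % 2).copy()
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

def classify_termination(L, measured, t):
    # L: one row per qubit, one column per symbol; measured: list of
    # indices of measured qubits; t: the qubit to be terminated.
    others = [i for i in range(L.shape[0]) if i != t]
    if gf2_rank(L[measured + [t]]) == gf2_rank(L[measured]):
        return "case 1: label is a sum of measured labels"
    if gf2_rank(L[others + [t]]) == gf2_rank(L[others]) + 1:
        return "case 2: label independent of all other labels"
    return "case 3: X-measure, then apply Z corrections"
\end{verbatim}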
Termination can be achieved in one of three ways: firstly, if the qubit to be terminated has a label which can be expressed exactly as a sum of the labels of qubits that are measured, then it can be measured out directly, as no additional information will be learned by doing so (in reality this measurement will have already taken place, although for the analysis it is treated as a deferred measurement; this does not affect the validity of measuring it out directly). Conversely, in the case where the label of the qubit to be terminated is linearly independent of all of the other qubit labels, it can also be measured out, as this will not reveal any information about the entangled superposition of interest. To see this, without loss of generality we re-order the qubits (including the ancilla qubits) such that the first qubit is the one to be terminated, from which we can express the state: \begin{equation} \label{app1eq75} \ket{\psi_{CD}}= \frac{1}{\sqrt{2^{n_1}}}\sum_{i=0}^{2^{n_1}-1} \ket{\mathrm{L}_{1,*}\mathbf{i}}\ket{ \mathrm{L}_{2:n,*}\mathbf{i}}, \end{equation} and thus we can see that because $\mathrm{L}_{1,*}$ is linearly independent of all of the other rows of $\mathrm{L}$, measuring it out will not collapse any other terms in the superposition.\\ \indent So we move on to the third option for termination, where the qubit to be terminated can be expressed as a sum of qubit labels, of which at least some have not been measured. Once again, for simplicity of exposition and without loss of generality, we consider that it is the first qubit, labelled $q_1$, that is to be terminated. To see how the termination process works, first let us write the linear expression of $q_1$ in terms of the other qubit labels: $q_1 = \mathbf{r}^T\mathbf{q}_{2:n}$, where $\mathbf{r}$ is a binary vector that selects the other qubits whose labels sum to $q_1$. We now express $\mathbf{r} = \mathbf{r}_a + \mathbf{r}_b$, such that $\mathbf{r}_a$ corresponds to qubits that are measured, and $\mathbf{r}_b$ corresponds to qubits that are not measured. Thus we can re-express (\ref{app1eq61}), noting that $\mathbf{q}_{2:n} = \mathrm{L}_{2:n,*}\mathbf{a}$, from (\ref{app1eq30}): \begin{equation} \label{app1eq90} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i \, s.t. \, (\mathbf{r}_a + \mathbf{r}_b )^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, (\mathbf{r}_a + \mathbf{r}_b)^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right), \end{equation} taking first the case where the existing measurement outcomes are such that $\mathbf{r}_a^T \mathrm{L}_{2:n,*}\mathbf{i}=0$ for the surviving terms, (\ref{app1eq90}) becomes: \begin{equation} \label{app1eq95} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} Next, we treat the $X$ observable measurement in the equivalent form of a Hadamard gate, followed by a $Z$ observable (computational basis) measurement. Thus, the Hadamard gate transforms (\ref{app1eq95}) to: \begin{equation} \label{app1eq100} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} (\ket{0}+\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t.
\, \mathbf{r}^T_b \mathrm{L}_{2:n,*} \mathbf{i} = 1} (\ket{0}-\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} After this the state is measured, and so we must address the two cases, in which we measure $0$ or $1$ respectively. In the former we can see that the state collapses to: \begin{equation} \label{app1eq100a} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}^T_b \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right), \end{equation} with the terminated qubit still included. Whereas if we measure $1$, we get: \begin{equation} \label{app1eq100b} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} - \sum_{i \, s.t. \, \mathbf{r}^T_b \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} However, by definition, zero is measured when there are an even number of ones in $\mathbf{r}_b \odot (\mathrm{L}_{2:n,*} \mathbf{i})$ and one is measured when there are an odd number of ones therein (where $\odot$ means element-wise multiplication). Therefore, applying a Pauli-$Z$ (phase) gate to each qubit which corresponds to a one in $\mathbf{r}$ guarantees the correct adjustment, and this is exactly what is prescribed in the termination process. Thus, after the correction, (\ref{app1eq100b}) becomes: \begin{equation} \label{app1eq100bb} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}^T_b \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} \indent Turning now to the alternative situation, where the existing measurement outcomes are such that $\mathbf{r}_a^T \mathrm{L}_{2:n,*}\mathbf{i}=1$ for the surviving terms, (\ref{app1eq90}) becomes: \begin{equation} \label{app1eq95a} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right), \end{equation} which the Hadamard gate transforms to: \begin{equation} \label{app1eq100n} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} (\ket{0}-\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}^T_b \mathrm{L}_{2:n,*} \mathbf{i} = 1} (\ket{0}+\ket{1}) \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right). \end{equation} After this the state is measured, and again we must address the two possible outcomes. If we measure $0$, we can see that the state again collapses to: \begin{equation} \label{app1eq100an} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n,*} \mathbf{i} = 0} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}^T_b \mathrm{L}_{2:n,*} \mathbf{i} = 1} \ket{0} \ket{\mathrm{L}_{2:n,*} \mathbf{i}} \right), \end{equation} with the terminated qubit still included.
Whereas if we measure $1$, we get: \begin{equation} \label{app1eq100bn} \ket{\psi_{CD}} = \frac{1}{\sqrt{2^{n_1+1}}} \left( -\sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n_1,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n_1,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}^T_b \mathrm{L}_{2:n_1,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n_1,*} \mathbf{i}} \right). \end{equation} However, applying the same correction as before, we get: \begin{equation} \label{app1eq100bbn} \ket{\psi_{CD}} = -\frac{1}{\sqrt{2^{n_1+1}}} \left( \sum_{i \, s.t. \, \mathbf{r}_b^T \mathrm{L}_{2:n_1,*} \mathbf{i} = 0} \ket{1} \ket{\mathrm{L}_{2:n_1,*} \mathbf{i}} + \sum_{i \, s.t. \, \mathbf{r}^T_b \mathrm{L}_{2:n_1,*} \mathbf{i} = 1} \ket{1} \ket{\mathrm{L}_{2:n_1,*} \mathbf{i}} \right). \end{equation} So we can see that, regardless of the previous measurement outcomes and the outcome of the $X$-basis measurement of the qubit being terminated, we obtain the same quantum state up to an unobservable global phase.\\ \indent Following the layer of terminations, we can express the state as: \begin{equation} \label{app1eq110} \ket{\psi_{D}} = \frac{1}{\sqrt{2^{n_1'}}}\sum_{i=0}^{2^{n_1'}-1} \ket{\mathrm{L}\mathbf{i}}, \end{equation} where we have omitted any qubits that have been measured out and any labels that are no longer present in any qubit label. Thus each row of $\mathrm{L}$ will have either exactly one element equal to one, or more than one (and the rest equal to zero, as $\mathrm{L}$ is a binary matrix). We know that all rows of $\mathrm{L}$ with multiple elements equal to one are measured (otherwise they would have been terminated), and in general some rows with exactly one element equal to one may be measured too. To verify that none of these measurements imparts information that would collapse the superposition of interest, we follow the same rationale as that described around (\ref{app1eq75}). Specifically, we construct an $n_M \times n$ matrix $\mathrm{M}$ (where $n_M$ is the number of measurements) such that each row corresponds to one measurement. For example, if we have five symbols in total, $a_1 \cdots a_5$, and we measure a qubit labelled $a_1 \oplus a_4$, the corresponding row of $\mathrm{M}$ would be $[1,0,0,1,0]$. We now re-order the columns of $\mathrm{M}$ such that the first $n_r$ correspond to symbols that are not present in the final entangled state, and perform Gaussian elimination such that the matrix is in upper-echelon form; let this transformed version of $\mathrm{M}$ be denoted $\mathrm{M}'$. A necessary and sufficient condition for the measurements not to have imparted any information that collapses the final entangled state is that each row which is not all zeros has at least one element equal to one in the first $n_r$ columns. An example of $\mathrm{M}'$ is shown in (\ref{mateq}), and the necessary and sufficient condition essentially means that the label of each measured qubit includes at least one unique symbol, not present in any other label (either those of other measured qubits, or in the labels of the qubits that compose the final state); thus, by the same reasoning given in and around (\ref{app1eq75}), the final state will not be collapsed by these measurements. \begin{equation} \label{mateq} \mathrm{M}' = \left.
\left[ \begin{array}{c c c c c c} \bovermat{$n_r$ cols}{1 & \cdots & & & & \cdots} \\ 0 & 1 & & & & \cdots\\ \vdots & \ddots & 1 & & & \cdots \\ & & & 1 & & \cdots \\ & & & & \ddots & \cdots \\ \end{array} \right] \right\} \text{\scriptsize{$n_M$ rows}} \normalsize . \end{equation} \indent Having performed these measurements and verified the condition of not imparting information that collapses the superposition, we have the final state: \begin{equation} \label{app1eq120} \ket{\psi_{E}} = \frac{1}{\sqrt{2^{n_1''}}}\sum_{i=0}^{2^{n_1''}-1} \ket{\mathrm{L}\mathbf{i}}, \end{equation} where each row of $\mathrm{L}$ has exactly one element equal to one. Rows of $\mathrm{L}$ whose non-zero element is in the same column will be labelled with the same single symbol (according to the definition in (\ref{app1eq30})), and we can see that this corresponds to the product of $\ket{\Phi^+}$ and $\ket{\mathrm{GHZ}}$ states as specified, completing the proof. \section{Summary} In this article, we consider the problem of entanglement distribution in quantum architectures with constraints on the interactions between pairs of qubits, described by a network $G$. We describe how this problem may be fruitfully reduced to solving the $k$ pairs problem through linear network coding, on the same network $G$; and we describe how such codes may be simulated to achieve entanglement distribution using a shallow circuit, independent of the size of $G$ or the distance over which the entanglement is to be distributed. We also present several novel observations about realising linear network codes through stabiliser circuits. For the purposes of realising operations on practical quantum architectures, it will be of interest both to reduce the depth of circuits to distribute entanglement, and to efficiently discover protocols to do so. However, it will also be important to address issues which we have not considered here, such as the fidelity of the distributed entanglement with respect to some known Bell state. We do not expect the quality of such Bell states to be independent of the distance over which such entangled states are distributed, or the number of Bell states which are distributed in parallel using QLNC circuits. Nevertheless, it may be feasible to consider techniques to mitigate whatever noise may be present. We hope that it may be possible to do so while incorporating the apparent benefits that QLNC circuits theoretically provide in the noiseless case. \section*{Acknowledgements} This work was supported by a Networked Quantum Information Technologies Hub Industrial Partnership Project Grant. The authors also thank Earl Campbell, Steve Brierley and the team at Riverlane for their encouragement and the general discussions that helped to shape this paper.
\section{Introduction} Time-frequency (TF) masking is widely used for monaural audio source separation. Given a monaural mixture speech signal, the goal is to estimate all the TF bins dominated by the same speaker, after applying the short-time Fourier transform (STFT) to the mixture signal. For TF mask prediction, there are two major approaches. One is a \textit{generative approach}, where the goal is to estimate the underlying parameters in a generative model describing the process by which a mixture signal has been generated. Typical examples of this approach include methods based on non-negative matrix factorization (NMF) \cite{Smaragdis2004nmf, Cichocki2006, Schmidt2006nmf, Virtanen2007, Fevotte2009} or non-negative matrix factor deconvolution (NMFD) \cite{Smaragdis2004nmfd, Schmidt2006nmfd, Smaragdis2007, Kameoka2009}. The idea behind these methods is to approximate the magnitude spectrogram of a mixture signal as a weighted sum of spectrum templates or spectrogram templates, pretrained using the speech samples of each speaker. Once the magnitude spectrogram of each speaker has been estimated from a mixture signal, we can use it to construct a Wiener mask that enhances the spectrogram of a particular speaker. The main advantage of these methods is that they easily allow us to incorporate a model adaptation mechanism into the generative process of observed signals, which makes it possible to compensate for the mismatch between the acoustic conditions at training and test time, caused, for example, by reverberation and unseen noise. However, one downside is that the separation performance can be limited due to the inconsistency between the training and test objectives. Namely, the process of training the spectrogram templates does not ensure that the separated signal at test time will be optimal. Another limitation is that, in principle, these methods can only work in speaker-dependent scenarios. The other is a \textit{discriminative approach}, where the goal is to directly learn to predict TF masks or TF embeddings from a given mixture signal. Methods based on the discriminative approach can be roughly categorized into class-based and partition-based methods. The class-based methods are designed to predict the most dominant speaker at each TF bin \cite{Han2011, Xu2014}, whereas the partition-based methods are designed to learn the embedding at each TF bin that can be used to measure how likely each pair of TF bins is to be dominated by the same speaker \cite{Bach2006, Hershey2016, Li2018}. A partition-based method called \textit{deep clustering} (DC) \cite{Hershey2016, Li2018}, which incorporates a deep neural network (DNN), has attracted a lot of attention in recent years owing to several attractive features. The main idea of DC is to learn an embedding for each TF bin of an input spectrogram so that a TF mask for each speaker can be constructed by clustering the embedding vectors. Instead of using a DNN to directly predict the speaker label for each TF point, DC uses a DNN to describe the embedding process and trains it so that the embedding vectors of each pair of TF bins dominated by the same speaker get close to each other. This allows the DNN to produce an output that is invariant to the number and permutations of the speaker labels. This property, called the permutation-free property, is particularly favorable in that it allows the model to handle even speaker-independent speech separation (i.e., separation of unseen speakers).
Several attempts have been made to develop models incorporating this property since DC was proposed. These include Permutation Invariant Training (PIT) \cite{Yu2017} and Deep Attractor Net (DANet) \cite{Chen2017}. The model proposed in this paper is also designed to satisfy the permutation-free property. While the above DNN-based discriminative methods are often reported to perform better than the generative counterparts, the decision-making process of DNNs is usually difficult to interpret. For instance, in DC \cite{Hershey2016, Li2018}, the trained DNN does not provide any reasons as to why it has come up with the specific value of each embedding vector according to an input spectrogram. One potential weakness in DC owing to the non-interpretable black-box structure is that it is not flexible enough to address the mismatch between training and test conditions (caused by reverberation, for instance). If we could somehow describe the DNN based on the generative process of observed signals, we would be able to incorporate a model adaptation mechanism into its process, according to which we could compensate for the mismatch at test time. In this paper, to address the above concerns, we propose an extension of DC called \textit{explainable deep clustering} (\textbf{X-DC}), where the embedding process is described by a DNN with a physically interpretable structure. Specifically, we design the network architecture so that it can be interpreted as a process of fitting learnable short-segment spectrogram templates to an input spectrogram followed by Wiener filtering. Namely, in our model, the square root Wiener mask vector, in which each element indicates the square root of the probability of a particular speaker being present in a mixture, is treated as the embedding vector. Thus, the dimension of the embedding space can be seen as the maximum potential number of speakers. Notice that a weighted sum of spectrogram templates shifted at all different positions can be described by an NMFD model \cite{Smaragdis2004nmfd, Schmidt2006nmfd, Smaragdis2007, Kameoka2009} (i.e., a convolutive mixture of spectrogram templates). Using this fact, we design the X-DC model so that the network mainly consists of the ``NMFD'' part and the ``Wiener mask construction'' part, where the former produces a set of non-negative activation matrices in the NMFD models for all possible sources and the latter is designed to produce Wiener masks constructed according to the NMFD models, each of which can be described as the output of a convolution layer with non-negative kernels. Since we would like the intermediate layer outputs of this network to correspond to the estimates of the spectrograms of underlying sources, we include a training loss that encourages their sum to be close to the input spectrogram, in addition to the original training loss for DC measured using the embedding vectors. This allows the Wiener mask construction process to be structure-aware in the sense that Wiener masks are constructed explicitly according to the spectrogram estimates. Another potential advantage of X-DC is that it easily allows us to incorporate a model adaptation mechanism into the network thanks to its physically interpretable structure.
For example, we may be able to make it handle unseen reverberation by inserting a non-negative convolution layer corresponding to the reverberation process \cite{Kameoka2009} into the network and optimizing it at test time so that we can jointly perform dereverberation and Wiener-mask inference. In Section \ref{sec:X_DC}, we give a detailed explanation of the proposed X-DC model. In Section \ref{sec:experiment}, we experimentally show that the proposed X-DC enables us to visualize and understand the clues for the model to determine the embedding vectors while achieving speech separation performance comparable to the original DC. Furthermore, the experimental results show that the trained spectrogram templates obtained with X-DC captured the harmonic structure of speech fairly well, even though the proposed X-DC model was not explicitly implemented to use harmonic structure information. \paragraph{Related work (1) NMFD.} The original NMFD \cite{Smaragdis2004nmfd} is an extension of the idea of using the NMF model to express audio spectrograms \cite{Smaragdis2004nmf}. NMF is a method for factorizing a given non-negative matrix $A = (A_{f, n})_{1 \leq f \leq F, 1 \leq n \leq N}$ (e.g., a magnitude spectrogram in the speech separation task) \footnote{We use the term ``non-negative matrix'' to mean a matrix all of whose entries are non-negative.} into the product of two non-negative matrices $W = (w_{f, j})_{1 \leq f \leq F, 1 \leq j \leq J}$ and $H = (h_{j, n})_{1 \leq j \leq J, 1 \leq n \leq N}$: $A \approx WH$, where we generally set $J < \min \{ F, N \}$. Namely, each $(f, n)$th entry of $A$ is approximated by \begin{align} \label{eq:nmf_original} A_{f, n} \approx \sum_{j=1}^J w_{f, j} h_{j, n}. \end{align} The two matrices $W$ and $H$ can be found by iteratively decreasing the difference between $A$ and $WH$ (see \cite{Paatero1997, Lee1999, Lee2001, Berry2007} for specific algorithms) until convergence, starting from random initial values. Like NMF, NMFD finds a decomposition of $A$, but is based on a convolutive mixture model instead of the instantaneous mixture model (\ref{eq:nmf_original}). Namely, each $(f, n)$th entry of the observed matrix $A$ is modeled as a sum of every shifted version of $\{ W_j \}$ scaled by $\{ \bm{h}_j \}$: \begin{align} \label{eq:nmfd_original} A_{f, n} \approx \sum_{j=1}^J \sum_{m=1}^M w_{j, f, m} h_{j, n - m + 1}, \end{align} where $W_j = (w_{j, f, m})_{1 \leq f \leq F, 1 \leq m \leq M}$ and $\bm{h}_j = (h_{j, n})_{1 \leq n \leq N}$, respectively, represent the $j$th template matrix and its activation vector. As can be seen from (\ref{eq:nmf_original}) and (\ref{eq:nmfd_original}), NMFD reduces to NMF when $M=1$. Owing to the time index $m$ in $W$, NMFD allows each template to express the local spectro-temporal pattern underlying an observed spectrogram. The proposed X-DC model incorporates the expression (\ref{eq:nmfd_original}) into the last layer of a neural network so that the model output can be interpreted as a convolutive mixture of spectrogram templates. There are differences between the original NMFD and the proposed X-DC in model formulation and loss function. First, while NMFD treats each element of $W$ and $H$ as a free variable to estimate, X-DC treats $W$ as the kernel values of a convolution layer and $H$ as the output of the previous layer in a DNN.
Second, while source separation using NMFD is categorized as a generative approach, where the objective is to approximate an observed mixture spectrogram as described above, X-DC is a discriminative one, where the objective is to find the best partition of the TF embeddings, as explained in Section \ref{sec:X_DC}. \paragraph{Related work (2) DeepNMF.} The idea of using a single DNN to describe the processes of fitting the NMF model to an input spectrogram, constructing Wiener masks, and generating the outputs of the Wiener masks has already been adopted in a concept called DeepNMF \cite{Hershey2014, Roux2015, Wisdom2017}. In DeepNMF, each intermediate layer of the DNN is interpreted as a single iteration of the multiplicative update algorithm for NMF and the final layer involves a Wiener filtering-like operation. Specifically, let $W^k \equiv \begin{bmatrix} W^{(k, 1)} & \cdots & W^{(k, I)} \end{bmatrix}$ and $H^k \equiv \begin{bmatrix} \left( H^{(k, 1)} \right)^{\mathsf{T}} & \cdots & \left( H^{(k, I)} \right)^{\mathsf{T}} \end{bmatrix}^{\mathsf{T}}$, respectively, be the parameter of the $k$th layer and the input to the same layer, which can also be interpreted as the NMF parameters in the $k$th iteration ($k = 1, \dots, K$). Here, $W^{(k, i)}$ and $H^{(k, i)}$ are the parameters corresponding to the $i$th speaker ($i = 1, \dots, I$). In NMF-based speech separation, the spectrogram of the $i$th speaker given the finally estimated parameters $W^K$ and $H^K$ is typically obtained as \begin{align} \label{eq:deepNMF_WF} S^{(K, i)} = \frac{W^{(K, i)} H^{(K, i)}}{\sum_{i'=1}^I W^{(K, i')} H^{(K, i')}} \circ A, \end{align} where $\circ$ represents element-wise multiplication. The above Wiener filtering-like operation (\ref{eq:deepNMF_WF}) can be incorporated in the final layer of the DNN, and thus the entire DNN represents a function that maps the initial NMF parameter $H^0$ to the separated spectrograms $\{ S^{(K, i)} \}_{i = 1, \dots, I}$. The DNN is trained so that it can infer the NMF parameters $W^K$ and $H^K$ in the second last layer, and the spectrogram of each speaker (as the output of the DNN) from an observed mixture spectrogram. Our X-DC differs from DeepNMF in several aspects. First, X-DC uses the NMFD model instead of the regular NMF model as the building block for the architecture design. Second, the shallower part of the X-DC architecture can be designed freely and does not need to be expressed in the form of the multiplicative update rule of NMFD. Finally, the most essential difference lies in the permutation-invariance property: Since DeepNMF employs a training objective given by the difference between the model outputs and the spectrograms of target speech, the model training depends on the permutations of the source order and is thus \textit{not} permutation-invariant. This implies that DeepNMF can only handle speaker-dependent speech separation. By contrast, X-DC uses a training objective that is permutation-invariant, as with DC, thus allowing it to handle speaker-independent separation. In the following sections, we first describe the original DC \cite{Hershey2016, Li2018} in Section \ref{sec:DC}. Then, we explain how to construct the proposed X-DC model in Section \ref{sec:X_DC}.
In Section \ref{sec:experiment}, we describe the experiment we conducted to compare the speech separation performance of the proposed X-DC and the conventional DC models, and to check the extracted spectrogram templates and their activations in the proposed X-DC. We conclude this paper in Section \ref{sec:conclusion}. \section{Original Deep Clustering} \label{sec:DC} Before describing our proposed model X-DC, we briefly review the original concept of DC \cite{Hershey2016, Li2018}. Let $X = (X_{f, n})_{1 \leq f \leq F, 1 \leq n \leq N} \in \mathbb{R}^{F \times N}$ be the spectrogram of a mixture signal of $I$ sources. We denote the vectorized spectrogram as $\bm{x} = \mathrm{vec} (X) = \left[ x_1, \dots, x_k, \dots, x_K \right]^{\mathsf{T}} \in \mathbb{R}^K$, where the index $k = F \times (n - 1) + f \in \{ 1, \dots, K \}$ corresponds to the $(f, n)$th TF point of the original spectrogram and $K = F \times N$. From the input spectrogram, a DNN is used to map each $k$th TF bin into a $D$-dimensional unit-norm embedding vector $\bm{v}_k = \left[ v_{k, 1}, \dots, v_{k, D} \right]^{\mathsf{T}}$. Based on these embedding vectors, we define a matrix $V = \left[ \bm{v}_1, \dots, \bm{v}_{K} \right]^{\mathsf{T}} \in \mathbb{R}^{K\times D}$. The main idea of DC is to train a nonlinear mapping $V = \varphi_{\theta} (\bm{x})$ from input spectrogram $\bm{x}$ to the embedding vectors $V$ with a set of model parameters $\theta$ so that the embedding vectors of the TF bins dominated by the same source become close to each other. We can use either a recurrent neural network (RNN) \cite{Hershey2016} or a convolutional neural network (CNN) \cite{Li2018} to implement $\varphi_{\theta}$ to let each element of $\varphi_{\theta} (\bm{x})$ be determined according to the entire range of $\bm{x}$. Let $\bm{y}_k = (y_{k, i})_{1 \leq i \leq I} \in \{0, 1\}^I$ be a one-hot vector that indicates which speaker is dominant at the $k$th TF point in input spectrogram $\bm{x}$, where $I$ is the maximum potential number of speakers. That is, $y_{k, i} = 1$ if the $i$th speaker is dominant at the $k$th TF point, and $y_{k, i} = 0$ otherwise. Based on these vectors, we define a matrix $Y = \left[ \bm{y}_1, \dots, \bm{y}_K \right]^{\mathsf{T}} \in \{0, 1\}^{K \times I}$. From these definitions, matrices $V V^{\mathsf{T}} \in \mathbb{R}^{K \times K}$ and $Y Y^{\mathsf{T}} \in \{ 0, 1 \}^{K \times K}$ represent the estimated and true affinity matrices of all the TF points. The $(k, k')$th element of matrix $V V^{\mathsf{T}}$ represents the similarity between the embedding vectors at the $k$th and $k'$th TF bins, whereas that of matrix $Y Y^{\mathsf{T}}$ is one if the $k$th and $k'$th TF bins belong to the same speaker, and is zero otherwise. At training time, we aim to let $V V^{\mathsf{T}}$ get as close as possible to $Y Y^{\mathsf{T}}$. Thus, the loss function of DC for a set of model parameters $\theta$ is given by \begin{align} \tilde{\mathcal{J}} (\theta) &= \| V V^{\mathsf{T}} - Y Y^{\mathsf{T}} \|_{\mathrm{F}}^2 \nonumber \\ &= \| V^{\mathsf{T}} V\|_{\mathrm{F}}^2 - 2\| V^{\mathsf{T}} Y \|_{\mathrm{F}}^2 + \| Y^{\mathsf{T}} Y \|_{\mathrm{F}}^2. \label{eq:DC} \end{align} Here, $\| \cdot \|_{\mathrm{F}}$ denotes the Frobenius norm of a matrix. Note that $Y Y^{\mathsf{T}}$ is invariant under the permutations of the speaker order, which leads to the permutation-free property of DC.
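For concreteness, the expansion in the second line of (\ref{eq:DC}) allows the loss to be evaluated without ever forming the $K \times K$ affinity matrices, since every intermediate product is at most $D \times D$, $D \times I$, or $I \times I$. The following is a minimal NumPy sketch of this computation and of the permutation-free property; the array names, shapes, and random test data are our own illustrative assumptions, not code from the original work:
\begin{verbatim}
import numpy as np

def dc_loss(V, Y):
    # V: (K, D) row-wise unit-norm embeddings; Y: (K, I) one-hot labels.
    # Gram-matrix expansion of || V V^T - Y Y^T ||_F^2, eq. (eq:DC).
    return (np.linalg.norm(V.T @ V) ** 2
            - 2.0 * np.linalg.norm(V.T @ Y) ** 2
            + np.linalg.norm(Y.T @ Y) ** 2)

# Permutation-free property: permuting the speaker columns of Y leaves
# Y Y^T, and hence the loss, unchanged.
K, D, I = 1000, 20, 2
V = np.random.randn(K, D)
V /= np.linalg.norm(V, axis=1, keepdims=True)       # unit-norm embeddings
Y = np.eye(I)[np.random.randint(0, I, size=K)]      # random one-hot labels
assert np.isclose(dc_loss(V, Y), dc_loss(V, Y[:, ::-1]))
\end{verbatim}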
At test time, given input spectrogram $\bm{x}$, we first compute $V = \varphi_{\theta} (\bm{x})$ and then perform clustering (e.g., k-means algorithm) on the rows of $V$, that is, the embedding vectors $\{ \bm{v}_k \}$ of all the TF points. Each obtained cluster consists of the TF points with similar embedding vectors, according to which we can determine the TF mask for a particular speaker. As described above, an important advantage of DC is that it is invariant under the permutations of speaker labels. Namely, speaker labels do not need to be consistent over different utterances in the training data. Thanks to this permutation-free property, the DC model has been shown to generalize well over unseen speakers and languages, presumably by capturing speaker- and language-independent patterns (such as the harmonic structure and temporal/spectral continuity) underlying the spectrogram of a single stream of speech. \section{X-DC: Explainable Deep Clustering} \label{sec:X_DC} In this section, we propose an extension of DC named \textit{explainable deep clustering} (X-DC) and describe its specific formulation in detail. Figure \ref{fig:nn} shows the entire network architecture of the proposed X-DC. The proposed X-DC inherits the main advantages of the conventional DC described in Section \ref{sec:DC}, while explicitly incorporating the process of fitting learnable spectrogram templates to input spectrograms. By taking account of the unit norm constraint, we start by treating a square root Wiener mask vector as the TF embedding vector $\bm{v}_k$. Namely, each element of vector $\bm{v}_k$ is given by the square root of the Wiener mask for a different speaker. We believe that this is reasonable since it is easy to see that clustering these vectors directly corresponds to finding all the TF bins that are dominated by the same speaker. To define $\bm{v}_k$, we start out by defining $\tilde{v}^{(i)}_{f, n}$ as \begin{align} \label{eq:v} &\tilde{v}^{(i)}_{f, n} = \frac{\tilde{h}^{(i)}_{f, n}}{\sqrt{\sum_{i'} \left( \tilde{h}^{(i')}_{f, n} \right)^2} + \epsilon}, \nonumber \\ &f = 1, \dots, F,\ \ n = 1, \dots, N,\ \ i = 1, \dots, I, \end{align} where $\tilde{v}^{(i)}_{f, n}$ and $\tilde{h}^{(i)}_{f, n}$ denote the outputs of the last and second last layers at the $i$th channel corresponding to the $(f, n)$th TF point, respectively. To avoid the division by zero, notice that we have added a small constant $\epsilon$ to the denominator of each mask value $\tilde{v}^{(i)}_{f, n}$ in (\ref{eq:v}). In the following experiment, we set this constant at $\epsilon = 10^{-5}$. The vector $\bm{v}_k = [v_{k, 1}, \dots, v_{k, I}]$ is finally obtained by arranging $\tilde{v}^{(1)}_{f, n}, \dots, \tilde{v}^{(I)}_{f, n}$ into a vector such that $v_{F (n - 1) + f, i} = \tilde{v}^{(i)}_{f, n}$. By further defining $V = [\bm{v}_1, \dots, \bm{v}_K]^{\mathsf{T}}$, we can use (\ref{eq:DC}) as is as our training objective. Since $\tilde{v}^{(i)}_{f, n}$ is treated as a square root Wiener mask for the $i$th speaker, $\tilde{h}^{(i)}_{f, n}$ must correspond to the magnitude of the $i$th speaker at the $(f, n)$th TF bin. In this view, we would like to ensure that the sum of $\tilde{h}^{(i)}_{f, n}$ over $i = 1, \dots, I$ is consistent with the input magnitude spectrogram $X_{f, n}$. 
To this end, we consider including the loss \begin{align} \mathcal{R} (\theta) &= \frac{\lambda}{4K} \| X - \sum_i \tilde{H}^{(i)} \|_{\mathrm{F}}^2, \label{eq:regularization} \end{align} in our training objective, where $\tilde{H}^{(i)} \equiv (\tilde{h}^{(i)}_{f, n})_{1 \leq f \leq F, 1 \leq n \leq N}$, and $\lambda$ is a regularization hyperparameter. It is important to note that inclusion of this regularization term does not violate the permutation-invariance property since this term is also invariant under the permutations of speaker indices $i = 1, \dots, I$. Since the spectrograms of speech have certain structures that are common across different speakers and languages (such as the harmonic/formant structure and spectral/temporal continuity), it would be natural to assume that the spectrogram of each speaker can be expressed as a superposition of local spectrogram templates drawn from a common dictionary. Thus, one way to express the magnitude spectrogram $\tilde{h}^{(i)}_{f, n}$ would be to assume an NMFD-like model \cite{Smaragdis2004nmfd} such that \begin{align} \tilde{h}^{(i)}_{f, n} &= \sum_{j = 1}^J \sum_{m = 1}^M w_{j, f, m} h^{(i)}_{j, n-m+1} \nonumber \\ &= \sum_{j = 1}^J \sum_{m = n - M}^{n - 1} w_{j, f, n-m} h^{(i)}_{j, m+1}. \label{eq:conv_xdc} \end{align} If we use $H = (h_{q, n})_{1 \leq q \leq Q, 1 \leq n \leq N} \in \mathbb{R}^{Q \times N}$ to denote the output of the previous layer with channel number $Q$ and length $N$, (\ref{eq:conv_xdc}) can be described as a parallel convolution layer. Namely, it can be seen as a layer consisting of vertically splitting $H$ into $I$ sub-arrays $H^{(1)}, \dots, H^{(I)} \in \mathbb{R}^{J \times N}$ with the same shape (i.e., $Q = IJ$), applying a regular 1D convolution to these sub-arrays, treated as a virtual mini-batch, and producing $\tilde{H}^{(1)}, \dots, \tilde{H}^{(I)} \in \mathbb{R}^{F \times N}$ in parallel. Here, $J$ denotes the number of the spectrogram templates and $M$ denotes the length of each template. Simply put, the magnitude spectrogram $\tilde{h}^{(i)}_{f, n}$ of the $i$th speaker is assumed to be given as a convolutive mixture of the $J$ spectrogram templates, as shown at the bottom of Figure \ref{fig:nn}. To interpret the convolution kernel $W$ as a set of magnitude spectrogram templates and the layer input $H$ as a set of the corresponding temporal activity patterns, we impose a non-negative constraint on each element of both $W$ and $H$, assuming the additivity of magnitude spectrograms (though this holds only approximately). This non-negative constraint is expected to induce the sparsity of $W$ and $H$ so that the trained $W$ can be interpreted as non-negative principal parts that frequently appear in $X$. We can implement the non-negative constraint on the activations $H$ by including an activation function that is ensured to produce non-negative values only (e.g., a softplus or rectified linear unit function) in the third last layer. As for the convolution kernel $W$, we replace each element with $w_{j, f, m} = \max \{ 0, \tilde{w}_{j, f, m} \}$ and treat the unconstrained variable $\tilde{W}=(\tilde{w}_{j, f, m})_{j, f, m}$ as the parameter to train instead. The remaining layers can be designed arbitrarily under two requirements: the input of the network must be a spectrogram (2D array) with $F \times N$ TF bins, and the output of the third last layer $H$ must be a 3D array with shape $J \times N \times I$ with non-negative elements.
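To make this construction concrete, the following NumPy sketch implements the parallel convolution (\ref{eq:conv_xdc}), the normalization (\ref{eq:v}), and the penalty (\ref{eq:regularization}) directly. The $(I, J, N)$ array layout and all names are our own assumptions for illustration; an actual implementation would express the convolution in a deep learning framework so that $\tilde{W}$ and the NMFD part can be trained jointly by backpropagation:
\begin{verbatim}
import numpy as np

def xdc_mask_head(H, W_tilde, eps=1e-5):
    # H:       (I, J, N) non-negative template activations (NMFD part output)
    # W_tilde: (J, F, M) unconstrained kernels, clipped to the non-negative
    #          templates w = max(0, w_tilde) as in the text
    I, J, N = H.shape
    _, F, M = W_tilde.shape
    W = np.maximum(0.0, W_tilde)
    # eq. (conv_xdc): H_tilde[i, f, n] = sum_{j, m} W[j, f, m] H[i, j, n - m]
    # (a causal 1D convolution along time, run for all I speakers in parallel)
    H_tilde = np.zeros((I, F, N))
    for m in range(min(M, N)):
        H_tilde[:, :, m:] += np.einsum('jf,ijn->ifn', W[:, :, m],
                                       H[:, :, :N - m])
    # eq. (v): square-root Wiener mask, normalized across speakers
    V = H_tilde / (np.sqrt((H_tilde ** 2).sum(axis=0, keepdims=True)) + eps)
    return H_tilde, V

def xdc_penalty(X, H_tilde, lam):
    # eq. (regularization): ties sum_i H_tilde^(i) to the input spectrogram X
    K = X.size  # K = F * N
    return lam / (4.0 * K) * np.linalg.norm(X - H_tilde.sum(axis=0)) ** 2
\end{verbatim}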
The part of the entire network from the input layer to the third last layer can be thought of as an NMFD part, which is responsible for predicting the template activity patterns from the input spectrogram, whereas the remaining part can be seen as a Wiener mask construction part, which constructs a square root Wiener mask in accordance with the spectrogram templates and the corresponding activity patterns. We train the entire network function $\varphi_{\theta}$ of the proposed X-DC model with the following loss function: \begin{align} \mathcal{J} (\theta) = \tilde{\mathcal{J}} (\theta) + \mathcal{R} (\theta). \label{eq:XDC} \end{align} At test time, we can construct a TF mask directly from $\tilde{v}^{(i)}_{f, n}$ given an input spectrogram $X$. The TF mask obtained in this way is expected to produce reasonably natural and smooth spectrograms since it respects the constraint given by the spectrogram estimate of each source. \begin{figure} \centering \begin{minipage}{\hsize} \centering \includegraphics[width=0.9\hsize]{network.pdf}\\ \includegraphics[width=0.75\hsize]{conv.pdf} \end{minipage} \caption{Neural network architecture of the proposed X-DC. An amplitude spectrogram is input to the proposed network, and first its embedding vectors are extracted, which can later be interpreted as activation vectors of the spectrogram templates. An arbitrary network architecture can be used for this shallower part of the network for extracting embedding vectors, provided that the output activations are constrained to be non-negative. Then, the extracted embeddings are input to the deeper part of the network, where a Wiener mask for each speaker is estimated based on the embeddings. This mask construction part consists of a one-dimensional convolutional layer with non-negative convolutional kernels, which can be interpreted as learnable spectrogram templates. Finally, the output of the convolutional layer is normalized so that the output of the entire network can be viewed as a square root Wiener mask for each speaker. To make the output before normalization identifiable and interpretable as an amplitude spectrogram, we add a regularization term to the loss function that penalizes the discrepancy between the input spectrogram and the output before normalization. The bottom part of the figure shows how we can interpret the estimation process of the proposed X-DC. The estimation is performed by the convolution of the trained spectrogram templates and their activations. } \label{fig:nn} \end{figure} \section{Experiments} \label{sec:experiment} To show the effectiveness of the proposed X-DC model, we conducted a speech separation experiment using the CMU Arctic speech databases \cite{arctic2004} and L2-ARCTIC \cite{arctic2018} to compare its performance with the conventional DC models \cite{Hershey2016, Li2018} and to check the obtained spectrogram templates and their activations. \begin{table*}[t] \centering \caption{Experimental settings of the speakers in the training, validation, and test data sets. ``Known'' shows that the pair of speakers in the test data set has also been in the training data set, and ``unknown'' shows otherwise.
M and F, respectively, stand for male and female speakers.}\vspace{0.02\hsize} \begin{tabular}{|c||p{0.25\hsize}|c|c|c|c|} \hline \rowcolor[rgb]{0.93, 0.93, 0.93} \multicolumn{1}{|c||}{\textbf{Setting \#}} & \multicolumn{1}{c|}{\textbf{Training data}} & \multicolumn{1}{c|}{\textbf{Validation data}} & \multicolumn{3}{c|}{\textbf{Test data}} \\ \hline \hline $1$ & \multirow{6}{*}{\parbox{\hsize}{\textbf{$2$ speakers} (bdl, clb)}} & \multirow{18}{*}{bdl, clb} & \multirow{3}{*}{Known} & M/F & bdl, clb \\ $2$ & & & & M/M & - \\ $3$ & & & & F/F & - \\ \cline{4-6} $4$ & & & \multirow{3}{*}{Unknown} & M/F & rms, slt \\ $5$ & & & & M/M & rms, aba \\ $6$ & & & & F/F & slt, zhaa \\ \cline{2-2} \cline{4-6} $7$ & \multirow{6}{*}{\parbox{\hsize}{\textbf{$4$ speakers} (bdl, clb, rms, slt)}} & & \multirow{3}{*}{Known} & M/F & bdl, clb \\ $8$ & & & & M/M & bdl, rms \\ $9$ & & & & F/F & clb, slt \\ \cline{4-6} $10$ & & & \multirow{3}{*}{Unknown} & M/F & aba, zhaa \\ $11$ & & & & M/M & aba, bwc \\ $12$ & & & & F/F & zhaa, lxc \\ \cline{2-2} \cline{4-6} $13$ & \multirow{6}{*}{\parbox{\hsize}{\textbf{$8$ speakers} (bdl, clb, rms, slt, aba, zhaa, bwc, lxc)}} & & \multirow{3}{*}{Known} & M/F & bdl, clb \\ $14$ & & & & M/M & bdl, rms \\ $15$ & & & & F/F & clb, slt \\ \cline{4-6} $16$ & & & \multirow{3}{*}{Unknown} & M/F & asi, svbi \\ $17$ & & & & M/M & asi, hkk \\ $18$ & & & & F/F & svbi, hjk \\ \hline \end{tabular} \label{tb:setting_speaker} \end{table*} Table \ref{tb:setting_speaker} shows the experimental settings of the speakers in the training, validation, and test data sets. As a training data set, we used $N^{\mathrm{sample}}$ sets of a mixed speech signal of two, four, or eight speakers and the corresponding true affinity matrix $Y Y^{\mathsf{T}}$, where $N^{\mathrm{sample}} \equiv \max \{ N^{(1)}, N^{(2)} \}$ and $N^{(i)}$ is the number of utterances in the training data set of the $i$th speaker. In each epoch, a pair of speakers, $1$ and $2$, was selected uniformly at random from the $2$, $4$, or $8$ speakers shown in Table \ref{tb:setting_speaker}. To make the mixed speech signal, we first applied the STFT based on the Hanning window to the speech signal of each speaker, and obtained the spectrogram $\tilde{X}^{(i)}$ of the $i$th speaker for all $i = 1, \dots, I$ ($I = 2$ in this case), multiplying each by a random weight generated from the uniform distribution on $[0, 1)$. From these spectrograms, we computed the scaled amplitude spectrogram $X$ of a mixed signal by \begin{align} &X = (X_{f, n})_{1 \leq f \leq F, 1 \leq n \leq N}, \nonumber \\ &X_{f, n} = | \tilde{X}_{f, n} | / \max_{f, n} |\tilde{X}_{f, n}|, \end{align} where $\tilde{X} = \sum_i \tilde{X}^{(i)}$ is the complex spectrogram of the mixed signal. With regard to the input to the conventional DC models, we used the log spectrogram $X_{f, n} = 10 \log_{10} (|\tilde{X}_{f, n}| + 10^{-16})$, as in \cite{Li2018}. We considered the $(f, n)$th TF point to be silent (i.e., it is not assigned to any speaker) if and only if the following inequality holds: $20 \log_{10} (|\tilde{X}_{f, n}| / \max_{f, n} (|\tilde{X}_{f, n}|) +10^{-16}) < -40$.
Finally, to make the true affinity matrix, we defined the one-hot vectors of the speaker labels by $Y = \left[ \bm{y}_1, \dots, \bm{y}_K \right]^{\mathsf{T}}$, where $\bm{y}_k = (y_{k, i})_{1 \leq i \leq I} \in \{0, 1\}^I$, $k = F \times (n - 1) + f \in \{ 1, \dots, K \}$, and \begin{eqnarray} y_{k, i} \equiv \begin{cases} 1 & \mathrm{if}\ i = \mathrm{argmax}_{i'} |\tilde{X}^{(i')}_{f, n}|, \\ 0 & \mathrm{otherwise}, \end{cases} \ \mathrm{for\ all}\ (f, n). \end{eqnarray} Based on such speaker labels $Y$, we obtained an affinity matrix $Y Y^{\mathsf{T}}$ of all the pairs of TF points. As for the test data set, we used $66$ sets of mixed speech signals of the two speakers shown in Table \ref{tb:setting_speaker}, and obtained the sets of input and output data based on the same procedure as the training data set. The only difference is that we multiplied the spectrogram of each speaker by the deterministic weight of one for the test data set, not by a random weight. Using the above test data set, we compared the speech separation performance of the conventional DC models \cite{Hershey2016, Li2018} and the proposed X-DC model, all of which were trained with the above training data set. As performance measures, we used the source to distortion ratio (SDR), source to interference ratio (SIR), and source to artifact ratio (SAR) \cite{Vincent2006}. The detailed numerical settings are as follows: \begin{itemize} \item For the conventional DC model of a BLSTM network \cite{Hershey2016}, we set the dimension of the embedding space at $D = 20$, the number of hidden cells in each BLSTM layer at $600$, the number of BLSTM layers at $3$, the learning rate at $10^{-5}$, and the number of epochs for training at $T = 1100$. Under these settings, the number of learnable parameters in the conventional DC model of a BLSTM network is $23,880,160$. \item For the conventional DC model of a gated convolutional network \cite{Li2018}, \begin{itemize} \item When there are two speakers in the training data set, we set the dimension of the embedding space at $D = 90$, the number of output channels in the middle convolutional layers at $C = 14$, the number of middle convolutional layers at $L^{\mathrm{conv}} = 3$ (layers) $\times N^{\mathrm{block}}$ (blocks) with $N^{\mathrm{block}} = 30$, the learning rate at $\eta = 5 \times 10^{-4}$, and the number of epochs for training at $T = 700$. Under these settings, the number of learnable parameters in the conventional DC model of a gated convolutional network is $1,181,802$. \item When there are four speakers in the training data set, we set $D = 10$, $C = 18$, $L^{\mathrm{conv}} = 3$ (layers) $\times N^{\mathrm{block}}$ (blocks) with $N^{\mathrm{block}} = 20$, $\eta = 5 \times 10^{-3}$, and $T = 1,000$. Under these settings, the number of learnable parameters is $1,143,154$. \item When there are eight speakers in the training data set, we set $D = 60$, $C = 22$, $L^{\mathrm{conv}} = 3$ (layers) $\times N^{\mathrm{block}}$ (blocks) with $N^{\mathrm{block}} = 15$, $\eta = 5 \times 10^{-4}$, and $T = 800$. Under these settings, the number of learnable parameters is $1,169,316$.
\end{itemize} \item For the proposed X-DC model, we set the maximum potential number of speakers at $I = 2$, and used the following settings: \begin{itemize} \item When there are two speakers in the training data set, we set the regularization hyperparameter at $\lambda = 10^{-3}$, the frame width of a convolutional kernel at $M = 15$, the number of spectrogram templates at $J = 40$, the number of output channels in the middle convolutional layers at $C = 241$, the number of layers of the ``NMFD part'' at $L^{\mathrm{NMFD}} = 5$, the learning rate at $\eta = 10^{-2}$, and the number of epochs for training at $T = 700$. Under these settings, the number of learnable parameters in the proposed X-DC model is $1,201,787$. \item When there are four speakers in the training data set, we set $\lambda = 5 \times 10^{-2}$, $M = 15$, $J = 60$, $C = 185$, $L^{\mathrm{NMFD}} = 7$, $\eta = 5 \times 10^{-3}$, and $T = 1,100$. Under these settings, the number of learnable parameters is $1,203,915$. \item When there are eight speakers in the training data set, we set $\lambda = 5 \times 10^{-1}$, $M = 15$, $J = 120$, $C = 179$, $L^{\mathrm{NMFD}} = 6$, $\eta = 5 \times 10^{-3}$, and $T = 700$. Under these settings, the number of learnable parameters is $1,203,985$. \end{itemize} \item As a common setting for all the models, we set the batch size at $4$, the sampling rate at $8$ kHz, the window size of the STFT at $254$ points (which results in the number of frequency bins being $F = 254/2 + 1 = 128$), the window shift at $127$ points, and the number of time bins of an input spectrogram at $N = 100$. As a training algorithm, we used Adam \cite{Kingma2015}. The number of utterances in the training data set of each speaker is as follows: $1,000$ (bdl, clb, rms, slt, zhaa, svbi), $999$ (lxc, asi, hkk, hjk), $998$ (bwc), and $997$ (aba). \end{itemize} The above hyperparameter settings (i.e., $T$ for the BLSTM-DC, $D$, $C$, $N^{\mathrm{block}}$, $\eta$, and $T$ for the conv-DC, and $J$, $C$, $L^{\mathrm{NMFD}}$, $\eta$, $T$, and $\lambda$ for the proposed X-DC) were chosen by the hold-out validation with $66$ validation data sets. Figures \ref{fig:SDR2}, \ref{fig:SDR4}, and \ref{fig:SDR8} show the results of the speech separation performance of the conventional DC models and the proposed X-DC model under the above experimental conditions. From these results, we see that the proposed X-DC could achieve speech separation performance comparable to that of the conventional DC in some settings, while providing an interpretation of the mask estimation mechanism. One possible reason for this is that we have incorporated the smooth spectrogram structure of a speech signal into the output $\tilde{H}$ by adding the penalty term for the difference between $\tilde{H}$ and input spectrogram $X$, as we have described in Section \ref{sec:X_DC}. In the conventional DC models, such ``unnaturalness'' of the output masked spectrogram is not penalized during the training phase, which would have resulted in lower SDR for some test input signals. On the other hand, in some cases [e.g., $8$ speakers in the training data set, known pair, (m, f)], the proposed X-DC could not achieve as high performance as the conventional DC. In the proposed X-DC, the convolution kernel $W$ was trained with the non-negative constraint, which might have led to vanishing gradients. Developing a more sophisticated training method to avoid such a problem would be important future work.
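Before turning to the figures, we note that the construction of the dominant-speaker training targets described above reduces to a few lines of NumPy. This is a sketch under our own naming assumptions; silent TF points, which the $-40$~dB criterion above excludes from all speaker labels, are omitted for brevity, and the $K \times K$ affinity matrix $Y Y^{\mathsf{T}}$ is never formed explicitly since the loss only needs the Gram matrices in (\ref{eq:DC}):
\begin{verbatim}
import numpy as np

def make_targets(X_list):
    # X_list: length-I list of complex per-speaker spectrograms, each (F, N)
    A = np.abs(np.stack(X_list))        # (I, F, N) magnitudes
    dom = A.argmax(axis=0)              # dominant speaker at each TF bin
    I = A.shape[0]
    # k = F * (n - 1) + f corresponds to column-major (Fortran) flattening
    labels = dom.reshape(-1, order='F')
    Y = np.eye(I)[labels]               # (K, I) one-hot label matrix
    return Y
\end{verbatim}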
Figure \ref{fig:mask_bdl_clb} shows the input spectrogram $X$ to the proposed X-DC model, true speaker label $Y$, output $\tilde{H}$ before normalization of the trained X-DC model, and the estimated Wiener mask $V$, where there were $2$ speakers (bdl and clb) in the training data set. Figures \ref{fig:templates}, \ref{fig:activation_1}, and \ref{fig:activation_2}, respectively, show the trained spectrogram templates $W$ of the proposed X-DC and the temporal changes in activation weights $H$ of the spectrogram templates in the above setting (Figures \ref{fig:activation_1} and \ref{fig:activation_2}, respectively, correspond to the results of speakers $1$ and $2$). Interestingly, from Figure \ref{fig:templates}, we see that some of the trained spectrogram templates capture \textbf{harmonic structures} (i.e., features that are distributed at almost equal intervals along the frequency axis) of an input spectrogram for Wiener mask estimation. Note that we have \textit{not} incorporated any prior knowledge about such harmonic structures into the proposed X-DC model, unlike the methods in previous studies \cite{Duan2008, Rabiee2012}. During training, the proposed X-DC model automatically learned to use these templates as informative features. \begin{figure*}[p] \centering \includegraphics[width=0.95\hsize]{result2_deterministic_w-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between the conventional DC models and proposed X-DC model (\textbf{$2$ speakers} in the training data set). Each bar shows the mean and standard deviation of the results for the test data set.}\vspace{5mm} \label{fig:SDR2} \includegraphics[width=0.95\hsize]{result4_deterministic_w-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between the conventional DC models and proposed X-DC model (\textbf{$4$ speakers} in the training data set).}\vspace{5mm} \label{fig:SDR4} \includegraphics[width=0.95\hsize]{result8_deterministic_w-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between the conventional DC models and proposed X-DC model (\textbf{$8$ speakers} in the training data set).} \label{fig:SDR8} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=\hsize]{mask_test_bdl_clb-eps-converted-to.pdf}\vspace{-2mm} \caption{Input spectrogram $X$ to the proposed X-DC model, true speaker label $Y$, output $\tilde{H}$ before normalization of the trained X-DC model, and the Wiener mask $V$, from left to right, respectively (\textbf{$2$ speakers} in the training data set, \textbf{bdl} and \textbf{clb}). The upper and lower figures, respectively, show the results for speakers $1$ and $2$. Note that the results for $Y$ and $V$ are listed in random order (i.e., top or bottom) owing to the permutation-invariance property of the X-DC.} \label{fig:mask_bdl_clb} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=\hsize]{trained_W5_bdl_clb-eps-converted-to.pdf} \caption{$40$ spectrogram templates $W$ of the trained X-DC model (\textbf{$2$ speakers} in the training data set, \textbf{bdl} and \textbf{clb}). Each spectrogram template represents a short-time feature of an input spectrogram, which is used for Wiener mask estimation. Each one has size $F \times M$, where the row size $F$ is equal to the number of frequency bins of an input spectrogram, and the column size $M$ is the frame width of a convolutional kernel.
For visibility, we plotted $W^{\circ \frac{1}{5}}$, where $\circ$ represents an element-wise power.}\vspace{5mm} \label{fig:templates} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=\hsize]{trained_H_Speaker1_bdl_clb-eps-converted-to.pdf}\vspace{-2mm} \caption{Temporal changes in activation weights $H$ of all the spectrogram templates for \textbf{speaker $1$} (\textbf{$2$ speakers} in the training data set, \textbf{bdl} and \textbf{clb}). The numbers in the parentheses correspond to the indices of the spectrogram templates $W$ in Figure \ref{fig:templates}. In each figure, the horizontal and vertical axes, respectively, show the time bins of an input spectrogram with the size of $N$ and the activation weights of the corresponding spectrogram template.} \label{fig:activation_1} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=\hsize]{trained_H_Speaker2_bdl_clb-eps-converted-to.pdf}\vspace{-2mm} \caption{Temporal changes in activation weights $H$ of all the spectrogram templates for \textbf{speaker $2$} (\textbf{$2$ speakers} in the training data set, \textbf{bdl} and \textbf{clb}).} \label{fig:activation_2} \end{figure*} We also provide the results of additional experiments in the following appendices: In Appendix \ref{sec:ap_random_weights}, we tried multiplying the spectrogram of each speaker by a random weight rather than a deterministic one before generating the mixed speech signal at test time. Next, in Appendix \ref{sec:ap_same_D}, we compared the performances of the conventional DC models (i.e., BLSTM-DC \cite{Hershey2016} and conv-DC \cite{Li2018}) by setting their dimensions $D$ of the embedding space at the same value. Third, in Appendix \ref{sec:ap_xdc_M_J}, we checked the changes in performance of the proposed X-DC model under different settings of $M$ and $J$, which represent the frame width of a convolutional kernel and the number of spectrogram templates, respectively. In Appendix \ref{sec:ap_16speakers}, we used a larger training data set with $16$ different speakers from the CMU Arctic speech databases \cite{arctic2004} and L2-ARCTIC \cite{arctic2018}, and compared the test performance of the proposed and conventional models. In Appendix \ref{sec:ap_potential_speakers}, we checked the performance of the proposed X-DC when setting the maximum potential number of speakers $I$ at larger values than the true one. Finally, in Appendix \ref{sec:ap_danet}, we compared X-DC with DANet \cite{Chen2017}, which is also an end-to-end permutation-free speech separation method that uses a soft mask to perform separation. \section{Conclusion} \label{sec:conclusion} In this paper, we proposed a new explainable deep clustering (X-DC) model, which extends the original DC model \cite{Li2018} to provide an interpretation of the speech separation mechanism. To develop such an interpretable model while exploiting the high predictive performance of a neural network, we constructed a network architecture such that its output Wiener mask is computed as a scaled sum of convolutions of short-term spectrogram templates and their activations, both of which are constrained to be non-negative. Experimental results showed that the proposed X-DC model could achieve accuracy comparable to that of the original DC model, and some of the trained spectrogram templates captured the harmonic structures of an input spectrogram, even though we did not incorporate any prior knowledge about such harmonic structures into the proposed X-DC model.
Recently, many extensions of DC have been proposed. These include the Chimera Network \cite{Luo2017}, which learns to perform mask inference and TF embedding simultaneously, a phase-aware extension \cite{Wang2018a}, and a multi-channel extension \cite{Wang2018b}. We believe that these extended versions can also benefit from the idea proposed in this paper by having model adaptation capabilities. This would be an interesting direction for future work. Another challenge is to consider reverberant conditions, which may be different between the training and test phases. This problem would be alleviated by incorporating some model adaptation mechanism into the model, which we plan to present in a follow-up paper. \section*{Acknowledgments} This work was supported by JST CREST Grant Number JPMJCR19A3, Japan. \clearpage \begin{appendices} \section{Experiment using the test input mixture generated by random weight} \label{sec:ap_random_weights} At test time in Section \ref{sec:experiment}, we used the deterministic weight of one for the spectrogram of each speaker when generating the spectrogram of a mixed speech signal. Here, we tried using a random weight in the test phase, instead of such a deterministic one, by generating it from the uniform distribution on $[0, 1)$. Aside from the spectrogram weight, we used the same settings as in Section \ref{sec:experiment}. Figures \ref{fig:SDR2_random_weights}, \ref{fig:SDR4_random_weights}, and \ref{fig:SDR8_random_weights}, respectively, show the speech separation performances of the three models (i.e., X-DC, conv-DC, and BLSTM-DC) when using the training data set with $2$, $4$, and $8$ different speakers. By comparing these figures with Figures \ref{fig:SDR2}, \ref{fig:SDR4}, and \ref{fig:SDR8}, we see that the variance of the test results increased with a random weight. \begin{figure*}[p] \centering \includegraphics[width=0.95\hsize]{result2_random_w-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$2$ speakers} in the training data set, \textbf{random weight}).}\vspace{5mm} \label{fig:SDR2_random_weights} \includegraphics[width=0.95\hsize]{result4_random_w-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$4$ speakers} in the training data set, \textbf{random weight}).}\vspace{5mm} \label{fig:SDR4_random_weights} \includegraphics[width=0.95\hsize]{result8_random_w-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$8$ speakers} in the training data set, \textbf{random weight}).} \label{fig:SDR8_random_weights} \end{figure*} \section{Experiment for comparing BLSTM-DC and conv-DC with the same $D$ setting} \label{sec:ap_same_D} We also checked the performances of the conventional DC models (i.e., BLSTM-DC and conv-DC) by setting their dimensions $D$ of the embedding space at the same value. Specifically, we tried the following two patterns: $D = 20$ and $D = 90$, which we used for the BLSTM-DC and conv-DC, respectively, in Section \ref{sec:experiment}. Aside from $D$, we used the same settings (including all the hyperparameters for the proposed X-DC) as in Section \ref{sec:experiment}. Figures \ref{fig:SDR2_D20}, \ref{fig:SDR4_D20}, and \ref{fig:SDR8_D20} show the results for the setting of $D = 20$, and Figures \ref{fig:SDR2_D90}, \ref{fig:SDR4_D90}, and \ref{fig:SDR8_D90} show those for $D = 90$.
When the training data set contains exactly $2$ different speakers (i.e., Figures \ref{fig:SDR2_D20} and \ref{fig:SDR2_D90}), the setting of $D$ did not have a large impact on the relative speech separation performances of the two models. On the other hand, when there were $4$ or $8$ different speakers in the training data set, the relative performances of these models changed more drastically according to the setting of $D$. From these figures, we see that the BLSTM-DC was more robust to the change of $D$ than the conv-DC. \begin{figure*}[p] \centering \includegraphics[width=0.95\hsize]{result2_D20-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$2$ speakers} in the training data set, \textbf{$D = 20$}).}\vspace{5mm} \label{fig:SDR2_D20} \includegraphics[width=0.95\hsize]{result4_D20-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$4$ speakers} in the training data set, \textbf{$D = 20$}).}\vspace{5mm} \label{fig:SDR4_D20} \includegraphics[width=0.95\hsize]{result8_D20-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$8$ speakers} in the training data set, \textbf{$D = 20$}).} \label{fig:SDR8_D20} \end{figure*} \begin{figure*}[p] \centering \includegraphics[width=0.95\hsize]{result2_D90-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$2$ speakers} in the training data set, \textbf{$D = 90$}).}\vspace{5mm} \label{fig:SDR2_D90} \includegraphics[width=0.95\hsize]{result4_D90-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$4$ speakers} in the training data set, \textbf{$D = 90$}).}\vspace{5mm} \label{fig:SDR4_D90} \includegraphics[width=0.95\hsize]{result8_D90-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$8$ speakers} in the training data set, \textbf{$D = 90$}).} \label{fig:SDR8_D90} \end{figure*} \section{Experiment with different $M$ and $J$ settings in the X-DC} \label{sec:ap_xdc_M_J} To check the effect of changing the frame width of a convolutional kernel $M$ and the number of spectrogram templates $J$ of the proposed X-DC model on the speech separation performance, we tried multiple combinations of their settings. Aside from the values of $M$ and $J$, we used the same settings as in Section \ref{sec:experiment}. Figures \ref{fig:SDR2_JM}, \ref{fig:SDR4_JM}, and \ref{fig:SDR8_JM} show the speech separation performances of the proposed X-DC model under nine settings of $(J, M)$ (the combinations of three settings of $J$ and three settings of $M$). From Figure \ref{fig:SDR2_JM}, when the training data set contains $2$ types of speakers, the SDR, SIR, and SAR did not change significantly with the setting of $M$ and $J$ in most cases. In the case where the training data set contains $4$ or $8$ different speakers, from Figures \ref{fig:SDR4_JM} and \ref{fig:SDR8_JM}, we see that the performance of the X-DC was more strongly affected by the setting of $(M, J)$. For instance, in Figure \ref{fig:SDR4_JM}, the setting of $(M, J)=(15, 80)$ yielded lower performance with a set of known (male, female) speakers than the other settings of $(M, J)$, while it achieved higher performance with a set of known (male, male) speakers than the other settings of $(M, J)$ in most cases.
\begin{figure*}[!t] \centering \includegraphics[width=\hsize]{result_JM_2-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between different settings of $J$ and $M$ (\textbf{$2$ speakers} in the training data set).}\vspace{5mm} \label{fig:SDR2_JM} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=\hsize]{result_JM_4-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between different settings of $J$ and $M$ (\textbf{$4$ speakers} in the training data set).}\vspace{5mm} \label{fig:SDR4_JM} \end{figure*} \begin{figure*}[!t] \centering\includegraphics[width=\hsize]{result_JM_8-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between different settings of $J$ and $M$ (\textbf{$8$ speakers} in the training data set).} \label{fig:SDR8_JM} \end{figure*} \section{Experiment using larger training data set with $16$ different speakers} \label{sec:ap_16speakers} Here, we compare the performances of the proposed and conventional models by using a larger training data set than in Section \ref{sec:experiment}. Specifically, we tried the case where the training data set contained $16$ different speakers of the CMU Arctic speech databases \cite{arctic2004} and L2-ARCTIC \cite{arctic2018}. \begin{table*}[t] \centering \caption{Experimental settings of the speakers in the training, validation, and test data sets. ``Known'' shows that the pair of speakers in the test data set has also been in the training data set, and ``unknown'' shows otherwise. M and F, respectively, stand for male and female speakers.}\vspace{0.02\hsize} \begin{tabular}{|c||p{0.25\hsize}|c|c|c|c|} \hline \rowcolor[rgb]{0.93, 0.93, 0.93} \multicolumn{1}{|c||}{\textbf{Setting \#}} & \multicolumn{1}{c|}{\textbf{Training data}} & \multicolumn{1}{c|}{\textbf{Validation data}} & \multicolumn{3}{c|}{\textbf{Test data}} \\ \hline \hline $19$ & \multirow{6}{*}{\parbox{\hsize}{\textbf{$16$ speakers} (bdl, clb, rms, slt, aba, zhaa, bwc, lxc, asi, svbi, hkk, hjk, ybaa, mbmps, rrbi, ncc)}} & \multirow{6}{*}{bdl, clb} & \multirow{3}{*}{Known} & M/F & bdl, clb \\ $20$ & & & & M/M & bdl, rms \\ $21$ & & & & F/F & clb, slt \\ \cline{4-6} $22$ & & & \multirow{3}{*}{Unknown} & M/F & ykwk, tni \\ $23$ & & & & M/M & ykwk, erms \\ $24$ & & & & F/F & tni, ydck \\ \hline \end{tabular} \label{tb:setting_speaker16} \end{table*} Table \ref{tb:setting_speaker16} shows the experimental settings of the speakers in the training, validation, and test data sets. We generated these data sets based on the same procedure as in Section \ref{sec:experiment}. For both the validation and test data sets, in any case (i.e., known or unknown speakers and gender combination of the two speakers in the test phase), we used $66$ sets of mixed speech signals of the two speakers shown in Table \ref{tb:setting_speaker16}. The detailed numerical settings are as follows: \begin{itemize} \item For the conventional BLSTM-DC model \cite{Hershey2016}, we set the dimension of the embedding space at $D = 20$, the number of hidden cells in each BLSTM layer at $600$, the number of BLSTM layers at $3$, the learning rate at $10^{-5}$, and the number of epochs for training at $T = 1100$. Under these settings, the number of learnable parameters in the conventional DC model of a BLSTM network is $23,880,160$.
\item For the conventional conv-DC model \cite{Li2018}, we set the dimension of the embedding space at $D = 40$, the number of output channels in the middle convolutional layers at $C = 11$, the number of middle convolutional layers at $L^{\mathrm{conv}} = 3$ (layers) $\times N^{\mathrm{block}}$ (blocks) with $N^{\mathrm{block}} = 45$, the learning rate at $\eta = 5 \times 10^{-4}$, and the number of epochs for training at $T = 600$. Under these settings, the number of learnable parameters in the conventional DC model of a gated convolutional network is $1,256,658$. \item For the proposed X-DC model, we set the maximum potential number of speakers at $I = 2$, the regularization hyperparameter at $\lambda = 5 \times 10^{-4}$, the frame width of a convolutional kernel at $M = 15$, the number of spectrogram templates at $J = 50$, the number of output channels in the middle convolutional layers at $C = 188$, the number of layers of the ``NMFD part'' at $L^{\mathrm{NMFD}} = 7$, the learning rate at $\eta = 5 \times 10^{-3}$, and the number of epochs for training at $T = 900$. Under these settings, the number of learnable parameters in the proposed X-DC model is $1,197,604$. \item We used the same common settings as in Section \ref{sec:experiment} for the batch size, sampling rate, window size of the STFT, window shift, number of time bins of an input spectrogram, and training algorithm. The number of utterances in the training data set of each of the eight speakers (that have not been used in Section \ref{sec:experiment}) is as follows: $1,000$ (mbmps, erms), $999$ (ncc, ykwk, tni, ydck), and $998$ (ybaa, rrbi). \end{itemize} The above hyperparameter settings (i.e., $T$ for the BLSTM-DC, $D$, $C$, $N^{\mathrm{block}}$, $\eta$, and $T$ for the conv-DC, and $J$, $C$, $L^{\mathrm{NMFD}}$, $\eta$, $T$, and $\lambda$ for the proposed X-DC) were chosen by hold-out validation with $66$ validation data sets. Figure \ref{fig:SDR16} shows the results of the speech separation performance of the proposed and conventional models, where the training data set contains $16$ different speakers. From this figure, we see that in the setting of ``unknown (m, f)'' (i.e., the test input contains male and female speakers who were not included in the training data set), the conv-DC and BLSTM-DC achieved as high performance as in the case of ``known (m, f)'' (i.e., the test input contains male and female speakers who were included in the training data set), except for some cases (e.g., SDR and SAR for Speaker $1$). On the other hand, when the gender combination of the test data set is (male, male) or (female, female), the proposed X-DC maintained as high performance with unknown speakers as with known ones in most settings.
\begin{figure*}[!t] \centering \includegraphics[width=0.95\hsize]{result16-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance (\textbf{$16$ speakers} in the training data set).} \label{fig:SDR16} \end{figure*}
\section{Experiment under the condition where the assumed number of masks exceeds the actual number of speakers} \label{sec:ap_potential_speakers} The proposed X-DC model enables us to separate an input mixed speech signal into an arbitrary number $I$ of sources. Here, $I$ represents the maximum potential number of speakers, and its value is not required to be exactly the same as the true number of speakers $I^{\mathrm{true}}$ in the mixed speech signal.
Here, we consider the case of $I^{\mathrm{true}} = 2$ as in Section \ref{sec:experiment}, and check the performance of the proposed X-DC under multiple settings of $I$, where $I \geq I^{\mathrm{true}}$. In the test case of $I > I^{\mathrm{true}}$, we first obtain $I$ separated spectrograms $\tilde{H}^{(1)}, \dots, \tilde{H}^{(I)}$ using the trained X-DC model, and select the $I^{\mathrm{true}}$ spectrograms with the maximum Frobenius norms. Then, for the selected $I^{\mathrm{true}}$ spectrograms, we compute the SDR, SIR, and SAR and plot the results. Figures \ref{fig:SDR_potential_2}, \ref{fig:SDR_potential_4}, and \ref{fig:SDR_potential_8} show the speech separation performances of the proposed X-DC model under the different settings of $I$. From these figures, we see that when the training data set contains $2$ or $4$ different speakers, the models with $I = 2$ or $I = 3$ achieved the best performance in most cases in terms of the SDR. However, in other cases, the setting of $I=2$ (i.e., the maximum potential number of speakers is set at the true number of speakers) did not always yield the best test performance. One possible reason for this result would be that when the input signal could not be successfully separated, the ``interference factor'' (i.e., the signal of speakers different from the one being focused on) became smaller when redundant potential speakers (i.e., $I \geq 3$) were allowed. Moreover, in practice, it is often the case that the true number of speakers in an input signal is unknown in advance. The proposed X-DC is applicable even in such cases, by setting $I$ at some sufficiently large value.
\begin{figure*}[!t] \centering \includegraphics[width=\hsize]{result_pot_2-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between different settings of the maximum potential number of speakers $I$ (\textbf{$2$ speakers} in the training data set).}\vspace{5mm} \label{fig:SDR_potential_2} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=\hsize]{result_pot_4-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between different settings of $I$ (\textbf{$4$ speakers} in the training data set).}\vspace{5mm} \label{fig:SDR_potential_4} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=\hsize]{result_pot_8-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between different settings of $I$ (\textbf{$8$ speakers} in the training data set).} \label{fig:SDR_potential_8} \end{figure*}
\section{Comparison with DANet} \label{sec:ap_danet} While the original DC uses a binary mask to perform separation, the proposed X-DC uses a soft mask. This difference may have had some effect on the SDR scores shown earlier. To evaluate the performance excluding the effect of the choice of mask type, we also compared X-DC with DANet \cite{Chen2017}, an end-to-end permutation-free speech separation method that, like X-DC, uses a soft mask to perform separation.
Given a magnitude spectrogram $X \in \mathbb{R}^{F \times N}$ of a mixture signal of $I$ sources, DANet learns to obtain a Wiener mask $\hat{m}_{f, n}^{(i)}$ that minimizes the following loss function: \begin{align} &\mathcal{J}^{\mathrm{DANet}} (\theta) = \frac{1}{FNI} \sum_{f = 1}^F \sum_{n = 1}^N \sum_{i = 1}^I \left| X_{f, n} \left( m_{f, n}^{(i)} - \hat{m}_{f, n}^{(i)} \right) \right|, \end{align} where $m_{f, n}^{(i)} \equiv \left( \tilde{X}^{(i)}_{f, n} \right)^2 / \left( X_{f, n}^2 + 10^{-16} \right)$ and $\tilde{X}^{(i)}_{f, n}$ is the magnitude spectrogram of the $i$th speaker. In DANet, the estimated Wiener mask is given by \begin{align} \hat{m}_{f, n}^{(i)} = \sigma \left( \sum_{d = 1}^D A_{i, d} v_{F(n-1)+f, d} \right), \end{align} where $A = (A_{i, d})_{1 \leq i \leq I, 1 \leq d \leq D}$ and $V = (v_{k, d})_{1 \leq k \leq FN, 1 \leq d \leq D}$, respectively, are the \textit{attractors} and embeddings of the TF bins, and $\sigma (x) \equiv 1/(1+\exp(-x))$. The embeddings $V$ are obtained by feeding the magnitude spectrogram $X$ of a mixture signal into BLSTM layers: $V = \mathrm{BLSTM} (X)$. As for the attractors $A$, they are given by \begin{align} A_{i, d} = \frac{\sum_{k = 1}^{FN} v_{k, d} y_{k, i}}{\sum_{k = 1}^{FN} y_{k, i} + 10^{-8}}, \end{align} where $y_{k, i} = 1$ if the $i$th speaker is dominant at the $k$th TF point, and $y_{k, i} = 0$ otherwise. Based on these definitions, we can obtain the estimated Wiener mask $\hat{m}_{f, n}^{(i)}$ for each speaker $i$ from the input mixture $X$ through end-to-end training. By using the same data sets as in Section \ref{sec:experiment}, we compared the speech separation performance of the proposed X-DC and the conventional methods including DANet\footnote{We implemented DANet by referring to the source code provided by the authors of \cite{Chen2017}: \url{https://github.com/naplab/DANet}.}. Aside from the hyperparameter settings for DANet, we used the same experimental settings as in Section \ref{sec:experiment}. The detailed numerical settings for DANet are as follows: \begin{itemize} \item For any number of speakers in the training data set, we set the number of hidden cells in each BLSTM layer at $300$, the number of BLSTM layers at $3$, the learning rate at $10^{-5}$, and the number of epochs for training at $T = 800$. \item When there are two or four speakers in the training data set, we set the dimension of the embedding space at $D = 70$. Under this setting, the number of learnable parameters in DANet is $10,747,760$. \item When there are eight speakers in the training data set, we set $D = 20$. Under this setting, the number of learnable parameters in DANet is $6,901,360$. \end{itemize} The above hyperparameter settings of $D$ and $T$ were chosen by hold-out validation with $66$ validation data sets. Figures \ref{fig:SDR_danet_2}, \ref{fig:SDR_danet_4}, and \ref{fig:SDR_danet_8} show the speech separation performance of the proposed and conventional models including DANet, where the training data set contains $2$, $4$, and $8$ different speakers, respectively. From these results, we see that DANet achieved as high speech separation performance as the other DC methods in most settings.
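To make the above definitions concrete, the following is a minimal NumPy sketch of the attractor, mask, and loss computations; the array shapes, the flattened ordering of the TF bins, and the function names are illustrative assumptions (an actual implementation obtains $V$ from the BLSTM and trains end-to-end by automatic differentiation, as in the authors' code):

\begin{verbatim}
import numpy as np

def danet_masks(V, Y, eps=1e-8):
    # V: (F*N, D) embeddings of the TF bins, V = BLSTM(X) in the text.
    # Y: (F*N, I) binary dominance indicators y_{k,i}.
    # Attractors A_{i,d}: dominance-weighted means of the embeddings.
    A = (Y.T @ V) / (Y.sum(axis=0)[:, None] + eps)      # (I, D)
    # Masks: sigmoid of the embedding-attractor inner products.
    return 1.0 / (1.0 + np.exp(-(V @ A.T)))             # (F*N, I)

def danet_loss(x, x_src, m_hat):
    # x: (F*N,) mixture magnitudes; x_src: (F*N, I) source magnitudes.
    m = x_src**2 / (x[:, None]**2 + 1e-16)              # Wiener masks
    return np.mean(np.abs(x[:, None] * (m - m_hat)))    # J^DANet
\end{verbatim}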
\begin{figure*}[p] \centering \includegraphics[width=\hsize]{result_danet2-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between the proposed X-DC, conv-DC, BLSTM-DC, and DANet (\textbf{$2$ speakers} in the training data set).}\vspace{5mm} \label{fig:SDR_danet_2} \includegraphics[width=\hsize]{result_danet4-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between the proposed X-DC, conv-DC, BLSTM-DC, and DANet (\textbf{$4$ speakers} in the training data set).}\vspace{5mm} \label{fig:SDR_danet_4} \includegraphics[width=\hsize]{result_danet8-eps-converted-to.pdf}\vspace{-2mm} \caption{Comparison of speech separation performance between the proposed X-DC, conv-DC, BLSTM-DC, and DANet (\textbf{$8$ speakers} in the training data set).} \label{fig:SDR_danet_8} \end{figure*} \end{appendices} \clearpage \bibliographystyle{abbrv}
\section{Introduction} Transport is among the best probes of quantum many-body systems, because it is sensitive to the nature of the system's excitations. Recently, quantum simulation has emerged as a new method to tackle the many-body problem by using a controlled cold atomic gas as a model for electrons in condensed matter \cite{Cirac:2012aa}. With this approach getting ready to address the open questions of condensed matter physics, there is a growing interest in the direct measurement of transport properties in atomic gases \cite{Chien:2015ab,0953-8984-29-34-343003}. The current methods used to investigate transport in cold atomic gases suffer from the intrinsically destructive nature of the observation. Even when snapshots of the density distribution are obtained at the level of individual atoms, the investigation of the dynamics involves sample-to-sample fluctuations. As a result, the noise in the preparation directly feeds into the measurement outcomes, rendering cold-atom transport measurements far less precise than their condensed-matter counterparts~\cite{BLANTER20001}. In this paper, we describe a method for the continuous measurement of atomic currents over single realizations of a quantum gas, which applies equally well to weakly and strongly interacting gases and operates at the ultimate limit set by quantum-mechanical back-action. The concept is depicted in Fig. \ref{fig:twoTerms}: it relies on (i) the two- (or multi-)terminal configuration, where the system of interest is connected to large atomic reservoirs that allow particles to be injected and collected, and (ii) the use of continuous measurements of atom numbers using a high-finesse cavity and a probe laser far from the atomic resonance. The atomic current consists of atoms continuously entering and leaving the reservoir, thereby interacting with the cavity mode and causing a phase shift of the probe laser, which is measured by a quantum-limited interferometer. The high-finesse cavity ensures that the phase shift and the measurement back-action do not suffer significantly from the effects of spontaneous emission \cite{Lye:2003aa,2011AAMOP..60..201T}.
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.3\textwidth]{cavity.png} \caption{Experimental concept of the atomic current detector. An atomic cloud is shaped in a two-terminal setup with large reservoirs connected by a mesoscopic channel. One reservoir overlaps with the mode of a single-sided optical cavity. A chemical potential difference introduced between the two reservoirs drives a quasi-DC current in the channel. A probe beam far detuned from the atomic resonance is sent onto the cavity, and its phase, measured by homodyne detection, provides a real-time measurement of the atom number in the reservoir.} \label{fig:twoTerms} \end{center} \end{figure}
The outline of this paper is as follows. In Sec.~II, we investigate the continuous dispersive measurement of a reservoir in the absence of currents, and express both the noise spectrum and the heating due to measurement back-action in terms of Tan's contact, the parameter relating the two-body correlations at short distance to the macroscopic properties of the cloud \cite{Tan:2008ac,braaten2012universal}. The connection between Tan's contact and measurement back-action provides a quantitative estimate of the measurement outcomes, and allows one to identify a good-cavity regime where the back-action vanishes, realizing an emergent QND measurement.
Similar to Tan's relations, this applies to an arbitrary strength of interaction, demonstrating the universal character of the measurement scheme. This complements existing proposals focusing on single-particle physics or lattice systems \cite{Eckert:2008aa,PhysRevA.87.021601,PhysRevA.90.023628,PhysRevLett.102.020403,PhysRevLett.115.060401,PhysRevLett.115.095301,Yang:2017aa}. In Sec.~III, we consider the reservoirs connected by a channel carrying atomic currents. There, even in the good-cavity regime, the observation of a reservoir produces a back-action on the transport process, which we interpret as fluctuations of the chemical-potential bias across the channel. Together with the intrinsic detection noise, this back-action yields a finite, universal quantum limit on the precision of the atomic current measurement. Although the atom-field coupling is treated within linear response theory, this result assumes neither Fermi-liquid behavior nor linear response of the atomic current to the applied bias. We express the limit explicitly in terms of a finite-bias admittance of the channel. Section~IV discusses possible experiments with cold atomic gases. In the appendices, we discuss technical details of our formulation. \section{Continuous reservoir observation} We first consider a closed reservoir containing $N$ fermions at zero temperature, dispersively coupled to the optical field in a Fabry-Perot cavity which, in turn, is coupled to an environment through an imperfect mirror. The field is far detuned from the atomic resonance such that the excited states of the atoms are not populated. The atoms populate two hyperfine states equally, and for simplicity we assume these two states to be coupled identically to the field, though this assumption is not essential. Then the Hamiltonian of the system reads \cite{Ritsch:2013aa} \begin{equation} \hat{H} = \hat{H}_{at} + \omega_c \hat{d}^\dagger \hat{d} + \Omega \hat{M} \hat{d}^\dagger \hat{d}. \end{equation} Here $\hat{H}_{at}$ is the Hamiltonian of the atoms in the absence of the cavity field, consisting of the kinetic energy, the interaction energy between the two spin components, and a possible trapping potential. The empty cavity has frequency $\omega_c$, and $\hat{d}$ ($\hat{d}^\dagger$) annihilates (creates) a photon in the cavity. The coupling $\Omega$ of the atoms to the field represents the shift of the cavity resonance due to the presence of one atom maximally coupled to the field, and $\hat{M}$ is the overlap of the density distribution of the atoms with the cavity mode: \begin{equation} \hat{M} = \int d\mathbf{r} \cos^2(kz) \hat{n}(\mathbf{r}) = \frac{1}{2}\hat{N} + \frac 1 4 (\hat{n}_{2k} + \hat{n}_{-2k}), \end{equation} where $k$ is the wave vector of cavity photons. We have introduced the operators for the total atom density $\hat{n}(\mathbf{r})$, the total atom number $\hat{N}$, and the density fluctuations $\hat{n}_{2k} = \int d\mathbf{r} e^{-2ikz} \hat{n}(\mathbf{r})$. Here we consider a situation in which the waist of the cavity mode is much larger than the atomic cloud. We describe a coherent driving resonant with the cavity and the coupling to the vacuum using the input-output formalism \cite{gardiner2004}. We decompose the atom-field coupling into a non-fluctuating part which we include in $\hat{H}_{at}$~\footnote{This implies the presence of a static lattice modulation. For $k\gg k_F$ the modulation is negligible as long as the number of intra-cavity photons is small compared with $\hbar k^2/2m\Omega$.
Moreover, the static lattice can experimentally be suppressed using interrogation techniques described in Refs. \cite{PhysRevA.94.061601,1367-2630-19-8-083002}} and a part containing the vacuum fluctuations $\hat{\eta}$. To first order in $\hat{\eta}$, the coupling Hamiltonian reads $\hat{F} \hat{M}$, where $\hat{F}=ig ( \hat{\eta}^\dagger - \hat{\eta})$, with $g = 2 \Omega \sqrt{\Phi/\kappa}$ the measurement strength, $\Phi$ the photon flux incident on the cavity, and $\kappa$ the cavity decay rate. Importantly, to first order in fluctuations, the time evolution of $\hat{F}$ is decoupled from that of the atoms, so that the freely evolving $\hat{F}(t)$ can be directly treated as a perturbation for the atoms. The presence of the operators $\hat{n}_{\pm 2k}$ in $\hat{M}$ implies that $\hat{M}$ does not commute with the atomic Hamiltonian, and that the measurement is thus destructive. In fact, the Heisenberg equation directly relates the commutator of $\hat{M}$ with $\hat{H}_{at}$ to the energy absorption rate due to the measurement: \begin{equation} \frac{d \hat{H}_{at} }{dt} = - i \left[ \hat{H}_{at}, \hat{M} \right ] \hat{F}. \end{equation} We evaluate this expression using linear response theory, obtaining (see Appendix B for details) \begin{equation} \frac{d\mathcal{E}_{\mathrm{at}}}{dt}= -\frac{g^2\kappa}{16n}\chi^{R}(2k,i\kappa/2), \end{equation} where $\mathcal{E}_{\mathrm{at}}$ is the energy per atom, $n$ is the atomic density, and $\chi^R$ is the retarded density response function, which is determined by an equilibrium average \footnote{It might seem that the heating rate given by the response at momentum $2k$ is due to the cosine shape of the mode function in the Fabry-Perot configuration. In fact, heating is due to the effect of photon back-scattering onto atoms, imparting momentum $2k$ to the cloud, which arises regardless of the mode profile. Choosing instead a ring cavity yields the same result up to a numerical factor, as discussed in Appendix E. }. We now focus on the case with $2k\gg k_F$ and $\kappa/2\gg \epsilon_F$, which we expect to be realized in typical cold-atom experiments \cite{2007Natur.450..272C,Brennecke:2007aa,PhysRevA.75.063620,Murch:2008aa} (note however Ref.~\cite{Keler:2014aa}). In this regime, the density response function can be systematically evaluated using the operator product expansion~\cite{PhysRevLett.100.205301,PhysRevA.81.063634,PhysRevA.84.043603,PhysRevA.85.013613,PhysRevX.7.011022} (OPE), regardless of the interaction strength, temperature, and phase of matter. The result of this expansion is expressed as \begin{multline} \frac{1}{g^2}\frac{d\mathcal{E}_{\mathrm{at}}}{dt} = g_n(x) + g_c(x,2k a)\frac{C}{k_F^4} \left( \frac{k_F}{2k}\right) \\ +g_H(x) \frac{\mathcal{E}_{\mathrm{at}}}{\mathcal{E}_{0\mathrm{at}}}\left( \frac{k_F}{2k}\right)^2+..., \label{eq:heating} \end{multline} where $C$ is Tan's contact density, $\mathcal{E}_{0\mathrm{at}} = 3\epsilon_F/5$ is the energy per atom of a noninteracting Fermi gas, $g_n$ is a function of $x=\frac{\kappa m}{4 k^2}$, independent of interactions, and $g_c$ and $g_H$ are universal functions of $x$ and $ka$, where $a$ is the $s$-wave scattering length. The analytic expressions of these functions are shown in Appendix C. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.45\textwidth]{heating_JP.png} \caption{Dimensionless energy absorption rate as a function of $x$ and the interaction parameter $1/(k_Fa)$.
The cuts in red, green, and blue show the dimensionless energy absorption rate for $1/(k_Fa)=-0.5,\,0,$ and $0.5$, respectively.} \label{fig:heating_rate} \end{center} \end{figure} Figure \ref{fig:heating_rate} presents the right-hand side of Eq.~\eqref{eq:heating} for the ratio $k_F/(2k) = 0.2$, in line with typical experiments using $^6$Li atoms, and well within the regime of validity of the OPE~\cite{PhysRevLett.109.050403,Hoinka:2013aa}. The normalization implies that variations of $x$ (i.e., $\kappa$) are taken at a constant mean intra-cavity photon number. In the regime of large $x$, the heating rate tends to vanish according to a power law, a feature expected from Tan's relations. In the opposite regime $x<1$, heating decreases linearly to zero due to the saturation of $\chi$ at low $\omega$. There, the cavity responds too slowly to resolve the atomic motion along the cavity mode, thereby realizing an emergent QND measurement \cite{PhysRevA.43.3819,PhysRevLett.120.133601}. Importantly, the conditions for both QND operation $\left(\kappa<\frac{4k^2}{m}\right)$ and the validity of the OPE $(\epsilon_F<\kappa)$ can be fulfilled simultaneously, since $\epsilon_F$ and $\frac{4k^2}{m}$ differ by more than one order of magnitude in typical experiments \cite{PhysRevLett.109.050403,Hoinka:2013aa}. As interactions are varied from the BCS to the BEC limit, the maximum shifts towards low frequencies, as a result of pairing becoming more pronounced. Here, we have neglected heating due to spontaneous emission~\cite{Gerbier:2010aa}, which is justified for cavities with large enough cooperativities (see Appendix F for details). The input-output formalism predicts that the photocurrent of the homodyne detector can be expressed, after appropriate renormalization and up to a constant offset, as $I_h(t) = \hat{b}_{\mathrm{out}} + \hat{b}^\dagger_{\mathrm{out}}$. Here $\hat{b}_{\mathrm{out}}$ describes the field emanating from the cavity, and the phase of the interferometer is chosen to be zero. The mean homodyne current relates to the atom number through $\langle \hat{N} \rangle = \sqrt{\kappa} \langle I_h \rangle/ 2g$. Using again linear response theory, we relate the photocurrent noise spectral density, referred back to the atom number, to the dynamical structure factor $S$ of the gas (see Appendix D for details): \begin{multline} \mathcal{S}_{NN}(\omega) = \frac{\kappa}{4g^2} + N^2 \delta(\omega)+ \frac {V} 4 \frac{S(2k,\omega) + S(2k,-\omega)}{1 + \left(\frac{2\omega}{\kappa}\right)^2}, \label{eq:noise} \end{multline} where $V$ is the volume of the system. The first term on the right-hand side shows the imprecision introduced by the photon shot noise, the second term arises from the constant value of $N$, and the last term represents the quantum fluctuations of atoms in the cavity mode. While a similar form of the noise was obtained in a Bose-Einstein condensate inside a cavity from a different perspective~\cite{landig2015aa}, we do not rely on the mean-field approximation, and the above formula is valid for any system in the weak atom-field coupling regime. Similar to the heating rate, the atomic contribution to the noise is also universal in the regime where the OPE is valid. Unlike the density response at imaginary frequencies, the OPE expansion of the structure factor has already been considered, in Refs.~\cite{PhysRevA.81.063634,PhysRevA.84.043603,PhysRevA.85.013613,PhysRevX.7.011022}.
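For orientation, the relative size of the photon shot-noise floor and the atomic contribution in Eq.~\eqref{eq:noise} can be evaluated numerically once a model of the dynamical structure factor is supplied. The following is a minimal sketch away from $\omega = 0$ (the $N^2\delta(\omega)$ term is dropped), in which \texttt{S2k} is a user-supplied placeholder for $S(2k,\omega)$ and the default parameter values are purely illustrative assumptions:

\begin{verbatim}
def atom_number_noise(omega, S2k, kappa=1.0, g=0.1, V=1.0):
    # Eq. (noise) for omega != 0: photon shot-noise floor plus the
    # cavity-filtered quantum fluctuations of the atoms in the mode.
    shot = kappa / (4.0 * g**2)
    cavity_filter = 1.0 / (1.0 + (2.0 * omega / kappa)**2)
    return shot + 0.25 * V * (S2k(omega) + S2k(-omega)) * cavity_filter
\end{verbatim}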
In the good-cavity regime, the contribution of the atomic fluctuations becomes negligible, since $S$ decreases according to a power law in the low-frequency regime, confirming the emergent QND character of the measurement. Our analysis applies for any strength of the interactions between fermions in the weak-measurement limit. In the opposite, strong-measurement regime, even the non-interacting Fermi gas shows large nonlinearities \cite{PhysRevA.78.023815,PhysRevLett.104.063601,PhysRevA.83.043606}. \section{Current measurements} We now consider the entire system in the presence of a connection between the reservoirs, which we describe with the following Hamiltonian: \begin{equation} \hat{H} = \hat{H}_\mathrm{at,L} + \hat{H}_\mathrm{at,R} + \hat{H}_\mathrm{t} + \hat{F} \hat{M}, \end{equation} where we introduce a tunneling Hamiltonian $\hat{H}_\mathrm{t}$ \cite{Ingold:1992aa}. We consider the QND regime for the atom-number measurement and replace $\hat{M}$ by $\hat{N}_L/2$, with $\hat{N}_L$ being the atom number operator in the left reservoir. The Hamiltonians for the left and right reservoirs are identical to $\hat{H}_{at}$ from the previous part, and $\left[ \hat{H}_\mathrm{at,L}, \hat{N}_L \right ] = \left[ \hat{H}_\mathrm{at,R}, \hat{N}_L \right ] = 0$. The Heisenberg equation for $\hat{N}_L$ reduces to the commutator with the tunneling Hamiltonian, yielding Kirchhoff's law $\dot{\hat{N}}_L = \hat{I}_{at}$, where $\hat{I}_{at}$ is the atomic current operator. To describe the back-action of the measurement of $\hat{N}_L$, we introduce the phase operator $\hat{\Phi}$ as the generator of an infinitesimal change of the atom number in the left reservoir \cite{Ingold:1992aa}. By construction, it satisfies $\left[ \hat{N}_L, \hat{\Phi} \right ] = i$, and its action on states in the atom-number representation is $-i\frac{\partial}{\partial N_L}$. For a closed reservoir without the probe, the phase operator evolves according to $\dot{ \hat{\Phi}} = \frac{\partial \hat{H}_\mathrm{at,L}}{ \partial N_L}$, as a result of $\left[ \hat{H}_\mathrm{at,L}, \hat{N}_L \right ] = 0$. Provided that the chemical potential in the right reservoir is fixed, the above equation allows one to identify the fluctuations of $\dot{ \hat{\Phi}}$ as those of the chemical-potential difference between the reservoirs. The continuous measurement of the atom number yields noise on the phase as a result of Heisenberg's uncertainty principle, which we evaluate using the Heisenberg equation: \begin{equation} \dot{\hat{\Phi}} = \frac{1}{i} \left[ \hat{H}_\mathrm{at,L} + \hat{H}_\mathrm{at,R} + \hat{H}_\mathrm{t} , \hat{\Phi} \right ] + \frac{1}{2} \hat{F}, \end{equation} where the first term on the right-hand side is the evolution in the absence of measurement, including the dynamical effects of the coupling between the reservoirs and the channel, and the last term represents the random fluctuations due to the continuous observation. Since, in the weak-measurement regime, $\hat{F}$ is not correlated with the reservoir dynamics, the power spectrum of the phase fluctuations is given by $\mathcal{S}_{\Phi \Phi}(\omega) = \mathcal{S}_{\Phi \Phi}^0(\omega) + \frac{\mathcal{S}_{FF}(\omega)}{4 \omega^2}$, where $\mathcal{S}_{FF}$ is the noise spectrum of $\hat{F}$, and $\mathcal{S}_{\Phi \Phi}^0(\omega)$ describes the fluctuations in the absence of measurement~\cite{PhysRevX.6.031002}.
For a probe resonant with the cavity in the weak-measurement regime, the dynamical back-action of the measurement vanishes, since a small change in the atom number does not alter the intra-cavity photon number. The atomic current noise spectrum is then \begin{equation} \mathcal{S}_{II}(\omega) = \left| Y(\omega) \right|^2 \left(\omega^2 \mathcal{S}_{\Phi \Phi}^0(\omega) + \frac{\mathcal{S}_{FF}(\omega) }{4} \right), \end{equation} where we introduce the frequency-dependent admittance of the channel $Y(\omega)$ and the noise spectral density of $\hat{F}$, $\mathcal{S}_{FF}(\omega)$. This assumes the linear response of the atomic current to small fluctuations around the average bias, but does not assume the linearity of the current-bias relation itself~\cite{PhysRevX.6.031002}. Measurements of the current in the setup of Fig. \ref{fig:twoTerms} will proceed by measuring the homodyne signal at times separated by $\tau$, yielding the averaged current operator \begin{equation} \hat{i}_\tau(t) = \frac{\hat{N}(t+\tau) - \hat{N}(t)}{\tau} = \frac 1 \tau \int_t^{t+\tau}\hat{I}_{at}(u) du, \end{equation} where the second equality results from Kirchhoff's law. We assume that $\tau$ is much larger than both $1/\kappa$ and the dwell time of atoms in the channel. The total imprecision on the current measurement then combines the contributions of the detection noise and of the measurement back-action, \begin{equation} \mathcal{S}_{ii}^\mathrm{imp}(\omega) = \mathrm{sinc}^2\left( \frac{\omega \tau}{2} \right) \left[ \frac{\omega ^2 \kappa}{4 g^2} + \left| Y(\omega) \right|^2 \frac{g^2}{\kappa} \frac{1}{1 + \frac{4\omega^2}{\kappa^2}}\right], \end{equation} where, consistently with the emergent QND operation, we have ignored the equilibrium fluctuations of the atoms within the cavity mode, and we have expressed $\mathcal{S}_{FF}(\omega)$ in terms of the cavity parameters. This expression represents the trade-off between noise and back-action as the measurement strength is varied, similar to the standard quantum limit in cavity optomechanics \cite{Clerk:2010aa,Aspelmeyer:2014aa}. We illustrate this for the case of a fully open quantum point contact at low bias by using the universal conductance quantum as the low-frequency admittance. The total current imprecision $\delta_{ii}^2$ obtained by integration over the bandwidth $1/\tau$ is presented in Fig. \ref{fig:currentSQL}. The lower bound on current fluctuations is of the order of $1/\tau$, typically two orders of magnitude below the technical noise of state-of-the-art cold-atom measurements \cite{PhysRevLett.119.030403}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.35\textwidth]{currentSQL.png} \caption{(Color online) Total imprecision of the current through a fully open quantum point contact over the bandwidth $1/\tau$ as a function of the measurement parameter $g^2 \tau /\kappa$. The dashed red curve represents the contribution of the photon shot noise, and the blue dashed-dotted curve represents the contribution from the measurement back-action. } \label{fig:currentSQL} \end{center} \end{figure} \section{Discussion} The above result is universal in that it does not rely on a Fermi-liquid description of the reservoirs and thus applies to both normal and superfluid phases of interacting fermions. It is a consequence of the existence of the emergent QND measurement of the population of a reservoir, which rests on Tan's relations.
It provides a general framework for the quantum simulation of mesoscopic transport using tunable Fermi gases. This concept differs from other proposals where the current operator couples directly with the cavity field via photon-assisted tunneling, which produces a dissipative current \cite{PhysRevLett.116.060401,PhysRevLett.117.175302,PhysRevA.95.043843}. It also differs from mesoscopic electronic devices in that the two terminals together form a closed system, without electromagnetic environments \cite{PhysRevX.6.031002}, thereby allowing for a simplified and universal analysis. The most natural experimental platform would consist of cold $^6$Li atoms in the two-terminal configuration accessible in state-of-the-art experiments~\cite{0953-8984-29-34-343003}, where the light mass of the atom facilitates reaching the QND regime. In the presence of a finite $\kappa$ or finite cooperativity, the measurement is not strictly QND. Phenomenologically, we can treat the energy increase in the reservoir as generating a temperature bias across the channel, leading to an extra thermoelectric contribution to the average current \cite{Brantut:2013aa}. The QND measurement presented here can be generalized to multi-terminal cases, where a comparable number of cavities or cavity modes monitor several reservoirs simultaneously. Further generalizations could describe situations where the cavity is focused on a small region within a single cloud in order to observe the dynamics of the gas \cite{PhysRevLett.120.133601}. In addition, the measurement is, in principle, spin-sensitive, such that spin currents as well as particle currents could be monitored along the same lines \cite{Krinner:2016ab}. \section*{Acknowledgments} We thank C. Galland, T. Donner, C. Altimiras, and T. Giamarchi for discussions and a careful reading of the manuscript, and J. Hofmann and S. Yoshida for discussions on the OPE. JPB acknowledges funding from the ERC project DECCA, EPFL, and the Sandoz Family Foundation-Monique de Meuron program for academic promotion. SU is supported by JSPS KAKENHI Grant No.~JP17K14366. MU is supported by JSPS KAKENHI Grant No.~JP26287088, a Grant-in-Aid for Scientific Research on Innovative Areas ``Topological Materials Science'' (KAKENHI Grant No. JP15H05855), and the Photon Frontier Network Program from MEXT of Japan.
\section{Introduction} \label{s:intro} \subsection{Motivation} \label{s:rarrccp} To keep pace with steeply growing global demand, e-service providers increasingly rely on platforms integrating heterogeneous computing resources. Such is the case with \emph{cloud computing}, which has emerged over the last decade aiming to realize the vision of computing as the 5th utility. See, e.g., \citet{buyya09}, \citet{li13}, \citet{caoLiStoj14}, and \citet{meietal15}. A cloud computing environment involves three main stakeholders: customers, service providers, and infrastructure providers. In exchange for fees, customers expect to receive a certain Quality of Service (QoS) level. Penalties to the service provider for degraded QoS are specified in a Service Level Agreement (SLA), which may include clauses for refund of service fees. Service providers need to decide how to provision computing resources to maximize profit. Instead of owning such resources, it is often more economical to lease them from infrastructure providers. In a \emph{static resource allocation} scheme, a fixed \emph{basic infrastructure} is leased on a long-term basis, consisting of heterogeneous parallel multi-server nodes. In a \emph{dynamic resource allocation} scheme (see \citet{meietal15}), such basic infrastructure is complemented with extra external resources that are occasionally leased on a short-term basis when deemed convenient, e.g., due to temporary overload. To fully specify how incoming requests, hereafter referred to as \emph{jobs}, are handled in the latter setting, the service provider needs to adopt a \emph{joint resource allocation and job routing policy}, which prescribes for each incoming job whether to route it to a service node in the basic infrastructure, or to outsource it instead to an external server. Since the average profit depends on the policy adopted, this motivates the research goal of designing policies that are simple to implement and yet nearly optimal for maximizing profit. This paper addresses such an issue as it applies to emerging cloud platforms providing time-critical services, catering to impatient customers whose QoS requirements take the form of \emph{random firm deadlines}, which are unknown to the service provider until they expire. In such platforms, jobs immediately abandon upon missing their deadlines, as they lose all value to the customers. See, e.g., \citet{pdt14} and \citet{chiangetal16}. We consider the two standard types of firm deadlines: \emph{deadlines to the beginning of service} (DBS) and \emph{deadlines to the end of service} (DES). A job's deadline specifies, under DBS, that its service should begin before a certain time, and, under DES, that it should end before a certain time. Thus, an incoming job with a \emph{relative deadline} (time from arrival to deadline expiration) of 5 min.\ should start service within 5 min.\ under DBS, and complete service within 5 min.\ under DES. Otherwise, it abandons. Examples are found, e.g., in \emph{distributed real-time database} applications (see \citet{lasotaetal17}), in particular in those where content is replicated across multiple servers and transactions must be started or completed before current conditions change significantly. Think of online low-latency high-frequency stock trading platforms (see \citet{hasbrouckSaar13}), aiming to complete transactions before current stock prices move beyond given limits. Another example is given by \emph{online retailing} platforms.
Since online shoppers are willing to wait only a limited time for a page to load before navigating away (see \citet{nah04}, and \citet{priyaetal17} on shopping cart abandonment), it is important for such platforms to minimize the fraction of lost customers. An emerging firm real-time application is numerical weather prediction (see \citet{siutaetal16}), where the customer is a weather forecaster submitting computationally demanding models to run on a cloud platform, with current weather conditions as input. If the latter change before the model run completes, the job loses all value and is dropped. In firm real-time platforms, jobs that end up abandoning are harmful not only because they contribute no profit (as the service fee is refunded), but also because their sojourn in the system causes later jobs to abandon as well, further lowering profit. A standard approach is to incorporate \emph{admission control} so that jobs can be rejected on arrival, as in \citet{wuetal12} and \citet{chiangetal16}. However, upfront job rejection has undesirable effects, such as a sure loss of both revenue and customer goodwill. An alternative is to use dynamic resource allocation as explored herein. \subsection{Model formulation} \label{s:md} This paper considers a \emph{Markov decision process} (MDP) model (see \citet{put94}) of a cloud platform as outlined above. See Fig.\ \ref{fig:acrmodel}. Basic long-term leased infrastructure is modeled as a collection of $n$ parallel multi-server nodes, with service node $k = 1, \ldots, n$ having its own queue with unlimited buffer and a finite number $m_k$ of identical exponential servers with rate $\mu_k$. As for external short-term leased infrastructure, it is modeled as a multi-server node labeled by $k = 0$ with $m_0 \triangleq \infty$ servers with rate $\mu_0$. Jobs arrive as a Poisson stream with rate $\lambda$, with independent service times. \begin{figure}[htb!] \centering \psfrag{a}{$\lambda$} \psfrag{b}[cb]{$A(t) = 0$} \psfrag{c}[c]{$A(t) = 1$} \psfrag{d}[lb]{$A(t) = n$} \psfrag{g}[rb]{$L_1(X_1(t))$} \psfrag{h}[lb]{$L_n(X_n(t))$} \psfrag{e}[cb]{$\mu_1$} \psfrag{f}[cb]{$\mu_n$} \psfrag{i}[cb]{$\mu_0$} \includegraphics[width=0.75\textwidth]{acrmodel.eps} \caption{Dynamic resource allocation and job routing model with abandonments.} \label{fig:acrmodel} \end{figure} The \emph{relative deadlines} (time from arrival to deadline expiration) of jobs are modeled as independent exponential random variables with \emph{abandonment rate} $\theta$, being also independent of arrival and service times. The \emph{total abandonment} (or \emph{loss}) \emph{rate} $L_k(i_k)$ for node $k$ when it lies in \emph{state} (number of jobs present) $X_k(t) = i_k \in \mathbb{Z}_+ \triangleq \{0, 1, \ldots\}$ is $L_k(i_k) \triangleq (i_k-m_k)^+ \theta$ under DBS, where $x^+ \triangleq \max(x, 0)$, and $L_k(i_k) \triangleq i_k \theta$ under DES. In \citet{baccelliHebu81}'s notation, node $k$ is modeled as an M/M/$m_k+$M queue. Immediately upon a job's arrival at time $t$, the system controller sends it irrevocably to a service node $A(t) \in \{0, 1, \ldots, n\}$, where such \emph{actions} are prescribed through a policy $\boldsymbol{\pi}$ from the class $\boldsymbol{\Pi}$ of nonanticipative policies. Thus, action $A(t)$ is a (possibly randomized) function of the \emph{observed history} $\mathcal{H}(t) \triangleq \{(\mathbf{X}(s), A(s))\colon 0 \leqslant s < t\} \cup \{\mathbf{X}(t)\}$ of states and actions, where the \emph{system state} at time $t$ is $\mathbf{X}(t) \triangleq (X_k(t))_{k=1}^n$.
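The DBS and DES abandonment-rate functions $L_k$ defined above can be coded directly; as a minimal sketch for a single node:

\begin{verbatim}
def loss_rate(i, m, theta, deadline="DBS"):
    # Total abandonment rate L_k(i) of a node with m servers in state i.
    # DBS: only the (i - m)^+ waiting jobs can still abandon.
    # DES: all i jobs present can still miss their deadlines.
    return (max(i - m, 0) if deadline == "DBS" else i) * theta
\end{verbatim}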
Note that the number of jobs $X_0(t)$ at the external node is not considered here part of the system state. Jobs dispatched to a basic node are scheduled in \emph{first-come first-served} (FCFS) order, which ensures that the sojourn-time distribution of each such job is determined by the node's state found upon arrival. The service provider charges customers an upfront service fee of $F > 0$ per job, normalized here to $F = \$1$, which is fully refunded if the job misses its deadline and abandons. For each job outsourced to the external node, a lump charge $c > 0$ is paid to the infrastructure provider. Note that the latter jobs may still miss their deadlines under DES, with probability $\theta/(\theta + \mu_0)$. Thus, the total expected cost $C$ of allocating a job to the external node, including the possible refund, is $C \triangleq c$ under DBS and $C \triangleq c + \theta/(\theta + \mu_0)$ under DES. We shall assume that $C < 1$, as otherwise it would clearly be uneconomical to use the external node. In such a setting, this paper addresses the design and implementation of effective joint dynamic resource allocation and job routing policies, aiming to maximize the service provider's long-run average expected profit per unit time. Since the average rate of service fees collected per unit time equals $\lambda$, such an objective is equivalent to minimizing the long-run average expected rate per unit time of deadline-missing refunds and short-term leasing charges for use of the external node. The latter problem can be formulated as \begin{equation} \label{eq:panew} \mathop{\textup{minimize}}_{\boldsymbol{\pi} \in \boldsymbol{\Pi}} \, \mathop{\textup{lim\,sup}}_{T \to \infty} \frac{1}{T} \, \ensuremath{\mathsf{E}}_{\mathbf{i}}^{\boldsymbol{\pi}}\Big[\int_0^T \Big(\sum_{k =1}^n L_k(X_k(t)) + \lambda C 1_{\{A(t) = 0\}}\Big) \, dt\Big], \end{equation} where $\ensuremath{\mathsf{E}}_{\mathbf{i}}^{\boldsymbol{\pi}}$ denotes expectation starting from $\mathbf{X}(0) = \mathbf{i} = (i_k)_{k=1}^n$ under policy $\boldsymbol{\pi}$ and ``$\mathop{\textup{lim\,sup}}$'' denotes limit superior. A policy $\boldsymbol{\pi}^* \in \boldsymbol{\Pi}$ is \emph{average-cost optimal} if it minimizes the objective in (\ref{eq:panew}) for every possible initial state $\mathbf{i}$. In the real-time environment of concern here, a practical requirement on policies is that they allow a low-complexity implementation preventing burdensome overheads. Note that, in the computer communications literature, the use of dynamic routing policies that base decisions on the current system state is often considered impractical, due to the communication latency for gathering distributed state information. See \citet{heetal06}. Yet, dynamic policies might be practically implementable for cloud computing applications where latency is negligible relative to job processing times and deadline durations. Think, e.g., of the numerical weather prediction application mentioned above, where processing of each job can take minutes or hours. Problem (\ref{eq:panew}) is a denumerable-state MDP with unbounded state transition and cost rates. Under certain conditions, reviewed in \citet[Ch.\ 7]{guoHL09}, such MDPs have optimal \emph{stationary deterministic} policies, which select actions based on the current state, characterized by the problem's \emph{dynamic programming} (DP) equations.
Yet, computing an optimal policy for the above model by numerically solving its DP equations is generally not possible, as there is an infinite number of equations, one per state. Even if one considers a finite-state model with finite-buffer queues, the DP equations cannot be solved for platforms with more than a few nodes due to the \emph{curse of dimensionality}, as the number of states grows exponentially with the number of nodes. In \S\ref{s:obsei} and \S\ref{s:civdpni} we shall refer to three base instances of the above model with $n = 3$ basic nodes, under both DBS and DES, with parameters as shown in Table \ref{t:bi}, where $\mathbf{m} = (m_1, m_2, m_3)$ and $\boldsymbol{\mu} = (\mu_1, \mu_2, \mu_3)$. Note that base instance $1$ represents a balanced system in that basic nodes have equal total processing capacities $m_k \mu_k \equiv 20$. Base instance $2$ represents an imbalanced system with total processing capacities of basic nodes ordered as $m_1 \mu_1 < m_2 \mu_2 < m_3 \mu_3$, whereas base instance $3$ has $m_2 \mu_2 < m_1 \mu_1 < m_3 \mu_3$. The arrival rate $\lambda$ has been chosen so that the \emph{nominal system load} $\rho \triangleq \lambda / \sum_{k=1}^3 m_k \mu_k$ equals $1$ in each base instance. Note that the information given on the external node is reduced to its expected usage cost $C$, as there is no need to specify $c$ or $\mu_0$. The job's abandonment rate is $\theta = 0.3$, so the mean relative deadline is $10/3$ time units. \begin{table}[!htb] \centering \begin{tabular}{|l|cccccc|} \hline & $n$ & $\lambda$ & $\theta$ & $C$ & $\mathbf{m}$ & $\boldsymbol{\mu}$\\ \hline Base instance 1: & $3$ & $60$ & $0.3$ & $0.2$ & $(2, 5, 10)$ & $(10, 4, 2)$ \\ Base instance 2: & $3$ & $53.5$ & $0.3$ & $0.2$ & $(2, 5, 10)$ & $(8, 3.5, 2)$ \\ Base instance 3: & $3$ & $62.5$ & $0.3$ & $0.2$ & $(2, 5, 10)$ & $(10, 3.5, 2.5)$ \\ \hline \end{tabular} \caption{Base instances for \S\ref{s:obsei} and \S\ref{s:civdpni}.} \label{t:bi} \end{table} \subsection{Heuristic policies based on Bernoulli splitting and routing indices} \label{s:fadhp} The above discussion motivates the interest in designing heuristic policies that, though suboptimal, can be implemented with low complexity and perform well. For such a purpose, this paper deploys four approaches. The first approach produces a \emph{static policy} (blind to the state) given by a \emph{Bernoulli splitting} (BS) of the arrival stream, where the total arrival rate $\lambda$ is split into a rate $\lambda_k$ for each node $k = 0, 1, \ldots, n$, and then each arrival is sent to node $k$ with probability $\lambda_k/\lambda$, independently of other jobs. In an \emph{optimal BS}, the rates $\lambda_k$ are chosen to maximize average profit \emph{within the class of BS policies} (so they are \emph{optimal} in that limited sense). See \citet{lee95}. The other three approaches yield dynamic \emph{index policies} that use the system state in a tractable way, being based on \emph{routing indices} $\varphi_k(i_k)$ attached to each basic node $k = 1, \ldots, n$ as a function of its state $i_k$ (number of jobs present), which are used as a measure of \emph{routing priority}: the lower the index value of a node, the higher the priority for routing a job to it. Thus, upon a job's arrival, it is dispatched to a basic node with currently \emph{lowest index value} (breaking ties arbitrarily), provided that the latter does not exceed the external node's expected usage cost $C$. Otherwise, the job is outsourced to the external node.
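All three index policies thus share this dispatch rule and differ only in the index functions they use. The following is a minimal sketch of the rule, with the index functions supplied externally and ties broken by the lowest node label (one admissible instance of the arbitrary tie-breaking above):

\begin{verbatim}
def dispatch(state, phi, C):
    # state[k]: number of jobs at basic node k+1 (k = 0, ..., n-1);
    # phi[k]: routing index function of that node; C: external-node cost.
    vals = [phi_k(i_k) for phi_k, i_k in zip(phi, state)]
    k = min(range(len(vals)), key=vals.__getitem__)  # lowest index wins
    # Route to that basic node unless its index exceeds C;
    # otherwise outsource the job to the external node (action 0).
    return k + 1 if vals[k] <= C else 0
\end{verbatim}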
If the indices $\varphi_k(i_k)$ can be computed with low complexity, and if in the application at hand the latency for gathering the system state is negligible, such policies may be suitable for practical implementation. For such a policy to make intuitive sense, the index $\varphi_k(i_k)$ must somehow measure the cost of routing a job to basic node $k$ when the latter lies in state $i_k$. This paper considers three approaches to gauging such routing costs. The first approach measures the cost of routing a job to basic node $k$ by considering only the impact on that particular job. This leads to taking as $\varphi_k(i_k)$ the expected refund from the job if routed to node $k$, which yields the \emph{individually optimal} (IO) \emph{index policy}. The second approach takes one step of the \emph{policy improvement} (PI) algorithm for MDPs starting from an optimal BS, yielding the \emph{PI index policy}. The third approach casts the model into the framework of the \emph{multi-armed restless bandit problem} (MARBP), and then deploys \citet{whit88b}'s index policy. This yields the \emph{restless bandit} (RB) \emph{index policy}. \subsection{Target properties for routing indices} \label{s:dpeop} A basic node's routing index $\varphi(i)$ (where the node label $k$ is dropped from the notation) is a function of the node's state $i$, which somehow incorporates some model parameters. On intuitive grounds, it appears reasonable to consider routing indices that satisfy certain structural properties. In particular, the target properties listed in Table \ref{t:indexp} are proposed herein, concerning dependence on the node's state $i$, the system's arrival and abandonment rates $\lambda$ and $\theta$, the external node's (expected) usage cost $C$, and the node's servers' rate $\mu$ and server pool size $m$. Below we use the acronyms \emph{nondecreasing} (ND) and \emph{nonincreasing} (NI). \begin{table}[!htb] \centering \begin{tabular}{|ll|} \hline & Other things being equal, \\ $P_1\colon$ & $\varphi(i)$ is ND in $i$, with $\varphi(i) \nearrow 1$ as $i \nearrow \infty$. \\ $P_2\colon$ & $\varphi(i)$ is ND in $\lambda$. \\ $P_3\colon$ & $\varphi(i)$ is ND in $\theta$, with $\varphi(i) \searrow 0$ as $\theta \searrow 0$ and $\varphi(i) \nearrow 1$ as $\theta \nearrow \infty$. \\ $P_4\colon$ & $\varphi(i)$ is NI in $C$ for large enough $C$. \\ $P_5\colon$ & $\varphi(i)$ is NI in $\mu$, with $\varphi(i) \nearrow 1$ as $\mu \searrow 0$ and $\varphi(i) \searrow 0$ as $\mu \nearrow \infty$. \\ $P_6\colon$ & $\varphi(i)$ is NI in $m$. \\ \hline \end{tabular} \caption{Target properties for a routing index $\varphi(i)$.} \label{t:indexp} \end{table} \begin{remark} \label{re:tpri} The following is an intuitive rationale for the properties in Table \ref{t:indexp}. \begin{itemize} \item[\textup{(i)}] $P_1$: the more congested a basic node, the lower its routing priority, which can become lower than that of the external node. \item[\textup{(ii)}] $P_2$: the higher the arrival rate, the lower the routing priorities of basic nodes relative to the external node. \item[\textup{(iii)}] $P_3$: the more impatient the customers, the lower the routing priorities of basic nodes relative to the external node. \item[\textup{(iv)}] $P_4$: the routing priority of a basic node does not increase with $C$, for large enough $C$. \item[\textup{(v)}] $P_5$: the faster a node's servers, the higher its routing priority. \item[\textup{(vi)}] $P_6$: the larger a node's server pool, the higher its routing priority.
\end{itemize} \end{remark} The above raises the question of whether properties $P_1$--$P_6$ in Table \ref{t:indexp} are consistent with structural properties of optimal policies, as intuition would suggest. Yet, resolving such an issue is beyond the scope of this paper. \subsection{Goals and contributions} \label{s:gandc} The main goal of this paper is to compare the approaches mentioned above to the design of policies for the present model, with respect both to the complexity of implementing them and to their empirical performance. The paper aims to assess the effects of changes in model parameters on the deviation from optimality of such policies, in order to identify their strengths and weaknesses. For such a purpose, an extensive numerical study is carried out on a wide range of instances. Contributions include the following: (1) formulation of a new MDP model for profit maximization of a cloud platform for impatient customers with dynamic resource allocation and job routing; (2) development of four tractable heuristic policies, along with efficient means for their computation; the means of computing an optimal BS is based on an empirically supported conjecture proposed here on properties of performance metrics for the M/M/$m+$M queue, as well as on new relations reducing the analysis of the M/M/$m+$M queue under DES to the DBS case; (3) identification of qualitative insights on the heuristics considered; and (4) an extensive comparative numerical study on the performance of the policies across a wide range of instances. \subsection{Organization of the paper} \label{s:oofp} The remainder of the paper is organized as follows. \S \ref{s:rw} reviews related work. \S \ref{s:obsm} considers the BS policy. \S \ref{s:ip} discusses the IO, PI and RB policies. \S \ref{s:cbs} reports the results of a comparative benchmarking study on the performance of the policies considered. \S \ref{s:d} presents a final discussion of results. Two appendices contain ancillary material. \ref{s:evalell} presents results on the analysis of the M/M/$m+$M queue under DBS that are required for computing the optimal BS, and shows how to reduce the analysis of the DES case to the DBS case. \ref{a:rbip} outlines how to reformulate the present model into the framework of the MARBP, and reviews computation of the RB policy. \section{Some related work} \label{s:rw} The study of multi-dimensional MDP models for optimal dynamic resource allocation has attracted substantial research attention. Since the numerical solution of their DP equations is hindered by the curse of dimensionality, researchers have sought to identify optimal policies with a simple structure, often of index type. See, e.g., \citet{courVa83}, \citet{katehDerm84}, and \citet{katehJoh84}, which are among the first papers to use MDPs for this kind of problem. Further optimality results are obtained in \citet{katehMel88,katehMel95}, where the latter paper also considers optimal routing to queues. For more complex models, identifying optimal policies is elusive. Yet, works such as \citet{katehLev86} and \citet{katehDerm89} presented methods to analyze the models investigated under light-traffic and heavy-traffic conditions, giving efficient algorithms for finding optimal or asymptotically optimal policies in such regimes. The design and implementation of resource allocation and/or job routing policies in MDP models of cloud or similar e-service platforms has been the subject of substantial recent research attention.
Markovian multi-server queues have often been used as system models in such settings: for blade servers in \citet{li13}, multi-core computer servers in \citet{caoLiStoj14}, and multi-server cloud platforms in \citet{meietal15}. The latter work considered an M/M/$m+$D model of a cloud service provider where all jobs have the same relative deadline. That work assumes a dynamic resource allocation scheme where jobs missing deadlines are sent to a short-term leased server. Research on queueing models with abandonments originated with the work of \citet[60--67]{palm57b} on the M/M/$m+$M queue under DBS, and has been extensively developed. See \citet{baccelliHebu81}. Currently, such models are mostly applied to the analysis of call centers. See, e.g., \citet{zeltynMan05}. More recent work applies them to the study of cloud platforms, as in \citet{pdt14} and \citet{chiangetal16}. Optimal BS policies have been developed for several queueing models, but not for the model considered herein. \citet{buzenChen74} derived the optimal BS for routing jobs to parallel M/G/$1$ queues to minimize mean response time. \citet{lee95} extended such work to a model where jobs are classified into multiple priority classes. \citet{heetal06} applied optimal BS to a model with parallel M/M/$m$ queues. \citet{li13} obtained the optimal BS for routing generic jobs to parallel M/M/$m$ queues that also cater to dedicated jobs. \citet{kallmescass95} applied optimal BS for deadline-miss rate minimization in a model with admission control and routing of soft real-time jobs to parallel M/M/$1$ queues. \citet{heetal06} extended the latter work to parallel M/M/$m$ queues, and \citet[\S 4.1]{nmcor12} further incorporated admission control. As for IO index policies, they are optimal in certain models for routing jobs to parallel symmetric queues (see \citet{winston77} and \citet{johri89} for mean response time minimization, and \citet{movagh05} for deadline-miss rate minimization with firm real-time jobs). Use of such policies in heterogeneous systems has been addressed, e.g., in \citet{chowKohler79} for mean response time minimization with parallel M/M/$1$ queues, and in \citet[\S 3]{nmcor12} for deadline-miss rate minimization of soft real-time jobs with parallel M/M/$m$ queues and admission control. Work on PI index policies includes, e.g., \citet{krish90} and \citet{sassen97}, which considered minimization of mean job response time with parallel M/M/$m$ queues and M/G/$1$ queues, respectively. \citet[\S 4.2]{nmcor12} developed a PI index policy for control of admission and routing to parallel M/M/$m$ queues with soft real-time jobs for deadline-miss rate minimization. Regarding Whittle's RB index policy, its application to admission control and routing to parallel queues was introduced in \citet[\S 8.1]{nmmp02}, in a broad model including that herein as a special case. See also \citet{nmnetcoop07}. \citet[\S 5]{nmcor12} investigated such a policy in a model for control of admission and routing of soft real-time jobs to parallel M/M/$m$ queues. \section{Optimal BS policy} \label{s:obsm} This section develops a heuristic static policy for problem (\ref{eq:panew}) that is optimal within the class of BS policies, adapting the approach in \citet{buzenChen74} to the present model.
In a BS, the actions selected upon job arrivals are drawn according to fixed probabilities: when a job arrives at time $t$, it is dispatched to service node $k = 0, 1, \ldots, n$ (i.e., action $A(t) = k$ is selected) with probability $p_k = \lambda_k/\lambda$, independently of the actions taken on previous jobs, where the $\lambda_k$'s are rates adding up to $\lambda$ to be determined. The input to service node $k$ under such a BS policy is a Poisson process with rate $\lambda_k$, and hence the node behaves as an M/M/$m_k+$M queue with \emph{offered load} $r_k(\lambda_k) \triangleq \lambda_k/\mu_k$ and \emph{offered load per server} $\rho_k(\lambda_k) \triangleq r_k(\lambda_k)/m_k$, which is stable. Let $P_{k, \mathrm{ab}}(\lambda_k)$ be the \emph{abandonment probability} for node $k$, i.e., the probability that a random arrival abandons due to missing its deadline. An \emph{optimal BS} is a globally optimal solution $\boldsymbol{\lambda}^* = (\lambda_k^*)_{k=0}^n$ to the following constrained optimization problem (cf.\ (\ref{eq:panew})), where $\ell_k(\lambda_k) \triangleq \lambda_k P_{k, \mathrm{ab}}(\lambda_k)$ is the \emph{mean abandonment} (or \emph{loss}) \emph{rate} for basic node $k = 1, \ldots, n$: \begin{equation} \label{eq:nlp} \begin{split} & \mathop{\textup{minimize}} \, \sum_{k =1}^n \ell_k(\lambda_k) + C \lambda_0 \\ & \mathop{\textup{subject to}}\colon \lambda_0, \lambda_1, \ldots, \lambda_n \geqslant 0 \\ & \sum_{k =0}^n \lambda_k = \lambda. \end{split} \end{equation} The evaluation of functions $\ell_k$ and their derivatives $\ell_k'$, which is required to solve numerically problem (\ref{eq:nlp}), is addressed in \ref{s:evalell}. \subsection{Computing the optimal BS} \label{s:cobs} We next address computation of the optimal BS. Since (\ref{eq:nlp}) is a smooth linearly constrained nonlinear optimization problem, standard results (see, e.g., \citet{andreasApr11}) yield that, if $\boldsymbol{\lambda}^*$ is a \emph{local optimum}, there is a Lagrange multiplier $\alpha^*$ for its equality constraint satisfying the \emph{Karush--Kuhn--Tucker} (KKT) \emph{first-order optimality conditions}: \begin{equation} \label{eq:kktc} \begin{split} C & \geqslant \alpha^*, \quad \text{ with ``='' if } \lambda_0^* > 0 \\ \ell_k'(\lambda_k^*) & = \alpha^* \quad \text{ for any basic node $k$ having } \lambda_k^* > 0 \\ \alpha_k & \geqslant \alpha^* \quad \text{ for any basic node $k$ having } \lambda_k^* = 0, \end{split} \end{equation} where $\alpha_k \triangleq \ell_k'(0^+) = P_{k, \mathrm{ab}}(0)$ is the probability that an arrival to an empty node $k$ abandons, so $\alpha_k = 0$ under DBS and $\alpha_k = \theta/(\theta + \mu_k)$ under DES. To ensure that such conditions uniquely determine a global optimum for problem (\ref{eq:nlp}) we need the functions $\ell_k$ in the objective to satisfy certain properties such as, e.g., being strictly convex. Yet, establishing such properties is a challenging analytical problem beyond the scope of this paper, due to the complexity of the formulae involved (see \ref{s:evalell}), which, to the author's knowledge, has not been addressed hitherto in the literature. Still, based on observation of particular instances (see, e.g., Fig.\ \ref{fig:ffprime}) the following conjecture on properties of the mean abandonment rate $\ell(\lambda)$ for an M/M/$m+$M queue is proposed herein, where the label $k$ is dropped from the notation. Note that the term ``increasing'' is used below in the strict sense. 
\begin{figure}[!ht] \centering \includegraphics[height=1.7in]{ffpdbs.eps} \caption{$\ell(\lambda)$ and $\ell'(\lambda)$ for an M/M/$m+$M queue under DBS, $m = 4$, $\mu = 1$ and $\theta = 0.5$.} \label{fig:ffprime} \end{figure} \begin{conjecture} \label{con:fkci} Under either DBS or DES$,$ $\ell(\lambda)$ is increasing and strictly convex in $\lambda,$ having a smooth derivative satisfying $\ell'(\lambda) \nearrow 1$ as $\lambda \nearrow \infty.$ \end{conjecture} In the sequel it is assumed that Conjecture \ref{con:fkci} holds, in which case (\ref{eq:nlp}) would be a nonlinear optimization problem with strictly convex objective and linear constraints. Hence, by standard results it would have a unique global optimum $\boldsymbol{\lambda}^*$ determined by the above KKT conditions. We assume henceforth, by reordering if necessary, that $\alpha_1 \leqslant \cdots \leqslant \alpha_n$. Note that under DES this is equivalent to ordering basic nodes in nonincreasing order of server speed, so $\mu_1 \geqslant \cdots \geqslant \mu_n$. Under Conjecture \ref{con:fkci}, it follows that for any basic node $k$ and $\alpha \in (\alpha_k, 1)$ the equation $\ell_k'(\lambda_k) = \alpha$ in variable $\lambda_k$ has a unique root $\lambda_k^*(\alpha) > 0$, which is a smooth increasing function of $\alpha$ satisfying $\lambda_k^*(\alpha) \searrow 0$ as $\alpha \searrow \alpha_k$ and $\lambda_k^*(\alpha) \nearrow \infty$ as $\alpha \nearrow 1$. We shall find it convenient to extend the domain of such a function from $(\alpha_k, 1)$ to $\mathbb{R}_+ \triangleq [0, \infty)$ by further defining \begin{equation} \label{eq:lambdakstext} \lambda_k^*(\alpha) \triangleq \begin{cases} 0 & \quad \textup{for } 0 \leqslant \alpha \leqslant \alpha_k \\ \infty & \quad \textup{for } \alpha \geqslant 1. \end{cases} \end{equation} We shall write, for $\alpha \in \mathbb{R}_+$, $\boldsymbol{\lambda}^*(\alpha) \triangleq (\lambda_k^*(\alpha))_{k=0}^n$, with $\lambda_0^*(\alpha) \triangleq \lambda - \sum_{k=1}^n \lambda_k^*(\alpha)$. Let $\Lambda^*(\alpha) \triangleq \sum_{k=1}^n \lambda_k^*(\alpha)$, and note that \begin{equation} \label{eq:Lambdast} \Lambda^*(\alpha) = \begin{cases} 0 & \textup{if } 0 \leqslant \alpha \leqslant \alpha_1 \\ \displaystyle \sum_{k=1}^l \lambda_k^*(\alpha) & \textup{if } \alpha_l \leqslant \alpha \leqslant \alpha_{l+1}, \quad l = 1, \ldots, n-1 \\ \displaystyle \sum_{k=1}^n \lambda_k^*(\alpha) & \textup{if } \alpha \geqslant \alpha_{n}. \end{cases} \end{equation} Fig.\ \ref{fig:bigLambdastar} displays the function $\Lambda^*(\alpha)$ for a given instance in the DBS and DES cases. The equations $\ell_k'(\lambda_k) = \alpha$ and (\ref{eq:Rstar}) have been solved using the MATLAB \texttt{fzero} function. Note that $\Lambda^*(\alpha)$ is an increasing function of $\alpha$ that is smooth in the DBS case and piecewise smooth with $n+1$ pieces in the DES case, consistently with (\ref{eq:Lambdast}), with $\Lambda^*(\alpha) \nearrow \infty$ as $\alpha \nearrow 1$. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{bigLambdastar.eps} \caption{$\Lambda^*(\alpha)$ for the instance with $n = 2$, $\theta = 0.5$, $(m_1, m_2) = (5, 10)$, $(\mu_1, \mu_2) = (4, 2)$.} \label{fig:bigLambdastar} \end{figure} It follows from the above that, for any $\lambda > 0$, the equation in $\alpha$ \begin{equation} \label{eq:Rstar} \Lambda^*(\alpha) = \lambda \end{equation} has a unique root $C^*(\lambda) \in (\alpha_1, 1)$, which can be numerically computed, e.g., by the \emph{bisection method}. 
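As an illustration, the computation just outlined admits a direct implementation. The following Python sketch (the numerical work reported here used the MATLAB \texttt{fzero} function instead) solves $\ell_k'(\lambda_k) = \alpha$ and (\ref{eq:Rstar}) by bisection; the derivative evaluators and the thresholds $\alpha_k$ are assumed supplied (see \ref{s:evalell}), and all names are of our choosing:
\begin{verbatim}
# Illustrative sketch, not the implementation used for the figures.
from scipy.optimize import bisect

def lam_k_star(ell_prime_k, alpha_k, alpha, lam_hi=1e6):
    # Unique root lam > 0 of ell_k'(lam) = alpha, extended by 0 for
    # alpha <= alpha_k; lam_hi must bracket the root (enlarge it for
    # alpha very close to 1, since ell_k'(lam) -> 1 as lam -> infinity).
    if alpha <= alpha_k:
        return 0.0
    return bisect(lambda lam: ell_prime_k(lam) - alpha, 1e-12, lam_hi)

def C_star(lam, ell_primes, alphas, eps=1e-9):
    # Root in alpha of Lambda*(alpha) = lam on (alpha_1, 1).
    big_Lambda = lambda a: sum(lam_k_star(fk, ak, a)
                               for fk, ak in zip(ell_primes, alphas))
    return bisect(lambda a: big_Lambda(a) - lam,
                  min(alphas) + eps, 1 - eps)
\end{verbatim}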
See \citet[Ch.\ 2.1]{burdenFaires11}. Note that $C^*(\lambda)$ is an increasing function of $\lambda$ that is smooth under DBS and piecewise smooth with $n$ pieces under DES, with $C^*(0^+) = \alpha_1$ and $C^*(\lambda) \nearrow 1$ as $\lambda \nearrow \infty$, which follows from corresponding properties of $\Lambda^*(\alpha)$. Let also $C^*(0) \triangleq \alpha_1$. The following result characterizes the optimal BS and identifies relevant properties of it. Let $\alpha^* \triangleq \min \{C, C^*(\lambda)\}$. \begin{proposition} \label{pro:obsastar} Assuming Conjecture $\ref{con:fkci},$ the following holds$:$ \begin{itemize} \item[\textup{(a)}] $\boldsymbol{\lambda}^*(\alpha^*)$ is the optimal BS$;$ \item[\textup{(b)}] $\lambda_0^*(\alpha^*) > 0$ if and only if $C < C^*(\lambda);$ \item[\textup{(c)}] $\lambda_k^*(\alpha^*) > 0$ if and only if $\alpha^* > \alpha_k,$ for any basic node $k = 1, \ldots, n.$ \end{itemize} \end{proposition} \begin{proof} (a, b) The proofs of the two parts are intertwined. In the case $C < C^*(\lambda)$ where $\alpha^* = C$, it follows from the above that \[ \sum_{k=1}^n \lambda_k^*(\alpha^*) < \lambda, \] whence $\lambda_0^*(\alpha^*) > 0$. As for the case $C \geqslant C^*(\lambda)$ where $\alpha^* = C^*(\lambda)$, we have \[ \sum_{k=1}^n \lambda_k^*(\alpha^*) = \lambda, \] whence $\lambda_0^*(\alpha^*) = 0$. Furthermore, it is readily verified that $\boldsymbol{\lambda}^*(\alpha^*)$ and $\alpha^*$ satisfy conditions (\ref{eq:kktc}), which, assuming Conjecture \ref{con:fkci}, are sufficient for global optimality. Part (c) follows from the definition of $\lambda_k^*(\alpha)$ and (\ref{eq:lambdakstext}). \end{proof} \begin{remark} \label{re:obsastar} The following qualitative properties of the optimal BS are readily derived from Proposition \ref{pro:obsastar}: \begin{itemize} \item[\textup{(i)}] Proposition \ref{pro:obsastar}(b) characterizes when the optimal BS resorts to external short-term leased servers in terms of the threshold cost $C^*(\lambda) \in (\alpha_1, 1)$, as it is optimal to use the external node ($\lambda_0^* > 0$) if and only if the cost of doing so is low enough ($C < C^*(\lambda)$, so $\alpha^* = C$). Hence, it is not optimal to use the external node if its cost is not lower than the service fee ($C \geqslant 1$). Furthermore, for large enough $C$ ($C \geqslant C^*(\lambda)$, so $\alpha^* = C^*(\lambda)$), the optimal BS remains constant, being $\boldsymbol{\lambda}^*(C^*(\lambda))$. \item[\textup{(ii)}] Since $C^*(0^+) = \alpha_1$ and $C^*(\lambda) \nearrow 1$ as $\lambda \nearrow \infty$, for any $C < 1$ the optimal BS uses the external node ($\lambda_0^* > 0$) if and only if the arrival rate $\lambda$ is large enough, viz.\ $\lambda > \lambda^* \triangleq \Lambda^*(C)$, in which case $\alpha^* = C$. Hence, for $\lambda > \lambda^*$ the $\lambda_k^*$ for basic nodes $k$ remain constant, being equal to $\lambda_k^*(C)$, whereas $\lambda_0^*$ grows linearly in $\lambda$, being $\lambda_0^* = \lambda - \sum_{k=1}^n \lambda_k^*(C)$. \item[\textup{(iii)}] In light of Proposition \ref{pro:obsastar}(c), we see that in the DBS case where $\alpha_k \equiv 0$ the optimal BS uses all basic nodes ($\lambda_1^*, \ldots, \lambda_n^* > 0$). 
In contrast, in the DES case where the $\alpha_k$ are positive, the optimal BS may not use some basic nodes: it uses no basic nodes if and only if $\alpha^* \leqslant \alpha_1$ (in which case $\alpha^* = C$); for $k = 1, \ldots, n-1$, it uses only the $k$ basic nodes with faster servers ($\lambda_l^* > 0$ for $l = 1, \ldots, k$ and $\lambda_l^* = 0$ for $l > k$) if and only if $\alpha_k < \alpha^* \leqslant \alpha_{k+1}$; and it uses all basic nodes if and only if $\alpha^* > \alpha_n$. \item[\textup{(iv)}] The behavior of the optimal BS in the DES case stated in (iii) is more intuitively reformulated in terms of the arrival rate $\lambda$ and the external node usage cost $C$ as follows. If $C \leqslant \alpha_{1}$, the optimal BS sends all jobs to the external node. If $\alpha_{l} < C \leqslant \alpha_{l+1}$ for some $l = 1, \ldots, n-1$, then the optimal BS does not use the $n-l$ slower basic nodes ($\lambda_{l+1}^* = \cdots = \lambda_n^* = 0$); it uses at least the $k$ faster basic nodes ($\lambda_1^*, \ldots, \lambda_k^* > 0$) for any given $k \leqslant l$ if and only if $\lambda > \Lambda^*(\alpha_k)$; and it uses the external node if and only if $\lambda > \Lambda^*(C)$, in which case it uses the $l$ faster basic nodes. If $C > \alpha_n$, the optimal BS uses at least the $k$ faster basic nodes for any given $k$ if and only if $\lambda > \Lambda^*(\alpha_k)$; and it uses the external node if and only if $\lambda > \Lambda^*(C)$, in which case it uses all basic nodes. \end{itemize} \end{remark} \subsection{Computing the optimal BS in large-scale models} \label{s:cobslsm} This section discusses computation of the optimal BS in the case of a large number of basic nodes, or of basic nodes with large server pools. We first consider how computation of the optimal BS scales with $n$. First, equation (\ref{eq:Rstar}) needs to be solved to obtain (approximately) $C^*(\lambda)$ and hence $\alpha^*$. In light of the definition of $\Lambda^*(\alpha)$ in (\ref{eq:Lambdast}) in terms of vector $\boldsymbol{\lambda}^*(\alpha)$, its evaluation can involve up to $n$ evaluations of functions $\lambda_k^*(\alpha)$ corresponding to all basic nodes $k$, so the complexity of this step scales linearly with $n$. Second, once $C^*(\lambda)$ and $\alpha^*$ are available, the optimal BS is immediately obtained by Proposition \ref{pro:obsastar}(a) as $\boldsymbol{\lambda}^*(\alpha^*)$. Thus, the complexity of approximately computing the optimal BS scales linearly with $n$. We next consider how having a basic node $k$ with a large number $m_k$ of servers affects computation of the optimal BS. Note that the equation $\ell_k'(\lambda_k) = \alpha$ in variable $\lambda_k$ needs to be solved for different values of $\alpha$ to obtain $\lambda_k^*(\alpha)$. As shown in \ref{s:evalell}, computation of $\ell_k(\lambda_k)$ and $\ell_k'(\lambda_k)$ scales linearly with $m_k$, as it involves the recursive calculation of Erlang-B blocking probabilities for the M/M/$m_k$/$m_k$ queue. Hence, other things being equal, computing the optimal BS scales linearly with $m_k$ for a given node $k$. \subsection{Dependence of optimal BS on model parameters: examples and insights} \label{s:obsei} This section explores through examples with $n = 3$ basic nodes how the optimal BS depends on various model parameters. Note that the base instances referred to below are those specified in Table \ref{t:bi}.
\begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse1.eps} \caption{Optimal BS vs.\ $\lambda$ for base instance $1$.} \label{fig:obsdbse1} \end{figure} \subsubsection{Dependence on the arrival rate $\lambda$} \label{s:obslambda} We start by investigating the effect of varying the arrival rate $\lambda$. \paragraph{Base instance $1$} Fig.\ \ref{fig:obsdbse1} displays the optimal BS vs.\ $\lambda$ for base instance $1$. The rate $\lambda_0^*$ of jobs sent to the external node is positive only for large enough $\lambda$ ($\lambda > \lambda^*$), increasing linearly for $\lambda > \lambda^*$, consistently with Remark \ref{re:obsastar}(ii). As for the rates $\lambda_k^*$ for basic nodes, their behavior differs in the DBS and DES cases. In the former, displayed in the left pane of Fig.\ \ref{fig:obsdbse1}, we have $\lambda_1^* < \lambda_2^* < \lambda_3^*$ for $\lambda < \lambda^*$, while $\lambda_1^* \approx \lambda_2^* \approx \lambda_3^*$ for $\lambda > \lambda^*$. Thus, when the load is light a larger share of jobs is routed to basic nodes with more servers. In the DES case, displayed in the right pane of Fig.\ \ref{fig:obsdbse1}, for light loads a larger share of jobs is routed to basic nodes with faster servers. Further, nodes $2$ and $3$ are only used for arrival rates $\lambda$ larger than certain critical levels, consistently with Remark \ref{re:obsastar}(iv) as $C > \alpha_3$ in this instance. The behavior for $\lambda > \lambda^*$ is similar to that in the DBS case. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse2.eps} \caption{Optimal BS vs.\ $\lambda$ for base instance $2$.} \label{fig:obsdbse2} \end{figure} \paragraph{Base instance $2$} Fig.\ \ref{fig:obsdbse2} displays the optimal BS vs.\ $\lambda$ for base instance $2$. Note that $\lambda_1^* < \lambda_2^* < \lambda_3^*$ for $\lambda > \lambda^*$, both in the DBS and DES cases, an ordering consistent with total processing capacities, as $m_1 \mu_1 < m_2 \mu_2 < m_3 \mu_3$. The same holds in the DBS case when the load is light ($\lambda \leqslant \lambda^*$). In the DES case we see for $\lambda \leqslant \lambda^*$ a behavior consistent with Remark \ref{re:obsastar}(iv). \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse3.eps} \caption{Optimal BS vs.\ $\lambda$ for base instance $3$.} \label{fig:obsdbse3} \end{figure} \paragraph{Base instance $3$} Fig.\ \ref{fig:obsdbse3} displays the optimal BS vs.\ $\lambda$ for base instance $3$. We see that $\lambda_2^* < \lambda_1^* < \lambda_3^*$ for $\lambda > \lambda^*$, both in the DBS and DES cases, again consistently with total processing capacities, as $m_2 \mu_2 < m_1 \mu_1 < m_3 \mu_3$. When the load is low, we observe in the DBS case the ordering $\lambda_1^* < \lambda_2^* < \lambda_3^*$, consistent with node server pool sizes, as $m_1 < m_2 < m_3$. In the DES case we see for low enough $\lambda$ a behavior consistent with Remark \ref{re:obsastar}(iv). \paragraph{Insights} The above examples suggest the following insights: (1) for high enough load, basic nodes with larger total processing capacities $m_k \mu_k$ receive a larger share of traffic, both in the DBS and DES cases; (2) in the DBS case, for low enough arrival rates basic nodes with larger server pools receive a larger share of traffic; (3) in the DES case, for low enough arrival rates basic nodes with faster servers receive a larger share of traffic. 
\begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse4.eps} \caption{Optimal BS vs.\ $\theta$ for base instance $1$.} \label{fig:obsdbse4} \end{figure} \subsubsection{Dependence on the abandonment rate $\theta$} \label{s:obstheta} We continue by investigating the effect on the optimal BS of modifying the abandonment rate $\theta$. \paragraph{Base instance $1$} Fig.\ \ref{fig:obsdbse4} displays the optimal BS vs.\ $\theta$ as the latter ranges over $(0, 2]$ for base instance $1$. The rate $\lambda_0^*$ of jobs sent to the external node grows with $\theta$, as a smooth concave function in the DBS case, and as a piecewise smooth function with $3$ pieces in the DES case. As for the $\lambda_k^*$ for basic nodes, they consequently decrease as $\theta$ grows. Otherwise, their behavior differs substantially in the DBS and DES cases. In the former, such $\lambda_k^*$ are very close for very small $\theta$, and get further apart as $\theta$ grows, while maintaining the ordering $\lambda_1^* < \lambda_2^* < \lambda_3^*$. In the latter, such $\lambda_k^*$ are very close only for small $\theta$. For larger $\theta$, they are ordered as $\lambda_3^* < \lambda_2^* < \lambda_1^*$, consistently with server speeds $\mu_3 < \mu_2 < \mu_1$, with $\lambda_3^*$ and $\lambda_2^*$ dropping to $0$ at about $\theta \approx 0.5$ and $\theta \approx 1$, respectively. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse5.eps} \caption{Optimal BS vs.\ $\theta$ for base instance $2$.} \label{fig:obsdbse5} \end{figure} \paragraph{Base instance $2$} Fig.\ \ref{fig:obsdbse5} displays the optimal BS vs.\ $\theta$ for base instance $2$. The behavior of $\lambda_0^*$ is similar to that in the previous base instance. As for the $\lambda_k^*$ for basic nodes, their behavior is also similar, except for the following: (1) in the DBS case, such $\lambda_k^*$ are not very close, while they are also ordered as $\lambda_1^* < \lambda_2^* < \lambda_3^*$, consistently with capacities $m_1 \mu_1 < m_2 \mu_2 < m_3 \mu_3$; and (2) in the DES case, such $\lambda_k^*$ are also ordered as $\lambda_1^* < \lambda_2^* < \lambda_3^*$ for very small $\theta$. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse6.eps} \caption{Optimal BS vs.\ $\theta$ for base instance $3$.} \label{fig:obsdbse6} \end{figure} \paragraph{Base instance $3$} Fig.\ \ref{fig:obsdbse6} displays the optimal BS vs.\ $\theta$ for base instance $3$. Regarding the $\lambda_k^*$ for basic nodes $k$, we observe that, in the DBS case, they are ordered as $\lambda_2^* < \lambda_1^* < \lambda_3^*$, consistently with node capacities $m_2 \mu_2 < m_1 \mu_1 < m_3 \mu_3$. In the DES case, such an ordering also holds for very small $\theta$. \paragraph{Insights} The above examples suggest the following insights: (1) in the DBS case, the rate $\lambda_0^*$ of jobs outsourced to the external node increases with $\theta$ as a smooth concave function, while the $\lambda_k^*$ for basic nodes decrease as smooth convex functions, with nodes having a larger total processing capacity receiving a larger share of traffic; and (2) in the DES case, $\lambda_0^*$ increases steeply with $\theta$, with the $\lambda_k^*$ for basic nodes steeply decreasing; faster nodes receive a larger share of traffic for larger $\theta$, to the point that nodes with slower servers are not used for high enough $\theta$. Thus, the dependence of the optimal BS on $\theta$ is substantially more pronounced in the DES case than in the DBS case. 
\begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse7.eps} \caption{Optimal BS vs.\ $C$ for base instance $1$.} \label{fig:obsdbse7} \end{figure} \subsubsection{Dependence on the external node's expected usage cost $C$} \label{s:obsC} Finally, we consider the effect on the optimal BS of modifying $C$. \paragraph{Base instance $1$} Fig.\ \ref{fig:obsdbse7} displays the optimal BS vs.\ $C$ for base instance $1$. The rate $\lambda_0^*$ of jobs outsourced to the external node decreases with $C$, as a convex smooth function in the DBS case for low enough $C$, and as a piecewise smooth function in the DES case, dropping to $0$ for large enough $C$. Such behavior is consistent with Remark \ref{re:obsastar}(i). As for the $\lambda_k^*$ for basic nodes, they consequently increase as $C$ grows, remaining constant for large enough $C$. In the DBS case, such $\lambda_k^*$ are very close, while appearing ordered as $\lambda_1^* < \lambda_2^* < \lambda_3^*$ for very small $C$. In the DES case, such $\lambda_k^*$ are very close only for large enough $C$. For small $C$, they are ordered as $\lambda_3^* < \lambda_2^* < \lambda_1^*$, consistently with server speeds $\mu_3 < \mu_2 < \mu_1$, and in agreement with Remark \ref{re:obsastar}(iv). \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse8.eps} \caption{Optimal BS vs.\ $C$ for base instance $2$.} \label{fig:obsdbse8} \end{figure} \paragraph{Base instance $2$} Fig.\ \ref{fig:obsdbse8} displays the optimal BS vs.\ $C$ for base instance $2$. The behavior of $\lambda_0^*$ is similar to that in the previous instance. As for the $\lambda_k^*$ for basic nodes, in the DBS case they are clearly separated and ordered as $\lambda_1^* < \lambda_2^* < \lambda_3^*$, consistently with total node capacities. In the DES case, the same ordering is observed for high enough $C$, while the ordering for small $C$ is consistent with node server speeds, as indicated in Remark \ref{re:obsastar}(iv). \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{obsdbse9.eps} \caption{Optimal BS vs.\ $C$ for base instance $3$.} \label{fig:obsdbse9} \end{figure} \paragraph{Base instance $3$} Fig.\ \ref{fig:obsdbse9} displays the optimal BS vs.\ $C$ for base instance $3$. The behavior of $\lambda_0^*$ is similar to that in the previous instances. As for the $\lambda_k^*$ for basic nodes, in the DBS case they are clearly separated and ordered as $\lambda_2^* < \lambda_1^* < \lambda_3^*$, consistently with total node capacities. In the DES case, the same ordering is observed for high enough $C$, while the ordering for small $C$ is consistent with node server speeds, as in Remark \ref{re:obsastar}(iv). \paragraph{Insights} The above examples suggest the following insights: (1) in the DBS case, the rate $\lambda_0^*$ of jobs outsourced to the external node decreases with $C$ as a smooth convex function for $C < C^*(\lambda)$, while the $\lambda_k^*$ for basic nodes increase as smooth concave functions for such $C$, with nodes having a larger total processing capacity receiving a larger share of traffic; and (2) in the DES case, $\lambda_0^*$ decreases with $C$ as a piecewise-smooth convex function with $n+1$ pieces for $C < C^*(\lambda)$; the $\lambda_k^*$ for basic nodes vanish for small enough $C$, and then increase as smooth concave functions; faster nodes receive a larger share of traffic for smaller $C$, while for larger $C$ the ordering is as under DBS.
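The curves shown in this section can be reproduced by combining the earlier root-finding sketch with Proposition \ref{pro:obsastar}(a); the fragment below, again illustrative and with names of our choosing, assembles the optimal BS and sweeps the arrival rate:
\begin{verbatim}
def optimal_bs(lam, C, ell_primes, alphas):
    # Proposition: alpha* = min{C, C*(lam)}; then lam_k* = lam_k*(alpha*)
    # for the basic nodes, and lam_0* = lam - sum of the lam_k*.
    a_star = min(C, C_star(lam, ell_primes, alphas))
    lam_k = [lam_k_star(fk, ak, a_star)
             for fk, ak in zip(ell_primes, alphas)]
    return [lam - sum(lam_k)] + lam_k   # [lam_0*, lam_1*, ..., lam_n*]

# e.g., optimal BS vs. the arrival rate, as in the figures above:
# curves = [optimal_bs(0.5*j, C, ell_primes, alphas)
#           for j in range(1, 61)]
\end{verbatim}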
\section{Index policies} \label{s:ip} \subsection{IO index policy} \label{s:ioip} This section derives an index policy for problem (\ref{eq:panew}) based on indices $\varphi_k^{\scriptscriptstyle \mathrm{IO}}(i_k)$ that measure the probability that an individual job sent to basic node $k$ when it lies in state $i_k$ abandons due to missing its deadline. The resulting \emph{IO index policy} makes dynamic resource allocation and routing decisions for a job accounting only for their impact on that job. The analysis below considers a basic node in isolation, dropping the label $k$ from the notation. Thus, consider an M/M/$m+$M queue with arrival rate $\lambda$, service rate $\mu$ per server, and abandonment rate $\theta$. Denote by $p_{\mathrm{ab}}(i)$ the probability that an arriving job finding $i$ jobs present abandons due to missing its deadline. The \emph{IO index} is $\varphi^{\scriptscriptstyle \mathrm{IO}}(i) \triangleq p_{\mathrm{ab}}(i)$, which is evaluated below. Recall that $L(i)$ is the total abandonment rate in state $i$, and let $D(i)$ be the \emph{total death rate} in state $i$, so $L(i) \triangleq (i-m)^+ \theta$ under DBS and $L(i) \triangleq i \theta$ under DES, and \begin{equation} \label{eq:barmu} D(i) \triangleq \min(i, m) \mu + L(i). \end{equation} \begin{proposition} \label{pro:ioi} Under either DBS or DES$,$ \begin{equation} \label{eq:ioindxue} p_{\mathrm{ab}}(i) = \frac{L(i+1)}{D(i+1)}, \quad i \in \mathbb{Z}_+. \end{equation} \end{proposition} \begin{proof} Start with the DBS case. Consider an arrival that finds $i$ jobs in system. To simplify the evaluation of $p_{\mathrm{ab}}(i)$ we assume that there are no further arrivals, which is without loss of generality given the assumption that jobs are scheduled in FCFS order (see \S\ref{s:md}). If the arrival finds a free server it does not abandon, so $p_{\mathrm{ab}}(i) = 0 = L(i+1)/D(i+1)$ for $i < m$. If $i = m$, so the arrival finds all servers busy and no queue, it will abandon if and only if its deadline expires before any service is completed, which happens with probability (w.p.) $\theta / (\theta + m \mu) = L(m+1)/D(m+1)$, so (\ref{eq:ioindxue}) also holds for $i = m$. If $i = m+j$, so the arrival finds all servers busy and $j \geqslant 1$ jobs waiting, the next event will be one of the following three: the arriving job abandons, w.p.\ $\theta/{D(m+j+1)}$; or a service is completed, w.p.\ $m \mu/{D(m+j+1)}$; or one of the $j$ waiting jobs found on arrival abandons, w.p.\ $j \theta/{D(m+j+1)}$. By conditioning on the next event and using such probabilities one readily obtains the recursion \[ p_{\mathrm{ab}}(m + j) = \frac{\theta}{D(m + j+1)} + \frac{j \theta + m \mu}{D(m + j+1)} p_{\mathrm{ab}}(m + j-1), \quad j = 1, 2, \ldots, \] whose solution (taking into account the value obtained for $p_{\mathrm{ab}}(m)$) is \[ p_{\mathrm{ab}}(m + j) = \frac{(j+1) \theta}{(j+1) \theta + m \mu} = \frac{L(m + j+1)}{D(m + j+1)}, \quad j = 0, 1, \ldots. \] In the DES case, we have, for $i < m$, that $p_{\mathrm{ab}}(i)$ is the probability that the arriving job, which enters service immediately, sees its deadline expire before its service is completed, so $p_{\mathrm{ab}}(i) = \theta/(\theta + \mu) = (i+1) \theta/ \{(i+1) \theta + (i+1) \mu\} = L(i+1)/D(i+1)$.
Further, arguing along the same lines as in the DBS case yields the recursion \[ p_{\mathrm{ab}}(m + j) = \frac{\theta}{D(m + j+1)} + \frac{(m+j) \theta + m \mu}{D(m + j+1)} p_{\mathrm{ab}}(m + j-1), \quad j = 0, 1, \ldots, \] whose solution (taking into account the value obtained for $p_{\mathrm{ab}}(m-1)$) is \[ p_{\mathrm{ab}}(m + j) = \frac{(m+j+1) \theta}{(m+j+1) \theta + m \mu} = \frac{L(m + j+1)}{D(m + j+1)}, \quad j = 0, 1, \ldots. \] \end{proof} \begin{remark} \label{re:ioip} \begin{itemize} \item[\textup{(i)}] It is immediate that, in both the DBS and DES cases$,$ the IO index $\varphi^{\scriptscriptstyle \mathrm{IO}}(i)$ satisfies target properties $P_1$--$P_6$ (see Table \ref{t:indexp} in \S\ref{s:dpeop}). \item[\textup{(ii)}] $P_2$ holds as $\varphi^{\scriptscriptstyle \mathrm{IO}}(i)$ does not depend on the arrival rate $\lambda$. \item[\textup{(iii)}] $P_5$ holds with $\varphi^{\scriptscriptstyle \mathrm{IO}}(i) \searrow 0$ in the DBS case and $\varphi^{\scriptscriptstyle \mathrm{IO}}(i) \searrow \theta/(\theta + \mu)$ in the DES case as $m \nearrow \infty$. \end{itemize} \end{remark} \subsection{PI index policy} \label{s:model2bs} We next turn to the PI method (see, e.g., \citet{krish90}), which has not been applied before to the present model. This method involves two stages: (1) finding the optimal BS of the arrival stream, which is addressed in \S\ref{s:obsm}; and (2) carrying out one step of the PI algorithm for MDPs starting from the optimal BS, which is guaranteed to produce a better policy. The PI method yields an index policy based on indices $\varphi_k^{\scriptscriptstyle \mathrm{PI}}(i_k)$ for basic nodes $k = 1, \ldots, n$ that are defined as follows. Consider the M/M/$m_k$+M queue corresponding to basic node $k$ with arrival rate $\lambda_k^*$, as determined by the optimal BS (see \S\ref{s:obsm}). For such a model, let \[ b_k^*(i) \triangleq \ensuremath{\mathsf{E}}_i\left[\int_0^\infty \{L_k(X_k(t)) - \ell_k(\lambda_k^*)\} \, dt\right], \quad i = 0, 1, 2, \ldots, \] where $\ensuremath{\mathsf{E}}_i[\cdot]$ denotes expectation starting from state $X_k(0) = i$ and $\ell_k(\lambda_k^*)$ is the mean abandonment rate as in \S\ref{s:obsm}. Quantity $b_k^*(i)$, which is known as a \emph{bias} or \emph{relative cost}, measures the expected total incremental number of abandonments when starting from state $i$ relative to those when starting from steady state. Now, the PI index for basic node $k$ is given by \begin{equation} \label{eq:nukpii} \varphi_k^{\scriptscriptstyle \mathrm{PI}}(i) \triangleq b_k^*(i+1) - b_k^*(i), \quad i = 0, 1, 2, \ldots. \end{equation} We next address numerical evaluation of the PI index for a basic node, whose label $k$ is dropped from the notation below. To evaluate the PI index $\varphi^{\scriptscriptstyle \mathrm{PI}}(i)$ we must solve the \emph{Poisson equations} (see, e.g., \citet{glynnMeyn96}) for the corresponding M/M/$m+$M queue with arrival rate $\lambda^*$ ---as determined by the optimal BS--- and cost rates $L(i)$.
Such equations are given by \begin{equation} \label{eq:pe1piim} \begin{split} \ell(\lambda^*) + \lambda^* b(0) & = \lambda^* b(1) \\ \ell(\lambda^*) + \{\lambda^* + D(i)\} b(i) & = L(i) + \lambda^* b(i+1) + D(i) b(i-1), \quad i = 1, 2, \ldots, \end{split} \end{equation} and are immediately reformulated in terms of the PI index $\varphi^{\scriptscriptstyle \mathrm{PI}}$ in (\ref{eq:nukpii}) as \begin{equation} \label{eq:pepiim} \begin{split} \ell(\lambda^*) - \lambda^* \varphi^{\scriptscriptstyle \mathrm{PI}}(0) & = 0 \\ \ell(\lambda^*) - \lambda^* \varphi^{\scriptscriptstyle \mathrm{PI}}(i) + D(i) \varphi^{\scriptscriptstyle \mathrm{PI}}(i-1) & = L(i), \quad i = 1, 2, \ldots, \end{split} \end{equation} where $D(i)$ is as in (\ref{eq:barmu}) and $\ell(\lambda^*)$ is the node's mean abandonment rate. \begin{remark} \label{re:piip} \begin{itemize} \item[\textup{(i)}] Although the first-order linear recursion (\ref{eq:pepiim}) cannot be solved in closed form in terms of elementary functions, it gives an efficient means of computing index values $\varphi^{\scriptscriptstyle \mathrm{PI}}(0), \ldots, \varphi^{\scriptscriptstyle \mathrm{PI}}(i)$ in $O(i)$ time, given $\lambda^*$ and $\ell(\lambda^*)$. However, such a recursion suffers from numerical instability for large states $i$ (see \citet{nmnetgcoop14}). \item[\textup{(ii)}] If the optimal BS does not use the node of concern ($\lambda^* = 0$ and hence $\ell(\lambda^*) = 0$), which can happen in the DES case (see Remark \ref{re:obsastar}(iii, iv)), then (\ref{eq:pepiim}) yields $\varphi^{\scriptscriptstyle \mathrm{PI}}(i) = L(i+1)/D(i+1)$, whence (see Proposition \ref{pro:ioi}) the node's PI index $\varphi^{\scriptscriptstyle \mathrm{PI}}(i)$ reduces to its IO index $\varphi^{\scriptscriptstyle \mathrm{IO}}(i)$. \item[\textup{(iii)}] Observation of the PI index suggests that it satisfies target property $P_1$ in Table \ref{t:indexp}. Yet, establishing such a property is a challenging analytical task, as it involves proving corresponding properties of the mean abandonment rate $\ell(\lambda)$ that the author has not found in the literature. \item[\textup{(iv)}] Being based on the optimal BS, a node's PI index $\varphi^{\scriptscriptstyle \mathrm{PI}}(i)$ incorporates all model parameters, including server rates and server pool sizes of other nodes as well as the external node's usage cost. \item[\textup{(v)}] A node's PI index inherits properties of the optimal BS: other things being equal, it does not change for large enough external node's cost $C$, nor for large enough arrival rate $\lambda$. See Remark \ref{re:obsastar}(i, ii). \end{itemize} \end{remark} \subsection{RB index policy} \label{s:rbip} The RB policy results from applying to the present model the index policy proposed in \citet{whit88b} for the general MARBP. To deploy such an approach, problem (\ref{eq:panew}) needs to be cast into the framework of the MARBP, which concerns the optimal dynamic activation of a collection of stochastic \emph{projects} modeled as RBs ---binary-action (active or passive) MDPs--- subject to given activation constraints. The required reformulation was first given in \citet[\S8.1]{nmmp02} in a broader model for optimal control of admission and routing to parallel queues, and further developed in \citet{nmnetcoop07}. To make this paper self-contained, the reformulation of the present model as an MARBP and the RB index policy are outlined in \ref{a:rbip}.
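To make the index computations concrete, the following Python sketch (ours, not the author's MATLAB code) evaluates the closed-form IO index of Proposition \ref{pro:ioi} and runs the forward recursion (\ref{eq:pepiim}) for the PI index; the inputs \texttt{lam\_star} and \texttt{ell\_star} stand for the node's optimal-BS rate $\lambda^*$ and mean abandonment rate $\ell(\lambda^*)$, assumed given:
\begin{verbatim}
def rates(m, mu, theta, des=False):
    # Abandonment rate L(i) and total death rate D(i).
    L = (lambda i: i*theta) if des else (lambda i: max(i - m, 0)*theta)
    D = lambda i: min(i, m)*mu + L(i)
    return L, D

def io_index(i, m, mu, theta, des=False):
    # phi_IO(i) = L(i+1)/D(i+1).
    L, D = rates(m, mu, theta, des)
    return L(i + 1)/D(i + 1)

def pi_index(imax, m, mu, theta, lam_star, ell_star, des=False):
    # phi_PI(0), ..., phi_PI(imax) in O(imax) time; assumes
    # lam_star > 0 (otherwise phi_PI reduces to phi_IO, see the
    # remark above) and is numerically unstable for large states.
    L, D = rates(m, mu, theta, des)
    phi = [ell_star/lam_star]
    for i in range(1, imax + 1):
        phi.append((ell_star + D(i)*phi[-1] - L(i))/lam_star)
    return phi
\end{verbatim}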
\subsection{Dependence of the indices on the state and model parameters} \label{s:civdpni} This section explores through examples how the routing indices considered depend on the node's state and various model parameters, and draws insights from the results obtained. In particular, satisfaction of target properties $P_1$--$P_6$ (see Table \ref{t:indexp} in \S\ref{s:dpeop}) is investigated. The base instance numbers referred to below are as in Table \ref{t:bi} in \S\ref{s:obsei}. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{indbse3.eps} \caption{Indices vs.\ state for base instance 3 under DBS.} \label{fig:indbse3} \end{figure} \subsubsection{Dependence on the node's state $i_k$} \label{s:dotsik} Fig.\ \ref{fig:indbse3} plots the indices for each of the basic nodes $k = 1, 2, 3$ vs.\ the state $i_k$ for base instance 3 under DBS. The plot is consistent with target property $P_1$ in Table \ref{t:indexp}. Furthermore, note that the IO indices grow relatively slowly with the state, the PI indices grow faster, and the RB indices grow steeply. In particular, the plot shows that such indices satisfy \begin{equation} \label{eq:rbsi} \begin{split} \varphi_k^{\scriptscriptstyle \mathrm{IO}}(i) < \varphi_k^{\scriptscriptstyle \mathrm{PI}}(i) < \varphi_k^{\scriptscriptstyle \mathrm{RB}}(i), & \quad i \geqslant m_k. \end{split} \end{equation} A similar behavior is observed in Fig.\ \ref{fig:indbse3b} for the same instance under DES. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{indbse3b.eps} \caption{Indices vs.\ state for base instance 3 under DES.} \label{fig:indbse3b} \end{figure} An implication of (\ref{eq:rbsi}), and of the relative index magnitudes as illustrated in the plots, is that the RB policy will tend to outsource more jobs to the external node than the PI policy, and the PI policy, in turn, more than the IO policy. Note that a key insight from optimal admission control to service systems (see \citet{stid85}) is that IO admission policies admit more jobs than is socially optimal. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{indbsLam3.eps} \caption{Indices for $i_k \equiv 14$ vs.\ $\lambda$ for base instance $3$ under DBS.} \label{fig:indbsLam3} \end{figure} \subsubsection{Dependence on the arrival rate $\lambda$} \label{s:dotarl} Fig.\ \ref{fig:indbsLam3} plots the indices for state $i_k = 14$ vs.\ $\lambda$ for base instance $3$ under DBS. The plot is consistent with property $P_2$ in Table \ref{t:indexp}. Note that the IO index remains constant as $\lambda$ varies, as it does not incorporate such a parameter. As for the PI index, it grows as a convex function up to a critical value, and then remains constant for larger $\lambda$, consistently with Remark \ref{re:piip}(v). Regarding the RB index, it grows steeply to one as $\lambda$ grows. The corresponding DES case is not shown as results are similar. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{indbsThet3.eps} \caption{Indices for state $\mathbf{i} = (14, 14, 14)$ vs.\ $\theta$ for base instance $3$ under DBS.} \label{fig:indbsThet3} \end{figure} \subsubsection{Dependence on the abandonment rate $\theta$} \label{s:dotirt} Fig.\ \ref{fig:indbsThet3} plots the indices for state $i_k = 14$ vs.\ $\theta$ for base instance $3$ under DBS. The plot is consistent with property $P_3$ in Table \ref{t:indexp}.
Note that the IO indices grow relatively slowly as the abandonment rate $\theta$ increases, the PI indices grow faster, and the RB indices grow very steeply, to the point that they are very close to $1$ for nodes $1$ and $2$ even for small $\theta$. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{indbsC3.eps} \caption{Indices for state $\mathbf{i} = (14, 14, 14)$ vs.\ $C$ for base instance $3$ under DBS.} \label{fig:indbsC3} \end{figure} \subsubsection{Dependence on the external node's usage cost $C$} \label{s:dotesncc} Fig.\ \ref{fig:indbsC3} plots the indices for state $i_k = 14$ vs.\ $C$ for base instance $3$ under DBS. The plot is consistent with property $P_4$ in Table \ref{t:indexp}, which can be checked by using the diagonal lines shown. Note that neither the IO nor the RB indices vary with $C$, as they do not incorporate such a parameter. As for the PI indices, they grow as concave functions up to the critical value $C^*(\lambda)$, and then stay constant for larger $C$, consistently with Remark \ref{re:piip}(v). \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{indbsmu2.eps} \caption{Indices for state $i_k = 14$ vs.\ $\mu_k$ for node $k = 2$ in base instance $3$ under DBS.} \label{fig:indbsmu2} \end{figure} \subsubsection{Dependence on the server speed $\mu_k$} \label{s:dotesr} Fig.\ \ref{fig:indbsmu2} plots the indices for state $i_k = 14$ vs.\ $\mu_k$ for node $k = 2$ of base instance $3$ under DBS. The plot is consistent with property $P_5$ in Table \ref{t:indexp}. Note that the routing priority for node $2$ decreases steeply as its servers get slower (corresponding to high index values). Such an effect is less pronounced for the PI policy, and still less for the IO policy. \begin{figure}[!htb] \centering \includegraphics[height=1.7in]{indbsm2.eps} \caption{Indices for state $i_k = 14$ vs.\ $m_k$ for node $k = 2$ in base instance $3$ under DBS.} \label{fig:indbsm2} \end{figure} \subsubsection{Dependence on the server pool size $m_k$} \label{s:dotsps} Fig.\ \ref{fig:indbsm2} plots the indices for state $i_k = 14$ vs.\ $m_k$ for node $k = 2$ of base instance $3$ under DBS. The plot is consistent with property $P_6$ in Table \ref{t:indexp}. The routing priority for node $2$ decreases steeply as the number of servers in the node gets smaller (corresponding to high index values). Such an effect is less pronounced for the PI index policy, and even less for the IO policy. \section{Comparative benchmarking study} \label{s:cbs} This section presents a comparative benchmarking study of the four policies considered herein: optimal BS, IO policy, PI policy, and RB policy. For such a purpose, a test bed of instances with $n = 2$ basic nodes was generated by varying model parameters across a grid of plausible values, while server pool sizes and the rate of node $2$'s servers were held constant with $\mathbf{m} = (10, 40)$ and $\mu_2 = 1$. The other model parameters were varied as follows: $\mu_1 \in \{1, 1.5, \ldots, 5\}$, $\rho \triangleq \lambda/(m_1 \mu_1 + m_2 \mu_2) \in \{0.9, 1, \ldots, 1.5\}$, $\theta \in \{0.2, 0.3, \ldots, 1.1\}$, and $C \in \{0.1, 0.2, \ldots, 0.8\}$. Hence, the resulting test bed consists of a total of $5{,}040$ instances, in each of the DBS and DES cases. The performance objective considered was maximization of the long-run expected average profit per job. The performance of the optimal policy was used as the benchmark against which the other policies were compared.
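The test bed just described can be enumerated in a few lines; the following Python fragment (illustrative only, with the solver and policy evaluators left out) generates the instances and the arrival rates implied by the nominal loads:
\begin{verbatim}
# Illustrative enumeration of the 5,040-instance grid (ours).
from itertools import product

m1, m2, mu2 = 10, 40, 1.0
mu1_vals   = [1.0 + 0.5*j for j in range(9)]    # 1, 1.5, ..., 5
rho_vals   = [0.9 + 0.1*j for j in range(7)]    # 0.9, ..., 1.5
theta_vals = [0.2 + 0.1*j for j in range(10)]   # 0.2, ..., 1.1
C_vals     = [0.1 + 0.1*j for j in range(8)]    # 0.1, ..., 0.8

instances = [dict(mu1=mu1, theta=theta, C=C,
                  lam=(m1*mu1 + m2*mu2)*rho)    # arrival rate from rho
             for mu1, rho, theta, C
             in product(mu1_vals, rho_vals, theta_vals, C_vals)]
assert len(instances) == 5040

def rel_gap(z_opt, z_pol):
    # Relative optimality gap (%): percent deviation from the
    # optimal long-run expected average profit per job.
    return 100.0*(z_opt - z_pol)/z_opt
\end{verbatim}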
For each instance $i$, the maximum long-run expected average profit per job $z_i^*$ was computed, as well as the value $z_i^{\pi}$ of such an objective under each of the policies $\pi \in \{\mathrm{BS}, \mathrm{IO}, \mathrm{PI}, \mathrm{RB}\}$, truncating the basic nodes' buffer sizes to $80$, as it was checked that increasing them did not change results. Then, the \emph{relative optimality gap} (percent deviation from optimal performance) $100 (z_i^*-z_i^{\pi})/z_i^*$ was evaluated for each policy $\pi$ and instance. The optimal performance was obtained by solving with CPLEX the linear programming formulation of the DP equations for the truncated model. The optimal BS's performance was evaluated by solving problem (\ref{eq:nlp}) as discussed in \S\ref{s:obsm}. As for the performance of the index policies, it was evaluated by solving numerically the corresponding Poisson equations. The numerical study was based on a MATLAB implementation developed by the author. \subsection{Overall results} \label{s:orcomp} The overall results of the comparative study across all $5{,}040$ instances, both in the DBS and DES cases, are summarized in Tables \ref{t:orog1} and \ref{t:orog2}. Table \ref{t:orog1} shows the minimum, average and maximum relative optimality gaps for each policy across all instances. The PI policy tends to outperform the others on such criteria, achieving very low average and maximum relative optimality gaps. The RB policy closely follows, being also nearly optimal throughout. The third best policy is the static BS, which is remarkably close to optimal and strongly outperforms the IO policy. We further observe that the relative optimality gaps of all policies except IO are smaller under DES than under DBS. The maximum improvement of PI against the RB policy (see Table \ref{t:orog2}) is small, but it is substantial against the IO policy. Yet, both RB and IO can outperform PI, as reflected by the negative minimum improvements. As for the improvement of PI over BS, while it is guaranteed to be nonnegative, the results show that, remarkably, the gains are not major, being at most $5.33\%$ in the DBS case and $3.94\%$ in the DES case. \begin{table}[!htb] \centering \begin{tabular}{crrr|} & \multicolumn{3}{c}{DBS: optimality gap (\%)} \\ \cline{2-4} & \multicolumn{1}{|c}{minimum} & average & maximum \\ \cline{2-4} BS & \multicolumn{1}{|r}{$0.83$} & $2.99$ & $5.15$ \\ IO & \multicolumn{1}{|r}{$0.00$} & $6.54$ & $21.66$ \\ PI & \multicolumn{1}{|r}{$0.00$} & $0.36$ & $1.40$ \\ RB & \multicolumn{1}{|r}{$0.00$} & $0.44$ & $2.19$ \\ \cline{2-4} \end{tabular} \begin{tabular}{crrr|} & \multicolumn{3}{c}{DES: optimality gap (\%)} \\ \cline{2-4} & \multicolumn{1}{|c}{minimum} & average & maximum \\ \cline{2-4} BS & \multicolumn{1}{|r}{$0.00$} & $1.70$ & $3.79$ \\ IO & \multicolumn{1}{|r}{$0.00$} & $6.57$ & $22.76$ \\ PI & \multicolumn{1}{|r}{$0.00$} & $0.10$ & $0.88$ \\ RB & \multicolumn{1}{|r}{$0.00$} & $0.10$ & $1.36$ \\ \cline{2-4} \end{tabular} \caption{Minimum, average and maximum relative optimality gaps of the policies.} \label{t:orog1} \end{table} Table \ref{t:orog2} shows the minimum, average and maximum percent improvement in the profit per job objective of the PI policy vs.\ the other policies.
\begin{table}[!htb] \centering \begin{tabular}{crrr|} & \multicolumn{3}{c}{DBS: improvement (\%) of PI} \\ \cline{2-4} & \multicolumn{1}{|c}{minimum} & average & maximum \\ \cline{2-4} BS & \multicolumn{1}{|r}{$0.70$} & $2.72$ & $5.33$ \\ IO & \multicolumn{1}{|r}{$-0.48$} & $7.02$ & $27.42$ \\ RB & \multicolumn{1}{|r}{$-1.34$} & $0.09$ & $2.16$ \\ \cline{2-4} \end{tabular} \begin{tabular}{crrr|} & \multicolumn{3}{c}{DES: improvement (\%) of PI} \\ \cline{2-4} & \multicolumn{1}{|c}{minimum} & average & maximum \\ \cline{2-4} BS & \multicolumn{1}{|r}{$0.00$} & $1.64$ & $3.94$ \\ IO & \multicolumn{1}{|r}{$-0.21$} & $7.40$ & $29.46$ \\ RB & \multicolumn{1}{|r}{$-0.88$} & $0.00$ & $1.30$ \\ \cline{2-4} \end{tabular} \caption{Minimum, average and maximum improvement (\%) of PI vs.\ the other policies.} \label{t:orog2} \end{table} \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{dbsogapsvmu1.eps} \caption{Relative optimality gaps vs.\ $\mu_1$ under DBS.} \label{fig:dbsogapsvmu1} \end{figure} \subsection{Effect of changing model parameters} \label{s:ecvmp} To assess the effects of changing model parameters on the policies' relative optimality gaps, the following approach was used: for each of the varying parameters and each policy, the minimum, average and maximum relative optimality gaps were evaluated across all instances with a given parameter value. \subsubsection{Effect of changing node $1$'s servers' rate $\mu_1$} \label{s:en1sr} Fig.\ \ref{fig:dbsogapsvmu1} displays, for the DBS case, the smallest, average and largest relative optimality gaps for the four policies as $\mu_1$ varies, marked as a downward-pointing triangle, a circle, and an upward-pointing triangle, respectively. For BS, the gaps remain quite insensitive to changes in $\mu_1$. The minimum gap ranges between $0.82\%$ and $1.04\%$, the average gap between $2.85\%$ and $3.08\%$, and the maximum gap between $4.74\%$ and $5.15\%$. As for IO, the gaps also remain quite stable as $\mu_1$ varies. The minimum gap ranges between $0.001\%$ and $0.22\%$, the average gap between $6.29\%$ and $6.66\%$, and the maximum gap between $20.7\%$ and $21.66\%$. Regarding PI, its maximum gap slightly improves as $\mu_1$ gets larger. The minimum gap ranges between $0\%$ and $0.05\%$, the average gap between $0.32\%$ and $0.38\%$, and the maximum gap between $0.97\%$ and $1.4\%$. The maximum gap of the RB policy worsens as $\mu_1$ grows. The minimum gap ranges between $0\%$ and $0.06\%$, the average gap between $0.14\%$ and $0.67\%$, and the maximum gap between $0.79\%$ and $2.19\%$. Note that for smaller $\mu_1$ RB outperforms PI, whereas for larger $\mu_1$ the opposite holds. This suggests the appropriateness of the very small routing priority given by the RB index to a node with slow servers, in contrast to the other index policies, as shown in Fig.\ \ref{fig:indbsmu2}. It further suggests that the routing priority awarded by the RB policy does not sufficiently increase as the node's servers become faster. \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{desogapsvmu1.eps} \caption{Relative optimality gaps vs.\ $\mu_1$ under DES.} \label{fig:desogapsvmu1} \end{figure} Fig.\ \ref{fig:desogapsvmu1} shows corresponding results for the DES case. The main differences with the results for the DBS case are: (1) the gaps are somewhat smaller for all policies; and (2) the maximum and average gaps for the IO policy increase with $\mu_1$.
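The per-parameter aggregation used in this subsection (and in those that follow) amounts to a simple grouping step; a minimal Python sketch, with a hypothetical flat layout for the results, is:
\begin{verbatim}
from statistics import mean

def gaps_by_value(results, param):
    # results: iterable of dicts with keys "policy", "gap" and the
    # varying parameters "mu1", "rho", "theta", "C" (our layout).
    # Returns {(policy, value): (min, avg, max)} over all instances
    # sharing that value of the chosen parameter.
    buckets = {}
    for r in results:
        buckets.setdefault((r["policy"], r[param]), []).append(r["gap"])
    return {k: (min(v), mean(v), max(v)) for k, v in buckets.items()}
\end{verbatim}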
\begin{figure}[!htb] \centering \includegraphics[height=2.5in]{dbsogapsvrho.eps} \caption{Relative optimality gaps vs.\ $\rho$ under DBS.} \label{fig:dbsogapsvrho} \end{figure} \subsubsection{Effect of changing the system's nominal load $\rho$} \label{s:ecsnlrho} To assess the effect of changing $\rho$ across the grid of values considered, corresponding values of the arrival rate $\lambda$ were generated for each instance by taking $\lambda = (m_1 \mu_1 + m_2 \mu_2) \rho $. Fig.\ \ref{fig:dbsogapsvrho} displays the results in the DBS case. For BS, the three gaps tend to deteriorate as $\rho$ grows. The minimum gap ranges between $0.83\%$ and $1.42\%$, the average gap between $1.82\%$ and $3.42\%$, and the maximum gap between $2.87\%$ and $5.15\%$. As for IO, its gaps deteriorate substantially as $\rho$ grows. The minimum gap increases from $0.001\%$ to $3.06\%$, the average gap increases from $0.42\%$ to $12.94\%$, and the maximum gap increases from $1.32\%$ to $21.66\%$. Such behavior is consistent with the fact that such a policy does not incorporate arrival rate information. Regarding PI, its gaps remain quite stable as $\rho$ varies. The minimum gap ranges between $0\%$ and $0.03\%$, the average gap between $0.3\%$ and $0.45\%$, and the maximum gap between $1.16\%$ and $1.4\%$. As for RB, the minimum gap remains at $0\%$, the average gap ranges between $0.14\%$ and $0.8\%$, and the maximum gap between $0.54\%$ and $2.19\%$. Note that the maximum and average gaps reach a peak at $\rho = 1$, and then improve as $\rho$ grows. Note that for smaller $\rho$ PI outperforms RB, whereas for larger $\rho$ RB is better. This is consistent with the fact that the PI policy is insensitive to the arrival rate as this becomes large, whereas the RB policy steeply decreases the routing priority of basic nodes in heavy traffic, as shown in Fig.\ \ref{fig:indbsLam3}. \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{desogapsvrho.eps} \caption{Relative optimality gaps vs.\ $\rho$ under DES.} \label{fig:desogapsvrho} \end{figure} Fig.\ \ref{fig:desogapsvrho} shows the results under DES. The main differences with the DBS case are: (1) again, the gaps are somewhat smaller for all policies except for IO; (2) the maximum and average gaps for BS and IO do not vary much as $\rho$ changes; and (3) RB is better than PI for small $\rho$, and PI is better than RB for larger $\rho$ within the range considered. \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{dbsogapsvtheta.eps} \caption{Relative optimality gaps vs.\ $\theta$ under DBS.} \label{fig:dbsogapsvtheta} \end{figure} \subsubsection{Effect of changing the abandonment rate $\theta$} \label{s:ecsnltheta} Fig.\ \ref{fig:dbsogapsvtheta} displays the smallest, average and largest relative optimality gaps for each of the policies considered as $\theta$ varies, in the DBS case. For BS, the three gaps tend to deteriorate as $\theta$ grows. The minimum gap increases from $0.83\%$ to $1.68\%$, the average gap from $2.07\%$ to $3.6\%$, and the maximum gap from $3.12\%$ to $5.15\%$. As for IO, its worst gap degrades for the smaller values of $\theta$ considered, and then levels off. The average and minimum gaps remain stable. The minimum gap ranges from $0.001\%$ to $0.002\%$, the average gap from $5.73\%$ to $6.79\%$, and the maximum gap from $14.74\%$ to $21.66\%$. Regarding PI, its worst and average gaps degrade slightly as $\theta$ grows. 
The minimum gap ranges between $0\%$ and $0.01\%$, the average gap between $0.22\%$ and $0.55\%$, and the maximum gap between $0.67\%$ and $1.4\%$. Concerning RB, its gaps remain stable as $\theta$ varies. The minimum gap remains at $0\%$, the average gap ranges between $0.14\%$ and $0.8\%$, and the maximum gap between $1.96\%$ and $2.19\%$. Note that PI consistently outperforms RB in the worst case. \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{desogapsvtheta.eps} \caption{Relative optimality gaps vs.\ $\theta$ under DES.} \label{fig:desogapsvtheta} \end{figure} Fig.\ \ref{fig:desogapsvtheta} shows the results under DES. The main differences with the DBS case are: (1) the gaps are somewhat smaller for all policies except for IO; (2) the average gap for BS does not vary much as $\theta$ changes; (3) the worst gap for IO deteriorates for larger $\theta$; and (4) the average and worst gaps for RB improve as $\theta$ grows, to the point that RB outperforms PI for larger $\theta$. \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{dbsogapsvC.eps} \caption{Relative optimality gaps vs.\ $C$ under DBS.} \label{fig:dbsogapsvC} \end{figure} \subsubsection{Effect of changing the external node's usage cost $C$} \label{s:ecesnuc} Fig.\ \ref{fig:dbsogapsvC} displays the smallest, average and largest relative optimality gaps for each policy considered as $C$ varies in the DBS case. For BS, the minimum gap remains stable, ranging from $0.83\%$ to $0.98\%$, the average gap increases and then decreases as $C$ grows, ranging from $1.85\%$ to $3.55\%$, and the maximum gap has a similar behavior to the average gap, ranging from $2.34\%$ to $5.15\%$. As for IO, its qualitative behavior is similar to that of the BS policy. The minimum gap ranges from $0.001\%$ to $0.37\%$, the average gap from $3.12\%$ to $9.53\%$, and the maximum gap from $6.58\%$ to $21.66\%$. Regarding PI, its maximum and average gaps improve slightly as $C$ grows. The minimum gap ranges between $0\%$ and $0.1\%$, the average gap decreases with $C$, ranging between $0.23\%$ and $0.56\%$, and the maximum gap ranges between $0.72\%$ and $1.4\%$. Concerning RB, its maximum and average gaps deteriorate as $C$ grows. The minimum gap grows from $0\%$ to $0.02\%$, the average gap from $0.1\%$ to $0.72\%$, and the maximum gap from $0.34\%$ to $2.19\%$. Note that while RB outperforms PI for small $C$, the opposite happens for large $C$. \begin{figure}[!htb] \centering \includegraphics[height=2.5in]{desogapsvC.eps} \caption{Relative optimality gaps vs.\ $C$ under DES.} \label{fig:desogapsvC} \end{figure} Fig.\ \ref{fig:desogapsvC} displays the results in the DES case. The main differences with the DBS case are that the worst-case gaps deteriorate as $C$ grows for the BS, IO and PI policies, leveling off for large enough $C$. \section{Discussion and conclusions} \label{s:d} The results presented above allow us to draw some insights about the strengths and weaknesses of the four policies considered. Concerning the optimal BS policy, it can be efficiently computed, assuming validity of Conjecture \ref{con:fkci}, and its performance is remarkably close to optimal for a static policy, at least for the range of instances considered. Its relative optimality gap tends to deteriorate as the abandonment rate grows. As for the IO policy, it is the easiest to compute, being given in closed form. Yet, despite being a dynamic policy, its performance is substantially suboptimal, being the worst of the four policies.
As a result of not incorporating the arrival rate parameter, its relative optimality gap severely degrades as traffic becomes heavier. It also fails to appropriately incorporate other model parameters, as shown in the results in \S\ref{s:ecvmp}. Regarding the PI policy, it can be efficiently computed through a simple linear recursion and is overall the best performing of the four, consistently achieving a nearly optimal performance. Yet, it is insensitive to the arrival rate $\lambda$ and to the external node's usage cost $C$ as they become large enough, which causes a slight performance degradation in such cases. Finally, the RB policy can also be efficiently computed through a linear recursion, and is overall the second-best performing of the four policies, being also consistently near optimal. Yet, its index values grow too steeply with the state, which may lead it to outsource too many jobs for large values of $C$, causing a slight deterioration of performance in such cases. While a theoretical proof that the routing indices considered satisfy the intuitive target properties listed in Table \ref{t:indexp} remains elusive for PI and RB, the above results provide supporting evidence that such is the case. This paper raises several challenging issues for future work, such as proving (or disproving) Conjecture \ref{con:fkci}, analyzing theoretically whether the policies considered satisfy the target properties in Table \ref{t:indexp}, analyzing their performance, carrying out comparative studies on larger-scale instances, and extending the present model to incorporate more complex dynamics. \section*{Acknowledgements} This work was supported by the Spanish General Directorate for Scientific and Technical Research (DGICYT) projects MTM2010-20808 and ECO2015-66593-P. The author presented preliminary parts of this work at the 13th IEEE International Conference on High Performance Computing and Communications (HPCC 2011, Banff, Canada), and at the 7th International Conference on Network Games, Control and Optimization (NetGCooP 2014, Trento, Italy), appearing in abridged form in their proceedings (\citet{nmhpcc11,nmnetgcoop14}).
\section{Introduction} In the last years, a substantial effort has been devoted to measuring black hole (BH) masses for various quasar samples covering a wide range of redshifts and luminosities. McLure \& Dunlop (2004), from H$\beta$\ and Mg\textsc{ii}, measured virial black hole masses ($M_{\rmn{BH}}$) for $\sim10000$ quasars with $z\le 2.1$ included in the Sloan Digital Sky Survey (SDSS) Data Release 1 (DR1). Fine et al.\ (2006) used composite spectra to measure the dependence on redshift of the mean BH mass for an $L^*$ subsample of the 2QZ quasar catalogue (Croom et al.\ 2004) from $z\sim 0.5$ to $z\sim 2.5$. Shen et al. (2008) listed BH masses for $\sim 60000$ quasars in the redshift range $0.1\lesssim z \lesssim4.5$ contained in the SDSS DR5, by means of virial BH mass estimators based on the H$\beta$, Mg\textsc{ii}\ and C\textsc{iv}\ lines. A common result of these works is that the mean BH mass of the QSO population at given $z$ appears to increase with redshift, but the observed $z$-dependence is dominated by the well-known Malmquist bias, because the BH mass strongly correlates with the central source luminosity (see Vestergaard et al.\ 2008 for a detailed analysis of the selection bias effects). McLure \& Dunlop (2004), for instance, suggest that the observed active BH mass evolution is entirely due to the effective flux limit of the sample. A full understanding of this scenario would give important insights on the BH formation and evolution and on the activation of the quasar phenomenon. Moreover, along with a parallel study on the dependence on redshift of the host galaxy luminosity (mass), this would shed light on the joint evolution of galaxy bulges and their central black holes. For these reasons, it is of central importance to trace the dependence of $M_{\rmn{BH}}$\ on $z$, overcoming the problems related to the Malmquist bias. We start from the recently published SDSS DR5 quasar catalogue (Schneider et al.\ 2007) and focus on the $\sim50000$ quasars for which Mg\textsc{ii}\ line width and 3000 \AA\ flux are available (Shen et al. 2008). The sample selection is described in Section \ref{sample}. The sample ($0.35<z<2.25$) is divided into 8 redshift bins and it is shown that, in each bin, the object distribution in the FWHM--luminosity plane can be reproduced assuming a minimum luminosity, a maximum mass and a maximum Eddington ratio (Sections \ref{secdistrib} and \ref{secdensity}). Comparing the assumed probability density to the observed distribution of objects, the parameters can be determined in each redshift bin (Section \ref{secfit}). This procedure is shown to be unaffected by the Malmquist bias (Section \ref{robust}), and provides a method to study the ``unbiassed'' dependence on redshift of quasar BH masses and Eddington ratios (Section \ref{secevoz}). In Section \ref{seclll} we test the dependence of our results on the assumed $r_{\rm BLR}-\lambda L_{\lambda}$ calibration. We compare our results with previous literature in Section \ref{seccomp} and in Section \ref{secdisc} we discuss some implications of our findings for the study of the co-evolution of supermassive BHs and their host galaxies. A summary of the paper is given in the last Section. Throughout this paper, we adopt a concordant cosmology with $H_0=70$ ~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m=0.3$ and $\Omega_{\Lambda}=0.7$. \section{The Mg II sample}\label{sample} The SDSS DR5 quasar catalogue (Schneider et al. 2007) contains more than $77,000$ quasars.
It covers about 8000 deg$^{2}$ and selects objects that have $M_i<-22$, have at least one emission line with FWHM larger than 1000 km$/$s or are unambiguously broad absorption line objects, are fainter than $i=15.0$ and have highly reliable redshifts. Shen et al. (2008) calculated BH masses for $\sim 60000$ quasars in the redshift range $0.1<z<4.5$ included in the SDSS DR5 quasar catalogue, using virial BH mass estimators based on the H$\beta$, Mg\textsc{ii}\ and C\textsc{iv}\ emission lines. They provide rest-frame line widths and monochromatic luminosities at 5100 \AA, 3000 \AA\ and 1350 \AA\ (see Shen et al. 2008 for details on calibrations, measurement procedures and corrections). In the following we will focus on the $\sim50000$ quasars from the Shen et al.~(2008) sample for which Mg\textsc{ii}\ line width and 3000 \AA\ monochromatic flux are available (hereafter Mg\textsc{ii}\ sample). We assume the virial theorem and adopt the calibration of McLure \& Dunlop (2004) to evaluate the BH mass: \begin{equation}\label{formulamassa} \log M_{\rmn{BH}}=6+\log(a)+2\log({\rmn{FWHM}})+b\log{\lambda L_{\lambda}} \end{equation} with $a=3.2\pm 1.1$ and $b=0.62\pm 0.14$. Here $M_{\rmn{BH}}$\ is expressed in solar masses, FWHM in units of 1000 km/s and $\lambda L_{\lambda}$ in units of $10^{44}$ erg/s. \section{Description of the procedure}\label{secprocedure} \subsection{Malmquist bias}\label{secbias} \begin{figure} \includegraphics[width=0.37\textwidth, angle=270]{fig1} \caption[]{Average values of the absolute magnitude $M_i$ (triangles) and of $M_{\rmn{BH}}$\ (circles) vs. redshift, in bins $\Delta z=0.15$. The typical standard deviation of each bin is given in the inset. } \label{biasbis} \end{figure} Figure \ref{biasbis} shows the mean absolute $i$ magnitude (see Shen et al. 2008 for details) vs.~redshift of the Mg\textsc{ii}\ sample. The effects of the Malmquist bias are apparent as an increase of the average observed luminosity with redshift. The mean BH mass vs.~redshift is overplotted on the mean $M_i(z)$: it is apparent that the average observed BH masses follow the same trend as the absolute magnitudes with a higher dispersion, as expected given that the distribution of the line widths does not depend on luminosity or redshift (Shen et al. 2008). This suggests that the $z$-dependence of the observed BH masses is strongly subject to a Malmquist-type bias, because at high redshift one cannot observe low mass objects. In order to trace the ``unbiased'' dependence of active BH masses on redshift, one should consider a combination of two effects, namely the $z$-dependence of the quasar number density and the increase of the average mass of quasar populations with redshift. To illustrate these effects consider two extreme cases: \begin{enumerate} \item The $M_{\rmn{BH}}$\ distribution does not depend on redshift, but the quasar number density increases until $z\sim2-2.5$. At any redshift there is a population of low mass ($\sim10^8\rmn{M}_{\odot}$) quasars, which cannot be observed at high redshift, and the population of high mass ($\sim10^{9.5}\rmn{M}_{\odot}$) active BHs that is observed at $z\gtrsim1.5$ is the high mass end of the $M_{\rmn{BH}}$\ distribution. \item The quasar $M_{\rmn{BH}}$\ distribution shifts toward higher masses at increasing redshift. The population of low mass ($\sim10^8\rmn{M}_{\odot}$) objects that is observed at low redshift is not present at all at $z\gtrsim1.5$. 
The observed increase of $M_{\rmn{BH}}$\ with redshift, in active BHs, is ``true'' and it is not due to a Malmquist-type bias. \end{enumerate} Of course, each of these pictures is {\it per se} unlikely: the observed dependence on redshift of quasar BH masses is due to a combination of both these effects. In the following, we will concentrate on these points using statistical arguments, focusing on the distribution of objects in the FWHM--luminosity plane. \subsection{Quasar distribution in the FWHM--$\lambda L_{\lambda}$ plane}\label{secdistrib} \begin{figure*} \begin{minipage}{\textwidth} \centering \includegraphics[width=0.6\textwidth, angle=270]{fig2} \caption[]{The 8 panels show the Mg\textsc{ii}\ sample in the FWHM--luminosity plane at increasing redshift. Dotted, dashed and dash-dotted lines (and lines parallel to them) represent the loci of constant monochromatic luminosity, constant mass and constant Eddington ratio respectively.} \label{graficozsemplice} \end{minipage} \end{figure*} Figure \ref{graficozsemplice} shows the objects of the Mg\textsc{ii}\ sample in the logFWHM--log$\lambda L_{\lambda}$ plane (see Fine et al.~2008 for a similar approach). The sample has been divided into 8 redshift bins of equal co-moving volume. In each panel, it is apparent that the data-points form a sort of ``triangle'', the left side of which represents a cut due to the survey flux limit (which gives rise to the Malmquist bias). From Eq.~\ref{formulamassa}, the loci of quasars with constant mass are represented in this plane by straight lines with fixed slope, as plotted in the figure: \begin{equation}\label{formulamassa2} \log({\rmn{FWHM}})=-0.31\log{\lambda L_{\lambda}}+0.5\log\frac{M_{\rm BH}}{M_{\odot}}-3.25 \end{equation} where units are the same as in Eq. \ref{formulamassa}. We propose that the top right side of the triangle is representative of a maximum mass in the quasar sample. The third (i.e. the bottom right) side of the triangle is supposedly due to the Eddington limit, as the loci of quasars with constant Eddington ratios are again straight lines. The dependence of FWHM on the monochromatic luminosity at a given Eddington ratio is fixed assuming the bolometric correction by Richards et al. (2006; BC$_{3000}=5.15$) and Eq.~\ref{formulamassa}. This yields: \begin{equation}\label{formulaedd} \log({\rmn{FWHM}})=0.19\log{\lambda L_{\lambda}}-0.5\log\frac{L_{\rm bol}}{L_{\rm Edd}}+0.05 \end{equation} where units are the same as in Eq. \ref{formulamassa}. Note that in each redshift bin, the plotted cuts describe the shape of the quasar distribution in the FWHM--luminosity plane qualitatively well. \subsection{Construction of a probability density}\label{secdensity} \begin{figure} \includegraphics[width=0.375\textwidth, angle=270]{fig3} \caption[]{The shape of $\tilde{P}_l(l)$ (Eq. \ref{PL}), $\tilde{P}_m(m)$ (Eq. \ref{PM}) and $\tilde{P}_e(e)$ (Eq. \ref{PE}).} \label{distrib} \end{figure} Now we aim to construct a probability density of quasars as a function of FWHM and luminosity, adopting simplicity as the main criterion. We propose that in each redshift bin the object density is only constrained by a maximum mass, a maximum Eddington ratio and a minimum luminosity due to the instrumental flux limit. 
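These three cuts are straight lines in the $(\log\lambda L_{\lambda},\,\log{\rm FWHM})$ plane, so they are straightforward to evaluate numerically. A minimal Python sketch of the loci in Eqs.~\ref{formulamassa2} and \ref{formulaedd} (the cut values used here are purely illustrative, not fitted results):
\begin{verbatim}
import numpy as np

# Units as in the virial calibration: FWHM in units of
# 1000 km/s, lambda*L_lambda in units of 1e44 erg/s.
log_lum = np.linspace(0.0, 3.0, 61)

def log_fwhm_const_mass(log_lum, log_m):
    """Constant-mass locus: a straight line of slope -0.31."""
    return -0.31 * log_lum + 0.5 * log_m - 3.25

def log_fwhm_const_edd(log_lum, log_edd):
    """Constant-Eddington-ratio locus: slope +0.19."""
    return 0.19 * log_lum - 0.5 * log_edd + 0.05

# Illustrative positions of the two 'triangle' sides:
fwhm_mmax = log_fwhm_const_mass(log_lum, log_m=9.2)
fwhm_emax = log_fwhm_const_edd(log_lum, log_edd=-0.35)
\end{verbatim}
Each choice of $m_{\rm max}$ or $e_{\rm max}$ simply shifts the corresponding line vertically; the probability density constructed below smooths these hard cuts into step functions.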
We then assume a probability density of the form: \begin{equation}\label{Ptot} P_{\,l,\,{\rm FWHM}}(l,\,{\rm FWHM})=k\cdot\tilde{P}_l(l)\cdot \tilde{P}_m(m)\cdot \tilde{P}_e(e) \end{equation} where $k$ is a normalization constant and each $\tilde{P}$ is assumed to be a smoothed step function, which increases from 0 to 1 (or {\it vice versa}) in a range of width $\sigma$ around a fixed value of the independent variable. In the following, we describe our results assuming $\tilde{P}$ of the form (see Figure \ref{distrib}): \begin{equation}\label{PL} \tilde{P}_l(l)=\frac{1}{\sigma_l \sqrt{2\pi}}\int_{-\infty}^{l}\!\!\!\!{\rm exp}\Big[-\frac{(l'-l_{\rm min}-2\sigma_l)^2}{2\sigma_l^2}\Big]dl' \end{equation} \begin{equation}\label{PM} \tilde{P}_m(m)=\frac{1}{\sigma_m \sqrt{2\pi}}\int_{m}^{+\infty}\!\!\!\!\!\!{\rm exp}\Big[-\frac{(m'-m_{\rm max}+2\sigma_m)^2}{2\sigma_m^2}\Big]dm' \end{equation} \begin{equation}\label{PE} \tilde{P}_e(e)=\frac{1}{\sigma_e \sqrt{2\pi}}\int_{e}^{+\infty}\!\!\!\!\!\!{\rm exp}\Big[-\frac{(e'-e_{\rm max}+2\sigma_e)^2}{2\sigma_e^2}\Big]de' \end{equation} where: \begin{equation} l\equiv\log\lambda L_{\lambda} \end{equation} \begin{equation} m=m(l,\,{\rm FWHM})\equiv\log\frac{M_{\rm BH}}{M_{\odot}} \end{equation} \begin{equation} e=e(l,\,{\rm FWHM})\equiv\log\frac{L_{\rm bol}}{L_{\rm Edd}}. \end{equation} Here, the parameters $l_{\rm min}$ and $\sigma_l$, $m_{\rm max}$ and $\sigma_m$, $e_{\rm max}$ and $\sigma_e$ are the minimum luminosity, the maximum mass, the maximum Eddington ratio and the widths of the corresponding distributions. These parameters will be determined in the following via a best fit procedure. Note that, since the integrals of the $\tilde{P}$ functions diverge, we must restrict their domain to use them as probability densities (e.g., to $l\lesssim l_{\rm min}+3\sigma_l$, $m\gtrsim m_{\rm max}-3\sigma_m$ and $e\gtrsim e_{\rm max}-3\sigma_e$). This does not significantly affect the results, because mass, Eddington ratio and luminosity are not independent variables (for example, low mass objects also have low luminosities or high Eddington ratios), hence the derived probability density $P_{\,l,\,{\rm FWHM}}(l,\,{\rm FWHM})$ (Eq. \ref{Ptot}) is essentially insensitive to the shape of $\tilde{P}_m(m)$ at low masses, of $\tilde{P}_e(e)$ at low Eddington ratios or to the shape of $\tilde{P}_l(l)$ at high luminosities. \begin{figure*} \begin{minipage}{\textwidth} \centering \includegraphics[width=0.6\textwidth, angle=270]{fig4} \caption[]{The 8 panels show the Mg\textsc{ii}\ sample in the FWHM--luminosity plane at increasing redshift: solid black contour plot (levels: 20, 90, 250 objects per box, see text) represents the discrete observed distribution of objects. Dotted red contour plot (same levels) shows the discrete distribution of a sample of objects simulated with the Monte Carlo method, adopting the assumed $P_{\,l,\,{\rm FWHM}}(l,\,{\rm FWHM})$ probability density with the best fit parameters. Dotted, dashed and dash-dotted lines represent $l_{\rm min}$, $m_{\rm max}$ and $e_{\rm max}$ respectively.} \label{graficoz} \end{minipage} \end{figure*} \subsection{Best fit procedure}\label{secfit} The assumed probability density depends on 6 free parameters, i.e. the minimum luminosity ($l_{\rm min}$), the maximum mass ($m_{\rm max}$) and Eddington ratio ($e_{\rm max}$) and the widths of the corresponding distributions ($\sigma_l$, $\sigma_m$ and $\sigma_e$). 
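Each $\tilde{P}$ is simply a Gaussian cumulative (or survival) function, so the discretized density entering the fit is easy to construct. A minimal Python sketch, using for illustration the best fit values of the first redshift bin from Table~\ref{tab}:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def P_l(l, l_min, s_l):   # smoothed lower cut in luminosity
    return norm.cdf(l, loc=l_min + 2.0 * s_l, scale=s_l)

def P_m(m, m_max, s_m):   # smoothed upper cut in mass
    return norm.sf(m, loc=m_max - 2.0 * s_m, scale=s_m)

def P_e(e, e_max, s_e):   # smoothed upper cut in Eddington ratio
    return norm.sf(e, loc=e_max - 2.0 * s_e, scale=s_e)

# Grid discretized as in the fit: 0.2 dex in log-luminosity,
# 0.04 dex in log-FWHM.
l = np.arange(44.0, 47.0, 0.2)    # log lambda*L_lambda [erg/s]
f = np.arange(3.2, 4.2, 0.04)     # log FWHM [km/s]
L, F = np.meshgrid(l, f)

# m and e follow from the virial calibration (a = 3.2, b = 0.62)
# and the bolometric correction BC_3000 = 5.15.
M = 6 + np.log10(3.2) + 2.0 * (F - 3.0) + 0.62 * (L - 44.0)
E = np.log10(5.15) + L - np.log10(1.26e38) - M

P = P_l(L, 44.30, 0.26) * P_m(M, 9.18, 0.31) * P_e(E, -0.35, 0.22)
P /= P.sum()    # normalized over the grid
\end{verbatim}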
We focus on the first redshift bin and determine the free parameters by matching the assumed density to the observed distribution of objects in the FWHM--luminosity plane. In detail, for each choice in the 6-dimensional parameter space, the probability density has been constructed, discretized in boxes with $\Delta \log$FWHM$=0.04$ dex and $\Delta \log \lambda L_{\lambda}=0.2$ dex and then normalized to the total number of observed objects, in order to evaluate the expected number of objects in each box ($\Delta\log \lambda L_{\lambda},\,\Delta \log$FWHM). We assumed a Poissonian error (i.e. $\sqrt{n}$) on the observed number of objects in each box. For each choice of the parameters, the expected distribution was compared to the observed distribution in the discrete $\log \lambda L_{\lambda}-\log$FWHM plane, evaluating the corresponding $\chi^2$ value. The minimum $\chi^2$ determines the best fit parameters. In order to determine the uncertainties on these values, the same fit procedure was repeated many times comparing the observed distribution to a set of simulated distributions of objects, constructed through the Monte Carlo method. This procedure allows an estimate of the error since we sample the underlying probability density only through a finite number of observed objects, the distribution of which in the FWHM--luminosity plane ideally follows Eq.~\ref{Ptot} with a certain random dispersion. In detail, given a set of values of the six parameters, we generated $10^7$ points ($\log \lambda {\rm L}_{\lambda},\,\log$FWHM) with uniform probability densities, and then rejected points according to the assumed $P_{\,l,\,{\rm FWHM}}(l,\,{\rm FWHM})$ at given $l_{\rm min}$, $m_{\rm max}$, $e_{\rm max}$, $\sigma_l$, $\sigma_m$ and $\sigma_e$, so that the number of simulated points matches the number of observed objects. We then calculated the root mean square (rms) difference between the observed and the simulated distributions. This operation was repeated for all the possible combinations of the six parameters (in a reduced phase space around the best fit values). The sextuple which led to the minimum rms gave the so-called Monte Carlo best fit parameters. This procedure was repeated a dozen times, giving as many Monte Carlo best fit values for each parameter, slightly different from one another, but fully consistent with the previous determination. For each parameter, the standard deviation of this set of best fit values was assumed as an estimate of its uncertainty. This uncertainty is much larger than that corresponding to $\Delta\chi^2=1$. \begin{table*} \centering \begin{minipage}{\textwidth} \centering \caption{Best fit values of minimum luminosity, maximum mass, maximum Eddington ratio and widths of the corresponding distributions, with errors and $\chi^2_{\nu}$. The number of degrees of freedom is $\nu=594$ in the first redshift bin (600 data-points and 6 free parameters) and $\nu=595$ in the others (600 data-points and 5 free parameters). 
Data that come from a best fit procedure are displayed in boldface.} \begin{tabular}{@{}ccccccccc@{}} \hline \hline Bin & $<z>$ & $l_{\rm min}$ & $\sigma_l$ & $m_{\rm max}$ & $\sigma_m$ & $e_{\rm max}$ & $\sigma_e$ & $\chi^2_{\nu}$ \\ \hline 1st &0.62&{\bf 44.30}$\pm0.025$&{\bf 0.26}$\pm0.008$&{\bf 9.18}$\pm0.05$&{\bf 0.31}$\pm 0.003$&{\bf -0.35}$\pm0.02$&{\bf 0.22}$\pm0.003$&3.51\\ \hline 2nd &0.97&44.83 &{\bf 0.23}$\pm0.007$&{\bf 9.35}$\pm0.04$&{\bf 0.31}$\pm 0.006$&{\bf -0.34}$\pm0.02$&{\bf 0.23}$\pm0.003$&4.97\\ 3rd &1.22&45.07 &{\bf 0.23}$\pm0.005$&{\bf 9.42}$\pm0.05$&{\bf 0.31}$\pm 0.004$&{\bf -0.33}$\pm0.02$&{\bf 0.23}$\pm0.002$&4.63\\ 4th &1.42&45.24 &{\bf 0.23}$\pm0.008$&{\bf 9.43}$\pm0.05$&{\bf 0.32}$\pm 0.003$&{\bf -0.34}$\pm0.02$&{\bf 0.22}$\pm0.001$&5.11\\ 5th &1.61&45.36 &{\bf 0.24}$\pm0.006$&{\bf 9.52}$\pm0.04$&{\bf 0.31}$\pm 0.003$&{\bf -0.35}$\pm0.01$&{\bf 0.22}$\pm0.003$&4.36\\ 6th &1.80&45.51 &{\bf 0.22}$\pm0.005$&{\bf 9.60}$\pm0.05$&{\bf 0.32}$\pm 0.002$&{\bf -0.34}$\pm0.02$&{\bf 0.22}$\pm0.004$&8.44\\ 7th &1.98&45.61 &{\bf 0.22}$\pm0.005$&{\bf 9.67}$\pm0.05$&{\bf 0.31}$\pm 0.003$&{\bf -0.33}$\pm0.02$&{\bf 0.22}$\pm0.004$&6.84\\ 8th &2.15&45.67 &{\bf 0.24}$\pm0.006$&{\bf 10.02}$\pm0.05$&{\bf 0.30}$\pm 0.005$&{\bf -0.34}$\pm0.02$&{\bf 0.21}$\pm0.003$&7.21\\ \hline \label{tab} \end{tabular} \end{minipage} \end{table*} In the first panel of Fig. \ref{graficoz} we compare the observed distribution with one simulated best fit distribution for the lowest redshift bin. It is apparent that the choice of three simple distributions in luminosity, mass and Eddington ratios describes the data rather closely, giving circumstantial support to the validity of the virial hypothesis on which the theoretical assumptions are based. Table \ref{tab} (first line) contains the best fit values of the 6 parameters with their errors and the reduced $\chi^2$ value (hereafter $\chi^2_{\nu}$). The fact that the $\chi^2_{\nu}$ is larger than 1 is interpreted as due to the choice of an oversimplified distribution. This does not influence our results, because our goal is to quantify a parameter related to the BH mass (and one related to the Eddington ratio) that is not affected by a Malmquist-type bias (i.e. disentangled from the $z$-dependence of the instrumental luminosity limit). In fact, by construction, $m_{\rm max}$ and $e_{\rm max}$ depend neither on the quasar number density nor on the survey flux limit (see next Section for tests on this statement). \subsection{Bias analysis and robustness of the procedure}\label{robust} The effect of the luminosity cut on the results for $m_{\rm max}$ and $e_{\rm max}$ can be further tested by simulation. In order to show that our results are not affected by the instrumental flux limit of the dataset, we selected a subsample from the lowest redshift bin by applying the probability function $\tilde{P}_l(l)$ (Eq. \ref{PL}) with a higher luminosity cut, i.e. assuming the $l_{\rm min}$ and $\sigma_{\rm l}$ derived for the third redshift bin in which $<z>=1.22$ (see next Section). This subsample consists of about 1/12 of the objects in the original lowest redshift sample. The fit procedure presented in this paper has been performed again on this subsample. 
Figure \ref{test1} (left panel) shows that the luminosity cut of a higher redshift bin has negligible effects on the results: the derived values ($m_{\rm max}=9.20$ and $e_{\rm max}=-0.36$) are consistent within $1\sigma$ with the best fit parameters derived for the whole sample ($m_{\rm max}=9.18\pm0.05$ and $e_{\rm max}=-0.35\pm0.02$). A similar test has been performed to show that $m_{\rm max}$ and $e_{\rm max}$ do not depend on the quasar number density: we re-sampled from the first redshift bin randomly rejecting $2/3$ of the objects, in order to obtain a smaller sample with the same distribution. The fit procedure was then performed on the reduced sample. Again, no significant deviation in the determination of the best fit parameters was observed (see Figure \ref{test1}, right panel): the derived values ($m_{\rm max}=9.20$ and $e_{\rm max}=-0.34$) are consistent within $1\sigma$ with the best fit parameters obtained for the whole sample. These tests show that $m_{\rm max}$ and $e_{\rm max}$ are independent of the quasar number density and of the survey flux limits, and therefore indicate that our procedure is not affected by a Malmquist-type bias. \section{Evolution of the QSO population}\label{secevo} \subsection{Quasar BH mass and Eddington ratio dependence on redshift}\label{secevoz} \begin{figure} \includegraphics[width=0.375\textwidth, angle=270]{fig5} \caption[]{ Same as Figure~\ref{graficoz}, but for a subsample of the lowest redshift bin objects, selected according to the luminosity cut function of a higher redshift bin (left panel) or randomly selected with $p=0.3$ (right panel). For comparison, thin green lines represent the whole low redshift sample and its best fit $l_{\rm min}$, $m_{\rm max}$ and $e_{\rm max}$ lines. } \label{test1} \end{figure} \begin{figure} \includegraphics[width=0.375\textwidth, angle=270]{fig6} \caption[]{Black dots represent the best fit parameters $l_{\rm min}$ vs. $z$, compared to the values expected from cosmology (solid line).} \label{plotlumi} \end{figure} \begin{figure} \includegraphics[width=0.375\textwidth, angle=270]{fig7} \caption[]{Normalized $\chi^2_{\nu}$ values of the best fit function. Filled circles refer to the fit procedure described in this work. Open triangles show the $\chi^2_{\nu}$ values that would be obtained assuming that the quasar BH mass is constant with redshift and open squares show the $\chi^2_{\nu}$ values assuming that the BH mass evolves with redshift as proposed by Fine et al. (2006).} \label{chi2v} \end{figure} The fit procedure described above is applied to all the redshift bins, in order to determine the best fit parameters and their uncertainties as a function of redshift. In each redshift bin we compared the best fit minimum luminosity with the values inferred through the $z$-dependence of the luminosity distance (see Fig.~\ref{plotlumi}). It is apparent that the agreement is very good: apart from the highest redshift bin, where the 3000 \AA{} continuum is very close to the red edge of the observed spectral range and the flux calibration may be unreliable, all the data are consistent with the expectations within $1\sigma$. This gives further support to the assumed description of the object distributions in the FWHM--luminosity panels and suggests repeating the entire procedure assuming that the value of $l_{\rm min}(z)$ is constrained by cosmology. 
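Indeed, the expected run of $l_{\rm min}(z)$ follows directly from the adopted cosmology: for a fixed flux limit, $l_{\rm min}(z)$ scales as $2\log_{10}d_L(z)$ plus a constant. A minimal Python sketch, anchoring the flux limit to the fitted $l_{\rm min}$ of the first bin (bandpass corrections are neglected here, so small offsets with respect to Table~\ref{tab} are expected):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

H0, Om, OL = 70.0, 0.3, 0.7       # adopted cosmology
c = 299792.458                     # speed of light [km/s]

def d_L(z):
    """Luminosity distance [Mpc] in the adopted flat cosmology."""
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1 + zp)**3 + OL)
    dc, _ = quad(integrand, 0.0, z)
    return (1 + z) * (c / H0) * dc

def l_min(z, l_min_ref=44.30, z_ref=0.62):
    """Minimum luminosity vs. z for a fixed flux limit,
    anchored to the first redshift bin."""
    return l_min_ref + 2.0 * (np.log10(d_L(z)) - np.log10(d_L(z_ref)))

for z in (0.97, 1.22, 1.42, 1.61, 1.80, 1.98, 2.15):
    print(z, round(l_min(z), 2))   # to be compared with Table 1
\end{verbatim}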
The same fit procedure is then applied again to all the redshift bins, but now the dependence on redshift of the minimum luminosity is set by cosmology and $l_{\rm min}$ is no longer treated as a free parameter. In each bin, the $\chi^2_{\nu}$ was evaluated and normalized to the $\chi^2_{\nu,0}$ value obtained in the first redshift bin, in order to compare the adequacy of the best fit function in the 8 panels. Figure \ref{chi2v} shows that these values are almost constant across the redshift bins. Again, Fig. \ref{graficoz} and Table \ref{tab} show respectively the Monte Carlo simulated distributions compared to the observed distributions of quasars, and the best fit values, their errors and the corresponding $\chi^2_{\nu}$ values. The maximum mass and Eddington ratio values are plotted versus redshift in Figure \ref{mez}. Note that the proposed $M_{\rmn{BH}}$\ $z$-dependence refers to the active BH population and not to the total supermassive BH mass distribution. Of course, the average mass of the inactive BH population must decrease with increasing redshift. \begin{figure} \includegraphics[width=0.5\textwidth, angle=0]{fig8} \caption[]{{\it Upper panel:} small dots are the virial BH masses of the Mg\textsc{ii}\ sample given by Shen et al.~(2008); the dash-dotted line reports the corresponding mean values. Red circles are our estimates of $\log \frac{M_{\rm BH\,(max)}}{{\rm M}_{\odot}}$ and the red solid line is the best fit reported in Eq.~\ref{mmax_z}. The $M_{\rmn{BH}}$\ vs.~$z$ dependence proposed by Fine et al.~(2006) is the dashed line. {\it Lower panel:} small dots are the Eddington ratios for each source of the Mg\textsc{ii}\ sample; the dash-dotted line shows the corresponding average. Red circles are the maximum Eddington ratios and the red solid line is the best linear fit reported in Eq.~\ref{emax_z}.} \label{mez} \end{figure} A linear fit to the maximum mass values (excluding the highest redshift bin) gives: \begin{equation}\label{mmax_z} \log \frac{M_{\rm BH\,(max)}}{{\rm M}_{\odot}}=m_{\rm max}=0.34(\pm0.02)z+8.99(\pm0.03); \end{equation} while the maximum Eddington ratio ($\sim 0.45$) is consistent with no evolution with cosmic time: \begin{equation}\label{emax_z} \frac{L_{\rm bol}}{L_{\rm Edd}}\phantom{.}\!_{\rm (max)}=10^{(e_{\rm max})}=0.005(\pm0.006)z+0.45(\pm0.01). \end{equation} Assuming that the shapes of the $M_{\rmn{BH}}$\ and Eddington ratio distributions do not change with redshift, which is suggested by the fact that the values of $\sigma_m$ and $\sigma_e$ are independent of $z$ (see Table \ref{tab}), Eq. \ref{mmax_z} also describes the slope of the $z$-dependence of the mean quasar BH mass, and not only of the maximum mass of the quasar populations. Similarly, the mean (and not only the maximum) Eddington ratio is constant with redshift. \subsection{Dependence of the results on the $r_{\rm BLR}-\lambda L_{\lambda}$ calibration}\label{seclll} \begin{figure} \includegraphics[width=0.5\textwidth, angle=0]{fig9} \caption[]{ BH maximum masses (upper panel) and maximum Eddington ratios (lower panel) as a function of redshift for different values of the luminosity exponent in Eq.~\ref{formulamassa}. } \label{test2} \end{figure} We tested whether a variation of the luminosity exponent of the virial calibration may affect the relative evolution in $m_{\rm max}$ and $e_{\rm max}$ derived in this paper. 
The $L-r_{\rm BLR}$ relation assumed in Eq.~\ref{formulamassa} (McLure \& Dunlop 2004) is quite steep, although still consistent with the canonical $r_{\rm BLR}\propto\lambda L_{\lambda}^{\,\,b}$ with $b=0.5$, which is often assumed for idealised photoionisation. In order to quantify the effects that this has on the relative evolution in the maximum mass and Eddington ratio, we reproduced the analysis assuming that the exponent on the luminosity term is $b=0.5$ or $b=0.4$, both of which are consistent to within $2\sigma$ with the McLure \& Dunlop (2004) calibration in which $b=0.62\pm0.14$. \begin{table*} \centering \begin{minipage}{\textwidth} \centering \caption{Best linear fit to the maximum mass and to the maximum Eddington ratio as a function of redshift for various values of the luminosity exponent in Eq.~\ref{formulamassa} ($b=0.62$, $b=0.5$ and $b=0.4$). For each value of $b$, the average value over all the redshift bins of the $\chi^2_{\nu}$ of the best fit probability density (Eq.~\ref{Ptot}) is also given.} \begin{tabular}{@{}cccccc@{}} \hline \hline &\multicolumn{2}{c}{$\log \frac{M_{\rm BH\,(max)}}{{\rm M}_{\odot}}=\alpha z+\beta$}&\multicolumn{2}{c}{$\frac{L_{\rm bol}}{L_{\rm Edd}}\phantom{.}\!_{\rm (max)}=\alpha z+\beta$}&\\ $b$ & $\alpha$ & $\beta$ & $\alpha$ & $\beta$ & $\langle\chi^2_{\nu}\rangle_z$\\ \hline 0.62&$0.34\pm0.02$&$8.99\pm0.03$&$0.005\pm0.006$&$0.45\pm0.01$&5.6\\ 0.5 &$0.28\pm0.05$&$9.07\pm0.08$&$0.06\pm0.02$ &$0.37\pm0.03$&5.8\\ 0.4 &$0.19\pm0.05$&$9.19\pm0.07$&$0.13\pm0.01$ &$0.25\pm0.02$&7.5\\ \hline \label{tabz} \end{tabular} \end{minipage} \end{table*} Figure \ref{test2} (upper panel) shows that the smaller the luminosity exponent in the virial calibration, the flatter the dependence on redshift of the BH masses. The results are only slightly affected by the choice of the $L-r_{\rm BLR}$ relation, since the $z$-evolution determined assuming an exponent of 0.5 is consistent within $1\sigma$ with the previous determination, obtained assuming the McLure \& Dunlop (2004) virial calibration. In Table \ref{tabz} we give the best linear fit to the maximum mass as a function of redshift for $b=0.5$, $b=0.4$ and, for comparison, $b=0.62$. On the other hand, as regards the dependence on redshift of the Eddington ratio, the picture is more delicate. This parameter appears to increase significantly with $z$ assuming a flatter $L-r_{\rm BLR}$ relation, while it was found to be constant with redshift within the assumed virial calibration (Eq.~\ref{formulamassa}; see Fig.~\ref{test2}, lower panel). In Table \ref{tabz}, the parameters of the best linear fit to the maximum Eddington ratio as a function of redshift are given for various values of the luminosity exponent ($b=0.5$, $b=0.4$ and, for comparison, $b=0.62$). Note that the assumption of a flatter $L-r_{\rm BLR}$ relation leads to an increase of the residuals between the best fit probability density (Eq.~\ref{Ptot}) and the observed quasar distribution. In Table~\ref{tabz}, the $\chi^2_{\nu}$ values averaged over $z$ of the best fit probability density are given for $b=0.62$, $b=0.5$ and $b=0.4$. It is apparent that the $\chi^2_{\nu}$ is minimum for $b=0.62$, giving circumstantial independent support to the index proposed by McLure \& Dunlop (2004, see Eq.~\ref{formulamassa}) and, hence, to Eqs.~\ref{mmax_z} and \ref{emax_z}. 
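To make the sensitivity to $b$ explicit, a minimal sketch evaluating the virial mass and Eddington ratio of a single hypothetical object under the three values of $b$ (the normalization $a$ is kept fixed here for simplicity; a full recalibration would adjust it as well):
\begin{verbatim}
import numpy as np

def log_mbh(fwhm_kms, log_lum, a=3.2, b=0.62):
    """Virial BH mass; FWHM in km/s,
    log_lum = log(lambda L_lambda / erg s^-1)."""
    return (6 + np.log10(a) + 2.0 * np.log10(fwhm_kms / 1e3)
            + b * (log_lum - 44.0))

def log_edd_ratio(fwhm_kms, log_lum, b=0.62, bc=5.15):
    """log(L_bol/L_Edd) with L_bol = BC_3000 * lambda L_lambda."""
    return (np.log10(bc) + log_lum - np.log10(1.26e38)
            - log_mbh(fwhm_kms, log_lum, b=b))

# Hypothetical object: FWHM = 4000 km/s, lambda L = 10^45.5 erg/s
for b in (0.62, 0.5, 0.4):
    print(b, round(log_mbh(4000.0, 45.5, b=b), 2),
             round(log_edd_ratio(4000.0, 45.5, b=b), 2))
\end{verbatim}
A flatter exponent assigns lower masses (and hence higher Eddington ratios) to the most luminous objects, which is why it flattens $m_{\rm max}(z)$ while steepening the inferred Eddington ratio evolution.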
\subsection{Comparison with previous results}\label{seccomp} We now compare our results with those obtained by McLure \& Dunlop (2004), Fine et al.\ (2006) and Shen et al.\ (2008), focusing just on the slope of the $M_{\rmn{BH}}$\ and Eddington ratio evolution. Fine et al. (2006), in order to reduce the effects of the Malmquist bias, concentrated on a subsample of the 2dF quasars with luminosity around $L^*(z)$ at each redshift. They observe a significant dependence of the quasar BH mass on redshift ($M_{\rmn{BH}}\propto(1+z)^{3.3\pm1.1}$), but conclude that their result cannot directly be interpreted as evidence for anti-hierarchical ``downsizing'' because the $z$-dependence they found is strongly dominated by the dependence on redshift of $L^*$. For comparison, we repeated the entire fit procedure described above imposing that $M_{\rm BH\,(max)}(z)$ varied as proposed by Fine et al.~(2006). Figure \ref{chi2v} shows the corresponding $\chi^2_{\nu}$ values in each redshift bin: the fit appears inadequate if we assume their results. Note however that the error given for the evolution of the average BH mass of QSOs by Fine et al. (2006) is quite large, so that their results are consistent with those given here within $1\sigma$. McLure \& Dunlop (2004) proposed that the observed increase of the quasar BH mass with redshift is entirely accounted for by the effective flux limit of the sample. To further test the possibility that the mean BH mass is independent of redshift, we repeated again the fit procedure described above assuming that $M_{\rm BH\,(max)}$ is constant over all the redshift bins. In Figure \ref{chi2v} we plot the corresponding $\chi^2_{\nu}$, and again the fit is inconsistent with the data, giving further evidence for an evolution of quasar populations with $z$. McLure \& Dunlop (2004) and Shen et al. (2008), studying the SDSS DR1 and DR5 samples, found that there is a clear upper mass limit of $\sim10^{10}M_{\odot}$ for active BHs at $z>2$, decreasing at lower redshift. This trend is in good agreement with our results and can be explained assuming that the quasar number density peaks at a certain $z_{\rm peak}\sim2-2.5$ and then flattens out (see for example Richards et al. 2006). Around $z_{\rm peak}$, both the high and the low mass ends of the quasar BH mass distribution are more populated, so that the observation of very massive objects is likely (while low mass quasars cannot be observed due to the instrumental flux limit). Therefore, the slope of the ``unbiased'' dependence on redshift of the maximum BH mass is raised below $z_{\rm peak}$ and flattened above. This effect translates into evidence for a limiting BH mass for active BHs at $z>2$, decreasing at lower redshift, that is apparent in all large samples of quasars. McLure \& Dunlop (2004) observed a substantial increase of Eddington ratios with redshift and a similar trend is apparent from the sample of 2dF $L^*$ quasars of Fine et al. (2006) after correcting their data for the offset between the Mg\textsc{ii}\ and C\textsc{iv}\ virial mass calibrations (see for example Shen et al.~2008). We suggest that the observed $z$-dependence of Eddington ratios is spurious, and that it is entirely dominated by the dependence on redshift of the average quasar luminosity due to the Malmquist bias. 
\subsection{Discussion of the results}\label{secdisc} Studying a sample of $\sim50000$ SDSS quasars with $0.35<z<2.25$, we found that the maximum mass of the quasar populations increases with $z$, while the maximum Eddington ratio is practically independent of redshift. These results are unaffected by the Malmquist bias and may be interpreted as evidence for evolution of the active BH population with redshift. Quasar samples at lower redshift are increasingly dominated by lower mass BHs, i.e. the most massive BHs start quasar activity before less massive ones. This is indicative of anti-hierarchical ``downsizing'' of active BHs and it is in agreement with recent theoretical predictions by e.g. Merloni, Rudnick \& Di Matteo (2006). Our findings may have implications for the study of the co-evolution of supermassive BHs and their host galaxies, even if they cannot be directly interpreted as evidence for evolution of the $M_{\rm BH}-M_{\rm bulge}$ scaling relation. There is observational evidence that quasar host galaxies are already fully formed massive ellipticals at $z\sim 2.5$ and then passively fade in luminosity to the present epoch (e.g.~Kotilainen et al. 2007, 2009; Falomo et al. 2008). Within this scenario, our results can be interpreted as an evolution with redshift of the parameter $\Gamma\equiv M_{\rm BH}/M_{\rm bulge}$, which would be 4--5 times larger at $z\sim2$ than today. This is in good agreement with the results of Peng et al. (2006), who found that $\Gamma$ is $\sim4$ times larger at $z\sim1.7$ than today in a sample of 11 lensed quasar hosts. Our results are also consistent with Salviander et al. (2007), who examined a sample of SDSS quasars, finding that galaxies of a given dispersion at $z\sim1$ have BH masses that are larger by $\Delta \log M_{\rm BH}\sim0.2$ than at $z\sim 0$ (see Lauer et al. 2007 for a detailed discussion on the selection bias which may affect these results). \section{Summary and conclusions}\label{secsum} Starting from the SDSS DR5 quasar catalogue, we focused on the $\sim50000$ objects for which Mg\textsc{ii}\ line widths and 3000\AA\ monochromatic luminosities were available. This sample ($0.35<z<2.25$) was divided into 8 redshift bins. In each bin, the object distribution in the FWHM--luminosity plane was described in terms of a minimum luminosity limit (due to the instrumental flux limit), a maximum mass and a maximum Eddington ratio. The assumed probability density was compared to the observed distribution of objects in order to determine the free parameters with a best fit procedure in each redshift bin. Errors on the best fit parameters were determined with Monte Carlo simulations. We tested the robustness of the procedure through some simulations, and showed that the maximum mass and the maximum Eddington ratio determined in each redshift bin depend neither on the quasar number density nor on the survey flux limit (which gives rise to a Malmquist-type bias in the observed dependence on redshift of the mean quasar BH masses). We then studied the dependence on redshift of the maximum quasar BH mass and of the maximum Eddington ratio and found clear evidence for evolution of the active BH population with redshift. Over the redshift range studied, we found that the maximum mass of the quasar population depends on redshift as $\log (M_{\rm BH\,(max)}/{\rm M}_{\odot})=0.34z+8.99$, while the maximum Eddington ratio is found to be practically independent of redshift. 
This means that QSO samples at lower redshift are increasingly dominated by lower mass BHs, i.e. the more massive a BH is, the earlier it starts quasar activity. Within a scenario in which quasar host galaxies are already fully formed massive ellipticals at $z\sim 2.5$, our results can also be interpreted as an evolution with redshift of the parameter $\Gamma\equiv M_{\rm BH}/M_{\rm bulge}$, which would be 4--5 times larger at $z\sim2$ than today. \section*{Acknowledgments} We wish to thank Yue Shen for providing SDSS quasar spectral measurements before publication. We are grateful to an anonymous referee for constructive criticism, which led to an improvement of the paper.
\section{Introduction} \paragraph{\bf Hard lattice problems.} Lattices are discrete subgroups of $\mathbb{R}^d$. More concretely, given a basis $B = \{\vc{b}_1, \dots, \vc{b}_d\} \subset \mathbb{R}^d$, the lattice $\mathcal{L} = \mathcal{L}(B)$ generated by $B$ is defined as $\mathcal{L}(B) = \left\{\sum_{i=1}^d \lambda_i \vc{b}_i: \lambda_i \in \mathbb{Z}\right\}$. Given a basis of a lattice $\mathcal{L}$, the Shortest Vector Problem (SVP) asks to find a shortest non-zero vector in $\mathcal{L}$ under the Euclidean norm, i.e., a non-zero lattice vector $\vc{s}$ of norm $\|\vc{s}\| = \lambda_1(\mathcal{L}) := \min_{\vc{v} \in \mathcal{L} \setminus \{\vc{0}\}} \|\vc{v}\|$. Given a basis of a lattice and a target vector $\vc{t} \in \mathbb{R}^d$, the Closest Vector Problem (CVP) asks to find a vector $\vc{s} \in \mathcal{L}$ closest to $\vc{t}$ under the Euclidean distance, i.e.\ such that $\|\vc{s} - \vc{t}\| = \min_{\vc{v} \in \mathcal{L}} \|\vc{v} - \vc{t}\|$. These two hard problems are fundamental in the study of lattice-based cryptography, as the security of these schemes is directly related to the hardness of SVP and CVP in high dimensions. Various other hard lattice problems, such as Learning With Errors (LWE) and the Short Integer Solution (SIS) problem, are closely related to SVP and CVP, and many reductions between these and other hard lattice problems are known; see e.g.\ \cite[Figure 3.1]{laarhoven12kolkata} or \cite{stephens16} for an overview. These reductions show that being able to solve CVP efficiently implies that almost all other lattice problems can also be solved efficiently in the same dimension, which makes the study of the hardness of CVP even more important for choosing parameters in lattice-based cryptography. \paragraph{\bf Algorithms for SVP and CVP.} Although SVP and CVP are both central in the study of lattice-based cryptography, algorithms for SVP have received somewhat more attention, including a benchmarking website to compare different algorithms~\cite{svp}. Various SVP methods have been studied which can solve CVP as well, such as enumeration (see e.g.\ \cite{kannan83, fincke85, gama10, micciancio15}), discrete Gaussian sampling~\cite{aggarwal15, aggarwal15b}, constructing the Voronoi cell of the lattice~\cite{agrell02, micciancio10}, and using a tower of sublattices~\cite{becker14}. On the other hand, for the asymptotically fastest method in high dimensions for SVP\footnote{To obtain provable guarantees, sieving algorithms are commonly modified to facilitate a somewhat artificial proof technique, which drastically increases the time complexity beyond e.g.\ the discrete Gaussian sampler and the Voronoi cell algorithm~\cite{ajtai01, nguyen08, pujol09, micciancio10b}. On the other hand, if some natural heuristic assumptions are made to enable analyzing the algorithm's behavior, then sieving clearly outperforms these methods. We focus on heuristic sieving in this paper.}, lattice sieving, it is not known how to solve CVP with similar costs as SVP. 
After a series of theoretical works on constructing efficient heuristic sieving algorithms~\cite{nguyen08, micciancio10b, wang11, zhang13, laarhoven15crypto, laarhoven15latincrypt, becker15nns, becker16cp, becker16lsf} as well as practical papers studying how to speed up these algorithms even further~\cite{milde11, schneider11, schneider13, bos14, fitzpatrick14, ishiguro14, mariano14, mariano14b, mariano15, mariano16pdp, mariano16}, the best time complexity for solving SVP currently stands at $2^{0.292d + o(d)}$~\cite{becker16lsf, mariano16}. Although for various other methods the complexities for solving SVP and CVP are similar~\cite{gama10, micciancio10, aggarwal15b}, one can only guess whether the same holds for lattice sieving methods. To date, the best heuristic time complexity for solving CVP in high dimensions stands at $2^{0.377d + o(d)}$, due to Becker--Gama--Joux~\cite{becker14}. \subsection{Contributions} In this paper we revisit heuristic lattice sieving algorithms, as well as the recent trend to speed up these algorithms using nearest neighbor searching, and we investigate how these algorithms can be modified to solve CVP and its generalizations. We present two different approaches for solving CVP with sieving, each of which we argue has its own merits. \paragraph{\bf Adaptive sieving.} In \textit{adaptive sieving}, we adapt the entire sieving algorithm to the problem instance, including the target vector. As the resulting algorithm is tailored specifically to the given CVP instance, this leads to the best asymptotic complexity for solving a single CVP instance out of our two proposed methods: $2^{0.292d + o(d)}$ time and space. This method is very similar to solving SVP with lattice sieving, and leads to the same asymptotic space and time complexities as for SVP. The corresponding space-time tradeoff is illustrated in Figure~\ref{fig:1}, and equals that of \cite{becker16lsf} for solving SVP. \paragraph{\bf Non-adaptive sieving.} Our main contribution, \textit{non-adaptive sieving}, takes a different approach, focusing on cases where several CVP instances are to be solved on the same lattice. The goal here is to minimize the costs of computations depending on the target vector, and spend more time on preprocessing the lattice, so that the amortized time complexity per instance is smaller when solving many CVP instances on the same lattice. This is very closely related to the Closest Vector Problem with Preprocessing (CVPP), where the difference is that we allow for exponential-size preprocessed space. Using nearest neighbor techniques with a balanced space-time tradeoff, we show how to solve CVPP with $2^{0.636d + o(d)}$ space and preprocessing, in $2^{0.136d + o(d)}$ time. A continuous tradeoff between the two complexities can be obtained, where in the limit we can solve CVPP with $(1/\varepsilon)^{O(d)}$ space and preprocessing, in $2^{\varepsilon d + o(d)}$ time. This tradeoff is depicted in Figure~\ref{fig:1}. A potential application of non-adaptive sieving is as a subroutine within enumeration methods. As described in e.g.\ \cite{gama10}, at any given level in the enumeration tree, one is attempting to solve a CVP instance in a lower-dimensional sublattice of $\mathcal{L}$, where the target vector is determined by the path chosen from the root to the current node in the tree. 
That means that if we can preprocess this sublattice such that the amortized time complexity of solving CVPP is small, then this could speed up processing the bottom part of the enumeration tree. This in turn might help speed up the lattice basis reduction algorithm BKZ~\cite{schnorr87, schnorr94, chen11}, which commonly uses enumeration as its SVP subroutine, and is key in assessing the security of lattice-based schemes. As the preprocessing needs to be performed once, CVPP algorithms with impractically large preprocessing costs may not be useful, but we show that with sieving the preprocessing costs can be quite small. \begin{figure}[!t] {\center \includegraphics{sievingcvp-figure1}} \caption{Heuristic complexities for solving the Closest Vector Problem (CVP), the Closest Vector Problem with Preprocessing (CVPP), Bounded Distance Decoding with Preprocessing ($\delta$-BDDP), and the Approximate Closest Vector Problem with Preprocessing ($\kappa$-CVPP). The red curve shows CVP complexities of Becker--Gama--Joux~\cite{becker14}. The left blue curve denotes CVP complexities of adaptive sieving. The right blue curve shows exact CVPP complexities using non-adaptive sieving. Purple curves denote relaxations of CVPP corresponding to different parameters $\delta$ (BDD radius) and $\kappa$ (approximation factor). Note that exact CVPP corresponds to $\delta$-BDDP with $\delta = 1$ and to $\kappa$-CVPP with $\kappa = 1$.\label{fig:1}} \end{figure} \paragraph{\bf Outline.} The remainder of the paper is organized as follows. In Section~\ref{sec:pre} we describe some preliminaries, such as sieving algorithms and a useful result on nearest neighbor searching. Section~\ref{sec:ad} describes adaptive sieving and its analysis for solving CVP without preprocessing. Section~\ref{sec:non} describes the preprocessing approach to solving CVP, with complexity analyses for exact CVP and some of its relaxations. \section{Preliminaries} \label{sec:pre} \subsection{Lattice sieving for solving SVP} Heuristic lattice sieving algorithms for solving the shortest vector problem all use the following basic property of lattices: if $\vc{v}, \vc{w} \in \mathcal{L}$, then their sum/difference $\vc{v} \pm \vc{w} \in \mathcal{L}$ is a lattice vector as well. Therefore, if we have a long list $L$ of lattice vectors stored in memory, we can consider combinations of these vectors to obtain new, shorter lattice vectors. To make sure the algorithm makes progress in finding shorter lattice vectors, $L$ needs to contain a lot of lattice vectors; for vectors $\vc{v}, \vc{w} \in \mathcal{L}$ of similar norm, the vector $\vc{v} - \vc{w}$ is shorter than $\vc{v}, \vc{w}$ iff the angle between $\vc{v}, \vc{w}$ is smaller than $\pi/3$, which for random vectors $\vc{v}, \vc{w}$ occurs with probability $(3/4)^{d/2 + o(d)}$. The expected space complexity of heuristic sieving algorithms follows directly from this observation: if we draw $(4/3)^{d/2 + o(d)}$ random vectors from the unit sphere, we expect a large number of pairs of vectors to have angle less than $\pi/3$, leading to many short difference vectors. This is exactly the heuristic assumption used in analyzing these sieving algorithms: when normalized, vectors in $L$ follow the same distribution as vectors sampled uniformly at random from the unit sphere. \begin{heuristic} \label{heur:1} When normalized, the list vectors $\vc{w} \in L$ behave as i.i.d.\ uniformly distributed random vectors from the unit sphere $\mathcal{S}^{d-1} := \{\vc{x} \in \mathbb{R}^d: \|\vc{x}\| = 1\}$. 
\end{heuristic} Therefore, if we start by sampling a list $L$ of $(4/3)^{d/2 + o(d)}$ long lattice vectors, and iteratively consider combinations of vectors in $L$ to find shorter vectors, we expect to keep making progress. Note that naively, combining pairs of vectors in a list of size $(4/3)^{d/2 + o(d)} \approx 2^{0.208d + o(d)}$ takes time $(4/3)^{d + o(d)} \approx 2^{0.415d + o(d)}$. \paragraph{\bf The Nguyen-Vidick sieve.} The heuristic sieve algorithm of Nguyen and Vidick~\cite{nguyen08} starts by sampling a list $L$ of $(4/3)^{d/2 + o(d)}$ long lattice vectors, and uses a \textit{sieve} to map $L$, with maximum norm $R := \max_{\vc{v} \in L} \|\vc{v}\|$, to a new list $L'$, with maximum norm at most $\gamma R$ for $\gamma < 1$ close to $1$. By repeatedly applying this sieve, after $\operatorname{poly}(d)$ iterations we expect to find a long list of lattice vectors of norm at most $\gamma^{\operatorname{poly}(d)} R = O(\lambda_1(\mathcal{L}))$. The final list is then expected to contain a shortest vector of the lattice. Algorithm~\ref{alg:nv} in Appendix~\ref{app:alg} describes a sieve equivalent to Nguyen-Vidick's original sieve, to map $L$ to $L'$ in $|L|^2$ time. \paragraph{\bf Micciancio and Voulgaris' GaussSieve.} Micciancio and Voulgaris used a slightly different approach in the GaussSieve~\cite{micciancio10b}. This algorithm reduces the memory usage by immediately \textit{reducing} all pairs of lattice vectors that are sampled. The algorithm uses a single list $L$, which is always kept in a state where for all $\vc{w}_1, \vc{w}_2 \in L$, $\|\vc{w}_1 \pm \vc{w}_2\| \geq \|\vc{w}_1\|, \|\vc{w}_2\|$, and each time a new vector $\vc{v} \in \mathcal{L}$ is sampled, its norm is reduced with vectors in $L$. After the norm can no longer be reduced, the vectors in $L$ are reduced with $\vc{v}$. Modified list vectors are added to a stack to be processed later (to maintain the pairwise reduction-property of $L$), and new vectors which are pairwise reduced with $L$ are added to $L$. Immediately reducing all pairs of vectors means that the algorithm uses less time and memory in practice, but at the same time Nguyen and Vidick's heuristic proof technique does not apply here. However, it is commonly believed that the same bounds $(4/3)^{d/2 + o(d)}$ and $(4/3)^{d + o(d)}$ on the space and time complexities hold for the GaussSieve. Pseudocode of the GaussSieve is given in Algorithm~\ref{alg:gauss} in Appendix~\ref{app:alg}. \subsection{Nearest neighbor searching} Given a data set $L \subset \mathbb{R}^d$, the nearest neighbor problem asks to preprocess $L$ such that, when given a query $\vc{t} \in \mathbb{R}^d$, one can quickly return a nearest neighbor $\vc{s} \in L$ with distance $\|\vc{s} - \vc{t}\| = \min_{\vc{w} \in L} \|\vc{w} - \vc{t}\|$. This problem is essentially identical to CVP, except that $L$ is a finite set of unstructured points, rather than the infinite set of all points in a lattice $\mathcal{L}$. \paragraph{\bf Locality-Sensitive Hashing/Filtering (LSH/LSF).} A celebrated technique for finding nearest neighbors in high dimensions is Locality-Sensitive Hashing (LSH)~\cite{indyk98, wang14}, where the idea is to construct many random partitions of the space, and store the list $L$ in hash tables with buckets corresponding to regions. Preprocessing then consists of constructing these hash tables, while a query $\vc{t}$ is answered by doing a lookup in each of the hash tables, and searching for a nearest neighbor in these buckets. 
More details on LSH in combination with sieving can be found in e.g.\ \cite{laarhoven15crypto, laarhoven15latincrypt, becker15nns, becker16cp}. Similar to LSH, Locality-Sensitive Filtering (LSF)~\cite{becker16lsf, laarhoven15nns} divides the space into regions, with the added relaxation that these regions do not have to form a partition; regions may overlap, and part of the space may not be covered by any region. This leads to improved results compared to LSH when $L$ has size exponential in $d$~\cite{becker16lsf, laarhoven15nns}. Below we restate one of the main results of~\cite{laarhoven15nns} for our applications. The specific problem considered here is: given a data set $L \subset \mathcal{S}^{d-1}$ sampled uniformly at random, and a random query $\vc{t} \in \mathcal{S}^{d-1}$, return a vector $\vc{w} \in L$ such that the angle between $\vc{w}$ and $\vc{t}$ is at most $\theta$. The following result further assumes that the list $L$ contains $n = (1 / \sin \theta)^{d + o(d)}$ vectors. \begin{lemma} \label{lem:nns} \cite[Corollary 1]{laarhoven15nns} Let $\theta \in (0, \frac{1}{2} \pi)$, and let $u \in [\cos \theta, 1/\cos \theta]$. Let $L \subset \mathcal{S}^{d-1}$ be a list of $n = (1 / \sin \theta)^{d + o(d)}$ vectors sampled uniformly at random from $\mathcal{S}^{d-1}$. Then, using spherical LSF with parameters $\alpha_{\mathrm{q}} = u \cos \theta$ and $\alpha_{\mathrm{u}} = \cos \theta$, one can preprocess $L$ in time $n^{1 + \rho_{\mathrm{u}} + o(1)}$, using $n^{1 + \rho_{\mathrm{u}} + o(1)}$ space, and with high probability answer a random query $\vc{t} \in \mathcal{S}^{d-1}$ correctly in time $n^{\rho_{\mathrm{q}} + o(1)}$, where: \begin{align} n^{\rho_{\mathrm{q}}} &= \left(\frac{\sin^2 \theta \, (u \cos \theta + 1)}{u \cos \theta - \cos 2 \theta}\right)^{d/2}, \quad n^{\rho_{\mathrm{u}}} = \left(\frac{\sin^2 \theta}{1 - \cot^2 \theta\left(u^2 - 2 u \cos \theta + 1\right)}\right)^{d/2}. \label{eq:main3} \end{align} \end{lemma} Applying this result to sieving for solving SVP, where $n = \sin(\frac{\pi}{3})^{-d + o(d)}$ and we are looking for pairs of vectors at angle at most $\frac{\pi}{3}$ to perform reductions, this leads to a space and preprocessing complexity of $2^{0.292 d + o(d)}$, and a query complexity of $2^{0.084d + o(d)}$. As the preprocessing in sieving is only performed once, and queries are performed $n \approx 2^{0.208d + o(d)}$ times, this leads to a reduction of the complexities of sieving (for SVP) from $2^{0.208d + o(d)}$ space and $2^{0.415d + o(d)}$ time, to $2^{0.292d + o(d)}$ space and time~\cite{becker16lsf}. \section{Adaptive sieving for CVP} \label{sec:ad} We present two methods for solving CVP using sieving, the first of which we call \textit{adaptive sieving} -- we adapt the entire sieving algorithm to the particular CVP instance, to obtain the best overall time complexity for solving one instance. When solving several CVP instances, the costs roughly scale linearly with the number of instances. \subsubsection{Using one list.} The main idea behind this method is to translate the SVP algorithm by the target vector $\vc{t}$; instead of generating a long list of lattice vectors reasonably close to $\vc{0}$, we generate a list of lattice vectors close to $\vc{t}$, and combine lattice vectors to find lattice vectors even closer to $\vc{t}$. The final list then hopefully contains a closest vector to $\vc{t}$. 
One quickly sees that this does not work, as the fundamental property of lattices does not hold for the lattice coset $\vc{t} + \mathcal{L}$: if $\vc{w}_1, \vc{w}_2 \in \vc{t} + \mathcal{L}$, then $\vc{w}_1 \pm \vc{w}_2 \notin \vc{t} + \mathcal{L}$. In other words, two lattice vectors close to $\vc{t}$ can only be combined to form lattice vectors close to $\vc{0}$ or $2 \vc{t}$. So if we start with a list of vectors close to $\vc{t}$, and combine vectors in this list as in the Nguyen-Vidick sieve, then after one iteration we will end up with a list $L'$ of lattice vectors close to $\vc{0}$. \subsubsection{Using two lists.} To make the idea of translating the whole problem by $\vc{t}$ work for the Nguyen-Vidick sieve, we make the following modification: we keep track of two lists $L = L_{\vc{0}}$ and $L_{\vc{t}}$ of lattice vectors close to $\vc{0}$ and $\vc{t}$, and construct a sieve which maps two input lists $L_{\vc{0}}, L_{\vc{t}}$ to two output lists $L_{\vc{0}}', L_{\vc{t}}'$ of lattice vectors slightly closer to $\vc{0}$ and $\vc{t}$. Similar to the original Nguyen-Vidick sieve, we then apply this sieve several times to two initial lists $(L_{\vc{0}}, L_{\vc{t}})$ with a large radius $R$, to end up with two lists $L_{\vc{0}}$ and $L_{\vc{t}}$ of lattice vectors at distance at most approximately $\sqrt{4/3} \cdot \lambda_1(\mathcal{L})$ from $\vc{0}$ and $\vc{t}$\footnote{Observe that by the Gaussian heuristic, there are $(4/3)^{d/2 + o(d)}$ vectors in $\mathcal{L}$ within any ball of radius $\sqrt{4/3} \cdot \lambda_1(\mathcal{L})$. So the list size of the NV-sieve will surely decrease below $(4/3)^{d/2}$ when $R < \sqrt{4/3} \cdot \lambda_1(\mathcal{L})$.}. The argumentation that this algorithm works is almost identical to that for solving SVP, where we now make the following slightly different heuristic assumption. \begin{heuristic} \label{heur:2} When normalized, the list vectors $L_{\vc{0}}$ and $L_{\vc{t}}$ in the modified Nguyen-Vidick sieve both behave as i.i.d.\ uniformly distributed random vectors from the unit sphere. \end{heuristic} The resulting algorithm, based on the Nguyen-Vidick sieve, is presented in Algorithm~\ref{alg:nv-adaptive}. \begin{algorithm}[!t] \caption{The adaptive Nguyen-Vidick sieve for finding closest vectors} \label{alg:nv-adaptive} \begin{algorithmic}[1] \Require Lists $L_{\vc{0}}, L_{\vc{t}} \subset \mathcal{L}$ containing $(4/3)^{d/2 + o(d)}$ vectors at distance $\leq R$ from $\vc{0}, \vc{t}$ \Ensure Lists $L_{\vc{0}}', L_{\vc{t}}' \subset \mathcal{L}$ contain $(4/3)^{d/2 + o(d)}$ vectors at distance $\leq \gamma R$ from $\vc{0}, \vc{t}$ \State Initialize empty lists $L_{\vc{0}}', L_{\vc{t}}'$ \For{\textbf{each} $(\vc{w}_1, \vc{w}_2) \in L_{\vc{0}} \times L_{\vc{0}}$} \If{$\|\vc{w}_1 - \vc{w}_2\| \leq \gamma R$} \State Add $\vc{w}_1 - \vc{w}_2$ to the list $L_{\vc{0}}'$ \EndIf \EndFor \For{\textbf{each} $(\vc{w}_1, \vc{w}_2) \in L_{\vc{t}} \times L_{\vc{0}}$} \If{$\|(\vc{w}_1 - \vc{w}_2) - \vc{t}\| \leq \gamma R$} \State Add $\vc{w}_1 - \vc{w}_2$ to the list $L_{\vc{t}}'$ \EndIf \EndFor \State \Return $(L_{\vc{0}}', L_{\vc{t}}')$ \end{algorithmic} \end{algorithm} \subsubsection{Main result.} As the (heuristic) correctness of this algorithm follows directly from the correctness of the original NV-sieve, and nearest neighbor techniques can be applied to this algorithm in similar fashion as well, we immediately obtain the following result. 
Note that space-time tradeoffs for SVP, such as the one illustrated in \cite[Figure 1]{becker16lsf}, similarly carry over to solving CVP, and the best tradeoff for SVP (and therefore CVP) is depicted in Figure~\ref{fig:1}. \begin{theorem} Assuming Heuristic~\ref{heur:2} holds, the adaptive Nguyen-Vidick sieve with spherical LSF solves CVP in time $\mathrm{T}$ and space $\mathrm{S}$, with \begin{align} \mathrm{S} = (4/3)^{d/2 + o(d)} \approx 2^{0.208 d + o(d)}, \quad \mathrm{T} = (3/2)^{d/2 + o(d)} \approx 2^{0.292 d + o(d)}. \end{align} \end{theorem} An important open question is whether these techniques can also be applied to the faster GaussSieve algorithm to solve CVP. The GaussSieve seems to make even more use of the property that the sum/difference of two lattice vectors is also in the lattice, and operations in the GaussSieve in $\mathcal{L}$ cannot as easily be \textit{mimicked} for the coset $\vc{t} + \mathcal{L}$. Solving CVP with the GaussSieve with similar complexities is left as an open problem. \section{Non-adaptive sieving for CVPP} \label{sec:non} Our second method for finding closest vectors with heuristic lattice sieving follows a slightly different approach. Instead of focusing only on the total time complexity for one problem instance, we split the algorithm into two phases: \begin{itemize} \item Phase 1: Preprocess the lattice $\mathcal{L}$, without knowledge of the target $\vc{t}$; \item Phase 2: Process the query $\vc{t}$ and output a closest lattice vector $\vc{s} \in \mathcal{L}$ to $\vc{t}$. \end{itemize} Intuitively it may be more important to keep the costs of Phase 2 small, as the preprocessed data can potentially be reused later for other instances on the same lattice. This approach is essentially equivalent to the Closest Vector Problem with Preprocessing (CVPP): preprocess $\mathcal{L}$ such that when given a target vector $\vc{t}$ later, one can quickly return a closest vector $\vc{s} \in \mathcal{L}$ to $\vc{t}$. For CVPP however the preprocessed space is usually restricted to be of polynomial size, and the time used for preprocessing the lattice is often not taken into account. Here we will keep track of the preprocessing costs as well, and we do not restrict the output from the preprocessing phase to be of size $\operatorname{poly}(d)$. \subsubsection{Algorithm description.} To minimize the costs of answering a query, and to do the preprocessing independently of the target vector, we first run a standard SVP sieve, resulting in a large list $L$ of almost all short lattice vectors. Then, after we are given the target vector $\vc{t}$, we use $L$ to reduce the target. Finally, once the resulting vector $\vc{t}' \in \vc{t} + \mathcal{L}$ can no longer be reduced with our list, we hope that this reduced vector $\vc{t}'$ is the shortest vector in the coset $\vc{t} + \mathcal{L}$, so that $\vc{0}$ is the closest lattice vector to $\vc{t}'$ and $\vc{s} = \vc{t} - \vc{t}'$ is the closest lattice vector to $\vc{t}$. The first phase of this algorithm consists in running a sieve and storing the resulting list in memory (potentially in a nearest neighbor data structure for faster lookups). For this phase either the Nguyen-Vidick sieve or the GaussSieve can be used. The second phase is the same for either method, and is described in Algorithm~\ref{alg:nonadaptive} for the general case of an input list essentially consisting of the $\alpha^{d + o(d)}$ shortest vectors in the lattice. 
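For concreteness, a minimal Python sketch of this second phase, performing the same reduce-and-restart loop as Algorithm~\ref{alg:nonadaptive} (a strict inequality is used here so that the loop is guaranteed to terminate):
\begin{verbatim}
import numpy as np

def solve_cvpp(L, t):
    """Phase 2: reduce t against the preprocessed list L (an (n, d)
    array of short lattice vectors) and return the recovered vector."""
    t_prime = np.asarray(t, dtype=float).copy()
    reduced = True
    while reduced:                 # restart the scan after a reduction
        reduced = False
        for w in L:
            if np.linalg.norm(t_prime - w) < np.linalg.norm(t_prime):
                t_prime -= w
                reduced = True
                break
    return t - t_prime             # closest lattice vector (w.h.p.)
\end{verbatim}
In practice the linear scan over $L$ would be replaced by a nearest neighbor query on a preprocessed data structure, which is where the space-time tradeoffs analyzed below come from.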
Note that a standard SVP sieve would produce a list of size $(4/3)^{d/2 + o(d)}$ corresponding to $\alpha = \sqrt{4/3}$. \begin{algorithm}[!t] \caption{Non-adaptive sieving (Phase 2) for finding closest vectors} \label{alg:nonadaptive} \begin{algorithmic}[1] \Require A list $L \subset \mathcal{L}$ of $\alpha^{d + o(d)}$ vectors of norm at most $\alpha \cdot \lambda_1(\mathcal{L})$, and $\vc{t} \in \mathbb{R}^d$ \Ensure The output vector $\vc{s}$ is the closest lattice vector to $\vc{t}$ (w.h.p.) \State Initialize $\vc{t}' \leftarrow \vc{t}$ \For{\textbf{each} $\vc{w} \in L$} \If{$\|\vc{t}' - \vc{w}\| \leq \|\vc{t}'\|$} \State Replace $\vc{t}' \leftarrow \vc{t}' - \vc{w}$ and restart the \textbf{for}-loop \EndIf \EndFor \State \Return $\vc{s} = \vc{t} - \vc{t}'$ \end{algorithmic} \end{algorithm} \subsubsection{List size.} We first study how large $L$ must be to guarantee that the algorithm succeeds. One might wonder why we do not fix $\alpha = \sqrt{4/3}$ immediately in Algorithm~\ref{alg:nonadaptive}. To see why this choice of $\alpha$ does not suffice, suppose we have a vector $\vc{t}' \in \vc{t} + \mathcal{L}$ which is no longer reducible with $L$. This implies that $\vc{t}'$ has norm approximately $\sqrt{4/3} \cdot \lambda_1(\mathcal{L})$, similar to what happens in the GaussSieve. Now, unfortunately the fact that $\vc{t}'$ cannot be reduced with $L$ anymore, does \textit{not} imply that the closest lattice point to $\vc{t}'$ is $\vc{0}$. In fact, it is more likely that there exists an $\vc{s} \in \vc{t} + \mathcal{L}$ of norm slightly more than $\sqrt{4/3} \cdot \lambda_1(\mathcal{L})$ which is closer to $\vc{t}'$, but which is not used for reductions. By the Gaussian heuristic, we expect the distance from $\vc{t}$ and $\vc{t}'$ to the lattice to be $\lambda_1(\mathcal{L})$. So to guarantee that $\vc{0}$ is the closest lattice vector to the reduced vector $\vc{t}'$, we need $\vc{t}'$ to have norm at most $\lambda_1(\mathcal{L})$. To analyze and prove correctness of this algorithm, we will therefore prove that, under the assumption that the input is a list of the $\alpha^{d + o(d)}$ shortest lattice vectors of norm at most $\alpha \cdot \lambda_1(\mathcal{L})$ for a particular choice of $\alpha$, w.h.p.\ the algorithm reduces $\vc{t}$ to a vector $\vc{t}' \in \vc{t} + \mathcal{L}$ of norm at most $\lambda_1(\mathcal{L})$. To study how to set $\alpha$, we start with the following elementary lemma regarding the probability of reduction between two uniformly random vectors with given norms. \begin{lemma} \label{lem:1} Let $v, w > 0$ and let $\vc{v} = v \cdot \vc{e}_v$ and $\vc{w} = w \cdot \vc{e}_w$. Then: \begin{align} \mathbb{P}_{\vc{e}_v, \vc{e}_w \sim \mathcal{S}^{d-1}}\Big(\|\vc{v} - \vc{w}\|^2 < \|\vc{v}\|^2\Big) \sim \left[1 - \left(\tfrac{w}{2v}\right)^2\right]^{d/2 + o(d)}. \end{align} \end{lemma} \begin{proof} Expanding $\|\vc{v} - \vc{w}\|^2 = v^2 + w^2 - 2 v w \ip{\vc{e}_v}{\vc{e}_w}$ and $\|\vc{v}\|^2 = v^2$, the condition $\|\vc{v} - \vc{w}\|^2 < \|\vc{v}\|^2$ is equivalent to $\frac{w}{2v} < \ip{\vc{e}_v}{\vc{e}_w}$. The result follows from \cite[Lemma 2.1]{becker16lsf}. \end{proof} Under Heuristic~\ref{heur:1}, we then obtain a relation between the choice of $\alpha$ for the input list and the expected norm of the reduced vector $\vc{t}'$ as follows.
\begin{lemma} \label{lem:2} Let $L \subset \alpha \cdot \mathcal{S}^{d-1}$ be a list of $\alpha^{d + o(d)}$ uniformly random vectors of norm $\alpha > 1$, and let $\vc{v} \in \beta \cdot \mathcal{S}^{d-1}$ be sampled uniformly at random. Then, for high dimensions $d$, there exists a $\vc{w} \in L$ such that $\|\vc{v} - \vc{w}\| < \|\vc{v}\|$ if and only if \begin{align} \alpha^4 - 4 \beta^2 \alpha^2 + 4\beta^2 < 0. \label{eq:a} \end{align} \end{lemma} \begin{proof} By Lemma~\ref{lem:1} we can reduce $\vc{v}$ with $\vc{w} \in L$ with probability similar to $p = [1 - \frac{\alpha^2}{4\beta^2}]^{d/2 + o(d)}$. Since we have $n = \alpha^{d + o(d)}$ such vectors $\vc{w}$, the probability that none of them can reduce $\vc{v}$ is $(1 - p)^n$, which is $o(1)$ if $n \gg 1/p$ and $1 - o(1)$ if $n \ll 1/p$. Expanding $n \cdot p$, we obtain the given equation~\eqref{eq:a}, where $\alpha^4 - 4 \beta^2 \alpha^2 + 4 \beta^2 < 0$ implies $n \gg 1/p$. \end{proof} Note that in our applications, we do not just have a list of $\alpha^{d + o(d)}$ lattice vectors of norm $\alpha \cdot \lambda_1(\mathcal{L})$; for any $\alpha_0 \in [1, \alpha]$ we expect $L$ to contain $\alpha_0^{d + o(d)}$ lattice vectors of norm at most $\alpha_0 \cdot \lambda_1(\mathcal{L})$. To obtain a reduced vector $\vc{t}'$ of norm $\beta \cdot \lambda_1(\mathcal{L})$, we therefore obtain the condition that for \textit{some} value $\alpha_0 \in [1, \alpha]$, it must hold that $\alpha_0^4 - 4 \beta^2 \alpha_0^2 + 4\beta^2 < 0$. From~\eqref{eq:a} it follows that $p(\alpha^2) = \alpha^4 - 4 \beta^2 \alpha^2 + 4\beta^2$ has two roots $r_1 < 2 < r_2$ for $\alpha^2$, which lie close to $2$ for $\beta \approx 1$. The condition that $p(\alpha_0^2) < 0$ for some $\alpha_0 \leq \alpha$ is equivalent to $\alpha^2 > r_1$, which for $\beta = 1 + o(1)$ implies that $\alpha^2 \geq 2 + o(1)$. This means that asymptotically we must set $\alpha = \sqrt{2}$, and use $n = 2^{d/2 + o(d)}$ input vectors, to guarantee that w.h.p.\ the algorithm succeeds. A sketch of the situation is also given in Figure~\ref{fig:2a}. \begin{figure}[!t] \subfloat[For solving \textbf{exact CVP}, we must reduce the vector $\vc{t}$ to a vector $\vc{t}' \in \vc{t} + \mathcal{L}$ of norm at most $\lambda_1(\mathcal{L})$. The nearest lattice point to $\vc{t}'$ lies in a ball of radius approximately $\lambda_1(\mathcal{L})$ around $\vc{t}'$ (blue), and almost all the mass of this ball is contained in the (black) ball around $\vc{0}$ of radius $\sqrt{2} \cdot \lambda_1(\mathcal{L})$. So if $\vc{s} \in \mathcal{L} \setminus \{\vc{0}\}$ had lain closer to $\vc{t}'$ than $\vc{0}$, we would have reduced $\vc{t}'$ with $\vc{s}$, since $\vc{s} \in L$.\label{fig:2a}]{% \includegraphics{sievingcvp-figure2a}}% \hfill \subfloat[For \textbf{variants of CVP}, a choice $\alpha$ for the list size implies a norm $\beta \cdot \lambda_1(\mathcal{L})$ of $\vc{t}'$. The nearest lattice vector $\vc{s}$ to $\vc{t}'$ lies within $\delta \cdot \lambda_1(\mathcal{L})$ of $\vc{t}'$ ($\delta = 1$ for approx-CVP), so with high probability $\vc{s}$ has norm approximately $(\sqrt{\beta^2 + \delta^2}) \cdot \lambda_1(\mathcal{L})$. For $\delta$-BDD, if $\sqrt{\beta^2 + \delta^2} \leq \alpha$ then we expect the nearest point $\vc{s}$ to be in the list $L$.
For $\kappa$-CVP, if $\beta \leq \kappa$, then the lattice vector $\vc{t} - \vc{t}'$ has norm at most $\kappa \cdot \lambda_1(\mathcal{L})$.\label{fig:2b}]{ \includegraphics{sievingcvp-figure2b}}% \caption{Comparison of the list size complexity analysis for CVP (left) and BDD/approximate CVP (right). The point $\vc{t}$ represents the target vector, and after a series of reductions with Algorithm~\ref{alg:nonadaptive}, we obtain $\vc{t}' \in \vc{t} + \mathcal{L}$. Blue balls around $\vc{t}'$ depict regions in which we expect the closest lattice point to $\vc{t}'$ to lie, where the blue shaded area indicates a negligible fraction of this ball~\cite[Lemma 2]{becker16lsf}.\label{fig:2}} \end{figure} \subsubsection{Modifying the first phase.} As we will need a larger list of size $2^{d/2 + o(d)}$ to make sure we can solve CVP exactly, we need to adjust Phase 1 of the algorithm as well. Recall that with standard sieving, we reduce vectors iff their angle is at most $\theta = \frac{\pi}{3}$, resulting in a list of size $(\sin \theta)^{-d + o(d)}$. As we now need the output list of the first phase to consist of $2^{d/2 + o(d)} = (\sin \theta')^{-d + o(d)}$ vectors for $\theta' = \frac{\pi}{4}$, we make the following adjustment: only reduce $\vc{v}$ and $\vc{w}$ if their common angle is less than $\frac{\pi}{4}$. For unit length vectors, this condition is equivalent to reducing $\vc{v}$ with $\vc{w}$ iff $\|\vc{v} - \vc{w}\|^2 \leq (2 - \sqrt{2}) \cdot \|\vc{v}\|^2$. This further accelerates nearest neighbor techniques due to the smaller reduction angle $\theta' = \frac{\pi}{4}$. Pseudocode for the modified first phase is given in Appendix~\ref{app:alg2}. \subsubsection{Main result.} With the algorithm in place, let us now analyze its complexity for solving CVP. The first phase of the algorithm generates a list of size $2^{d/2 + o(d)}$ by combining pairs of vectors, and naively this can be done in time $\mathrm{T}_1 = 2^{d + o(d)}$ and space $\mathrm{S} = 2^{d/2 + o(d)}$, with query complexity $\mathrm{T}_2 = 2^{d/2 + o(d)}$. Using nearest neighbor searching (Lemma~\ref{lem:nns}), the query and preprocessing complexities can be further reduced, leading to the following result. \begin{theorem} \label{thm:2} Let $u \in (\frac{1}{2} \sqrt{2}, \sqrt{2})$. Using non-adaptive sieving, we can solve CVP with preprocessing time $\mathrm{T}_1$, space complexity $\mathrm{S}$, and query time complexity $\mathrm{T}_2$ as follows: \begin{align} \mathrm{S} = \mathrm{T}_1 &= \left(\frac{1}{u (\sqrt{2} - u)}\right)^{d/2 + o(d)}, \qquad \mathrm{T}_2 = \left(\frac{\sqrt{2} + u}{2 u}\right)^{d/2 + o(d)}. \end{align} \end{theorem} \begin{proof} These complexities follow from Lemma~\ref{lem:nns} with $\theta = \frac{\pi}{4}$, noting that the first phase can be performed in time and space $\mathrm{T}_1 = \mathrm{S} = n^{1 + \rho_{\mathrm{u}}}$, and the second phase in time $\mathrm{T}_2 = n^{\rho_{\mathrm{q}}}$. \end{proof} To illustrate the time and space complexities of Theorem~\ref{thm:2}, we highlight three special cases of $u$ as follows. The full tradeoff curve for $u \in (\frac{1}{2} \sqrt{2}, \sqrt{2})$ is depicted in Figure~\ref{fig:1}. \begin{itemize} \item Setting $u = \frac{1}{2} \sqrt{2}$, we obtain $\mathrm{S} = \mathrm{T}_1 = 2^{d/2 + o(d)}$ and $\mathrm{T}_2 \approx 2^{0.2925d + o(d)}$. \item Setting $u = 1$, we obtain $\mathrm{S} = \mathrm{T}_1 \approx 2^{0.6358 d + o(d)}$ and $\mathrm{T}_2 \approx 2^{0.1358 d + o(d)}$.
\item Setting $u = \frac{1}{2}(\sqrt{2} + 1)$, we get $\mathrm{S} = \mathrm{T}_1 = 2^{d + o(d)}$ and $\mathrm{T}_2 \approx 2^{0.0594 d + o(d)}$. \end{itemize} The first result shows that the query complexity of non-adaptive sieving is never worse than for adaptive sieving; only the space and preprocessing complexities are worse. The second and third results show that CVP can be solved in significantly less time, even with preprocessing and space complexities bounded by $2^{d + o(d)}$. \paragraph{\bf Minimizing the query complexity.} As $u \to \sqrt{2}$, the query complexity keeps decreasing while the memory and preprocessing costs increase. For arbitrary $\varepsilon > 0$, we can set $u = u_\varepsilon \approx \sqrt{2}$ as a function of $\varepsilon$, resulting in asymptotic complexities $\mathrm{S} = \mathrm{T}_1 = (1/\varepsilon)^{O(d)}$ and $\mathrm{T}_2 = 2^{\varepsilon d + o(d)}$. This shows that it is possible to obtain a slightly subexponential query complexity, at the cost of superexponential space, by taking $\varepsilon = o(1)$ as a function of $d$. \begin{corollary} \label{thm:3} For arbitrary $\varepsilon > 0$, using non-adaptive sieving we can solve CVPP with preprocessing time and space complexities $(1/\varepsilon)^{O(d)}$, in time $2^{\varepsilon d + o(d)}$. In particular, we can solve CVPP in $2^{o(d)}$ time, using $2^{\omega(d)}$ space and preprocessing. \end{corollary} Being able to solve CVPP in subexponential time with superexponential preprocessing and memory is neither trivial nor quite surprising. A naive approach to the problem, with this much memory, could for instance be to index the entire fundamental domain of $\mathcal{L}$ in a hash table. One could partition this domain into small regions, solve CVP for the center of each of these regions, and store all the solutions in memory. Then, given a query, one looks up which region $\vc{t}$ is in, and returns the answer corresponding to that region. With a sufficiently fine-grained partitioning of the fundamental domain, the answers given by the look-ups are accurate, and this algorithm probably also runs in subexponential time. Although it may not be surprising that it is possible to solve CVPP in subexponential time with (super)exponential space, it is not clear what the complexities of other methods would be. Our method presents a clear tradeoff between the complexities, where the constants in the preprocessing exponent are quite small; for instance, we can solve CVPP in time $2^{0.06d + o(d)}$ with less than $2^{d + o(d)}$ memory, which is the same amount of memory/preprocessing as for the best provable SVP and CVP algorithms~\cite{aggarwal15, aggarwal15b}. Indexing the fundamental domain may well require much more memory than this. \subsection{Bounded Distance Decoding with Preprocessing} We finally take a look at specific instances of CVP which are easier than the general problem, such as when the target $\vc{t}$ lies unusually close to the lattice. This problem naturally appears in practice, when a private key consists of a \textit{good basis} of a lattice with short basis vectors, and the public key is a \textit{bad basis} of the same lattice. An encryption of a message could then consist of the message being mapped to a lattice point $\vc{v} \in \mathcal{L}$, and a small error vector $\vc{e}$ being added to $\vc{v}$ ($\vc{t} = \vc{v} + \vc{e}$) to hide $\vc{v}$.
If the noise $\vc{e}$ is small enough, then with a good basis one can decode $\vc{t}$ to the closest lattice vector $\vc{v}$, while someone with the bad basis cannot decode correctly. As decoding for arbitrary $\vc{t}$ (solving CVP) is known to be hard even with knowledge of a good basis~\cite{micciancio01e, feige02, regev04d, alekhnovich05}, $\vc{e}$ needs to be very short, and $\vc{t}$ must lie unusually close to the lattice. So instead of assuming target vectors $\vc{t} \in \mathbb{R}^d$ are sampled at random, suppose that $\vc{t}$ lies at distance at most $\delta \cdot \lambda_1(\mathcal{L})$ from $\mathcal{L}$, for $\delta \in (0,1)$. For adaptive sieving, recall that the list size $(4/3)^{d/2 + o(d)}$ is the minimum initial list size one can hope to use to obtain a list of short lattice vectors; with fewer vectors, one would not be able to solve SVP.\footnote{The recent paper \cite{bai16} discusses how to use less memory in sieving, by using triple- or tuple-wise reductions, instead of the standard pairwise reductions. These techniques may also be applied to adaptive sieving to solve CVP with less memory, at the cost of an increase in the time complexity.} For non-adaptive sieving however, it may be possible to reduce the list size below $2^{d/2 + o(d)}$. \subsubsection{List size.} Let us again assume that the preprocessed list $L$ contains almost all $\alpha^{d + o(d)}$ lattice vectors of norm at most $\alpha \cdot \lambda_1(\mathcal{L})$. The choice of $\alpha$ implies a maximum norm $\beta_{\alpha} \cdot \lambda_1(\mathcal{L})$ of the reduced vector $\vc{t}'$, as described in Lemma~\ref{lem:2}. The nearest lattice vector $\vc{s} \in \mathcal{L}$ to $\vc{t}'$ lies within radius $\delta \cdot \lambda_1(\mathcal{L})$ of $\vc{t}'$, and w.h.p.\ $\vc{s} - \vc{t}'$ is approximately orthogonal to $\vc{t}'$; see Figure~\ref{fig:2b}, where the shaded area is asymptotically negligible. Therefore w.h.p.\ $\vc{s}$ has norm at most $(\sqrt{\beta_{\alpha}^2 + \delta^2}) \cdot \lambda_1(\mathcal{L})$. Now if $\sqrt{\beta_{\alpha}^2 + \delta^2} \leq \alpha$, then we expect the nearest vector to be contained in $L$, so that ultimately $\vc{0}$ is nearest to $\vc{t}'$. Substituting $\alpha^4 - 4 \beta^2 \alpha^2 + 4 \beta^2 = 0$ and $\beta^2 + \delta^2 \leq \alpha^2$, and solving for $\alpha$, this leads to the following condition on $\alpha$. \begin{align} \alpha^2 \geq \tfrac{2}{3} (1 + \delta^2) + \tfrac{2}{3} \sqrt{(1 + \delta^2)^2 - 3 \delta^2} \, . \label{eq:a2} \end{align} Taking $\delta = 1$, corresponding to exact CVP, leads to the condition $\alpha \geq \sqrt{2}$ as expected, while in the limiting case of $\delta \to 0$ we obtain the condition $\alpha \geq \sqrt{4/3}$. This matches experimental observations using the GaussSieve, where after finding the shortest vector, newly sampled vectors often cause \textit{collisions} (i.e.\ being reduced to the $\vc{0}$-vector). In other words, Algorithm~\ref{alg:nonadaptive} often reduces target vectors $\vc{t}$ which essentially lie \textit{on} the lattice ($\delta \to 0$) to the $\vc{0}$-vector when the list has size $(4/3)^{d/2 + o(d)}$. This explains why collisions in the GaussSieve are common when the list size grows to size $(4/3)^{d/2 + o(d)}$. 
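Condition~\eqref{eq:a2} follows by eliminating $\beta$: the equality $\alpha^4 - 4\beta^2\alpha^2 + 4\beta^2 = 0$ gives $\beta^2 = \alpha^4 / (4(\alpha^2 - 1))$, and substituting this into $\beta^2 + \delta^2 \leq \alpha^2$ yields $3\alpha^4 - 4(1+\delta^2)\alpha^2 + 4\delta^2 \geq 0$. The short Python check below (ours, purely illustrative) evaluates the resulting bound and reproduces the limiting cases just mentioned.
\begin{verbatim}
import math

def alpha_bdd(delta):
    """Smallest alpha satisfying the BDD list-size condition above:
    the list must hold the alpha^{d+o(d)} shortest lattice vectors."""
    s = 1.0 + delta ** 2
    a2 = (2.0 / 3.0) * (s + math.sqrt(s * s - 3.0 * delta ** 2))
    return math.sqrt(a2)

print(alpha_bdd(1.0))   # 1.4142... = sqrt(2):   exact CVP
print(alpha_bdd(0.0))   # 1.1547... = sqrt(4/3): the limit delta -> 0
print(alpha_bdd(0.5))   # 1.1976...: the case delta = 1/2 used below
\end{verbatim}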
\subsubsection{Main result.} To solve BDD with a target $\vc{t}$ at distance $\delta \cdot \lambda_1(\mathcal{L})$ from the lattice, we need the preprocessing to produce a list of almost all $\alpha^{d + o(d)}$ vectors of norm at most $\alpha \cdot \lambda_1(\mathcal{L})$, with $\alpha$ satisfying~\eqref{eq:a2}. Similar to the analysis for CVP, we can produce such a list by only doing reductions between two vectors if their angle is less than $\theta$, where now $\theta = \arcsin(1 / \alpha)$. Combining this with Lemma~\ref{lem:1}, we obtain the following result. \begin{theorem} \label{thm:BDD} Let $\alpha$ satisfy \eqref{eq:a2} and let $u \in (\sqrt{\frac{\alpha^2 - 1}{\alpha^2}}, \sqrt{\frac{\alpha^2}{\alpha^2 - 1}})$. Using non-adaptive sieving, we can heuristically solve BDD for targets $\vc{t}$ at distance $\delta \cdot \lambda_1(\mathcal{L})$ from the lattice, with preprocessing time $\mathrm{T}_1$, space complexity $\mathrm{S}$, and query time complexity $\mathrm{T}_2$ as follows: \begin{align} & \qquad \mathrm{S} = \left(\frac{1}{1 - (\alpha^2 - 1) (u^2 - \frac{2 u}{\alpha} \sqrt{\alpha^2 - 1} + 1)}\right)^{d/2 + o(d)}, \\ \mathrm{T}_1 &= \max\left\{\mathrm{S}, \ (3/2)^{d/2 + o(d)}\right\}, \qquad \mathrm{T}_2 = \left(\frac{\alpha + u \sqrt{\alpha^2 - 1}}{2 \alpha - \alpha^3 + \alpha^2 u \sqrt{\alpha^2 - 1}}\right)^{d/2 + o(d)}. \end{align} \end{theorem} \begin{proof} These complexities directly follow from applying Lemma~\ref{lem:nns} with $\theta = \arcsin(1/\alpha)$, and again observing that Phase 1 can be performed in time $\mathrm{T}_1 = n^{1 + \rho_{\mathrm{u}}}$ and space $\mathrm{S} = n^{1 + \rho_{\mathrm{u}}}$, while Phase 2 takes time $\mathrm{T}_2 = n^{\rho_{\mathrm{q}}}$. Note that we cannot combine vectors whose angles are larger than $\frac{\pi}{3}$ in Phase 1, which leads to a lower bound on the preprocessing time complexity $\mathrm{T}_1$ based on the costs of solving SVP. \end{proof} Theorem~\ref{thm:BDD} is a generalization of Theorem~\ref{thm:2}, as the latter can be derived from the former by substituting $\delta = 1$ above. To illustrate the results, Figure~\ref{fig:1} considers two special cases: \begin{itemize} \item For $\delta = \frac{1}{2}$, we find $\alpha \approx 1.1976$, leading to $\mathrm{S} \approx 2^{0.2602d + o(d)}$ and $\mathrm{T}_2 = 2^{0.1908d + o(d)}$ when minimizing the space complexity. \item For $\delta \to 0$, we have $\alpha \to \sqrt{4/3} \approx 1.1547$. The minimum space complexity is therefore $\mathrm{S} = (4/3)^{d/2 + o(d)}$, with query complexity $\mathrm{T}_2 = 2^{0.1610d + o(d)}$. \end{itemize} In the limit of $u \to \sqrt{\frac{\alpha^2}{\alpha^2 - 1}}$ we obtain a subexponential query time $\mathrm{T}_2 \to 2^{o(d)}$ at the cost of superexponential space/preprocessing $\mathrm{S}, \mathrm{T}_1 \to 2^{\omega(d)}$, for all $\delta > 0$. \subsection{Approximate Closest Vector Problem with Preprocessing} Given a lattice $\mathcal{L}$ and a target vector $\vc{t} \in \mathbb{R}^d$, approximate CVP with approximation factor $\kappa$ asks to find a vector $\vc{s} \in \mathcal{L}$ such that $\|\vc{s} - \vc{t}\|$ is at most a factor $\kappa$ larger than the real distance from $\vc{t}$ to $\mathcal{L}$. For random instances $\vc{t}$, by the Gaussian heuristic this means that a lattice vector counts as a solution iff it lies at distance at most $\kappa \cdot \lambda_1(\mathcal{L})$ from $\vc{t}$.
\subsubsection{List size.} Instead of reducing $\vc{t}$ to a vector $\vc{t}'$ of norm at most $\lambda_1(\mathcal{L})$ as is needed for solving exact CVP, we now want to make sure that the reduced vector $\vc{t}'$ has norm at most $\kappa \cdot \lambda_1(\mathcal{L})$. If this is the case, then the vector $\vc{t} - \vc{t}'$ is a lattice vector lying at distance at most $\kappa \cdot \lambda_1(\mathcal{L})$ from $\vc{t}$, which w.h.p.\ qualifies as a solution. This means that instead of substituting $\beta = 1$ in Lemma~\ref{lem:2}, we now substitute $\beta = \kappa$. This leads to the condition that $\alpha_0^4 - 4\kappa^2 \alpha_0^2 + 4 \kappa^2 < 0$ for some $\alpha_0 \leq \alpha$. By a similar analysis $\alpha^2$ must therefore be larger than the smallest root $r_1 = 2\kappa (\kappa - \sqrt{\kappa^2 - 1})$ of this quadratic polynomial in $\alpha^2$. This immediately leads to the following condition on $\alpha$: \begin{align} \alpha^2 \geq 2 \kappa \left(\kappa - \sqrt{\kappa^2 - 1}\right). \label{eq:a3} \end{align} A sanity check shows that $\kappa = 1$, corresponding to exact CVP, indeed results in $\alpha \geq \sqrt{2}$, while in the limit of $\kappa \to \infty$ a value $\alpha \approx 1$ suffices to obtain a vector $\vc{t}'$ of norm at most $\kappa \cdot \lambda_1(\mathcal{L})$. In other words, to solve approximate CVP with very large (constant) approximation factors, a preprocessed list of size $(1 + \varepsilon)^{d + o(d)}$ suffices. \subsubsection{Main result.} Similar to the analysis of CVPP, we now take $\theta = \arcsin(1/\alpha)$ as the angle with which to reduce vectors in Phase 1, so that the output of Phase 1 is a list of almost all $\alpha^{d + o(d)}$ shortest lattice vectors of norm at most $\alpha \cdot \lambda_1(\mathcal{L})$. Using a smaller angle $\theta$ for reductions again means that nearest neighbor searching can speed up the reductions in both Phase 1 and Phase 2 even further. The exact complexities follow from Lemma~\ref{lem:nns}. \begin{theorem} \label{thm:aCVP} Using non-adaptive sieving with spherical LSF, we can heuristically solve $\kappa$-CVP with similar complexities as in Theorem~\ref{thm:BDD}, where now $\alpha$ must satisfy \eqref{eq:a3}. \end{theorem} Note that only the dependence of $\alpha$ on $\kappa$ is different, compared to the dependence of $\alpha$ on $\delta$ for bounded distance decoding. The complexities for $\kappa$-CVP arguably decrease \textit{faster} than for $\delta$-BDD: for instance, for $\kappa \approx 1.0882$ we obtain the same complexities as for BDD with $\delta = \frac{1}{2}$, while $\kappa = \sqrt{4/3} \approx 1.1547$ leads to the same complexities as for BDD with $\delta \to 0$. Two further examples are illustrated in Figure~\ref{fig:1}: \begin{itemize} \item For $\kappa = 2$, we have $\alpha \approx 1.0353$, which for $u \approx 0.2588$ leads to $\mathrm{S} \approx 2^{0.0500 d + o(d)}$ and $\mathrm{T}_2 \approx 2^{0.0468 d + o(d)}$, and for $u = 1$ leads to $\mathrm{S} \approx 2^{0.0812 d + o(d)}$ and $\mathrm{T}_2 \approx 2^{0.0312 d + o(d)}$; in both cases the preprocessing time remains $\mathrm{T}_1 = 2^{0.2925 d + o(d)}$, the cost of generating the list. \item For $\kappa \to \infty$, we have $\alpha \to 1$, i.e.\ the required preprocessed list size approaches $2^{o(d)}$ as $\kappa$ grows. For sufficiently large $\kappa$, we can solve $\kappa$-CVP with a preprocessed list of size $2^{\varepsilon d + o(d)}$ in at most $2^{\varepsilon d + o(d)}$ time. The preprocessing time is given by $2^{0.2925 d + o(d)}$.
\end{itemize} The latter result shows that for any superconstant approximation factor $\kappa = \omega(1)$, we can solve the corresponding approximate closest vector problem with preprocessing in subexponential time, with an exponential preprocessing time complexity $2^{0.292d + o(d)}$ for solving SVP and generating a list of short lattice vectors, and a subexponential space complexity required for Phase 2. In other words, even without superexponential preprocessing/memory we can solve CVPP with large approximation factors in subexponential time. To compare this result with previous work, note that the lower bound on $\alpha$ from \eqref{eq:a3} tends to $1 + 1/(8 \kappa^2) + O(\kappa^{-4})$ as $\kappa$ grows. The query space and time complexities are further both proportional to $\alpha^{\Theta(d)}$. To obtain a polynomial query complexity and polynomial storage after the preprocessing phase, we can solve for $\kappa$, leading to the following result. \begin{corollary} \label{cor:acvpp-poly} With non-adaptive sieving we can heuristically solve approximate CVPP with approximation factor $\kappa$ in polynomial time with polynomial-sized advice iff $\kappa = \Omega(\sqrt{d / \log d})$. \end{corollary} \begin{proof} The query time and space complexities are given by $\alpha^{\Theta(d)}$, where $\alpha = 1 + \Theta(1 / \kappa^2)$. To obtain polynomial complexities in $d$, we must have $\alpha^{\Theta(d)} = d^{O(1)}$, or equivalently: \begin{align} 1 + \Theta\left(\frac{1}{\kappa^2}\right) = \alpha = d^{O(1/d)} = \exp \, O\left(\frac{\log d}{d}\right) = 1 + O\left(\frac{\log d}{d}\right). \end{align} Solving for $\kappa$ leads to the given relation between $\kappa$ and $d$. \end{proof} Apart from the heuristic assumptions, this approximation factor matches that of Aharonov and Regev~\cite{aharonov04}, who showed that the decision version of CVPP with approximation factor $\kappa = \Omega(\sqrt{d / \log d})$ can provably be solved in polynomial time. This further (heuristically) improves upon results of~\cite{lagarias90b, dadush14}, who are able to solve search-CVPP with polynomial time and space complexities for $\kappa = O(d^{3/2})$ and $\kappa = \Omega(d / \sqrt{\log d})$ respectively. Assuming the heuristic assumptions are valid, Corollary~\ref{cor:acvpp-poly} closes the gap between these previous results for decision-CVPP and search-CVPP with a rather simple algorithm: (1) preprocess the lattice by storing the $d^{O(1)}$ shortest vectors of the lattice in a list; and (2) apply Algorithm~\ref{alg:nonadaptive} to this list and the target vector to find an approximate closest vector. Note that nearest neighbor techniques only affect leading constants; even without nearest neighbor searching this would heuristically result in a polynomial time and space algorithm for $\kappa$-CVPP with $\kappa = \Omega(\sqrt{d / \log d})$. An interesting open problem would be to see if this result can be made provable for arbitrary lattices, without any heuristic assumptions. \section*{Acknowledgments} The author is indebted to L\'{e}o Ducas, whose initial ideas and suggestions on this topic motivated work on this paper. The author further thanks Vadim Lyubashevsky and Oded Regev for their comments on the relevance of a subexponential time CVPP algorithm requiring (super)exponential space. The author is supported by the SNSF ERC Transfer Grant CRETP2-166734 FELICITY. \bibliographystyle{alpha}
\section{Introduction} The dynamics of many physical systems is described using quantum time-dependent harmonic oscillators \cite{Dod75,Pri83,Cum86,Cum88,Pro91,Ghe92,Dod95,Dod05,Maj05,Mih09,Cor11,Der13,Gue15,Leo16,Zha16,Zel17a,Con17,HCr18,Con19}, where the construction of minimum wave packets is relevant \cite{Har82,Com12,Cas13,Sch13,Cru15,Cru16,Afs16,Mih18,Mih19,Una18,Zel19} (see also the recent reviews \cite{Dod18,Ros19}). Such a diversity of applications is due to the quadratic profile of the oscillator \cite{Dod95,Man96,Dod00a,Dod00b,Cor10,Nag19,Ram18,Wol81,Dod89}, which is also useful in the trapping of quantum particles with electromagnetic fields \cite{Pri83,Cum86,Ghe92,Maj05,Mih09,Mih18,Mih19,Pau90,Gla92,Bar96,Dod96,Dod98,Cas98,Gen11,Cas12}. In most of the cases reported in the literature the oscillator has a frequency of oscillation that depends on time. Usually, it is also acted on by a time-dependent driving force. Thereby, the oscillator is subjected to external forces that either take energy from it or supply energy to it. Such a nonconservative system has no set of solutions that remain orthogonal when evaluated at different times. Nevertheless, diverse techniques have been developed to find solutions with physical meaning \cite{Wol81,Dod89,Dod95,Gla92,Dod00a,Dod00b,Cor10,Cas13,Sch13,Cru15,Cru16,Ram18,Nag19}. The progenitor of most of the solvable models reported in the literature is the approach of Lewis and Riesenfeld \cite{Lew68,Lew69}, where an invariant operator is introduced, as an ansatz, to get a basis of eigenvectors that serve to construct the physical solutions. Important results on the matter were obtained by Dodonov and Man'ko \cite{Dod89}, and by Glauber \cite{Gla92}. Further developments have been reported in, e.g. \cite{Dod95,Zha16,Cru15,Cru16,Dod00a,Dod00b,Cor10,Nag19}. In the present work we develop an approach to study nonstationary oscillators by means of the so-called point transformations \cite{Dew52,Ste93}. These have been used in the classical context to deform the trajectories of a given linear second order differential equation into trajectories of the free particle \cite{Arn83}, although the latter procedure is commonly called {\em Arnold transformation}. An extension to quantum systems was introduced in \cite{Ald11} which, in turn, has been used to study the Caldirola-Kanai oscillator \cite{Gue12,Gue13} (see also the book \cite{Sch18}). The point transformations are also useful to interrelate the harmonic oscillator with a series of oscillator-like systems for which the mass is a function of the position \cite{Cru09,Cru13}, as well as to study the ordering ambiguity of the momentum operator for position-dependent mass systems in the quantum case \cite{Mus19}. The major advantage of the point transformation method is that conserved quantities (first integrals) as well as the structure of the inner product are preserved \cite{Ste93}. Another property of these transformations is that they can be constructed to be invertible. Then, one may depart from a system, for which the dynamical law of motion is already solved, to arrive at a new exactly solvable dynamical law that can be tailored on demand to describe the behavior of another system, and vice versa.
In the present case we are interested in solving the Schr\"odinger equation associated to the Hamiltonian \begin{equation} \hat{H} (t)=\frac{\hat{p}^{2}}{2m}+\frac{m}{2}\Omega^{2}(t)\hat{x}^{2}+F(t)\hat{x}+V_{0}(t) \mathbb{I}, \label{eq:PMO1} \end{equation} where $\hat x$ and $\hat p$ are the canonical operators of position and momentum, $[\hat{x},\hat{p}]=i\hbar \mathbb{I}$, $F(t)$ stands for a time-dependent driving force, $V_{0}(t)$ is the time-dependent zero point energy, and $\mathbb I$ is the identity operator. The function $\Omega(t)$ is real-valued and positive. That is, the Hamiltonian (\ref{eq:PMO1}) describes a nonstationary oscillator, whose frequency $\Omega(t)$ depends on time. In general, the system under interest is nonconservative, so the orthogonality of the related solutions is not guaranteed a priori. As $\hat H$ is not an integral of motion, an additional problem is to determine the invariants (first integrals) that may serve as observables to define uniquely the system. The main result reported in this work is to show that properly chosen point transformations solve the above problems by bypassing the difficulties that arise in the conventional approaches. In particular, we show that the integrals of motion are automatically obtained as a consequence of the transformation, without the need for any ansatz. Another interesting result is that the point transformations make it possible to verify the orthogonality of the basis states, so that arbitrary linear superpositions are easily constructed. The latter lays the groundwork to construct the corresponding coherent states since the dynamical algebras are also immediately obtained as a deformation of the well-known boson algebra. The paper is organized as follows. In Section~\ref{oscilador} we pose the problem to solve by providing the explicit forms of the Schr\"odinger equation for the stationary oscillator and the nonstationary one. In Section~\ref{point} we solve the differential equation of the parametric oscillator by point transforming the differential equation of the stationary one. In Section~\ref{ortogonal} we verify that the orthogonality of the initial solutions as well as the matrix representation of observables is inherited by the new system through the point transformations. The determination of the invariants (quantum integrals of motion) for the new system is discussed in Section~\ref{integrals}, and the derivation of the related dynamical algebras is developed in Section~\ref{dynamical}. We discuss the superposition of the solutions of the nonstationary oscillators in Section~\ref{Seclin}. The construction of the coherent states of the parametric oscillator is developed in Section~\ref{Seccs}, where we show that these states share almost all the properties of the Glauber states \cite{Gla07}, except that they minimize the Schr\"odinger-Robertson inequality rather than the Heisenberg uncertainty relation. Section~\ref{examples} provides some particular cases as concrete examples of the applicability of our approach. Some results already reported by other authors are recovered along the way. Final concluding remarks are given in Section~\ref{conclu}. Detailed information about the point transformations we use throughout the manuscript is provided in Appendix~\ref{ApA}. A discussion about the possibility of making the zero point energy $V_0(t)$ equal to zero without losing generality is delivered in Appendix~\ref{ApB}.
Finally, relevant information about the Ermakov equation, which is a keystone in our approach, can be found in Appendix~\ref{ApC}. \section{One-dimensional parametric oscillator} \label{oscilador} The one-dimensional stationary quantum oscillator with mass $m$ and constant frequency of oscillation $w$ is described by the Hermitian Hamiltonian \begin{equation} \hat{H}_{osc}=\frac{\hat{P}^{2}}{2m}+\frac{m}{2}w^{2}\hat{X}^{2}, \quad w>0, \label{eq:INT0} \end{equation} where $\hat{X}$ and $\hat{P}$ stand for the canonical position and momentum operators, $[\hat{X},\hat{P}] = i \hbar$. The Schr\"odinger equation for the oscillator wave function $\Psi(X,\tau)=\langle X \vert \Psi(\tau)\rangle$ in the position representation is well known \begin{equation} i\hbar\frac{\partial\Psi}{\partial \tau} = - \frac{\hbar^2}{2m}\frac{\partial^{2}\Psi}{\partial X^2} + \frac{1}{2}m w^2 X^2\Psi, \label{eq:INT1} \end{equation} with $\tau$ the time-parameter. The solutions are easily achievable by separation of variables $\Psi(X,\tau)=e^{-i E \tau/\hbar}\Phi(X)$, where $\Phi(X)= \langle X\vert\Phi \rangle$ fulfills the eigenvalue equation \begin{equation} -\frac{\hbar^{2}}{2m}\frac{d^{2}\Phi}{dX^2} + \frac{1}{2}mw^{2}X^2 \Phi = E \Phi. \label{eq:INT2-2} \end{equation} The fundamental set of normalized solutions is therefore \begin{equation} \Phi_{n}(X)=\sqrt{\frac{1}{2^{n}n!}\sqrt{\frac{mw}{\pi\hbar}}} \, e^{-\frac{mw}{2\hbar}X^{2}}H_{n}\left(\sqrt{\frac{mw}{\hbar}}X \right), \quad E_{n}=\hbar w(n+1/2), \label{eq:INT3} \end{equation} where $H_n(z)$ are the Hermite Polynomials \cite{Olv10}. In the space ${\cal H} = \mbox{span} \{\vert\Phi_{n}\rangle\}_{n=0}^{\infty}$, a vector $\vert \Phi \rangle$ is regular if it satisfies the normalization condition $\vert\vert \vert\Phi\rangle\vert\vert^{2}=\langle \Phi\vert\Phi\rangle<\infty$, with the inner product defined as follows \begin{equation} \langle \Phi_{(2)}\vert\Phi_{(1)}\rangle=\int_{-\infty}^{\infty}dX\,\Phi_{(2)}^{*}(X)\Phi_{(1)}(X) \, . \label{eq:INT2-3} \end{equation} Clearly, the basis set is orthonormal $\langle\Phi_n \vert\Phi_m \rangle=\delta_{n,m}$. On the other hand, the wave functions $\psi(x,t)=\langle x \vert \psi(t)\rangle$ of the one-dimensional nonstationary quantum oscillator described by the Hamiltonian \eqref{eq:PMO1} satisfy the Schr\"odinger equation \begin{equation} i\hbar\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\frac{\partial^2\psi}{\partial x^2} + \frac{1}{2}m\Omega^{2}(t)x^2 \psi +F(t)x \psi +V_{0}(t) \psi . \label{eq:INT4} \end{equation} In this case the oscillator has a frequency of oscillation $\Omega$ that depends on time. The driving force $F$ and zero point energy $V_0$ also depend on time. That is, the oscillator under study is subjected to external forces that either take energy from it or supply energy to it. This system is nonconservative, with no orthogonal basis of solutions $\psi_n(x,t)$ at arbitrary times $t$ and $t'$, $\langle \psi_n(t) \vert \psi_m (t') \rangle \neq \delta_{n,m}$ for $t\neq t'$. Nevertheless, as indicated in the introduction, diverse techniques have been developed to find solutions with physical meaning \cite{Wol81,Dod89,Dod95,Lew68,Lew69,Gla92,Dod00a,Dod00b,Cor10,Cas13,Sch13,Cru15,Cru16,Ram18,Nag19}. In the sequel we show that the Schr\"odinger equations (\ref{eq:INT1}) and (\ref{eq:INT4}) are interrelated in such a form that the solutions of the stationary problem (\ref{eq:INT1}) can be used to get the solutions of the nonstationary one (\ref{eq:INT4}), and vice versa.
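Before constructing such an interrelation, we note in passing that the orthonormality of the basis (\ref{eq:INT3}) under the inner product (\ref{eq:INT2-3}) is easy to verify numerically. The following Python sketch (ours, in illustrative units $\hbar=m=w=1$) checks it by quadrature.
\begin{verbatim}
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

hbar = m = w = 1.0                      # illustrative units (our choice)

def Phi(n, X):
    """Normalized eigenfunctions of eq. (INT3); H_n is evaluated
    through numpy's physicists' Hermite series."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                     # select H_n
    norm = sqrt(1.0 / (2 ** n * factorial(n)) * sqrt(m * w / (pi * hbar)))
    return norm * np.exp(-m * w * X ** 2 / (2 * hbar)) * \
        hermval(sqrt(m * w / hbar) * X, coeffs)

X = np.linspace(-10.0, 10.0, 4001)
for n in range(4):
    for k in range(4):
        overlap = np.trapz(Phi(n, X) * Phi(k, X), X)
        print(n, k, round(float(overlap), 6))   # ~ delta_{n,k}
\end{verbatim}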
The key is provided by a deformation of the coordinate variable, the time parameter, and the wave functions of the `initial' system, which gives rise to the corresponding variables and parameters of the `new' (or `deformed') system. Such a deformation is properly defined by point transformations \cite{Ste93}. We shall consider the stationary oscillator as the initial system, so the parametric oscillator can be interpreted as a deformation of the stationary one. \subsection{Point transformations} \label{point} We look for relationships between the elements of the set $\{ X, \tau, \Psi \}$ and those of the set $\{ x, t, \psi \}$. Formally, \begin{equation} X=X(x,t), \quad \tau = \tau (x,t), \quad \Psi = \Psi (X(x,t), \tau (x,t)). \label{eq:INT5} \end{equation} Notice that the dependence of $\Psi$ on $x$ and $t$ is implicit, so it is convenient to rewrite it as an explicit function of the elements in $\{ x, t, \psi \}$. We may write \begin{equation} \Psi = G(x,t;\psi(x,t)). \label{eq:INT5-1} \end{equation} The explicit dependence of $G$ on $\psi$ is essential, since it provides a mechanism to map any solution of~\eqref{eq:INT1} into the set of solutions of~\eqref{eq:INT4}, and vice versa. To be precise, the latter equations are respectively of the form \begin{equation} S_{in} \left(X, \tau; \Psi, \Psi_{\tau}, \Psi_{X,X} \right) =0, \quad S_{def} \left( x,t; \psi, \psi_t, \psi_{x,x} \right)=0, \label{eq:INT6-1} \end{equation} where neither $S_{in}$ nor $S_{def}$ contains nonlinear terms. Hereafter, for simplicity, non-numerical subscripts denote partial derivatives, $f_u = \frac{\partial f}{\partial u}$. Departing from $S_{in}$, the proper point transformation (see Appendix~\ref{ApA} for details) produces \begin{equation} i\hbar \psi_t+\frac{\hbar^{2}}{2m}\frac{\tau_{t}}{X_{x}^{2}} \psi_{x,x} + B(x,t) \psi_x - V(x,t)\psi = 0, \label{eq:INT11} \end{equation} where \begin{equation} \begin{aligned} & B(x,t)=-i\hbar\frac{X_{t}}{X_{x}}+\frac{\hbar^{2}}{2m}\frac{\tau_{t}}{X_{x}^{2}}\left( 2\frac{A_{x}}{A}-\frac{X_{xx}}{X_{x}} \right) ,\\[1ex] & V(x,t)=-i\hbar\left(\frac{A_{t}}{A}-\frac {X_{t}}{X_{x}}\frac{A_{x}}{A} \right)-\frac{\hbar^{2}}{2m}\frac{\tau_{t}}{X_{x}^{2}}\left( \frac{A_{xx}}{A}-\frac{X_{xx}}{X_{x}}\frac{A_{x}}{A} \right)+\frac{\tau_{t}}{2}m w^{2}X^{2}(x,t). \end{aligned} \label{eq:INT12} \end{equation} As Eq.~(\ref{eq:INT11}) must be of the form $S_{def}$ indicated in (\ref{eq:INT6-1}), we impose the conditions \begin{equation} \frac{\tau_{t}}{X_{x}^{2}}=1, \quad B(x,t)=0. \label{eq:INT13} \end{equation} To satisfy the first condition, let us introduce a real-valued function $\sigma(t) >0$ such that $\tau_{t}=\sigma^{-2}(t)$. Then, by simple integration (and some rearrangements), one gets \begin{equation} \tau (t)=\int^{t}\frac{dt'}{\sigma^{2}(t')}, \quad X(x,t)=\frac{x+\gamma(t)}{\sigma(t)}, \label{eq:INT14} \end{equation} where the real-valued function $\gamma(t)$ stems from the integration with respect to $x$. Clearly $X_{xx}=0$ for any functions $\sigma >0$ and $\gamma$. Then, the condition $B(x,t)=0$ leads to \begin{equation} A(x,t)=\exp\left[ i\frac{m}{\hbar}\left(-\frac{\dot{\sigma}}{2\sigma}x^{2}+\frac{W}{\sigma}x+\eta\right)\right], \quad W(t)=\sigma\dot{\gamma}-\dot{\sigma}\gamma, \label{eq:INT15} \end{equation} with $\dot f = \frac{df}{dt}$, and $\eta=\eta(t)$ a complex-valued function that arises by integration.
The introduction of \eqref{eq:INT15} into \eqref{eq:INT12} gives the energy potential \begin{eqnarray} V(x,t) = \frac{m}{2}\left(-\frac{\ddot{\sigma}}{\sigma}+\frac{w^{2}}{\sigma^{4}}\right)x^{2}+m\left(\frac{\dot{W}}{\sigma}+w^{2}\frac{\gamma}{\sigma^{4}}\right)x +\frac{m}{2}\left( i\frac{\hbar}{m}\frac{\dot{\sigma}}{\sigma}+2\dot{\eta}-\frac{W^{2}}{\sigma^{2}}+w^{2}\frac{\gamma^{2}}{\sigma^{4}}\right). \label{eq:INT16} \end{eqnarray} Comparing this result with Eq.~(\ref{eq:INT4}) we obtain a system of three equations for $\sigma$, $\gamma$, and $\eta$. Without loss of generality we may take $V_0(t)=0$ (see Appendix~\ref{ApB}) to get \begin{equation} \ddot{\sigma}+\Omega^{2}(t)\sigma=\frac{w^2}{\sigma^{3}}, \quad \ddot{\gamma}+\Omega^{2}(t)\gamma=\frac{F(t)}{m}, \quad \eta(t)=\xi(t)-i\frac{\hbar}{2m}\ln\sigma(t), \label{eq:INT17} \end{equation} where the real-valued function $\xi(t)$ is given by \begin{equation} \xi(t)=\frac{\gamma W}{2\sigma}-\frac{1}{2m}\int^{t}dt'F(t')\gamma(t'). \end{equation} Note that $\xi$ is just a displaced version of $\eta$ in the complex plane, which allows us to rewrite the function $A(x,t)$ in (\ref{eq:INT15}) as follows \begin{equation} A(x,t)=\sqrt{\sigma}\exp\left[ i\frac{m}{\hbar}\left(-\frac{\dot{\sigma}}{2\sigma}x^{2}+\frac{W}{\sigma}x+\xi\right)\right]. \label{eq:INT18} \end{equation} In turn, the time-dependent function $\sigma$ satisfies the Ermakov equation~\cite{Erm08}, which is a quite natural result in the studies of the parametric oscillator \cite{Cas13,Sch13,Cru15,Cru16}. Therefore, for a set of nonnegative parameters $\{a,b,c\}$, we have \begin{equation} \sigma(t)= \left[ a q_1^2(t)+ b q_1(t)q_2(t)+c q_2^2(t) \right]^{1/2}, \label{eq:OSC7} \end{equation} where $q_{1}$ and $q_{2}$ are two linearly independent real solutions of the linear homogeneous equation obtained from (\ref{eq:INT17}) by making $w=0$; see Appendix~\ref{ApC} for details. In particular, the Wronskian $W(q_1,q_2) =W_0$ is a constant. The condition $b^2-4ac=- 4\tfrac{w^2}{W_{0}^2}$ ensures $\sigma >0$ at any time~\cite{Ros15,Bla18}. Notice that $w\rightarrow 0$ produces $b=2 \sqrt{ac}$, so that $\sigma_{free} = \sqrt{a} q_1 + \sqrt c q_2$. That is, our method applies even if the initial Hamiltonian $\hat H_{osc}$ in (\ref{eq:INT0}) is reduced to the purely kinematic Hamiltonian of the free particle. The deformation of the system is thus provided by the point transformation ruled by the function $\sigma_{free}$, although the latter is not necessarily connected with the parametric oscillator. In the present work we omit the analysis of such a case; results on the matter will be reported elsewhere. On the other hand, $\gamma(t)$ describes a classical oscillator of frequency $\Omega(t)$ that is subjected to the driving force $F(t)$, see e.g. \cite{Ros08}. This function can be expressed as the sum of the homogeneous solution $\gamma_h = \gamma_{1} q_{1}(t) + \gamma_{2} q_{2}(t)$, and an arbitrary particular solution $\gamma_{p}(t)$. The real constants $\gamma_{1,2}$ as well as the function $\gamma_{p}(t)$ are determined once the driving force $F(t)$ has been specified. Therefore, the function $\tau$ introduced in (\ref{eq:INT14}) can be rewritten in terms of $q_1$ and $q_2$: \begin{equation} \tau (t)=\int^{t}\frac{dt'}{\sigma^{2}(t')}=\frac{1}{w}\arctan\left[ \frac{W_0}{2w}\left( b+2c\frac{q_2}{q_1} \right) \right].
\label{eq:OSC8} \end{equation} To conclude this section we emphasize that, as a result of the point transformation, the function (\ref{eq:INT5-1}) acquires the factorized form $\Psi = G(x,t; \psi (x,t))=A(x,t) \psi(x,t)$, see Appendix~\ref{ApA}. Therefore, we can write the solutions $\psi(x,t)$ of the parametric oscillator in terms of the solutions $\Psi(X,\tau)$ of the stationary one, and vice versa. As we have already solved the stationary case, it is easy to get the solutions we are looking for \begin{equation} \psi(x,t)=\exp\left[ i\frac{m}{\hbar}\left(\frac{\dot{\sigma}}{2\sigma}x^{2}-\frac{W}{\sigma}x-\xi\right)\right]\frac{\Psi(X(x,t), \tau(t))}{\sqrt{\sigma}} \, . \label{eq:INN1} \end{equation} \subsection{Orthogonality and basic solutions} \label{ortogonal} As indicated above, the explicit form of the solutions $\psi_n(x,t)$ is easily achieved from (\ref{eq:INN1}) by using $\Psi_n (X,\tau)=e^{-i E_n \tau/\hbar} \Phi_n (X)$ and the functions $\Phi_n(X)$ defined in (\ref{eq:INT3}). However, the orthogonality of the new set $\psi_n(x,t)$ is not evident. We are interested in the orthogonality of these functions since, although it is not a necessary condition to get physically admissible solutions, it makes the construction of superpositions of states straightforward. To elucidate such a property let us consider a pair of arbitrary solutions of the stationary oscillator, $\Psi_{(1)}(X, \tau)$ and $\Psi_{(2)}(X, \tau)$. Using (\ref{eq:INN1}), the straightforward calculation gives \begin{equation} \int^{\infty}_{-\infty}dX \, \Psi_{(2)}^{*}(X, \tau)\Psi_{(1)}(X, \tau) = \int^{\infty}_{-\infty}dx \, \psi_{(2)}^{*}(x,t)\psi_{(1)}(x,t). \label{eq:INN2} \end{equation} That is, the point transformation preserves the structure of the inner product. Hence, the orthogonal set of solutions $\{ \vert\Psi_{n} (\tau) \rangle \}_{n=0}^{\infty}$ is mapped to an orthogonal set $\{ \vert\psi_{n}(t)\rangle \}_{n=0}^{\infty}$. In position representation one has \begin{equation} \psi_{n}(x,t)=e^{-i w(n+1/2) \tau(t)} \varphi_{n}(x,t) , \label{eq:INN2-1} \end{equation} with \begin{equation} \begin{alignedat}{3} & \varphi_{n}(x,t)&&=A^{-1}(x,t)\Phi_{n}\left(\frac{x+\gamma}{\sigma}\right) \\ & &&=\exp\frac{m}{\hbar}\left[ \left(-\frac{w}{\sigma^2}+i\frac{\dot{\sigma}}{\sigma}\right)\frac{x^2}{2} - \left(w\frac{\gamma}{\sigma^2}+i\frac{W}{\sigma}\right)x+\left(-\frac{w}{2}\frac{\gamma^2}{\sigma^2}-i\xi \right) \right] \\ & && \hspace{40mm}\times \sqrt{\frac{1}{2^{n}n!}\sqrt{\frac{mw}{\pi\hbar}}} \frac{1}{\sqrt{\sigma}} H_{n}\left[\sqrt{\frac{mw}{\hbar}}\left(\frac{x+\gamma}{\sigma} \right) \right] \, . \end{alignedat} \label{eq:INN3} \end{equation} The above expression is in agreement with the results reported by Glauber \cite{Gla92}. From (\ref{eq:INN2}) we immediately realize that the orthonormality \begin{equation} \int^{\infty}_{-\infty}\, dX \, \Psi_{n}(X, \tau)\Psi^{*}_{m}(X, \tau)=\int^{\infty}_{-\infty}\, dx \, \psi_{n}(x,t)\psi^{*}_{m}(x,t)=\delta_{n,m} \label{eq:INN4} \end{equation} holds when the functions $\psi$ are evaluated at the same time. In general, if $t \neq t'$, the orthonormality is not guaranteed. We write \begin{equation} \int_{-\infty}^{\infty}dx \,\psi_{n}(x,t)\psi_{m}^{*}(x,t') \not=\delta_{n,m}, \quad t\not=t'. \label{eq:INN5} \end{equation} Bearing in mind that the functions (\ref{eq:INN3}) are evaluated at a given time $t$, we may write $\mathcal{H}(t)=\operatorname{Span}\{\vert\psi_{n}(t)\rangle \}_{n=0}^{\infty}$. That is, the space of states we are dealing with is dynamical (see, e.g.
\cite{Ali18} for a discussion on the matter). The detailed analysis of the properties of such a space is beyond the scope of the present work, so it will be provided elsewhere. \subsection{Quantum integrals of motion} \label{integrals} The nonconservative system described by the Hamiltonian $\hat H(t)$ defined in (\ref{eq:PMO1}), equivalently by the Schr\"odinger equation (\ref{eq:INT4}), is quite different from the stationary oscillator associated with the well known Hamiltonian $\hat H_{osc}$ of Eq.~(\ref{eq:INT0}). Although we have shown the orthonormality of the solutions $\psi_n(x,t)$, it is necessary to emphasize that they are not eigenfunctions of the Hamiltonian $\hat H(t)$. Indeed, the time-dependence of $\hat H(t)$ prohibits the factorization of $\psi(x,t)$ as the product of a purely time-dependent function $T (t)$ with a position-dependent function $\chi (x)$, where $\chi (x)$ fulfills a given eigenvalue equation. Nevertheless, the functions $\psi_n(x,t)$ are admissible from the physical point of view. Since $\hat H(t)$ is not a constant of motion of the system, $\frac{d}{dt}\hat{H}(t)\not=0$, one may ask for the observable(s) that define the system uniquely. Such observable(s) must admit the set $\psi_n(x,t)$ as eigenfunctions. Moreover, what about the related spectrum? The latter points must be clarified in order to provide the functions (\ref{eq:INN2-1}), and any linear combination of them, with a physical meaning. Remarkably, such information is obtained from the point transformation itself, because any conserved quantity is preserved \cite{Ste93}. Indeed, from \eqref{eq:INT3} we see that the energy eigenvalues $E_{n}=\hbar w(n+1/2)$ of the stationary oscillator must be preserved since they are constant quantities. To be specific, using the relationships~\eqref{eq:INT10} of Appendix~\ref{ApA}, the stationary eigenvalue equation~\eqref{eq:INT2-2} gives rise to the new eigenvalue equation \begin{equation} \begin{aligned} -\sigma^2\frac{\hbar^{2}}{2m}\frac{\partial^2\varphi_{n}}{\partial x^2}&+\frac{m}{2}\left( \dot{\sigma}^{2}+\frac{w^2}{\sigma^2} \right)x^2 \varphi_{n}-\sigma\dot{\sigma}\frac{\hbar}{2i}\left(2x\frac{\partial}{\partial x} + 1 \right)\varphi_{n} + \frac{\hbar\sigma W}{i}\frac{\partial\varphi_{n}}{\partial x} \\ & + m\left(w^2 \frac{\gamma}{\sigma^2}-W\dot{\sigma} \right)x \varphi_{n} +\frac{m}{2}\left(W^{2}+w^2\frac{\gamma^2}{\sigma^2}\right) \varphi_{n} = E_n \varphi_{n}, \end{aligned} \label{eq:INV1} \end{equation} where the eigenvalues $E_{n}=\hbar w(n+1/2)$ have been inherited from the stationary oscillator. One immediately identifies the operator \begin{multline} \hat{I} (t)=\frac{\sigma^2}{2m}\hat{p}^2+\frac{m}{2}\left( \dot{\sigma}^{2}+\frac{w^{2}}{\sigma^2} \right)\hat{x}^2-\frac{\sigma\dot{\sigma}}{2}(\hat{x}\hat{p}+\hat{p}\hat{x})+\sigma W \hat{p} \\ +m\left(w^{2}\frac{\gamma}{\sigma^{2}}-W\dot{\sigma} \right)\hat{x} + \frac{m}{2}\left(W^{2}+w^{2}\frac{\gamma^2}{\sigma^2} \right) \mathbb{I}(t), \label{eq:INV2} \end{multline} where $\mathbb I(t)$ is the identity operator in ${\cal H}(t)$, see Section~\ref{Seclin}. The operator $\hat I$ is such that the eigenvalue equation \begin{equation} \hat{I} (t)\vert\varphi_{n}(t) \rangle = \hbar w(n+1/2)\vert\varphi_{n}(t)\rangle \label{eq:INV1-1} \end{equation} coincides with (\ref{eq:INV1}) in the position representation $\varphi_{n}(x,t)=\langle x \vert \varphi_{n}(t)\rangle$.
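A quick sanity check of (\ref{eq:INV2}): in the stationary limit $\sigma=1$, $\gamma=W=0$ (corresponding to $\Omega(t)=w$ and $F(t)=0$), all the time-dependent coefficients collapse and
\begin{equation*}
\hat{I}(t) \rightarrow \frac{\hat{p}^{2}}{2m}+\frac{m}{2}w^{2}\hat{x}^{2}=\hat{H}_{osc},
\end{equation*}
so the eigenvalue equation (\ref{eq:INV1-1}) reduces to the stationary problem (\ref{eq:INT2-2}), as it should.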
Besides, the straightforward calculation shows that $\hat{I}(t)$ satisfies the invariant condition \begin{equation} \frac{d}{dt}\hat{I} (t)=\frac{i}{\hbar}[\hat{H}(t),\hat{I} (t)]+\frac{\partial}{\partial t}\hat{I}(t)=0. \label{eq:INV3} \end{equation} That is, $\hat{I}(t)$ is an integral of motion of the parametric oscillator. We would like to stress that the invariant operator $\hat{I}(t)$ arises naturally from the point transformation presented in this work, without the need for any ansatz. In particular, for $\gamma_{1}=\gamma_{2}=F(t)=0$, the operator (\ref{eq:INV2}) coincides with the invariant of Lewis and Riesenfeld~\cite{Lew69}. \subsection{Dynamical algebra and quadratures} \label{dynamical} In addition to the previous results, it is possible to obtain a set of ladder operators for the parametric oscillator. We first recall that the action of the boson ladder operators \begin{equation} \hat{a}=\frac{\hbar}{\sqrt{2m}}\frac{\partial}{\partial X} + \sqrt{\frac{m}{2}}w X, \quad \hat{a}^{\dagger}=-\frac{\hbar}{\sqrt{2m}}\frac{\partial}{\partial X} + \sqrt{\frac{m}{2}}w X, \quad [\hat{a},\hat{a}^{\dagger}]=\hbar w \mathbb{I} \label{eq:ALG1} \end{equation} on the eigenstates of $\hat H_{osc}$ is well known \begin{equation} \hat{a}\Phi_{n+1}(X)=\sqrt{\hbar w (n+1)}\Phi_{n}(X), \quad \hat{a}^{\dagger}\Phi_{n}(X)=\sqrt{\hbar w (n+1)}\Phi_{n+1}(X). \label{eq:ALG2} \end{equation} The above results are quite natural considering the relationships \begin{equation} \hat{H}_{osc}=\hat{a}^{\dagger}\hat{a}+\frac{\hbar w}{2}, \quad [\hat{H}_{osc},\hat{a}]=-\hbar w \hat{a}, \quad [\hat{H}_{osc},\hat{a}^{\dagger}]=\hbar w \hat{a}^{\dagger}. \label{eq:ALG1-1} \end{equation} Using the relationships \eqref{eq:INT10} of Appendix~\ref{ApA}, the boson operators (\ref{eq:ALG1}) are deformed as follows \begin{equation} \begin{aligned} & \hat{a}_{2}(t)=\frac{\hbar}{\sqrt{2m}}\sigma\frac{\partial}{\partial x}+\sqrt{\frac{m}{2}}\left( - i \dot{\sigma} +\frac{w}{\sigma} \right)x +\sqrt{\frac{m}{2}} \left(iW+w\frac{\gamma}{\sigma} \right), \\[1ex] & \hat{a}^{\dagger}_{2}(t)=-\frac{\hbar}{\sqrt{2m}}\sigma\frac{\partial}{\partial x}+\sqrt{\frac{m}{2}}\left( i \dot{\sigma} +\frac{w}{\sigma} \right)x +\sqrt{\frac{m}{2}} \left(-iW+w\frac{\gamma}{\sigma} \right), \end{aligned} \label{eq:ALG3} \end{equation} while the equations (\ref{eq:ALG2}) acquire the form \begin{equation} \hat{a}_{2}(t) \varphi_{n+1}(x,t)=\sqrt{\hbar w \left(n+1\right)} \, \varphi_{n}(x,t), \quad \hat{a}_{2}^{\dagger}(t)\varphi_{n}(x,t)=\sqrt{\hbar w \left( n+1 \right)}\varphi_{n+1}(x,t). \label{eq:ALG5} \end{equation} Remarkably, the time-dependent ladder operators (\ref{eq:ALG3}) satisfy the Heisenberg algebra \begin{equation} [\hat{a}_{2}(t) , \hat{a}_{2}^{\dagger}(t)]=\hbar w \mathbb{I}(t), \label{algebra} \end{equation} and factorize the invariant operator of the parametric oscillator \begin{equation} \hat{I}(t)=\hat{a}_{2}^{\dagger}(t)\hat{a}_{2}(t)+\frac{\hbar w}{2}. \label{factor} \end{equation} The latter leads to the commutation rules \begin{equation} [\hat{I}(t),\hat{a}_{2}(t)]=-\hbar w \hat{a}_{2}(t), \quad [\hat{I}(t),\hat{a}^{\dagger}_{2}(t)]=\hbar w \hat{a}^{\dagger}_{2}(t), \label{eq:ALG4} \end{equation} which verify that $\hat{a}_{2}(t)$ and $\hat{a}^{\dagger}_{2}(t)$ are indeed ladder operators for the eigenfunctions of the invariant operator.
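All the deformed operators above are built from a solution $\sigma$ of the Ermakov equation in (\ref{eq:INT17}). The following minimal numerical check (ours, with an illustrative frequency profile $\Omega(t)$ of our choosing) confirms that the combination (\ref{eq:OSC7}) indeed solves it.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

w = 1.0
Omega = lambda t: np.sqrt(1.0 + 0.5 * np.sin(t) ** 2)  # illustrative

def classical(t, y):        # q'' + Omega^2(t) q = 0, first-order form
    return [y[1], -Omega(t) ** 2 * y[0]]

T = np.linspace(0.0, 10.0, 4001)
q1 = solve_ivp(classical, (0, 10), [1.0, 0.0], t_eval=T, rtol=1e-10).y[0]
q2 = solve_ivp(classical, (0, 10), [0.0, 1.0], t_eval=T, rtol=1e-10).y[0]

# These initial conditions give W0 = 1, so a = c = 1, b = 0 satisfy
# b^2 - 4ac = -4 w^2 / W0^2, and sigma = sqrt(q1^2 + q2^2) by (OSC7).
sigma = np.sqrt(q1 ** 2 + q2 ** 2)
sigma_dd = np.gradient(np.gradient(sigma, T), T)
residual = sigma_dd + Omega(T) ** 2 * sigma - w ** 2 / sigma ** 3
print(np.max(np.abs(residual[10:-10])))  # ~0 up to finite differencing
\end{verbatim}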
On the other hand, the canonical operators of position and momentum become time-dependent \begin{equation} \hat{x}=\frac{\sigma}{\sqrt{2m} \, w } \left( \hat{a}_{2}(t)+\hat{a}_{2}^{\dagger}(t) \right)-\gamma \mathbb{I}(t) \, , \quad \hat{p}=\sqrt{\frac{m}{2}}\left( \Xi \, \hat{a}_{2}(t) + \Xi^{*} \, \hat{a}^{\dagger}_{2}(t) \right) - m\dot{\gamma}\mathbb{I}(t), \label{eq:ALG6} \end{equation} where $\Xi(t)=-\frac{i}{\sigma}+\frac{\dot{\sigma}}{w}$. It may be proved that $[\hat{x},\hat{p}]=i\hbar \mathbb{I}(t)$, as expected. Using $\hat{I}(t)$, from \eqref{eq:INN2-1}, we find \begin{subequations} \begin{equation} \vert\psi_{n}(t)\rangle=e^{-i\hat{I} (t) \tau(t)/\hbar}\vert\varphi_{n}(t)\rangle, \label{eq:INV3-1} \end{equation} \begin{equation} \psi_{n}(x,t)=e^{-iw(n+1/2) \tau (t)}\varphi_{n}(x,t). \label{eq:INV3-2} \end{equation} \end{subequations} Contrary to the stationary case, the operator $e^{-i\hat{I} (t) \tau (t)/\hbar}$ in~\eqref{eq:INV3-1} is not the time evolution operator. Nevertheless, it adds the appropriate time-dependent complex phase to the eigenfunctions of $\hat I(t)$, just as discussed by Lewis and Riesenfeld; see Figure~\ref{fig:DIA}. \begin{figure} \centering \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=5em,column sep=8em,minimum width=2em] { i\frac{\partial}{\partial \tau}\Psi=\hat{H}_{osc}\Psi & i\frac{\partial}{\partial t}\psi=\hat{H}(t)\psi \\ \hat{H}_{osc}\Phi_{n}=\hbar w(n+1/2)\Phi_{n} & \hat{I}(t)\varphi_{n}=\hbar w(n+1/2)\varphi_{n} \\}; \path[-stealth] (m-1-1) edge [<->,very thick,red] node [left] {$\Psi_{n}=e^{-i\hat{H}_{osc}\tau/\hbar}\Phi_{n}$} (m-2-1) (m-2-2) edge [<->,very thick,red] node [right] {$\psi_{n}=e^{-i\hat{I}(t)\tau(t)/\hbar}\varphi_{n}$} (m-1-2) (m-1-1) edge [blue] node [above] {\textcolor{black}{P.T.}} (m-1-2) (m-1-1) edge [->,very thick,blue] node [below] {\textcolor{black}{$X(x,t)$, $\tau(t)$, $\psi=A(x,t)\Psi$}} (m-1-2) (m-2-1) edge [->,very thick,blue] node [below] {\textcolor{black}{P.T.}} (m-2-2); \end{tikzpicture} \caption{\footnotesize Connection between the stationary and parametric oscillators through the point transformation (P.T. for short). The orientation of the blue (horizontal) arrows may be inverted with the construction of the inverse point transformation. Thus, the diagram is commutative.} \label{fig:DIA} \end{figure} \subsection{Linear superpositions and representation space} \label{Seclin} Consider the normalized superposition \begin{equation} \vert \chi;t\rangle_{I}=\sum_{n=0}^{\infty} c_{n}\vert \varphi_{n}(t)\rangle, \quad \mbox{with} \quad \sum_{n=0}^{\infty}\vert c_{n}\vert^{2}=1, \quad c_{n} \in\mathbb{C}. \label{eq:INV4} \end{equation} Then, any regular solution of the Schr\"odinger equation~\eqref{eq:INT4} can be written, in representation-free form, as \begin{equation} \vert \chi;t\rangle=e^{-i\hat{I}(t) \tau (t)/\hbar}\vert\chi;t\rangle_{I}=\sum_{n=0}^{\infty}c_{n}e^{-i w(n+1/2) \tau (t)}\vert\varphi_{n}(t)\rangle=\sum_{n=0}^{\infty}c_{n}\vert \psi_{n}(t)\rangle \, . \label{eq:INV5} \end{equation} Additionally, we can construct linear operators $\hat{\mathcal{O}}(t,t')$ that map elements of $\mathcal{H}(t')$ into elements of $\mathcal{H}(t)$.
Using the Hubbard representation~\cite{Enr13} we may write \begin{equation} \hat{\mathcal{O}}(t,t'):=\sum_{n,m=0}^{\infty}\mathcal{O}_{n,m}\vert\psi_{n}(t)\rangle\langle\psi_{m}(t')\vert \, , \quad \mathcal{O}_{n,m}=\langle\psi_{n}(t)\vert\hat{\mathcal{O}}(t,t')\vert\psi_{m}(t')\rangle \, , \label{eq:INV6} \end{equation} where the coefficients $\mathcal{O}_{n,m}$ do not depend on time. In particular, for equal times $\hat{\mathcal{O}}(t):=\hat{\mathcal{O}}(t,t)$, we can construct a representation of the identity operator in $\mathcal{H}(t)$ as \begin{equation} \mathbb{I}(t):=\sum_{n=0}^{\infty}\vert\varphi_{n}(t)\rangle\langle\varphi_{n}(t)\vert. \label{eq:INV7} \end{equation} The time-evolution operator $U(t, t')$ is obtained from~\eqref{eq:INV6} by fixing $\mathcal{O}_{n,m}=\delta_{n,m}$. From the orthogonality of the eigenfunctions at a fixed time~\eqref{eq:INN4} it follows that the action of $U(t,t')$ on any superposition~\eqref{eq:INV4} defined at $t'$ produces \begin{equation} U(t,t')\vert\chi;t'\rangle=\sum_{n=0}^{\infty}c_{n}U(t,t')\vert\psi_{n}(t')\rangle=\sum_{n=0}^{\infty} c_{n}\vert\psi_{n}(t)\rangle=\vert\chi;t\rangle. \end{equation} In turn, the time-propagator \begin{equation} G(x,t;x',t')=\sum_{n=0}^{\infty}\psi_{n}(x,t)\psi^{*}_{n}(x',t') \end{equation} is such that \begin{equation} \psi_{\chi}(x,t)=\langle x \vert\chi;t\rangle=\int_{-\infty}^{\infty}dx' \, G(x,t;x',t')\psi_{\chi}(x',t'). \end{equation} The time-propagator can be explicitly computed by using the solutions~\eqref{eq:INN3} and the summation identities of the Hermite polynomials~\cite{Olv10}. However, such a derivation is not necessary in the present work; a discussion on the matter has been carried out for a similar problem in~\cite{Dod73}. \section{Coherent states} \label{Seccs} The simplest way to define the coherent states is to say that they ``are superpositions of basis elements to which some specific properties are requested on demand'' \cite{Ros19}. In this sense the discussion of Section~\ref{Seclin} is relevant, since the ability to sum over an orthonormal set of parametric oscillator states facilitates the construction of the corresponding (generalized) coherent states. Additionally, as the set $\{\hat{a}_{2}(t),\hat{a}_{2}^{\dagger}(t),\mathbb{I}(t) \}$ generates the Heisenberg Lie algebra (\ref{algebra}), one may use the conventional disentangling formulae to construct the appropriate displacement operator $\hat D(\alpha;t)$. The relevant point here is that the set $\{\hat{a}_{2}(t),\hat{a}_{2}^{\dagger}(t),\mathbb{I}(t) \}$, together with the invariant $\hat I$, closes the oscillator algebra (\ref{eq:ALG4}). Thus, the coherent states so constructed are linear superpositions of the eigenstates of $\hat I$ which, in turn, is factorized by the time-dependent ladder operators (\ref{factor}). The resemblance of the mathematical background of the parametric oscillator to that of the stationary oscillator is, in this form, extended to the related coherent states. Using the conventional disentangling formulae, see e.g.
\cite{Ros19,Gil74}, with $\hat{a}_{2}(t)$ and $\hat{a}_{2}^{\dagger}(t)$ one obtains the operator \begin{equation} \hat{D} (\alpha;t)=e^{\frac{1}{\hbar w}\left( \alpha \hat{a}_{2}^{\dagger}(t) - \alpha^{*}\hat{a}_{2}(t) \right)}=e^{-\frac{\vert\alpha\vert^{2}}{2\hbar w}}e^{\frac{\alpha}{\hbar w}\hat{a}_{2}^{\dagger}(t)}e^{-\frac{\alpha^{*}}{\hbar w}\hat{a}_2(t)}, \quad \alpha\in\mathbb{C}, \label{eq:CS1} \end{equation} which produces displacements on the time-dependent ladder operators \begin{equation} \hat{D}^{\dagger}(\alpha;t)\hat{a}_{2}(t)\hat{D} (\alpha;t)=\hat{a}_{2}(t)+\alpha, \quad \hat{D}^{\dagger} (\alpha;t)\hat{a}^{\dagger}_{2}(t)\hat{D} (\alpha;t)=\hat{a}^{\dagger}_{2}(t)+\alpha^{*}. \label{eq:CS2} \end{equation} In the Perelomov picture \cite{Per86} the coherent states $\vert\alpha;t\rangle_{I}$ are constructed by the action of $\hat{D}(\alpha;t)$ on the fiducial state $\vert \varphi_{0}(t)\rangle$. From~\eqref{eq:CS2} we find that the result \begin{equation} \vert \alpha; t\rangle=e^{-iw \tau (t)/2}e^{-\frac{\vert\alpha\vert^2}{2\hbar w}}\sum_{n=0}^{\infty}\left( \frac{\alpha e^{-i w \tau(t)}}{\sqrt{\hbar w}} \right)^{n} \frac{1}{\sqrt{n!}} \vert \varphi_{n}(t)\rangle, \label{eq:CS4} \end{equation} is equivalent to the one obtained in the Barut-Girardello picture \cite{Bar71}, where the following equation holds \begin{equation} \hat{a}_{2}(t)\vert\alpha;t\rangle=\alpha e^{-iw \tau(t)}\vert\alpha;t\rangle. \label{eq:CS4-1} \end{equation} Despite the explicit time-dependence of $\vert \alpha; t \rangle$, it is found that the related probability distribution is time-independent, \begin{equation} \mathcal{P}_{n}(\alpha)=\vert\langle\varphi_{n}(t)\vert\alpha;t\rangle\vert^2=e^{-\frac{\vert\alpha\vert^2}{\hbar w}}\left(\frac{\vert\alpha\vert^2}{\hbar w}\right)^{n}\frac{1}{n!}. \label{eq:CS5} \end{equation} Clearly, ${\cal P}_n$ is a Poisson distribution, as expected~\cite{Zel19} (compare with \cite{Una18}). In turn, the expectation values of the quadratures are as follows \begin{subequations} \begin{equation} \small{\langle \hat{x} \rangle_{t}=\sqrt{\frac{2}{m}}\frac{\sigma}{w}\operatorname{Re}\alpha e^{-iw\tau(t)}-\gamma=\sqrt{\frac{2\vert\alpha\vert^2}{mw^{2}c}}\left[\left( \frac{w}{W_{0}}\cos\theta_{\alpha}+\frac{b}{2}\sin\theta_{\alpha} \right)q_{1} + c\sin\theta_{\alpha}q_{2} \right]-\gamma} \, , \label{eq:CS6-1} \end{equation} \begin{equation} \langle \hat{p} \rangle_{t} = m\frac{d}{dt}\langle \hat{x} \rangle_{t} = \sqrt{2m}\left(\frac{\dot{\sigma}}{w}\operatorname{Re}\alpha e^{-iw\tau(t)}+\frac{1}{\sigma}\operatorname{Im}\alpha e^{-iw\tau(t)} \right) - m\dot{\gamma} \, , \label{eq:CS6-2} \end{equation} \end{subequations} with $\alpha=\vert\alpha\vert e^{i\theta_{\alpha}}$. If $F(t)=\gamma(t)=0$ then $\langle \hat{x} \rangle_{t}$ becomes a linear combination of $q_{1,2}$ that matches the classical result. As usual, $\vert\alpha\vert$ and $\theta_{\alpha}$ play the role of the classical initial conditions of the system. For $F(t)\not=0$, the expected value becomes displaced by a quantity $\gamma$, so that it describes a classical oscillator subjected to the action of a driving force~\eqref{eq:INT17}. In both cases the expected value of the momentum~\eqref{eq:CS6-2} is in agreement with the Ehrenfest theorem~\cite{Sch02}, which is a property of quadratic Hamiltonians.
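The distribution \eqref{eq:CS5} can be read off directly from the expansion \eqref{eq:CS4}: projecting onto $\langle\varphi_{n}(t)\vert$ and using the orthonormality of the eigenfunctions gives \begin{equation*} \langle\varphi_{n}(t)\vert\alpha;t\rangle=e^{-iw\tau(t)/2} \, e^{-\frac{\vert\alpha\vert^{2}}{2\hbar w}} \, \frac{\left(\alpha e^{-iw\tau(t)}\right)^{n}}{\sqrt{n!\,(\hbar w)^{n}}}, \end{equation*} so that all the time-dependent phases cancel when taking the modulus squared, and $\mathcal{P}_{n}(\alpha)$ is a Poisson distribution with mean $\bar{n}=\vert\alpha\vert^{2}/\hbar w$.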
On the other hand, the Heisenberg uncertainty relation is given by \begin{equation} \left( \Delta \hat{x} \right)_{t}^{2}\left( \Delta \hat{p} \right)_{t}^{2}=\frac{\hbar^{2}}{4}+\frac{\hbar^{2}}{4}\frac{\sigma^{2}\dot{\sigma}^2}{ w^{2}}, \label{eq:CS10} \end{equation} with \begin{equation} \left( \Delta \hat{x} \right)_{t}^{2}=\frac{\hbar}{2mw}\sigma^{2}, \quad \left( \Delta \hat{p} \right)_{t}^{2}=\frac{\hbar mw}{2}\left(\frac{\dot{\sigma}^{2}}{w^{2}}+\frac{1}{\sigma^2} \right). \end{equation} Thus, the product (\ref{eq:CS10}) is minimized for $\dot{\sigma}=0$. The latter means that $\Delta \hat{x}$ and $\Delta \hat{p}$ are inversely proportional, up to the constant $\sfrac{\hbar}{2}$, just as occurs in the stationary case. In the trivial situation where $\sigma \neq \sigma (t)$, from \eqref{eq:INT17} we realize that the unique solution is obtained for the constant frequency $\Omega^{2} =w^{2}/\sigma^{4}$, which reproduces the conventional results of the stationary oscillator. For arbitrary time-dependent $\sigma$-functions the uncertainty $\Delta \hat{x} \Delta \hat{p} \geq \sfrac{\hbar}{2}$ is minimized at the times $t_k$ such that $\dot{\sigma}(t_k)=0$, see Section~\ref{examples} for details. Paying attention to the product (\ref{eq:CS10}), it is clear that the variances minimize the Schr\"odinger-Robertson inequality at any time, the latter being given by \cite{Rob29,Nie93,Tri94}: \begin{equation} (\Delta \hat{x})^2(\Delta \hat{p})^2\geq\frac{\hbar^{2}}{4}+\sigma_{\hat{x},\hat{p}}^{2}, \quad \sigma_{\hat{x},\hat{p}}=\frac{1}{2}\langle \hat{x}\hat{p}+\hat{p} \hat{x} \rangle - \langle \hat{x} \rangle\langle \hat{p} \rangle, \label{eq:SRU1} \end{equation} where $\sigma_{\hat{x},\hat{p}}$ stands for the covariance function. In our case \begin{equation} \sigma_{\hat{x},\hat{p}}=\frac{\hbar}{2}\frac{\sigma\dot{\sigma}}{w} \, . \label{eq:SRU2} \end{equation} As we can see, the coherent states of the parametric oscillator satisfy almost all the properties of the Glauber coherent states. The only exception is that they minimize the Schr\"odinger-Robertson inequality rather than the Heisenberg uncertainty. For completeness, the coordinate representation of the coherent states is given by the wavepacket \begin{multline} \psi(\alpha;x,t)= \sqrt{\frac{1}{\sqrt{2\pi}(\Delta x)_{t}}} \, \exp\left[ \frac{i}{2\hbar} \left( \int dt' F(t')\gamma(t')-\hbar w \tau (t) \right) \right] \\ \times \exp\left[ \left(-\frac{1}{4(\Delta x)^{2}_{t}}+i\frac{m}{2\hbar}\frac{\dot{\sigma}}{\sigma} \right) (x-\langle \hat{x} \rangle_{t})^{2} + \frac{i}{\hbar}\langle \hat{p} \rangle_{t}x + \frac{i}{2\hbar} \langle \hat{x}\rangle_{t}\langle \hat{p}\rangle_{t} \right], \label{eq:CS11} \end{multline} which is characterized by a Gaussian function with time-dependent width, the maximum of which follows the trajectory of a classical particle under the influence of the parametric oscillator potential. \section{Examples and discussion of results} \label{examples} To show the applicability of our approach we consider the results for some specific forms of the time-dependent frequency $\Omega^{2}(t)$. We take $F(t)=0$ for simplicity. With these considerations, it follows that the mapping of the position variable acquires the form \begin{equation} X(x,t)=\frac{x+\gamma_{1} q_{1}(t)+\gamma_{2} q_{2}(t)}{\sigma(t)} \, , \quad \gamma_1,\gamma_2\in\mathbb{R} \, .
\label{eq:FP} \end{equation} \subsection{$\Omega^{2}(t)=0$.} Despite its simplicity, the null frequency $\Omega=0$ provides a connection between the solutions of the harmonic oscillator and those of the free particle, see e.g. \cite{Mil81,Blu96}. It is straightforward to obtain the functions \begin{equation} \sigma(t)=\left(a+ct^2+2\sqrt{ac-w^{2}} \, t\right)^{1/2}, \quad \gamma(t)=\gamma_{1}+\gamma_{2}t , \label{eq:NF0} \end{equation} where $a,c>0$ and $ac>w^{2}$. Then, the relation between the time parameters is given by \begin{equation} \tau (t)=\frac{1}{w}\arctan\left[\frac{1}{w}\left(\sqrt{ac-w^{2}}+ct \right) \right] \, , \label{eq:NF1} \end{equation} while the spatial coordinates are related through Eq.~\eqref{eq:FP}. Now, from \eqref{eq:INN1} with $a=c=w=1$, we arrive at the equivalent result \begin{equation} \psi(x,t)=e^{i\frac{m}{\hbar}\left(\frac{tx^{2}}{1+t^2}\right)}\left(1+t^2\right)^{-1/4}\Psi\left(\frac{x}{\sqrt{1+t^2}},\arctan t \right), \label{eq:NF2} \end{equation} which has already been reported in \cite{Mil81}, p.~83. The above procedure permits the construction of coherent states for the free-particle system by means of a simple mapping of the Glauber states to the appropriate basis (similar results can be found in \cite{Bag14}). In such a case, the function $\sigma$ is proportional to the width of the wave-packet which, from~\eqref{eq:NF0}, is an increasing function of time. In other words, the coherent states of a free particle become less localized as time passes. \subsection{$\Omega^{2}(t)=\Omega_{0}^{2}>0$.} In this case the Hamiltonian (\ref{eq:PMO1}) is of the form \begin{equation} \left. \hat{H} (t) \right\vert_{\Omega(t)=\Omega_{0}} =\frac{\hat{p}^{2}}{2m}+\frac{m\Omega_{0}^{2}}{2}\hat{x}^{2} \equiv \hat{H}_{osc}. \label{eq:CF0} \end{equation} That is, $\hat H(t)$ represents a stationary oscillator of frequency $\Omega_{0}$. With the pair of linearly independent functions, $q_{1}(t)=\cos(\Omega_{0} t)$ and $q_{2}(t)=\sin(\Omega_{0} t)$, the functions $\sigma$ and $\gamma$ take the form \begin{equation} \begin{aligned} & \sigma^{2}(t)=a\cos^{2}(\Omega_{0}t)+c\sin^{2}(\Omega_{0} t)+\sqrt{ac-\frac{w^{2}}{\Omega^{2}_{0}}}\,\sin(2\Omega_{0}t), \\[1ex] & \gamma(t)=\gamma_{1}\cos(\Omega_{0} t)+\gamma_{2}\sin(\Omega_{0} t). \end{aligned} \label{eq:CF1} \end{equation} From~\eqref{eq:INV2} and~\eqref{eq:CF1} we realize that $\hat{I}(t)$ is still a time-dependent operator, which is also an invariant of the system. Consequently, the functions $\varphi_{n}(x,t)$ are not eigenfunctions of $\hat{H}$, although they are solutions of the corresponding Schr\"odinger equation. In the special case $a=c=w/\Omega_{0}$ we obtain the constant function $\sigma(t)=\sqrt{w/\Omega_{0}}$. In addition, for $\gamma_{1,2}\not=0$ we recover the displaced number states discussed in~\cite{Nie97} and \cite{Phi14}. For $\gamma_{1,2}=0$, the eigenfunctions $\varphi_{n}$ are simply reduced to the solutions of the stationary oscillator of frequency $\Omega_{0}$. \subsection{$\Omega^{2}(t)=\Omega_1+\Omega_2 \tanh(k t)$.} For $\Omega_{1}>\Omega_{2}$ the squared frequency $\Omega^{2}(t)$ changes smoothly from $\Omega_{1}-\Omega_{2}$ to $\Omega_{1}+\Omega_{2}$. In the limit $k\rightarrow\infty$, the profile of $\Omega^{2}(t)$ converges to a step function described by the Heaviside distribution $\Theta (t)$ \cite{Olv10}. In general, we have the linearly independent functions \begin{equation} \begin{aligned} & \widetilde{q}_{1}(t)=(1-z)^{-\frac{i}{2}g_{+}}(1+z)^{-\frac{i}{2}g_{-}} \, {}_{2}F_{1}\left( \left.
\begin{aligned} -i \mu \, , \, 1-i \mu \\ 1-ig_{+} \hspace{5mm} \end{aligned} \right\vert \frac{1-z}{2} \right) , \\[1ex] & \widetilde{q}_{2}(t)=(1-z)^{+\frac{i}{2}g_{+}}(1+z)^{+\frac{i}{2}g_{-}} \, {}_{2}F_{1}\left( \left. \begin{aligned} i \mu \, , \, 1+i \mu \\ 1+ig_{+} \hspace{5mm} \end{aligned} \right\vert \frac{1-z}{2} \right), \\[1ex] & g_{\pm}=\mu \pm\frac{\Omega_2}{2k^{2}\mu}, \quad \mu=\frac{1}{k}\sqrt{\frac{\Omega_1+\sqrt{\Omega_1^{2}-\Omega_2^{2}}}{2}}, \quad z=\tanh(k t) , \end{aligned} \label{eq:TDF2} \end{equation} where ${}_{2}F_{1}(a,b;c;z)$ stands for the hypergeometric function \cite{Olv10}. From~\eqref{eq:TDF2} it is clear that both $\widetilde{q}_{1,2}$ are complex-valued functions. Moreover, as $\widetilde{q}_{2}(t)=\widetilde{q}^{*}_{1}(t)$, the Wronskian is the pure imaginary number $W_{r}(\widetilde{q}_1,\widetilde{q}_2)=-2ikg_{+}$. \begin{figure}[htb] \centering \includegraphics[width=0.3\textwidth]{TH1} \caption{\footnotesize The solution of the Ermakov equation~\eqref{eq:OSC7} (solid-black) is compared with $q_{1}(t)$ (dashed-blue) and $q_{2}(t)$ (dotted-red). In all cases the time-dependence is dictated by the frequency function $\Omega^{2}(t)=\Omega_1+\Omega_2 \tanh(kt)$, with $k=1/2$, $\Omega_1=5$, $\Omega_2=3$, and $a=c=1$.} \label{fig:F1} \end{figure} Following the discussion of Appendix~\ref{ApC} we set $q_{1}=\operatorname{Re}[\widetilde{q}_{1}]$ and $q_{2}=\operatorname{Im}[\widetilde{q}_{1}]$ as the pair of linearly independent real solutions that are required in our approach. Then $W_{0}=kg_{+}$, and \begin{equation} \sigma^{2}(t)=a q_{1}^{2}+c q_{2}^{2}+2\sqrt{ac-\frac{w^{2}}{k^{2}g_{+}^{2}}} \, q_{1} q_{2}, \label{eq:TDF3} \end{equation} where $a,c>0$ to obtain a nodeless real-valued solution. It is worth remembering that any linear combination of $q_{1}$ and $q_{2}$ can be used to describe the classical motion of a particle under the influence of the parametric oscillator, whereas in the quantum case the nonlinear combination~\eqref{eq:TDF3} is necessary to make any prediction. The behavior of $q_{1}$, $q_{2}$, and $\sigma$ is depicted in Figure~\ref{fig:F1}. It can be appreciated that the classical solutions transition from lower ($t<0$) to higher ($t>0$) frequency oscillations, as expected. The time rate of such a transition is controlled by the parameter $k$. The oscillations are not exactly periodic, but they can be considered periodic at large enough times. \begin{figure}[htb] \centering \subfigure[~$n=0$ ]{\includegraphics[width=0.3\textwidth]{WFn0} } \hskip1ex \subfigure[~$n=1$ ]{\includegraphics[width=0.3\textwidth]{WFn1} } \hskip1ex \subfigure[~$n=2$ ]{\includegraphics[width=0.3\textwidth]{WFn2} } \caption{\footnotesize Probability density $\vert\varphi_{n}\vert^{2}=\vert\psi_{n}\vert^{2}$ for the indicated values of $n$ with $k=1/2,\Omega_{1}=5,\Omega_{2}=3,a=c=w=1$. The horizontal and vertical axes correspond to position and time, respectively. } \label{fig:F2} \end{figure} The probability densities of the eigenfunctions $\varphi_{n}(x,t)$ are shown in Figure~\ref{fig:F2} for $n=0,1,2$. We can appreciate that $\varphi_{0}(x,t)$ is a localized wave-packet that spreads out during a finite interval of time, and is then squeezed until it recovers its initial configuration. Such an oscillatory property is relevant in the paraxial approximation of electromagnetic signals, for it is associated with self-focusing beams in varying media \cite{Cru17,Gre17,Gre19,Raz19}.
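We remark, in passing, that the construction of $\sigma$ for this example does not actually require the closed-form solutions \eqref{eq:TDF2}: it suffices to integrate the classical equation of motion numerically and apply a nonlinear combination of the form \eqref{eq:TDF3}. The following minimal Python sketch illustrates this (it is an illustration added for concreteness, not the code used to produce the figures; the initial conditions are chosen so that the Wronskian is $W_{0}=1$, rather than the value $W_{0}=kg_{+}$ associated with \eqref{eq:TDF2}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Frequency profile of this example: Omega^2(t) = Omega1 + Omega2*tanh(k*t)
Omega1, Omega2, k = 5.0, 3.0, 0.5
w, a, c = 1.0, 1.0, 1.0

def rhs(t, y):
    # y = (q1, q1', q2, q2'); both pairs solve q'' + Omega^2(t) q = 0
    O2 = Omega1 + Omega2 * np.tanh(k * t)
    return [y[1], -O2 * y[0], y[3], -O2 * y[2]]

# Initial conditions with unit Wronskian: W0 = q1*q2' - q1'*q2 = 1
t = np.linspace(-10.0, 10.0, 2000)
sol = solve_ivp(rhs, (t[0], t[-1]), [1.0, 0.0, 0.0, 1.0],
                t_eval=t, rtol=1e-10, atol=1e-12)
q1, q2 = sol.y[0], sol.y[2]

# Nonlinear combination sigma^2 = a*q1^2 + c*q2^2
#   + 2*sqrt(a*c - w^2/W0^2)*q1*q2, which requires a*c >= w^2/W0^2
# (here the cross term vanishes since a = c = w = W0 = 1)
W0 = 1.0
sigma = np.sqrt(a * q1**2 + c * q2**2
                + 2.0 * np.sqrt(a * c - w**2 / W0**2) * q1 * q2)
\end{verbatim}
The resulting $\sigma(t)$ is nodeless, since $q_{1}$ and $q_{2}$ cannot vanish simultaneously when the Wronskian is nonzero, and it can be used in place of the hypergeometric construction whenever closed-form solutions are unavailable.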
Higher eigenfunctions exhibit a definite number of nodes, the positions of which vary in time. Moreover, from the polynomial behavior of the solutions, it is clear that the oscillation theorem holds at each time, leading to a complete set of solutions which form a basis. The latter generates a vector space which turns out to be dynamical \cite{Ali18}. On the other hand, the behavior of the coherent states in coordinate representation~\eqref{eq:CS11} and the variances associated with them~\eqref{eq:CS10} are depicted in Figure~\ref{fig:F3}. It is clear that the maximum of $\vert\psi(\alpha;x,t)\vert^{2}$ follows a classical trajectory, compare with the behavior of $q_{1}(t)$ in Fig.~\ref{fig:F1}. The variance $(\Delta\hat{x})^{2}$ is squeezed in time with an oscillatory profile, and the squeezing increases as time goes on. On the other hand, the variance $(\Delta\hat{p})^{2}$ spreads more strongly than its canonical counterpart. Thus, this configuration favors localization in position, which is the desired behavior inside ion traps, as discussed in, e.g., \cite{Gla92}. \begin{figure}[htb] \centering \subfigure[]{\includegraphics[width=0.3\textwidth]{WFCS1} } \hskip1cm \subfigure[]{\includegraphics[width=0.41\textwidth]{VAR} } \caption{\footnotesize (a) Probability density $\vert\psi(\alpha;x,t)\vert^{2}$ for the coherent states with $k=1/2,\Omega_{1}=5,\Omega_{2}=3,a=c=w=1$. The horizontal and vertical axes correspond to position and time, respectively. (b) Variances of the physical position $(\Delta\hat{x})^{2}_{t}$ (solid-blue) and momentum $(\Delta\hat{p})^{2}_{t}$ (dashed-red), with the same parameters as in figure~(a). } \label{fig:F3} \end{figure} \section{Conclusions} \label{conclu} We have shown that a properly chosen point transformation makes it possible to solve the Schr\"odinger equation for a wide diversity of nonstationary oscillators. Our method overcomes the difficulties that arise in conventional approaches, such as the absence of the observable(s) that uniquely define(s) the state of a parametric oscillator. Namely, as the related Hamiltonian is not an integral of motion, one usually has to provide an ansatz to guess the form of the related invariant. A striking feature of our method is that the integrals of motion are automatically obtained as a consequence of the transformation, with no need to guess any ansatz. In this context, it is to be expected that our method can be applied to study the dynamics of particles in electromagnetic traps \cite{Pau90}. Another difficulty that is automatically resolved by our approach concerns the orthogonality of the solutions of the nonstationary oscillators. That is, in contrast with the stationary case, when solving the Schr\"odinger equation for a nonstationary system the orthogonality of the solutions is not automatically guaranteed. We demonstrated that the orthonormality of the states of the parametric oscillator is guaranteed by the point transformation of the states of the stationary case. The dynamical algebra, in turn, is also inherited from the stationary oscillator algebra. The latter results laid the groundwork to construct the corresponding coherent states, which inherit all the properties of the Glauber states with the exception that they minimize the Schr\"odinger-Robertson inequality rather than the Heisenberg uncertainty.
Additional applications may include the propagation of electromagnetic signals in waveguides, where the Helmholtz equation is formally paired with the Schr\"odinger one \cite{Man08,CruT15a,CruT15b}, and where self-focusing is relevant \cite{Cru17,Gre17,Gre19,Raz19}. Finally, the approach can be extended to study supersymmetric structures in quantum mechanics \cite{Mie04} with time-dependent potentials \cite{Zel17a,Con17}.
\section{Introduction} Geoparsing is the process of recognizing and geo-locating location mentions from texts. It has been widely applied to various textual data, and is an important task in geographic information retrieval \cite[]{purves2018geographic}. A geoparsing system, known as a geoparser, usually functions in two steps: toponym recognition and toponym resolution. Toponym recognition detects the place mentions in texts, while toponym resolution resolves any place name ambiguity and assigns the appropriate spatial footprint (e.g., a pair of coordinates). Many geoparsers have been developed, such as CLAVIN\footnote{\url{https://clavin.bericotechnologies.com}}, the Edinburgh Geoparser \cite[]{grover2010use}, GeoTxt \cite[]{karimzadeh2019geotxt}, and TopoCluster \cite[]{delozier2015gazetteer}. In June 2019, an important geoparsing competition, \textit{Toponym Resolution in Scientific Papers}, was held as the SemEval 2019 Task 12, in conjunction with the Annual Conference of the North American Chapter of the Association for Computational Linguistics. This competition attracted 29 registered teams and 8 teams eventually submitted a system run \cite[]{weissenbacher2019semeval}. The winning teams all leveraged state-of-the-art neural network based models, such as BiLSTM-CRF and deep contextualized word embeddings, to design their geoparsers. Particularly, the geoparser that won the first place, DM\_NLP \cite[]{wang2019dm_nlp}, achieved over 90\% precision, recall, and F1 score for toponym recognition. This result is exciting and raises the question ``are we there yet?'' A 90\% performance is not perfect but is probably sufficient for many applications. So have we already made enough progress that we can consider the problem of geoparsing as solved? A major limitation of the SemEval 2019 Task 12 competition is that the submitted geoparsers were tested on a single dataset which has 45 research articles from one particular domain of Bio-medicine. Existing research has shown that the same geoparser can have very different performances when tested on different datasets \cite[]{gritta2018s}. Accordingly, answering the question of whether the problem of geoparsing can be considered as solved requires a systematic evaluation of the state-of-the-art geoparsers on multiple datasets which should ideally be in different text genres (e.g., news articles, social media posts, and other types of texts). In a recent work, we developed an online platform called EUPEG\footnote{\url{https://geoai.geog.buffalo.edu/EUPEG}} which is an Extensible and Unified Platform for Evaluating Geoparsers \cite[]{hu2018eupeg,wang2019eupeg}. EUPEG hosts a majority of the geoparsing resources reported in the literature, including eight annotated datasets, nine geoparsers, and eight evaluation metrics. In addition, the eight annotated datasets are in four different text genres which are news articles, Wikipedia articles, social media posts, and texts on Web pages. The source code of EUPEG and the related geoparsing resources are shared on GitHub\footnote{\url{https://github.com/geoai-lab/EUPEG}}. In this paper, we systematically evaluate the top geoparsers from SemEval Task 12 using EUPEG as a benchmarking platform. We focus on the top three end-to-end geoparsers that showed the highest performances in the competition, which are DM\_NLP \cite{wang2019dm_nlp}, UniMelb \cite{li2019unimelb}, and UArizona \cite{yadav2019university}.
We test the performances of these three geoparsers on the datasets hosted on EUPEG, and compare them with those of the other existing geoparsers. The contributions of this paper are as follows: \begin{itemize} \item We conduct a systematic evaluation experiment on three state-of-the-art geoparsers, and discuss the implications and challenges based on the experiment results. \item We implement the three tested geoparsers based on their papers and share the source code at \url{https://github.com/geoai-lab/GeoAI2019Geoparser} to support future research. \end{itemize} \section{State-of-the-Art Geoparsers} The top three end-to-end geoparsers from SemEval Task 12 are DM\_NLP, UniMelb, and UArizona. They are all designed as pipeline systems comprising two independent components for toponym recognition and resolution respectively. Accordingly, we describe and compare the three geoparsers based on the two components. \subsection{Toponym Recognition} All three geoparsers adopt the general Bidirectional Long Short Term Memory (BiLSTM) model for toponym recognition. However, their models vary in regard to the selection of word embeddings, integration of character-level embeddings, concatenation with a conditional random field layer, and mechanisms of self attention. \textbf{DM\_NLP: } This model, which ranked first, is built upon the character and word level BiLSTM model developed by Lample et al. \cite{lample2016neural}. The authors of DM\_NLP also tested the strategies of adding four extra linguistic features into the input layer: Part-of-Speech (POS) tags, NER labels from Stanford NER, Chunking labels, and deep contextualized word representations from the ELMo word embeddings \cite{peters2018deep}, but found that adding ELMo alone yields the largest performance improvement. In our implementation, we add the ELMo word embeddings as the extra linguistic feature. The final output layer of DM\_NLP is a CRF layer. \textbf{UniMelb: } This model is developed by integrating a word-level BiLSTM \cite{hochreiter1997long} and the self-attention mechanism \cite{NIPS2017_7181}. The authors tested both the GloVe and ELMo word embeddings, and found that the model with ELMo performed better. Thus, our implementation also uses ELMo word embeddings. The final layer of UniMelb is a binary softmax classifier. \textbf{UArizona: } This model is a re-implementation of a word, character, and affix level LSTM developed by Yadav et al. \cite[]{yadav2018survey}. In this model, the input of the word LSTM is a concatenation of GloVe word embeddings, character embeddings represented by the output of a character BiLSTM, and word affix features. The word LSTM representations are given to the final CRF layer to recognize toponyms. We train all three toponym recognition models using the general-purpose CoNLL 2003 dataset. The hyperparameters are set to the same values as reported in their papers. We use 300-dimensional pre-trained GloVe word embeddings and 1024-dimensional pre-trained ELMo embeddings from Tensorflow Hub (\url{https://tfhub.dev/google/elmo/2}). We do not update the weights of word embeddings during the training process. \subsection{Toponym Resolution} For toponym resolution, all three geoparsers use the same general workflow of first retrieving place candidates from the GeoNames gazetteer and then identifying the correct place instance among the candidates. However, different techniques were used by each geoparser to identify the right place instance.
\textbf{DM\_NLP: } This model constructs four groups of features, which include name string similarity, candidate attributes, contextual features, and mention list features. These features are then used to train a LightGBM model for toponym resolution. \textbf{UniMelb: } This model also constructs features, including historical results from the training dataset, population, GeoNames feature codes, name similarity, and ancestor names, and trains a support vector machine (SVM) for toponym resolution. \textbf{UArizona: } This model simply uses the population heuristic for toponym resolution. Each place name is resolved to the place instance that has the highest population in GeoNames. There is a challenge in re-implementing these toponym resolution models: both DM\_NLP and UniMelb were trained on the specific training dataset from SemEval Task 12, which consists of 105 research articles in Bio-medicine. While this is fine and even desirable for a competition (since the testing is based on 45 research articles from the same domain), a model trained with one specific type of texts may not generalize well to other types of texts from different domains. Though we have multiple datasets available from the EUPEG platform, training the models with any of these datasets leads to the same bias issue. Ideally, the toponym resolution models of DM\_NLP and UniMelb should be trained with a large and general dataset which has labeled place instances (note that CoNLL 2003 cannot be used for training toponym resolution models) so that the general performances of these models can be measured. However, we currently do not have access to such a dataset. Thus, we resort to a simple but general implementation, namely using the population heuristic of UArizona for all three models. Previous research, as well as the experiment result reported by the DM\_NLP team \cite[]{wang2019dm_nlp}, has shown that the population heuristic is a competent baseline and can sometimes outperform more complex models \cite[]{weissenbacher2015knowledge,delozier2015gazetteer}. Nevertheless, we are aware of the limitations of this simple heuristic and will discuss them with the experiment results. \section{Experiments and Results} \subsection{Experiments on EUPEG} The three neural network based geoparsers are tested on EUPEG. As a benchmarking platform, EUPEG provides eight annotated corpora, nine geoparsers, and eight performance metrics. Table \ref{EUPEG_info} summarizes these resources. More detailed descriptions of each of the resources can be found in our full paper about EUPEG \cite{wang2019eupeg}. We provide brief descriptions below to make this current paper self-contained. \begin{table}[ht] \caption{Datasets, geoparsers, and metrics on EUPEG} \label{EUPEG_info} \vspace*{-0.2cm} \begin{tabular}{cl} \toprule \textbf{Category} &\textbf{Resources}\\ \midrule Datasets & LGL, GeoVirus, TR-News, GeoWebNews, WikToR, \\ &GeoCorpora, Hu2014, Ju2016\\ Geoparsers&GeoTxt, The Edinburgh Geoparser, CLAVIN,\\ &Yahoo! PlaceSpotter, CamCoder, TopoCluster, \\&StanfordNER+Population, SpaCyNER+Population,\\ & DBpedia Spotlight\\ Metrics & Precision, Recall, F1 score, Accuracy, Mean,\\ & Median, AUC, Accuracy@161\\ \bottomrule \end{tabular} \end{table} The eight datasets are in four different text genres: news articles, Wikipedia articles, social media posts, and Web pages.
Particularly, \textit{LGL}, \textit{GeoVirus}, \textit{TR-News}, and \textit{GeoWebNews} contain annotated news articles; \textit{WikToR} is a Wikipedia dataset; \textit{GeoCorpora} is a social media dataset that contains annotated tweets; and \textit{Hu2014} and \textit{Ju2016} are two corpora that contain texts retrieved from Web pages. These diverse datasets enable a more comprehensive evaluation of the performance of a geoparser. It is worth noting that these datasets were annotated by researchers from different domains (e.g., geography, linguistics, and computer science). As a result, there exist differences in the words and phrases that are considered as toponyms. All datasets annotate administrative units, such as cities, towns, and countries. However, some datasets, such as \textit{LGL} and \textit{GeoWebNews}, also consider demonyms (e.g., Canadian) as toponyms. The toponyms in the dataset \textit{GeoCorpora}, in addition to administrative units, also include natural features (e.g., lakes and mountains) and facilities (e.g., streets and buildings) which are not included in some other datasets (e.g., \textit{GeoVirus}). This difference in the definition of toponyms directly affects the performances of the same geoparser on different datasets. The nine geoparsers hosted on EUPEG use a variety of heuristics and machine learning based methods. Particularly, \textit{GeoTxt}, \textit{The Edinburgh Geoparser}, and \textit{CLAVIN} use a named entity recognition tool for toponym recognition and a number of heuristics (e.g., the level of an administrative unit and population) for toponym resolution. \textit{TopoCluster} uses Stanford NER for toponym recognition and generates geographic profiles of words for toponym resolution. \textit{CamCoder} is a deep learning based geoparser that leverages a Convolutional Neural Network (CNN) model. \textit{Yahoo! PlaceSpotter} is an industrial geoparser which provides an online REST API (at the time of writing this paper, the online service of \textit{Yahoo! PlaceSpotter} has become unavailable). In addition to the six geoparsers, EUPEG also includes two baseline geoparsers that are developed using Stanford NER and SpaCy NER with a population heuristic, as well as \textit{DBpedia Spotlight}, a general named entity recognition and linking (NERL) tool that can be used as a geoparser. The eight performance metrics provided on EUPEG include standard metrics from information retrieval as well as geographic distance based metrics designed for measuring the quality of the resolved geographic locations. The metrics of \textit{precision}, \textit{recall}, \textit{F1 score} and \textit{accuracy} evaluate the ability of a geoparser to correctly recognize toponyms from texts. Particularly, the metric of \textit{accuracy} is used in situations when only some of the mentioned toponyms are annotated. The metrics of \textit{mean} and \textit{median} measure how far the resolved location is from the ground-truth location (in kilometers). The metric of \textit{accuracy@161} measures the percentage of the resolved locations that are within 161 kilometers (100 miles) of the ground truth. The metric of \textit{AUC} (Area Under the Curve) measures a normalized distance error by calculating the area under a distance error curve. The three neural network based geoparsers from SemEval Task 12 are tested using the datasets from EUPEG. We quantify their performances using the discussed metrics and compare them with those of the other geoparsers hosted on EUPEG.
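For concreteness, the distance-based metrics can be computed from the resolved and ground-truth coordinates in a few lines of Python. The sketch below is a simplified illustration rather than EUPEG's actual implementation; in particular, the log-scaled AUC shown here follows a definition commonly used in geoparsing evaluations and may differ in detail from the one implemented on the platform.
\begin{verbatim}
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance (km) between two (lat, lon) points
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    h = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(h))

def resolution_metrics(pred, truth, threshold_km=161.0):
    # pred and truth are arrays of shape (n, 2) holding the (lat, lon)
    # pairs of the resolved and ground-truth locations, respectively
    err = haversine_km(pred[:, 0], pred[:, 1], truth[:, 0], truth[:, 1])
    max_err = np.pi * EARTH_RADIUS_KM  # roughly the antipodal distance
    auc = (np.trapz(np.log(1 + np.sort(err)))
           / (len(err) * np.log(1 + max_err)))
    return {"mean": float(err.mean()),
            "median": float(np.median(err)),
            "acc@161": float((err <= threshold_km).mean()),
            "AUC": float(auc)}
\end{verbatim}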
\subsection{Results} The experiment results contain the performances of the three state-of-the-art geoparsers on the eight datasets in comparison with the other existing geoparsers. In the following, we present and discuss the experiment results on three datasets, namely \textit{GeoVirus}, \textit{GeoCorpora}, and \textit{Ju2016}. We provide the results on the other five datasets in Appendix A. \subsubsection{Results on GeoVirus.} GeoVirus is a corpus that contains 229 news articles. This dataset was originally developed by Gritta et al. \cite{gritta2018melbourne}, and the news articles were collected during 08/2017 - 09/2017, covering the topics about global disease outbreaks and epidemics. GeoVirus is a relatively easy dataset since most location mentions refer to prominent place instances (e.g., major cities or countries) and the texts from news articles are well formatted. The evaluation results on GeoVirus are summarized in Table \ref{GeoVirusTable}. Since the online service of Yahoo! PlaceSpotter has become unavailable, its performance is not included in the experiment results. \begin{table}[H] \caption{Evaluation results on GeoVirus} \label{GeoVirusTable} \vspace*{-0.2cm} \begin{tabular}{cccccccc} \toprule Geoparser&precision&recall&f\_score&mean &median &acc@161&AUC\\ \midrule \specialcell{DM\_NLP+Pop}&0.917&0.916&\textbf{0.917}&770.337&48.676&0.655&0.378\\ StanfordNER&0.927&0.903&0.915&791.296&48.676&0.655&0.378\\ \specialcell{UniMelb+Pop}&0.882&\textbf{0.936}&0.908&777.234&48.466&0.657&0.379\\ \specialcell{UArizona }&0.887&0.859&0.873&769.810&55.635&0.640&0.386\\ CamCoder&\textbf{0.940}&0.802&0.866&619.397&33.945&0.770&0.336\\ TopoCluster&0.877&0.813&0.844&599.632&63.858&0.673&0.407\\ GeoTxt&0.857&0.726&0.786&487.874&36.255&0.787&0.338\\ CLAVIN&0.913&0.637&0.750&522.176&35.503&0.786&0.320\\ DBpedia &0.792&0.616&0.693&1272.937&122.314&0.533&0.406\\ Edinburgh &0.860&0.559&0.678&\textbf{435.799}&\textbf{33.187}&\textbf{0.807}&\textbf{0.319}\\ SpaCyNER&0.721&0.382&0.499&788.231&40.653&0.698&0.367\\ \bottomrule \end{tabular} \end{table} The geoparsers in the table above are ordered by their F1 scores. The metrics of \textit{precision}, \textit{recall}, and \textit{f\_score} evaluate the performances of a geoparser for toponym recognition. The other four metrics evaluate the performance of a geoparser in resolving a toponym to its correct geographic location. It can be seen that the three top geoparsers from SemEval Task 12 indeed rank very high based on their F1 scores for the task of toponym recognition. However, the off-the-shelf StanfordNER also shows very competitive performance on this simple dataset. In terms of toponym resolution, \textit{The Edinburgh Geoparser} performs the best, although the median error distances of most geoparsers are within 100 km. Since most place mentions refer to their prominent instances, the population heuristic works well. It is worth noting that toponym resolution is performed based on only the toponyms recognized in the previous step. Thus, the metrics of \textit{mean}, \textit{median}, \textit{acc@161}, and \textit{AUC} are measured based on different numbers of toponyms that need to be resolved. \subsubsection{Results on GeoCorpora} GeoCorpora is a social media corpus that contains annotated tweets. GeoCorpora was developed by Wallgr{\"u}n et al. \cite{wallgrun2018geocorpora}, and their original paper reported 2,287 annotated tweets. Due to deletions, only 1,639 tweets could be recovered on EUPEG.
Compared to GeoVirus, GeoCorpora has two unique characteristics. First, the texts in GeoCorpora are short sentences (tweets within 140 characters) which provide only limited contextual information. Second, the content of tweets does not strictly follow grammatical rules and often contains abbreviations. Accordingly, GeoCorpora presents a more difficult dataset than GeoVirus. The evaluation results are summarized in Table \ref{GeoCorporaTable}. \begin{table}[H] \caption{Evaluation results on GeoCorpora} \label{GeoCorporaTable} \vspace*{-0.2cm} \begin{tabular}{cccccccc} \toprule Geoparser&precision&recall&f\_score&mean&median&acc@161&AUC\\ \midrule \specialcell{DM\_NLP+Pop}&0.888&\textbf{0.669}&\textbf{0.763}&1249.865&0.000&0.661&0.288\\ \specialcell{UniMelb+Pop}&0.852&0.661&0.745&1245.992&0.000&0.659&0.289\\ \specialcell{UArizona}&0.892&0.598&0.716&1079.012&0.000&0.668&0.278\\ GeoTxt&\textbf{0.926}&0.521&0.667&714.94&0.000&0.876&0.116\\ StanfordNER&0.899&0.526&0.664&1063.473&0.000&0.676&0.270\\ CamCoder&0.904&0.503&0.647&1024.723&0.000&0.820&0.163\\ TopoCluster&0.882&0.506&0.643&575.225&32.948&0.698&0.361\\ DBpedia &0.865&0.500&0.633&669.105&33.816&0.654&0.352\\ Edinburgh &0.832&0.505&0.628&958.401&0.000&0.848&0.139\\ SpacyNER&0.705&0.467&0.562&982.137&0.000&0.752&0.224\\ CLAVIN&0.907&0.341&0.496&\textbf{373.563}&0.000&\textbf{0.913}&\textbf{0.084}\\ \bottomrule \end{tabular} \end{table} As can be seen, the F1 scores of all three geoparsers drop considerably on this more difficult dataset. However, it is worth emphasizing that DM\_NLP increases the best possible F1 score from about 0.66 (by GeoTxt) to about 0.76, which is a large improvement. For toponym resolution, the population heuristic is still a relatively effective approach on this dataset based on the zero median error distances achieved by the three new geoparsers. However, the population heuristic is not as effective as other models, such as CLAVIN, GeoTxt, Edinburgh, and CamCoder (based on their higher acc@161 and lower AUC). Again, when we interpret the values of \textit{mean}, \textit{median}, \textit{acc@161} and \textit{AUC}, it is necessary to take into account the fact that toponym resolution is evaluated based on the different numbers of recognized toponyms from the previous step. \subsubsection{Results on Ju2016} Ju2016 is a corpus containing short sentences retrieved from various Web pages. This dataset was created by Ju et al. \cite[]{ju2016things}, who developed a script using the Microsoft Bing Search API to automatically retrieve sentences containing highly ambiguous US place names (e.g., ``Washington''). This corpus contains 5,441 entries in total and the average length of each entry is 21 words. This is a very difficult dataset, because the sentences are short (limited contextual information), place names are ambiguous, and upper and lower cases are not differentiated (all words are converted to lower case). Since this is an automatically created dataset, not all place mentions are annotated, and as a result, precision, recall, and F1 score cannot be used as performance metrics. Following previous research \cite[]{gritta2018s}, we use \textit{accuracy}, which measures the percentage of place names that are correctly recognized among all annotated place names. The results on Ju2016 are provided in Table \ref{Ju2016Table}.
\begin{table}[ht] \caption{Evaluation results on Ju2016} \label{Ju2016Table} \vspace*{-0.2cm} \begin{tabular}{cccccc} \toprule Geoparser&accuracy&mean&median&acc@161&AUC\\ \midrule GeoTxt&\textbf{0.463}&2609.734&1616.741&0.032&0.731\\ DBpedia &0.447&3101.087&1417.795&\textbf{0.111}&\textbf{0.698}\\ \specialcell{UniMelb+Pop}&0.379&3301.993&2081.599&0.020&0.758\\ TopoCluster&0.158&4026.270&1547.266&0.036&0.752\\ \specialcell{DM\_NLP+Pop}&0.097&3357.802&2266.718&0.020&0.760\\ \specialcell{UArizona}&0.036&2433.890&1966.937&0.029&0.739\\ StanfordNER&0.01&2027.016&2459.841&0&0.745\\ CamCoder&0.004&\textbf{1559.437}&\textbf{1389.716}&0.042&0.709\\ SpacyNER&0.004&3330.696&3478.187&0&0.818\\ CLAVIN&0&0&0&0&0\\ Edinburgh &0&0&0&0&0\\ \bottomrule \end{tabular} \end{table} As can be seen, many geoparsers show dramatically decreasing performances on this very difficult dataset. Two geoparsers, CLAVIN and Edinburgh, completely fail on this dataset, which does not have word capitalization. Many other geoparsers, including DM\_NLP and UArizona, also largely fail on this dataset due to their use of case-sensitive features, such as separate character-level embeddings for upper and lower case characters. UniMelb is an exception among the three geoparsers, still performing relatively well. Its performance can be attributed to its model design, which does not include case-sensitive character-level embeddings as DM\_NLP and UArizona do. The highest accuracy is achieved by GeoTxt and DBpedia Spotlight, but all geoparsers show very low performances for toponym resolution based on the low acc@161 and high AUC scores. \textit{Ju2016} is an artificially created dataset whose difficulty was deliberately increased for the purpose of testing geoparsers. It is less likely for a real world corpus to contain so many different place instances all sharing the same name (e.g., the many ``Washington''s in this dataset). However, many real world corpora are likely to have irregular case alternations, and a robust geoparser should be able to accommodate such variations. \subsection{Discussion} So are we there yet? Have we achieved sufficient progress on geoparsing to possibly consider the problem as solved? In our view, the answer is ``it depends''. It depends on the characteristics of the textual corpus on which geoparsing is performed. If the dataset contains well-formatted articles and is mostly about prominent places throughout the world (e.g., international news articles), then the answer is probably ``yes'', since the state-of-the-art geoparser, DM\_NLP, can achieve over 0.91 in precision, recall, and F1 score, and a relatively low toponym resolution error using a simple population heuristic. In fact, for such a dataset, one can even use the off-the-shelf StanfordNER combined with a population heuristic, saving the time of training a complex deep neural network model. On the other hand, if the dataset contains mostly short and informally-written sentences with ambiguous place names, then the answer is ``no'', since many of our current geoparsers will largely fail on such a dataset. In addition to handling toponym ambiguity, typos, name variations, case alterations, and limited contexts in short texts, future geoparsing research could also explore a number of directions, which are discussed as follows. \textit{Geoparsing without population information.} As shown in our experiment results, an off-the-shelf NER tool combined with a simple population heuristic can already provide competent performance for geoparsing.
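Concretely, the heuristic amounts to a one-line rule over the candidates returned by a gazetteer lookup. The sketch below is a minimal illustration; the record structure is hypothetical and merely stands in for the fields returned by a GeoNames query.
\begin{verbatim}
def resolve_by_population(candidates):
    # `candidates`: list of gazetteer records for a recognized toponym,
    # assumed to be dicts such as
    # {"name": ..., "lat": ..., "lon": ..., "population": ...}
    if not candidates:
        return None  # toponym not found in the gazetteer
    best = max(candidates, key=lambda c: c.get("population", 0))
    return best["lat"], best["lon"]
\end{verbatim}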
However, there are situations in which population information is not available in the gazetteer, or the toponyms to be parsed do not have population data (e.g., toponyms referring to streets or mountains). Methods that do not rely on population information need to be employed in these situations. For example, Moncla et al. \cite{moncla2014geocoding} leveraged clustering techniques to disambiguate toponyms contained in a hiking description corpus. \textit{Geoparsing fine-grained locations.} A majority of geoparsing research so far has focused on recognizing and resolving toponyms at a geographic level higher than cities, towns, and villages. Sometimes, we may want to geoparse fine-grained locations within a city, such as street names, or the names of parks and monuments. A geoparser based on a large and general gazetteer will not be able to geo-locate such fine-grained locations. In a recent work, Alex et al. adapted the Edinburgh Geoparser to process literary text containing fine-grained place names located in and around the City of Edinburgh, and also released a non-copyrighted gold standard dataset to support research in this direction \cite{alexgeoparsing}. \textit{Geoparsing with gazetteers beyond GeoNames.} The gazetteer plays a critical role in linking recognized toponyms and their geographic locations. However, most existing geoparsers only use GeoNames as their gazetteer. This, to some extent, can be attributed to the fact that many corpora are annotated based on GeoNames, and as a result, geoparsers are also developed based on GeoNames for evaluation convenience. As discussed in the previous point, a geoparser based on GeoNames will not be able to parse fine-grained place names. Besides, such a geoparser cannot process historical texts in the context of digital humanities applications. An ideal geoparser, therefore, should allow users to switch the underlying gazetteer to one beyond GeoNames. \section{Conclusion and Future Work} Geoparsing is an important research problem. This paper presents our work on evaluating the three state-of-the-art geoparsers coming out of the SemEval-2019 Task 12 competition in June 2019. This work is motivated by the outstanding performances of these geoparsers in the competition. As a result, we set out to examine whether we have made enough progress to possibly consider the problem of geoparsing as solved. We systematically tested the top three geoparsers on our benchmarking platform EUPEG. The results suggest that these new geoparsers indeed improve the highest possible scores on multiple datasets, and the problem of geoparsing well-formatted texts referring to prominent place instances could be considered as solved. Meanwhile, some challenges remain, such as geoparsing toponyms from informally-written texts with ambiguous place names. This work can be extended in several directions. As discussed previously, we used a simple population heuristic for the toponym resolution component of the three geoparsers. Therefore, a next step is to develop a general toponym resolution dataset and use it to train the machine learning models described in the papers of DM\_NLP and UniMelb. Second, EUPEG currently does not contain historical corpora. As a result, it cannot be used for testing the performances of geoparsers on historical texts for humanities applications. An extension of EUPEG with historical corpora (e.g., 19th century newspapers and fictional works) can make this platform even more useful for researchers in digital humanities.
A similar idea can be applied to extending EUPEG with non-English corpora. Third, EUPEG currently evaluates only end-to-end geoparsers, and it could be useful to extend EUPEG with the capability of evaluating software tools designed for toponym recognition or resolution only. We have shared the source code of EUPEG, along with the datasets under open licenses, on GitHub at: \url{https://github.com/geoai-lab/EUPEG}. The source code of the three implemented neural network geoparsers tested in this work is also shared on GitHub at: \url{https://github.com/geoai-lab/GeoAI2019Geoparser}. We hope that these resources can help support the future work of the community to further advance geoparsing. \section*{Acknowledgments} The authors would like to thank the four anonymous reviewers for their constructive comments and suggestions. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:INTRODUCTION} Getting information about the camera used to acquire an image provides forensic analysts with important cues to counter digital crime. Occasionally, such information can be extracted from the attached metadata, e.g., the Exif header. However, this may be unavailable or can be easily modified or removed even by non-experts. A more interesting option would be to detect the source directly from the image data. To this purpose, it has been observed that digital images intrinsically contain a Sensor Pattern Noise (SPN) caused by sensor imperfections of the capturing device \cite{Lukas2006,Chen2008}. Just like human fingerprints, SPN uniquely identifies the acquisition source and can be considered as a \textit{camera fingerprint}. In blind scenarios, where only a set of unsourced images is given, such a fingerprint can reveal images that share the same source. We refer to this task as image clustering by source camera. Further investigations, for instance detecting how many cameras a suspect owns, or how likely it is that an image was taken with the suspect's camera, can derive from clustering results. Indexing images by source camera also leads to direct applications in large-scale image retrieval. In order to properly estimate the camera fingerprint, a number of smooth and uniformly bright images should be collected \cite{Chen2008}. Unfortunately, this requirement is usually not fulfilled in a blind scenario, where all images are unlabeled and no assumption can be made about the visual content. Consequently, the SPN can only be coarsely approximated from the noise residual of a single image, which contains not only the pattern noise but also various other noise sources, such as shot noise and noise resulting from lossy compression or other filtering. Different methods have been proposed to enhance SPN estimation and matching, such as averaging \cite{Lukas2006}, PCA+LDA \cite{Li2015}, and spectrum equalization \cite{Lin2016_2}. All those methods, however, require a labeled training set, making them unsuitable for image clustering by source camera in unsupervised scenarios. Existing unsupervised techniques are typically based on the normalized correlation among SPNs, used as a similarity measure, whose degree of reliability is limited by the impact of multiple noise sources. In \cite{Bloy2008}, an image is assigned to a group if the correlation between its noise residual and the relevant centroid exceeds a threshold, approximated by a quadratic model. Markov Random Fields are applied in \cite{Li2010_2,Li2017} to iteratively assign a class label to an image based on the consensus of a small set of SPNs, called the membership committee. This raises another problem of how to choose a good committee, especially on asymmetric datasets where cluster cardinalities are unbalanced. In \cite{Caldelli2010, GarciaVillalba2015, Fahmy2015}, a hierarchical partition, i.e., a binary tree whose leaf nodes are singleton clusters and whose root node is a cluster containing all data points, is built by hierarchical clustering. The major problem of existing hierarchical approaches is their sensitivity to noise and outliers, as a wrong assignment might result in the propagation of errors to higher tiers. Multiclass spectral clustering is applied in \cite{Liu2010} to partition an undirected graph of unsourced images. The algorithm starts with two clusters and stops when it finds a cluster containing only one member.
This stopping condition is heuristic, and has been improved by using normalized cut in \cite{Amerini2014}. Recently, in \cite{Marra2016,Marra2017}, multiple base partitions are obtained on top of multiple binarized undirected graphs and then combined to form a complete clustering solution. Another important problem that has to be taken into account is scalability. In practical applications, the clustering often has to be applied to large databases containing huge numbers of high-resolution images. To the best of our knowledge, only the method in \cite{Lin2017} addresses large-scale clustering of camera fingerprints, where the main idea is to split the dataset into small batches, which can be efficiently loaded into RAM, and to apply a coarse-to-fine clustering. In the present work, we propose a clustering framework that exploits linear dependencies among SPNs in their intrinsic vector subspaces. Such dependencies are encoded in sparse representations, which are solutions of a constrained \textproc{LASSO} problem. Well-known clustering methods can then be applied on top of the sparse representation matrix to obtain the final segmentation. Our framework is scalable despite the complexity of \textproc{LASSO}, thanks to a divide-and-conquer mechanism which allows clustering on large datasets. Experimental tests in medium-scale and large-scale contexts exhibit the advantages of sparse representations for clustering performance. The robustness of our proposed framework is demonstrated against the presence of outliers and double JPEG compression. A similar approach has been preliminarily described in \cite{Phan2017}. Differently from \cite{Phan2017}, here we impose a non-negativity constraint that provides the \textit{interpretability} of solutions, thus allowing the extension to large-scale contexts. The rest of the paper is organized as follows: in Section \ref{sec:PRELIMINARIES} we describe the extraction of SPNs and the sparse subspace clustering method; in Section \ref{sec:PROPOSED_METHOD} we present the proposed optimization problem and its solution, as well as a clustering framework for large-scale datasets. Finally, discussions on computational complexity and extensive experimental analysis are provided in Sections \ref{sec:COMPLEXITY_ANALYSIS} and \ref{sec:EXPERIMENTS}, respectively. \section{Preliminaries} \label{sec:PRELIMINARIES} Throughout the paper, elements of a matrix or a vector are indicated by subscripts, and a single subscript, as in $\mathbf{M}_i$, denotes the $i^\text{th}$ column of $\mathbf{M}$. With $\text{diag}(\mathbf{M})$ we denote the matrix containing only the diagonal of $\mathbf{M}$, all other entries being set to zero. The infinity norm of $\mathbf{y}$ is defined as $\|\mathbf{y}\|_\infty = \underset{i}{\max}\, |\mathbf{y}_i|$, while the $\ell_1$ and $\ell_2$ norms of a vector $\mathbf{y}$ are denoted as $\|\mathbf{y}\|_1$ and $\|\mathbf{y}\|_2$, respectively. For matrices, the $\ell_1$ norm, Frobenius norm and infinity norm are respectively defined as: $\|\mathbf{M}\|_1 = \underset{i,j}{\sum}\, |\mathbf{M}_{ij}|$, $\|\mathbf{M}\|_F = \sqrt{\underset{i,j}{\sum} \, \mathbf{M}_{ij}^2}$, $\|\mathbf{M}\|_\infty = \underset{i,j}{\max} \, |\mathbf{M}_{ij}|$. \subsection{Sensor Pattern Noise} \label{sec:SPN} Given a grayscale image $\mathbf{Y}$, its noise residual $\mathbf{W}$ can be extracted by a denoising filter.
A simplified model of $\mathbf{W}$ can be expressed as follows \cite{Chen2007,Chen2008}: \begin{IEEEeqnarray}{rCl} \mathbf{W} &=& \mathbf{T}\mathbf{Y}\mathbf{K} + \mathbf{\Xi} \text{,} \label{W_estimate} \end{IEEEeqnarray} where $\mathbf{\Xi}$ is a matrix of independent and identically distributed (i.i.d.) Gaussian random variables, $\mathbf{T}$ is an attenuation matrix, and $\mathbf{K}$ is referred to as Photo-Response Non-Uniformity (PRNU). In theory, PRNU can be used to cluster images with respect to the acquisition device. However, in a blind scenario this is difficult due to two main problems. First, the Cram\'er-Rao Lower Bound on the variance of the PRNU estimate indicates that a number of smooth and bright (but not saturated) images are required for each camera \cite{Chen2008}, and this condition is hardly satisfied. Indeed, the only available information is the noise residual $\mathbf{W}$ of each image, which contains not only the PRNU but also the additive noise $\mathbf{\Xi}$, limiting the reliability of traditional similarity measures used in conventional clustering algorithms. Several methods have been proposed for SPN enhancement \cite{Li2010, Lin2016_2}, but it has been confirmed by \cite{Lin2017} that such methods are not suitable for the unsupervised setting. Second, the dimension of camera fingerprints is usually high due to the high resolution of camera sensors; thus, their clustering requires huge computation and memory as the number of images increases. Existing approaches use normalized correlation to measure the similarity between two flattened fingerprints $\mathbf{a},\mathbf{b}$ of dimension $d$: \begin{IEEEeqnarray}{rCl} \rho(\mathbf{a},\mathbf{b}) &=& \frac{\sum_{i=1}^{d} \left( \mathbf{a}_i - \bar{\mathbf{a}} \right)\left( \mathbf{b}_i - \bar{\mathbf{b}} \right)}{\sqrt{\sum_{i=1}^{d}\left( \mathbf{a}_i - \bar{\mathbf{a}} \right)^2} \sqrt{\sum_{i=1}^{d}\left( \mathbf{b}_i - \bar{\mathbf{b}} \right)^2}}\text{,} \label{corr_definition} \end{IEEEeqnarray} where the scalars $\bar{\mathbf{a}}, \bar{\mathbf{b}}$ are the mean values of $\mathbf{a}$ and $\mathbf{b}$, respectively. Without loss of generality, if $\mathbf{a}, \mathbf{b}$ are normalized to have zero mean and unit norm, Eq. (\ref{corr_definition}) simply becomes: \begin{IEEEeqnarray}{rCl} \rho(\mathbf{a},\mathbf{b}) &=& \sum_{i=1}^{d} \mathbf{a}_i \mathbf{b}_i \IEEEnonumber \text{,} \end{IEEEeqnarray} which represents the cosine similarity between $\mathbf{a}$ and $\mathbf{b}$. \subsection{Sparse Representation} Given a set of data points arranged into the columns of a matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$, a data point $\mathbf{y}$ can be expressed as a linear combination of the columns of $\mathbf{X}$. A \emph{sparse} combination reveals the columns lying in the same subspace as $\mathbf{y}$. Sparse Subspace Clustering (SSC) \cite{Elhamifar2013} finds a sparse representation of $\mathbf{y}$ by solving the following optimization problem: \begin{equation} \underset{\mathbf{z}}{\text{minimize}} \quad \|\mathbf{z}\|_1 \quad \text{subject to} \quad \mathbf{X}\mathbf{z} = \mathbf{y} \text{.} \label{eq:naive_ssc} \end{equation} If the columns of $\mathbf{X}$ are contaminated by noise or not well distributed, $\mathbf{X}\mathbf{z} = \mathbf{y}$ might never be reached. Works in \cite{Soltanolkotabi2014,Wang2016} have shown that SSC can deal with noisy data if Eq.
\subsection{Sparse Representation} Given a set of data points arranged into the columns of a matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$, a data point $\mathbf{y}$ can be expressed as a linear combination of the columns of $\mathbf{X}$. A \emph{sparse} combination reveals the columns of the subspace in which $\mathbf{y}$ happens to lie. Sparse Subspace Clustering (SSC) \cite{Elhamifar2013} finds a sparse representation of $\mathbf{y}$ by solving the following optimization problem: \begin{equation} \underset{\mathbf{z}}{\text{minimize}} \quad \|\mathbf{z}\|_1 \quad \text{subject to} \quad \mathbf{X}\mathbf{z} = \mathbf{y} \text{.} \label{eq:naive_ssc} \end{equation} If the columns of $\mathbf{X}$ are contaminated by noise or not well distributed, $\mathbf{X}\mathbf{z} = \mathbf{y}$ might never be reached. The works in \cite{Soltanolkotabi2014,Wang2016} have shown that SSC can deal with noisy data if Eq. (\ref{eq:naive_ssc}) is reformulated as a LASSO problem: \begin{equation} \underset{\mathbf{z}}{\text{minimize}} \quad \|\mathbf{X}\mathbf{z} - \mathbf{y}\|_2^2 + \gamma \|\mathbf{z}\|_1 \text{, } \label{eq:ssc} \end{equation} where $\gamma \geq 0$ is a regularization hyperparameter. Let us show a simple example to interpret the solutions of SSC. As illustrated in Figure \ref{fig:geometric_ssc}, we have $\mathbf{X} = \left[ \mathbf{X}_1, \mathbf{X}_2, \mathbf{X}_3, \mathbf{X}_4 \right] \in \mathbb{R}^{3\times4}$. We assume there are two subspaces, spanned by $\left[\mathbf{X}_1, \mathbf{X}_2\right]$ and $\left[\mathbf{X}_3, \mathbf{X}_4\right]$. Without the regularization term, $\mathbf{y}$ can be expressed as a linear combination of any $3$ columns of $\mathbf{X}$. However, since we would like to assign $\mathbf{y}$ to the closest subspace, spanned by $\left[\mathbf{X}_1, \mathbf{X}_2\right]$, i.e., $\| \hat{\mathbf{y}} - \mathbf{y} \|_2 < \| \tilde{\mathbf{y}} - \mathbf{y} \|_2$, a preferable solution should satisfy: \begin{equation} \hat{\mathbf{y}} = \mathbf{z}_1\mathbf{X}_1 + \mathbf{z}_2\mathbf{X}_2 \IEEEnonumber \text{.} \end{equation} Such a solution can be reached if we encourage the sparseness of $\mathbf{z}$ by penalizing $\|\mathbf{z}\|_0$. Unfortunately, $\ell_0$ optimization is usually intractable due to its non-convex and combinatorial nature. $\|\mathbf{z}\|_1$ can be used instead, being a good approximation of $\|\mathbf{z}\|_0$ \cite{Donoho2004}. Exploiting the $\ell_1$ regularization term as in Eq. (\ref{eq:ssc}) and properly selecting $\gamma$, SSC finds a sparse solution that minimizes the reconstruction error. \begin{figure} \centering \includegraphics[width=.45\textwidth]{geometric_ssc.pdf} \caption{Geometric interpretation of the solution of SSC. } \label{fig:geometric_ssc} \end{figure} \subsection{Motivation} Even if two fingerprints come from the same camera, their normalized correlation is very weak due to the presence of irrelevant pixels. Only a subset of pixels is supposed to be relevant for each camera. Finding the subset of relevant pixels is connected to finding the subspace in which the fingerprints of a camera happen to lie. Noticeable efforts in the direction of dimensionality reduction, such as the fingerprint digest \cite{Goljan2010} or random projection \cite{Valsesia2015}, cannot be applied to clustering problems. In fact, \cite{Valsesia2015} considers a single subspace, while different cameras have different subsets of relevant pixels. On the other hand, the fingerprint digest is composed only of the saturated values of the reference fingerprint, which can be extracted only if the common source of the images is known. SSC has been exploited to find the structure of data in their intrinsic subspaces. Indeed, SSC formulated as a \textproc{LASSO} problem works under broad conditions: theoretical guarantees have been provided when subspaces intersect \cite{Soltanolkotabi2011}, or in the presence of additive noise \cite{Soltanolkotabi2014}, even if the noise level is higher than the signal level \cite{Wang2016}, demonstrating that in practice SSC can reliably recover cluster memberships. Moreover, SPNs are known to be compressible signals: low-dimensional representations of SPNs are found in \cite{Li2015,Rao2017,LI2018,Valsesia2015, Valsesia2015_2}, thus implying the existence of subspaces that can represent them well.
Finally, we observe that the residual $\mathbf{W}$ is a noisy estimate of the true camera fingerprint; thus, the distributions of intra-class and inter-class correlations computed on $\mathbf{W}$ are heavily overlapped, making clustering algorithms less accurate. This challenge raises the need to eliminate inter-class data relationships and obtain an unambiguous underlying data structure. By leveraging sparsity, SSC expresses each data point by a few linear relationships with other data points and extracts an unambiguous representation, which is essential for this problem. \section{Proposed Method} \label{sec:PROPOSED_METHOD} In this section, we present our clustering framework, which includes three steps: \begin{itemize} \item \textit{Fingerprint extraction and normalization} (Section \ref{sec:FINGERPRINT_EXT_NORM}): given a set of color images as input, we extract, refine and normalize the corresponding noise residuals. \item \textit{Proposed optimization} (Section \ref{sec:OPT_SOLVING}): we present a constrained optimization problem to retrieve sparse and interpretable solutions. \item \textit{Extension to large-scale contexts} (Section \ref{sec:LARGE-SCALE CLUSTERING}): we design a divide-and-conquer mechanism enabling large-scale clustering. \end{itemize} \subsection{Fingerprint Extraction and Normalization} \label{sec:FINGERPRINT_EXT_NORM} In this study, we do not make any assumption on image content, but we simply filter out dark images (if any), since dark or textured images are inappropriate for fingerprint estimation. An image is considered dark if more than $75\%$ of its pixels have values in $[0,80]$. A noise residual $\mathbf{W}^c$, $c \in \{\text{red}, \text{green}, \text{blue}\}$, is extracted from each color channel of $\mathbf{Y}$ by exploiting the wavelet-based denoising filter used in \cite{Lukas2006,Chen2007,Chen2008,Goljan2010}, and then converted to a one-channel noise residual. To further suppress non-unique artifacts caused by the color interpolation of demosaicing algorithms and by standard JPEG compression, we subtract from each row its mean and from each column its mean, and transform the obtained noise residual $\mathbf{W}$ into a one-dimensional unit-norm signal. We then obtain a data matrix $\mathbf{X} \in \mathbb{R}^{d \times n}$, $n$ being the number of fingerprints and $d$ the number of pixels.
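As a concrete sketch of this pre-processing, a possible NumPy implementation of the zero-meaning and normalization steps is shown below; the wavelet-based denoising itself is omitted, and the function name is ours.
\begin{verbatim}
import numpy as np

def normalize_residual(W):
    # Subtract the mean of each row and of each column, then flatten
    # the residual into a one-dimensional unit-norm signal.
    W = W - W.mean(axis=1, keepdims=True)
    W = W - W.mean(axis=0, keepdims=True)
    w = W.ravel()
    return w / np.linalg.norm(w)

# Columns of X are the n normalized fingerprints (d pixels each).
residuals = [np.random.default_rng(i).standard_normal((512, 512))
             for i in range(3)]
X = np.stack([normalize_residual(W) for W in residuals], axis=1)
\end{verbatim}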
\subsection{Proposed optimization} \label{sec:OPT_SOLVING} SSC learns a sparse representation $\mathbf{z}$ of $\mathbf{y}$, whose non-zero entries indicate the data points closest to the orthogonal projection of $\mathbf{y}$ onto the relevant subspace. We can \emph{interpret} the magnitude of $\mathbf{z}_i$ as a similarity measure: the closer $\mathbf{X}_i$ is to $\hat{\mathbf{y}}$, the more it contributes to the reconstruction of $\mathbf{y}$, resulting in a larger value of $\mathbf{z}_i$. Back to the example in Figure \ref{fig:geometric_ssc}, denoting by $\alpha_i=\angle \left(\mathbf{X}_i, \hat{\mathbf{y}}\right)$ the angle between $\hat{\mathbf{y}}$ and $\mathbf{X}_i$, it is easy to see that if $\alpha_i < \alpha_j$ then $|\mathbf{z}_i| > |\mathbf{z}_j|$: \begin{equation} \alpha_i < \alpha_j \Leftrightarrow \text{cos}(\alpha_i) >\text{cos}(\alpha_j) \Leftrightarrow \|\mathbf{z}_i\mathbf{X}_i \|_2 > \|\mathbf{z}_j\mathbf{X}_j \|_2 \IEEEnonumber \\ \end{equation} \begin{equation} \Leftrightarrow \left |\mathbf{z}_i \right| > \left| \mathbf{z}_j \right| \text{(since $\|\mathbf{X}_i\|_2 = 1$).} \IEEEnonumber \end{equation} Thus, $\left | \mathbf{z}_i \right |$ is inversely proportional to $\alpha_i$. The $\ell_1$ regularization term encourages the sparseness of $\mathbf{z}$, whose non-zero entries should indicate the data points \emph{closest} to $\hat{\mathbf{y}}$. Due to the nature of the $\ell_1$ norm, however, negative and positive contributions are weighted equally. We provide in Figure \ref{fig:geometric_ssc_counter} an example where the solution $\tilde{\mathbf{z}}=[-\mathbf{z}'_1, -\mathbf{z}'_2]^T$ might be chosen instead of $\mathbf{z}=\left[\mathbf{z}_1, \mathbf{z}_2\right]^T$, as it is possible that $\left \|\tilde{\mathbf{z}}\right\|_1 < \left \|\mathbf{z}\right\|_1$. Note that in this example $\mathbf{z}_i > 0$ and $\mathbf{z}'_i > 0$, thus $\tilde{\mathbf{z}}$ contains negative entries. This is an unexpected solution, since $\mathbf{X}'_1, \mathbf{X}'_2$ lie in the other half-space, i.e., $\angle(\mathbf{X}'_1, \hat{\mathbf{y}}) > \pi/2$ and $\angle(\mathbf{X}'_2, \hat{\mathbf{y}}) > \pi/2$. In order to avoid this solution, all entries $\mathbf{z}_i$ can be constrained to non-negative values. Therefore, the optimal solution reveals data points lying in the subspace closest to $\mathbf{y}$ and correlated to the orthogonal projection of $\mathbf{y}$ on that subspace. Another interesting property is that if $\mathbf{z}_i \geq 0$, $\mathbf{z}_j \geq 0$ and $\alpha_i < \alpha_j$, then $\mathbf{z}_i > \mathbf{z}_j$. This motivates us to impose a non-negativity constraint on the optimization problem. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{geometric_ssc_counter.pdf} \caption{Geometric interpretation of a negative solution of SSC.} \label{fig:geometric_ssc_counter} \end{figure} For each column $\mathbf{X}_i$, we expect to learn a sparse representation $\mathbf{Z}_i$ such that $\mathbf{X}_i = \mathbf{X}\mathbf{Z}_i$. To obtain a meaningful representation, a column should not be expressed by itself, thus requiring the constraint ${\mathbf{Z}_{ii} = 0}$. Accordingly, we have to solve the following optimization problem: \begin{IEEEeqnarray}{rl} \label{proposed_opt} \underset{\mathbf{Z}}{\text{minimize}} &\quad \frac{1}{2}\left \| \mathbf{X}\mathbf{Z}-\mathbf{X} \right \|_F^2 + \gamma \left \|\mathbf{Z} \right\|_1 \IEEEnonumber \\ \text{subject to} &\quad \text{diag}(\mathbf{Z}) = 0, \mathbf{Z} \geq 0 \text{,} \end{IEEEeqnarray} where $\gamma > 0$ is the regularization hyperparameter. Many research efforts have been devoted to solving the unconstrained version of Eq. (\ref{proposed_opt}) \cite{Yang2013}. The $\ell_1$ minimization problem does not have an analytical solution; its solution instead has to be obtained numerically. Among the proposed algorithms, the Augmented Lagrange Multiplier (ALM) method generally converges faster over a wide range of data \cite{Yang2013}.
In this paper, we adopt the Alternating Direction Method of Multipliers (ADMM) \cite{Boyd2011} to solve the problem in Eq. (\ref{proposed_opt}), which couples the fast convergence of ALM with a decomposability property that is fundamental for distributed implementations in large-scale problems. ADMM introduces a complementary variable $\mathbf{V}$ and reformulates the unconstrained version of Eq. (\ref{proposed_opt}) into the following equivalent form: \begin{IEEEeqnarray}{rl} \label{eq:lasso_admm} \underset{\mathbf{Z},\mathbf{V}}{\text{minimize}} &\quad \gamma\| \mathbf{V} \|_1 + \frac{1}{2}\| \mathbf{X}\mathbf{Z}-\mathbf{X}\|^2_F \IEEEnonumber \\ \text{subject to} &\quad \mathbf{Z}=\mathbf{V} \text{.} \end{IEEEeqnarray} Here, decomposability means that $\mathbf{Z}$ and $\mathbf{V}$ can be updated separately, possibly on a distributed system; thus, the constraints in Eq. (\ref{proposed_opt}) can be imposed on $\mathbf{V}$. They are enforced during the $\mathbf{V}$ update by Euclidean projections, which is much simpler than in ALM. The augmented Lagrangian form of Eq. (\ref{eq:lasso_admm}) is \begin{IEEEeqnarray}{rl} \mathcal{L}_\eta (\mathbf{Z},\mathbf{V},\mathbf{\Lambda}) {}={}& \gamma \|\mathbf{V}\|_1 + \frac{1}{2}\| \mathbf{X}\mathbf{Z}-\mathbf{X}\|^2_F + \langle \mathbf{\Lambda}, \mathbf{Z}-\mathbf{V} \rangle \IEEEnonumber\\ &+\frac{\eta}{2} \|\mathbf{Z}-\mathbf{V}\|_F^2 \text{,}\IEEEnonumber \end{IEEEeqnarray} where $\mathbf{\Lambda} \in \mathbb{R}^{n \times n}$ is the Lagrangian multiplier and $\eta>0$ is the augmented Lagrangian hyperparameter. ADMM iteratively optimizes $\mathbf{Z,V}$ in an alternate fashion, by keeping one variable fixed and updating the others: \begin{IEEEeqnarray}{rCl} \mathbf{Z}^{t+1} &=& \arg \underset{\mathbf{Z}}{\min}\; \mathcal{L}_\eta \left(\mathbf{Z}, \mathbf{V}^t, \mathbf{\Lambda}^t \right) \text{,} \label{eq:z_update} \IEEEnonumber\\ \mathbf{V}^{t+1} &=& \arg \underset{\mathbf{V}}{\min}\; \mathcal{L}_\eta \left(\mathbf{Z}^{t+1},\mathbf{V},\mathbf{\Lambda}^t \right) \text{,} \IEEEnonumber \\ \mathbf{\Lambda}^{t+1} &=& \mathbf{\Lambda}^t + \eta\left(\mathbf{Z}^{t+1}-\mathbf{V}^{t+1}\right) \text{.}\IEEEnonumber \end{IEEEeqnarray} It is straightforward to show that $\mathbf{Z}$ can be updated by solving the linear system \begin{equation} (\mathbf{X}^T\mathbf{X} + \eta\mathbf{I})\mathbf{Z} = (\mathbf{X}^T\mathbf{X} - \mathbf{\Lambda} + \eta\mathbf{V}) \IEEEnonumber \text{,} \end{equation} using the Cholesky decomposition of $\mathbf{X}^T\mathbf{X} + \eta\mathbf{I}$. On the other hand, the solution for $\mathbf{V}$ at each iteration is obtained through the soft-thresholding operator $S$, defined as: \begin{IEEEeqnarray}{rCl} S_\nu \left( a \right) &=& \begin{cases} a - \nu \quad & a > \nu \IEEEnonumber \\ 0 & |a| \leq \nu \IEEEnonumber \\ a + \nu \quad & a < -\nu \text{.} \IEEEnonumber \end{cases} \end{IEEEeqnarray} Details of its update are provided in Appendix \ref{app:v_update}.
After the $\mathbf{V}$ update, the two following operators are applied to project $\mathbf{V}$ onto the feasible set of solutions: \begin{IEEEeqnarray}{rlll} \Pi_{D}(\mathbf{M}_{ij}) {}={}& \begin{cases} \mathbf{M}_{ij} & i \neq j \\ 0 & i = j \text{,} \end{cases} \label{Pi_D} \\ \Pi_{N}(\mathbf{M}_{ij}) {}={}& \begin{cases} \mathbf{M}_{ij} & \mathbf{M}_{ij} \geq 0 \\ 0 & \mathbf{M}_{ij} < 0 \text{.} \end{cases} \label{Pi_N} \end{IEEEeqnarray} The optimization procedure is reported in Algorithm \ref{alg:lasso_solving_admm}: it converges efficiently to an acceptable solution as $\|\mathbf{Z} -\mathbf{V} \|_{\infty} \rightarrow 0$. \begin{algorithm} \caption {Constrained LASSO} \label{alg:lasso_solving_admm} \begin{algorithmic}[0] \Procedure{Constrained\_Lasso}{$\mathbf{X}, \gamma, \eta$} \State \textbf{initialize}: $\mathbf{Z} \gets 0, \mathbf{V} \gets 0, \mathbf{\Lambda} \gets 0, \varepsilon \gets 10^{-4}$ \While {convergence condition is not satisfied} \State Fix the others, update $\mathbf{Z}$ \begin{flalign} \hspace{3.5em}&\mathbf{Z} {} \gets {} (\mathbf{X}^T\mathbf{X}+\eta \mathbf{I})^{-1}(\mathbf{X}^T\mathbf{X} - \mathbf{\Lambda} + \eta\mathbf{V})& \IEEEnonumber \end{flalign} \State Fix the others, update $\mathbf{V}$ \begin{flalign} \hspace{3.5em}&\mathbf{V}_{ij} {} \gets {} S_{\frac{\gamma}{\eta}}\left( \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} \right) &\IEEEnonumber \end{flalign} \vspace{-1.5em} \begin{flalign} \hspace{3.5em}&\mathbf{V}_{ij} {} \gets {} \Pi_{D}(\Pi_{N}(\mathbf{V}_{ij})) &\IEEEnonumber \end{flalign} \State Fix the others, update $\mathbf{\Lambda}$: $\mathbf{\Lambda} \gets \mathbf{\Lambda} + \eta\left(\mathbf{Z}-\mathbf{V}\right)$ \State Check convergence condition: $\|\mathbf{Z} - \mathbf{V}\|_{\infty} < \varepsilon$ \EndWhile \State \Return $\mathbf{Z}$ \EndProcedure \end{algorithmic} \end{algorithm}
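To make the procedure concrete, the following is a minimal NumPy/SciPy sketch of Algorithm \ref{alg:lasso_solving_admm}; it is an illustrative re-implementation under the stated assumptions (unit-norm columns of $\mathbf{X}$), not the optimized code used in our experiments. Note that the Cholesky factor is computed once and reused across iterations, which is where most of the cubic cost discussed in Section \ref{sec:COMPLEXITY_ANALYSIS} comes from.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def constrained_lasso(X, gamma, eta=1.0, eps=1e-4, max_iter=1000):
    # ADMM for: min 0.5 * ||XZ - X||_F^2 + gamma * ||Z||_1
    #           s.t. diag(Z) = 0, Z >= 0
    n = X.shape[1]
    G = X.T @ X                               # Gram matrix
    chol = cho_factor(G + eta * np.eye(n))    # factorize once
    Z = np.zeros((n, n))
    V = np.zeros((n, n))
    Lam = np.zeros((n, n))
    for _ in range(max_iter):
        # Z-update: solve (X^T X + eta I) Z = X^T X - Lam + eta V
        Z = cho_solve(chol, G - Lam + eta * V)
        # V-update: soft thresholding, then projections Pi_N and Pi_D
        A = Z + Lam / eta
        V = np.sign(A) * np.maximum(np.abs(A) - gamma / eta, 0.0)
        V = np.maximum(V, 0.0)                # Pi_N: non-negativity
        np.fill_diagonal(V, 0.0)              # Pi_D: zero diagonal
        # Dual update and convergence check
        Lam = Lam + eta * (Z - V)
        if np.max(np.abs(Z - V)) < eps:
            break
    return Z
\end{verbatim}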
\begin{figure}[t] \centering \includegraphics[width=\linewidth]{sparse_vs_dense.pdf} \caption{Visual comparison of sparse representation and dense representation (obtained by normalized correlation): (a) synthetic noise sample, (b) realistic noise sample, (c) sparse representation matrix of synthetic noise, (d) dense representation matrix of synthetic noise, (e) sparse representation matrix of realistic noise, (f) dense representation matrix of realistic noise.} \label{fig:sparse-vs-dense} \end{figure} To visually compare the sparse representation and the normalized correlation matrix, we conduct one analysis on synthetic noise and another on realistic noise. Synthetic noise is extracted from images generated by the simple imaging model described in \cite{Chen2008} for smooth images (without the attenuation factor $\mathbf{T}$), that is, $\mathbf{Y} = \mathbf{Y}^{(0)} + \mathbf{Y}^{(0)}\mathbf{K} + \mathbf{\Theta}$. The clean image $\mathbf{Y}^{(0)}$ is uniform, having a pixel value of $0.9$ (relatively bright). $\mathbf{K}_{ij}$ and $\mathbf{\Theta}_{ij}$ are reasonably assumed to be white Gaussian noise. As the signal $\mathbf{K}$ is generally weaker than $\mathbf{\Theta}$, the variance of $\mathbf{K}_{ij}$ is selected as $0.001$ and the variance of $\mathbf{\Theta}_{ij}$ as $0.1$ (for pixel values in $[0,1]$). We simulate the situation of $5$ cameras corresponding to $5$ different $\mathbf{K}$ patterns, considering $100$ images for each camera, thus resulting in $500$ different $\mathbf{\Theta}$ patterns. After that, we apply the same wavelet-based denoising filter to extract the synthetic noise. A sample of extracted synthetic noise is depicted in Figure \ref{fig:sparse-vs-dense} (a). For the realistic setting, we select $5$ cameras from the Vision dataset \cite{Shullani2017}, with $100$ images for each camera, and apply the same denoising procedure. In Figure \ref{fig:sparse-vs-dense} (b) an example of realistic noise is shown. We intentionally group noise residuals of the same camera so that the representation matrix is easily observable. We show the sparse representation matrix of synthetic noise in Figure \ref{fig:sparse-vs-dense} (c), and of realistic noise in Figure \ref{fig:sparse-vs-dense} (e). The dense representation matrices in Figures \ref{fig:sparse-vs-dense} (d) and \ref{fig:sparse-vs-dense} (f) are obtained by computing pair-wise normalized correlations for synthetic noise and for realistic noise, respectively. Noticeably, solving the problem in Eq. (\ref{proposed_opt}) yields a meaningful representation in which inter-class relations are effectively removed, revealing a clearer block-diagonal structure compared to normalized correlation. The sparse representation matrix captures asymmetric relationships among data points, i.e., $\mathbf{Z}_{ij} \neq \mathbf{Z}_{ji}$. For our clustering purpose, we build a weighted undirected graph $\mathbf{G}$ from $\mathbf{Z}$ as $\mathbf{G}=(\mathbf{Z}+\mathbf{Z}^T)/2$. To obtain the final segmentation, we apply the spectral clustering described in \cite{Ng2001} to partition $\mathbf{G}$ into $\kappa$ connected components or clusters. In practice, $\kappa$ can be inferred from the number of small eigenvalues. Therefore, we adopt an approach based on the \emph{eigengap heuristic} \cite{vonLuxburg2007} to infer the number of clusters, similarly to \cite{Phan2017}, as sketched below. In summary, the proposed algorithm learns sparse representations of the camera fingerprints. To avoid any confusion with the ordinary SSC in \cite{Phan2017}, we denote Sparse Subspace Clustering with Non-negativity Constraint as SSC-NC. This approach provides a good instrument to discover structures in high-dimensional data, but shows a major drawback in terms of scalability, since all data must be loaded in RAM. In Section \ref{sec:COMPLEXITY_ANALYSIS} the computational complexity of Algorithm \ref{alg:lasso_solving_admm} will be shown to be on the order of $n^3$, $n$ being the number of fingerprints. Empirically, this is acceptable only for datasets with $n \leq 6000$.
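For completeness, the clustering step just described (symmetrization, eigengap-based estimate of $\kappa$, spectral clustering) can be sketched as follows; the simple largest-gap rule below is only indicative of the eigengap criterion we adopt, and the function name is ours.
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import SpectralClustering

def cluster_from_sparse_codes(Z, max_k=30):
    # Symmetrize the sparse representation into an affinity graph.
    G = (Z + Z.T) / 2.0
    # Eigengap heuristic on the normalized graph Laplacian:
    # kappa = position of the largest gap among the smallest eigenvalues.
    L = laplacian(G, normed=True)
    evals = np.sort(np.linalg.eigvalsh(L))
    kappa = int(np.argmax(np.diff(evals[:max_k]))) + 1
    labels = SpectralClustering(n_clusters=kappa,
                                affinity='precomputed').fit_predict(G)
    return labels, kappa
\end{verbatim}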
\subsection{Large-scale sparse subspace clustering} \label{sec:LARGE-SCALE CLUSTERING} In this section we extend our methodology to cluster camera fingerprints in large-scale contexts, referring to this extension as large-scale SSC (LS-SSC). First, we address the memory issue using a \emph{divide-and-conquer} strategy, so that compact clusters can be discovered on small data batches. This process is followed by data recycling, to increase the chance of discovering hidden clusters. Finally, we employ \emph{merging} and \emph{attraction} phases to finalize the clustering results. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{ls_schema.pdf} \caption{Schema of the proposed LS-SSC.} \label{fig:ls_schema} \end{figure} \subsubsection{Splitting, Clustering and Recycling} The baseline strategy of our large-scale clustering is the divide-and-conquer paradigm, which breaks an intractable problem into several smaller tractable problems. We randomly split the set of all fingerprints $\mathcal{X}$ into $B$ batches of equal size, $\mathcal{X} = \left\{\mathcal{X}^l\right\}_{l=1,\ldots,B}$, where $B$ is initially set to $\lceil{\frac{n}{p}}\rceil$ and $p$ is the batch size. Only one data batch at a time is loaded on RAM. We then apply Algorithm \ref{alg:lasso_solving_admm} to the data batch to learn sparse representations among the fingerprints. We hereby refer to \emph{cluster purity} as the quality of a cluster: a pure cluster should contain only fingerprints of the same camera. The main purpose of this phase is to extract small-size but pure clusters that can later be merged to form larger clusters. As a result of the splitting, a fingerprint might not be well reconstructed by fingerprints from the same camera alone. To minimize the reconstruction error, the algorithm might select fingerprints from multiple cameras. Such representations are considered \emph{outliers}, similarly to \cite{You2017}. Let us now consider a sparse representation matrix as a directed graph: outliers have connections to both outliers and inliers, while inliers have connections to inliers only. If we perform a random walk on the graph, the probability of ending at inliers is therefore higher than that of ending at outliers. We apply the random walk algorithm described in \cite{You2017} with $1000$ steps to acquire the state probabilities, model such probabilities as a normal distribution, and keep $80\%$ of the distribution as inliers, thus classifying the rest as outliers. To guarantee the purity of clusters, we avoid spectral clustering and instead attempt to localize dense regions using the interpretability property of our sparse representation. Accordingly, large values indicate the \emph{closest} fingerprints. Since our target is to discover small-size but pure clusters, we can further simplify the graph by retaining only the $K$ largest entries in each column of the sparse representation matrix and setting the other entries to zero. Each remaining fingerprint is located in a region of $K$ nearest neighbors, and fingerprints in the same cluster should have common neighbors, forming a dense region. After that, we apply \textproc{DBSCAN} \cite{Ester1996} to discover dense regions. This classical clustering technique is computationally feasible for large-scale datasets and does not require the number of clusters to be known. Two parameters need to be provided as input to \textproc{DBSCAN}: the radius $\epsilon$ and the minimum number of neighbors $MinPts$. The radius $\epsilon$ should be selected on the basis of the $K$ largest values in each column of the sparse representation matrix, while $MinPts$ must be smaller than or equal to $K$. If $\epsilon$ is too small, many small clusters result; conversely, if it is too large, a very limited number of clusters is discovered, complicating the recycling process. On the other hand, if $MinPts > K$, no clusters are discovered by \textproc{DBSCAN}. We empirically found that setting $MinPts = K$ and $\epsilon$ equal to the mean of the non-zero entries allows discovering pure clusters. After clustering, we obtain the sets of inliers and outliers, where inliers are used for the merging phase and outliers are fed to the recycling process. The aim of recycling is to combine outliers from each batch and feed them back to the clustering process, thus increasing the chance to discover hidden clusters. For that reason, clustering and recycling can be seen as an iterative procedure, as outlined in Algorithm \ref{alg:Splitting_Clustering_Recycling}.
\begin{algorithm} \caption {Splitting, clustering, recycling} \label{alg:Splitting_Clustering_Recycling} \begin{algorithmic}[0] \Procedure{Splitting\_Clustering\_Recycling}{} \scriptsize \State \textbf{input}: $\mathcal{X}, p, R, K, \gamma, \eta$ \Comment{$\mathcal{X}$: dataset, $p$: batch size, $R$: number of recycling steps, $K$: number of nearest neighbors} \State \textbf{output}: $\mathcal{X}_{\text{in}}$, $\mathcal{X}_{\text{out}}$ \Comment{set of clustered and unclustered fingerprints} \State $\mathcal{X}_\text{in} \gets \emptyset$ \State $B \gets \lceil{\frac{n}{p}}\rceil$ \State Split $\mathcal{X}$ into $\left\{ \mathcal{X}^l \right\}_{l = 1,\ldots,B}$ \State $\mathcal{X}^l_\text{out} \gets \emptyset$ \Comment{$l = 1,\ldots,B$} \For{$l=1 \to B$} \State $\mathcal{\tilde{X}}^l_\text{in}$, $\mathcal{\tilde{X}}^l_\text{out} \gets $ \textproc{Partition}($\mathcal{X}^l$, $K, \gamma, \eta$) \State Append $\mathcal{\tilde{X}}^l_\text{in}$ to $\mathcal{X}_\text{in}$ and append $\mathcal{\tilde{X}}^l_\text{out}$ to $\mathcal{X}^l_\text{out}$ \EndFor \State $t \gets B$, $\tilde{B} \gets B$ \Repeat \State $\mathcal{X}^t_\text{out} \gets \emptyset$, $\mathcal{X}^t \gets \emptyset$ \For{$l = 1 \to \tilde{B}$} \State Pop out randomly $s^l = \frac{\left | \mathcal{X}_\text{out}^l \right | \times p} {\sum_{i=1}^{\tilde{B}} \left | \mathcal{X}^i_\text{out} \right |}$ fingerprints from $\mathcal{X}^l_\text{out}$ \State Append $s^l$ fingerprints to $\mathcal{X}^t $ \EndFor \State $\mathcal{\tilde{X}}^t_\text{in}$, $\mathcal{\tilde{X}}^t_\text{out} \gets $ \textproc{Partition}($\mathcal{X}^t$, $K, \gamma, \eta$) \State Append $\mathcal{\tilde{X}}^t_\text{in}$ to $\mathcal{X}_\text{in}$ and append $\mathcal{\tilde{X}}^t_\text{out}$ to $\mathcal{X}^t_\text{out}$ \State $t \gets t + 1$, $\tilde{B} \gets \tilde{B} + 1$ \Until{$t \geq B+R$} \State $\mathcal{X}_\text{out} \gets \left\{ \mathcal{X}^l_\text{out} \right\}_{l=1,\ldots,B+R}$ \normalsize \EndProcedure % \Procedure{Partition}{} \scriptsize \State \textbf{input}: $\mathcal{X}, K, \gamma, \eta$ \Comment{$\mathcal{X}$: dataset, $K$: number of nearest neighbors} \State \textbf{output}: $\mathcal{X}_{\text{in}}$, $\mathcal{X}_{\text{out}}$ \Comment{set of clustered and unclustered fingerprints} \State Load fingerprints in $\mathcal{X}$ to $\mathbf{X}$ \Comment{$\mathbf{X}$: matrix of fingerprints} \State $\mathbf{Z} \gets$ \textproc{Constrained\_Lasso($\mathbf{X}, \gamma, \eta$)} \State Remove outliers, obtain $\tilde{\mathbf{Z}}$. Append outliers to $\mathcal{X}_\text{out}$ \State Keep only $K$ largest entries on each column of $\tilde{\mathbf{Z}}$, obtain $\tilde{\mathbf{Z}}_\text{KNN}$ \State Apply DBSCAN to discover clusters \State Append inliers to $\mathcal{X}_\text{in}$ \State Append outliers to $\mathcal{X}_\text{out}$ \normalsize \EndProcedure \end{algorithmic} \end{algorithm} \subsubsection{Merging} In the first phase, by increasing \emph{K} we obtain larger clusters at the expense of a lower cluster purity; conversely, by decreasing \emph{K} we yield small-size but pure clusters. The latter is preferable, as small-size clusters (subclusters) can be merged efficiently to form larger subclusters. Let $\mathbf{W}^A$ and $\mathbf{W}^B$ be two noisy fingerprints of dimension $d$, i.e., two singleton subclusters, reasonably assumed to follow a normal distribution since the denoising filter extracts stationary Gaussian noise in the wavelet domain (see Appendix A of \cite{Lukas2006}).
$\mathbf{K}^A$ and $\mathbf{K}^B$ are the noise-free fingerprints residing in $\mathbf{W}^A$ and $\mathbf{W}^B$. The merging problem can be formulated as a classical hypothesis test: \begin{IEEEeqnarray}{rCl} \text{H}_0&:& \mathbf{K}^A \neq \mathbf{K}^B \text{,} \IEEEnonumber\\ \text{H}_1&:& \mathbf{K}^A = \mathbf{K}^B = \mathbf{K}. \IEEEnonumber \end{IEEEeqnarray} Under the null hypothesis, $\rho \left( \mathbf{W}^A, \mathbf{W}^B \right) \sim \mathcal{N}(0,\frac{1}{d})$ according to the Central Limit Theorem (CLT). Two subclusters can be merged if their normalized correlation exceeds $\frac{1}{\sqrt{d}} Q^{-1}(P_{FA})$, where $Q(t)$ is the probability that a standard normal variable is larger than $t$ and $P_{FA}$ is the expected false alarm rate \cite{Goljan2010}. More generally, if each cluster contains more than one fingerprint, $\mathbf{W}^A$ and $\mathbf{W}^B$ represent the respective subcluster centroids. Under the alternative hypothesis, the correlation between $\mathbf{W}^A$ and $\mathbf{W}^B$ increases as the cardinality of each subcluster increases, since random noise is effectively suppressed by averaging. Obviously, the merging phase is more reliable if one knows not only the null distribution but also the alternative distribution. To determine the alternative distribution, \cite{ROCFridrich,Lin2017} established a parametric model with some statistical assumptions and determined the model parameters. For instance, the true fingerprints are assumed to be additive noise \cite{ROCFridrich,Lin2017}, and the white Gaussian noise of a camera is assumed to always have the same variance \cite{ROCFridrich}. Nevertheless, if those assumptions are not guaranteed, and usually they are not, parameter estimation becomes extremely difficult. We instead recast the merging problem as finding a threshold value $\tau$ that is able to exclude the null hypothesis and to adapt to the variation of the alternative hypothesis, based on real data. This is achieved by taking into account the cardinality of and the intra-class correlation within each subcluster. Let $\mathbf{X}^A$ and $\mathbf{X}^B$ be the two matrices containing the $n_A$ and $n_B$ fingerprints of each subcluster, and $\rho_A$ and $\rho_B$ be the intra-class correlations within these subclusters. We learn the threshold adaptiveness via linear regression: \begin{IEEEeqnarray}{rCl} \mathcal{R} \left(n_A, n_B, \rho_A, \rho_B \right) &=& \left[n_A, n_B, \rho_A, \rho_B \right]\mathbf{w} + b \IEEEnonumber \text{,} \end{IEEEeqnarray} where $\mathbf{w} \in \mathbb{R}^{4\times1}$ and $b \in \mathbb{R}$ are the weights and bias, respectively. From real data, we calculate $\rho_A$, $\rho_B$ and the regression target $\mathcal{R}(\cdot)$ as follows: \begin{IEEEeqnarray}{rCl} \rho_A &=& \frac{1}{n_A(n_A-1)} \sum_{i=1}^{n_A} \sum_{j=1,j\neq i}^{n_A}\rho(\mathbf{X}^A_i, \mathbf{X}^A_j) \IEEEnonumber \text{,}\\ \rho_B &=& \frac{1}{n_B(n_B-1)} \sum_{i=1}^{n_B} \sum_{j=1,j\neq i}^{n_B}\rho(\mathbf{X}^B_i, \mathbf{X}^B_j) \IEEEnonumber \text{,}\\ \mathcal{R}(\cdot) &=& \frac{\rho(\bar{\mathbf{X}}^A, \bar{\mathbf{X}}^B)}{2} \text{,}\IEEEnonumber \end{IEEEeqnarray} where $\bar{\mathbf{X}}^A, \bar{\mathbf{X}}^B$ are the two subcluster centroids, i.e., the means of the columns of $\mathbf{X}^A$ and $\mathbf{X}^B$, respectively. The estimate of $\mathcal{R}(\cdot)$ is interpreted as the central value between the means of the null and alternative distributions.
The final regressor learned from real data (the development set described in Section \ref{sec:hyperparameter_selection}) has the form: \begin{IEEEeqnarray}{rCl} \mathcal{R} \left(n_A, n_B, \rho_A, \rho_B \right) &=& 0.0016 \, n_A + 0.0016 \, n_B + \IEEEnonumber \\ & &2.2474 \, \rho_A + 2.2474 \, \rho_B - 0.0474 \text{.} \IEEEnonumber \end{IEEEeqnarray} Note that the regressor is symmetric with respect to the cluster roles, i.e., $\mathcal{R}(n_A, n_B, \rho_A, \rho_B) = \mathcal{R}(n_B, n_A, \rho_B, \rho_A)$. This is achieved by augmenting the training data with the roles of the two clusters exchanged. The threshold $\tau$ is finally calculated as: \begin{IEEEeqnarray}{rCl} \tau &=& \max \left\{ \frac{1}{\sqrt{d}} Q^{-1}(P_{FA}), \mathcal{R} \left(\cdot \right) \right\} \IEEEnonumber \text{,} \end{IEEEeqnarray} where $P_{FA}$ is chosen as $0.001$ ($0.1\%$ false alarm rate). The merging phase is conducted as an iterative procedure, which at each step selects the pair of subclusters having the maximum centroid correlation and compares this correlation to $\tau$. If the correlation is larger than $\tau$, the two subclusters are merged and the relevant information is updated. The algorithm stops when no more pairs of subclusters satisfying the merging condition exist. In practical cases, a good regressor might not be linear; however, for $n_A, n_B$ within a reasonably small range, $\tau$ can be fitted by a linear function. Therefore, to calculate a reliable $\tau$, we set $n_A=\min\left\{\tilde{n}_A,50\right\}$ and $n_B = \min\left\{\tilde{n}_B, 50\right\}$, where $\tilde{n}_A,\tilde{n}_B$ are the actual cluster cardinalities, and calculate $\rho_A$, $\rho_B$ using correspondingly bounded sets of fingerprints. The quantity $50$ is suggested as a minimal cardinality for fingerprint estimation \cite{Lukas2005,Lukas2006}. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{threshold_comparison_2.pdf} \caption{Comparison of the proposed threshold and Lin's threshold \cite{Lin2017} under two cameras: Kodak M1063 and Nikon CoolPix S710. Better viewed in color.} \label{fig:threshold_comparison_2} \end{figure} To demonstrate the effectiveness of the proposed threshold, we compare it with Lin's threshold \cite{Lin2017}, which was previously shown to be superior to the thresholds in \cite{Bloy2008} and \cite{Eklann2012}. We select two cameras, namely a Kodak M1063 and a Nikon CoolPix S710, from the Dresden database \cite{Gloe2010}. We randomly split the images of one camera into two parts to simulate two same-camera subclusters. Images from different cameras are used to create cross-camera subclusters. The process is replicated $2000$ times, and intra-class and inter-class correlations are collected. Figure \ref{fig:threshold_comparison_2} shows how the proposed threshold and the threshold in \cite{Lin2017} (Lin's threshold) separate the null and alternative distributions. When the cardinality of the same-camera subclusters increases, the alternative distribution shifts towards the right, while the null distribution remains centered at $0$. The proposed threshold consistently splits the two distributions, while Lin's threshold tends to be unnecessarily confident when the two distributions are close. An interesting behavior of the proposed threshold and of Lin's threshold is their adaptiveness to this distribution shift.
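As an illustration, the merging test for a pair of subclusters can be sketched as follows, using the regressor coefficients reported above; the function names are ours, and the intra-class correlations are computed on (at most $50$) zero-mean, unit-norm fingerprints per subcluster, as described.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def intra_class_correlation(X):
    # Mean pairwise correlation among the zero-mean, unit-norm
    # columns of X (assumption: fingerprints are pre-normalized).
    n = X.shape[1]
    if n < 2:
        return 0.0
    C = X.T @ X
    return float((C.sum() - np.trace(C)) / (n * (n - 1)))

def merging_threshold(n_A, n_B, rho_A, rho_B, d, p_fa=0.001):
    # tau = max( Q^{-1}(P_FA)/sqrt(d), R(n_A, n_B, rho_A, rho_B) )
    n_A, n_B = min(n_A, 50), min(n_B, 50)   # bounded cardinalities
    R = (0.0016 * n_A + 0.0016 * n_B
         + 2.2474 * rho_A + 2.2474 * rho_B - 0.0474)
    return max(norm.isf(p_fa) / np.sqrt(d), R)

def should_merge(XA, XB, p_fa=0.001):
    cA, cB = XA.mean(axis=1), XB.mean(axis=1)   # subcluster centroids
    cA, cB = cA / np.linalg.norm(cA), cB / np.linalg.norm(cB)
    tau = merging_threshold(XA.shape[1], XB.shape[1],
                            intra_class_correlation(XA),
                            intra_class_correlation(XB),
                            d=XA.shape[0], p_fa=p_fa)
    return float(cA @ cB) > tau
\end{verbatim}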
\subsubsection{Attraction} In the attraction phase, we assign the remaining fingerprints to the available clusters. Let us denote by $\mathbf{C} = \left[ \bar{\mathbf{C}}_1, \bar{\mathbf{C}}_2, \ldots, \bar{\mathbf{C}}_L \right] \in \mathbb{R}^{d \times L}$ the matrix containing the centroids of the $L$ final clusters, and by $\mathbf{X}^\text{out} \in \mathbb{R}^{d \times U}$ the data matrix containing the $U$ unclustered fingerprints. Since the quality of camera fingerprints is generally non-homogeneous, cluster assignment should be performed for high-quality fingerprints first, in order to minimize assignment errors. The cluster membership $l, 1 \leq l \leq L$ of a fingerprint $\mathbf{X}_i^\text{out}$, $1 \leq i \leq U$, is obtained iteratively by finding at each step the pair $l$ and $i$ such that $\rho(\mathbf{X}_i^\text{out}, \bar{\mathbf{C}}_l)$ is maximal and greater than $\frac{1}{\sqrt{d}} Q^{-1}(P_{FA})$, the threshold used to exclude the null hypothesis in the merging phase. Once attracted, $\mathbf{X}^\text{out}_i$ is removed from the pool; otherwise, it is labeled as \emph{unclustered}. Since the remaining fingerprints to be attracted have been classified as outliers after recycling, we can expect them to be low-quality samples. Therefore, to reduce the false alarm rate, a cluster centroid is updated only when its cardinality does not exceed $50$, consistently with the empirical value used in the merging phase. Eventually, we obtain the cluster memberships of the camera fingerprints in a large-scale database, together with a number of unclustered fingerprints. \section{Computational Complexity} \label{sec:COMPLEXITY_ANALYSIS} In this section we discuss the time complexity of our proposed SSC-NC and LS-SSC, and of two recent works: correlation clustering with consensus (CCC) \cite{Marra2017} and Lin's large-scale method (Lin-LS) \cite{Lin2017}. \textbf{SSC-NC}. SSC-NC is composed of \textproc{Constrained\_Lasso} and spectral clustering. \textproc{Constrained\_Lasso} consists of a Cholesky decomposition, linear equation solving and soft thresholding. In the worst case, the Cholesky decomposition requires $n^3/3$ flops. Solving the linear equations requires $2n^2$ flops for the forward and backward substitutions. The soft-thresholding operation on $n^2$ variables requires $n^2$ computations. Letting $T_1$ be the bound on the number of iterations, the total cost of \textproc{Constrained\_Lasso} is $\mathcal{O} \left(n^3/3 + 3T_1n^2 \right)$. Spectral clustering consists of at most $\mathcal{O}\left(n^3 \right)$ computations for the eigendecomposition and $\mathcal{O}\left(T_2\kappa^2 n \right)$ for \emph{K}-means clustering on the $n$ $\kappa$-dimensional eigenvectors, where $T_2$ is the bound on the number of iterations in \emph{K}-means and $\kappa$ is the number of clusters. The time complexity of SSC-NC is thus $\mathcal{O}\left(4n^3/3 + 3T_1n^2 + T_2\kappa^2 n\right)$. {\textbf{CCC}. Similarly to typical clustering methods, CCC computes the correlation matrix, which costs $\mathcal{O}\left(n^2 \right)$. Correlation clusterings are afterwards carried out by Adaptive Label Iterated Conditional Modes (AL-ICM) \cite{Besag1986}. AL-ICM is a greedy, iterative algorithm. Every fingerprint is initially assigned a unique label. At each iteration, AL-ICM assigns to a fingerprint the label of its closest fingerprints. This process is repeated until convergence, i.e., when no label is updated. If $T_3$ is the bound on the number of iterations, the time complexity of AL-ICM is bounded by $\mathcal{O}\left(T_3n^2 \right)$. In CCC, correlation clustering is performed $Q$ times, where $Q$ is the number of similarity thresholding values.
Multiple base clusterings are combined to find the final clustering agreement by Weighted Evidence Accumulation Clustering (WEAC) \cite{Huang2015}. The time complexity of WEAC is $\mathcal{O}\left((Q+\log{n})n^2 + Qn \right)$. Finally, the $m$ obtained clusters are refined via a merging step which costs $\mathcal{O}\left({m^2\log{m}}\right)$. The total cost of CCC is $\mathcal{O}\left((QT_3 + Q + \log{n} + 1)n^2 + Qn + m^2\log{m}\right)$}. \textbf{LS-SSC}. In large-scale contexts, we suppose that RAM can cache only $p$ fingerprints. The dataset is split into $B$ batches, $B=\lceil{\frac{n}{p}}\rceil$. Clustering each batch requires running \textproc{Constrained\_Lasso}, finding the \emph{K} nearest neighbors and \textproc{DBSCAN}. Finding the \emph{K} nearest neighbors requires sorting each column of the sparse representation matrix, which is $\mathcal{O}\left(p^2\log{p} \right)$. In the worst case, DBSCAN visits $p$ points and scans for their neighbors, which costs $\mathcal{O} \left(p^2 \right)$. The total cost of clustering $B$ batches is $\mathcal{O}\left(B\left[ p^3/3 + (3T_1 + \log{p} + 1)p^2\right]\right)$. In our large-scale experiments, the recycling step is replicated $B/2$ times on batches of size $p$. The merging and attraction phases work similarly to agglomerative hierarchical clustering, and their time complexities are respectively $\mathcal{O}\left( L^2\log{L} \right)$ and $\mathcal{O} \left(UL\log{U} \right)$, where $L$ is the number of clusters discovered after the first phase and $U$ is the number of unclustered images. The total cost of LS-SSC is $\mathcal{O}\left(1.5B\left[ p^3/3 + (3T_1 + \log{p} + 1)p^2\right] + L^2\log{L} + UL\log{U}\right)$. \textbf{Lin-LS}. The time complexity of Lin-LS is analyzed for the first iteration. In the coarse step, the correlation calculation over $B$ batches requires $\mathcal{O}\left(Bp^2 \right)$. If the $n \times n$ correlation matrix has $E$ non-zero entries, the Graclus partitioning algorithm \cite{Dhillon2007} has a time complexity of $\mathcal{O}\left(pE/n \right)$. Since the number of clusters in the coarse step is fixed to $n^{1/4}$, the calculation of the correlation matrix in the refining step costs $\mathcal{O} \left(n^{1/4}b^2 \right)$, where $b$ is the average size of the clusters. The Markov Clustering Algorithm (MCL) applied on the $n^{1/4}$ coarse clusters is bounded by $\mathcal{O} \left( n^{1/4}bK^2 \right)$, where $K$, with a slight abuse of notation, is the maximal number of non-zero entries in each column of the binarized correlation matrix. Similarly to LS-SSC, the merging and attraction steps of Lin-LS can be approximated by $\mathcal{O}\left( L^2\log{L}\right)$ and $\mathcal{O}\left( UL\log{U} \right)$, where $L$ is the number of discovered clusters and $U$ refers to the number of unclustered fingerprints. Since both LS-SSC and Lin-LS aim to obtain high-quality clusters of small size, we can equate $U,L$ in LS-SSC and $U,L$ in Lin-LS for an easy comparison. The first iteration of Lin-LS costs in total $\mathcal{O}\left( {Bp^2 + (K^2b + b^2)n^{1/4} + pE/n + L^2\log{L} + UL\log{U}}\right)$. The two parameters $E$ and $K$ depend on the cluster distribution in the dataset. In medium-size datasets where no divide-and-conquer is needed, i.e., $p=n$, SSC-NC and LS-SSC are cubic, while Lin-LS and CCC are approximately quadratic. In large-scale datasets, only algorithms designed with a divide-and-conquer strategy can be run under the constraints on RAM and computational power.
The time complexity of LS-SSC is cubic with respect to $p$, while that of Lin-LS is almost quadratic with respect to $p$ and depends on the cluster distribution. In fact, when $n$ becomes very large, we can fix $p$ in LS-SSC as an upper bound, while Lin-LS requires synthesizing the $n \times n$ correlation matrix in the coarse step. Moreover, the time complexity of Lin-LS is analyzed only for the first iteration; the cost of the following iterations has to be accounted for as well. Although LS-SSC is cubic with respect to $p$ due to the Cholesky decomposition, we optimize this computation by exploiting LAPACK \cite{lapack}, whose implementation of the Cholesky decomposition is extremely efficient. Our implementation will be made available upon paper acceptance. \section{Experiments} \label{sec:EXPERIMENTS} In this section, we provide an experimental analysis of the proposed clustering framework. Based on real data, hyperparameters are selected and then used throughout all experiments. We validate the superiority of our method under extensive settings, both in medium-scale and large-scale clustering contexts. \subsection{Experimental Settings} \textbf{Dataset}. All experiments are conducted on JPEG images from the Dresden \cite{Gloe2010} and Vision \cite{Shullani2017} datasets. The top-left regions of size $512 \times 512$ are cropped out for fingerprint extraction. We have tested diverse configurations, whose quantitative details are outlined in Table \ref{table:test_config_medium_scale} and Table \ref{table:test_config_large_scale}, considering: \begin{itemize} \item \emph{Cluster symmetry}. On Dresden and Vision, we create symmetric datasets containing $100$ images for each camera, and asymmetric datasets containing all available images of each camera. We denote such configurations on Dresden as $\mathcal{D}^a_c$ and $\mathcal{D}^s_c$, and on Vision as $\mathcal{V}^a_c$ and $\mathcal{V}^s_c$, where $a$ and $s$ stand for \emph{asymmetric} and \emph{symmetric}, respectively, and $c$ is the number of cameras. \item \emph{Multiple instances of the same model}. On Dresden, we create datasets containing $5$ camera instances of each camera model. Combining this with cluster symmetry, we obtain symmetric and asymmetric datasets of this configuration, denoted $\mathcal{D}^{sm}_c$ and $\mathcal{D}^{am}_c$. \item \emph{Number of cameras}. In medium-size datasets, we first select $c=5$, and incrementally add $5$ cameras until $c=20$. \item \emph{Large-scale clustering}. On Dresden, we first select $c=30$, and incrementally add $5$ cameras until the maximum $c=74$, considering all cameras. Since Vision is smaller than Dresden, we start with $c=21$ and incrementally add $3$ cameras until $c=33$. Such configurations on Dresden and Vision are respectively denoted as $\mathcal{LD}^a_c$ and $\mathcal{LV}^a_c$. All these configurations include cameras of the same models.
\end{itemize} \begin{table} \caption{Testing configurations on medium-size datasets.} \label{table:test_config_medium_scale} \scriptsize \begin{tabular}{| C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} |} \hline \multicolumn{2}{| c |}{Configuration} & \multicolumn{2}{ c |}{\# cameras} & \multicolumn{2}{ c |}{\# models} & \multicolumn{2}{c |}{\# images}\\ \hline Dresden & Vision & Dresden & Vision & Dresden & Vision & Dresden & Vision \\ \hline \hline $\mathcal{D}^s_5$ & $\mathcal{V}^s_5$ & \multicolumn{2}{c |}{$5$} & \multicolumn{2}{c |}{$5$} & \multicolumn{2}{c |}{$500$} \\ \hline $\mathcal{D}^s_{10}$ & $\mathcal{V}^s_{10}$ & \multicolumn{2}{c |}{$10$} & \multicolumn{2}{c |}{$10$} & \multicolumn{2}{c |}{$1000$} \\ \hline $\mathcal{D}^s_{15}$ & $\mathcal{V}^s_{15}$ & \multicolumn{2}{c |}{$15$} & \multicolumn{2}{c |}{$15$} & \multicolumn{2}{c |}{$1500$} \\ \hline $\mathcal{D}^s_{20}$ & $\mathcal{V}^s_{20}$ & \multicolumn{2}{c |}{$20$} & \multicolumn{2}{c |}{$20$} & \multicolumn{2}{c |}{$2000$} \\ \hline $\mathcal{D}^a_5$ & $\mathcal{V}^a_5$ & \multicolumn{2}{c |}{$5$} & \multicolumn{2}{c |}{$5$} & $1089$ & $1041$ \\ \hline $\mathcal{D}^a_{10}$ & $\mathcal{V}^a_{10}$ & \multicolumn{2}{c |}{$10$} & \multicolumn{2}{c |}{$10$} & $1954$ & $2110$ \\ \hline $\mathcal{D}^a_{15}$ & $\mathcal{V}^a_{15}$ & \multicolumn{2}{c |}{$15$} & \multicolumn{2}{c |}{$15$} & $3031$ & $3208$ \\ \hline $\mathcal{D}^a_{20}$ & $\mathcal{V}^a_{20}$ & \multicolumn{2}{c |}{$20$} & \multicolumn{2}{c |}{$20$} & $4186$ & $4435$ \\ \hline $\mathcal{D}^{sm}_{5}$ & $-$ & $5$ & $-$ & $1$ & $-$ & $500$ & $-$ \\ \hline $\mathcal{D}^{sm}_{10}$ & $-$ & $10$ & $-$ & $2$ & $-$ & $1000$ & $-$ \\ \hline $\mathcal{D}^{sm}_{15}$ & $-$ & $15$ & $-$ & $3$ & $-$ & $1500$ & $-$ \\ \hline $\mathcal{D}^{sm}_{20}$ & $-$ & $20$ & $-$ & $4$ & $-$ & $2000$ & $-$ \\ \hline $\mathcal{D}^{am}_{5}$ & $-$ & $5$ & $-$ & $1$ & $-$ & $855$ & $-$ \\ \hline $\mathcal{D}^{am}_{10}$ & $-$ & $10$ & $-$ & $2$ & $-$ & $2663$ & $-$ \\ \hline $\mathcal{D}^{am}_{15}$ & $-$ & $15$ & $-$ & $3$ & $-$ & $3558$ & $-$ \\ \hline $\mathcal{D}^{am}_{20}$ & $-$ & $20$ & $-$ & $4$ & $-$ & $4540$ & $-$ \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Testing configurations on large-scale datasets.} \label{table:test_config_large_scale} \scriptsize \begin{tabular}{| C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} | C{0.65cm} |} \hline \multicolumn{2}{| c |}{Configuration} & \multicolumn{2}{ c |}{\# cameras} & \multicolumn{2}{c |}{\# images}\\ \hline Dresden & Vision & Dresden & Vision & Dresden & Vision \\ \hline $\mathcal{LD}^a_{30}$ & $\mathcal{LV}^a_{21}$ & $30$ & $21$ & $6596$ & $4397$ \\ \hline $\mathcal{LD}^a_{35}$ & $\mathcal{LV}^a_{24}$ & $35$ & $24$ & $7538$ & $5051$ \\ \hline $\mathcal{LD}^a_{40}$ & $\mathcal{LV}^a_{27}$ & $40$ & $27$ & $8545$ & $5773$ \\ \hline $\mathcal{LD}^a_{45}$ & $\mathcal{LV}^a_{30}$ & $45$ & $30$ & $9635$ & $6377$ \\ \hline $\mathcal{LD}^a_{50}$ & $\mathcal{LV}^a_{33}$ & $50$ & $33$ & $10765$ & $7070$ \\ \hline $\mathcal{LD}^a_{55}$ & $-$ & $55$ & $-$ & $11673$ & $-$ \\ \hline $\mathcal{LD}^a_{60}$ & $-$ & $60$ & $-$ & $12729$ & $-$ \\ \hline $\mathcal{LD}^a_{65}$ & $-$ & $65$ & $-$ & $13995$ & $-$ \\ \hline $\mathcal{LD}^a_{70}$ & $-$ & $70$ & $-$ & $14915$ & $-$ \\ \hline $\mathcal{LD}^a_{74}$ & $-$ & $74$ & $-$ & $15677$ & $-$ \\ \hline \end{tabular} \end{table} \textbf{Performance metric.} We report performance in $\mathcal{F}$-measure and Adjusted Rand Index 
(ARI). In the presence of outliers (unclustered fingerprints), we follow \cite{Lin2017} and treat outliers differently in the computation of True Positives ($\overline{TP}$) and False Positives ($\overline{FP}$). Specifically, \begin{itemize} \item True Positive ($\overline{TP}$): the number of image pairs from the same cluster which are assigned to the same cluster, \emph{excluding outliers}. \item False Positive ($\overline{FP}$): the number of image pairs from different clusters which are assigned to the same cluster, \emph{excluding outliers}. \item True Negative ($TN$): the number of image pairs from different clusters which are assigned to different clusters. \item False Negative ($FN$): the number of image pairs from the same cluster which are assigned to different clusters. \end{itemize} The $\mathcal{F}$-measure is computed based on precision ($\mathcal{P}$) and recall ($\mathcal{R}$): \begin{IEEEeqnarray}{rCl} \mathcal{P} &=& \frac{\overline{TP}}{\overline{TP} + \overline{FP}} \text{,} \quad \mathcal{R} = \frac{\overline{TP}}{\overline{TP} + FN} \text{,} \quad \mathcal{F} = 2 \cdot \frac{\mathcal{P} \cdot \mathcal{R}}{\mathcal{P} + \mathcal{R}} \text{.} \IEEEnonumber \end{IEEEeqnarray} The Rand Index (RI) and ARI are computed as: \begin{IEEEeqnarray}{rCl} \text{RI} &=& \frac{\overline{TP} + TN}{\overline{TP} + TN + \overline{FP} + FN} \text{,} \quad \text{ARI} = \frac{\text{RI} - \mathbf{E}[\text{RI}]}{1 - \mathbf{E}[\text{RI}]} \text{,}\IEEEnonumber \end{IEEEeqnarray} where $\mathbf{E}[\text{RI}]$ is the expected value of RI, computed based on the expected values of $\overline{TP}$ and $TN$: \begin{IEEEeqnarray}{rCl} \mathbf{E}[\text{RI}] &=& \frac{\mathbf{E}[\overline{TP}] + \mathbf{E}[TN]}{\overline{TP} + TN + \overline{FP} + FN} \IEEEnonumber \text{.} \end{IEEEeqnarray} Readers can refer to \cite{Hubert1985} for more details on the ARI computation. When the number of outliers is zero, the $\mathcal{F}$-measure and ARI reduce to their canonical definitions. For comparing the number of clusters discovered by each algorithm, we follow \cite{Lin2017} and report the ratio $L_p/L_g$, where $L_p$ refers to the number of predicted clusters and $L_g$ to the number of ground-truth clusters. Differently from \cite{Lin2017}, where $L_p$ only accounts for \emph{unique} predicted clusters, i.e., $L_p \leq L_g$, in our evaluation it is possible that $L_p/L_g > 1$ if an algorithm overestimates, or $L_p/L_g < 1$ if it underestimates, the number of ground-truth clusters.
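Under one plausible reading of these definitions (each unclustered fingerprint is treated as a singleton, so that pairs involving outliers never count towards $\overline{TP}$ or $\overline{FP}$), the pair counts can be sketched as follows; this is our own illustrative implementation, not released evaluation code.
\begin{verbatim}
from itertools import combinations

def pair_counts(y_true, y_pred, outlier=-1):
    # TP/FP exclude pairs involving unclustered samples; such pairs
    # are never counted as sharing a predicted cluster.
    tp = fp = tn = fn = 0
    for i, j in combinations(range(len(y_true)), 2):
        same_true = y_true[i] == y_true[j]
        is_out = y_pred[i] == outlier or y_pred[j] == outlier
        same_pred = (not is_out) and y_pred[i] == y_pred[j]
        if same_pred:
            tp += same_true
            fp += not same_true
        else:
            fn += same_true
            tn += not same_true
    return tp, fp, tn, fn

def f_measure(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
\end{verbatim}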
\textbf{Performance comparison}. We compare the results of the proposed methodologies with the state of the art. Tests have also been carried out with hierarchical clustering \cite{GarciaVillalba2015}, Markov Random Fields \cite{Li2010_2}, and Spectral Clustering with the Normalized Cut criterion \cite{Amerini2014}, but for the sake of space and readability we only present comparisons with the following top-performing works: \begin{itemize} \item \emph{Multiclass Spectral Clustering (MSC)} \cite{Liu2010}. A star graph is built with $5$ nearest neighbors, as suggested in \cite{Liu2010}. % \item \emph{Lin's Large-Scale (Lin-LS) method} \cite{Lin2017}. Lin-LS is implemented with all the parameters recommended in \cite{Lin2017}: compressed fingerprints ($256 \times 256$) are binarized with threshold $t_b = 0.008$, while original-size fingerprints ($1024 \times 1024$) are binarized with threshold $t_b = 0.005$. In order to put the divide-and-conquer strategy into effect on medium-size datasets, each dataset is split into two equal batches and only one is loaded at a time. % \item \emph{Correlation Clustering with Consensus (CCC)} \cite{Marra2017}. Results of CCC are obtained with the implementation provided by the authors. No parameters need to be specified. \item \emph{Sparse Subspace Clustering (SSC)} \cite{Phan2017}. SSC is implemented like SSC-NC but without the non-negativity constraint. \end{itemize} We analyze the performance of SSC-NC to assess the effectiveness of the non-negativity constraint, and of LS-SSC to verify its behavior on medium-size and large-scale datasets. To simulate divide-and-conquer on medium-size datasets, LS-SSC splits each dataset into two equal batches, only one of which is loaded at a time, in the same manner as Lin-LS. On large-scale datasets, LS-SSC is compared only to Lin-LS, since these are the only methods particularly designed for large-scale contexts. One issue with clustering on large-scale datasets is the limited memory. Since only a limited number of fingerprints can be allocated on RAM, we fix this bound to $4000$ ($\approx 4$ GB are required to store the fingerprints). Due to some randomization used in MSC, CCC, Lin-LS and LS-SSC, those methods are run $10$ times, and the average scores are reported. \subsection{Hyperparameter Selection} \label{sec:hyperparameter_selection} In order to select the parameters required by our methodologies, we collect a development dataset, obviously different from the test ones. From the RAISE dataset \cite{Dang-Nguyen2015} we extract $200$ raw images from a Nikon D90 and $250$ from a D7000, and perform JPEG compression (quality factor $98$). Since there are only $76$ raw images of the Nikon D40, we leave them out and instead select $300$ JPEG images (default JPEG quality setting) from an external Canon 600D. We refer to this dataset, which includes $750$ images from $3$ cameras, as $\mathcal{D}_{\text{dev}}$. \textbf{Selecting $\eta$}. $\eta$ is the augmented Lagrangian hyperparameter, which determines how much penalty is added in order to enforce the equality $\mathbf{Z=V}$. This parameter partially decides the convergence speed of \textproc{Constrained\_Lasso}. A small $\eta$ means slow convergence but highly accurate solutions, while a large $\eta$ accelerates convergence at the price of less accurate solutions. Since sparse representation learning is followed by a clustering procedure, solutions of modest accuracy are sufficient. On $\mathcal{D}_{\text{dev}}$, ${\eta \in [1.0,1.3]}$ results in acceptable solutions and fast convergence. We adopt ${\eta=1.0}$ in all experiments. \textbf{Selecting $\gamma$}. On $\mathcal{D}_{\text{dev}}$, we vary $\gamma$ in the range $[0.0001, 0.02]$ and select $\gamma = 0.0018$, which minimizes the cost function defined in \cite{Phan2017}, taking into account normalized cuts and eigengaps as criteria. \textbf{Jointly selecting $R$ and $K$}. In LS-SSC, the main goal of recycling is to reduce the number of undiscovered ground-truth clusters. Let us denote by $L_d$ and $L_g$ the number of ground-truth clusters discovered after the merging phase and the total number of ground-truth clusters, respectively. The strategy is to adopt a number of recycling steps $R$ such that ${L_d/L_g \rightarrow 1}$ and the discovered clusters are pure, namely ${\text{Precision} \rightarrow 1}$. Another parameter which impacts $L_d$ is the number of nearest neighbors $K$. A small $K$ means that more ground-truth clusters are likely to be discovered, whereas with a large $K$ only noticeably dense clusters are discovered. We conduct experiments on an asymmetric dataset from Dresden containing $5$ cameras coming from different models.
We split the dataset into $6$ equal batches of size $\approx 182$ in order to simulate the splitting step. Figure \ref{fig:para_setting} depicts the precision of the discovered clusters after the merging step in panel (a), and the ratio $L_d/L_g$ in panel (b). It is clear that ${K = 5}$ is a reasonable choice for discovering pure ground-truth clusters. From these plots one could argue that selecting $R=0$ yields the highest precision in this case. However, it is important to remember that recycling plays an important role, since it helps to discover more hidden clusters. In principle, a high value of $R$ should be chosen, subject to the computational cost it implies; however, precision is likely to drop if we run more recycling steps with a large $K$. In large-scale contexts, we adopt $R = \lfloor B/2 \rfloor$, where $B$ is the number of batches. In medium-scale contexts, where the computational requirements are less stringent, we run recycling until no noticeable subclusters are discovered. \begin{figure} \centering \includegraphics[width=\linewidth]{para_setting.pdf} \caption{Precision and $L_d/L_g$ with respect to diverse values of $K$ and $\#$ recycling steps.} \label{fig:para_setting} \end{figure} \begin{figure*}[h] \centering \captionsetup{justification=centering} \includegraphics[width=0.85\linewidth]{results_dresden_medium_revised.pdf} \caption{Clustering performance on medium-size datasets of Dresden: (a) symmetric: $\mathcal{D}^s_5, \mathcal{D}^s_{10}, \mathcal{D}^s_{15}, \mathcal{D}^s_{20}$ (b) asymmetric: $\mathcal{D}^a_5, \mathcal{D}^a_{10}, \mathcal{D}^a_{15}, \mathcal{D}^a_{20}$ (c) symmetric + same model: $\mathcal{D}^{sm}_5, \mathcal{D}^{sm}_{10}, \mathcal{D}^{sm}_{15}, \mathcal{D}^{sm}_{20}$ (d) asymmetric + same model: $\mathcal{D}^{am}_5, \mathcal{D}^{am}_{10}, \mathcal{D}^{am}_{15}, \mathcal{D}^{am}_{20}$. Better viewed in color.} \label{fig:results_medium_dresden} \end{figure*} \begin{figure}[!t] \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{unclustered_fingerprints_medium.pdf} \caption{Number of unclustered fingerprints on medium-size datasets of Dresden and Vision.} \label{fig:unclustered_fingerprints_medium} \end{figure} \subsection{Numeric Results on Medium-size Datasets} \begin{figure*}[!t] \centering \captionsetup{justification=centering} \includegraphics[width=0.85\linewidth]{results_vision_medium_revised.pdf} \caption{Clustering performance on medium-size datasets of Vision: (a) symmetric: $\mathcal{V}^s_5, \mathcal{V}^s_{10}, \mathcal{V}^s_{15}, \mathcal{V}^s_{20}$ (b) asymmetric: $\mathcal{V}^a_5, \mathcal{V}^a_{10}, \mathcal{V}^a_{15}, \mathcal{V}^a_{20}$. Better viewed in color.} \label{fig:results_medium_vision} \end{figure*} We report the performance of all methods on medium-size datasets, with the maximum number of images ranging from $4000$ to $5000$. Results on Dresden suggest that MSC performs relatively well on symmetric (Figure \ref{fig:results_medium_dresden} (a)) and asymmetric (Figure \ref{fig:results_medium_dresden} (b)) datasets. MSC applies an extra step before clustering, namely the creation of a star graph among fingerprints, in which noisy connections are partially eliminated. The star graph can be considered a suboptimal sparse representation matrix of the data. Differently from MSC, SSC finds a sparse representation of the data by solving an optimization problem. In Figure \ref{fig:results_medium_dresden}, SSC outperforms MSC in most configurations with a high $\mathcal{F}$-measure.
As an improved version of SSC, SSC-NC performs equally well or better than SSC in the majority of symmetric and asymmetric datasets. Balanced precision and recall are obtained, yielding a high $\mathcal{F}$-measure. The numbers of predicted clusters $L_p$ obtained by SSC and SSC-NC are identical and approximate well the number of ground-truth clusters $L_g$; this approximation is the best among all tested algorithms. Although Lin-LS and LS-SSC are especially designed for large-scale datasets, they produce convincing results also on medium-size datasets. Lin-LS aims to obtain high-quality clusters of small size, resulting in high precision. Compared to Lin-LS, LS-SSC obtains less precise clusters, but its precision remains high without penalizing recall. Thanks to this balanced behavior, LS-SSC outperforms Lin-LS in terms of $\mathcal{F}$-measure and ARI. To keep precision high, both Lin-LS and LS-SSC tend to overestimate the number of clusters in medium-size datasets. \begin{figure*}[!t] \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{results_dresden_ls_revised.pdf} \caption{Clustering results on large-scale datasets of Dresden.} \label{fig:results_ls_dresden} \end{figure*} \begin{figure*}[!t] \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{results_vision_ls_revised.pdf} \caption{Clustering results on large-scale datasets of Vision.} \label{fig:results_ls_vision} \end{figure*} We now zoom into the cases where cameras of the same model share commonalities in their SPNs, which clearly introduces a certain level of ambiguity. In Figure \ref{fig:results_medium_dresden} (c) and (d) we report results on datasets containing multiple camera models, each model with $5$ camera instances. Although all methods suffer from performance degradation, SSC-NC outperforms the other methods on $\mathcal{D}_{5}^{sm}, \mathcal{D}_{10}^{sm}$, while LS-SSC is superior in all other configurations. In Figure \ref{fig:results_medium_dresden} (c) and (d), the superiority of SSC-NC over SSC is evident. We argue that, in such complicated contexts where SPNs of the same camera model stay close to each other, SSC-NC can find a better representation of the data. We replicate the evaluation of all methods on medium-size datasets of Vision, see Figure \ref{fig:results_medium_vision}. MSC, SSC-NC and LS-SSC perform on par with each other, but SSC-NC achieves a more accurate estimate of the number of clusters. Moreover, SSC-NC obtains more accurate results than SSC in almost all configurations ($7$ out of $8$). Lin-LS seems to outperform all other methods; however, we argue that its performance gain is partially due to the high number of unclustered fingerprints it produces. We show in Figure \ref{fig:unclustered_fingerprints_medium} the number of unclustered fingerprints of LS-SSC and Lin-LS on medium-size datasets of Dresden and Vision. It is evident that Lin-LS produces more outliers than LS-SSC, thus gaining an advantage in precision and, as a consequence, in $\mathcal{F}$-measure. \subsection{Numeric Results on Large-scale Datasets} In practice, there exist large-scale contexts where a large number of images needs to be clustered. In Dresden, we conduct experiments on datasets containing $30$ to $74$ cameras, where the number of images exceeds $6000$, while in Vision the number of cameras ranges from $21$ to $33$ and the number of images exceeds $4000$.
To the best of our knowledge, Lin-LS \cite{Lin2017} is the only method proposed for large-scale clustering of camera fingerprints; results are therefore compared only with it. As depicted in Figure \ref{fig:results_ls_dresden}, Lin-LS achieves high precision, which means that $\overline{FP}$ is negligible. Nevertheless, in order to keep precision high, a noticeable number of fingerprints are left unclustered. Unclustered fingerprints essentially cause low recall, or equivalently a high $FN$, due to the separation of pairs belonging to the same cluster. On the contrary, LS-SSC produces less precise clusters, with precision ranging from $80\%$ to $100\%$. One advantage of our method is its relatively high recall, which oscillates slightly around $80\%$. By keeping precision and recall balanced, we obtain a high $\mathcal{F}$-measure. LS-SSC can cluster the whole Dresden dataset with an $\mathcal{F}$-measure higher than $80\%$, substantially improving on the $64\%$ obtained by Lin-LS. The improvement of LS-SSC over Lin-LS is even more remarkable considering that Lin-LS requires access to $1024 \times 1024$ fingerprints in its refining step, while LS-SSC works only on $512 \times 512$ fingerprints. Moreover, as depicted in Figure \ref{fig:results_ls_dresden} (last panel), LS-SSC produces a higher number of clusters than the ground truth, but the ratio between the two quantities remains relatively constant as the dataset size grows, whereas for Lin-LS this ratio increases rapidly. Figure \ref{fig:results_ls_vision} shows the performance of Lin-LS and LS-SSC on the Vision dataset. Lin-LS again produces highly precise clusters, but tends to overestimate the number of ground-truth clusters. The $\mathcal{F}$-measure scores of the two methods are close, since unclustered fingerprints are not accounted for in the precision computation. In Lin-LS, unclustered fingerprints are mainly due to the merging step: if the merging threshold is too high, small subclusters cannot be merged into larger ones and are thus filtered out in the end. In LS-SSC, on the other hand, a fingerprint is left unclustered if its correlation with all available cluster centroids is smaller than the threshold used to exclude the null hypothesis. Also for large-scale datasets, we show the number of unclustered fingerprints produced by Lin-LS and LS-SSC; see the last panels of Figures \ref{fig:results_ls_dresden} and \ref{fig:results_ls_vision}. In this scenario, it is clear that, to keep precision high, Lin-LS produces a large number of unclustered fingerprints, far more than LS-SSC. The advantage of this mechanism is a reduced false alarm rate, but its downside is evident, since data of interest could be ignored by the algorithm. LS-SSC provides a reasonable tradeoff, allowing large-scale databases to be clustered without skipping too many images that might be important for forensic analysis. \subsection{LS-SSC Robustness Analysis} In this section, we analyze the robustness of LS-SSC in more realistic testing configurations. \subsubsection{Presence of outliers} Firstly, we test the robustness of LS-SSC to outliers. We select images coming from $20$ cameras of Vision, and add $50$ images randomly collected from Facebook (from different entities) to make sure that they do not share the same source camera. On this dataset, LS-SSC achieves an $\mathcal{F}$-measure of $0.89$. Remarkably, LS-SSC labels $69$ images as unclustered, of which $33$ are among the $50$ true outliers.
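The rejection rule exercised in this experiment can be summarized in a few lines of code. The following sketch is purely illustrative: the function and variable names are ours, and plain Pearson correlation stands in for the actual normalized-correlation detector and null-hypothesis threshold described above.
\begin{verbatim}
import numpy as np

def assign_or_reject(fingerprint, centroids, tau):
    # Correlate the fingerprint with every cluster centroid; if no
    # correlation exceeds the threshold tau (chosen to exclude the
    # null hypothesis), leave the fingerprint unclustered (outlier).
    corrs = [np.corrcoef(fingerprint.ravel(), c.ravel())[0, 1]
             for c in centroids]
    best = int(np.argmax(corrs))
    return best if corrs[best] >= tau else None
\end{verbatim}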
\subsubsection{Double JPEG compression} Images taken with smartphones usually undergo double JPEG compression once they are uploaded to social media sites. Therefore, we test the robustness of LS-SSC on images coming from $20$ cameras of Vision, further compressed using the \texttt{convert} tool provided by \texttt{ImageMagick}. The compression quality ranges from $50$ to $95$ (step $5$). The results in Table \ref{table:num_results_vision_20_compressed} show reasonable and stable performance of LS-SSC over the different quality factors. Indeed, the algorithm is generally robust to double JPEG compression if the quality factor of the second compression is above $65$; clustering performance starts to drop if images are aggressively compressed (quality factor below $65$). \begin{table}[!h] \centering \scriptsize \caption{Numeric results of LS-SSC on double compressed images.} \begin{tabular}{ C{0.5cm} | C{0.35cm} C{0.35cm} C{0.35cm} C{0.35cm} C{0.35cm} C{0.35cm} C{0.35cm} C{0.35cm} C{0.35cm} C{0.35cm} } \multirow{2}{*}{Metric} & \multicolumn{10}{c}{Quality factor} \\ \cline{2-11} & 50 & 55 & 60 & 65 & 70 & 75 & 80 & 85 & 90 & 95 \\ \hline $\mathcal{P}$ & $0.79$ & $0.81$ & $0.84$ & $0.88$ & $0.90$ & $0.93$ & $0.94$ & $0.95$ & $0.93$ & $0.96$ \\ $\mathcal{R}$ & $0.58$ & $0.61$ & $0.66$ & $0.72$ & $0.77$ & $0.80$ & $0.83$ & $0.86$ & $0.85$ & $0.88$ \\ $\mathcal{F}$ & $0.67$ & $0.70$ & $0.74$ & $0.79$ & $0.83$ & $0.86$ & $0.88$ & $0.90$ & $0.89$ & $0.92$ \\ ARI & $0.65$ & $0.68$ & $0.73$ & $0.78$ & $0.82$ & $0.85$ & $0.87$ & $0.89$ & $0.88$ & $0.88$ \\ \end{tabular} \label{table:num_results_vision_20_compressed} \end{table} In practice, images may come from online social networks, where they undergo double compression. In such a scenario, SPNs are further distorted by resizing, and it has been confirmed in \cite{Marra2017} that the performance of all methods drops. \subsubsection{Different SPN sizes} Next, we validate the robustness of LS-SSC to different SPN sizes. We pick the same set of images used in the previous experiment, but crop the top-left region to $4$ different sizes: $256\times256, 512\times512, 768\times768, 1024\times1024$. The hyperparameter $\gamma$ is re-estimated on the development set, where images are cropped to the same sizes. The values of $\gamma$ for the corresponding sizes are $0.0045, 0.0018, 0.0012, 0.0008$. As shown in Table \ref{table:num_results_vision_20_size}, performance generally improves when larger SPNs are used. Nevertheless, the results also suggest that using SPN sizes larger than $512 \times 512$ is not the key to the success of LS-SSC. Indeed, using $768 \times 768$ SPNs does not yield any improvement over $512 \times 512$ SPNs, and using $1024 \times 1024$ SPNs brings only a minor improvement.
\begin{table}[!h] \centering \scriptsize \caption{Numeric results of LS-SSC on SPNs of different sizes.} \begin{tabular}{ C{0.5cm} | C{1.5cm} C{1.5cm} C{1.5cm} C{1.5cm}} \multirow{2}{*}{Metric} & \multicolumn{4}{c}{SPN size} \\ \cline{2-5} & $256 \times 256$ & $512 \times 512$ & $768 \times 768$ & $1024 \times 1024$ \\ \hline $\mathcal{P}$ & $0.85$ & $0.92$ & $0.92$ & $0.89$ \\ $\mathcal{R}$ & $0.84$ & $0.84$ & $0.85$ & $0.90$ \\ $\mathcal{F}$ & $0.84$ & $0.88$ & $0.88$ & $0.89$ \\ ARI & $0.83$ & $0.87$ & $0.87$ & $0.88$ \\ \end{tabular} \label{table:num_results_vision_20_size} \end{table} \subsubsection{Few images per camera} In some specific contexts, forensic analysts might face databases where the number of cameras is higher than the average number of images acquired by each camera (one camera per model). To simulate such a context, we start with an original set of $20$ cameras selected from Dresden. The number of images per camera ranges from $10$ to $50$ (step $10$). For each image, we crop at $50$ different positions, ending up with an augmented set of images coming from $50$ cameras. Finally, we obtain a dataset of $12000$ images from $400$ cameras. It is a challenging dataset, since the number of cameras is high while the number of images per camera is much smaller. LS-SSC assigns the images to $258$ clusters, and $469$ images remain unclustered. Obviously, many small clusters are hard to discover due to the random splitting. It is acknowledged in \cite{Lin2017} that Lin-LS is especially designed to cope with such scenarios. However, this capability comes at the cost of discarding many fingerprints as outliers, which might leave images of interest out of consideration. Lin-LS assigns the images to $892$ clusters, while $4083$ images remain unclustered. We obtain an $\mathcal{F}$-measure of $47\%$ on this dataset, while Lin-LS achieves $52\%$, at the price of roughly $10$ times more unclustered images. In this scenario, LS-SSC does not perform very well, but this limitation is inherent to the method itself. Indeed, we know from the theory that learning a sparse representation of camera fingerprints requires a sufficient number of images per camera; when this assumption fails, the algorithm may learn inexact representations, which usually results in a high $\overline{FP}$. \subsection{Running Time Analysis} We measure the running time of SSC-NC and LS-SSC on Dresden images, where the number of cameras ranges from $10$ to $70$. To observe the running time of SSC-NC, we assume that RAM is sufficient to hold all fingerprints of the $70$ cameras and to solve the optimization in Eq. (\ref{proposed_opt}). Figure \ref{fig:running_time_analysis} reveals that LS-SSC requires a higher I/O cost due to extra reading/writing operations. SSC-NC, on the other hand, requires a much higher computational cost, which is critical in practical use. LS-SSC takes approximately $1$ hour and $20$ minutes to cluster the whole Dresden dataset. In the case of limited RAM, LS-SSC requires more I/O time, while SSC-NC cannot operate at all. \begin{figure}[!t] \centering \captionsetup{justification=centering} \includegraphics[width=0.8\linewidth]{running_time_analysis.pdf} \caption{Running time of SSC-NC and LS-SSC.} \label{fig:running_time_analysis} \end{figure} % % \section{Conclusion} We have introduced a clustering framework that exploits linear dependencies among SPNs in their intrinsic vector subspaces. Each SPN is expressed as a sparse linear combination of all other SPNs.
Finding such sparse combinations is equivalent to solving a LASSO problem with constraints, which is done efficiently by the ADMM method. Our algorithm can be extended to large-scale databases thanks to the proposed divide-and-conquer strategy. Experiments demonstrate the advantage of sparse representation over normalized correlation. Future extensions will be dedicated to combining sparse representation learning and clustering into a unified end-to-end procedure. Moreover, we plan to further study the impact of cluster cardinality on the method's performance. \appendices \section{Derivation of $\mathbf{V}$ update in Algorithm \ref{alg:lasso_solving_admm}} \label{app:v_update} At each iteration of \textproc{Constrained\_Lasso}, $\mathbf{V}$ is updated by: % \begin{IEEEeqnarray}{rCl} \mathbf{V} &=& \arg \underset{\mathbf{V}}{\min} \; f(\mathbf{V}) \text{,} \IEEEnonumber \end{IEEEeqnarray} where % \begin{IEEEeqnarray}{rCl} f(\mathbf{V})&=& \gamma\| \mathbf{V}\|_1 + \langle \mathbf{\Lambda},\mathbf{Z}-\mathbf{V} \rangle + \frac{\eta}{2} \|\mathbf{Z} - \mathbf{V}\|_F^2 \text{.} \IEEEnonumber \end{IEEEeqnarray} The partial derivative of $f$ with respect to $\mathbf{V}_{ij}$ is \begin{IEEEeqnarray}{rCl} \frac{\partial f}{\partial \mathbf{V}_{ij}} &=& \gamma\frac{\partial |\mathbf{V}_{ij}|}{\partial \mathbf{V}_{ij}} + \eta\mathbf{V}_{ij} - \eta \left( \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} \right) \IEEEnonumber \\ &=& \begin{cases} -\gamma + \eta\mathbf{V}_{ij} - \eta \left( \mathbf{Z}_{ij}+\frac{\mathbf{\Lambda}_{ij}}{\eta} \right), \quad &\mathbf{V}_{ij} < 0 \\ \gamma + \eta\mathbf{V}_{ij} - \eta \left( \mathbf{Z}_{ij}+\frac{\mathbf{\Lambda}_{ij}}{\eta} \right), \quad &\mathbf{V}_{ij} > 0 \\ \text{undefined,} \quad &\mathbf{V}_{ij}=0 \end{cases} \text{.} \IEEEnonumber \end{IEEEeqnarray} Setting $\frac{\partial f}{\partial \mathbf{V}_{ij}} = 0$ yields the soft-thresholding solution \begin{IEEEeqnarray}{rCl} \mathbf{V}_{ij} &=& S_{\frac{\gamma}{\eta}}\left( \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} \right) \IEEEnonumber \\ &=& \begin{cases} \frac{\gamma}{\eta} + \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} \text{,} &\text{if} \quad \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} < -\frac{\gamma}{\eta} \\ -\frac{\gamma}{\eta} + \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} \text{,} &\text{if} \quad \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} > \frac{\gamma}{\eta} \\ 0 \text{,} &\text{if} \quad \left| \mathbf{Z}_{ij} + \frac{\mathbf{\Lambda}_{ij}}{\eta} \right| \leq \frac{\gamma}{\eta} \end{cases} \text{.} \end{IEEEeqnarray} This solution $\mathbf{V} \in \mathbb{R}^{n \times n}$ might violate the two constraints in Eq. (\ref{proposed_opt}). Denote by $\mathcal{C}_1$ the set of all zero-diagonal matrices and by $\mathcal{C}_2$ the set of non-negative matrices; both $\mathcal{C}_1$ and $\mathcal{C}_2$ are convex. Imposing the two constraints on $\mathbf{V}$ is equivalent to finding $\mathbf{V}^{(1,2)} \in \mathcal{C}_1 \cap \mathcal{C}_2$ that minimizes $f$. This can be obtained via von Neumann's alternating projections \cite{Cheney1959}: a first Euclidean projection onto $\mathcal{C}_1$, followed by a second Euclidean projection onto $\mathcal{C}_2$, as sketched in code below and formalized next.
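For concreteness, a minimal NumPy sketch of the complete $\mathbf{V}$ update could read as follows; the variable names are illustrative and correspond to $\mathbf{Z}$, $\mathbf{\Lambda}$, $\gamma$ and $\eta$ in our notation.
\begin{verbatim}
import numpy as np

def update_V(Z, Lam, gamma, eta):
    A = Z + Lam / eta
    # element-wise soft-thresholding S_{gamma/eta}(A)
    V = np.sign(A) * np.maximum(np.abs(A) - gamma / eta, 0.0)
    np.fill_diagonal(V, 0.0)    # projection onto C_1 (zero diagonal)
    return np.maximum(V, 0.0)   # projection onto C_2 (non-negativity)
\end{verbatim}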
Since $\mathbf{V} \in \mathbb{R}^{n \times n}$ is a minimizer of $f$, $\mathbf{V}^{(1,2)}$ can be obtained by two successive projections: \begin{IEEEeqnarray}{rlll} \mathbf{V}^{(1)} = \arg \underset{\tilde{\mathbf{V}} \in \mathcal{C}_1}{\min} \| \mathbf{V} - \tilde{\mathbf{V}} \|^2_F \text{,} \IEEEnonumber \end{IEEEeqnarray} \begin{IEEEeqnarray}{rlll} \mathbf{V}^{(1,2)} = \arg \underset{\tilde{\mathbf{V}} \in \mathcal{C}_2}{\min} \| \mathbf{V}^{(1)} - \tilde{\mathbf{V}} \|^2_F \text{.} \IEEEnonumber \end{IEEEeqnarray} The two projections are implemented element-wise as in Eq. (\ref{Pi_D}) and Eq. (\ref{Pi_N}), respectively. % % \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} In many real-world problems, decisions have to be made in the face of incomplete information. \emph{Partially observable Markov decision processes} (POMDPs, \citet{smallwood1973optimal}) have been proposed to deal with scenarios where the underlying states are not fully observable. POMDPs have important applications in various fields, e.g., operations research (\citet{lovejoy1991survey}), robotics (\citet{pineau2006anytime}), artificial intelligence (\citet{kaelbling1998planning}), and most recently, computational neuroscience (\citet{rao2010decision}). It is well known that a POMDP can be reduced to a standard \emph{Markov decision process} (MDP) where states are completely observable. We refer to \citet{sondik1978optimal} for finite spaces, to \citet{sawaragi1970discrete} for countable spaces, and to \citet[Chapter 4]{hernandez1989adaptive} and \citet{feinberg2016partially} for Borel spaces. For example, in the setting of finite state spaces with a discounted infinite-horizon objective, the optimal solution is given by solving the following Bellman equation \begin{align} \phi(\mu) = \max_{a} \left\{ r(\mu, a) + \alpha \sum_{y} \phi(\mu'(\mu, a, y)) \left( \sum_{x'} P(x'|\mu, a) Q(y|x', a) \right) \right\}, \forall \mu \in \mathscr P. \label{eq:phi} \end{align} Here, $x'$, $a$ and $y$ denote the hidden (or latent) state, the action and the observation, respectively, and $\mu \in \mathscr P$ is the distribution over states (also called a belief state), while $\mu'$ is the posterior distribution of the successive state, given by \begin{align*} \mu'(\cdot|\mu, a, y) = \frac{P(\cdot|\mu, a) Q(y|\cdot, a)}{\sum_{x'} P(x'|\mu, a) Q(y|x', a)}. \end{align*} $P$ controls the transition probability between states, $Q$ governs the observation probability of $y$ given states and actions, $r$ denotes the reward function, and $\alpha \in (0,1)$ serves as a discount factor. We give a formal introduction to POMDPs on Borel spaces in Section \ref{sec:pomdps}. In spite of this known theoretical solution, POMDPs were until recently notoriously difficult to solve in practice, even with a handful of hidden states (\citet{shani2013survey}), since in the Bellman equation \eqref{eq:phi} the optimal ``value function'' $\phi$ has to be computed for all distributions $\mu \in \mathscr P$, a set containing infinitely many points even for a finite hidden state space. Over the past decade, a significant breakthrough has been made in the development of POMDP-solving algorithms, largely due to an approach called \emph{point-based value iteration} (\citet{pineau2006anytime}). It computes the value function $\phi$ over a finite subset of the distribution space with a controllable approximation error. The success of this algorithm is largely attributed to a property of $\phi$ observed by \citet{smallwood1973optimal} for POMDPs with finite spaces: the optimal solution to the Bellman equation \eqref{eq:phi} can be arbitrarily well approximated by a piecewise linear function $\hat \phi$ with the following representation \begin{align} \hat \phi(\mu) = \sup_{h \in N} \sum_{x} h(x) \mu(x) \label{eq:dual} \end{align} where $N$ is a finite set of functions on the hidden state space. Applying this representation to the Bellman equation, the sum on the right-hand side of \eqref{eq:phi} takes the form \begin{align*} \sum_{y} \left( \sup_{h \in N} \sum_{x'} h(x') P(x'|\mu, a) Q(y|x', a) \right), \end{align*} which is the key to designing fast computing algorithms.
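To illustrate why this exchange of supremum and summation matters computationally, consider a finite POMDP in which the set $N$ is stored as a list of vectors. The following sketch is our own illustration (array names and shapes are assumptions, not part of the cited algorithms); it evaluates one Bellman backup at a belief $\mu$ directly in the form above.
\begin{verbatim}
import numpy as np

def backup_value(mu, N, r, P, Q, alpha):
    # mu: (S,) belief;        N: list of (S,) vectors ("alpha-vectors")
    # r: (S, A) rewards;      P: (A, S, S) with P[a, x, x']
    # Q: (A, S, Y) with Q[a, x', y];  alpha: discount factor
    best = -np.inf
    for a in range(r.shape[1]):
        # joint weight of (x', y): P(x'|mu, a) Q(y|x', a)
        w = (mu @ P[a])[:, None] * Q[a]              # shape (S, Y)
        # exchange sup and sum: pick the best vector per observation y
        future = sum(max(h @ w[:, y] for h in N)
                     for y in range(w.shape[1]))
        best = max(best, mu @ r[:, a] + alpha * future)
    return best
\end{verbatim}
Because the inner maximization decomposes over observations, each backup costs only a sum of per-observation maximizations over the finite set $N$, instead of an optimization over the whole belief space.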
To the best of our knowledge, it is still an open question whether the piecewise linear approximation discovered by \citet{smallwood1973optimal} for finite-space POMDPs also holds for POMDPs on general spaces. The major contribution of this paper is, therefore, to provide an affirmative answer to this open question, by applying a conjugate approach to the space of probability measures equipped with the Wasserstein metric. Our first intuition is that the representation stated in \eqref{eq:dual} is in fact of the Fenchel conjugate form, provided that $\phi$ is a convex and lower semicontinuous function on $\mathscr P$. However, the standard Fenchel-Moreau theorem of convex analysis (see e.g.\ \citet[Chapter 2]{ioan2009duality}) cannot be directly applied here, since it is difficult to identify the dual space of the probability measures $\mathscr P$ on general spaces under the usual topologies induced by, e.g., the metric of weak convergence or the metric of total variation. To overcome this difficulty, our trick is to apply the Wasserstein metric of order 1 (see e.g.\ \citet{villani2009optimal}; also known as the Kantorovich-Rubinstein distance). Endowed with the induced topology, we obtain a Fenchel-Moreau-type dual representation in terms of Lipschitz functions; that is, the set $N$ in \eqref{eq:dual} becomes a subset of the Lipschitz functions on the hidden state space. Secondly, we show that the convexity and lower semicontinuity of the value function $\phi$ are preserved in the course of value iteration, where $\phi$ is iterated as on the right-hand side of \eqref{eq:phi}. This confirms the existence of an arbitrarily good approximation of the optimal value function by a convex and lower semicontinuous function that admits a dual representation as in \eqref{eq:dual}. Finally, since the topology with which the probability measure space is endowed differs from those used in the POMDP literature, we adapt the whole framework accordingly. We construct the weighted norm space of functions on the Wasserstein space. Under mild assumptions, we show that POMDPs with reward functions that are possibly unbounded from both sides can be reduced to standard MDPs equipped with a weighted norm, where the existence of an optimal solution is guaranteed. This leads to another contribution of this paper: our results extend the domain of potential applications in comparison with the literature on POMDPs on Borel spaces (see \citet{hernandez1989adaptive, feinberg2016partially}), where the reward function has to be bounded on at least one side. The paper is organized as follows. In Section \ref{sec:setup}, we introduce the setup of discounted POMDPs on Borel spaces and the concept of the Wasserstein metric, as well as its basic properties. In Section \ref{sec:optimal}, POMDPs are reduced to MDPs on Borel spaces of probability measures equipped with a weighted norm that is specifically constructed from the metric of the hidden state space. Adapting the MDP theory with weighted norms, we show the existence of an optimal solution and of an optimal deterministic policy for the POMDP. In Section \ref{sec:conjugate}, we present a duality theorem of the Fenchel-Moreau type on the Wasserstein space. The duality is then applied to POMDPs, and we obtain that, under mild assumptions, the optimal solution to the POMDP can be arbitrarily well approximated by a convex and lower semicontinuous function of the dual form.
Finally, in Section \ref{sec:example}, we present an example of a POMDP whose dynamics are given by a one-dimensional nonlinear Kalman filter, to demonstrate the applicability of the developed theory. \paragraph{Notation} A \emph{Polish space} is a complete separable metric space and a \emph{Borel space} is a Borel subset of a Polish space. If $\mathsf X$ is a Borel space, its Borel $\sigma$-algebra is denoted by $\mathcal B(\mathsf X)$. Let $\mathsf X$ and $\mathsf Y$ be two Borel spaces. A \emph{stochastic kernel on $\mathsf X$ given $\mathsf Y$} is a function $\psi(B|y), B \in \mathcal B(\mathsf X), y \in \mathsf Y$ such that i) $\psi(\cdot|y)$ is a probability measure on $\mathcal B(\mathsf X)$ for every fixed $y \in \mathsf Y$, and ii) $\psi(B|\cdot)$ is a measurable function on $\mathsf Y$ for every fixed $B \in \mathcal B(\mathsf X)$. \section{Model description and setup} \label{sec:setup} \subsection{POMDPs}\label{sec:pomdps} A partially observable Markov decision process (POMDP, see e.g., \citet[Chapter 4]{hernandez1989adaptive}) is composed of $(\mathsf X, \mathsf Y, \mathsf A, P, Q, \mu, r)$, where: \begin{enumerate}[(a)] \item $\mathsf X$, the (hidden or latent) state space, is a Polish space. Its metric is denoted by $d$. \item $\mathsf Y$, the observation space, is a Borel space. \item $\mathsf A$, the action space, is a Borel space. \item $P(dx'|x, a)$, the state transition law, is a stochastic kernel on $\mathsf X$ given $\mathsf K:= \mathsf X \times \mathsf A$; $\mathsf K$ is also a Borel space. \item $Q(dy |a, x)$, the observation kernel, is a stochastic kernel on $\mathsf Y$ given $\mathsf K$. $Q_0$, the initial observation kernel, is a stochastic kernel on $\mathsf Y$ given $\mathsf X$. \item $\mu \in \mathscr P$ is the initial distribution, with $\mathscr P$ being the space of all probability measures on $(\mathsf X, \mathcal B(\mathsf X))$. \item $r: \mathsf K \rightarrow \mathbb R$ is the one-step reward function, which is $\mathcal B(\mathsf K)$-measurable. \end{enumerate} The POMDP evolves as follows. At time $t=0$, the initial (hidden or latent) state $x_0$ follows a given prior distribution $\mu$, while the initial observation $y_0$ is generated according to the initial observation kernel $Q_0(\cdot|x_0)$. If at time $t$ the state of the system is $x_t$ and the control $a_t \in \mathsf A$ is applied, then the agent receives a reward $r(x_t, a_t)$ and the system transits to the state $x_{t+1}$ according to the transition law $P(dx_{t+1}| x_t, a_t)$. The observation $y_{t+1}$ is generated by the observation kernel $Q(dy_{t+1} | a_t, x_{t+1})$. The \emph{observed} history is defined as \begin{align} h_0 := \{ \mu, y_0 \} \in \mathsf H_0 \textrm{ and } h_{t} := \{\mu, y_0, a_0, \ldots, y_{t-1}, a_{t-1}, y_t \} \in \mathsf H_t, t = 1,2, \ldots, \label{eq:h} \end{align} where $\mathsf H_0 := \mathscr P \times \mathsf Y$ and $\mathsf H_{t+1} = \mathsf H_t \times \mathsf A \times \mathsf Y, t=0,1,\ldots.$ Notably, in contrast to canonical Markov decision processes (MDPs, see e.g., \citet{hernandez1989adaptive}), the states $\{ x_t\}$ are not observable; hence, a \emph{policy} depends only on the observed history. A deterministic policy $\boldsymbol \pi := [\pi_0, \pi_1, \ldots]$ is composed of a sequence of one-step policies $\pi_t: \mathsf H_t \rightarrow \mathsf A$ mapping the observed history up to time $t$ to an action. Let $\Pi$ be the set of all deterministic policies.
Note that even when randomized policies are allowed, it is known (\citet[Chapter 4]{hernandez1989adaptive}) that an optimal policy for a POMDP can always be chosen to be deterministic. Hence, we consider in this paper only deterministic policies. The Ionescu-Tulcea theorem (see e.g., \citet[pp.\ 140--141]{bertsekas1978stochastic}) implies that for each $\boldsymbol \pi \in \Pi$ and each initial $\mu \in \mathscr P$, along with $P$, $Q$ and $Q_0$, a probability measure $\mathbb P^{\boldsymbol \pi}_{\mu}$ and a stochastic process $\{X_t, Y_t, A_t\}$ can be defined in a canonical way on $(\mathsf X \times \mathsf Y \times \mathsf A)^\infty$, endowed with the product $\sigma$-algebra generated by the Borel $\sigma$-algebras $\mathcal B(\mathsf X)$, $\mathcal B(\mathsf Y)$ and $\mathcal B(\mathsf A)$. We denote by $\mathbb E^{\boldsymbol \pi}_{\mu}$ the expectation with respect to (w.r.t.) the probability measure $\mathbb P^{\boldsymbol \pi}_{\mu}$. We consider the following discounted cumulative rewards \begin{align} J_{T} (\boldsymbol \pi, \mu) := \mathbb E^{\boldsymbol \pi}_{\mu} \left[ \sum_{t=0}^T \alpha^t r(X_t, A_t) \right], \label{eq:t-stage} \end{align} where $\alpha \in (0,1)$ is the discount factor. The objective is then to maximize the expected reward over the set of deterministic policies $\Pi$, \begin{align*} \phi_{T}^*(\mu) := \sup_{\boldsymbol \pi \in \Pi} J_{T} (\boldsymbol \pi, \mu), \quad \mu \in \mathscr P. \end{align*} When $T = \infty$, we write $J_{\infty}$ and $\phi_\infty^*$ as $J$ and $\phi^*$, respectively. \subsection{Reduction to Markov decision process} We briefly show in this subsection that the POMDP can be reduced to a Markov decision process (MDP, see e.g.\ \citet{hernandez1999further}). We mostly follow the derivation of \citet[Chapter 4]{hernandez1989adaptive}. We first introduce the following notation: \begin{align} \tilde r(\mu, a) := \int r(x, a) \mu(dx), \ \textrm{ and } \ \tilde P(\mathsf B|\mu,a) := \int \mu(dx) P(\mathsf B|x,a), \mathsf B \in \mathcal B(\mathsf X). \end{align} For any $\mathsf C \in \mathcal B(\mathsf Y)$ and $\mathsf B \in \mathcal B(\mathsf X)$, we define \begin{align} R(\mathsf B, \mathsf C | \mu, a) := \int_{\mathsf B} Q(\mathsf C | a, x') \tilde P(dx'|\mu, a), \ \textrm{ and } \ \tilde R(\mathsf C | \mu, a) := & R(\mathsf X, \mathsf C | \mu, a). \end{align} \begin{proposition} There exists a stochastic kernel $M$ on $\mathsf X$ given $\mathscr P \times \mathsf A \times \mathsf Y$ such that for each $\mu \in \mathscr P$, $a \in \mathsf A$, $\mathsf B \in \mathcal B(\mathsf X)$ and $\mathsf C \in \mathcal B(\mathsf Y)$, \begin{align} R(\mathsf B, \mathsf C | \mu, a) = \int_{\mathsf C} M(\mathsf B | \mu, a, y) \tilde R(dy | \mu, a). \label{eq:RM} \end{align} \end{proposition} \noindent {\sc Proof.}\ Direct application of \citet[Corollary 7.27.1]{bertsekas1978stochastic}. {\quad $\square$} Here, $M$ can also be viewed as a mapping $\mathscr P \times \mathsf A \times \mathsf Y \rightarrow \mathscr P$. With a slight abuse of notation, let $M(\mu, a, y) := M(\cdot | \mu, a, y) \in \mathscr P.$ Then the stochastic kernel \begin{align} \tilde Q(\mathsf D | \mu, a) := \int \mathsf 1_{\mathsf D}(M(\mu, a, y)) \tilde R(dy|\mu, a), \quad \mathsf D \subset \mathscr P \textrm{ measurable}, \label{eq:tildeQ} \end{align} defines the new transition kernel on $\mathscr P$, whose metric will be specified in the next subsection. At time $t$, let $\mu_t \in \mathscr P$ be the posterior distribution.
Then, given an action $a_t \in \mathsf A$ and an observation $y_{t+1} \in \mathsf Y$, the \emph{successive} distribution $\mu_{t+1} \in \mathscr P$ is exactly given by \begin{align} \mu_{t+1} = M(\mu_t, a_t, y_{t+1}). \label{eq:Mmu} \end{align} Note that $\mu_{t+1}$ is a random measure, since $y_{t+1}$ is a random variable with distribution $\tilde R(\cdot|\mu_t, a_t)$. Hence, the POMDP can be reduced to a Markov decision process (MDP) with the \emph{belief} state space $\mathscr P$, the action space $\mathsf A$, the reward function $\tilde r$ on $\mathscr P \times \mathsf A$ and the transition kernel $\tilde Q$ on belief states defined above. Let $\tilde h_t$ be a $t$-stage history for the MDP described above: $$ \tilde h_t := \{ \mu_0, a_0, \ldots, \mu_{t-1}, a_{t-1}, \mu_t \} \in \tilde{\mathsf H}_t := \mathscr P \times (\mathsf A \times \mathscr P)^t, t = 0, 1, \ldots, $$ where $\mu_0$ is the initial posterior distribution and the $\mu_t \in \mathscr P$ are recursively defined by \eqref{eq:Mmu}. Given the original $t$-stage history $h_t$ defined in \eqref{eq:h}, let $m_t: \mathsf H_t \rightarrow \tilde{\mathsf H}_t$ be the mapping such that $m_t(h_t) = \tilde h_t, \forall h_t \in \mathsf H_t, t= 0, 1, \ldots.$ For a history-dependent MDP-policy $\boldsymbol \delta := [\delta_0, \delta_1, \ldots]$, where $\delta_t: \tilde{\mathsf H}_t \rightarrow \mathsf A$, we define its counterpart POMDP-policy as $\boldsymbol \pi^\delta = [\pi_0^\delta, \pi_1^\delta, \ldots]$, with $ \pi_t^\delta(h_t) = \delta_t(m_t(h_t)). $ Hence, if a policy $\boldsymbol \delta$ is optimal for the MDP, then its counterpart policy $\boldsymbol \pi^\delta$ is also optimal for the POMDP. For more details, we refer to \citet[Chapter 4]{hernandez1989adaptive} and references therein. After this reduction to an MDP, it is well known that under proper assumptions (see, e.g., \cite[Chapter 4]{hernandez1989adaptive} and \cite{feinberg2016partially}) the optimal $\phi^*$ for the infinite-stage case satisfies the following \emph{optimality equation}: \begin{align} \phi(\mu) = \mathcal T(\phi)(\mu) := \sup_{a \in \mathsf A} \left( \tilde r(\mu, a) + \alpha \int \phi\left(M(\mu, a, y) \right) \tilde R(dy|\mu, a) \right), \forall \mu \in \mathscr P, \label{eq:optimal} \end{align} where $\mathcal T$ is an operator on a space of functions on $\mathscr P$. We will show in Section \ref{sec:optimal} that, under general assumptions allowing reward functions that are unbounded from below and above, the existence of a solution of the above equation is guaranteed, as well as the existence of an optimal deterministic policy. \subsection{Wasserstein metric of order 1} In the literature, different topologies on $\mathscr P$, induced by different types of metrics, have been considered, such as weak convergence and total variation convergence (see, e.g., \citet{feinberg2016partially}). In this paper, we apply a new type of topology, induced by the Wasserstein metric of order 1. Let $(\mathsf X, d)$ be a Polish space with metric $d$. Then the Wasserstein space of order 1 is defined as \begin{align*} \mathscr P_{W_1} := \left\{ \mu \in \mathscr P \ \middle| \ \int d(x_0, x) \mu(dx) < \infty \right\} \end{align*} where $x_0 \in \mathsf X$ is arbitrary, and its distance (called the \emph{Wasserstein metric}) is defined as \begin{align} W_1(\mu, \nu) := \inf \left\{ \mathbb E \left[ d(X, Y) \right] \ \middle| \ \textrm{law}(X) = \mu, \textrm{law}(Y) = \nu \right\}.
\label{eq:wmetric} \end{align} For any real-valued function $f: \mathsf X \rightarrow \mathbb R$, its Lipschitz seminorm is defined as \begin{align*} \lVert f \rVert_{\lip} := \sup_{x \neq y} \frac{\lvert f(x) - f(y) \rvert}{d(x,y)}. \end{align*} Denote by $\mathscr L:= \left\{ f: \mathsf X \rightarrow \mathbb R \middle| \lVert f \rVert_{\lip} < \infty \right\}$ the space of all real-valued Lipschitz functions on $\mathsf X$ and by $[\mathscr L]_1 := \{ f\in \mathscr L | \lVert f \rVert_{\lip}\leq 1 \}$ the unit ball of $\mathscr L$. Then, it can be shown that $W_1$ has the following dual representation \begin{align} W_1(\mu, \nu) = \sup_{f \in [\mathscr L]_1} \left( \int f d\mu - \int f d\nu \right). \label{eq:kr-distance} \end{align} It is worth mentioning that, in this dual form, $W_1$ is also called the \emph{Kantorovich-Rubinstein distance}. For a more detailed discussion, we refer to \citet[Chapter 5]{villani2009optimal}. We have the following property (see \cite[Theorem 6.18]{villani2009optimal}): {\it If $(\mathsf X, d)$ is Polish, then $(\mathscr P_{W_1}, W_1)$ is also Polish.} Weighted norms have proved very useful when dealing with MDPs (see e.g.\ \cite{hernandez1999further}). We now introduce the weighted norm induced by the Wasserstein metric. First, we specify a weight function $w: \mathsf X \rightarrow [1, \infty)$ as follows \begin{align} w(x) := 1 + k \cdot d(x_0, x), x \in \mathsf X, \label{eq:weightfunc} \end{align} with some fixed $x_0 \in \mathsf X$ and a positive constant $k > 0$. This definition implies $w(x_0) = 1$. One can easily verify that $w$ is Lipschitz, and therefore continuous and measurable. Denote by $\mathscr L_{w}$ the space of all continuous functions $f$ on $\mathsf X$ such that \begin{align*} \lVert f \rVert_w := \sup_{x \in \mathsf X} \frac{\lvert f(x) \rvert}{w(x)} < \infty. \end{align*} Let $\mathscr P_w \subset \mathscr P$ be the space of probability measures $\mu$ on $\mathsf X$ satisfying $\int w d \mu < \infty$, equipped with the following concept of weak convergence. \begin{definition}\label{def:weak}\it (a) $\mu_n$ is said to converge weakly to $\mu$ in $\mathscr P_w$ if (i) for any bounded continuous function $f$ on $\mathsf X$, it follows that $ \int_{\mathsf X} f(x) \mu_n(dx) \rightarrow \int_{\mathsf X} f(x) \mu(dx) \textrm{ as } n \rightarrow \infty;$ and (ii) $\int_{\mathsf X} w(x) \mu_n(dx) \rightarrow \int_{\mathsf X} w(x) \mu(dx)$ as $n \rightarrow \infty$. (b) A function $\phi: \mathscr P_w \rightarrow \mathbb R$ is said to be lower (resp.\ upper) semicontinuous if $$\liminf_{n\rightarrow \infty} \phi(\mu_n) \geq \phi(\mu) \ (resp.\ \limsup_{n\rightarrow \infty} \phi(\mu_n) \leq \phi(\mu)),$$ whenever $\mu_n$ converges weakly to $\mu$. $\phi$ is said to be continuous if it is both lower and upper semicontinuous. \end{definition} \begin{remark} It is worth mentioning that the weak convergence defined above is stronger than the usual weak convergence, which requires only (i). Hence, to emphasize this difference, we call the latter \emph{canonical weak convergence} throughout the rest of this paper. \end{remark} \begin{proposition}\label{prop:weakconv} The following two statements are equivalent: (i) $\mu_n$ converges weakly to $\mu$ in $\mathscr P_w$ and (ii) $\int f d\mu_n \rightarrow \int f d \mu, \forall f \in \mathscr L_w$. \end{proposition} \noindent {\sc Proof.}\ See \cite[Definition 6.8]{villani2009optimal}.
{\quad $\square$} \begin{proposition} Let $w$ be defined as in \eqref{eq:weightfunc} with some $k > 0$ and $x_0 \in \mathsf X$. Then (i) $\mathscr P_{W_1} = \mathscr P_w$ and (ii) the weak convergence defined in Definition \ref{def:weak} is equivalent to convergence in $(\mathscr P_{W_1}, W_1)$; in other words, $W_1$ metrizes the weak convergence. \end{proposition} \noindent {\sc Proof.}\ (i) is obvious from the definitions. (ii) is a direct result of Theorem 6.9 of \citet{villani2009optimal}. {\quad $\square$} In the rest of this paper, we always assume that the weight function satisfies \eqref{eq:weightfunc}, and hence $\mathscr P_{w}$ is used interchangeably with $\mathscr P_{W_1}$. \section{Optimal solution on weighted space} \label{sec:optimal} In this section, we reduce POMDPs to MDPs on Borel spaces equipped with a suitable weighted norm, mainly following the approach of \citet[Chapter 8]{hernandez1999further}. Since the setting in this paper differs slightly from the one in \cite{hernandez1999further}, we restate an adapted version of the main results in Appendix \ref{sec:mdps} for the reader's convenience. First of all, by using the Wasserstein space $\mathscr P_w = \mathscr P_{W_1}$, the belief state space is Polish, and therefore a Borel space. We next specify the weight function and its weighted norm on $\mathscr P_w$. Let $w: \mathsf X \rightarrow [1,\infty)$ be the weight function defined in \eqref{eq:weightfunc}. Define $\tilde w : \mathscr P_w \rightarrow [1, \infty)$ as \begin{align} \tilde w(\mu) := \int w d\mu. \label{eq:tildew} \end{align} It is easy to check that $\tilde w$ is a continuous function and hence measurable on $(\mathscr P_w, W_1)$. Define the following space of functions on $\mathscr P_w$ with bounded $\tilde w$-norm: \begin{align*} \mathscr B_{\tilde w} := \left\{ \phi : \mathscr P_w \rightarrow \mathbb R \ \middle| \ \phi \textrm{ is } \mathcal B(\mathscr P_w)\textrm{-measurable}, \lVert \phi \rVert_{\tilde w} := \sup_{\mu \in \mathscr P_w}\frac{\lvert \phi(\mu) \rvert}{\tilde w(\mu)} < \infty\right\}. \end{align*} In the next subsection, we specify assumptions on the original POMDP that ensure Assumptions \ref{asmp:mdp1} -- \ref{asmp:mdp3} for MDPs. \subsection{Assumptions} We introduce the following assumption on the reward function. \begin{assumption}\label{asmp:reward} (i) There exists a positive constant $\bar r > 0$ such that $\lvert r(x,a) \rvert \leq \bar r w(x)$ for each $(x,a) \in \mathsf K.$ (ii) For each $x \in \mathsf X$, $a \mapsto r(x,a)$ is upper semicontinuous. \end{assumption} \begin{proposition}\label{prop:reward} Under Assumption \ref{asmp:reward}, (i) $\lvert \tilde r(\mu,a) \rvert \leq \bar r \tilde w(\mu), \forall (\mu,a) \in \mathscr P_w \times \mathsf A$; (ii) for each $\mu \in \mathscr P_w$, $a \mapsto \tilde r(\mu, a)$ is upper semicontinuous; (iii) for each $a \in \mathsf A$, $\mu \mapsto \tilde r(\mu, a)$ is continuous on $\mathscr P_w$. \end{proposition} \noindent {\sc Proof.}\ (i) For each $(\mu,a) \in \mathscr P_w \times \mathsf A$, we have $\lvert \tilde r(\mu,a) \rvert \leq \int \lvert r(x,a) \rvert \mu(dx) \leq \bar r \int w d\mu = \bar r \tilde w(\mu).$ (ii) Let $\{ a_n, n=1, 2,\ldots \}$ be a sequence of actions converging to $a_0$, and set $r_n(x) := r(x,a_n)$ for $n \in \mathbb N$, as well as $r_0(x) := r(x, a_0)$. By Assumption \ref{asmp:reward}(i), $r_n \leq \bar r w$.
Applying the reversed Fatou's lemma, we obtain \begin{align*} \limsup_{n \rightarrow \infty} \int r_n d\mu \leq \int \limsup_{n \rightarrow \infty} r_n d\mu \leq \int r_0 d \mu, \end{align*} where the last inequality is due to Assumption \ref{asmp:reward}(ii). Finally, (iii) is a direct result of Proposition \ref{prop:weakconv}(ii). {\quad $\square$} Similarly to the assumptions made in the MDP literature (see, e.g., \cite[Assumptions 8.3.2 and 8.3.3]{hernandez1999further}), we introduce the following assumption on the transition kernel $P$. \begin{assumption}\label{asmp:P} \begin{enumerate}[(i)] \item There exists a constant $\beta \in (0, \alpha^{-1})$ such that $$\int w(x') P(dx'|x,a) \leq \beta w(x), \forall (x,a) \in \mathsf X \times \mathsf A.$$ \item For each $x \in \mathsf X$, $a \mapsto \int w(x')P(dx'|x,a)$ is continuous. \end{enumerate} \end{assumption} Under the above assumption, we show that the new probability measure $M(\mu, a, y)$ belongs to $\mathscr P_w$ in an almost sure sense and, furthermore, that the transition kernel $\tilde Q$ is such that $\int \tilde w(\mu') \tilde Q(d\mu'|\mu,a)$ is continuous in $a$. \begin{proposition}\label{prop:transition} Suppose Assumption \ref{asmp:P} holds. Then for each $\mu \in \mathscr P_w$ and $a \in \mathsf A$, (i) $M(\mu, a, y) \in \mathscr P_w$, $\tilde R(\cdot|\mu, a)$-almost surely; (ii) $\int \tilde w(\mu') \tilde Q(d\mu'|\mu,a) \leq \beta \tilde w(\mu);$ and (iii) for each $\mu \in \mathscr P_w$, $a \mapsto \int \tilde w(\mu') \tilde Q(d\mu'|\mu,a)$ is continuous. \end{proposition} \noindent {\sc Proof.}\ Fix an arbitrary $(\mu, a) \in \mathscr P_w \times \mathsf A$. (i) Let $\mathsf C \in \mathcal B(\mathsf Y)$ be a subset such that $\tilde R(\mathsf C|\mu, a) > 0$. Then we have \begin{align*} \int_{\mathsf C} \int_{\mathsf X} w(x') M(dx'|\mu, a, y) \tilde R(dy|\mu, a) = &\int_{\mathsf X} Q(\mathsf C | a, x') w(x') \tilde P(dx'|\mu, a) \\ \leq & \int_{\mathsf X} Q(\mathsf Y | a, x') w(x') \tilde P(dx'|\mu, a) \\ = & \int_{\mathsf X} w(x') \tilde P(dx'|\mu, a) \leq \beta \int w d \mu < \infty. \end{align*} This implies that $\int_{\mathsf X} w(x') M(dx'|\mu, a, y) < \infty$, $\tilde R(\cdot|\mu, a)$-almost surely, and hence (i) holds. (ii) By definition, we have \begin{align*} \int \tilde w(\mu') \tilde Q(d\mu'|\mu,a) = & \int_{\mathsf Y} \tilde w(M(\mu, a, y)) \tilde R(dy | \mu, a) \\ \textrm{(by \eqref{eq:tildew})}\quad = & \int_{\mathsf Y} \int_{\mathsf X} w(x')M(dx'| \mu, a, y) \tilde R(dy | \mu, a) \\ \textrm{(by \eqref{eq:RM} and Fubini's theorem)}\quad = & \int_{\mathsf X} Q(\mathsf Y | a, x') w(x') \tilde P(dx'|\mu, a) \\ \leq & \beta \int w d\mu = \beta \tilde w(\mu). \end{align*} (iii) Note that the above calculation yields $$\int \tilde w(\mu') \tilde Q(d\mu'|\mu,a) = \int w(x') \tilde P(dx'|\mu, a) = \iint_{\mathsf X \times \mathsf X} w(x') P(dx'|x, a) \mu(dx).$$ Let $\{ a_n, n=1, 2,\ldots \}$ be a sequence of actions converging to $a_0$, and set $$f_n(x) := \int w(x') P(dx'|x, a_n), \quad n \in \mathbb N, \qquad f_0(x) := \int w(x') P(dx'|x, a_0).$$ Hence, the required continuity is equivalent to showing that $\lim_{n \rightarrow \infty}\int f_n d\mu = \int f_0 d\mu.$ Indeed, by Assumption \ref{asmp:P}(i), we have $f_n \leq \beta w, \forall n \in \mathbb N$. The reversed Fatou's lemma implies $\limsup_{n \rightarrow \infty} \int f_n d\mu \leq \int \limsup_{n \rightarrow \infty} f_n d\mu = \int f_0 d\mu.$ On the other hand, we have $f_n \geq -\beta w, \forall n \in \mathbb N$.
Then the extended Fatou's lemma implies $ \liminf_{n \rightarrow \infty} \int f_n d\mu \geq \int \liminf_{n \rightarrow \infty} f_n d\mu = \int f_0 d\mu.$ Combining the above two inequalities yields the convergence. {\quad $\square$} \begin{assumption}\label{asmp:PQ} For each $\mu \in \mathscr P_w$, there exist stochastic kernels $M$ on $\mathscr P$ given $\mathscr P_w \times \mathsf A \times \mathsf Y$ and $\tilde R$ on $\mathsf Y$ given $\mathscr P_w \times \mathsf A$ satisfying \eqref{eq:RM} such that, if $\{a_n \in \mathsf A, n=1,2,\ldots\}$ converges to $a_0 \in \mathsf A$ as $n \rightarrow \infty$, then \begin{enumerate}[(i)] \item there exist a subsequence $\{ a_{n_k} \} \subset \{ a_{n} \}$ and a measurable set $\bar{\mathsf C} \in \mathcal B(\mathsf Y)$ such that $\tilde R(\bar{\mathsf C} | \mu, a_0) = 1$ and, for all $y \in \bar{\mathsf C}$, $M(\mu, a_{n_k}, y)$ converges canonically weakly to $M(\mu, a_0, y)$; \item for each $\mathsf C \in \mathcal B(\mathsf Y)$, $\tilde R(\mathsf C |\mu, a_n) \rightarrow \tilde R(\mathsf C |\mu, a_0)$ as $n \rightarrow \infty$. \end{enumerate} \end{assumption} This assumption is inspired by Condition (c) in \cite[Theorem 3.2]{feinberg2016partially}. A sufficient condition for it will be discussed in the next section (see Remark \ref{rm:ascont}). \begin{proposition}\label{prop:tildeQ} Under Assumptions \ref{asmp:PQ} and \ref{asmp:P}, for each $\mu \in \mathscr P_w$, $a \mapsto \tilde Q(\cdot | \mu, a)$ is canonically weakly continuous. \end{proposition} \noindent {\sc Proof.}\ This canonical weak continuity is guaranteed by \citet[Theorem 3.4]{feinberg2016partially}. {\quad $\square$} Finally, to guarantee the existence of a ``selector'', we assume \begin{assumption}\label{asmp:A} $\mathsf A$ is compact. \end{assumption} Recall that the operator $\mathcal T: \mathscr B_{\tilde w} \rightarrow \mathscr B_{\tilde w}$ is defined as \begin{align*} \mathcal T_a(\phi)(\mu) := \tilde r(\mu, a) + \alpha \int \phi\left(M(\mu, a, y) \right) \tilde R(dy|\mu, a) \quad \textrm{and} \quad \mathcal T(\phi)(\mu) := \sup_{a \in \mathsf A} \mathcal T_a(\phi)(\mu). \end{align*} Under Assumptions \ref{asmp:reward} -- \ref{asmp:PQ}, it is guaranteed that for each $\mu \in \mathscr P_w$, $a \mapsto \mathcal T_a(\phi)(\mu)$ is upper semicontinuous (for a proof, see \cite[Lemma 8.3.7(a)]{hernandez1999further}). Hence, under the additional Assumption \ref{asmp:A}, the optimum over $a$ in the above optimization problem is always attained in $\mathsf A$ (see, e.g., \cite[Lemma 8.3.8(a)]{hernandez1999further}). Therefore, from now on we replace ``sup'' by ``max''. \subsection{Value iteration} The following \emph{value iteration} is a widely used method to compute the optimal solution for POMDPs, as well as for MDPs. Starting from an arbitrary \emph{value function} $\phi_0 \in \mathscr B_{\tilde w}$, at step $t$ we update the value function as follows: \begin{align*} \phi_{t+1}(\mu) = \mathcal T (\phi_t)(\mu) = \max_{a \in \mathsf A} \left( \tilde r(\mu, a) + \alpha \int \phi_t\left(M(\mu, a, y) \right) \tilde R(dy|\mu, a) \right). \end{align*} Finally, by an adapted version of Theorem 8.3.6 of \citet{hernandez1999further}, stated as Theorem \ref{th:mdps} in Appendix \ref{sec:mdps}, we obtain the following convergence. \begin{theorem}\label{thm:vi} Suppose that Assumptions \ref{asmp:reward} -- \ref{asmp:A} hold. Let $\beta$ be the constant in Assumption \ref{asmp:P}(i), let $\bar r$ be the constant in Assumption \ref{asmp:reward}(i), and define $\gamma := \alpha \beta \in (0,1)$.
Then \begin{enumerate}[(a)] \item the optimal value function $\phi^*$ is the unique fixed point of the operator $\mathcal T$ in $\mathscr B_{\tilde w}$, i.e., $\phi^* = \mathcal T(\phi^*)$, and $ \lVert \phi_t - \phi^*\rVert_{\tilde w} \leq \bar r \gamma^t/(1-\gamma), t = 1, 2, \ldots; $ \item there exists a selector $f^*: \mathscr P_w \rightarrow \mathsf A$ such that $$ \phi^*(\mu) = \tilde r(\mu, f^*(\mu)) + \alpha \int \phi^*\left(M(\mu, f^*(\mu), y) \right) \tilde R(dy|\mu, f^*(\mu)), \forall \mu \in \mathscr P_w, $$ and $\boldsymbol \pi^* = (f^*)^\infty$ is an optimal policy satisfying $\phi^*(\mu) = J(\boldsymbol \pi^*, \mu), \forall \mu \in \mathscr P_w$. \end{enumerate} \end{theorem} \noindent {\sc Proof.}\ The original POMDP specified in Subsection \ref{sec:pomdps} can be reduced to an MDP with $(\mathscr P_w, \mathsf A, \tilde r, \tilde Q)$. Under Assumptions \ref{asmp:reward} -- \ref{asmp:A}, Propositions \ref{prop:reward} -- \ref{prop:tildeQ} hold, and therefore the conditions required by Theorem \ref{th:mdps} in Appendix \ref{sec:mdps} are satisfied. The assertion is then a direct application of Theorem \ref{th:mdps}.{\quad $\square$} {\bf Comparison with the literature.} Most of the early literature on POMDPs (see e.g.\ \cite{sondik1978optimal, hernandez1989adaptive}) considers finite spaces or bounded reward functions only. In a recent work, \citet{feinberg2016partially} consider a more general setting with Borel spaces and reward/cost functions that are bounded on one side, while the other side is allowed to be unbounded. In this paper, we adopt the setting of MDPs in \cite{hernandez1999further} with Borel spaces and real-valued reward functions. By introducing a weight function, the reward function may be unbounded on both sides. Hence, we can cover applications that are not covered by \cite{feinberg2016partially}. We would also like to point out that the proof in \cite{feinberg2016partially} is essentially based on a monotonicity argument, while our proof is obtained by applying convergence under the weighted norm. \section{A conjugate approach} \label{sec:conjugate} \subsection{A conjugate duality on Wasserstein space} Let $(\mathsf X, d)$ be a Polish space and let $\mathscr P_{W_1}$ be the Wasserstein space of probability measures on $\mathsf X$ of order 1, equipped with the Wasserstein metric $W_1$. For any Lipschitz function $f \in \mathscr L$, it is easy to verify that $f$ is measurable and that the integral $\int f d \mu$ is finite for any $\mu \in \mathscr P_{W_1}$. Let $\phi: \mathscr P_{W_1} \rightarrow \bar{\mathbb R}$. The (Fenchel) conjugate function of $\phi$, $\rho: \mathscr L \rightarrow \bar{\mathbb R}$, is defined as \begin{align} \rho(f) := \sup_{\mu \in \mathscr P_{W_1}} \left( \int f d\mu - \phi(\mu) \right) \label{eq:conjugate} \end{align} and the second conjugate function $\phi^c: \mathscr P_{W_1} \rightarrow \bar{\mathbb R}$ is defined as \begin{align*} \phi^c(\mu) := \sup_{f \in \mathscr L} \left( \int f d \mu - \rho(f) \right). \end{align*} We can show the following conjugate duality. \begin{theorem}\label{th:fm} Let $\phi: \mathscr P_{W_1} \rightarrow \bar{\mathbb R}$ be a convex and lower semicontinuous function on $(\mathscr P_{W_1}, W_1)$ satisfying $\phi(\mu) > - \infty$ for all $\mu \in \mathscr P_{W_1}$ and $\phi(\mu) \in \mathbb R$ for some $\mu \in \mathscr P_{W_1}$. Then $\phi(\mu) = \phi^c(\mu)$, $\forall \mu \in \mathscr P_{W_1}$.
\end{theorem} \noindent {\sc Proof.}\ Since the proof is rather technical, we postpone it to Appendix \ref{sec:dual}. {\quad $\square$} Note that the conjugate function $\rho$ is always convex and has the following properties: \begin{itemize} \item[(i)] (translation invariance) for any constant $c \in \mathbb R$, $ \rho(f + c) = \rho(f) + c; $ and \item[(ii)] (monotonicity) for any $f, g$ satisfying $f(x) \leq g(x)$, $ \rho(f) \leq \rho(g). $ \end{itemize} This shows that $\rho$ is in fact a \emph{convex risk measure} (or utility functional), a notion that has been widely applied in mathematical finance (see e.g., \citet[Chapter 4]{follmer2004stochastic}). Let $\phi$ be a function on $\mathscr P_{W_1}$ and let $\rho$ be its conjugate as in \eqref{eq:conjugate}. Consider the following sets \begin{align*} \mathcal N_\phi := & \left\{ f \in \mathscr L \middle| \rho(f) = 0 \right\}, \textrm{ and } \\ \bar{\mathcal N}_\phi := & \left\{ f \in \mathscr L \middle| \rho(f) \leq 0 \right\} = \left\{ f \in \mathscr L \middle| \int f d \mu \leq \phi(\mu), \forall \mu \in \mathscr P_{W_1} \right\} . \end{align*} We call $\mathcal N_\phi$ the \emph{null level-set of $\phi$}, whereas the latter set $\bar{\mathcal N}_\phi$ is called the \emph{acceptance set of $\phi$} (cf.\ \cite[Section 4.1]{follmer2004stochastic}). Note that since $\rho$ is convex and lower semicontinuous, $\bar{\mathcal N}_\phi$ is convex and closed (see e.g.\ \cite[Theorem 2.2.9]{ioan2009duality}). We have the following dual representation. \begin{corollary}\label{coro:fm} Let $\phi: \mathscr P_{W_1} \rightarrow \bar{\mathbb R}$ be a function satisfying the conditions of Theorem \ref{th:fm}. Then $$\phi(\mu) = \sup_{f \in \mathcal N_\phi} \int f d\mu = \sup_{f \in \bar{\mathcal N}_\phi} \int f d\mu.$$ \end{corollary} \noindent {\sc Proof.}\ (a) We first show $\phi(\mu) = \sup_{f \in \mathcal N_\phi} \int f d\mu$. Indeed, Theorem \ref{th:fm} yields $$\phi(\mu) = \sup_{f \in \mathscr L} \left( \int f d \mu - \rho(f) \right) \geq \sup_{f \in \mathcal N_\phi} \left( \int f d \mu - \rho(f) \right) = \sup_{f \in \mathcal N_\phi} \int f d\mu.$$ Since there exists one $\mu$ such that $\phi(\mu) \in \mathbb R$, we have $\rho(f) > -\infty, \forall f \in \mathscr L$. Thus, \begin{align*} \sup_{f \in \mathscr L} \left( \int f d \mu - \rho(f) \right) = \sup_{f \in \mathscr L: \rho(f) < \infty} \left( \int f d \mu - \rho(f) \right) = \sup_{f \in \mathscr L: \rho(f) \in \mathbb R} \left( \int f d \mu - \rho(f) \right). \end{align*} Due to the translation invariance, $f' := f -\rho(f)$ satisfies $\rho(f') = 0$ whenever $\rho(f) \in \mathbb R$. Hence, \begin{align} \phi(\mu) = \sup_{f \in \mathscr L: \rho(f) \in \mathbb R} \left( \int f d \mu - \rho(f) \right) = \sup_{f': f'=f-\rho(f), f\in \mathscr L, \rho(f) \in \mathbb R} \int f' d \mu \leq \sup_{f' \in \mathcal N_\phi} \int f' d \mu. \label{eq:f'} \end{align} Combining the above two inequalities yields $\phi(\mu) = \sup_{f \in \mathcal N_\phi} \int f d\mu$. (b) We now show $\phi(\mu) = \sup_{f \in \bar{\mathcal N}_\phi} \int f d\mu$.
Theorem \ref{th:fm} yields $$\phi(\mu) = \sup_{f \in \mathscr L} \left( \int f d \mu - \rho(f) \right) \geq \sup_{f \in \bar{\mathcal N}_\phi} \left( \int f d \mu - \rho(f) \right) \geq \sup_{f \in \bar{\mathcal N}_\phi} \int f d\mu.$$ On the other hand, \eqref{eq:f'} yields $\phi(\mu) \leq \sup_{f' \in \mathcal N_\phi} \int f' d \mu \leq \sup_{f' \in \bar{\mathcal N}_\phi} \int f' d \mu.$ {\quad $\square$} \subsection{Application to POMDPs} \begin{lemma}\label{lm:convex} If $\phi: \mathscr P_w \rightarrow \mathbb R$ is convex, then $\mathcal T_a (\phi)$ is convex for every $a \in \mathsf A$, and therefore $\mathcal T(\phi)$ is convex as well. \end{lemma} \noindent {\sc Proof.}\ It is sufficient to show that $\mu \mapsto \tilde r(\mu,a) + \alpha \int \phi(M(\mu, a, y)) \tilde R(dy|\mu, a)$ is convex for each $a \in \mathsf A$. Indeed, take any action $a\in \mathsf A$ and let $\mu_1$ and $\mu_2$ be two arbitrary probability measures in $\mathscr P_w$. Take any $\kappa \in (0,1)$ and define $\mu_\kappa := \kappa \mu_1 + (1-\kappa) \mu_2$. By definition, for any $\mathsf B \in \mathcal B(\mathsf X)$ and $\mathsf C \in \mathcal B(\mathsf Y)$, we have \begin{align} R(\mathsf B, \mathsf C | \mu_\kappa, a) = & \kappa R(\mathsf B, \mathsf C | \mu_1, a) + (1-\kappa) R(\mathsf B, \mathsf C | \mu_2, a) \\ = & \kappa \int_{\mathsf C} M(\mathsf B | \mu_1, a, y) \tilde R(dy | \mu_1, a) + (1-\kappa) \int_{\mathsf C} M(\mathsf B | \mu_2, a, y) \tilde R(dy | \mu_2, a). \label{eq:rbc} \end{align} On the other hand, a simple calculation yields \begin{align} \tilde R(\mathsf C |\mu_\kappa, a) = \kappa \tilde R(\mathsf C |\mu_1, a) + (1-\kappa) \tilde R(\mathsf C |\mu_2, a), \forall \mathsf C \in \mathcal B(\mathsf Y). \label{eq:rkappa} \end{align} Hence, $\tilde R(\mathsf C |\mu_\kappa, a) = 0$ implies $\tilde R(\mathsf C |\mu_1, a) = 0$ and $\tilde R(\mathsf C |\mu_2, a) = 0, \forall \mathsf C \in \mathcal B(\mathsf Y)$. By the Radon-Nikodym theorem, there exist functions $f_i: \mathsf Y \times \mathsf A \rightarrow [0, \infty), i = 1, 2$, which are $\mathcal B(\mathsf Y)$-measurable for the fixed $a$, such that \begin{align} \tilde R(\mathsf C |\mu_i, a) = \int_{\mathsf C} f_i(y, a) \tilde R(dy |\mu_\kappa, a), i = 1, 2. \label{eq:rtildec} \end{align} Applying these two equations in \eqref{eq:rkappa} accordingly, we obtain $$\tilde R(\mathsf C |\mu_\kappa, a) = \int_{\mathsf C} \left( \kappa f_1(y,a) + (1-\kappa) f_2(y,a) \right) \tilde R(dy |\mu_\kappa, a), \forall \mathsf C \in \mathcal B(\mathsf Y),$$ which implies that $\kappa f_1(y,a) + (1-\kappa) f_2(y,a) = 1$, $\tilde R(\cdot |\mu_\kappa, a)$-almost surely.
In other words, there exists a Borel set $\bar{\mathsf C} \in \mathcal B(\mathsf Y)$ such that $\tilde R(\bar{\mathsf C}|\mu_\kappa, a) = 1$ and $\kappa f_1(y,a) + (1-\kappa) f_2(y,a) = 1, \forall y \in \bar{\mathsf C}.$ Applying \eqref{eq:rtildec} to \eqref{eq:rbc}, we obtain \begin{align*} R(\mathsf B, \mathsf C | \mu_\kappa, a) = & \int_{\mathsf C} \big[ \kappa M(\mathsf B | \mu_1, a, y) f_1(y, a) + (1-\kappa) M(\mathsf B | \mu_2, a, y) f_2(y, a) \big] \tilde R(dy |\mu_\kappa, a), \end{align*} and for each $\kappa \in (0,1)$, $M(\cdot | a, y, \kappa) := \kappa M(\cdot | \mu_1, a, y) f_1(y, a) + (1-\kappa) M(\cdot | \mu_2, a, y) f_2(y, a)$ is a valid stochastic kernel satisfying $R(\mathsf B, \mathsf C | \mu_\kappa, a) = \int_{\mathsf C} M(\mathsf B | a, y, \kappa) \tilde R(dy |\mu_\kappa, a), \forall \mathsf B \in \mathcal B(\mathsf X), \mathsf C \in \mathcal B(\mathsf Y).$ Finally, the convexity of $\phi$ implies \begin{align*} & \int \phi(M(\cdot|a,y, \kappa)) \tilde R(dy |\mu_\kappa, a) = \int_{\bar{\mathsf C}} \phi(M(\cdot|a,y, \kappa)) \tilde R(dy |\mu_\kappa, a)\\ \leq & \int_{\bar{\mathsf C}} \left[ \kappa f_1(y, a) \phi( M(\mu_1, a, y) ) + (1-\kappa) f_2(y, a) \phi( M( \mu_2, a, y) ) \right]\tilde R(dy |\mu_\kappa, a) \\ \leq & \kappa \int \phi( M( \mu_1, a, y) ) \tilde R(dy |\mu_1, a) + (1-\kappa) \int \phi( M( \mu_2, a, y) ) \tilde R(dy |\mu_2, a), \end{align*} which yields the required convexity. {\quad $\square$} We introduce the following assumption accompanying Assumption \ref{asmp:PQ}. \begin{assumption}\label{asmp:mucontinuity} For each $a \in \mathsf A$, there exist stochastic kernels $M$ on $\mathscr P$ given $\mathscr P_w \times \mathsf A \times \mathsf Y$ and $\tilde R$ on $\mathsf Y$ given $\mathscr P_w \times \mathsf A$ satisfying \eqref{eq:RM} such that, if $\{\mu_n \in \mathscr P_w, n=1,2,\ldots\}$ converges to $\mu_0 \in \mathscr P_w$ as $n \rightarrow \infty$, \begin{enumerate}[(i)] \item there exist a subsequence $\{ \mu_{n_k} \} \subset \{ \mu_{n} \}$ and a measurable set $\bar{\mathsf C} \in \mathcal B(\mathsf Y)$ such that $\tilde R(\bar{\mathsf C} | \mu_0, a) = 1$ and, for all $y \in \bar{\mathsf C}$, $M(\mu_{n_k}, a, y)$ converges canonically weakly to $M(\mu_0, a, y)$; \item for each $\mathsf C \in \mathcal B(\mathsf Y)$, $\tilde R(\mathsf C |\mu_n, a) \rightarrow \tilde R(\mathsf C |\mu_0, a)$ as $n \rightarrow \infty$. \end{enumerate} \end{assumption} \begin{lemma}\label{lm:lsc} Suppose Assumptions \ref{asmp:mucontinuity}, \ref{asmp:reward}(i) and \ref{asmp:P}(i) hold. Then, for any $\phi \in \mathscr B_{\tilde w}$, $\mathcal T_a (\phi)$ is continuous for any $a \in \mathsf A$ and therefore $\mathcal T(\phi)$ is lower semicontinuous. \end{lemma} \noindent {\sc Proof.}\ Fix one $a \in \mathsf A$. Note that Proposition \ref{prop:reward}(iii) ensures the continuity of the function $\mu \mapsto r(\mu, a)$. It remains to show that $\mu \mapsto \int \phi(\mu') \tilde Q(d\mu' | \mu, a)$ is continuous. By Assumption \ref{asmp:mucontinuity} and \cite[Theorem 3.4]{feinberg2016partially}, $\mu \mapsto \tilde Q(d \mu'|\mu, a)$ is canonically weakly continuous. Since $\phi \in \mathscr B_{\tilde w}$, there exists a constant $\bar \phi > 0$ such that $ \lvert \phi(\mu) \rvert \leq \bar \phi \tilde w(\mu) = \bar \phi \int w d\mu.$ Let $\phi'(\mu) := \phi(\mu) + \bar \phi \int w d\mu$, which is nonnegative. Hence, it is the pointwise limit of a nondecreasing sequence of bounded measurable functions $\{ \phi'_m \}$ with $\phi'_m \uparrow \phi'$.
Let $\{ \mu_n \in \mathscr P_w\}$ be a sequence converging under the $W_1$-metric to a limit $\mu_0 \in \mathscr P_w$. We then have \begin{align*} \liminf_{n \rightarrow \infty} \int \phi'(\mu') \tilde Q(d\mu' | \mu_n, a) \geq \liminf_{n \rightarrow \infty} \int \phi'_m(\mu') \tilde Q(d\mu' | \mu_n, a) = \int \phi'_m(\mu') \tilde Q(d\mu' | \mu_0, a). \end{align*} Hence, letting $m \rightarrow \infty$, monotone convergence yields \begin{align} \liminf_{n \rightarrow \infty} \int \phi'(\mu') \tilde Q(d\mu' | \mu_n, a) \geq \int \phi'(\mu') \tilde Q(d\mu' | \mu_0, a). \label{eq:phi'} \end{align} On the other hand, we have for each $n \in \mathbb N$, \begin{align*} \int \tilde w(\mu') \tilde Q(d\mu' | \mu_n, a) = & \int_{\mathsf Y} \int_{\mathsf X} w(x') \tilde P(dx'|\mu_n, a) Q(dy|x', a) = \int_{\mathsf X} \int_{\mathsf X} w(x') P(dx'|x,a) \mu_n(dx). \end{align*} Note that by Assumption \ref{asmp:P}(i), we have for each $a\in \mathsf A$, $w'(x, a) := \int_{\mathsf X} w(x') P(dx'|x,a) \in \mathscr L_w$. Proposition \ref{prop:weakconv}(ii) yields that $ \lim_{n \rightarrow \infty} \int \tilde w(\mu') \tilde Q(d\mu' | \mu_n, a) = \int \tilde w(\mu') \tilde Q(d\mu' | \mu_0, a). $ Hence, \eqref{eq:phi'} implies that $\liminf_{n \rightarrow \infty} \int \phi(\mu') \tilde Q(d\mu' | \mu_n, a) \geq \int \phi(\mu') \tilde Q(d\mu' | \mu_0, a).$ In other words, $\mu \mapsto \int \phi(\mu') \tilde Q(d\mu' | \mu, a)$ is lower semicontinuous. Applying this fact to $-\phi$ in lieu of $\phi$, we obtain that $\mu \mapsto \int \phi(\mu') \tilde Q(d\mu' | \mu, a)$ is also upper semicontinuous. Thus the required continuity holds. {\quad $\square$} We immediately obtain the following result. \begin{theorem} Suppose Assumptions \ref{asmp:mucontinuity}, \ref{asmp:reward}(i) and \ref{asmp:P}(i) hold. If $\phi \in \mathscr B_{\tilde w}$ is convex, then $\mathcal T(\phi)$ is convex and lower semicontinuous. \end{theorem} \begin{remark}\label{rm:ascont} \citet[Theorem 3.6]{feinberg2016partially} show that one sufficient condition guaranteeing both Assumptions \ref{asmp:PQ} and \ref{asmp:mucontinuity} is that (i) the stochastic kernel $P(dx'|x,a)$ is canonically weakly continuous and (ii) the stochastic kernel $Q(dy|x,a)$ is continuous in total variation. In addition, it is demonstrated in \cite[Example 4.1]{feinberg2016partially} that the latter continuity in total variation cannot be weakened to canonical weak continuity. \end{remark} \subsection{Set iteration} Recall that Corollary \ref{coro:fm} implies that a convex and lower semicontinuous function $\phi$ admits a representation $\phi(\mu) = \sup_{f \in \mathcal N} \int f d\mu$ with some set $\mathcal N \subset \mathscr L$. Hence, instead of iterating the value function, we can iterate the acceptance set, which is described as follows. \begin{algorithm}\label{algo:1} Start with any set $\bar{\mathcal N}_0 \subset \mathscr L$. At time $t$, update the acceptance set using the following two steps: \begin{align*} \phi_{t+1}(\mu) = & \max_{a \in \mathsf A} \left( r(\mu, a) + \alpha \int \left( \sup_{f \in \bar{\mathcal N}_t} \int f(x') M(dx'|\mu, a, y) \right) \tilde R(dy|\mu, a) \right)\\ \bar{\mathcal N}_{t+1} = & \left\{ f\in\mathscr L \ \middle| \ \phi_{t+1}(\mu) \geq \int f d\mu , \forall \mu \in \mathscr P_w \right\}. \end{align*} These steps are repeated until some stopping criterion is satisfied. \end{algorithm} An iteration of null level-sets can be designed analogously and is therefore omitted.
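To make the set iteration concrete, the following is a minimal numerical sketch for a POMDP with finite state, observation and action spaces, in which the (infinite) acceptance set $\bar{\mathcal N}_t$ is replaced by a finite family of functions $f: \mathsf X \rightarrow \mathbb R$, stored as rows of a matrix, in the spirit of classical $\alpha$-vector backups; all kernels, dimensions and names below are illustrative placeholders, not part of the preceding analysis.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
nX, nY, nA = 4, 3, 2             # finite state/observation/action spaces
disc = 0.95                      # discount factor alpha

P = rng.random((nA, nX, nX)); P /= P.sum(axis=2, keepdims=True)  # P(x'|x,a)
Q = rng.random((nA, nX, nY)); Q /= Q.sum(axis=2, keepdims=True)  # Q(y|x',a)
r = rng.random((nX, nA))                                         # r(x,a)

def point_backup(b, F):
    """One Bellman backup at belief b; the acceptance set is replaced by
    the finite family F (rows are functions f: X -> R).  Returns the
    supporting function ("alpha-vector") realizing the maximum at b."""
    best_val, best_vec = -np.inf, None
    for a in range(nA):
        # g[f, y, x] = sum_{x'} P(x'|x,a) Q(y|x',a) f(x')
        g = np.einsum('fk,xk,ky->fyx', F, P[a], Q[a])
        # choosing, for each y, the f with the largest belief-weighted
        # value realizes sup_f int f dM(.|b,a,y), weighted by R~(y|b,a)
        choice = np.argmax(g @ b, axis=0)
        vec = r[:, a] + disc * sum(g[choice[y], y] for y in range(nY))
        if vec @ b > best_val:
            best_val, best_vec = vec @ b, vec
    return best_vec

# iterate on a small grid of beliefs, starting from the zero function
beliefs = [np.eye(nX)[i] for i in range(nX)] + [np.full(nX, 1.0 / nX)]
F = np.zeros((1, nX))
for t in range(100):
    F = np.array([point_backup(b, F) for b in beliefs])
phi = lambda mu: np.max(F @ mu)       # phi_t(mu) = sup_{f in F} <f, mu>
print("phi at the uniform belief:", float(phi(np.full(nX, 1.0 / nX))))
\end{verbatim}
Each backup returns a supporting function at the chosen belief point, so that $\phi_t(\mu) = \sup_{f} \int f d\mu$ is evaluated as a finite maximum over the stored rows.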
Note that in the course of the iteration, Lemmas \ref{lm:convex} and \ref{lm:lsc} guarantee that $\phi_t$ is convex and lower semicontinuous for each $t = 1, 2, \ldots.$ Hence, Corollary \ref{coro:fm} ensures that $\phi_t(\mu) = \sup_{f \in \bar{\mathcal N}_t} \int f d\mu, \forall \mu \in \mathscr P_w,$ and for each $t = 1, 2, \ldots.$ By Theorem \ref{thm:vi}(a), we immediately obtain the following result. \begin{theorem} Suppose Assumptions \ref{asmp:reward}, \ref{asmp:P}, \ref{asmp:PQ}, \ref{asmp:A} and \ref{asmp:mucontinuity} hold. Let $\phi^*$ be the optimal value function for the POMDP, and let $\bar r > 0$ and $\gamma \in (0,1)$ be the constants as in Theorem \ref{thm:vi}(a). Then, \begin{align*} \lVert \phi^* - \phi_t \rVert_{\tilde w} \leq \bar{r} \gamma^t/(1-\gamma), \textrm{ where } \phi_t(\mu) = \sup_{f \in \bar{\mathcal N}_t} \int f d \mu, \forall \mu \in \mathscr P_w. \end{align*} \end{theorem} This implies that the optimal value function $\phi^*$ can be approximated arbitrarily well by a convex and lower semicontinuous function $\phi$ in the dual form. \begin{corollary} Suppose Assumptions \ref{asmp:reward}, \ref{asmp:P}, \ref{asmp:PQ}, \ref{asmp:A} and \ref{asmp:mucontinuity} hold. For any $\epsilon > 0$, there exists a set $\mathcal N^\epsilon \subset \mathscr L$ satisfying $ \lVert \phi^* - \phi^\epsilon \rVert_{\tilde w} \leq \epsilon, \textrm{ where } \phi^\epsilon(\mu) := \sup_{f \in \mathcal N^\epsilon} \int f d \mu, \forall \mu \in \mathscr P_w. $ \end{corollary} Similar to $Q$-value iteration in the MDP literature (see e.g.\ \citet{hernandez1996discrete}), we can iterate acceptance sets depending on $a$. \begin{algorithm}\label{algo:2} Start with an initial set $\bar{\mathcal N}_0 \subset \mathscr L$ and set $\bar{\mathcal N}_0^a = \bar{\mathcal N}_0, \forall a \in \mathsf A$. At time $t$, update the acceptance sets using the following two steps: \begin{align*} \phi_{t+1}(\mu, a) = & r(\mu, a) + \alpha \int \left( \sup_{f \in \bigcup_{a \in \mathsf A} \bar{\mathcal N}_t^a} \int f(x') M(dx'|\mu, a, y) \right) \tilde R(dy|\mu, a) \\ \bar{\mathcal N}_{t+1}^a = & \left\{ f\in\mathscr L \ \middle| \ \phi_{t+1}(\mu, a) \geq \int f d\mu, \forall \mu \in \mathscr P_w \right\}. \end{align*} These steps are repeated until some stopping criterion is satisfied. \end{algorithm} Under Assumptions \ref{asmp:reward}, \ref{asmp:P}, \ref{asmp:PQ} and \ref{asmp:A}, the iteration stated in Algorithm \ref{algo:2} satisfies \begin{align*} \lVert \phi^*(\cdot) - \max_{a \in \mathsf A} \phi_t(\cdot, a) \rVert_{\tilde w} \leq \bar{r} \gamma^t/(1-\gamma). \end{align*} \paragraph{$Q$ with a reference measure} Let us assume that there exists a reference (probability) measure $\varphi$ on $\mathsf Y$ such that $Q(\cdot | x',a) \ll \varphi(\cdot)$ for all $(x',a) \in \mathsf X \times \mathsf A$. Note that POMDPs in many applications satisfy this assumption; for example, it holds automatically if the observation space is finite. Hence, the density of $Q$ w.r.t.\ $\varphi$ exists and is denoted by $q(y|x',a)$. In this case, the iteration can be further simplified. Indeed, it is easy to verify that $$ M(dx'|\mu, a, y) = \frac{\tilde P(dx'|\mu, a)q(y|x',a)}{\int_{\mathsf X} \tilde P(dx'|\mu, a)q(y|x',a)} \quad \textrm{and} \quad \tilde R(dy|\mu, a) = \int_{\mathsf X} \tilde P(dx'|\mu, a)q(y|x',a) \varphi(dy) $$ satisfy \eqref{eq:RM}. Under this setup, the calculation of the iteration becomes much simpler.
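When the observation space is finite, the counting measure can serve as the reference measure $\varphi$, and the displayed formulas for $M$ and $\tilde R$ reduce to the familiar Bayes update. The following minimal sketch implements them for finite spaces; the kernels are randomly generated placeholders.
\begin{verbatim}
import numpy as np

def belief_update(mu, a, y, P, q):
    """M(.|mu,a,y) and R~(y|mu,a) per the displayed formulas, for finite
    spaces where q(y|x',a) is the density of Q w.r.t. the counting
    measure on Y."""
    pred = mu @ P[a]                      # \tilde P(x'|mu, a)
    w = pred * q[a][:, y]                 # \tilde P(dx'|mu,a) q(y|x',a)
    Rtil = w.sum()                        # \tilde R(y|mu, a)
    M = w / Rtil if Rtil > 0 else pred    # posterior (prior if y impossible)
    return M, Rtil

# toy usage with randomly generated placeholder kernels
rng = np.random.default_rng(1)
nX, nY, nA = 3, 2, 2
P = rng.random((nA, nX, nX)); P /= P.sum(axis=2, keepdims=True)
q = rng.random((nA, nX, nY)); q /= q.sum(axis=2, keepdims=True)
M, Rtil = belief_update(np.array([0.5, 0.3, 0.2]), 0, 1, P, q)
print("posterior:", M, "  marginal:", Rtil)
\end{verbatim}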
Suppose $\phi \in \mathscr B_{\tilde w}$ is convex and lower semicontinuous; then we have by Corollary \ref{coro:fm}, \begin{align} \mathcal T_{a}(\phi)(\mu) = r(\mu, a) + \alpha \int \left( \sup_{f \in \bar{\mathcal N}_\phi} \int f(x') \tilde P(dx'|\mu, a)q(y|x',a) \right) \varphi(dy). \label{eq:Ta} \end{align} In particular, the continuity of $Q$ in total variation mentioned in Remark \ref{rm:ascont} reads \begin{align*} \int \lvert q(y|x_n, a_n) - q(y|x_0, a_0) \rvert \varphi(dy) \rightarrow 0, \textrm{ as } (x_n, a_n) \rightarrow (x_0, a_0). \end{align*} \section{An example} \label{sec:example} In this section, we apply our results to the nonlinear Kalman filter in discrete time, which has been widely used in signal processing and finance (see e.g.\ \citet{bain2009fundamentals}). To adapt it to the POMDP framework, we introduce a control variable in the dynamics of both the hidden states and the observations. We consider the one-dimensional case only, but the results can easily be extended to multidimensional cases. By this example we would like to demonstrate that the assumptions made in the previous sections can be satisfied for a properly chosen metric. Consider the simple one-dimensional linear model \begin{align*} x_{t+1} = d + b(a_t) x_t + \sigma n_t \end{align*} where $n_t$ is i.i.d.\ white noise with $\sigma > 0$ being the standard deviation, $d$ is a real constant, and $b: \mathsf A \rightarrow \mathbb R$ is a continuous function on $\mathsf A$ satisfying $\sup_{a \in \mathsf A} \lvert b(a)\rvert = \epsilon < 1$. Hence, the transition kernel $P$ is given by \begin{align*} P(dx'|x,a) = \frac{1}{\sqrt{2\pi} \sigma} e^{-\frac{(x' - d - b(a)x)^2}{2 \sigma^2}} dx'. \end{align*} We assume that the observation dynamics are given by \begin{align*} y_t = h(x_t, a_t) + \tilde \sigma \tilde n_t \end{align*} where $h: \mathbb R \times \mathsf A \rightarrow \mathbb R$ is a continuous function on $\mathbb R \times \mathsf A$, and $\tilde n_t$ is i.i.d.\ white noise, independent of $n_t$, with $\tilde \sigma > 0$ being the standard deviation. Hence, $Q$ is given by \begin{align*} Q(dy|x, a) = \frac{1}{\sqrt{2\pi} \tilde \sigma} e^{-\frac{(y - h(x, a))^2}{2 \tilde \sigma^2}} dy. \end{align*} Let $d(x,x_0) = \lvert x - x_0 \rvert$ be the Euclidean distance on $\mathbb R$ and set $x_0 = 0$. Then \begin{align*} \left( \int d(x',x_0) P(dx'|x, a) \right)^2 \leq \int d^2(x',x_0) P(dx'|x, a) = \sigma^2 + (d+b(a)x)^2. \end{align*} Since $b^2(a) \leq \epsilon^2 < 1$ for all $a \in \mathsf A$, there exist $\tilde \beta \in (\epsilon, 1)$ and $K > 0$ such that \begin{align*} \int d(x',x_0) P(dx'|x, a) \leq \tilde \beta d(x, x_0) + K, \forall x \in \mathbb R. \end{align*} For any discount factor $\alpha \in (0,1)$, we may select any $\beta \in (1, 1/\alpha)$ and set $k := \frac{\beta - 1}{K}$. The above inequality yields \begin{align*} \int \left(1 + k d(x',x_0)\right) P(dx'|x, a) \leq & 1 + \frac{\beta - 1}{K}(\tilde \beta d(x, x_0) + K) = \beta + k \tilde\beta d(x, x_0) \\ \leq & \beta (1 + k d(x, x_0)), \end{align*} where the last inequality uses $\tilde \beta < 1 < \beta$. Hence, Assumption \ref{asmp:P}(i) holds. It is easy to verify that Assumption \ref{asmp:P}(ii) holds due to the continuity of $b$. For the reward function, we assume that there exists a positive constant $\tilde r < \infty$ such that $\lvert r(x,a) \rvert \leq \tilde r(1+d(x,x_0)) = \tilde r(1+\lvert x \rvert)$ holds for all $(x,a) \in \mathsf K.$ Furthermore, we assume that $a \mapsto r(x,a)$ is upper semicontinuous. Then, it is easy to verify Assumption \ref{asmp:reward}.
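The drift inequality derived above can also be checked numerically. The following Monte Carlo sketch uses illustrative constants $d$, $\sigma$, $\epsilon$ and a particular choice of $b(a)$; these are assumptions made only for this demonstration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, sigma, eps = 0.4, 1.0, 0.8        # sup_a |b(a)| = eps < 1
alpha = 0.95                         # discount factor
tbeta, K = 0.9, sigma + abs(d)       # then E|x'| <= tbeta |x| + K
beta = 0.5 * (1 + 1 / alpha)         # some beta in (1, 1/alpha)
k = (beta - 1) / K
n = rng.standard_normal(200_000)
for x in [-50.0, -2.0, 0.0, 3.0, 40.0]:
    ba = eps * np.sin(x)             # some admissible |b(a)| <= eps
    lhs = np.mean(1 + k * np.abs(d + ba * x + sigma * n))
    rhs = beta * (1 + k * abs(x))
    print(f"x = {x:6.1f}:  {lhs:.4f} <= {rhs:.4f}  ->  {lhs <= rhs}")
\end{verbatim}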
It is worth mentioning that the reward function in this example is allowed to be unbounded on both sides. Finally, it remains to verify Assumptions \ref{asmp:PQ} and \ref{asmp:mucontinuity}. By Remark \ref{rm:ascont} (see \citet[Theorem 3.6]{feinberg2016partially}), it is sufficient to show that $P(\cdot|x, a)$ is canonically weakly continuous and $Q(dy|x, a)$ is continuous in total variation. The weak continuity is easy to verify due to the continuity of $b$. We only show the continuity of $Q(dy|x, a)$ in total variation. Let $q(y, x) := \frac{1}{\sqrt{2\pi} \tilde \sigma} e^{-\frac{(y - h(x))^2}{2 \tilde \sigma^2}}$, suppressing the dependence on $a$ for brevity. Hence, it suffices to show that \begin{align} \int_{-\infty}^\infty \left\lvert q(y, x_n) - q(y, x_0) \right\rvert dy = \int_{-\infty}^\infty \left\lvert \frac{q(y, x_n)}{q(y, x_0)} - 1 \right\rvert q(y, x_0) dy \rightarrow 0, \textrm{ as } n\rightarrow \infty.\label{eq:tv} \end{align} Indeed, since $h$ is continuous, we have for each $y \in \mathbb R$, $\lim_{n \rightarrow \infty} \frac{q(y, x_n)}{q(y, x_0)} = 1$. On the other hand, we have \begin{align*} \frac{q(y, x_n)}{q(y, x_0)} \leq e^{\frac{h^2(x_0) - h^2(x_n)}{2\tilde \sigma^2}} e^{\frac{\delta^2 y^2 }{2 \tilde \sigma^2 } + \frac{ (h(x_0) - h(x_n))^2}{2 \delta^2\tilde \sigma^2 }}, \forall \delta > 0, \end{align*} and $\lvert h(x_n) \rvert$ is uniformly bounded w.r.t.\ $n$ due to the continuity of $h$. Since $\delta$ can be chosen arbitrarily small, we may select $\delta$ such that $\int e^{\frac{\delta^2 y^2 }{2 \tilde \sigma^2 }} q(y, x_0) dy < \infty$. Hence, there exists a function $g(y)$ such that $\frac{q(y, x_n)}{q(y, x_0)} \leq g(y)$ and $\int g(y) q(y, x_0) d y < \infty.$ By the Fatou-Lebesgue theorem, we obtain the required convergence stated in \eqref{eq:tv}.
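As a numerical illustration of \eqref{eq:tv}, the following sketch compares the left-hand side with the closed-form total variation distance between two Gaussians of equal variance, $2\,\mathrm{erf}\big(\lvert h(x_n)-h(x_0)\rvert/(2\sqrt 2\, \tilde\sigma)\big)$; the particular $h$ and the constants are illustrative assumptions.
\begin{verbatim}
import numpy as np
from math import erf, sqrt, pi

sig_t = 0.7                          # tilde sigma
h = np.tanh                          # an illustrative continuous h
def q(y, x):                         # observation density q(y, x)
    return np.exp(-(y - h(x))**2 / (2 * sig_t**2)) / (sqrt(2 * pi) * sig_t)

y = np.linspace(-12.0, 12.0, 200001)
dy = y[1] - y[0]
x0 = 0.3
for xn in [1.0, 0.5, 0.35, 0.31, 0.301]:
    l1 = np.sum(np.abs(q(y, xn) - q(y, x0))) * dy      # lhs of (eq:tv)
    exact = 2 * erf(abs(h(xn) - h(x0)) / (2 * sqrt(2) * sig_t))
    print(f"x_n = {xn:6.3f}:  L1 = {l1:.6f}   closed form = {exact:.6f}")
\end{verbatim}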
\section{Introduction} \label{Intro} \renewcommand{\theequation}{1.\arabic{equation}}\setcounter{equation}{0} The inflationary paradigm not only resolves several long-standing puzzles in the standard big-bang cosmology \cite{ag1981}, but also explains the origin of the large scale structure in the cosmos \cite{lss}. However, the classical inflationary spacetimes are past incomplete, and the big bang singularity is inevitable if the solutions are evolved backward to the regime where the energy density of the universe gets close to the Planck scale \cite{BV94}. In order to extend the inflationary paradigm to the Planckian regime, quantum geometrical effects have to be taken into consideration. One of the successful attempts to achieve this goal is loop quantum cosmology (LQC), which results from applying the techniques of loop quantum gravity (LQG) to cosmological settings where a symmetry reduction is first performed before the quantization \cite{review2}. The evolution of the quantum spacetime in LQC is governed by a non-singular quantum difference equation which results in a resolution of the big bang singularity, replacing it with a quantum bounce when the spacetime curvature becomes Planckian \cite{aps1,aps2,aps3,acs2010}. The robustness of this result has been shown for a variety of isotropic and anisotropic spacetimes in the presence of a massless scalar field and constant potentials \cite{review2}. Recently, there has even been progress on quantizing the inflationary spacetimes by using the reduced phase space quantization of LQC, where the role of the physical clocks is played by the dust and Klein-Gordon fields \cite{gls2020}, which indicates singularity resolution as in other models of LQC. The phenomenological implications for the inflationary background and perturbations in LQC have been studied using the so-called effective spacetime description (see \cite{as2017} for a review), whose validity has been verified for isotropic and anisotropic spacetimes \cite{numlsu-1,numlsu-2,numlsu-3,numlsu-4}. In addition to the standard LQC, in which the Lorentzian term of the classical Hamiltonian constraint is treated in the same way as the Euclidean term in the Friedmann-Lema\^itre-Robertson-Walker (FLRW) universe, the robustness of the singularity resolution has also been studied with respect to different quantizations of the classical Hamiltonian constraint in modified LQC. Two notable examples are the so-called modified LQC-I (mLQC-I) \cite{YDM09,DL17,mehdi,lsw2018} and modified LQC-II (mLQC-II) models \cite{YDM09,lsw2018}. These two models differ from the standard LQC by different regularizations of the Lorentzian term, which result in fourth-order non-singular quantum difference equations \cite{ss6}. It has been shown that in these models there is a generic resolution of the singularity as in LQC \cite{ss5}. The big bang singularity is resolved in the Planck regime and replaced by a quantum bounce, and the inflationary phase can naturally take place with a high likelihood when the inflaton field is coupled to the gravitational sector with an inflationary potential \cite{lsw2018,lsw2018b,lsw2019}. Although the dynamics in LQC and mLQC-II are qualitatively similar, the differences between LQC and mLQC-I become manifest in the contracting phase, where an emergent quasi-de Sitter space with a Planck-scale cosmological constant is present in mLQC-I, implying that the contracting phase in mLQC-I is a purely quantum regime without a classical limit.
\footnote{The existence of such a phase is not confined to FLRW models but arises even with the standard loop quantization in certain anisotropic spacetimes \cite{djs}.} Given these different regularizations of the Hamiltonian constraint in LQC, an important question is whether the physical predictions resulting from different quantum spacetimes are robust for cosmological perturbations. To answer this question, one needs to carefully understand the way modifications in the Hamiltonian constraint result in modifications to the primordial power spectrum. In the literature, there currently exist four primary approaches which address the impact of the quantum geometry on the primordial power spectra in isotropic LQC (for earlier works see e.g.\ \cite{pert-old}). These are the deformed algebra approach \cite{bhks2008,cbgv2012,cmbg2012}, the separate universe approach \cite{wilson2017}, the dressed metric approach \cite{aan2012,aan2013,aan2013b} and the hybrid approach \cite{mm2012, mm2013, gmmo2014,gbm2015,mo2016} (for a recent discussion of similar ideas in anisotropic Bianchi I LQC spacetimes see Refs. \cite{b1-lett,b1-long}). Among these, the latter two approaches have been most widely studied in recent years \cite{d1,d2, bo2016,gbmo2016, tao2017,tao2018,abs2018,nbm2018,nbm2018a}. The dressed metric approach is based on the work by Langlois on the gauge-invariant perturbations in the Arnowitt-Deser-Misner (ADM) phase space \cite{lang94}, where the lapse and shift are treated as Lagrange multipliers.\footnote{This restriction can be lifted in the extended phase space, where a generalization of Langlois' treatment has recently been found which allows the construction of gauge-invariant variables other than the Mukhanov-Sasaki variable in the canonical theory \cite{giesel1,giesel2}.} \footnote{A treatment similar to Langlois' analysis for Bianchi-I spacetimes has been carried out in \cite{b1-class}.} In this approach, after expanding the scalar constraint up to the second order in the perturbations, the zeroth-order scalar constraint is loop quantized and the second-order scalar constraint becomes the physical Hamiltonian that prescribes the dynamics of the inhomogeneous linear perturbations. After quantization, the inhomogeneous gauge-invariant degrees of freedom can be interpreted as propagating on a quantum background spacetime which is described by a dressed metric. Furthermore, when the test-field approximation is employed, in which the background quantum states are sharply peaked around the classical trajectories at late times, the quantum-corrected Mukhanov-Sasaki equation takes the same form as its classical counterpart as long as the relevant background quantities in the equation follow the effective dynamics of LQC. Recently, the dressed metric approach has also been extended to mLQC-I/II with special emphasis on the physical consequences of the regularization ambiguities of the conjugate momentum of the scale factor \cite{lsw2020}. Other relevant work on applying the dressed metric approach to mLQC-I can be found in \cite{IA19}. Though the hybrid approach shares a common feature with the dressed metric approach in the sense that the perturbations are Fock quantized while the background is loop quantized, it has some important differences.
Based on the work by Halliwell and Hawking \cite{hawking}, in this approach one usually assumes the spatial geometry to be a three-torus and then expands the spacetime metric and the scalar field in the bases formed by the eigenfunctions of the Laplace-Beltrami operator compatible with the auxiliary three-metric. After truncating the total Hamiltonian to the second order in the perturbations, a canonical transformation which concerns both the background variables and the inhomogeneous degrees of freedom is implemented to express the Hamiltonian in terms of the gauge-invariant observables, including the Mukhanov-Sasaki variable, the abelianized linear perturbative constraints and their respective conjugate variables, while keeping the full canonical structure of the system. The conjugate momentum of the Mukhanov-Sasaki variable is also carefully chosen so that a unitary implementation of the quantum dynamics can be achieved \cite{gbm2015, mmo2012}. Afterwards, the hybrid quantization ansatz is employed: the background geometry is loop quantized, the zero mode of the scalar field is quantized in the standard Schr\"odinger representation, while the inhomogeneous perturbations are Fock quantized. The resulting quantum dynamical equation is then solved by using the Born-Oppenheimer ansatz, which approximates the physical state by a direct product of the quantum background state and the states depending only on the gauge-invariant modes. Similar to the dressed metric approach, for sharply peaked semi-classical background states there also exists an effective description of the quantum dynamics in the hybrid approach which greatly simplifies the dynamical equations \cite{bo2016}. Recently, the hybrid approach has also been applied to the modified loop cosmological models, such as mLQC-I \cite{qm2019, gqm2020}, for which the time-dependent mass of the perturbations is analyzed and discussed in \cite{qmp2020}. The goal of this paper is to study the imprints of the different quantizations of the background geometry on the scalar power spectrum in the framework of the hybrid approach. In order to achieve this goal, we apply the effective dynamics of the hybrid approach in mLQC-I/II to obtain numerical results for the scalar power spectra in these two models and then compare them with the results from LQC. We assume the gravitational sector of LQC and mLQC-I/II is minimally coupled to the inflaton field with a Starobinsky inflationary potential whose mass is fixed via the recent Planck 2018 data. After specifying the initial conditions of the background at the bounce and the initial states of the linear perturbations at some time in the contracting phase, the scalar power spectra are obtained by numerically integrating the effective equations of the background and perturbations using Mathematica's internal solver. The results from LQC and mLQC-I/II are then compared from the infrared regime to the ultraviolet regime of the power spectra. In particular, we find that the predictions for the power spectrum of mLQC-I from the hybrid approach are in remarkable contrast with the results for the same model from the dressed metric approach in the infrared and oscillatory regimes. Our results show that for LQC and mLQC-II the situation is similar to that in the dressed metric approach, but for mLQC-I there are significant differences in the predictions between the two approaches. This manuscript is organized as follows. In Sec.
\ref{review}, starting from the classical Hamiltonian constraint, we will briefly review the effective dynamics of the hybrid approach in LQC. The Hamilton's equations of the background dynamics and the Mukhanov-Sasaki equation in LQC will be given as the basis for the numerical simulations in the following section. In Sec. \ref{mLQC}, we first review the effective dynamics of the background in mLQC-I/II and then discuss the effective dynamics of the hybrid approach in these two models. We will focus on the Mukhanov-Sasaki equations from the hybrid approach. In Sec. \ref{power}, based on the results from the previous two sections, we will present the numerical results of the primordial scalar power spectra from the hybrid approach in LQC, mLQC-I and mLQC-II for some representative initial conditions. A comparison among the effective masses and the resulting power spectra from the different models and their relative differences will also be given. Finally, in Sec. \ref{summary}, the main results obtained in this paper are summarized. In our paper, we will use the Planck units with $\hbar=c=1$ while keeping Newton's constant $G$ explicit. Also, Greek letters are used to denote the 4-dimensional spacetime indices while Latin letters are used for the indices of the tensors on the 3-dimensional hypersurface. \section{A brief review of the hybrid approach in LQC} \label{review} \renewcommand{\theequation}{2.\arabic{equation}}\setcounter{equation}{0} In this section, we give a brief review of the hybrid approach in LQC. Since the content has been widely discussed in various articles \cite{nbm2018,bo2016,mo2016,gbmo2016,gbm2015,gmmo2014,mm2013,mop2011}, we only outline the basic ideas and quote the results that are relevant to the purpose of this paper. In the following, we will consider a flat FLRW universe with a $\mathbb{T}^3$ topology in which the four-dimensional globally hyperbolic spacetime is ADM decomposed into $\mathcal M=\mathbb{R}\times \mathbb{T}^3$ and the four-metric of the manifold is parameterized in terms of the lapse $N$, shift $N^i$ and the three-metric $q_{ij}$ in the ADM decomposition. Without the inhomogeneities, the homogeneous background in the spatially-flat universe with a $\mathbb{T}^3$ topology is described by \begin{equation} \label{2a1} \mathrm{d} s^2=-N^2_0(t) \mathrm{d} t^2+a^2(t){}^0h_{ij}\mathrm{d}\theta_i\mathrm{d}\theta_j, \end{equation} where $N_0(t)$ is the lapse function, $a(t)$ the scale factor and ${}^0h_{ij}$ the comoving three-metric. The value of each angular coordinate $\theta_i$ ranges between $0$ and $l_0$, and thus the comoving (physical) volume of the three-torus is $l^3_0$ ($a^3 l^3_0$). Any function defined on the spatial manifold $\mathbb{T}^3$ can be expanded in terms of the eigenfunctions of the Laplace-Beltrami operator compatible with the metric ${}^0h_{ij}$. These eigenfunctions are usually denoted by $\tilde Q_{\vec n, \pm}(\vec \theta)$ with eigenvalues $-\omega^2_n=-4\pi^2\vec n\cdot \vec n/l^2_0$, where $\pm$ stands for the cosine and sine modes, respectively, and $\vec n=(n_1,n_2,n_3)\in \mathbb{Z}^3$ is any tuple of integers with its first component being a strictly positive integer. In order to incorporate the inflationary phase driven by a single scalar field, we consider a massive scalar field $\phi$ with the scalar potential $U(\phi)$ minimally coupled to the gravity sector.
Following the analysis in \cite{gbm2015}, one can proceed to consider the scalar perturbations around the homogeneous FLRW universe described by the metric (\ref{2a1}). The inhomogeneities in the lapse, shift and the three-metric can be expanded in the basis of the cosine and sine mode functions $\tilde Q_{\vec n, \pm}$ on the three-torus. The perturbative expansion of the total action of the system, which consists of the Einstein-Hilbert action together with the action for a massive scalar field minimally coupled to gravity, is then truncated to the second order in the perturbations, yielding a total Hamiltonian that is a linear combination of three terms: the first term proportional to the homogeneous mode of the scalar constraint, which also includes the quadratic contributions from the linear perturbations; the second term proportional to the perturbed scalar constraint to the first order in perturbations; and the third term proportional to the perturbed momentum constraint to the first order in perturbations. However, this total Hamiltonian is a functional of the gauge-variant perturbations, i.e. inhomogeneous degrees of freedom that are not left invariant by the gauge transformations generated by the linear scalar and momentum constraints. In order to extract the physical implications from the theory, it is more convenient to work with the gauge-invariant variables, i.e. the Dirac observables. In general, this can be achieved by a suitable canonical transformation. In the current context, the appropriate canonical transformations are introduced in \cite{gbm2015} in the whole phase space, including both homogeneous and inhomogeneous degrees of freedom, which separate the gauge-invariant variable, namely the Mukhanov-Sasaki variable, denoted in the following by $\nu_{\vec n, \epsilon}$, and its momentum $\pi_{\nu_{\vec n, \epsilon}}$ from the other variables $\nu^{(i)}_{\vec n,\epsilon}$ and their respective momenta $\pi_{\nu^{(i)}_{\vec n,\epsilon}}$ with $i=1,2$. In terms of these new canonical variables, the total Hamiltonian up to the second order in perturbations can be explicitly written as \cite{gbm2015} \begin{equation} \label{2a2} \mathcal H_T=\frac{N_0}{16 \pi G} \left(C_0+ \sum_{\vec n, \epsilon}C^{\vec n, \epsilon}_2\right)+ \sum_{\vec n, \epsilon} G_{\vec n, \epsilon} \pi_{\nu^{(1)}_{\vec n,\epsilon}}+ \sum_{\vec n, \epsilon} K_{\vec n, \epsilon} \pi_{\nu^{(2)}_{\vec n,\epsilon}}, \end{equation} where $G_{\vec n, \epsilon}$ and $K_{\vec n, \epsilon} $ are the coefficients of the Fourier modes of the linear perturbations of the lapse and shift. Besides, $\pi_{\nu^{(1)}_{\vec n,\epsilon}}$ and $\pi_{\nu^{(2)}_{\vec n,\epsilon}}$ are equivalent to the perturbed scalar and momentum constraints, which are linear in perturbations. When the theory is quantized by following the Dirac quantization approach, the physical states will be independent of $\nu^{(1)}_{\vec n,\epsilon}$ and $\nu^{(2)}_{\vec n,\epsilon}$. As a result, the sector $(\nu^{(i)}_{\vec n,\epsilon}, \pi_{\nu^{(i)}_{\vec n,\epsilon}})$ is decoupled from the physical one.
The first term in the total Hamiltonian only concerns the homogeneous background and the Mukhanov-Sasaki variable and its momentum, and is explicitly given by \begin{equation} \label{2a3} \mathcal H_\mathrm{MS}=\frac{N_0}{16 \pi G} \left(C_0+ \sum_{\vec n, \epsilon}C^{\vec n, \epsilon}_2\right), \end{equation} where the subscript `$\mathrm{MS}$' indicates that the Hamiltonian $\mathcal H_\mathrm{MS}$ generates the dynamics of the Mukhanov-Sasaki variable and its momentum. The unperturbed scalar constraint is given by \begin{equation} \label{2a4} C_0=-\frac{6}{\gamma^2 }\frac{\Omega^2}{v}+8\pi G\left(\frac{p^2_\phi}{v}+2 v U(\phi)\right), \end{equation} where $\gamma$ is the Barbero-Immirzi parameter, which is usually set to $0.2375$ from black hole thermodynamics in LQG, $p_\phi$ is the conjugate momentum of the scalar field and $U(\phi)$ represents the potential of the scalar field. For the geometrical degrees of freedom, instead of the scale factor and its momentum, we use the variables $(v, b)$, which will be more convenient for our later discussion of the effective dynamics in the loop cosmological models. In the classical theory, $\Omega=vb$ with $v$ representing the physical volume of the 3-torus and $b=\gamma H$, where $H$ is the Hubble parameter. Meanwhile, $C^{\vec n, \epsilon}_2$ denotes the quadratic corrections from the modes labeled by $(\vec n , \epsilon)$, which take the form \cite{gmmo2014, bo2016} \begin{equation} \label{2a5} C^{\vec n, \epsilon}_2=\frac{8\pi G}{v^{1/3}}\left(\pi^2_{\nu_{\vec n, \epsilon}}+E^n \nu^2_{\vec n, \epsilon}\right), \end{equation} with \begin{eqnarray} \label{2a6} E^n&=&\omega^2_n+s,\\ \label{2a7} s&=&\frac{4 \pi G p^2_\phi}{3 v^{4/3}}\left(19-24 \pi G \gamma^2 \frac{p^2_\phi}{\Omega^2}\right)+ v^{2/3}\left(U_{, \phi\phi}+\frac{16 \pi G \gamma p_\phi }{\Omega}U_{,\phi}-\frac{16\pi G}{3}U\right), \end{eqnarray} where $U_{,\phi}\equiv \partial U/\partial \phi$ and so on. With the Poisson brackets given by \begin{equation} \label{2a8} \{b,v\}=4\pi G \gamma, \quad \quad \{\phi, p_\phi\}=1, \quad \quad \{\nu_{\vec n, \epsilon},\pi_{\nu_{\vec n', \epsilon'}} \}=\delta_{\vec n \vec n' }\delta_{\epsilon \epsilon'}, \end{equation} it is straightforward to find the Hamilton's equations for the canonical variables and their respective momenta. However, different quantizations of the geometric sector in LQC can result in different forms of $\Omega$ in the effective description of the quantum dynamics. In order to cast the Hamilton's equations into the most general form, which will also be valid in the modified LQC models, we will keep $\Omega$, as a function of $v$ and $b$, explicit in the equations. Hence, when ignoring the back-reaction of the perturbations on the homogeneous and isotropic background, the evolution of the background dynamics obeys the following equations \begin{eqnarray} \label{h1} \dot v&=&N_0\frac{3\Omega}{v\gamma}\frac{\partial \Omega}{\partial b},\\ \label{h2} \dot b&=&\frac{3N_0\Omega^2}{2v^2\gamma}-\frac{3N_0\Omega}{\gamma v}\frac{\partial \Omega}{\partial v}-4\pi G\gamma N_0 P,\\ \label{matter1} \dot \phi &=&N_0\frac{p_\phi}{v}, \\ \label{matter2} \quad \dot p_\phi&=&-N_0v U_{,\phi}, \end{eqnarray} where $P$ denotes the pressure of the scalar field, which is given by \begin{equation} P=\frac{p^2_\phi}{2v^2}-U.
\end{equation} Meanwhile, the time evolution of the scalar modes $\nu_{\vec n, \epsilon}$ is governed by \begin{equation} \label{numeric} {\dot \nu}_{\vec n, \epsilon}=\frac{N_0}{v^{1/3}}\pi_{\nu_{\vec n, \epsilon}}, \quad \quad {\dot \pi}_{\nu_{\vec n, \epsilon}}=-\frac{N_0E^n \nu_{\vec n, \epsilon}}{v^{1/3}}. \end{equation} In the above formulae, if the lapse $N_0$ is taken to be $v^{1/3}$, then the overdots represent differentiation with respect to the conformal time, in which case the equation of motion of each scalar mode takes the form \begin{equation} \label{2a9} \nu^{\prime\prime}_{\vec n, \epsilon}+\left(\omega^2_n+s\right)\nu_{\vec n, \epsilon}=0, \end{equation} where a prime denotes differentiation with respect to the conformal time and $s$ is given in (\ref{2a7}). In terms of the Mukhanov-Sasaki variable $Q_k=\nu_k/a$, the above equation is equivalent to \begin{equation} \label{2a10} \ddot Q_k+3H\dot Q_k+\left(\frac{\omega^2_n+s}{a^2}+H^2+\frac{\ddot a }{a}\right)Q_k=0, \end{equation} where $H$ is the Hubble rate, as mentioned above, and the derivatives of the relevant quantities are with respect to the cosmic time when the lapse $N_0$ is set to unity. The above equations (\ref{2a10}) and (\ref{h1})-(\ref{matter2}) constitute a fundamental set of equations which describe the dynamics of both the background and the linear perturbations in the hybrid approach at the classical level. For practical purposes, in the following the discrete spectrum $\omega_n$ is set equal to the continuous comoving wavenumber $k$, which is equivalent to taking the limit $l_0\rightarrow \infty$. \subsection{The hybrid quantization} In the hybrid approach, the quantization of the Hamiltonian constraint (\ref{2a3}) is implemented in two successive steps which involve certain assumptions. First, the homogeneous gravitational sector is loop quantized in the $\bar \mu$ scheme in LQC and the matter sector is quantized in the usual Schr\"odinger representation. Note that the Dirac quantization of the background in LQC is not yet available in the presence of a potential. As a result, one generally assumes, as in the dressed metric approach, the existence of a background quantization with a physical inner product generally taken to be the same as in the absence of potentials. In the following we work with the same assumption as made in previous works, but note that this limitation can be overcome given recent developments to include a potential in the reduced phase space quantization \cite{gls2020}. Second, as in the dressed metric approach, the inhomogeneous degrees of freedom are not loop but Fock quantized. As a result, the kinematic Hilbert space is a tensor product of the individual Hilbert spaces for each sector, that is, $\mathcal H_\mathrm{kin}=\mathcal H^\mathrm{grav}_\mathrm{kin}\otimes \mathcal H^\mathrm{matt}_\mathrm{kin}\otimes \mathcal F$. More specifically, the kinematic Hilbert space of the homogeneous gravitational sector is $\mathcal H^\mathrm{grav}_\mathrm{kin}=L^2(\mathbb{R}_\mathrm{Bohr},d \mu_\mathrm{Bohr})$, where $\mathbb{R}_\mathrm{Bohr}$ is the Bohr compactification of the real line and $d \mu_\mathrm{Bohr}$ its Haar measure. $ \mathcal H^\mathrm{grav}_\mathrm{kin}$ is spanned by the eigenstates of the volume operator, which are usually denoted by $\{|v\rangle, v\in \mathbb{R} \}$ with the discrete norm $\langle v_1| v_2\rangle=\delta_{v_1,v_2}$.
The fundamental operators in $\mathcal H^\mathrm{grav}_\mathrm{kin}$ in the $\bar \mu$ scheme in LQC \cite{aps3} are the volume operator $\hat v$ and the holonomy operator $\hat N_{\bar \mu}=\widehat{e^{-i \lambda b/2}}$, with $\lambda =\sqrt{\Delta}$, where $\Delta$ ($=4\sqrt{3}\pi\gamma l^2_\mathrm{Pl}$) is the minimum area eigenvalue in LQG. In the hybrid approach, one usually considers the Martin-Benito-Mena Marugan-Olmedo prescription \cite{bmo2009} for the factor ordering in the Hamiltonian constraint operator. With this prescription, the zero mode of the homogeneous sector is represented by \begin{equation} \label{2a11} \hat C_0=\left(\widehat{\frac{1}{v}}\right)^{1/2}\left(-\frac{6}{\gamma^2} \hat \Omega^2 +8\pi G \left(\hat p^2_\phi+2 \hat v^2 U(\hat \phi)\right)\right)\left(\widehat{\frac{1}{v}}\right)^{1/2}, \end{equation} where $\hat \phi$ and $\hat p_\phi$ ($=-i\hbar \frac{\partial}{\partial \phi}$) are the operators in the kinematic Hilbert space of the matter sector, which is $\mathcal H^\mathrm{matt}_\mathrm{kin}=L^2(\mathbb{R}, d\phi)$. The operator $\hat \Omega$ is given by \begin{equation} \label{2a12} \hat \Omega=\frac{1}{4 i \sqrt{\Delta}}\hat v^{1/2}\left(\widehat{\mathrm{sgn}(v)}\left(\hat N_{2\bar \mu}-\hat N_{-2\bar \mu}\right)+\left(\hat N_{2\bar \mu}-\hat N_{-2\bar \mu}\right)\widehat{\mathrm{sgn}(v)}\right)\hat v^{1/2}. \end{equation} The operator $\hat \Omega^2$ annihilates the zero volume state $|v=0\rangle$ and selects a group of separable subspaces $\mathcal H^\pm_\epsilon$ which are formed by the states with support on the lattices ${\cal L}_{\pm \epsilon} = \{ v = \pm (4n + \epsilon)\}$ with $n \in \mathbb{N}$ and $\epsilon \in (0,4]$. The action of $\hat \Omega^2$, as well as that of $\hat C_0$, leaves these subspaces invariant and does not mix states with support on opposite signs of the volume. On the other hand, the inhomogeneous sector is quantized in the Fock representation by choosing the annihilation-like variable \begin{equation} a_{\nu_{\vec n, \epsilon}}=\frac{1}{\sqrt{2\omega_n}}\left(\omega_n \nu_{\vec n, \epsilon} +i \pi_{\nu_{\vec n ,\epsilon}}\right), \end{equation} and its complex conjugate $a^*_{\nu_{\vec n, \epsilon}}$ as the creation-like variable. The quantization is then implemented by promoting these variables to the annihilation and creation operators. The resulting Fock space is spanned by the direct products of the eigenstates of the occupation number operator $\mathcal N_{\vec n, \epsilon}$ for each mode $(\vec n ,\epsilon)$. Finally, the physical states described by $\Psi(v, \phi, \mathcal N)$ should be annihilated by the quantum Hamiltonian constraint \begin{equation} \hat {\mathcal H}_\mathrm{MS}=\frac{1}{16 \pi G} \left(\hat C_0+ \sum_{\vec n, \epsilon}\hat {C}^{\vec n, \epsilon}_2\right), \end{equation} where we take $N_0=1$ at the classical level and $\hat C_0$ is given in (\ref{2a11}). Here the operator $\hat {C}^{\vec n, \epsilon}_2$ is promoted from its classical counterpart (\ref{2a5}), whose explicit form can be found in \cite{gbm2015}. We want to emphasize that, to obtain $\hat {C}^{\vec n, \epsilon}_2$, the $\Omega^2$ term in the effective mass (\ref{2a7}) is promoted to $\hat \Omega^2$, given by the square of (\ref{2a12}). However, the $1/\Omega$ term in the effective mass cannot be directly promoted to the desired operator, as $ \hat \Omega$ is a difference operator which only translates the eigenstates $| v\rangle$ by two units.
In order that $1/\Omega$ does not mix states from different superselection subspaces, the following prescription is used in the hybrid approach: \begin{equation} \frac{1}{\Omega}\rightarrow \hat \Omega^{-1} \hat \Lambda \hat \Omega^{-1}, \end{equation} with $\hat \Lambda $ given by \begin{equation} \label{2a13} \hat \Lambda=\frac{1}{8 i \sqrt{\Delta}}\hat v^{1/2}\left(\widehat{\mathrm{sgn}(v)}\left(\hat N_{4\bar \mu}-\hat N_{-4\bar \mu}\right)+\left(\hat N_{4\bar \mu}-\hat N_{-4\bar \mu}\right)\widehat{\mathrm{sgn}(v)}\right)\hat v^{1/2}. \end{equation} As compared with $\hat \Omega$, $\hat \Lambda$ is defined with holonomies of double fiducial length and hence preserves the superselection sectors. For the other homogeneous factors in the effective mass, a symmetric factor ordering is employed. Finally, the physical quantum states are governed by \begin{equation} \hat {\mathcal H}_\mathrm{MS} \Psi(v, \phi, \mathcal N)=0. \end{equation} In general, it is very difficult to solve this equation for the physical quantum states (see \cite{mm2013} for a specific algorithm though). Approximate solutions in which one adopts a Born-Oppenheimer ansatz have been studied \cite{gmmo2014,gbm2015,mo2016}. Here, under some reasonable approximations, one can derive a dressed metric formulation for both scalar and tensor perturbations. In practice, however, one usually focuses on sharply peaked states and hence turns to the effective description of the quantum dynamics to extract the physical implications of the theory. \subsection{Effective dynamics in the hybrid approach } In LQC, although the Schr\"odinger equation for the physical quantum states is a non-singular quantum difference equation, the effective description of the quantum spacetime for the semi-classical states which are sharply peaked around the classical solutions at late times has proved to accurately capture the properties of the quantum evolution in LQC for a variety of isotropic and anisotropic models \cite{VT08, aps2, numlsu-2, numlsu-3, numlsu-4}. For the spatially-flat model, it turns out that the effective description of the background dynamics is based on an effective Hamiltonian which can be obtained by replacing the momentum variable $b$ with $\sin(\lambda b)/\lambda$ in the classical Hamiltonian (\ref{2a3}). This substitution can be obtained from the operator (\ref{2a12}) for the semi-classical states in which the expectation values of products of operators are replaced with the products of the expectation values of the same operators. As a result, in LQC, the equations of motion for the effective background dynamics are given by (\ref{h1})-(\ref{matter2}) with $\Omega$ given by \begin{equation} \Omega_\mathrm{LQC}=v\frac{\sin(\lambda b)}{\lambda}. \end{equation} In the classical limit, $\lambda b\ll1$, which reduces $\Omega_\mathrm{LQC}$ to its classical expression $\Omega=vb$. Therefore, the equations of motion for the background dynamics in LQC take the form \begin{eqnarray} \label{lqc1} \dot v&=&\frac{3v}{2\lambda \gamma}\sin(2\lambda b), \\ \label{lqc2} \dot b&=&-\frac{3\sin^2\left(\lambda b\right)}{2 \gamma \lambda^2}-4\pi G\gamma P, \end{eqnarray} where $N_0$ is set to unity and the overdots represent differentiation with respect to the cosmic time. Note that the equations of motion in the matter sector are still given by (\ref{matter1})-(\ref{matter2}).
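As an illustration, the effective background equations (\ref{lqc1})-(\ref{lqc2}) together with (\ref{matter1})-(\ref{matter2}) can be integrated with standard stiff solvers. The following minimal Python sketch (the simulations in this paper use Mathematica) starts at the bounce with a toy quadratic potential and a toy mass; both are illustrative assumptions, not the Starobinsky setup used later.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

G, gamma = 1.0, 0.2375
lam = np.sqrt(4 * np.sqrt(3) * np.pi * gamma)    # sqrt(Delta), l_Pl = 1
rho_c = 3 / (8 * np.pi * G * gamma**2 * lam**2)  # critical density in LQC
m = 1.0e-3                                       # toy mass (illustrative)
U  = lambda f: 0.5 * m**2 * f**2
dU = lambda f: m**2 * f

def rhs(t, s):
    v, b, f, pf = s
    P = pf**2 / (2 * v**2) - U(f)                             # pressure
    return [3 * v * np.sin(2 * lam * b) / (2 * lam * gamma),  # (lqc1)
            -3 * np.sin(lam * b)**2 / (2 * gamma * lam**2)
            - 4 * np.pi * G * gamma * P,                      # (lqc2)
            pf / v,                                           # (matter1)
            -v * dU(f)]                                       # (matter2)

# initial data at the bounce: v0 = 1, b0 = pi/(2 lam), rho = rho_c
f0 = 1.0
pf0 = np.sqrt(2 * (rho_c - U(f0)))
sol = solve_ivp(rhs, [0.0, 2.0e4], [1.0, np.pi / (2 * lam), f0, pf0],
                method='LSODA', rtol=1e-10, atol=1e-12)
v, b, f, pf = sol.y
rho = pf**2 / (2 * v**2) + U(f)
print("rho(0)/rho_c =", rho[0] / rho_c, "  final volume =", v[-1])
\end{verbatim}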
Similarly, the effective dynamics of the scalar perturbations is prescribed by the Mukhanov-Sasaki equations (\ref{2a9}) and (\ref{2a10}) under the conditions that: (i) the evolution of all the relevant background quantities agrees with their effective dynamics described in (\ref{matter1})-(\ref{matter2}) and (\ref{lqc1})-(\ref{lqc2}); (ii) as in the quantum theory, in the effective mass $s$ given in (\ref{2a7}), the $1/\Omega^2$ and $1/\Omega$ terms are given by their effective expressions, which in the semi-classical limit take the form \cite{bo2016} \begin{equation} \label{2b1} \frac{1}{\Omega^2}\rightarrow \frac{1}{\Omega^2_\mathrm{LQC}}, \quad \quad \frac{1}{\Omega}\rightarrow\frac{\Lambda_\mathrm{LQC}}{\Omega^2_\mathrm{LQC}}, \end{equation} with \begin{equation} \label{2b2} \Lambda_\mathrm{LQC}=v\frac{\sin(2\lambda b )}{2\lambda}. \end{equation} Here $\Lambda_\mathrm{LQC}$ is the semi-classical limit of the operator (\ref{2a13}) for the highly peaked semi-classical states. As a result, the effective mass in the Mukhanov-Sasaki equation in LQC is explicitly given by \begin{equation} s=\frac{4 \pi G p^2_\phi}{3 v^{4/3}}\left(19-24 \pi G \gamma^2 \frac{p^2_\phi}{\Omega^2_\mathrm{LQC}}\right)+ v^{2/3}\left(U_{, \phi\phi}+\frac{16 \pi G \gamma p_\phi \Lambda_\mathrm{LQC}}{\Omega^2_\mathrm{LQC}}U_{,\phi}-\frac{16\pi G}{3}U\right). \end{equation} Now that we are equipped with a complete set of dynamical equations for both the background and the scalar perturbations, the scalar power spectrum can be obtained through numerical simulations once the initial conditions and initial states are specified. To summarize, we have discussed the basic ideas of the hybrid approach in LQC and given the fundamental Hamilton's equations for the background and the Mukhanov-Sasaki equation of the scalar perturbations in the effective description of the quantum theory. This effective dynamics is based on the Born-Oppenheimer ansatz and the assumption that there exist some semi-classical states in which the effective equations of motion of the expectation values of the fundamental observables are consistent with the effective dynamics in LQC. We will employ the same ansatz and assumption in the next section to obtain the effective equations of motion for both the background and the scalar perturbations in the modified LQC models. \section{The modified loop quantum cosmology and the hybrid approach} \label{mLQC} \renewcommand{\theequation}{3.\arabic{equation}}\setcounter{equation}{0} In this section, we briefly review two modified LQC models, namely mLQC-I and mLQC-II. We will focus on their effective dynamics and give their respective Hamilton's equations for the background evolution and the relevant equations for the scalar perturbations in the hybrid approach. We follow the conventions of Refs. \cite{lsw2018,lsw2018b,lsw2019}, which can be referred to for further discussion. \subsection{mLQC-I} The mLQC-I model was first proposed as an alternative quantization of the Hamiltonian constraint in a spatially-flat FLRW universe \cite{YDM09}. It was later rediscovered in \cite{DL17} by computing the expectation values of the Hamiltonian constraint with the complexifier coherent states. Phenomenologically, this model is characterized by an asymmetric bounce, with its contracting phase quickly tending to a quasi-de Sitter phase with an effective Planck-scale cosmological constant \cite{mehdi} and a rescaled Newton's constant \cite{lsw2018}.
Similar to the standard LQC, we assume the validity of the effective description of the quantum spacetime. The effective dynamics in mLQC-I can be obtained from an effective Hamiltonian which is arrived at via the prescription \begin{equation} \label{3a1} \Omega^2_{{\scriptscriptstyle{\mathrm{I}}}}=-\frac{v^2\gamma^2}{\lambda^2}\Big\{\sin^2\left(\lambda b\right)-\frac{\gamma^2+1}{4\gamma^2} \sin^2\left(2 \lambda b\right) \Big\}. \end{equation} Substituting the above expression for $\Omega^2_{{\scriptscriptstyle{\mathrm{I}}}}$ into (\ref{h1}) and (\ref{h2}), one finds the Hamilton's equations for the effective dynamics in mLQC-I: \begin{eqnarray} \label{mLQCIa} \dot v&=&\frac{3v\sin(2\lambda b)}{2\gamma \lambda}\Big\{(\gamma^2+1)\cos(2\lambda b)-\gamma^2\Big\},\\ \label{mLQCIb} \dot b&=&\frac{3\sin^2(\lambda b)}{2\gamma \lambda^2}\Big\{\gamma^2\sin^2(\lambda b)-\cos^2(\lambda b)\Big\} -4\pi G\gamma P, \end{eqnarray} where $N_0$ is set to unity and the overdots in the evolution equations represent differentiation with respect to the cosmic time. Besides, the equations of motion of the matter sector still take the form of (\ref{matter1}) and (\ref{matter2}) as long as the lapse in those equations is set to unity. From the Hamilton's equations, it is straightforward to derive the Friedmann equation in mLQC-I. The Friedmann equation takes two distinct forms in the contracting and the expanding phases, resulting in an asymmetric bounce (for their exact forms and details see \cite{lsw2018}). In mLQC-I, the bounce takes place when the energy density reaches its maximum value at \begin{equation} \label{3a2} \rho=\rho_c^{{\scriptscriptstyle{\mathrm{I}}}} \equiv \frac{\rho_c}{4\left(\gamma^2+1\right)}. \end{equation} Similar to LQC, the momentum $b$ in mLQC-I is also a monotonically decreasing function in the forward evolution, which ranges between $\Big[0, \frac{1}{\lambda}\arcsin(\sqrt{1/(\gamma^2+1)})\Big]$ and equals $\frac{1}{\lambda}\arcsin(\sqrt{1/(2\gamma^2+2)})$ at the bounce. For mLQC-I, the hybrid approach to the primordial power spectrum has been studied earlier in \cite{qm2019,gqm2020}. Similar to LQC, the kinematic Hilbert space is a direct product of the three subspaces, namely, $\mathcal H_\mathrm{kin}=\mathcal H^\mathrm{grav}_\mathrm{kin}\otimes \mathcal H^\mathrm{matt}_\mathrm{kin}\otimes \mathcal F$. However, in mLQC-I, the gravitational sector of the quantum Hamiltonian constraint changes. In particular, the operator $\hat \Omega^2 $ now becomes \cite{qm2019,gqm2020} \begin{equation} \label{3a3} \hat {\Omega}^2_{{\scriptscriptstyle{\mathrm{I}}}}=-\gamma^2\left(\hat \Omega^2_{2\bar \mu}-\frac{\gamma^2+1}{4\gamma^2}\hat \Omega^2_{4\bar \mu}\right), \end{equation} where the subscript of $\hat {\Omega}^2_{{\scriptscriptstyle{\mathrm{I}}}}$ indicates that it is the $\hat \Omega^2 $ operator in mLQC-I. In the above formula, we have defined, for an arbitrary integer $n$, \begin{equation} \label{3a4} \hat \Omega_{n\bar \mu}=\frac{1}{4 i \sqrt{\Delta}}\hat v^{1/2}\left(\widehat{\mathrm{sgn}(v)}\left(\hat N_{n\bar \mu}-\hat N_{-n\bar \mu}\right)+\left(\hat N_{n\bar \mu}-\hat N_{-n\bar \mu}\right)\widehat{\mathrm{sgn}(v)}\right)\hat v^{1/2}, \end{equation} with $\hat N_{n\bar \mu}=\widehat {e^{-i n \lambda b/2}}$. The operator $\hat {\Omega}^2_{{\scriptscriptstyle{\mathrm{I}}}}$ is also compatible with the same superselection subspaces $\mathcal H^\pm_\epsilon$ with support on the lattices with step four.
As a result, the operator $\hat \Lambda$ can be chosen in the same form as in LQC, which turns out to be \begin{equation} \label{3a5} \hat \Lambda_{{\scriptscriptstyle{\mathrm{I}}}}= \hat \Omega_{4\bar \mu}/2. \end{equation} In the effective description of the quantum dynamics, the evolution of the scalar perturbations in mLQC-I is prescribed by the same form of the Mukhanov-Sasaki equations (\ref{2a9}) and (\ref{2a10}) under the following conditions: (i) the evolution of the homogeneous background quantities is now governed by the effective equations (\ref{matter1})-(\ref{matter2}) and (\ref{mLQCIa})-(\ref{mLQCIb}); (ii) in the effective mass $s$, the following substitutions are employed \begin{equation} \frac{1}{\Omega^2}\rightarrow \frac{1}{\Omega^2_{{\scriptscriptstyle{\mathrm{I}}}}}, \quad \quad \frac{1}{\Omega}\rightarrow \frac{\Lambda_{{\scriptscriptstyle{\mathrm{I}}}}}{\Omega^2_{{\scriptscriptstyle{\mathrm{I}}}}}, \end{equation} where $\Lambda_{{\scriptscriptstyle{\mathrm{I}}}}$ is the expectation value of the operator $\hat \Lambda_{{\scriptscriptstyle{\mathrm{I}}}}$ for the sharply peaked semiclassical states. It takes the same form as $\Lambda_\mathrm{LQC}$ given by (\ref{2b2}). As a result, the effective mass of the Mukhanov-Sasaki equations (\ref{2a9}) and (\ref{2a10}) in mLQC-I is explicitly given by \begin{equation} s=\frac{4 \pi G p^2_\phi}{3 v^{4/3}}\left(19-24 \pi G \gamma^2 \frac{p^2_\phi}{\Omega^2_{{\scriptscriptstyle{\mathrm{I}}}}}\right)+ v^{2/3}\left(U_{, \phi\phi}+\frac{16 \pi G \gamma p_\phi \Lambda_{{\scriptscriptstyle{\mathrm{I}}}}}{\Omega^2_{{\scriptscriptstyle{\mathrm{I}}}}}U_{,\phi}-\frac{16\pi G}{3}U\right), \end{equation} with $\Omega^2_{{\scriptscriptstyle{\mathrm{I}}}}$ given in (\ref{3a1}). \subsection{mLQC-II} The mLQC-II model was also first proposed in \cite{YDM09} as a different quantization of the classical Hamiltonian in the spatially-flat FLRW universe. Its effective dynamics and implications for the inflationary paradigm were later studied in detail in \cite{lsw2019, lsw2018b}. Similar to the standard LQC, the evolution of the universe in mLQC-II is symmetric with respect to the bounce when only a massless scalar field is coupled to the gravitational sector. Its effective dynamics can be described by an effective Hamiltonian constraint which leads to the Hamilton's equations in the same form as (\ref{h1}) and (\ref{h2}) as long as $\Omega^2$ is replaced by its corresponding form in mLQC-II, given by \begin{equation} \label{3a6} \Omega^2_{{\scriptscriptstyle{\mathrm{II}}}}=\frac{4v^2}{\lambda^2}\sin^2\left(\frac{\lambda b}{2}\right)\Big\{1+\gamma^2 \sin^2\left(\frac{\lambda b}{2}\right) \Big\}. \end{equation} Correspondingly, the Hamilton's equations in mLQC-II read \begin{eqnarray} \label{mLQCIIa} \dot v&=&\frac{3v\sin(\lambda b)}{\gamma \lambda}\Big\{1+\gamma^2-\gamma^2\cos\left(\lambda b\right)\Big\}, \\ \label{mLQCIIb} \dot b&=&-\frac{6\sin^2\left(\frac{\lambda b}{2}\right)}{\gamma \lambda^2}\Big\{1+\gamma^2\sin^2\left(\frac{\lambda b}{2}\right)\Big\}-4\pi G\gamma P, \end{eqnarray} where the lapse is set to unity and the overdots represent differentiation with respect to the cosmic time. In mLQC-II, the bounce takes place when the energy density reaches its maximum value at \begin{equation} \rho=\rho_c^{{\scriptscriptstyle{\mathrm{II}}}} \equiv 4(\gamma^2+1)\rho_c. \end{equation} The momentum $b$ in mLQC-II monotonically decreases in the forward evolution of the universe from $2\pi/\lambda$ to $0$ and equals $\pi/\lambda$ at the bounce.
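As a quick consistency check of the effective dynamics, note that the vanishing of the constraint (\ref{2a4}) gives $\rho = 3\Omega^2/(8\pi G \gamma^2 v^2)$, so evaluating $\Omega^2$ at the bounce value of $b$ in each model should reproduce the maximum densities quoted above. The following short numerical sketch does this, with the illustrative normalization $v=1$ (the ratio $\rho/\rho_c$ is independent of $v$):
\begin{verbatim}
import numpy as np

G, gamma, v = 1.0, 0.2375, 1.0
lam = np.sqrt(4 * np.sqrt(3) * np.pi * gamma)
rho_c = 3 / (8 * np.pi * G * gamma**2 * lam**2)

def Omega2(model, b):
    s = np.sin(lam * b)
    if model == 'LQC':
        return (v * s / lam)**2
    if model == 'mLQC-I':
        return -(v * gamma / lam)**2 * (s**2 - (gamma**2 + 1)
                 / (4 * gamma**2) * np.sin(2 * lam * b)**2)
    if model == 'mLQC-II':
        sh = np.sin(lam * b / 2)
        return 4 * v**2 / lam**2 * sh**2 * (1 + gamma**2 * sh**2)

rho = lambda model, b: 3 * Omega2(model, b) / (8*np.pi*G*gamma**2*v**2)
b_bounce = {'LQC': np.pi / (2 * lam),
            'mLQC-I': np.arcsin(np.sqrt(1 / (2*gamma**2 + 2))) / lam,
            'mLQC-II': np.pi / lam}
for mdl, b0 in b_bounce.items():
    print(f"{mdl:8s}  rho_bounce/rho_c = {rho(mdl, b0) / rho_c:.6f}")
# expected: 1,  1/(4(gamma^2+1)) ~ 0.2367,  4(gamma^2+1) ~ 4.2256
\end{verbatim}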
The quantization of the homogeneous and inhomogeneous sectors can be carried out in a way similar to LQC; the only difference lies in the gravitational sector, which, due to the different regularization, corresponds to a different operator in the kinematic Hilbert space. More specifically, the $\hat \Omega^2$ operator in mLQC-II takes the form \begin{equation} \hat \Omega^2_{{\scriptscriptstyle{\mathrm{II}}}}=4 \hat \Omega^2_{\bar \mu}+4\gamma^2\lambda^2\left(\widehat{\frac{1}{v}} \right)\hat \Omega^4_{\bar \mu}\left(\widehat{\frac{1}{v}} \right), \end{equation} which is compatible with the superselection subspaces with support on the semilattices with step two. As a result, in order for the total Hamiltonian constraint $\hat{ \mathcal H}_\mathrm{MS}$ to be well-defined in the same subspaces, the $\hat \Lambda$ operator can be chosen as a quantum difference operator that translates the eigenstates $| v\rangle$ by any multiple of two. The simplest choice in this case would be \begin{equation} \hat \Lambda_{{\scriptscriptstyle{\mathrm{II}}}}=\hat \Omega_{2\bar \mu}, \end{equation} which in the effective dynamics corresponds to \begin{equation} \Lambda_{{\scriptscriptstyle{\mathrm{II}}}}=v \frac{\sin\left(\lambda b\right)}{\lambda}. \end{equation} Together with $\Omega^2_{{\scriptscriptstyle{\mathrm{II}}}}$ in (\ref{3a6}), it determines the exact form of the effective mass in the Mukhanov-Sasaki equation (\ref{2a10}) in mLQC-II, which reads \begin{equation} s=\frac{4 \pi G p^2_\phi}{3 v^{4/3}}\left(19-24 \pi G \gamma^2 \frac{p^2_\phi}{\Omega^2_{{\scriptscriptstyle{\mathrm{II}}}}}\right)+ v^{2/3}\left(U_{, \phi\phi}+\frac{16 \pi G \gamma p_\phi \Lambda_{{\scriptscriptstyle{\mathrm{II}}}}}{\Omega^2_{{\scriptscriptstyle{\mathrm{II}}}}}U_{,\phi}-\frac{16\pi G}{3}U\right). \end{equation} Thus, focusing on the effective description of the hybrid approach, both the Hamilton's equations of the background dynamics and the Mukhanov-Sasaki equations of the scalar perturbations have now been obtained in mLQC-I and mLQC-II. In particular, we have specified the exact forms of the operators $\hat \Omega$ and $\hat \Lambda$, as well as their counterparts in the effective dynamics, in each model. These equations will be used in the numerical simulations of the primordial scalar power spectrum in mLQC-I/II in the next section. \section{Primordial power spectrum from the hybrid approach in modified loop quantum cosmology} \label{power} \renewcommand{\theequation}{4.\arabic{equation}}\setcounter{equation}{0} In this section, based on the effective dynamics of the background and the perturbations introduced in the previous sections, we proceed with the numerical simulations and compare the scalar power spectra among LQC and mLQC-I/II in the hybrid approach. Moreover, we will also compare the dressed metric approach and the hybrid approach in the context of mLQC-I, where the de Sitter phase differentiates these two approaches by allowing for different types of initial conditions in the contracting phase. Here we use the results of our previous paper on the dressed metric approach \cite{lsw2020}. We start by fixing the free parameter of the inflationary model.
Based on the Planck 2018 data, which favor an inflationary potential with a plateau, we choose the scalar potential $U(\phi)$ to be the Starobinsky potential, explicitly given by \begin{equation} \label{4.1} U=\frac{3m^2}{32\pi G}\left(1-e^{-\sqrt{\frac{16\pi G}{3}} \phi}\right)^2. \end{equation} Due to the almost flat right wing of the potential, the tensor-to-scalar ratio predicted by inflationary models with the Starobinsky potential fits the observational data very well. The pivot mode is chosen at $k^0_*=0.05\,({\rm Mpc})^{-1}$, where the superscript `0' refers to the value at present. With the amplitude of the scalar power spectrum $A_s$ and the scalar spectral index $n_s$ given respectively by \cite{Planck2018} \begin{equation} \label{4.2} \ln (10^{10}A_s)=3.044\pm0.014 ~(68\% \mathrm{CL}) ,\quad\quad n_s=0.9649\pm0.0042 ~(68\% \mathrm{CL}), \end{equation} one can fix the mass of the scalar field to be $m=2.44\times10^{-6}$. Some of the relevant observables at the horizon crossing during inflation can also be computed, namely \begin{equation} \label{4.3} \phi_*=1.07,\quad \quad \dot \phi_*=-5.02\times10^{-9},\quad \quad H_*=1.20\times10^{-6}. \end{equation} Since the Hubble rate decreases during slow-roll inflation, the moment of horizon exit of the pivot mode, denoted by $t_*$, is determined as the time when the Hubble rate drops to the value $H_*$ in the slow-roll phase of our numerical solutions. In addition, all our simulations were performed using a combination of the StiffnessSwitching and ExplicitRungeKutta numerical methods in Mathematica. The background solutions were obtained from numerical integration of the modified Friedmann equations in LQC and mLQC-I/II, while the primordial power spectrum for the linear perturbations was found by numerically integrating (\ref{numeric}) with the respective effective mass in each model. \subsection{The initial conditions of the background and the initial states of the scalar perturbations} In our simulations, the initial conditions of the background dynamics are chosen at the bounce, where the energy density reaches its maximum value. Due to the rescaling freedom in the volume, we choose $v_0=1$ for our numerical solutions. The canonical variable $b$ is fixed at the bounce. More specifically, as discussed in the last section, at the bounce, $b_0=\frac{\pi}{2\lambda}$ in LQC; $b_0=\mathrm{arcsin}(\sqrt{1/(2\gamma^2+2)})/\lambda$ in mLQC-I; $b_0=\frac{\pi}{\lambda}$ in mLQC-II. The degrees of freedom in the matter sector consist of $\phi$ and $p_\phi$. At the bounce, the energy density reaches its maximum value, \begin{equation} \label{4a1} \rho=\frac{p^2_{\phi_0}}{2v_0^2}+U(\phi_0)=\rho^i_c, \end{equation} where the subscript `$0$' indicates that the values of the relevant quantities are set at the bounce and $\rho^i_c$ stands for the maximum energy density in LQC and mLQC-I/II. With regard to the initial states of the scalar perturbations, they are chosen at some finite time in the contracting phase. In general, the choice of the initial states is based on their equation of motion \begin{equation} \label{4a2} \nu_k^{\prime \prime}+\left(k^2+s\right)\nu_k=0, \end{equation} where $s$ is the effective mass and the mode function satisfies the Wronskian condition \begin{equation} \label{wronskian} \nu_k(\nu^\prime_k)^*-(\nu_k)^*\nu^\prime_k=i, \end{equation} with the asterisk standing for the complex conjugate.
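As a companion to the background solver sketched earlier, a single perturbation mode obeying (\ref{4a2}) can be evolved as a complex ODE. The sketch below is ours and merely illustrates the scheme: the effective mass is supplied as a callable, and any initial data must obey the Wronskian condition (\ref{wronskian}).
\begin{verbatim}
# Illustrative sketch: evolve one Mukhanov-Sasaki mode nu'' + (k^2 + s) nu = 0
# in conformal time, splitting the complex mode into real and imaginary parts.
import numpy as np
from scipy.integrate import solve_ivp

def evolve_mode(k, s_of_eta, eta0, eta1, nu0, dnu0):
    def rhs(eta, y):
        nu, dnu = y[0] + 1j*y[1], y[2] + 1j*y[3]
        ddnu = -(k**2 + s_of_eta(eta))*nu
        return [dnu.real, dnu.imag, ddnu.real, ddnu.imag]
    y0 = [nu0.real, nu0.imag, dnu0.real, dnu0.imag]
    sol = solve_ivp(rhs, [eta0, eta1], y0, rtol=1e-10, atol=1e-12)
    nu = sol.y[0, -1] + 1j*sol.y[1, -1]
    dnu = sol.y[2, -1] + 1j*sol.y[3, -1]
    # sanity check: the Wronskian nu (dnu)* - nu* dnu = i is preserved
    assert np.isclose((nu*np.conj(dnu) - np.conj(nu)*dnu).imag, 1.0, rtol=1e-4)
    return nu
\end{verbatim}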
As discussed in \cite{lsw2020}, the initial states in the contracting phase can be chosen as the adiabatic states, given explicitly by the WKB solutions of (\ref{4a2}), \begin{equation} \label{4a3} \nu_k=\frac{1}{\sqrt{2 W_k}}e^{-i \int^\eta W_k(\bar \eta)d\bar \eta}. \end{equation} Substituting the above solution back into (\ref{4a2}), one finds an iterative equation for $W_k$. Then, starting from the zeroth order solution, $W^{(0)}_k=k$, the adiabatic solutions at the second and fourth orders can be obtained as \begin{equation} \label{4a4} W^{(2)}_k=\sqrt{k^2+s}, \quad \quad W^{(4)}_k=\frac{\sqrt{f(s,k)}}{4|k^2+s|}. \end{equation} Here $f(s,k)=5s'^2+16k^4(k^2+3s)+16s^2(3k^2+s)-4s''(s+k^2)$. Any two sets of initial states, say $\{\nu_k\}$ and $\{\mu_k\}$, are related via a Bogoliubov transformation, \begin{equation} \label{4a5} \nu_k=\alpha_k \mu_k+\beta_k \mu^*_k, \end{equation} with $|\alpha_k|^2-|\beta_k|^2=1$ for any $k$. Since the Mukhanov-Sasaki equation is linear and the Bogoliubov coefficients are time-independent, the power spectra resulting from these two sets of initial states are related by \begin{equation} \label{4a6} \mathcal P_{\nu_k}=\left(1+2|\beta_k|^2+2\mathrm{Re} \left(\alpha_k\beta^*_k\mu^2_k/|\mu_k|^2\right) \right)\mathcal P_{\mu_k}. \end{equation} As is common in the literature, for a comparison with observations it is more convenient to provide the power spectrum of the comoving curvature perturbation ${\cal R}_k$, which is related to the Mukhanov-Sasaki variable by means of ${\cal R}_k=\nu_k/z$, with $z=a\dot \phi/H$. Its power spectrum then reads \begin{equation} \label{4a7} \mathcal P_{{\cal R}_k}=\frac{\mathcal P_{\nu_k}}{z^2}=\frac{k^3}{2\pi^2}\frac{|\nu_k|^2}{z^2}. \end{equation} As usual, the power spectrum is evaluated at the end of inflation, when all the relevant modes are well outside the Hubble horizon. It should be noted that although the above formula can be used to generate new power spectra from already-existing ones, it is only applicable in the regimes where $W^{(2)}_k$ or $W^{(4)}_k$ remains a real number at the initial time, which equivalently requires $k^2+s\ge 0$ for $W^{(2)}_k$ and $f(s,k)\ge 0$ for $W^{(4)}_k$. As the effective mass $s$ is generally a function of time, the validity regime of (\ref{4a6}) changes when the initial states are imposed at different initial times. \subsection{Comparison of the power spectra among loop cosmological models in the hybrid approach } In this subsection, we compare the scalar power spectra in the three loop cosmological models from the effective dynamics of the hybrid approach. The difference among the three models mainly originates from the different quantizations of the gravitational sector of the classical Hamiltonian constraint in the spatially flat universe. As a result, although the Mukhanov-Sasaki equations in these models take the same form given in (\ref{2a10}), the explicit form of the time-dependent mass $s$ differs, and the background quantities, such as the scale factor and the Hubble rate, satisfy their respective Hamilton equations in each model. In order to obtain the primordial scalar power spectrum in each model from numerical simulations, one needs to first fix the background dynamics. As discussed in the last subsection, the initial conditions for the background dynamics are chosen at the bounce. The parameter space is one-dimensional, determined by the value of the scalar field at the bounce and the sign of its velocity.
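Before turning to the model comparison, we note that the adiabatic prescription (\ref{4a4}) and the spectrum (\ref{4a7}) translate directly into code; the snippet below is a hedged sketch with our own phase convention.
\begin{verbatim}
# Illustrative sketch: second order adiabatic initial data and the curvature
# power spectrum; requires k^2 + s0 > 0 at the initial time (see text).
import numpy as np

def adiabatic2(k, s0):
    W = np.sqrt(k**2 + s0)
    nu0 = 1.0/np.sqrt(2.0*W)        # arbitrary overall phase chosen real
    return nu0, -1j*W*nu0           # (nu_k, nu_k') satisfying the Wronskian

def curvature_spectrum(k, nu_end, z_end):
    # P_R(k) = k^3/(2 pi^2) |nu_k|^2 / z^2 at the end of inflation
    return k**3/(2*np.pi**2)*np.abs(nu_end)**2/z_end**2
\end{verbatim}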
In order to facilitate the comparison of the three models, the initial conditions for the background are chosen so that the number of inflationary e-foldings is the same, fixed to $66.8$ in all three models. Moreover, the initial values of the inflaton field are chosen at the left wing of the Starobinsky potential with a positive velocity. Under these conditions, the initial values of the inflaton field in LQC, mLQC-I and mLQC-II are given respectively by \begin{equation} \label{4b1} \phi_\mathrm{LQC}=-1.44,\quad \quad \phi_{{\scriptscriptstyle{\mathrm{I}}}}=-1.32, \quad \quad \phi_{{\scriptscriptstyle{\mathrm{II}}}}=-1.55. \end{equation} Under these initial conditions, the number of pre-inflationary e-foldings, counted from the bounce to the onset of inflation, turns out to be, respectively, \begin{equation} \label{4b2} N^\mathrm{LQC}_\mathrm{pre}=4.86, \quad \quad N^{{\scriptscriptstyle{\mathrm{I}}}}_\mathrm{pre}=4.62,\quad \quad N^{{\scriptscriptstyle{\mathrm{II}}}}_\mathrm{pre}=5.10. \end{equation} In addition, when the pivot mode crosses the horizon during the slow-roll phase, its co-moving wavenumber in the three models is found to be \begin{equation} \label{comoving} k^\mathrm{LQC}_*=5.15, \quad \quad k^{{\scriptscriptstyle{\mathrm{I}}}}_*=4.05,\quad \quad k^{{\scriptscriptstyle{\mathrm{II}}}}_*=6.56. \end{equation} Therefore, the observable window, which is about $k/k_*\in(0.1,1000)$, is slightly shifted among the three models when they have the same number of inflationary e-foldings. Of course, one can fine-tune the initial conditions so that the observable window is the same but the inflationary e-foldings are different in the three models. \begin{figure} \includegraphics[width=8cm]{1a} \includegraphics[width=8cm]{1b} \caption{The left panel compares the effective masses in LQC ($s_\mathrm{LQC}$) and mLQC-II ($s_{{\scriptscriptstyle{\mathrm{II}}}}$) from the hybrid approach in the contracting phase until the moment when the initial states are imposed. The right panel depicts the absolute values of the effective masses in LQC, mLQC-I ($s_{{\scriptscriptstyle{\mathrm{I}}}}$) and mLQC-II until $t= 10^7 ~t_\mathrm{Pl}$. Right after the bounce, the effective masses take positive values in all three models. During inflation, the effective masses change sign, which produces the spikes in the right panel.} \label{f1} \end{figure} \begin{figure} \includegraphics[width=8cm]{2a} \includegraphics[width=8cm]{2b} \caption{In this figure, we compare the behavior of the effective masses in mLQC-I in the hybrid approach and the dressed metric approach. The left panel shows the effective mass $s_{{\scriptscriptstyle{\mathrm{I}}}}$ near the bounce in the hybrid approach, while the right panel depicts the effective mass $s_\mathrm{dm}$ in the dressed metric approach. Since in the contracting phase the universe quickly approaches de Sitter space in the backward evolution, $s_\mathrm{dm}$ tends exponentially to negative infinity, in contrast with the positive $s_{{\scriptscriptstyle{\mathrm{I}}}}$.} \label{f2} \end{figure} After fixing the background, one can then proceed to choose the initial states for the scalar perturbations. These initial states are set in the contracting phase. For LQC and mLQC-II, we set the initial states at $t=-10^4~t_\mathrm{Pl}$, while for mLQC-I the initial states are set at $t=-2$, where the spacetime is well approximated as being sourced by a positive cosmological constant.
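Incidentally, the pre-inflationary e-folding numbers in (\ref{4b2}) can be read off from the numerical background solution; a possible sketch follows, where the onset-of-inflation criterion $w<-1/3$ (i.e.\ $\ddot a>0$ classically) is our assumption.
\begin{verbatim}
# Illustrative sketch: N_pre = ln(a/a_B) = (1/3) ln(v/v_B) from the bounce to
# the onset of inflation, the latter detected via the equation of state.
import numpy as np

def pre_inflationary_efolds(v, phi, pphi, U):
    # arrays v, phi, pphi sample the solution starting at the bounce
    rho = pphi**2/(2*v**2) + U(phi)
    P   = pphi**2/(2*v**2) - U(phi)
    i_on = np.argmax(P/rho < -1.0/3.0)   # first index with w < -1/3
    return np.log(v[i_on]/v[0])/3.0      # v ~ a^3
\end{verbatim}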
The three models are mainly differentiated by the effective masses in the Mukhanov-Sasaki equation, and we compare these masses in Fig. \ref{f1}. In the right panel of Fig. \ref{f1}, the absolute values of the effective masses in LQC and mLQC-I/II are depicted in the expanding phase until $t=10^7~ t_\mathrm{Pl}$. Right after the bounce, the effective masses in all three models take positive values. During the inflationary phase, the effective masses change their signs and thus produce the spikes in the figure. As can be seen from the figure, the behavior of the effective masses is qualitatively similar in these three models in the expanding phase, while in the contracting phase only LQC and mLQC-II show qualitatively similar behavior of the effective masses. The behavior of the effective mass in mLQC-I is quite different from LQC and mLQC-II in the pre-bounce phase, and is shown separately in the left panel of Fig. \ref{f2}, where it is compared with the effective mass in the dressed metric approach in the right panel. Note that in the dressed metric approach, the Mukhanov-Sasaki equation in the quasi de Sitter contracting phase of mLQC-I takes the form \begin{equation} \nu^{\prime \prime}_k+\left(k^2-\frac{2}{\eta^2}\right)\nu_k=0, \end{equation} so the corresponding effective mass is given by \begin{equation} \label{4b3} s_\mathrm{dm}= -\frac{2}{\eta^2}, \end{equation} where the prime denotes derivatives with respect to conformal time and the contribution from the inflationary potential is ignored, as it is much smaller than the contribution from the Planck-scale curvature near the bounce \cite{lsw2020}. The difference between the effective masses in the two approaches is immediate. In the dressed metric approach, the effective mass takes negative values and increases exponentially in magnitude during the backward evolution in the contracting phase. Then, the following initial state of the linear perturbations is chosen \begin{equation} \label{bd} \nu_k=\frac{e^{-ik\eta}}{\sqrt{2k}}\left(1-\frac{i}{k\eta}\right). \end{equation} It should be noted that the modes in the infrared and intermediate regimes are outside the Hubble horizon initially, which indicates $|k\eta|\ll1$ at the time when the initial states are imposed. As a result, the second term in the parenthesis of (\ref{bd}) cannot be ignored for those modes. Only the modes in the ultraviolet regime are initially inside the Hubble horizon, and hence their initial states coincide with the zeroth order adiabatic states. On the other hand, in the hybrid approach, the behavior of the effective mass turns out to be quite different: it takes positive values and increases in a non-exponential way. Consequently, in the hybrid approach, all the relevant modes are inside the Hubble horizon at the initial time. For this reason, one would expect the difference between the power spectra from the two approaches to occur mainly in the infrared and intermediate regimes. In the following, we use the second order adiabatic states for the numerical simulations of the power spectrum in the hybrid approach. The use of other initial states, like the zeroth or fourth order adiabatic states, will not qualitatively change our results.
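For reference, the de Sitter-type state (\ref{bd}) used in the dressed metric comparison can be encoded directly; this short sketch (the conformal-time derivative is our own computation) can seed the mode integrator sketched earlier.
\begin{verbatim}
# Illustrative sketch: the de Sitter-type initial state (valid for eta < 0);
# it satisfies the Wronskian condition by construction.
import numpy as np

def bd_state(k, eta):
    pref = np.exp(-1j*k*eta)/np.sqrt(2*k)
    nu   = pref*(1 - 1j/(k*eta))
    dnu  = pref*(-1j*k - 1/eta + 1j/(k*eta**2))   # d(nu)/d(eta)
    return nu, dnu
\end{verbatim}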
\begin{figure} \includegraphics[width=8cm]{3a} \includegraphics[width=8cm]{3b} \caption{ In this figure, we compare the scalar power spectra for the modes $k\in(10^{-5},50)$ in LQC (blue squares) and mLQC-I (red triangles) from the hybrid approach, when the initial states are chosen to be the second order adiabatic states and imposed in the contracting phase. The right panel shows the relative difference of the two power spectra defined in (\ref{4b4}). } \label{f3} \end{figure} \begin{figure} \includegraphics[width=8cm]{4a} \includegraphics[width=8cm]{4b} \caption{The scalar power spectra from the hybrid approach are depicted for LQC (blue squares) and mLQC-II (red triangles) with the initial states imposed at $t=-10^4~t_\mathrm{Pl}$ in both models. These initial states are the second order adiabatic states. The right panel shows the relative difference between the two models.} \label{f4} \end{figure} \begin{figure} \includegraphics[width=8cm]{5a} \includegraphics[width=8cm]{5b} \caption{We compare the scalar power spectra (left panel) in mLQC-I (red triangles) and mLQC-II (blue squares) from the hybrid approach and show the relative difference between the power spectra (right panel). The initial states are imposed at $t=-10^4~t_\mathrm{Pl}$ in mLQC-II and at $t=-2$ in mLQC-I. } \label{f5} \end{figure} Our final results on the power spectra are presented and compared in Figs. \ref{f3}-\ref{f5}. In Fig. \ref{f3}, the scalar power spectra in LQC and mLQC-I are compared in the range of the co-moving wavenumber $k\in (10^{-5}, 50)$. The power spectrum can still be divided into three distinctive regimes: the suppressed infrared regime for $k\in (10^{-5}, 10^{-4})$, the amplified oscillatory regime for $k\in(10^{-4},1)$ and the scale invariant regime for $k\in (1, 50)$. Although the power spectra in LQC and mLQC-I have similar qualitative behavior throughout the considered range of the wavenumber, their quantitative difference can be seen from the right panel of Fig. \ref{f3}, in which the relative difference $\mathcal E$ is shown. For any two quantities $\mathcal Q_1$ and $\mathcal Q_2$, the relative difference $\mathcal E$ is defined by \begin{equation} \label{4b4} \mathcal E= 2\frac{|\mathcal Q_1-\mathcal Q_2|}{|\mathcal Q_1+\mathcal Q_2|}. \end{equation} In the infrared and oscillatory regimes, the relative difference can be as large as $100\%$, while it reduces to less than $1\%$ in the scale invariant regime. This is primarily because LQC and mLQC-I have the same classical limit in the expanding phase and, as shown in Fig. \ref{f1}, the effective masses in both models also tend to the same value in the inflationary phase. It is also remarkable that in the infrared and oscillatory regimes, the power spectrum in mLQC-I is suppressed as compared with its counterpart in LQC. This is a unique feature, manifest only in the hybrid approach. In the dressed metric approach, the power spectrum in mLQC-I is largely amplified in the infrared regime, where its magnitude is as large as the Planck scale \cite{IA19,lsw2020}. The main reason for this seemingly contradictory behavior of the power spectrum in mLQC-I in the two approaches lies in the distinctive behavior of the effective masses in the two approaches, as depicted in Fig. \ref{f2}, and the corresponding choices of the initial states in the contracting phase. In Fig. \ref{f4}, the power spectra in LQC and mLQC-II are compared.
As expected from the similarity of the effective masses in these two models, the relative difference between their power spectra is smaller than the relative difference between LQC and mLQC-I. In the infrared regime, the relative difference is around $50\%$. The relative difference in the oscillatory regime also oscillates, since in this regime the oscillations of the power spectra in LQC and mLQC-II are in general out of phase. The comparison between the power spectra from mLQC-I and mLQC-II is presented in Fig. \ref{f5}, where we find that a large relative difference (more than $100\%$) is still present in the infrared and oscillatory regimes, while the relative difference in the scale invariant regime is around $2\%$. In the above analysis, we have compared the scalar power spectra from the three models when the initial conditions of the background dynamics are chosen at the bounce so that the numbers of inflationary e-foldings turn out to be the same. This results in some difference in the co-moving wavenumbers of the pivot mode at the horizon crossing in the three models, as presented in (\ref{comoving}). Nevertheless, there is a large overlap in the values of $k$ for the observable windows. Hence, a comparison of the primordial power spectrum among the three different models can be reliably made. In principle, one can choose other initial conditions such that the wavenumbers of the pivot mode are exactly the same in the three models. This would imply that the numbers of inflationary e-foldings are different. Then the observable window corresponds to the same range of the co-moving wavenumbers in all three models. For this set of initial conditions of the background, and with the same initial states for the perturbations used in Figs. \ref{f3}-\ref{f5}, we find results very similar to those presented in those figures. Let us summarize the results from the numerical simulations. After fixing the initial conditions of the background dynamics and the initial states of the scalar perturbations for LQC, mLQC-I and mLQC-II in the hybrid approach, we compared the effective masses and the resulting power spectra in these three models. We found a similar pattern of the power spectra from the three models, which can be divided into three distinctive regimes. The maximum relative difference of the power spectra from different models is reached in the infrared and oscillatory regimes, while in the scale invariant regime all three models predict a similar result which is consistent with the current CMB observations. It is to be emphasized that in the hybrid approach, the power spectrum in mLQC-I is suppressed in the infrared and oscillatory regimes, which is in striking contrast with the results from the dressed metric approach. This remarkable difference originates from the distinctive properties of the effective masses in these two approaches and reveals for the first time differences in predictions due to the underlying constructions of these two approaches. \section{Conclusions} \label{summary} \renewcommand{\theequation}{5.\arabic{equation}}\setcounter{equation}{0} In this paper, we discussed the effective dynamics of the hybrid approach in the modified loop cosmological models, namely mLQC-I and mLQC-II. For this purpose, we first briefly reviewed the effective dynamics of the hybrid approach in LQC, including the effective equations for the background dynamics and the gauge invariant perturbations.
An important step in deriving the Mukhanov-Sasaki equation in LQC is the specification of the operator $\hat \Lambda$, which is well-defined in the subspaces $\mathcal H^\pm_\epsilon$ selected by the homogeneous scalar constraint. Following the same strategy, we specified the analogs of the operator $\hat \Lambda$ and their effective counterparts in mLQC-I/II. It turns out that the Mukhanov-Sasaki equation takes the same form in these two models as in LQC, and the only difference lies in the effective masses, which have distinct behavior in each model. In order to quantitatively study the difference in the power spectra of the three loop cosmological models, we then considered the Starobinsky inflation driven by a single scalar field and found numerical solutions of the background and the perturbations under a representative set of initial conditions that makes the inflationary e-foldings equal in the three models. The initial states for the perturbations are chosen to be the second order adiabatic states and imposed in the contracting phase. Under these conditions, we compared the effective masses and the scalar power spectra in LQC and mLQC-I/II. In the expanding phase, the effective masses are qualitatively similar in the three models: they are initially positive and decreasing in the pre-inflationary stage. Later, the effective masses change sign during inflation and their magnitudes keep increasing until the end of inflation. Since the square of the comoving Hubble horizon is given by the negative of the inverse of the effective mass, the behavior of the effective masses in the three models is consistent with the decreasing comoving Hubble horizon during inflation. Based on the numerical solutions of the background dynamics, we find that in the contracting phase the effective masses in LQC and mLQC-II have similar properties: both of them tend to decrease in the backward evolution from the bounce, while the effective mass in mLQC-I has qualitatively different behavior. Initially, the effective mass in mLQC-I is decreasing in the backward evolution from the bounce. When the background spacetime approaches the de Sitter space, the effective mass starts to increase. We find that in the hybrid approach the rate of change of the effective mass is much smaller than in the dressed metric approach, and, most importantly, the effective masses in these two approaches have opposite signs. As a result, in the dressed metric approach only the ultraviolet modes are inside the Hubble horizon when the initial states are imposed, while in the hybrid approach all the relevant modes are well inside the horizon. This is a key difference between the two approaches. The resulting power spectra in LQC and mLQC-I/II also assume similar patterns with three distinctive regimes: the infrared regime, the oscillatory regime and the ultraviolet regime. The magnitudes of the power spectra in the three models are comparable in all three regimes. Quantitatively, more diversity is present in the infrared and oscillatory regimes than in the ultraviolet regime. The relative difference of the power spectra can be as large as $100\%$ between LQC/mLQC-II and mLQC-I, and $50\%$ between LQC and mLQC-II, in the former regimes, while in the ultraviolet regime all three models predict scale invariant power spectra which are consistent with the observations within the numerical errors.
Furthermore, the magnitude of the power spectrum in mLQC-I is suppressed in the infrared and oscillatory regimes as compared with the power spectra in LQC and mLQC-II. This behavior is in sharp contrast with the results from the dressed metric approach in \cite{lsw2020}, where a Planck-scale magnitude of the power spectrum in the same regimes is found in mLQC-I. The difference between the two approaches for mLQC-I originates from the distinctive properties of the effective masses in the two approaches. This is remarkable, since it is the first time in loop cosmology that the dressed metric and hybrid approaches yield significantly different predictions for the power spectrum. We would like to emphasize that our results are robust with respect to the choices of the initial conditions and the initial adiabatic states. Although the initial volume is set to unity, we note that only the holonomy corrections are considered in the effective dynamics. As the equations of motion are invariant under a rescaling of the volume, it is convenient to set the initial volume to unity at the bounce. Any rescaling of the initial volume is equivalent to rescaling the comoving wavenumbers and thus translating the power spectrum as a whole (as would happen in standard GR). Different choices of the initial values of the scalar field can change the number of e-foldings from the bounce to the horizon exit of the pivot mode and thus move the observable window in the power spectrum. Different choices of the adiabatic states are related via Bogoliubov transformations, and the resulting averaged power spectra will differ by a constant determined by the initial states. As a result, the relative difference of the power spectra from different models will not change if a different initial state is specified instead of the second order adiabatic states. However, the absolute magnitude of the power spectra in the infrared regime does depend on the initial states, as shown in \cite{bo2016}. Since our main purpose is to study the difference between LQC and mLQC-I/II, we find it sufficient to present results for the considered set of initial conditions. Moreover, our result that the power spectrum in mLQC-I is significantly different in the dressed and hybrid approaches is also independent of the choices discussed above, since it is tied to the effective masses, which turn out to be significantly different in the two approaches. Finally, we conclude with the following remarks. Although our numerical analysis shows that the power spectra from both the hybrid and the dressed metric approaches for mLQC-I differ only in the infrared and oscillatory regimes and are consistent with the CMB observations in the ultraviolet regime at the level of the linear perturbations, it is essential to consider the non-Gaussianity in mLQC-I to fully compare the two approaches in the observable regime, as the magnitude of the power spectrum from the dressed metric approach is of the Planck scale in the infrared and oscillatory regimes. Therefore, the perturbations with Planck-scale magnitude in the long wavelength modes are quite likely to affect the magnitude of the power spectrum of the short wavelength modes through the interactions between these modes. Unlike in the dressed metric approach, the small magnitude of the power spectrum in the hybrid approach throughout the whole spectrum justifies its application to mLQC-I at the level of the linear perturbations.
It also implies that, at the linear order, the hybrid approach is well suited to the different quantizations of the classical Hamiltonian constraint in a spatially flat FLRW universe in loop cosmology. \section*{Acknowledgements} We thank Guillermo Mena Marug\'an for comments on this manuscript. B.F.L. and P.S. are supported by NSF grant PHY-1454832. J.O. acknowledges the Operative Program FEDER 2014-2020 and the Consejer\'ia de Econom\'ia y Conocimiento de la Junta de Andaluc\'ia. A.W. is supported in part by the National Natural Science Foundation of China (NNSFC) under Grants Nos. 11675145 and 11975203.
\section{Motivations} The twin paradox is a well-known puzzle in the theory of relativity: one of two identical twins travels on a high-speed spacecraft away from Earth and then turns around and comes back, while the other stays on Earth. The puzzle appears because each twin apparently sees the other as moving, and therefore time dilation seems to suggest that, paradoxically, the twins should find each other less aged by the time they meet. As the paradox is resolved within the standard theory of relativity, it turns out that the traveling twin is less aged than the earthbound sibling, not the other way around. There have been various explanations of the twin paradox, all recognizing the crucial fact that the symmetry between the two twins is in fact illusory. The earthbound twin is in the same inertial (rest) frame all the time, while the traveling twin passes through two different (outbound and inbound) inertial frames throughout the journey. The frame switch experienced by the traveling twin is essentially the reason for the aging difference. The switch of inertial frames implies that the traveling twin must experience acceleration at the time of turnaround, which can also be used to account for his slowed aging in terms of gravitational time dilation (although it is often argued that acceleration \textit{per se} plays no direct role). For more discussions on the twin paradox, see \cite{Debs:1996} and references therein. The puzzle strikes again when we consider the twin paradox in a flat spacetime with one spatial dimension compactified. If the traveling twin moves at a constant velocity in the compact direction, his frame remains inertial for the entire journey, yet the topology allows him to meet the earthbound twin after he circumnavigates the compact dimension (see \figref{fig:twin paradox}). As the traveling twin undergoes no frame switch at all, the standard explanation of the aging difference no longer works. The resolution of the puzzle lies in the fact that compactifying a spatial dimension breaks the global Lorentz invariance. As a consequence, there is now a class of \emph{preferred} inertial reference frames, namely, those at rest in the compact direction \cite{Brans:1973,Peters:1983as,Dray:1990,Barrow:2001rj,Uzan:2000wp}. As inertial frames are not all equivalent now, an observer can in principle experimentally determine the frame's moving velocity in the compact direction with respect to the preferred frame. This can be done by performing a ``global'' experiment: sending two light beams in opposite directions along the compact dimension and measuring the arrival time of both signals when they come back. The frame's moving velocity relative to the preferred frame can be inferred from the time delay between the two arriving signals \cite{Brans:1973,Peters:1983as}. On the other hand, a ``local'' experiment is also possible. For instance, as one spatial dimension is compactified, the electrostatic field of a point charge deviates from the $1/r^2$ form. Measuring the deviation can also determine the frame's moving velocity relative to the preferred frame \cite{Bansal:2005ue}. It is instructive to look for other kinds of local experiments, as they will teach us to what extent the inertial reference frames are inequivalent.
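As an aside, the ``global'' experiment admits a compact quantitative sketch. With $c=1$ and circumference $L$ measured in the preferred frame, the two beams return to the moving observer after coordinate times $L/(1\mp v)$, so the proper-time delay is $\Delta\tau=2\gamma Lv$; the snippet below (our own elementary derivation, in the spirit of \cite{Brans:1973,Peters:1983as}) inverts this relation.
\begin{verbatim}
# Illustrative sketch of the global experiment (units with c = 1).
import numpy as np

def proper_time_delay(L, v):
    gamma = 1/np.sqrt(1 - v**2)
    return 2*L*v*gamma    # (t_plus - t_minus)/gamma, t_plus_minus = L/(1 -+ v)

def velocity_from_delay(L, dtau):
    return dtau/np.sqrt(4*L**2 + dtau**2)   # inverts dtau = 2*L*v*gamma

assert np.isclose(velocity_from_delay(1.0, proper_time_delay(1.0, 0.3)), 0.3)
\end{verbatim}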
In particular, as the velocity relative to the preferred frame now bears an absolute meaning (in a sense analogous to acceleration in the Minkowski spacetime), it is suggestive that even the Unruh-DeWitt detector moving at a constant velocity might register signals. Recently, it was shown that, in the Minkowski spacetime, an Unruh-DeWitt detector moving at constant velocity, coupled to a massless scalar field in the \emph{polymer quantization} (which implements some features of the microscopic discreteness in loop quantum gravity) \cite{Hossain:2010eb}, detects radiation \cite{Kajuri:2015oza}. This is essentially because the Lorentz invariance is violated at the UV scale by the microscopic discreteness. In our case of a flat spacetime with a compact dimension, where the Lorentz invariance is violated at the IR scale by the large length of the compact dimension, it is curious to know whether the Unruh-DeWitt detector moving at a constant velocity also sees radiation. This paper investigates the response of the Unruh-DeWitt detector coupled to a massless scalar field in a flat spacetime with one spatial dimension compactified. It turns out that the Unruh-DeWitt detector moving at a constant velocity (as in the left of \figref{fig:twin paradox}) does \emph{not} register signals, contrary to the case of the polymer quantization. However, \emph{within} the inertial frame (as in the right of \figref{fig:twin paradox}), the frame's moving velocity in the compact direction can in principle be determined by instantaneously accelerating the Unruh-DeWitt detector and measuring the instantaneous transition probability. That is, the response of the Unruh-DeWitt detector can be used to discriminate between inertial reference frames with different velocities in the compact direction.
\begin{figure} \begin{tikzpicture} \begin{scope}[shift={(0,0)},scale=0.75] \begin{scope}[shift={(0,0)},scale=1] \draw [gray,dotted] (1.2,0) arc (0:180:1.2 and 0.4); \draw [gray] (-1.2,0) arc (180:360:1.2 and 0.4); \end{scope} \begin{scope}[shift={(0,6-0.2)},scale=1] \draw [gray] (0,0) ellipse (1.2 and 0.4); \end{scope} \draw [-,gray] (1.2,0) -- (1.2,6-0.2); \draw [-,gray] (-1.2,0) -- (-1.2,6-0.2); \draw [->] (-2,0.1) -- (-2,3); \node at (-2,3.3) {$t$}; \draw [->] (-1.2,-0.3) arc (180:360:1.2 and 0.4); \node at (1.9,-0.2) {$x^{d-1}$}; \draw [-,thick,blue] (0,-0.4) to [out=45,in=260] (1.2,1.5); \draw [-,thick,dotted,blue] (1.2,1.5) to [out=100,in=300] (-1.2,3.8); \draw [-,thick,blue] (-1.2,3.8) to [out=70,in=225] (0,5.6-0.2); \draw [-,thick,red] (0,-0.4) -- (0,5.6-0.2); \end{scope} \begin{scope}[shift={(6,0)},scale=0.75] \begin{scope}[shift={(0,0)},scale=1] \draw [gray,dotted] (1.2,0) arc (0:180:1.2 and 0.4); \draw [gray] (-1.2,0) arc (180:360:1.2 and 0.4); \end{scope} \begin{scope}[shift={(0,6-0.2)},scale=1] \draw [gray] (0,0) ellipse (1.2 and 0.4); \end{scope} \draw [-,gray] (1.2,0) -- (1.2,6-0.2); \draw [-,gray] (-1.2,0) -- (-1.2,6-0.2); \draw [-,thick,blue] (0,-0.4) to [out=45,in=260] (1.2,1.5); \draw [-,thick,dotted,blue] (1.2,1.5) to [out=100,in=300] (-1.2,3.8); \draw [-,thick,blue] (-1.2,3.8) to [out=70,in=225] (0,5.6-0.2); \begin{scope}[shift={(0,0)},scale=1] \begin{scope}[shift={(0.1,-0.3)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(0.39,0)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(0.65,0.3)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(0.865,0.6)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(1.025,0.9)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \end{scope} \begin{scope}[shift={(-1.2,4.2-0.1)},scale=1] \begin{scope}[shift={(0.13,0)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(0.30,0.3)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(0.524,0.6)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(0.80,0.9)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \begin{scope}[shift={(1.1,1.2)},scale=1] \draw[blue] (-0.1,-0.1) rectangle (0.1,0.1); \end{scope} \end{scope} \draw [-,thick,red] (0,-0.4) -- (0,5.6-0.2); \begin{scope}[shift={(0,-0.2)},scale=1] \foreach \i in {0,0.3,...,5.7} \draw[red] (-0.1,-0.1+\i) rectangle (0.1,0.1+\i); \end{scope} \end{scope} \end{tikzpicture} \caption{[Left:] Two inertial world lines (geodesics): one stays at rest and the other circumnavigates the compact dimension. [Right:] Two inertial reference frames comoving with the two world lines.}\label{fig:twin paradox} \end{figure} \section{The Unruh-DeWitt detector} The Unruh-DeWitt detector \cite{Unruh:1976db,DeWitt:1979} is an idealized point-particle detector coupled to a scalar field $\phi$ via a monopole interaction. If the detector moves along a world line $x^\mu(\tau)$, where $\tau$ is the detector's proper time, the Lagrangian for the monopole interaction is given by $cm(\tau)\phi(x^\mu(\tau))$, where $c$ is a small coupling constant and $m$ is the operator of the detector's monopole moment. 
For a generic trajectory $x^\mu(\tau)$, the detector in general does not remain in its ground state labelled by the energy $E_0$ but can ascend to an excited state with energy $E>E_0$, while at the same time the field $\phi$ makes a transition from the vacuum state $\ket{0}$ to an excited state $\ket{\psi}$. By first-order perturbation theory, the amplitude for the transition $\ket{0,E_0}\rightarrow\ket{\psi,E}$ is given by \begin{equation} ic\,\bra{\psi,E}\int_{-\infty}^\infty m(\tau)\phi\left(x^\mu(\tau)\right) d\tau \ket{0,E_0}, \end{equation} which leads to the total transition probability, summed over all possible $E$ and $\psi$, \begin{equation}\label{transition amplitude} c^2\sum_E\abs{\bra{E}m(0)\ket{E_0}}^2\, \mathfrak{F}(E-E_0), \end{equation} where \begin{equation}\label{response function} \mathfrak{F}(\Delta E) = \int_{-\infty}^\infty d\tau \int_{-\infty}^\infty d\tau' e^{-i\Delta E(\tau-\tau')} G^+(x(\tau),x(\tau')) \end{equation} is the \emph{response function} and the remaining factor represents the detector's \emph{selectivity}. The Wightman functions $G^{\pm}$ are defined as \begin{subequations} \begin{eqnarray} G^+(x,x')&:=&\bra{0}\phi(x)\phi(x')\ket{0},\\ G^-(x,x')&:=&\bra{0}\phi(x')\phi(x)\ket{0}. \end{eqnarray} \end{subequations} If the detector is in equilibrium with the field $\phi$ along the trajectory, we have \begin{equation} G^+(x(\tau),x(\tau')) = G^+(\Delta\tau), \quad \Delta\tau:=\tau-\tau'. \end{equation} In this case, the transition probability per unit proper time is given by \begin{equation}\label{transition rate} \mathcal{R}= c^2\sum_E\abs{\bra{E}m(0)\ket{E_0}}^2 \int_{-\infty}^\infty d(\Delta\tau) e^{-iE\Delta\tau} G^+(\Delta\tau). \end{equation} More details can be found in \cite{Birrell:1982ix}.\footnote{This paper follows the notations used in \cite{Birrell:1982ix} as closely as possible.} \section{The Wightman function} Consider a real scalar field $\phi(x)\equiv\phi(t,\mathbf{x})$ in $d$-dimensional spacetime, where events are coordinated as $x^\mu=(x^0,\mathbf{x})=(t,x^1,\dots,x^{d-1})$. The mode expansion of $\phi(x)$ is given by \begin{equation} \phi(t,\mathbf{x})=\sum_\mathbf{k} \left(a_\mathbf{k}u_\mathbf{k}(t,\mathbf{x}) +a_\mathbf{k}^\dag u_\mathbf{k}^*(t,\mathbf{x})\right). \end{equation} If the spacetime is flat but the $(d-1)$-th spatial direction is compactified with a finite length $L$, the Fourier modes $u_\mathbf{k}$ are given by \begin{equation} u_\mathbf{k}(t,\mathbf{x}) = \frac{1}{\left(2\omega_\mathbf{k}(2\pi)^{d-2}L\right)^{1/2}}\, e^{i\mathbf{k}\cdot\mathbf{x}-i\omega_\mathbf{k}t}, \end{equation} where the frequency associated with $\mathbf{k}=(k^1,\dots,k^{d-1})$ is \begin{equation} \omega_\mathbf{k} := \sqrt{\mathbf{k}^2+m^2}, \end{equation} and the $(d-1)$-th component of $\mathbf{k}$ takes only discrete values: \begin{equation} k^{d-1}=\frac{2\pi n}{L}, \quad n\in\mathbb{Z}. \end{equation} Let $\ket{0_L}$ be the vacuum state in accordance with the above mode expansion, i.e., \begin{equation} a_\mathbf{k}\ket{0_L} = 0, \quad \text{for all}\ \mathbf{k}.
\end{equation} The Wightman function $G_L^+(x,x')$ then takes the form \begin{eqnarray}\label{GL+} G_L^+(x,x') &:=& \bra{0_L}\phi(x)\phi(x')\ket{0_L} \nonumber\\ &=& \left(\frac{1}{L}\sum_{k^{d-1}}\right) \int \frac{d^{d-2}\mathbf{k}}{(2\pi)^{d-2}} \frac{1}{2\omega_\mathbf{k}}\, e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{x'})-i\omega_\mathbf{k}(t-t')} \nonumber\\ &=& \sum_{n=-\infty}^{\infty} \int \frac{d^{d-1}\mathbf{k}}{(2\pi)^{d-1}} \frac{1}{2\omega_\mathbf{k}}\, e^{i\mathbf{k}\cdot(\mathbf{x}-\mathbf{x'})-i\omega_\mathbf{k}(t-t')} e^{-inLk^{d-1}}, \end{eqnarray} where we have used the Poisson summation formula.\footnote{The Poisson summation formula reads \begin{equation*} \sum_{n=-\infty}^\infty f(n)= \sum_{k=-\infty}^\infty \int_{-\infty}^\infty dx\, f(x)\, e^{-2\pi i k x}. \end{equation*}} The Wightman function depends only on the difference of $x$ and $x'$, i.e., $G_L^+(x,x')=G_L^+(x-x')$; furthermore, as can be seen from \eqnref{GL+}, it is periodic in the $x^{d-1}$ direction, i.e., \begin{equation} G_L^+(t-t',x^1-x^{1'},\dots,x^{d-1}-x^{d-1'}+nL) = G_L^+(t-t',x^1-x^{1'},\dots,x^{d-1}-x^{d-1'}), \quad n\in\mathbb{Z}. \end{equation} Eq.~\eqnref{GL+} can be cast as \begin{equation}\label{GL+ 2} G_L^+(x,x') = \sum_{n=-\infty}^\infty G^+(t-t',x^1-x^{1'},\dots,x^{d-1}-x^{d-1'}-nL), \end{equation} where $G^+(x,x')\equiv G_{L\rightarrow\infty}^+(x,x')$ is the ordinary Wightman function in the Minkowski spacetime. The Green functions (Wightman function included) are generally very complicated. In the case of a massless ($m=0$) scalar field in 4-dimensional spacetime, $G^+(x,x')$ can be explicitly calculated as (see \cite{Birrell:1982ix}) \begin{equation}\label{G+} G^+(x,x') = -\frac{1}{4\pi^2}\, \frac{1}{(t-t'-i\epsilon)^2-\abs{\mathbf{x}-\mathbf{x'}}^2}, \end{equation} where a small (infinitesimal) imaginary number $i\epsilon$, $\epsilon>0$, is prescribed as a regularization parameter to ensure convergence. The rest of this paper focuses on this 4-dimensional case with $x^\mu=(t,x,y,z)$. We will first study the response of the Unruh-DeWitt detector moving at a constant velocity and then the detector moving with a constant acceleration, in the compact and noncompact directions, respectively. \section{Constant velocity} Consider that the detector moves in the compact ($z$) direction with a constant velocity $\mathbf{v}=\abs{\mathbf{v}}\hat{z}$. The trajectory is given by the world line: \begin{equation} t=u^0\tau,\quad x=y=\text{const},\quad z=\abs{\mathbf{u}}\tau, \end{equation} where the 4-velocity $u^\mu$ is given by \begin{equation} u^\mu = (u^0,\mathbf{u}) = \left(\frac{1}{\sqrt{1-\mathbf{v}^2}},\frac{\mathbf{v}}{\sqrt{1-\mathbf{v}^2}}\right). \end{equation} Eq.~\eqnref{GL+ 2} with \eqnref{G+} now reads \begin{eqnarray}\label{GL+ const v} &&G_L^+(x(\tau),x(\tau')) \equiv G_L^+(\Delta\tau) = -\frac{1}{4\pi^2}\sum_{n=-\infty}^\infty \frac{1}{(u^0\Delta\tau-i\epsilon)^2-(\abs{\mathbf{u}}\Delta\tau-nL)^2} \nonumber\\ &=& -\frac{1}{4\pi^2}\sum_{n=-\infty}^\infty \frac{1}{\left((u^0+\abs{\mathbf{u}})\Delta\tau-i\epsilon-nL\right) \left((u^0-\abs{\mathbf{u}})\Delta\tau-i\epsilon+nL\right)}. \end{eqnarray} Each summand has two poles, at $\Delta\tau=(\pm nL+i\epsilon)/(u^0\pm\abs{\mathbf{u}})$, if $\Delta\tau$ is considered as a complex variable. The transition rate \eqnref{transition rate} can be calculated by a contour integral along an infinite semicircle on the lower-half $\Delta\tau$ plane.
However, as $u^0>\abs{\mathbf{u}}$, all poles in \eqnref{GL+ const v} are on the upper-half plane, and hence the contour integral turns out to be zero. This shows that no particles are detected. Furthermore, although $\ket{0_L}$ is not invariant under a boost in the $z$ direction, it remains invariant under boosts in the $x$ and $y$ directions. Therefore, the response of the detector should be the same if we boost it in the $x$ and $y$ directions. Consequently, we arrive at the conclusion that no signals are registered if the detector moves with an arbitrary constant velocity. \section{Constant acceleration in the compact direction}\label{sec:acceleration in compact direction} Consider that the detector moves in the $z$ direction with a constant acceleration $1/\alpha$. The trajectory is given by the world line: \begin{equation} t=\alpha\sinh\frac{\tau}{\alpha},\quad x=y=\text{const},\quad z=\alpha\cosh\frac{\tau}{\alpha}. \end{equation} Eq.~\eqnref{GL+ 2} with \eqnref{G+} now reads \begin{eqnarray}\label{GL+ a in z original} &&G_L^+(x(\tau),x(\tau'))\nonumber\\ &=&-\frac{1}{4\pi^2}\sum_{n=-\infty}^\infty \frac{1}{\left(\alpha\sinh\frac{\tau}{\alpha}-\alpha\sinh\frac{\tau'}{\alpha}-i\epsilon\right)^2 -\left(\alpha\cosh\frac{\tau}{\alpha}-\alpha\cosh\frac{\tau'}{\alpha}-nL\right)^2}. \end{eqnarray} Consequently (see \hyperref[app:details]{Appendix}), we have \begin{equation}\label{GL+ a in z} G_L^+(x(\tau),x(\tau'))=-\frac{1}{16\pi^2\alpha^2}\sum_{n=-\infty}^\infty \frac{1}{\sinh^2\left(\frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{2\alpha}\right) +n\left(\frac{L}{\alpha}\right)\sinh\frac{\tau+\tau'}{2\alpha}\sinh\frac{\Delta\tau}{2\alpha} -\frac{n^2}{4}\left(\frac{L}{\alpha}\right)^2}. \end{equation} When $L\gg \alpha$, only the summand with $n=0$ survives, and \eqnref{GL+ a in z} reduces to the ordinary result in the Minkowski spacetime: \begin{equation}\label{G+ a in z} G^+(x(\tau),x(\tau'))=-\frac{1}{16\pi^2\alpha^2} \frac{1}{\sinh^2\left(\frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{2\alpha}\right)} =-\frac{1}{4\pi^2}\sum_{k=-\infty}^\infty \frac{1}{\left(\Delta\tau-i\epsilon+2\pi i\alpha k\right)^2}, \end{equation} where we have used the identity \begin{equation}\label{identity a} \csc^2\pi x = \frac{1}{\pi^2}\sum_{k=-\infty}^\infty \frac{1}{(x-k)^2}. \end{equation} Substituting \eqnref{G+ a in z} into \eqnref{transition rate} and performing the contour integral, we obtain the transition rate \begin{equation} \mathcal{R}=\frac{c^2}{2\pi} \sum_E \frac{(E-E_0)\abs{\bra{E}m(0)\ket{E_0}}^2}{e^{2\pi(E-E_0)\alpha}-1}. \end{equation} The celebrated Planck factor $(e^{2\pi(E-E_0)\alpha}-1)^{-1}$ indicates that the accelerated detector registers particles of $\phi$ as if it were immersed in a bath of thermal radiation at the temperature \begin{equation} T=\frac{1}{2\pi k_\mathrm{B}\alpha} \equiv \frac{\abs{\text{acceleration}}}{2\pi k_\mathrm{B}}. \end{equation} For generic cases where $L\not\gg \alpha$, \eqnref{GL+ a in z} can be cast in a closed form by means of the identity \begin{equation}\label{identity b} \sum_{n=-\infty}^\infty \frac{1}{n^2+an+b} = \frac{\pi \cot \left(\frac{1}{2} \left(\pi a-\pi \sqrt{a^2-4 b}\right)\right)-\pi \cot \left(\frac{1}{2} \left(\pi \sqrt{a^2-4 b}+\pi a\right)\right)}{\sqrt{a^2-4 b}}.
\end{equation} Consequently, it turns out that $G_L^+(x(\tau),x(\tau'))$ depends not only on $\Delta\tau\equiv\tau-\tau'$ but also on $\tau+\tau'$, indicating that the detector moving in the $z$ direction with a constant acceleration is \emph{not} in equilibrium with the field $\phi$.\footnote{It is crucial to know whether the dependence on $\tau+\tau'$ in \eqnref{GL+ a in z} is erased under the summation over $n$. By the identity \eqnref{identity b}, it is rigorously proven that $G_L^+(x(\tau),x(\tau'))$ depends on both $\Delta\tau$ and $\tau+\tau'$.} Despite the fact that the detector is not in equilibrium with $\phi$, we can still ask what the ``instantaneous'' transition probability $\mathcal{P}_{\tau_1\rightarrow\tau_2}$ is if the detector is adiabatically switched on from $\tau_1$ to $\tau_2$ for a short period (i.e., $\tau_1\lesssim\tau_2$). In this setting, \eqnref{transition amplitude} with \eqnref{response function} should be modified accordingly, and the instantaneous transition probability from $\tau_1$ to $\tau_2$ is given by \begin{equation}\label{P 1 to 2} \mathcal{P}_{\tau_1\rightarrow\tau_2}= c^2\sum_E\abs{\bra{E}m(0)\ket{E_0}}^2 \int_{\tau_1}^{\tau_2}d\tau' \int_{\tau_1-\tau_2}^{\tau_2-\tau_1} d(\Delta\tau)\, e^{-iE\Delta\tau} G_L^+(\Delta\tau;\tau'), \end{equation} where from \eqnref{GL+ a in z} we have \begin{equation}\label{GL+ tau'} G_L^+(\Delta\tau;\tau')= -\frac{1}{16\pi^2\alpha^2}\sum_{n=-\infty}^\infty \frac{1}{\sinh^2\left(\frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{2\alpha}\right) +n\left(\frac{L}{\alpha}\right)\sinh\frac{\Delta\tau+2\tau'}{2\alpha}\sinh\frac{\Delta\tau}{2\alpha} -\frac{n^2}{4}\left(\frac{L}{\alpha}\right)^2}. \end{equation} Note that the regularization parameter $\epsilon$ in \eqnref{GL+ tau'} is in fact superfluous, as the limits of the $\int d(\Delta\tau)$ integration in \eqnref{P 1 to 2} are now finite. If $\alpha\ll\tau_2-\tau_1$, all the summands in \eqnref{GL+ tau'} vanish for large $\Delta\tau$, and thus the integral $\int_{\tau_1-\tau_2}^{\tau_2-\tau_1}d(\Delta\tau)$ in \eqnref{P 1 to 2} can be replaced by $\int_{-\infty}^\infty d(\Delta\tau)$ provided $\epsilon$ is kept in \eqnref{GL+ tau'}; consequently, we can define the instantaneous transition rate from $\tau_1$ to $\tau_2$ as \begin{equation} \mathcal{R}_{\tau_1\lesssim\tau\lesssim\tau_2}= c^2\sum_E\abs{\bra{E}m(0)\ket{E_0}}^2 \int_{-\infty}^\infty d(\Delta\tau)\, e^{-iE\Delta\tau} G_L^+(\Delta\tau;\tau'), \quad \text{for}\ \alpha\ll\tau_2-\tau_1. \end{equation} Because $\ket{0_L}$ is invariant under arbitrary spacetime translations as well as Lorentz boosts in the $x,y$ directions, the fact that $G_L^+(\Delta\tau;\tau')$ depends on $\tau'$ is better understood as a dependence on the $z$-component of the \emph{instantaneous} 4-velocity $u^z(\tau'):=dz(\tau')/d\tau'=\sinh(\tau'/\alpha)$ for $\tau_1\lesssim\tau'\lesssim\tau_2$. This suggests that, \emph{within} an inertial reference frame moving with a constant velocity (as in the right of \figref{fig:twin paradox}), one can in principle discern the $z$-component of the frame's moving velocity (provided $L$ is already known) by instantaneously accelerating the Unruh-DeWitt detector in the $z$ direction and measuring the instantaneous transition probability. \section{Constant acceleration in noncompact directions}\label{sec:acceleration in noncompact directions} Consider that the detector moves with a constant acceleration $1/\alpha$ in a noncompact direction (say, the $x$ direction).
The trajectory is given by the world line: \begin{equation} t=\alpha\sinh\frac{\tau}{\alpha},\quad y=z=\text{const},\quad x=\alpha\cosh\frac{\tau}{\alpha}. \end{equation} Eq.~\eqnref{GL+ 2} with \eqnref{G+} now reads \begin{eqnarray}\label{GL+ a in x} &&G_L^+(x(\tau),x(\tau'))\nonumber\\ &=&-\frac{1}{4\pi^2}\sum_{n=-\infty}^\infty \frac{1}{\left(\alpha\sinh\frac{\tau}{\alpha}-\alpha\sinh\frac{\tau'}{\alpha}-i\epsilon\right)^2 -\left(\alpha\cosh\frac{\tau}{\alpha}-\alpha\cosh\frac{\tau'}{\alpha}\right)^2-n^2L^2} \nonumber\\ &=&-\frac{1}{16\pi^2\alpha^2}\sum_{n=-\infty}^\infty \frac{1}{\sinh^2\left(\frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{2\alpha}\right) -\frac{n^2}{4}\left(\frac{L}{\alpha}\right)^2}, \end{eqnarray} where a calculation similar to that shown in the \hyperref[app:details]{Appendix} has been repeated. Using the identity \begin{equation} \sum_{n=-\infty}^\infty \frac{1}{n^2-a^2} = -\frac{\pi \cot(\pi a)}{a} \end{equation} as a special case of \eqnref{identity b}, we can rewrite \eqnref{GL+ a in x} as \begin{equation}\label{GL+ a in x 2} G_L^+(\Delta\tau)= -\frac{1}{8\pi\alpha L}\frac{\cot\left(\frac{2\pi\alpha}{L} \sinh\left(\frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{2\alpha}\right)\right)} {\sinh\left(\frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{2\alpha}\right)}. \end{equation} Note that \eqnref{GL+ a in x 2} reduces to the Minkowski result \eqnref{G+ a in z} when $L\gg \alpha$. It is crucial to know whether the dependence on $L$ in $G_L^+(\Delta\tau)$ goes away when $G_L^+(\Delta\tau)$ is integrated in \eqnref{transition rate} to obtain $\mathcal{R}$. To find the answer, we compute the derivative of $\mathcal{R}$ with respect to $L$: \begin{eqnarray}\label{dL R} &&\partial_L\mathcal{R}(L)\nonumber\\ &=& c^2\sum_E\abs{\bra{E}m(0)\ket{E_0}}^2 \int_{-\infty}^\infty d(\Delta\tau) e^{-iE\Delta\tau} \partial_L G_L^+(\Delta\tau) \nonumber\\ &=& c^2\sum_E\abs{\bra{E}m(0)\ket{E_0}}^2 \int_{-\infty}^\infty d(\Delta\tau) e^{-iE\Delta\tau} \left( -\frac{1}{L}G_L^+(\Delta\tau) -\frac{\csc^2 \left(\frac{2\pi\alpha}{L}\sinh\left(\frac{\Delta\tau}{2\alpha} -\frac{i\epsilon}{2\alpha}\right)\right)}{4L^3} \right) \nonumber\\ &=& -\frac{\mathcal{R}(L)}{L} -\frac{c^2}{4L^3}\sum_E\abs{\bra{E}m(0)\ket{E_0}}^2 \int_{-\infty}^\infty d(\Delta\tau) e^{-iE\Delta\tau} \sum_{k=-\infty}^\infty \frac{1}{\left( \frac{2\pi\alpha}{L}\sinh\left(\frac{\Delta\tau}{2\alpha} -\frac{i\epsilon}{2\alpha}\right)-\pi k \right)^2},\qquad \end{eqnarray} where we have used the identity \eqnref{identity a}. Considering the \emph{formal} limit $\alpha\gg L$ with $L$ fixed, we have \begin{equation} \frac{2\pi\alpha}{L}\sinh\left(\frac{\Delta\tau}{2\alpha} -\frac{i\epsilon}{2\alpha}\right) -\pi k \approx \frac{\pi}{L}\left(\Delta\tau-i\epsilon\right) -\pi k. \end{equation} In this limit, the poles of the summands of $\sum_{k=-\infty}^\infty$ in \eqnref{dL R} are all on the upper-half plane, thus giving rise to zero when integrated over $\int_{-\infty}^\infty d(\Delta\tau) e^{-iE\Delta\tau}\cdots$, as the integral can be calculated along an infinite semicircle on the lower-half $\Delta\tau$ plane. Therefore, it follows from \eqnref{dL R} that $\partial_L\mathcal{R}(L)=-\mathcal{R}(L)/L\neq0$ in the formal limit $\alpha\gg L$, and thus we have rigorously shown that $\mathcal{R}$ cannot be independent of $L$. The fact that $G_L^+(x(\tau),x(\tau'))$ depends only on $\Delta\tau:=\tau-\tau'$ indicates that the accelerating detector is in equilibrium with the field $\phi$.
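The resummation leading to \eqnref{GL+ a in x 2} is easy to cross-check numerically; the following sketch (arbitrary parameter values, finite $\epsilon$, truncated image sum) compares the truncated sum in \eqnref{GL+ a in x} with the closed cotangent form.
\begin{verbatim}
# Illustrative sketch: truncated image sum vs. closed form for G_L^+ along
# the x-accelerated trajectory; the two should agree up to truncation error.
import numpy as np

def GL_sum(dtau, alpha, L, eps=1e-8, nmax=20000):
    n = np.arange(-nmax, nmax + 1)
    sh = np.sinh((dtau - 1j*eps)/(2*alpha))
    return -np.sum(1.0/(sh**2 - 0.25*(L/alpha)**2*n**2))/(16*np.pi**2*alpha**2)

def GL_closed(dtau, alpha, L, eps=1e-8):
    sh = np.sinh((dtau - 1j*eps)/(2*alpha))
    x = 2*np.pi*alpha/L*sh
    return -(1/np.tan(x))/sh/(8*np.pi*alpha*L)

print(GL_sum(0.7, 1.0, 2.0), GL_closed(0.7, 1.0, 2.0))   # nearly equal
\end{verbatim}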
The form of the transition rate $\mathcal{R}$, however, is very complicated, and it is unlikely that the equilibrium is thermal. The independence of $G_L^+(x(\tau),x(\tau'))$ from $\tau+\tau'$ also means that the response of the detector carries no information about the frame's moving velocity in the $x,y$ directions. Furthermore, as $\partial_L\mathcal{R}(L)\neq0$, one can in principle infer the length $L$ by measuring $\mathcal{R}$ in relation to $1/\alpha$. \section{Summary and discussion} If the Unruh-DeWitt detector moves with an arbitrary constant velocity, it registers no signals. In this sense, the \emph{local} Lorentz invariance is still preserved while the \emph{global} Lorentz invariance is violated in the flat spacetime with a compact dimension. This is in contrast to the case of the field in the polymer quantization as studied in \cite{Kajuri:2015oza}. If the detector moves with a constant acceleration in the compact ($z$) direction, it registers particles but is not in equilibrium with the field $\phi$. Within an inertial frame, by instantaneously accelerating the detector in the $z$ direction, the $z$-component of the frame's moving velocity can be inferred from the instantaneous transition probability (provided $L$ is known). If the detector moves with a constant acceleration in the noncompact ($x,y$) directions, it registers particles and is in equilibrium with the field $\phi$, but the equilibrium is very likely non-thermal. Within an inertial frame, by instantaneously accelerating the detector in the $x,y$ directions and measuring the transition rate in relation to the acceleration, one can infer the length $L$. The frame's moving velocity in the noncompact directions, however, remains unknowable. Therefore, even though we cannot discriminate between the Unruh-DeWitt detectors moving along different inertial world lines (see the left of \figref{fig:twin paradox}), the response of the Unruh-DeWitt detector can be used to infer the length $L$ and to discriminate between inertial reference frames with different velocities in the compact direction (see the right of \figref{fig:twin paradox}). However, whether we can definitely assert that inertial frames are discriminable by ``local'' experiments is a matter of interpretation. After all, the vacuum state $\ket{0_L}$ is a global concept, and an accelerating Unruh-DeWitt detector knows about $\ket{0_L}$ only if the walls of the moving inertial reference frame are transparent to the field $\phi$.\footnote{In the same regard, the experiment of measuring the deviation of the electrostatic field of a point charge (as studied in \cite{Bansal:2005ue}) is not to be viewed as completely local either, since the deviation relies on the fact that the electric field stretches out to the entire universe, and the local electric field is deformed anyway if the charge source is screened by the frame walls.} It should be noted that the Unruh-DeWitt detector in flat spacetime with a compact dimension as studied in this paper is equivalent to that in Minkowski space in the presence of two parallel Casimir plates which impose periodic boundary conditions, as in one of the cases studied in \cite{Davies:1989me}.
However, as the work of \cite{Davies:1989me} focused only on the response of stationary detectors with static boundaries, acceleration in the direction perpendicular to the Casimir plates (which is equivalent to acceleration in the compact direction in this paper) was not considered.\footnote{It is obvious that the response of the detector accelerating in the direction perpendicular to the Casimir plates is not stationary (i.e., not in equilibrium with the field) if Dirichlet or Neumann boundary conditions are imposed on the plates. However, whether it is stationary or not is not obvious if periodic boundary conditions are imposed.} In this paper, we obtain a rigorous formula for the response of the Unruh-DeWitt detector accelerating in the compact direction. By virtue of this close investigation, we arrive at the conclusion that the response is not in equilibrium with the field $\phi$ and, furthermore, that the frame's moving velocity can be inferred from it, thus bringing out the connection between responses of the Unruh-DeWitt detector and the nonequivalence of inertial frames. Finally, it should be stressed that measurement of responses of the Unruh-DeWitt detector is completely out of reach of current technology, let alone distinguishing the differences studied here. Nevertheless, it is conceptually important to understand the (non)equivalence of inertial reference frames in light of the responses of the Unruh-DeWitt detector. \begin{acknowledgments} The author would like to thank Tun-Hao Chan (NTNU), Chia-Hsun Chang (NTNU), and En Shih (NTU) for inspiring discussions and Thomas Roman (CCSU) for bringing the related references to his attention. This work was supported in part by the Ministry of Science and Technology of Taiwan under the Grant No.\ 101-2112-M-003-002-MY3. \end{acknowledgments} \newpage
\section{Introduction} Let $E$ denote an elliptic curve over $\mathbb{Q}$ without complex multiplication with conductor $N_E$. For a prime $p$ of good reduction, $E$ reduces to an elliptic curve over the finite field $\mathbb{F}_p$ and we denote by $a_p(E)$ the trace of the Frobenius automorphism acting on the points of $E$ over ${\overline {\mathbb{F}}}_p$. Then, $a_p(E) = p+1-\# E(\mathbb{F}_p)$, and the Hasse bound $|a_p(E) | \leq 2 \sqrt{p}$ holds. The last few decades have seen much research pertaining to the distribution of the sequence $a_p(E)/2\sqrt{p}$. Sato and Tate formulated a conjecture for the distribution of this sequence in the interval $[-1,1]$ around 1960, which was proven by Taylor, Clozel, Harris and Shepherd-Barron in \cite{Taylor2008, CHT2008, HSBT2010}. \begin{thm*}[Sato-Tate conjecture] Let $E$ be a non-CM elliptic curve over $\mathbb{Q}$ with conductor ${N}_E$. Let $\alpha, \beta \in \mathbb{R}$ with $-1 \leq \alpha \leq \beta \leq 1$. Then, as $x \rightarrow \infty$, $$\frac{1}{\pi(x)} \# \left\{ p \leq x, p\nmid {N}_E \;:\; \frac{a_p(E)}{2 \sqrt{p}} \in ( \alpha, \beta ) \right\} \sim \frac{2}{\pi} \int_\alpha^\beta \sqrt{1-t^2} ~dt .$$ \end{thm*} The measure $\mu_{ST}([\alpha, \beta]) := \frac{2}{\pi}\int_\alpha^{\beta}\sqrt{1-t^2}~dt$ is also known as the semicircle measure. The shape of the distribution clearly suggests fewer primes at the ends of the interval in comparison to those at the middle, so it is interesting to see if a precise statement can be made about the behaviour at the extremes. The objects of our study here are such primes, called {\it extremal primes}, i.e. primes $p$ satisfying $a_p(E) = \pm[2\sqrt{p}]$, where $[\cdot]$ denotes the greatest integer function. These were first studied by James {\it et al.} \cite{James2016} (see also \cite{JP2017}), who conjectured that, as $x\to \infty$, \begin{eqnarray}\nonumber && \hspace{-1cm} \# \left\{ p \leq x :p\nmid N_E, a_p(E) = \pm [ 2 \sqrt{p} ] \right\} \\ &\sim& \begin{cases} \displaystyle \frac{2}{3\pi} \; \frac{x^{3/4}}{\log{x}} & \mbox{if $E$ has complex multiplication (CM);} \vspace{5pt}\\ \label{conj-EP-CM} \displaystyle \frac{8}{3\pi} \; \frac{x^{1/4}}{\log{x}} & \mbox{if $E$ does not have CM.} \end{cases} \end{eqnarray} The number of extremal primes is smaller in the case of non-CM curves since the measure of the interval at the ends of the Sato-Tate semicircle distribution is smaller. On the other hand, for CM curves there is an excess of such primes since the measure for the distribution of $a_p(E)/(2 \sqrt{p})$ in $[\alpha, \beta] \subseteq [-1,1] \setminus \{0\}$ is given by $\frac{1}{2 \pi} \int_\alpha^\beta \frac{dt}{\sqrt{1-t^2}}.$ This is reminiscent of the Lang-Trotter conjecture, where the trace is a fixed integer rather than a function of the prime. More precisely, \begin{conjecture} [Lang-Trotter conjecture]\label{conj-EP} \[\# \left\{ p \leq x : p\nmid N_E , \, a_p(E) = t \right\} \sim C_{E,t} \frac{x^{1/2}}{\log{x}}\] as $x\to \infty$, where $t\in \mathbb{Z}$ is fixed and $C_{E,t}$ is a constant depending on the curve $E$ and the integer $t$. \end{conjecture} This conjecture is far from being proved and thus upper bounds remain of great interest. The asymptotics in \eqref{conj-EP-CM} for extremal primes for CM curves were proven by James and Pollack \cite{JP2017}. In a subsequent paper, Agwu, Harris, James, Kannan and Li \cite{AHJKL} studied the asymptotic behavior of primes for which $a_p(E)$ falls within a small range of the end of the Hasse interval.
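To get a concrete feel for how sparse extremal primes are, the following minimal Python sketch (ours, purely illustrative and not part of any of the cited works; the sample curve $y^2=x^3+x+1$ and the cutoff $X$ are arbitrary choices) counts them by brute-force point counting over $\mathbb{F}_p$, using $\#E(\mathbb{F}_p)=p+1+\sum_{x \bmod p}\big(\tfrac{x^3+x+1}{p}\big)$ for odd primes of good reduction:
\begin{verbatim}
# Illustrative sketch: count extremal primes a_p(E) = +/- [2*sqrt(p)]
# for the sample non-CM curve y^2 = x^3 + x + 1 (4A^3 + 27B^2 = 31).
from math import isqrt

A, B = 1, 1  # sample curve; any non-CM curve would serve equally well

def a_p(p):
    # a_p = -sum_x legendre(x^3 + A*x + B, p), for odd p of good reduction
    squares = {x * x % p for x in range(p // 2 + 1)}  # quadratic residues
    s = 0
    for x in range(p):
        v = (x * x * x + A * x + B) % p
        if v:
            s += 1 if v in squares else -1
    return -s

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

X = 10**4
extremal = [p for p in primes_up_to(X)
            if p > 2 and (4 * A**3 + 27 * B**2) % p
            and abs(a_p(p)) == isqrt(4 * p)]
print(len(extremal), extremal)  # roughly x^(1/4)/log x of them: very few
\end{verbatim}
Here $[2\sqrt{p}]=\lfloor\sqrt{4p}\rfloor$, so the extremality test is the exact integer comparison $|a_p|=\mathrm{isqrt}(4p)$ and no floating point is involved.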
In the case of non-CM curves, the asymptotic was proven on average in the Ph.D. thesis of Giberson \cite{Giberson, GJ}. In contrast, the asymptotics \eqref{conj-EP-CM} for a fixed non-CM curve seem to be out of reach with current techniques. In \cite{Win4}, along with David, Gafni, and Turnage-Butterbaugh, the authors established the following upper bounds for a single curve $E/\mathbb{Q}$. \begin{thm*}[\cite{Win4}]\label{Win4result} Let $E/\mathbb{Q}$ be a non-CM elliptic curve. Assume that for $n\geq 0$, the $L$-functions $L(s, \mathrm{Sym}^n(E))$ have analytic continuation to the entire complex plane (except for a simple pole at $s = 1$ when $n = 0$), satisfy the expected functional equation and the Generalized Riemann Hypothesis (GRH). Then \begin{equation}\label{Win4bound} \# \{ x < p \leq 2x : p\nmid N_E,\; a_p(E) = [2\sqrt{p}] \} \ll_E {x^{1/2}}. \end{equation} \end{thm*} \begin{remark} Due to the recent breakthrough of Newton and Thorne \cite{newton2020symmetric}, the only unproven hypothesis in the above result is GRH for the symmetric power $L$-functions. \end{remark} More recently, it follows from the work of Gafni, Thorner and Wong \cite{gafni2020applications} that the bound in \eqref{Win4bound} can be made $x(\log \log x)^2/(\log x)^2$ unconditionally. In this article, we follow a different approach to count extremal primes. Inspired by the work of Serre \cite{Serre-Inventiones} and Murty, Murty and Saradha \cite{MMS} on bounds for the Lang-Trotter conjecture, we investigate the number of primes up to $x$ satisfying the extremality condition modulo a large prime $\ell$. The distribution of $a_p(E)$ modulo $\ell$ is known by the Chebotarev Density Theorem applied to the Galois extensions $\mathbb{Q}(E[\ell]) / \mathbb{Q}$, where $\mathbb{Q}(E[\ell])$ is the field obtained by adjoining the coordinates of the $\ell$-torsion points of $E$ to $\mathbb{Q}$. A novelty in our application is the observation that studying primes $p$ at the end of the Sato-Tate distribution leads to studying the distribution of the fractional part of $2 \sqrt{p}$. Balog \cite{Balog-1} showed that the latter distribution is uniform. For our purpose, we require the joint distribution of rational primes satisfying a Chebotarev condition in the extensions $\mathbb{Q}(E[\ell]) / \mathbb{Q}$ with this fractional part lying in a certain interval. Most of the paper is devoted to proving this result for arbitrary finite Galois extensions over $\mathbb{Q}$, so as to be of independent interest. We also give a version of this joint distribution in the case when the fractional part satisfies $\{\sqrt{p}\}<p^{-\lambda}$ for $\lambda<1/4$. Note that for $\lambda=1/2$, this concerns the infinitude of primes $p$ of the form $n^2+1$ for some $n \in \mathbb{N}$, one of Landau's four problems, which remains wide open. Before stating our main results, we fix some notation. Throughout the paper, $p$ and $\ell$ denote primes, $\pi(x)$ denotes the number of primes up to $x$, and $\{y\}, [y]$ denote the fractional part and the integer part of $y$, respectively. For a finite Galois extension $L/\mathbb{Q}$ and $C$ a union of conjugacy classes in $\mbox{Gal}(L/\mathbb{Q})$, define \begin{align*} \pi_C(x,L) &:= \pi_C(x,L/\mathbb{Q}):= \#\{ x<p \le 2x: p \text{ unramified in } L, \sigma_p \in C\} \end{align*} where $\sigma_p$ is the conjugacy class of the Frobenius automorphism associated with any prime lying above $p$. To simplify exposition, henceforth when we write $\sigma_p \in C$ we assume that $p$ is unramified in $L$ unless mentioned otherwise.
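As a quick empirical illustration of the equidistribution input (this toy experiment is ours; the cutoff $X=10^6$ and the number of bins are arbitrary choices, and the Chebotarev condition, which is the whole point of the joint distribution proved below, is ignored here), one can bin the fractional parts $\{2\sqrt{p}\}$ over primes $p\le X$ and observe nearly uniform bin counts, in line with Balog's theorem:
\begin{verbatim}
# Toy check of Balog-type equidistribution: the fractional parts
# {2*sqrt(p)} over primes p <= X roughly fill [0,1) uniformly.
from math import isqrt, sqrt

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [p for p in range(2, n + 1) if sieve[p]]

X, BINS = 10**6, 10
primes = primes_up_to(X)
counts = [0] * BINS
for p in primes:
    counts[int((2.0 * sqrt(p)) % 1.0 * BINS)] += 1

expected = len(primes) / BINS
print([round(c / expected, 3) for c in counts])  # ratios close to 1
\end{verbatim}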
We now state our main results. \begin{thm}\label{Balog+CDT} Consider a finite Galois extension $L/\mathbb{Q}$ with $n_L=[L:\mathbb{Q}]$. Let $\alpha>0$, $\omega\geq 1$ and $0\le\delta_1<\delta_2\le 1$ with $\delta:=\delta_2-\delta_1$. Let $\theta \in [0,1]$ be fixed. Assume that GRH holds for the Dedekind zeta function $\zeta_L(s)$ and that the parameters $n_L, \alpha, \omega$ and $\delta$ satisfy \[ \alpha^{1/4}(\omega n_L/\delta)^{1/2}(\log x)^2 \ll_\theta x^{(1-\theta)/4}.\] Then the following holds uniformly in $\delta, \alpha$ and $\omega$: \begin{align*} &\nonumber\#\{ x<p\leq 2x : \, \delta_1 \leq \{\alpha p^{\theta}\}<\delta_2 \text{ and } \sigma_p \in C \} - \delta \pi_C(x,L) \\ \nonumber &\quad \ll_\theta \frac{|C|}{|G|}n_L \log x \left(\frac{(\delta \omega)^{1/2}\alpha^{1/4}}{n_L^{1/2}}x^{(3+\theta)/4} + \frac{\delta \omega}{\alpha^{1/2}}x^{1-\theta/2} \log x \right)\\ &\quad +(\delta n_L \omega)^{1/2}\alpha^{1/4} x^{(1+\theta)/4}\log x + \frac{|C|}{|G|}\frac{\delta x}{\omega \log x}. \end{align*} \end{thm} \begin{thm}\label{main} Let $E/\mathbb{Q}$ be a non-CM elliptic curve. Assume that GRH holds for $\zeta_{\mathbb{Q}(E[\ell])}(s)$, where $\ell$ is a large prime. Then, for $\ell \ll x^{1/18}\omega^{-2/9}\log ^{-8/9}x $, $\omega\ge1$, \begin{align*} &\# \left\{ x <p \leq 2x: p\nmid N_E,\; a_p(E) \equiv [2 \sqrt{p}]\bmod \ell \right\} = \frac{\pi(x)}{\ell}\\ &\quad +\operatorname{O}_E\left(\frac{x}{\omega \ell \log x} +\omega^{1/2}\ell^{5/4}x^{7/8}\log x + \omega \ell^{7/2} x^{3/4}(\log x)^{2}\right). \end{align*} \end{thm} Making the simple observation that \begin{align*} &\# \left\{ x< p \leq 2x: p\nmid N_E,a_p(E) = [2 \sqrt{p}] \right\} \\ & \quad \leq \# \left\{ x< p \leq 2x:p\nmid N_E, a_p(E) \equiv [2 \sqrt{p}] \bmod \ell \right\} \end{align*} and setting $\omega=1$ and $\displaystyle{\ell = x^{1/18}\log ^{-8/9} x}$ in Theorem \ref{main}, one obtains the following upper bound for the number of extremal primes up to $x$. \begin{corollary}\label{ExtremalPrimesBound} Assume GRH as in Theorem \ref{main}. For a non-CM elliptic curve $E/\mathbb{Q}$, and large $x$, \[\#\left\{ x< p \leq 2x : p\nmid N_E, \; a_p(E) = [2 \sqrt{p}] \right\} \ll_E x^{17/18}(\log x)^{-1/9}. \] \end{corollary} \begin{remark} Similar results can be obtained for extremal primes with $a_p(E) = - [2 \sqrt{p}]$ using essentially the same arguments as presented here. Moreover, because of the generality in Theorem \ref{Balog+CDT}, one can write more general versions of Theorem \ref{main} and Corollary \ref{ExtremalPrimesBound} where $\sqrt{p}$ is replaced by $p^\theta$ for $\theta\in [0,1]$. \end{remark} As in the case of upper bounds for the Lang-Trotter conjecture, we obtain better estimates for the joint distribution in the particular case $a_p(E) \equiv 0 \bmod \ell$. To be precise, the following holds. \begin{corollary}\label{a=0Cor} Let $E$ be a non-CM elliptic curve over $\mathbb{Q}$. Assume that GRH holds for $\zeta_{\mathbb{Q}(E[\ell])}(s)$ where $\ell$ is a large prime with $\ell \ll x^{1/14}(\log x)^{-8/7}\omega^{-2/7}$. Then \begin{align*} &\# \left\{ x <p \leq 2x: p\nmid N_E, \; a_p(E) \equiv [2 \sqrt{p}]\equiv 0 \bmod \ell\right\} = \frac{\pi(x)}{\ell^2 } \\ &\quad + \operatorname{O}\left(\frac{x}{\ell^2 \omega \log x}+ \frac{\omega^{1/2} x^{7/8}\log x}{\ell^{1/4}}+\omega \ell^{3/2} x^{3/4}(\log x)^{2}\right).
\end{align*} \end{corollary} In fact, using the same ideas as in Theorem \ref{Balog+CDT}, one can prove a more general result, stated below, where the bound on the fractional part of the prime is itself a function of the prime. \begin{thm}\label{Gen-Landau+CDT} Let $\alpha, \lambda>0$ and $\theta \in [0,1]$ be fixed. For a finite Galois extension $L/K$, assume GRH for $\zeta_L(s)$. Then \begin{align*} &\#\{ x<p\leq 2x : \{\alpha p^{\theta}\}<p^{-\lambda} \text{ and } \sigma_p \in C \} - \sum_{\substack{x<p\le 2x \\ \sigma_p \in C}} p^{-\lambda}\\ &\ll \frac{|C|}{|G|} \frac{x^{1-\lambda}}{\omega \log x} + \omega \alpha^{1/2} \, x^{\theta/2} \log d_L \log^3 x \\ &\quad + \frac{|C|}{|G|} \alpha^{1/2} \, x^{(1+\theta)/2} \log^3 x \left(\log d_L + n_L \log x\right) \left( \omega^2 +\frac{ \omega x^{1/2 -\theta-\lambda}}{\alpha^{1/2}}\right), \end{align*} where $\omega \ge 1$ is a parameter at our disposal. \end{thm} Setting $\theta=1/2, \alpha=1$ and $\lambda =1/4-\epsilon$ for any $\epsilon>0$, we obtain the following asymptotic result, in the spirit of Landau's prime counting problem, where the primes satisfy $\{\sqrt{p}\} <p^{-1/2}$, i.e. $p-1$ is a perfect square. \begin{corollary}\label{Landau+CDT} For a finite Galois extension $L/\mathbb{Q}$, assume that GRH holds for $\zeta_L(s)$. Then, for a fixed $\epsilon>0$, \begin{align*} & \#\{ x<p\leq 2x : \{\sqrt{p}\}<p^{-1/4+\epsilon} \text{ and } \sigma_p \in C \} - \sum_{\substack{x<p\le 2x\\ \sigma_p \in C}} p^{-1/4+\epsilon}\\ &\ll \frac{|C|}{|G|} \frac{x^{3/4+\epsilon}}{\omega \log x} + \omega x^{1/4} \log d_L \log^3 x + \frac{|C|}{|G|} \omega^2 x^{3/4} \log^3 x \left(\log d_L + n_L \log x\right). \end{align*} \end{corollary} Setting $\alpha =1, \theta=1/2$, and $ \delta_1 =0$ in Theorem \ref{Balog+CDT}, one obtains the following (conditional) generalization of \cite{Balog-2}, which investigates the distribution of the fractional parts of $p^{\theta}$ with $p\equiv a \bmod q$. \begin{corollary} Assume the notations and hypotheses as in Theorem \ref{Balog+CDT}. Then \begin{align*} &\#\{ x< p\leq 2x : \{\sqrt{p}\}< \delta \text{ and } \sigma_p \in C\} - \delta \pi_C(x,L)\\ &\ll \frac{|C|}{|G|}\left( (\delta \omega n_L)^{1/2}x^{7/8}\log x + n_L\delta\omega x^{3/4}(\log x)^{2} + \frac{\delta x}{\omega\log x}\right) . \end{align*} \end{corollary} Theorem \ref{Balog+CDT} is also of a similar flavour to some other interesting joint distribution theorems, such as the asymptotics for the number of Carmichael numbers composed of primes satisfying a Chebotarev condition studied in \cite{Carmichael+CDT}. In \cite{Piatetski-Shapiro+CDT}, an asymptotic estimate for the number of Piatetski-Shapiro primes up to $x$ satisfying a Chebotarev condition is derived. \begin{remark} One can also write versions of Theorems \ref{Balog+CDT} and \ref{Gen-Landau+CDT} for general extensions $L/K$ where $K$ is a finite extension of $\mathbb{Q}$ and $L$ is a normal extension of $K$ with Galois group $G=G(L/K)$. For a fixed conjugacy class $C$ of $G$, here one would count primes $\mathfrak{p}$ of $K$ of norm $N_{K/\mathbb{Q}} \mathfrak{p} \le x $ which are unramified in $L$, such that the Artin symbol satisfies $[\frac{L/K}{\mathfrak{p}}]=C$ and simultaneously the fractional part $\{\alpha (N_{K/\mathbb{Q}}\mathfrak{p})^\theta\}$ lies in the interval $[\delta_1,\delta_2)$ of length $\delta$. The main term would then be $\delta \pi_C(x,L/K)$ and the error terms can be shown to be as in Theorems \ref{Balog+CDT} and \ref{Gen-Landau+CDT}.
\end{remark} The structure of this paper is as follows. We begin Section \ref{CDTsection} with some preliminaries and then prove Theorem \ref{main} assuming Theorem \ref{Balog+CDT}. In Section \ref{section-Balog}, we first prove Theorem \ref{Balog+CDT}, adapting the ideas of Balog \cite{Balog-1} and Lagarias-Odlyzko \cite{L-O}. The proofs of Theorem \ref{Gen-Landau+CDT} and Corollary \ref{a=0Cor} are also presented in the same section. Lastly, in Section \ref{Proof of F estimate}, we provide details of the results needed in the proof of Theorem \ref{Balog+CDT}. \vspace{0.2cm} \noindent{\bf Acknowledgements:} The authors are grateful to Chantal David for suggesting the problem and for many fruitful discussions related to the paper. The first author is supported by NSF Grant DMS-1854398. \section{Preliminaries and proof of Theorem \ref{main}}\label{CDTsection} \vspace{0.2cm} Let $N_E$ denote the conductor of the elliptic curve $E$ (without complex multiplication) over $\mathbb{Q}$. For a given prime $\ell$, let $E[\ell]$ denote the $\ell$-torsion subgroup of $E(\bar{\mathbb{Q}})$. It is known that the Galois representation \begin{equation*} \rho_{E,\ell}: \mbox{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \to \mbox{Aut}_{\mathbb{F}_\ell}(E[\ell]) \cong \mbox{GL}_2(\mathbb{F}_\ell) \end{equation*} is unramified at all primes $p \nmid N_E\ell$. The field $\mathbb{Q}(E[\ell])$, obtained by adjoining the coordinates of all the $\ell$-torsion points of $E$ to $\mathbb{Q}$, is the fixed field in $\bar{\mathbb{Q}}$ of $\ker\rho_{E,\ell}$. Serre \cite{Serre-Inventiones} showed that for all but finitely many primes $\ell$, the representation $\rho_{E,\ell}$ is surjective. Thus, using $\rho_{E,\ell}$, we see that for all but finitely many primes $\ell$, the Galois group $G_\ell := \mbox{Gal}(\mathbb{Q}(E[\ell])/\mathbb{Q})$ is equal to $\mbox{GL}_2(\mathbb{F}_\ell)$. Moreover, the characteristic polynomial of $\rho_{E,\ell}(\sigma_p)$ is given by $$x^2 -a_p(E) x + p \pmod{\ell}.$$ That is, $a_p(E)$ is the trace of the Frobenius automorphism at $p$. Therefore, for $a \in \mathbb{F}_\ell$, if $C_\ell(a)$ denotes the union of conjugacy classes in $\mbox{GL}_2(\mathbb{F}_\ell)$ of elements of trace $a$ modulo $ \ell$, then $$a_p(E) \equiv a \bmod \ell \iff \sigma_p\in C_\ell(a).$$ We now review the structure of the conjugacy classes in $\mbox{GL}_2(\mathbb{F}_\ell)$ for an odd prime $\ell$. This is well known (see, for example, \cite[Section 5.2]{Fulton-Harris}) and is listed below for the convenience of the reader. \vspace{0.2cm} \begin{center} \begin{tabular}{| m{8cm} | m{2.8cm}| m{2.3cm} |} \hline ~~Class representative~~ &{No.
of classes}~~& {Size of class} \\ \hline $\begin{pmatrix} \lambda &0\\ 0&\lambda \end{pmatrix}$,\, $\lambda\in \mathbb{F}_\ell^{\times}$& $\ell-1$ & $1$ \\ \hline $\begin{pmatrix} \lambda & 1\\ 0&\lambda \end{pmatrix}$, \, $\lambda \in \mathbb{F}_\ell^{\times}$ & $\ell-1$ & $\ell^2-1$\\ \hline $\begin{pmatrix} \lambda_1 &0\\ 0&\lambda_2 \end{pmatrix}$, \, $\lambda_1\neq\lambda_2$, $\lambda_1,\lambda_2 \in \mathbb{F}_\ell^{\times}$ & $\frac{1}{2}(\ell-1)(\ell-2)$ & $\ell(\ell+1)$\\ \hline $\begin{pmatrix} \alpha & D\beta \\ \beta&\alpha \end{pmatrix}$, $\alpha,\beta \in \mathbb{F}_\ell, \beta\neq 0$, $D$ is a fixed non-square in $\mathbb{F}_\ell^{\times}$ & $\frac{\ell}{2}(\ell-1)$ & $\ell(\ell-1)$ \\ \hline \end{tabular} \end{center} \vspace{.1in} Therefore, \begin{eqnarray} \label{C/G size} \frac{|C_\ell(a)|}{|G_\ell|} &=& \begin{cases} \displaystyle \frac{\ell^2-\ell-1}{(\ell-1)^2(\ell+1)} & \mbox{for $a \neq 0$} \vspace{5pt}\\ \displaystyle \frac{\ell}{(\ell-1)(\ell+1)} & \mbox{for $a = 0$} \end{cases} \\ \nonumber &=& \frac{1}{\ell} + \operatorname{O} \left( \frac{1}{\ell^2} \right). \end{eqnarray} Moreover, for $a\neq 0$, $C_\ell(a)$ is a union of $\ell$ conjugacy classes, while $C_\ell(0)$ is a union of $\ell-1$ conjugacy classes. We use the following effective Chebotarev density theorem of Lagarias and Odlyzko \cite{L-O}, which is sufficient for our purposes. \begin{thm*}[\cite{L-O}] Let $L/K$ be a finite Galois extension of number fields with Galois group $G$ and let $C$ be a conjugacy class in $G$. There exists an effectively computable positive absolute constant $c_1$ such that if $\zeta_L(s)$ satisfies the GRH, then for every $x\geq 2$ \begin{equation}\label{LO-CDT} \left|\pi_C(x,L/K)-\frac{|C|}{|G|}\pi(x)\right| \leq c_1 \frac{|C|}{|G|}x^{1/2}\log \left(d_L x^{n_L}\right) + \log d_L. \end{equation} \end{thm*} We now prove Theorem \ref{main}. \begin{proof} For each residue $a \in \left\{ 0, 1, \ldots, \ell-1 \right\}$, we have $$[2 \sqrt{p}] \equiv a \bmod \ell \iff \left\{\frac{2 \sqrt{p} }{\ell}\right\} \in \left[ \frac{a}{\ell}, \frac{a+1}{\ell} \right)$$ and $$ a_p(E) \equiv a \bmod \ell \iff \sigma_p \in C_\ell(a) \subseteq G_\ell.$$ Therefore, by Theorem \ref{Balog+CDT} with $\theta=1/2, \alpha =2/\ell,$ and $\delta = 1/\ell$, we conclude \begin{align*} \# &\left\{ x< p \leq 2x \;:\; a_p(E) \equiv [2 \sqrt{p}]\equiv a \bmod \ell \right\} = \frac{1}{\ell}\,\pi_{C_\ell(a)}\left(x, \mathbb{Q}(E[\ell])\right) \\ &\quad + \operatorname{O}\left(\omega^{1/2} \ell^{1/4} x^{7/8}\log x + \frac{x}{\omega \ell^2 \log x}\right). \end{align*} This gives us \begin{align*} \# &\left\{ x< p \leq 2x \;:\; a_p(E) \equiv [2 \sqrt{p}] \bmod \ell \right\} \\ & = \sum_{a \bmod \ell} \# \left\{ x< p \leq 2x \;:\; \sigma_p \in C_\ell(a)\;\text{and}\; \left\{\frac{2 \sqrt{p} }{\ell}\right\} \in \left[ \frac{a}{\ell}, \frac{a+1}{\ell} \right) \right\} \\ &= \sum_{a \bmod \ell} \frac{1}{\ell}\ \pi_{C_\ell(a)}\left(x, \mathbb{Q}(E[\ell])\right) + \operatorname{O}\left(\omega^{1/2}\ell^{5/4}x^{7/8} \log x+ \frac{x}{\omega\ell \log x} \right)\\ &= \frac{\pi(x)}{\ell} + \operatorname{O}\left( \omega^{1/2} \ell^{5/4} x^{7/8}\log x + \frac{ x}{\omega \ell \log x}\right), \end{align*} where we have used \eqref{C/G size} and \eqref{LO-CDT} to compute $\pi_{C_\ell(a)}\left(x, \mathbb{Q}(E[\ell])\right)$.
\end{proof} \section{The joint distribution theorem and some applications\label{section-Balog}} \vspace{0.2cm} \noindent{\it Remark.} In what follows, we may assume that the parameters $\alpha$ and $\theta$ satisfy $\alpha x^\theta \geq 1$ since for $\alpha x^\theta<1$, $\alpha x^\theta=\{\alpha x^\theta\} \in [\delta_1, \delta_2)$, and therefore the desired quantity can be computed using the Prime Number Theorem. \subsection{Proof of Theorem \ref{Balog+CDT}} \begin{proof} To start with, we write the fractional part condition as follows: \begin{eqnarray*} [\alpha p^\theta - \delta_1] - [\alpha p^\theta - \delta_2] &=& \begin{cases} 1 & \mbox{if $\delta_1 \leq \{ \alpha p^\theta \} < \delta_2$;} \\ 0 & \mbox{otherwise.} \end{cases} \end{eqnarray*} First, we obtain the result when $x_j<p\le x_{j+1}$ for $x_j:= [x(1+j/B)]+1/2$ for $j=0,\ldots, B$ and $B=[\omega]$. Summing over $j$ then establishes the result for $x<p\le 2x$. With $\delta = \delta_2 - \delta_1$, the length of the interval, we set \begin{eqnarray*} U_{-} &:=& \frac{\alpha x_j^\theta }{\delta}, \qquad U_{+} := \frac{\alpha x_{j+1}^\theta }{\delta}. \end{eqnarray*} Then for $x_j< p \le x_{j+1}$, we have \[\alpha p^{\theta} \left(1- \frac{1}{U_{-}} \right) - \delta_1 \le \alpha p^{\theta} - \delta_2 \leq \alpha p^{\theta} \left(1- \frac{1}{U_{+}} \right) - \delta_1.\] Note that we are interested in the sum \[S:=\sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}} [\alpha p^{\theta} - \delta_1 ] - [\alpha p^{\theta} - \delta_2]\] in order to bound the number of primes $x_j <p\le x_{j+1}$ such that $\sigma_p\in C$ and $\delta_1 \le \{\alpha p^\theta\} < \delta_2.$ This implies \begin{eqnarray*} S &\geq& \sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}} \left[\alpha p^\theta - \delta_1 \right] - \left[\alpha p^\theta \left(1-\frac{1}{U_{+}} \right) - \delta_1 \right] \end{eqnarray*} and \begin{eqnarray*} S &\leq& \sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}} \left[\alpha p^\theta - \delta_1 \right] - \left[\alpha p^\theta \left(1-\frac{1}{U_{-}} \right) - \delta_1 \right]. \end{eqnarray*} Therefore, using \[ \frac{\alpha p^\theta}{U_{\pm}}=\delta+\operatorname{O}\left(\frac{\delta}{\omega}\right),\] in order to obtain the claimed asymptotics for $S$, it suffices to prove \begin{align}\label{Suff-cond} &\nonumber\sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}} \left[\alpha p^\theta - \delta_1 \right] - \left[\alpha p^\theta \left(1-\frac{1}{U_{\pm}} \right) - \delta_1 \right]-\frac{\alpha p ^\theta}{U_{\pm}} \\ &\nonumber \ll \frac{|C|}{|G|}\log x \left( (\delta n_L/\omega)^{1/2}\alpha^{1/4}x^{(3+\theta)/4} + \frac{\delta n_L}{\alpha^{1/2}}x^{1-\theta/2}\log x\right)\\ & \quad + (\delta n_L/\omega)^{1/2}\alpha^{1/4}x^{(1+\theta)/4}\log x. \end{align} We may write the summand \begin{align}\label{sum-of-1's} \left[\alpha p^\theta - \delta_1 \right] - \left[ \alpha p^\theta \left(1-\frac{1}{U_{\pm}} \right) - \delta_1 \right] & = \sum_{m \le\alpha p^\theta-\delta_1}1\ - \sum_{m \leq \alpha p^\theta \left(1-\frac{1}{U_{\pm}} \right)-\delta_1 } 1. \end{align} Define the sequence $(a_m)_{m\in \mathbb{N}}$ \begin{equation*} a_m:= \begin{cases} 1 &\text{ if } \frac{1}{3} \alpha x^\theta-\delta_1 < m \le 3 \alpha x^\theta-\delta_1\\ 0 &\text{ otherwise}. 
\end{cases} \end{equation*} Set $$A_{\delta_1}(M) := \sum_{m \leq M-\delta_1} a_m = \sum_{m\geq 1}a_m \ f\left( \frac{m+\delta_1}{M}\right)$$ where \begin{equation*} f(y)= \begin{cases} 1 &\text{ if } 0<y<1\\ 1/2 & \text{ if } y=1\\ 0 &\text{ if } y>1. \end{cases} \end{equation*} Using the inverse Mellin transform, for $\sigma >0$, we write $$f(y) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+ i\infty} \frac{y^{-s}}{s} ~ds.$$ For $y= (m+\delta_1)/M$, this gives \begin{equation}\label{shifted perron untruncated} A_{\delta_1}(M) = \frac{1}{2\pi i}\int_{\sigma-i\infty}^{\sigma+ i\infty} \sum_{m\ge 1} \frac{a_m}{(m+\delta_1)^s}\frac{M^s}{s}~ds. \end{equation} Thus, using the definition of the sequence $(a_m)$, we rewrite \eqref{sum-of-1's} as \begin{align*} \left[\alpha p^\theta - \delta_1 \right] - \left[ \alpha p^\theta \left(1-\frac{1}{U_{\pm}} \right) - \delta_1 \right] &=\sum_{m \le \alpha p^\theta -\delta_1 }a_m\ - \sum_{m \le (\alpha p^\theta) \left(1-\frac{1}{U_{\pm}} \right)-\delta_1 }a_m\\ &= A_{\delta_1}(\alpha p^{\theta}) - A_{\delta_1}\left(\alpha p^{\theta}\left(1-\frac{1}{U_{\pm}} \right) \right). \end{align*} In order to estimate the integrals $A_{\delta_1}(\alpha p^{\theta})$ and $A_{\delta_1}\left(\alpha p^{\theta}\left(1-\frac{1}{U_{\pm}} \right) \right)$, given by \eqref{shifted perron untruncated}, we make use of the truncated version of Perron's formula \cite[Lemma 3.12]{Titchmarsh}, and obtain \begin{align*} &\nonumber\left[\alpha p^\theta - \delta_1 \right] - \left[ (\alpha p^\theta) \left(1-\frac{1}{U_{\pm}} \right) - \delta_1 \right] = \frac{1}{2\pi i} \int_{1/2-i T_1}^{1/2+i T_1} L(s) H(s) \ p^{\theta s} ~ds \\ & \quad+\operatorname{O}\Bigg(\sum_{\frac13 \alpha x^\theta < m+\delta_1 \le 3 \alpha x^\theta}\! \min\left\{1,T_1^{-1}\left|\log\frac{\alpha p^\theta}{m+\delta_1}\right|^{-1}\right\}\Bigg) \\ & \quad+\operatorname{O}\Bigg(\sum_{\frac13 \alpha x^\theta < m+\delta_1 \le 3 \alpha x^\theta}\!\min\left\{1,T_1^{-1}\left|\log\frac{\alpha p^\theta}{m+\delta_1} \left(1-\frac{1}{U_{\pm}}\right)\right|^{-1} \right\}\Bigg)\\ &\label{PerronError}= \frac{1}{2\pi i} \int_{1/2-i T_1}^{1/2+i T_1} L(s) H(s) p^{\theta s} ~ds +\operatorname{O}\Bigg(\frac{\alpha x^{\theta} }{T_1}\log(\alpha x^{\theta})\Bigg), \end{align*} where \[H(s) := \frac{1}{s}\left(1-\left(1-\frac{1}{U_{\pm}}\right)^s\right) \ll \frac{1}{U_{\pm}}\] and \[L(s) := \alpha^s \sum_{\frac13 \alpha x^\theta - \delta_1 < m \le 3 \alpha x^\theta - \delta_1} \frac{1}{ (m+\delta_1)^{s}}.\] Summing over the primes, we find \begin{align} &\nonumber\sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}} \Bigg(\left[\alpha p^\theta\ - \delta_1 \right] - \left[\alpha p^\theta \left(1-\frac{1}{U_{\pm}} \right) -\delta_1 \right]\Bigg) \\ &\hspace{.25in}\label{Perron1-error} = \frac{1}{2\pi i} \int_{1/2-iT_1}^{1/2+iT_1} L(s) H(s) F(-\theta s)~ds + \operatorname{O}\left( \frac{\alpha x^{\theta}}{T_1} \log(\alpha x^{\theta})\pi_C(x_j,L) \right) \end{align} where \begin{eqnarray*} \label{def-F} F(s) := \sum_{\substack{x_j<p\le x_{j+1} \\ \sigma_p \in C}} p^{- s} \qquad \text { and } \qquad \pi_C(x_j,L) := \sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}}1. \end{eqnarray*} First, we compute the above integral in the smaller range, up to $T_0 :=\alpha x^\theta$.
Observe that in the range $|t|\leq T_0$, we have \begin{eqnarray*} H(s) =\frac{1}{U_{\pm}} + \operatorname{O} \left(\frac{|s-1|}{U_{\pm}^2} \right) \label{asymp-H} \end{eqnarray*} and \begin{equation*} \label{boundL} L(s) = \alpha^s \frac{(3 \alpha x^\theta)^{1-s} - (\alpha x^\theta/3)^{1-s}}{1-s} + \operatorname{O}\left( x^{-\theta\Re(s)} \right). \end{equation*} Therefore, for the integral in \eqref{Perron1-error} in the range $|t|\leq T_0$, we have \begin{align} & \nonumber \frac{1}{2\pi i} \int_{1/2-iT_0}^{1/2+iT_0} L(s) H(s)F(-\theta s)\,ds \\ \label{integral} &= \frac{1}{U_{\pm}} \sum_{\substack{x_j < p \le x_{j+1} \\ \sigma_p \in C}} \alpha p^{\theta} \; \frac{1}{2\pi i} \int_{1/2-iT_0}^{1/2+iT_0} (\alpha p^{\theta})^{s-1} \frac{(3 \alpha x^\theta)^{1-s} - (\alpha x^\theta/3)^{1-s}}{1-s}~ds \\ &\label{ET-integral} \quad + \operatorname{O}\left( \frac{\delta}{\alpha} x^{-3\theta/2} \int_{-T_0}^{T_0} |F(-\theta/2 - i \theta t)| \; dt \right). \end{align} Using the Cauchy-Schwarz inequality, the error term \eqref{ET-integral} is bounded by \begin{eqnarray} \label{ET-afterCS} &&\ll \frac{\delta}{\alpha} x^{-3\theta/2} T_0^{1/2} \left( \int_{-T_0}^{T_0} |F(-\theta/2 - i \theta t)|^2~dt \right)^{1/2}. \end{eqnarray} Also, by applying the mean value theorem for Dirichlet polynomials, we have $$\int_{-T_0}^{T_0} \left\vert F(-\theta/2-i \theta t) \right\vert^2 ~dt \ll x^{\theta} \pi_C(x_j,L) \;\left( T_0 + x/ \omega \right).$$ Inserting this estimate in \eqref{ET-afterCS}, the error term \eqref{ET-integral} is bounded by \begin{align} \nonumber&\ll \delta T_0^{1/2}(\alpha x^\theta)^{-1} \pi_C(x_j,L)^{1/2} \left(T_0^{1/2}+x^{1/2}/\omega^{1/2}\right)\\ \label{error from integral up to T_0} &\ll \delta \pi_C(x_j,L)^{1/2} \left(1+x^{(1-\theta)/2}(\alpha\omega)^{-1/2}\right), \end{align} using $T_0=\alpha x^\theta$. We now compute the integral in \eqref{integral} to obtain the desired main term. The change of variable $w=1-s$ yields \begin{align*} \frac{1}{U_{\pm}} \sum_{\substack{x_j< p \le x_{j+1}\\ \sigma_p \in C}} \alpha p^{\theta} \frac{1}{2\pi i} \int_{1/2-iT_0}^{1/2+iT_0} \frac{1}{w} \left(\left(\frac{3 x^\theta}{ p^\theta}\right)^{w} - \left(\frac{ x^\theta}{3p^\theta}\right)^{w} \right)~dw. \end{align*} By Perron's formula, for all values $x_j < p \leq x_{j+1}$, $$\frac{1}{2\pi i} \int_{1/2-iT_0}^{1/2+iT_0} \frac{1}{w} \left(\left(\frac{3 x^\theta}{ p^\theta}\right)^{w} - \left(\frac{ x^\theta}{3p^\theta}\right)^{w} \right)~dw =1 + \operatorname{O} \left( \frac{1}{T_0} \right).$$ Therefore \eqref{integral} becomes \begin{equation}\label{psi/UT_0} \sum_{\substack{x_j < p \le x_{j+1}\\ \sigma_p \in C}} \frac{\alpha p^{\theta}}{U_{\pm}} \ + \operatorname{O} \left( \frac{1}{T_0 U_{\pm}} \sum_{\substack{x_j < p \le x_{j+1}\\ \sigma_p \in C}} \alpha p^{\theta} \right).
\end{equation} Using $T_0 =\alpha x^{\theta}$ and collecting the terms from \eqref{Perron1-error}, \eqref{error from integral up to T_0} and \eqref{psi/UT_0}, we obtain \begin{align}\label{total error up to T_0} &\nonumber\sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}} \left( \left[\alpha p^\theta\ - \delta_1 \right] - \left[\alpha p^\theta \left(1-\frac{1}{U_{\pm}} \right) -\delta_1 \right] - \frac{\alpha p^{\theta}}{U_{\pm}}\right) \\ &\nonumber \ll \delta \pi_C(x_j,L)^{1/2} \left(1+\frac{x^{(1-\theta)/2}}{(\alpha\omega)^{1/2}} \right) + \frac{\delta}{\alpha x^{\theta}} \ \pi_C(x_j,L) \\ &\quad + \frac{\alpha x^{\theta}}{T_1} \log(\alpha x^{\theta}) \pi_C(x_j,L) + \frac{1}{U_{\pm}} \int_{T_0}^{T_1} |L(1/2+it)| |F(-\theta/2-i \theta t)| ~dt. \end{align} In order to estimate the integral in the error term above, we use dyadic division and bound the integrals \[\frac{1}{2\pi i} \int_{1/2+iT'}^{1/2+i2T'} L(s) H(s)F(-s)~ds \ll \frac{1}{U_{\pm}} \int_{T'}^{2T'} |L(1/2+it)| |F(-\theta/2-i \theta t)| ~dt\] for $T_0 \leq T' \leq T_1/2$, using the Cauchy-Schwarz inequality. This is achieved by means of the following uniform bound, valid for $T' \leq \tau \leq 2T'$ and $[x_j, x_{j+1}]\subseteq [x,2x]$ and established in Proposition \ref{F-estimate-prop} for $K=\mathbb{Q}$, \begin{align*} F(-\theta/2-i \theta \tau) &\ll x^{\theta/2} \left(\log d_L+b+\frac{bx\log x}{T'} \right) \\ & \quad + \frac{|C|}{|G|}x^{(1+\theta)/2}\bigg(\log d_L +n_L\log T'\bigg) \left(\frac{\log T'}{\log x} +\frac{x^{1/2}}{T'} \right), \end{align*} and the mean value estimate below, given by Lemma \ref{L-estimate by MV}, $$\int_{T'}^{2T'}|L(1/2+it)|^2~dt \ll \alpha {T'} + \alpha^2 x^{\theta} (\log \alpha x^{\theta}).$$ This gives us \begin{align*} \nonumber &\int_{T'}^{2T'} |L(1/2+it)| |F(-\theta/2 - i \theta t)| ~dt \\ &\ll \left( \alpha {T'}^2 + \alpha^2 x^{\theta} T'(\log \alpha x^{\theta}) \right)^{1/2}\bigg\{x^{\theta/2} \left(\log d_L +b +b \frac{x\log x}{T'} \right) \\ &\quad +\frac{|C|}{|G|}x^{(1+\theta)/2}\bigg(\log d_L +n_L\log T'\bigg) \left(\frac{\log T'}{\log x} +\frac{x^{1/2}}{T'} \right)\bigg\} \\ &\ll x^{\theta/2} \left(\log d_L + b \right) \left(\alpha {T'}^2 + \alpha^2 T' x^\theta \log(\alpha x^{\theta})\right)^{1/2}\\ &\quad+ b x^{1+\theta/2} \log x \left(\alpha + \alpha^2 {T'}^{-1} x^\theta \log(\alpha x^{\theta}) \right)^{1/2}\\ &\quad+\frac{|C|}{|G|} x^{(1+\theta)/2} \left(\log d_L + n_L\log T'\right)\\ &\quad \times\left(\left(\alpha {T'}^2 + \alpha^2 T' x^{\theta} \log(\alpha x^{\theta})\right)^{1/2} \frac{\log T' }{\log x}+ \left(\alpha + \alpha^2 {T'}^{-1} x^{\theta} \log(\alpha x^{\theta}) \right)^{1/2} x^{1/2}\right). \end{align*} Recall that $U_{\pm}\ll \dfrac{\alpha x^{\theta}}{\delta}$, $\alpha x^{\theta} \leq T' < T_1$, and $ \alpha x^{\theta}\log (\alpha x^{\theta})\ll {\delta} T_1$. Therefore, \begin{align*} & \nonumber \frac{1}{U_{\pm}} \int_{T'}^{2T'} |L(1/2+it)| |F(-\theta/2 - i \theta t)| ~dt \\ &\ll \frac{\delta}{(\alpha x^\theta)^{1/2}}\ \bigg\{\frac{|C|}{|G|} x^{1/2} (\log d_L + n_L\log T_1) \left(T_1 \frac{\log T_1}{\log x}+x^{1/2} \right) \\ &\quad+ T_1 \left(\log d_L + b\right) + b x \log x \bigg\}. \end{align*} We now set \begin{align}\label{T_1value?} T_1 &=\frac{\alpha^{3/4}}{(n_L\delta \omega)^{1/2}\log x}x^{(1 +3\theta)/4}.
\end{align} Note that $\log T_1 \ll \log x$ since we have assumed $\displaystyle{\alpha^{1/4}(\omega n_L/ \delta)^{1/2}(\log x)^2 \ll x^{(1-\theta)/4}}.$ Thus, using $\log d_L \ll n_L\log n_L$ (which follows from the discussion in Serre \cite[Section I.3]{Serre-Inventiones}), we conclude \begin{align*} \nonumber \frac{1}{U_{\pm}} \int_{T'}^{2T'} &|L(1/2+it)| |F(-\theta/2 - i \theta t)| ~dt \ll (\delta n_L/\omega)^{1/2}\alpha^{1/4}x^{(1 +\theta)/4}\\ & \quad + \frac{|C|}{|G|}\left((\delta n_L/\omega)^{1/2}\alpha^{1/4}x^{3/4+\theta/4} + \frac{\delta n_L}{\alpha^{1/2}}x^{1-\theta/2}\log x\right). \end{align*} Given the value of $T_1$, since we use a dyadic division of the interval $[T_0, T_1]$, the number of integrals that we need to add is $\operatorname{O}(\log x)$. With this, inserting the above estimate in \eqref{total error up to T_0} gives us \begin{align} &\nonumber\sum_{\substack{x_j<p\le x_{j+1}\\ \sigma_p \in C}} \left(\left[\alpha p^\theta\ - \delta_1 \right] - \left[\alpha p^\theta \left(1-\frac{1}{U_{\pm}} \right) -\delta_1 \right] - \frac{\alpha p^{\theta}}{U_{\pm}}\right) \\ &\label{line1} \ll \delta \pi_C(x_j,L)^{1/2} \left(1+\frac{x^{(1-\theta)/2}}{(\alpha\omega)^{1/2}} \right) + \frac{\delta}{\alpha x^{\theta}} \ \pi_C(x_j,L) \\ & \label{line2} \quad + \frac{|C|}{|G|}\log x \left( (\delta n_L/\omega)^{1/2}\alpha^{1/4}x^{(3+\theta)/4} + \frac{\delta n_L}{\alpha^{1/2}}x^{1-\theta/2}\log x\right)\\ & \label{line3} \quad + (\delta n_L/\omega)^{1/2}\alpha^{1/4}x^{(1+\theta)/4}\log x. \end{align} Note that the error terms in \eqref{line1} can be absorbed into \eqref{line2}. Invoking \eqref{Suff-cond}, this completes the proof of Theorem \ref{Balog+CDT}. \end{proof} \vspace{0.3cm} \subsection{Proof of Theorem \ref{Gen-Landau+CDT}} Since the proof is quite similar to that of Theorem \ref{Balog+CDT}, we point out only the main differences below and omit the details. \begin{proof} We proceed as in the proof of Theorem \ref{Balog+CDT} with \[\delta_1=0, \; \quad \delta_2 = p^{-\lambda}.\] Note that in this case $\delta=p^{-\lambda} \le x_j^{-\lambda} $ and hence we can eliminate $\delta$ from the error terms. We follow the proof above until \eqref{T_1value?} and choose \[T_1= \alpha \omega x_j^{\theta+\lambda} \log x .\] Following the reasoning after \eqref{T_1value?} in the proof of Theorem \ref{Balog+CDT} with the above value of $T_1$, we obtain the asymptotics when $x_j<p \le x_{j+1}$. Lastly, summing over $j$ gives the desired result claimed in Theorem \ref{Gen-Landau+CDT}. \end{proof} \vspace{0.3cm} \subsection{Proof of Corollary \ref{a=0Cor}} We show $$\# \left\{ x< p \leq 2x \;:\; \sigma_p \in C_0\;\text{and}\; \left\{\frac{2 \sqrt{p} }{\ell}\right\} \in \left[ 0, \frac{1}{\ell} \right) \right\} = \frac{x}{\ell^2 \log x} + \operatorname{O}\left(\frac{x^{7/8}}{\ell^{1/4}}\log x\right),$$ where $C_0$ denotes the union of the conjugacy classes of trace zero in $\mbox{Gal}(L/\mathbb{Q}) =\mbox{GL}_2(\mathbb{F}_\ell)$. Before giving the proof, we first fix some notation. For a group $G$ and $C\subset G$, let $\delta_C: G\to \{0,1\}$ denote the class function such that $\delta_C(g)=1$ if and only if $g\in C$.
Then, \[\pi_C(x,L) = \sum_{\substack{p \text{ prime}\\ p \text{ unramified in } L\\ x< p\leq 2x}} \delta_C(\sigma_p).\] Let \[\Phi_{C, [\delta_1,\delta_2)}(x,L,\alpha) := \sum_{ \substack{ p \text{ prime}\\ p \text{ unramified in } L\\ x< p\leq 2x\\ \delta_1\le\{\alpha p^{\theta}\}< \delta_2 } } \delta_C(\sigma_p).\] We now define analogues of these functions that include contributions from the ramified primes as well. Let $D_{\mathfrak{p}}$ and $I_{\mathfrak{p}}$ denote the decomposition and inertia subgroups of $G$, respectively, at a chosen prime ideal $\mathfrak{p}$ lying above $p$. Consider $\mbox{Frob}_{\mathfrak{p}} \in D_{\mathfrak{p}}/I_{\mathfrak{p}}$, the Frobenius element at $\mathfrak{p}$. Then, for each integer $m\geq 1$, we define $$\delta_C(\sigma_p^m) := \frac{1}{|I_{\mathfrak{p}}|}\sum_{\substack{{g\in D_{\mathfrak{p}}}\\{ gI_{\mathfrak{p}}= \mbox{Frob}_{\mathfrak{p}}^m \in D_{\mathfrak{p}}/I_{\mathfrak{p}}} }} \delta_C(g).$$ Note that $\delta_C(\sigma_p^m)$ is independent of the choice of $\mathfrak{p}$ and the above definition agrees with the usual definition of $\delta_C(\sigma_p^m)$ for primes $p$ that are unramified in $L$. Define \[\tilde{\pi}_C(x,L) := \sum_{\substack{{p \text{ prime}, \, m\geq 1}\\ {x< p^m\leq 2x}}} \frac{\delta_C(\sigma_p^m)}{m} \quad \text{and} \quad \tilde{\Phi}_{C,[\delta_1,\delta_2)}(x,L,\alpha) := \sum_{\substack{{p \text{ prime}, \, m\geq 1}\\ {x< p^m\leq 2x}\\ \delta_1\le\{\alpha p^{m\theta}\}< \delta_2}} \frac{\delta_C(\sigma_p^m)}{m}. \] With this notation, we state two lemmas from \cite{Zywina} to be used later, specialized to the case when the base field is $\mathbb{Q}$. \begin{lemma}\cite[Lemma 2.7]{Zywina} For any subset $C$ of $G$ stable under conjugation, \begin{equation}\label{pi-tilde to pi} \tilde{\pi}_C(x, L) = \pi_C(x,L) + \operatorname{O}\left( \frac{x^{1/2}}{\log x} + \log d_L\right). \end{equation} \end{lemma} The following result follows from Proposition 8 of \cite{Serre-Inventiones}. \begin{lemma}\cite[Lemma 2.6 (ii)]{Zywina}\label{Zywina-2.6} Let $N$ be a normal subgroup of $G$ and let $C$ be a subset of $G$ stable under conjugation that satisfies $NC\subseteq C$. Then $$\tilde{\pi}_C(x, L) = \tilde{\pi}_{C'}(x, L^N),$$ where $C'$ is the image of $C$ in $G/N = \mbox{Gal}(L^N/\mathbb{Q})$. \end{lemma} We are now ready to prove Corollary \ref{a=0Cor}. \begin{proof} Consider the extension $L/\mathbb{Q}$ with $L= \mathbb{Q}(E[\ell])$. Observe that $C_0$ is stable under multiplication by $H_\ell$, the subgroup of scalar matrices in $ \mbox{GL}_2(\mathbb{F}_\ell)$. Moreover, it is the inverse image of $C_0'$, the subset of order two elements in $G_\ell' := G_\ell /H_\ell = \mbox{PGL}_2(\mathbb{F}_\ell)$. Applying Lemma \ref{Zywina-2.6}, we obtain $$\tilde{\pi}_{C_0}(x, L) = \tilde{\pi}_{C_0'}(x, L^{H_\ell}).$$ Therefore, using \eqref{pi-tilde to pi}, \begin{align*} & \# \left\{ x< p \leq 2x: \sigma_p \in C_0,\;p\text{ unramified and}\; \left\{\frac{2 \sqrt{p} }{\ell}\right\} \in \left[ 0, \frac{1}{\ell} \right) \right\}\\ &= \Phi_{C_0, [0,\frac{1}{\ell})}\left(x,L, \frac{2}{\ell} \right) \\ &= \tilde{\Phi}_{C_0, [0,\frac{1}{\ell})}\left(x,L, \frac{2}{\ell}\right)+ \operatorname{O}\left(\frac{x^{1/2}}{\log x} + \log d_L\right)\\ &= \tilde{\Phi}_{C_0',[0,\frac{1}{\ell})}\left(x,L^{H_{\ell}}, \frac{2}{\ell}\right) + \operatorname{O}\left(\frac{x^{1/2}}{\log x} + \log d_L\right)\\ &= \Phi_{C_0',[0,\frac{1}{\ell})}\left(x,L^{H_{\ell}}, \frac{2}{\ell}\right) + \operatorname{O}\left(\frac{x^{1/2}}{\log x} + \log d_L\right).
\end{align*} Applying Theorem \ref{Balog+CDT} and the Chebotarev density theorem to the sub-extension $L^{H_\ell}/\mathbb{Q}$, we obtain \begin{align*} & \Phi_{C_0',[0,\frac{1}{\ell})}\left(x,L^{H_{\ell}}, \frac{2}{\ell}\right) + \operatorname{O}\left(\frac{x^{1/2}}{\log x} + \log d_L\right)\\ & = \frac{\pi_{C_0'}(x, L^{H_\ell})}{\ell} + \operatorname{O}\left(\frac{x}{\ell^2 \omega \log x}+ \omega^{1/2}\frac{x^{7/8}}{\ell^{1/4}}\log x+\omega \ell^{3/2} x^{3/4}(\log x)^{2}\right) \\ & =\frac{\pi(x)}{\ell^2} + \operatorname{O}\left(\frac{x}{\ell^2 \omega \log x}+ \omega^{1/2}\frac{x^{7/8}}{\ell^{1/4}}\log x+\omega \ell^{3/2} x^{3/4}(\log x)^{2}\right), \end{align*} noting that $\log d_L \ll [L^{H_\ell} :\mathbb{Q}] \ll \ell^3 $ and $\ell \ll x^{1/14} \omega^{-2/7} \log ^{-8/7}x$. This completes the proof. \end{proof} \vspace{0.2cm} \section{Proofs of intermediate results}\label{Proof of F estimate} \vspace{0.2cm} \subsection{Mean value estimation of $L(1/2 +it)$} \begin{lemma}\label{L-estimate by MV} \[\int_{T'}^{2 T'} |L(1/2+it)|^2 ~dt \ll \alpha T' + {\alpha}^2 x^{\theta} \log(\alpha x^{\theta}) .\] \end{lemma} \begin{proof} We have \begin{align*} &\int_{T'}^{2T'} \left|L(1/2+it)\right|^2 ~dt =\alpha \int_{T'}^{2T'} \left|\sum_{\alpha x^\theta/3-\delta_1< m\leq 3\alpha x^\theta-\delta_1}(m+\delta_1)^{-1/2-it}\right|^2 ~dt \\ &= \alpha \int_{T'}^{2T'} \sum_{\alpha x^\theta/3-\delta_1< m\leq 3\alpha x^\theta-\delta_1}(m+\delta_1)^{-1}+ \sum_{\alpha x^\theta/3-\delta_1<k\neq m\leq 3\alpha x^\theta-\delta_1}\frac{(k+\delta_1)^{-1/2+it}}{(m+\delta_1)^{1/2+it}}~dt \\ &= \sum_{\alpha x^\theta/3-\delta_1< m\leq3\alpha x^\theta-\delta_1}\frac{\alpha T'}{(m+\delta_1)}+ \operatorname{O}\left( \sum_{ \alpha x^\theta/3-\delta_1<k< m\leq3\alpha x^\theta-\delta_1}\frac{\alpha((m+\delta_1)(k+\delta_1))^{-1/2}}{\log((m+\delta_1)/(k+\delta_1))} \right). \end{align*} Using $\delta_1 < \alpha x^\theta/3$ and rewriting the sum in the error term, we obtain \begin{eqnarray*} &&\sum_{\alpha x^\theta/3-\delta_1< m\leq3\alpha x^\theta-\delta_1}\sum_{r=1}^{(m+\delta_1)/9}\frac{\alpha}{(m+\delta_1)\sqrt{1-r/(m+\delta_1)} \log\left(1-r/(m+\delta_1)\right)}\\ && \ll \sum_{\alpha x^\theta/3-\delta_1< m\leq3\alpha x^\theta-\delta_1}\sum_{r=1}^{(m+\delta_1)/9}\frac{ \alpha }{r} \ll \alpha^2 x^\theta \log (\alpha x^\theta). \end{eqnarray*} This gives us \begin{eqnarray*} \int_{T'}^{2T'} |L(1/2+it)|^2 dt &=&\alpha T'\sum_{\alpha x^\theta/3-\delta_1< m\leq 3\alpha x^\theta-\delta_1}(m+\delta_1)^{-1}+ \operatorname{O}\left( \alpha^2 x^\theta \log (\alpha x^\theta) \right) \\ &\ll& \alpha T' + \alpha^2 x^\theta \log(\alpha x^\theta). \end{eqnarray*} This completes the proof of the lemma. \end{proof} \vspace{0.3cm} \subsection{Estimation of $F(- \theta / 2 - i \theta t)$} For each sub-interval $[x_j, x_{j+1}] \subset [x,2x]$ we prove the following. \begin{proposition}\label{F-estimate-prop} Let $\theta \in [0,1]$ be fixed. Given a Galois extension $L/K$ of number fields, and $C$ a union of $b$ conjugacy classes in $\mathrm{Gal}(L/K)$, define $$F(-\theta/2 -i\theta\tau) = \sum\limits_{\substack{x_j < N(\mathfrak{p})\leq x_{j+1} \\ \sigma_\mathfrak{p} \in C}} (N\mathfrak{p})^{\theta/2 +i\theta \tau}.$$ Assume the Riemann Hypothesis for the Dedekind zeta function of $L$.
Then \begin{align*} F(-\theta/2 - i \theta \tau) &\ll \nonumber \frac{|C|}{|G|} x^{(1+\theta)/2}(\log d_L + n_L\log T') \left(\frac{\log T'}{\log x}+ \frac{x^{1/2}}{T'}\right) \\ & \quad + x^{\theta/2} \left( \log d_L + b n_K + bn_K\frac{ x\log x}{T'}\right) \end{align*} uniformly for $0<T'\leq \tau\leq 2T'$. \end{proposition} The proof follows along the same lines as in \cite{L-O}. The function $F(-\theta/2-i\theta\tau)$ here plays the role of $\pi_C(x,L/K)$ in \cite{L-O}, the main difference being a shift in the complex variable $s=\sigma +it$ by $\theta/2 +i\theta\tau$. While the shift in the real part by $\theta/2$ results in a factor of $x^{\theta/2}$ accompanying the error terms obtained in \cite{L-O}, the shift in the imaginary part is where the saving is obtained. To be precise, we choose a contour that is a box avoiding the real axis, so that the only poles in the interior are the non-trivial zeros of $L(s,\chi, L/E)$ and a pole at $-\theta/2-i\theta\tau$. In particular, the residue from the pole at $s=1$ that makes up the main term in the proof by Lagarias-Odlyzko \cite{L-O} does not appear here, giving us a power saving under GRH. We now provide details of the proof. \begin{proof} We first consider the function \begin{equation*} \Psi_C(-\theta/2 -i\theta\tau):= \sum\limits_{\substack{N\mathfrak{p} ^m \in (x_j,x_{j+1}]\\ \mathfrak{p} \text{ unramified}\\ \left[\frac{L/K}{\mathfrak{p} }\right]^m = C}} \frac{\log N\mathfrak{p} }{N\mathfrak{p} ^{m(-\theta/2-i\theta\tau)}} \end{equation*} for a single conjugacy class $C$. We use partial summation to pass to the bounds for $F(-\theta/2 -i\theta\tau)$. As in \cite{L-O}, in order to use Hecke $L$-functions, we need to consider the ramified primes as well (these are removed later). For $\Re s>1$, let \begin{equation}\label{Z_C-alt-defn} Z(s) := -\frac{|C|}{|G|} \sum_{\chi} \bar{\chi}(g) \frac{L'}{L}(s,\chi, L/E), \end{equation} where $\chi$ runs over the irreducible characters of $H= \mathrm{Gal}(L/E)$ and $E$ is the fixed field of $H$, the cyclic subgroup of $G$ generated by a chosen element $g \in C$. Note that \begin{equation*} Z(s) = \sum_{\mathfrak{p} , m} \Theta(\mathfrak{p} ^m)\log(N\mathfrak{p} ) (N\mathfrak{p} )^{-ms}, \end{equation*} where for an unramified prime $\mathfrak{p} \subseteq \mathcal{O}_K,$ $$\Theta(\mathfrak{p} ^m)= \begin{cases} 1 & \text{if } \left[\frac{L/K}{\mathfrak{p} }\right]^m = C\\ 0 & \text{otherwise} \end{cases} $$ and $|\Theta(\mathfrak{p} ^m)|\leq 1$ if $\mathfrak{p} $ ramifies in $L$. We use Perron's formula to estimate $F(-\theta/2 -i\theta\tau)$ by considering partial sums of $Z(s)$, which include the ramified primes as well. Define \begin{equation}\label{I_C(x,t) defn} I(x_j,T) := I_C(x_j,T,\theta,\tau)= \frac{1}{2\pi i}\int\limits_{\sigma_0-iT}^{\sigma_0+iT} Z(s-\theta/2-i\theta\tau) \frac{{x_{j+1}}^s-{x_{j}}^s}{s}ds, \end{equation} where $T = \theta T'/2$ and $\sigma_0=1+\theta/2+1/\log x$.
Then, \begin{align}\label{error-perron+ram-primes} \nonumber|\Psi_C(-\theta/2 -i\theta\tau) - I(x_j,T)| & \leq \left| I(x_j,T) - \sum\limits_{\substack{\mathfrak{p} ,m\\ x_j< N\mathfrak{p}^m \leq x_{j+1}}} \frac{\Theta(\mathfrak{p} ^m)\log(N\mathfrak{p} )}{(N\mathfrak{p} )^{m(-\theta/2-i\theta\tau)}}\right| \\ & \quad + \left| \sum\limits_{\substack{\mathfrak{p} ,m\\ x_j<N\mathfrak{p}^m \leq x_{j+1}\\ \mathfrak{p} \text{ ramified}}} \frac{\Theta(\mathfrak{p} ^m)\log(N\mathfrak{p} )}{(N\mathfrak{p} )^{m(-\theta/2-i\theta\tau)}}\right|. \end{align} The two terms on the right-hand side of the above equation are now estimated. Using Lemma 3.1 of \cite{L-O}, we have \begin{align*} &\left| I(x_j,T) - \sum\limits_{\substack{\mathfrak{p} ,m\\ x_j< N\mathfrak{p}^m \leq x_{j+1}}} \frac{\Theta(\mathfrak{p} ^m)\log(N\mathfrak{p} )}{(N\mathfrak{p} )^{m(-\theta/2-i\theta\tau)}}\right| \\ & \hspace{1cm} \leq \sum\limits_{\substack{\mathfrak{p} ,m \\ N\mathfrak{p} ^m =x_{j+1} \text{ or } N\mathfrak{p} ^m= x_j}} \left( \frac{\log N\mathfrak{p} }{(N\mathfrak{p} ^m)^{-\theta/2 }}+ \frac{\sigma_0}{T}\right) \\ & \hspace{1.75cm} + \sum\limits_{\substack{\mathfrak{p} ,m \\ N\mathfrak{p} ^m \neq x_{j+1} }} \left( \frac{x_{j+1}}{N\mathfrak{p} ^m}\right)^{\sigma_0} \min \left(1, T^{-1}\left| \log\frac{x_{j+1}}{N\mathfrak{p} ^m}\right|^{-1} \right)\frac{\log N\mathfrak{p} }{(N\mathfrak{p} ^m)^{-\theta/2}} \\ & \hspace{1.75cm} + \sum\limits_{\substack{\mathfrak{p} ,m \\ N\mathfrak{p} ^m \neq x_j }} \left( \frac{x_j}{N\mathfrak{p} ^m}\right)^{\sigma_0} \min \left(1, T^{-1}\left| \log\frac{x_j}{N\mathfrak{p} ^m}\right|^{-1} \right)\frac{\log N\mathfrak{p} }{(N\mathfrak{p} ^m)^{-\theta/2}}. \end{align*} Following arguments from \cite{L-O} to estimate the terms on the right side of the above inequality, and noting that $x_j\leq 2x$ for each $j=0,\ldots, B$, we get \begin{equation}\label{Perron error} I(x_j,T) - \sum\limits_{\substack{\mathfrak{p} ,m\\ x_j< N\mathfrak{p}^m \leq x_{j+1}}} \frac{\Theta(\mathfrak{p} ^m)\log(N\mathfrak{p} )}{(N\mathfrak{p} )^{m(-\theta/2-i\theta\tau)}} \ll x^{\theta/2}n_K \log x + n_K\frac{\sigma_0}{T} + n_K\frac{x^{1+\theta/2}\log^2x}{T}. \end{equation} Moreover, \begin{align}\label{ram-primes-error} \sum\limits_{\substack{\mathfrak{p},m\\ x_j< N\mathfrak{p}^m \leq x_{j+1}\\ \mathfrak{p} \text{ ramified}}} \frac{\Theta(\mathfrak{p} ^m)\log(N\mathfrak{p} )}{(N\mathfrak{p} )^{m(-\theta/2-i\theta\tau)}} \ll x^{\theta/2}\log x \log d_L. \end{align} Putting \eqref{Perron error} and \eqref{ram-primes-error} together in \eqref{error-perron+ram-primes}, we see that \begin{equation}\label{R_1} \Psi_C(-\theta/2 -i\theta\tau) = I(x_j,T)+\operatorname{O}\left( x^{\theta/2}\log x \left( \log d_L + n_K + \frac{n_K x \log x}{T}\right)\right). \end{equation} Next, we estimate $I(x_j,T)$. From \eqref{I_C(x,t) defn} and \eqref{Z_C-alt-defn}, we have \begin{equation*} I(x_j,T) = -\frac{|C|}{|G|} \sum_{\chi} \bar{\chi}(g)\frac{1}{2\pi i} \int\limits_{\sigma_0-iT}^ {\sigma_0 + iT}\frac{L'}{L}\left(s-{\theta}/{2}-i\theta\tau,\chi,L/E\right)\frac{x_{j+1}^s-x_j^s}{s} ~ds, \end{equation*} where $\chi$ runs through the irreducible characters of $H= \langle g\rangle$, $T'\leq\tau \leq 2T'$ is fixed, and $T = \theta T'/2$.
We make the change of variable $s \leftrightarrow s-\frac{\theta}{2} - i\theta\tau$ to rewrite $$I(x_j,T) = -\frac{|C|}{|G|} \sum_{\chi} \frac{\bar{\chi}(g)}{2\pi i} \int\limits_{1+\frac{1}{\log x}-iT-i\theta\tau}^ {1+ \frac{1}{\log x}+ iT-i\theta\tau}\frac{L'}{L}\left(s,\chi,L/E\right)\frac{x_{j+1}^{s+\theta/2+i\theta\tau}-x_j^{s+\theta/2+i\theta\tau}}{s+\theta/2+i\theta\tau} ~ds.$$ Abbreviating $\frac{L'}{L}(s,\chi,L/E)$ by $\frac{L'}{L}(s,\chi)$, we evaluate, for each character $\chi$ of $H$, the integral \begin{equation*}\label{I_chi-defn} I_{\chi}(x_j,T) := \frac{1}{2\pi i} \int\limits_{1+\frac{1}{\log x}-iT-i\theta\tau}^ {1+ \frac{1}{\log x}+ iT-i\theta\tau}\frac{L'}{L}(s,\chi)\frac{x_j^{s+\theta/2+i\theta\tau}}{s+\theta/2+i\theta\tau} ~ds \end{equation*} for each $j=0,1,\ldots, B$. We may assume that $T+\theta\tau$ and $T-\theta\tau$ do not coincide with the imaginary part of a zero of any of the $L(s,\chi)$. To estimate this integral, we move the line of integration, consider the integral over a rectangle, and apply Cauchy's theorem. More specifically, for $J:= m+ \frac{1}{2}$, where $m$ is a non-negative integer, let $B_{T,J,\theta}$ be the positively oriented rectangle with vertices at $1+\frac{1}{\log x} -i(T+\theta\tau), \, 1+\frac{1}{\log x} +i(T-\theta\tau),\, -J-\frac{\theta}{2} +i(T-\theta\tau)$ and $-J-\frac{\theta}{2} -i(T+\theta\tau).$ Observe that this box does not intersect the real axis, because $T-\theta\tau <0$ for all $\tau \in [T',2T']$. Define \begin{equation*} I_{\chi}(x_j, T, J) := \frac{x_j^{\theta/2 + i\theta\tau}}{2 \pi i}\int_{B_{T,J,\theta}} \frac{L'}{L}(s,\chi)\frac{x_j^s}{s+\theta/2 +i\theta\tau}~ds. \end{equation*} Now we estimate the error term \begin{align}\label{R-chi-defn} R_{\chi}(x_j,T,J) := I_{\chi}(x_j,T,J)-I_{\chi}(x_j,T) \end{align} uniformly for each $j=0,\ldots, B$. Here, the error $R_{\chi}(x_j,T,J)$ consists of the sum of one vertical integral $V_{\chi}(x_j,T,J)$ and two horizontal integrals $H_{\chi}(x_j,T,J)$ and $H^*_{\chi}(x_j,T)$, which we now estimate, following the line of proof in \cite[Section 6, Lemma 6.2]{L-O}.
We deduce \begin{align*} V_{\chi}(x_j,T,J) &:= \frac{1}{2\pi i} \int\limits_{T}^{-T} \frac{x_j^{-J+it}}{-J+it} \frac{L'}{L}(-J-\theta/2+i(t-\theta\tau),\chi) ~dt \\ &\ll \frac{x^{-J}}{J}T\left( \log A(\chi) + n_E \log\left(|T+\theta\tau| + |J+\theta/2|\right) \right); \end{align*} \begin{align*} H_{\chi}(x_j,T,J) &:=\frac{x_j^{\theta/2+i\theta\tau}}{2\pi i} \int\limits_{-J-\theta/2}^{-1/4} \frac{x_j^{\sigma -iT}}{\sigma+\theta/2-iT} \frac{L'}{L}(\sigma-i(T+\theta\tau), \chi)~d\sigma \\ &\quad - \frac{x_j^{\theta/2+i\theta\tau}}{2\pi i} \int\limits_{-J-\theta/2}^{-1/4}\frac{x_j^{\sigma +iT}}{\sigma+\theta/2+iT} \frac{L'}{L}(\sigma+i(T-\theta\tau), \chi) ~d\sigma \\ & \ll \frac{x^{-1/4 +\theta/2}}{T\log x} \left( \log A(\chi) + n_E\log|T+\theta\tau| \right); \end{align*} and lastly \begin{align*} H^*_{\chi}(x_j,T) &:= \frac{x_j^{\theta/2+i\theta\tau}}{2\pi i} \int\limits_{-1/4}^{1+1/\log x} \frac{x_j^{\sigma -iT}}{\sigma+\theta/2-iT} \frac{L'}{L}(\sigma-i(T+\theta\tau), \chi)~d\sigma \\ & \quad- \frac{x_j^{\theta/2+i\theta\tau}}{2\pi i} \int\limits_{-1/4}^{1+1/\log x}\frac{x_j^{\sigma +iT}}{\sigma+\theta/2+iT} \frac{L'}{L}(\sigma+i(T-\theta\tau), \chi) ~d\sigma \\ &= \frac{1}{2\pi i} \int_{-1/4}^{1+1/\log x}\frac{ x_j^{\sigma +\theta/2 -iT}}{\sigma+\theta/2-iT} \sum_{\substack{\rho \\ |\gamma-(T+\theta\tau)|\leq 1}} \frac{~d\sigma}{\sigma-i(T+\theta\tau)-\rho} \\ &\quad - \frac{1}{2\pi i} \int_{-1/4}^{1+1/\log x}\frac{ x_j^{\sigma +\theta/2 +iT}}{\sigma+\theta/2+iT} \sum_{\substack{\rho \\ |\gamma+(T-\theta\tau)|\leq 1}} \frac{d\sigma}{\sigma+i(T-\theta\tau)-\rho}\\ & \quad + \operatorname{O}\left(\frac{x^{1+\theta/2}}{T\log x}(\log A(\chi) +n_E\log|T+\theta\tau|)\right), \end{align*} where the sum above is taken over the non-real zeros $\rho$ of $L(s,\chi)$ and $A(\chi)= d_EN_{E/\mathbb{Q}}(f_\chi)$, $f_\chi$ being the conductor of $\chi$. The proof of Lemma $6.3$ in \cite{L-O} can be modified to show that \begin{align*} &\int\limits_{-1/4}^{1+1/\log x}\frac{ x_j^{\sigma +\theta/2 -iT}}{\sigma+\theta/2-iT} \sum_{\substack{\rho \\ |\gamma-(T+\theta\tau)|\leq 1}} \frac{1}{\sigma-i(T+\theta\tau)-\rho}~d\sigma \\ & \hspace{1cm} \ll \frac{x^{1+1/\log x}}{T\log x} n_\chi(T+\theta\tau)\\ & \hspace{1cm} \ll \frac{x^{1+\theta/2}\log x}{T}(\log A(\chi)+ n_E\log|T+\theta\tau|). \end{align*} Here $n_\chi(t)$ denotes the number of zeros $\rho=\beta+i\gamma$ of $L(s,\chi)$ with $0<\beta <1$ and $|\gamma-t|\le 1$. A similar estimate holds for the sum over $\rho$ with $|\gamma+ (T-\theta\tau)|\leq 1$. Therefore, for each $j=0, 1, \ldots, B$, $$H_\chi^*(x_j,T)\ll \frac{x^{1+\theta/2}\log x}{T} \left( \log A(\chi) + n_E\log|T+\theta\tau| \right).$$ Note that the estimate for $H_{\chi}(x_j,T,J)$ is bounded above by the estimate for $H_\chi^*(x_j,T)$. Therefore, from \eqref{R-chi-defn}, \begin{align}\label{R-chi-estimates} \nonumber R_{\chi}(x_j,T,J) &\ll \frac{x^{1+\theta/2}\log x}{T} \left( \log A(\chi) + n_E\log|T+\theta\tau| \right)\\ & + \frac{x^{-J}}{J}T\left( \log A(\chi) + n_E \log\left(|T-\theta\tau| + |J+\theta/2|\right) \right). \end{align} We remark here that one only needs to consider the first term above; the second term goes to zero as $J\to \infty$. Next, by Cauchy's theorem, $I_{\chi}(x_j,T,J)$ is the sum of the residues at the poles of the integrand inside $B_{T,J,\theta}$. For our specified contour, the poles occur only at the non-real zeros of $L(s,\chi)$ and at $s=-\theta/2-i\theta\tau$.
This gives \begin{equation}\label{Residues} I_{\chi}(x_j,T,J) = \sum\limits_{|\gamma +\theta\tau|<T} \frac{x_j^{\rho+\theta/2+i\theta\tau}}{\rho+\theta/2+i\theta\tau} + \frac{L'}{L}(-\theta/2 -i\theta\tau). \end{equation} The term $\frac{L'}{L}(-\theta/2 -i\theta\tau)$ is estimated using the following lemma, which is a slightly more general version of Lemma 6.2 in \cite{L-O} and can be proved using essentially the same arguments, so we omit the details here. \begin{lemma} If $s=\sigma+it$ with $\sigma\leq -\theta/2$ and $|s+m| \geq \theta/2$ for all non-negative integers $m$, then $$\frac{L'}{L}(s,\chi) \ll \log A(\chi) +n_E\log(|s|+2).$$ \end{lemma} Applying this bound, we get \begin{equation}\label{L'/L-pole} \frac{L'}{L}(-\theta/2 -i\theta\tau) \ll \log A(\chi) + n_E\log(|\theta/2 +i\theta\tau| +2). \end{equation} Using \eqref{L'/L-pole}, \eqref{Residues} and \eqref{R-chi-estimates} in \eqref{R-chi-defn}, we have \begin{align*} I_{\chi}(x_j,T) &= \sum\limits_{|\gamma +\theta\tau|<T} \frac{x_j^{\rho+\theta/2+i\theta\tau}}{\rho+\theta/2+i\theta\tau} \\ & \quad + \operatorname{O}\left( \left( \frac{x^{1+\theta/2}\log x}{T} +1 \right) \left( \log A(\chi) + n_E\log|T+\theta\tau| \right) \right). \end{align*} We plug this into the definition of $I(x_j,T)$ to obtain \begin{align}\label{I_x} \nonumber &I(x_j,T) - S(x_{j+1}, T) + S(x_j,T)\\ & \nonumber \ll \frac{|C|}{|G|}\left( \frac{x^{1+\theta/2}\log x}{T} +1 \right) \sum_{\chi} \left( \log A(\chi) + n_E\log|T+\theta\tau| \right)\\ & \ll \frac{|C|}{|G|} \left( \frac{x^{1+\theta/2}\log x}{T} +1 \right)(\log d_L + n_L\log |T+\theta\tau|), \end{align} where $$S(y,T) := \frac{|C|}{|G|} \sum_{\chi}\bar{\chi}(g) \sum\limits_{|\gamma +\theta\tau|<T} \frac{y^{\rho+\theta/2+i\theta\tau}}{\rho+\theta/2+i\theta\tau}.$$ Under GRH, with a slight modification to the calculation in the proof of Theorem $9.1$ of \cite{L-O}, and summing over $\chi$, we get \begin{align*} S(x_j,T) \ll \frac{|C|}{|G|}x^{1/2 +\theta/2}\log T (\log d_L + n_L\log |T+\theta\tau|). \end{align*} Finally, using the above bounds in \eqref{I_x} and recalling \eqref{R_1}, we conclude \begin{align*} \Psi_C(-\theta/2 -i\theta\tau) &\ll \frac{|C|}{|G|} x^{1/2 +\theta/2} \left(\log d_L + n_L\log |T+\theta\tau|\right) \left(\log T +\frac{x^{1/2}}{T}\log x\right) \\ &\quad + x^{\theta/2} \log x\left(\log d_L + n_K + n_K\frac{ x \log x}{T}\right). \end{align*} Using partial summation and $\tau\leq 2T'$, and setting $T= \theta T'/2$, one gets the desired bounds for $F$ in the case of a fixed conjugacy class $C$. Next, suppose $C = \cup_{m=1}^b C_m$ is a union of $b$ conjugacy classes for some integer $b\geq 1$. Then, using the estimates established for each conjugacy class $C_m$ and summing over $m$, we have \begin{align*} F(-\theta/2 -i\theta\tau ) & \ll \frac{\sum_{m=1}^b|C_m|}{|G|} x^{(1+\theta)/2}\bigg(\log d_L + n_L\log T'\bigg) \left(\frac{\log T'}{\log x} + \frac{x^{1/2}}{T'}\right)\\ &\quad + x^{\theta/2} \left( \log d_L + b n_K + bn_K\frac{x \log x}{T'}\right)\\ & = \frac{|C|}{|G|} x^{(1 +\theta)/2} \bigg(\log d_L + n_L\log T'\bigg) \left(\frac{\log T'}{\log x} + \frac{x^{1/2}}{T'}\right)\\ &\quad + x^{\theta/2} \left( \log d_L + bn_K + bn_K\frac{x \log x}{T'} \right), \end{align*} noting that the error term in \eqref{ram-primes-error} that contributes $x^{\theta/2}\log d_L$ remains unchanged whether $C$ is a single class or a union of conjugacy classes. This completes the proof of the proposition. \end{proof} \bibliographystyle{alpha}
\section{Introduction} A variety of space-borne experiments in the course of the last two decades (IRAS, Spitzer, \textit{Herschel}) have shown that the Galaxy is filled with relatively cold dust (15$\leq\mathrm{T}_{d}\leq30$ K, e.g. \citet{Ferriere01}) distributed along each Line of Sight (LOS) and associated with the atomic, molecular and ionized gas phases. This finding has also been recently confirmed by the \textit{Planck} whole-sky observations \citep{Planck_A22,Planck_A23,Planck_A24,Planck_A25}. Dust grains can absorb and re-emit a large fraction of the radiation provided by stellar sources. Depending on their size, they are either stochastically heated (i.e. the smaller grains, which emit mainly in the Near/Mid-IR) or reach thermal equilibrium (i.e. the bigger grains, which emit predominantly in the Far-IR regime). Dust emission properties have been intensely studied at high Galactic latitudes, where each gas phase can be assumed to be relatively isolated from the others, and mixing along the LOS can be avoided \citep[e.g.][]{Boulanger88, Boulanger96,Lagache00}. On the contrary, in the Galactic Plane, disentangling dust emission arising from different gas components and intrinsically different environments is a complicated problem which requires additional kinematic information on the emitting gas \citep[e.g.][]{Bloemen90}. The separation can be achieved with the so-called \textit{inversion method}, originally introduced by \citet{Bloemen86} to analyse the individual contributions of atomic and molecular gas to the integrated $\gamma$-ray emission. The model was later extended by \citet{Bloemen90} to include the FIR emission across the Galactic Plane observed with IRAS at 60 $\mu$m\ and 100 $\mu$m. So far, the application of the inversion method has been limited by the angular resolution of the available IR and ancillary data (equal or close to $1^{\circ}$), which does not allow one to, e.g., spatially separate individual clouds \citep{Bloemen90, Giard94, Sodroski97, Paladini07, Planck_Marshall11}. These works have demonstrated that, on average, most of the Galactic IR luminosity is associated with dust in the atomic gas component, which is primarily irradiated by the local radiation field. Dust associated with the molecular and ionized components, on the other hand, is mostly heated by O and B stars embedded in molecular clouds \citep{Sodroski97,Paladini07}. \citet{Planck_Marshall11} decomposed the emission from 12 $\mu$m to millimeter wavelengths using IRAS, COBE-DIRBE, WMAP and the recent \textit{Planck} data in the latitude range $\vert b\vert\leq10^{\circ}$ and found evidence of the existence of a significant amount of cold atomic and warm molecular gas not accounted for by the standard tracers. This gas is typically referred to as \textit{dark gas} \citep{Grenier05}. In particular, the authors of this work claim that the \textit{dark gas} column density is comparable to - or greater than - the column density of molecular gas outside the Molecular Ring, i.e. the region of the Galaxy between Galactocentric radii 4 kpc $<$ R $<$ 8 kpc, where 70 percent of the molecular gas resides \citep{Combes91}. As well as working at low spatial resolution, early inversion studies often did not include the ionized gas component, as historically its contribution to the overall IR emission was thought to be negligible \citep{Bloemen90,Giard94}.
A few studies \citep{Sodroski97, Paladini07, Planck_Marshall11} did take this phase of the gas into account, but used the only data available at the time for tracing ionized gas, that is, radio continuum data. There are two problems with this approach. The first is that radio continuum emission is not uniquely associated with the interaction - and deceleration - of free electrons with ions in a plasma, the free-free or bremsstrahlung emission: at low frequencies (5 GHz or less), one has to estimate and subtract a possible contamination due to synchrotron emission \citep[e.g.][]{Paladini05}, while at relatively high frequencies ($>$ 10 GHz), spinning dust emission may become significant \citep[e.g.][]{Planck_Dickinson11, Planck_Marshall11}. The second, even more important, issue is the fact that radio continuum data are unable to provide the 3D-information on the location of the emitting gas which is required by inversion techniques. The work we describe in this paper is therefore motivated by the following considerations: \begin{enumerate} \item the resolution and sensitivity of the newly available Spitzer and \textit{Herschel} data dramatically improve our view of the Galactic Plane at IR wavelengths: the combined GLIMPSE \citep{Benjamin03}, MIPSGAL \citep{Carey09} and Hi-GAL \citep{Molinari10_PASP} surveys have mapped the inner Galactic Plane in the wavelength range 8 $\mu$m $\leq\lambda\leq$ 500 $\mu$m with a resolution of 35 arcsec or higher. These new data allow us to set more stringent constraints on dust properties and on their variations with Galactocentric radius; \item the last couple of years have witnessed a tremendous improvement in the quality of the available data on the ionized gas. In particular, hydrogen radio recombination lines (RRLs) have been observed for large portions of the Galactic Plane \citep{Staveley-Smith96,Alves11}. These data are sampled with a resolution of $\simeq15$ arcmin, which allows us to work at a $\sim$4 times better resolution than previous inversions. Even more importantly, they provide unprecedented information about the properties and distribution of the ionized gas component along the LOS; \item in previous inversion works, the decomposition into Galactocentric bins has been done {\em{blindly}}, that is, without taking into account local features present at a given location of the Galaxy or on specific angular scales. The higher resolution of the IR as well as of the ancillary data now allows a more targeted approach; \item last but not least, if on the one hand the higher angular resolution of the available data makes it possible to devote more attention to the specific content of the Galactic region to {\em{invert}}, on the other hand it sheds light on the limitations of the inversion technique itself. Hence, one of the goals of this work is to investigate and describe these limitations. \end{enumerate} In this first paper we concentrate on a $2^{\circ}\times2^{\circ}$ field centred on \textit{(l,b)}=(30$^{\circ}$.0,0$^{\circ}$.0) observed by the GLIMPSE, MIPSGAL and Hi-GAL surveys. This field was one of the two observed during the Hi-GAL Science Demonstration Phase (SDP). Therefore, hereafter we will refer to it as SDPF1 ({\em{Science Demonstration Phase Field 1}}). The paper is organized as follows. A review of the content of SDPF1 is presented in Section \ref{sec:l30_field}. The IR and ancillary radio data used for the analysis are described in Section \ref{sec:infrared_ancillary}.
The inversion model is discussed in Section \ref{sec:model_data}, as well as the Galactocentric region subdivision and the gas column density evaluation for each gas phase. We find evidence of untraced cold atomic and possibly warm molecular gas features extending up to several arcminutes. These features do not allow a correct evaluation of the gas column densities, thereby preventing the inversion model from working properly. The regions where these features are dominant, however, can be predicted by analyzing the correlation between the gas column densities and the IR maps, as explained in Section \ref{sec:pearsons}. In Section \ref{sec:results_discussion} we present our results and discuss the limitations of the inversion model in its current stage of development. We also investigate the importance of accounting for dust associated with the ionized gas, by demonstrating with a simple test the erroneous conclusions that one may reach by neglecting this component. In Section \ref{sec:conclusions} we describe our conclusions. \section{The content of SDPF1}\label{sec:l30_field} According to the \citet{Russeil03} model of the Galactic spiral structure, SDPF1 intercepts the Sagittarius arm twice, and both the Norma-Cygnus arm and the Perseus arm once. Furthermore, the LOS also cuts through the Scutum-Crux arm at the tangent point which, adopting a Sun-Galactic centre distance of 8.5 kpc, is found at a Galactocentric distance R=4.25 kpc. Both the source content and the local Interstellar Medium (ISM) of SDPF1 have been extensively investigated (see Figure \ref{fig:pmw_l30}). The mini-starburst complex W43, located near l=30$^{\circ}.8$ \citep{Bally10}, is the brightest source in the field and one of the brightest FIR sources in the entire Galaxy. It is located at d$\simeq$5.5 kpc, corresponding to a Galactocentric distance R$\simeq$ 4.65 kpc. This giant HII region contains a cluster of OB and Wolf-Rayet stars and its FIR continuum luminosity is $\sim 3.5\times 10^{6}$ L$_{\sun}$ \citep{Smith78}. An additional 29 HII regions are located in SDPF1 \citep[][]{Anderson09}, most of which are evolved \citep{Paladini12}. The well-studied ultra-compact HII region G29.944-0.04 \citep{Quireza06} is located toward the centre of the field. The N49 HII bubble, centred on \textit{(l,b)}=(28.83$^{\circ}$,-0.23$^{\circ}$), is a known case of triggered star formation \citep{Zavagno10}. Two separate HII regions forming the RCW175 group, located towards the edge of the field at $(l,b)=(29.07,-0.68)$, exhibit spinning dust emission (Tibbs et al. 2012). About 50 molecular clouds sit in the proximity of W43, and a giant cloud with a diameter of $\sim 20$ pc surrounds W43 itself \citep[][and references therein]{Bally10}. In total, $\sim$75 molecular clouds can be found in SDPF1 \citep[][see Figure \ref{fig:pmw_l30}]{Rathborne09}, together with $\sim 340$ Infrared Dark Clouds (IRDCs) from the \citet{Peretto09} catalogue. A cold layer of atomic hydrogen \citep{Gibson04} has also been observed in SDPF1. As discussed in Section \ref{sec:missing_column}, this layer is seen in absorption in the HI 21 cm line. It is composed of a combination of HI Self-Absorption features \citep[HISA, e.g.][]{Gibson10}, which require a warmer background provided by HI itself, and HI Continuum Absorption features \citep[HICA,][]{Strasser04}, which instead require a continuum background. The most prominent features are seen towards W43. A detailed investigation of the properties of the ISM in SDPF1 is presented in \citet{Bernard10}.
By comparing the Hi-GAL maps with a 3D-extinction map, these authors find that the dust temperature is higher where the LOS intercepts the spiral arms. The relation between the dust emissivity spectral index $\beta$ and the dust temperature T$_{d}$ is instead analysed by \citet{Paradis10}, by combining Hi-GAL and IRAS data. Evidence for a T$_{d}-\beta$ anti-correlation is reported in their work, although the authors caution against possible spurious effects due to temperature mixing along the LOS. \begin{figure} \centering \includegraphics[width=8cm]{figure_1.eps} \caption{ Superposition of the $\textit{l}=30^{\circ}$ LOS with the \citet{Russeil03} Galactic model. The circles represent the star forming complexes used by the author to estimate the Galactic spiral arm pattern. The best-fit model has four arms. 1: Sagittarius-Carina arm; 2: Scutum-Crux arm; 1$'$: Norma-Cygnus arm; 2$'$: Perseus arm. A star denotes the position of the Sun. The $\textit{l}=30^{\circ}$ LOS is tangent to the Scutum-Crux arm. The short dashed line, which passes through the $\textit{l}=30^{\circ}$ LOS, is the expected departure from a logarithmic spiral arm observed for the Sagittarius-Carina arm. More information about the model can be found in \citet{Russeil03}.} \label{fig:Galactic_model} \end{figure} \begin{figure*} \centering \includegraphics[width=12cm]{figure_2.eps} \caption{Hi-GAL 500 $\mu$m data for SDPF1. The black circles represent W43, G29.944-0.04 and N49. The blue circles denote a sub-sample of the $\sim$75 molecular clouds found in SDPF1, as well as one of the most prominent IRDCs catalogued by \citet{Peretto09} and an example of a mid-IR bright and a mid-IR dark cloud studied by \citet{Battersby11} in these 2$\times$2 square degrees. Finally, the black-dotted circles indicate HI absorption features, a combination of HICA and HISA (see Section \ref{sec:l30_field} for details).} \label{fig:pmw_l30} \end{figure*} \section{IR and Ancillary radio data}\label{sec:infrared_ancillary} In this Section we describe the input IR data and the ancillary radio data used to trace the different gas phases. \subsection{IR data}\label{sec:IR_data} We consider IR imaging data at 7 different wavelengths covering the range 8 $\mu$m $\leq\lambda\leq$ 500 $\mu$m. From the GLIMPSE and MIPSGAL surveys we use the 8 $\mu$m and 24 $\mu$m data, respectively. The map resolution is 2 arcsec for GLIMPSE 8 $\mu$m and 6 arcsec for MIPSGAL 24 $\mu$m. Details of the data reduction can be found in \citet{Benjamin03} and \citet{Carey09} for GLIMPSE and MIPSGAL, respectively. At longer wavelengths, we use data from the Hi-GAL survey, which has mapped the whole Galactic Plane in five bands in the range 70 $\mu$m $\leq\lambda\leq$ 500 $\mu$m. The first Hi-GAL survey covers almost 280 square degrees of the inner Galaxy, with a $2^{\circ}$ wide strip centred on the Galactic Plane in the longitude range $\vert l\vert\leq 70^{\circ}$. The scanning strategy of the survey is organized in tiles of 2$\times$2 square degrees. During the \textit{Herschel} SDP, two Hi-GAL tiles were observed: the SDPF1 field studied in this work and another, also covering 2$\times$2 square degrees, centred on \textit{(l,b)} = (59$^{\circ}$, 0$^{\circ}$). The data have been taken with both the PACS \citep{Poglitsch10} and SPIRE \citep{Griffin10} instruments in parallel mode and are reduced with the ROMAGAL pipeline \citep{Traficante11}. PACS has observed the sky at 70 $\mu$m and 160 $\mu$m, while SPIRE at 250 $\mu$m, 350 $\mu$m and 500 $\mu$m.
The Hi-GAL spatial resolution is $\simeq10$ arcsec and $\simeq13.5$ arcsec for PACS, and 18, 24 and 34.5 arcsec for SPIRE \citep{Traficante11}. The zero-level offsets in the Hi-GAL maps are evaluated as described in \citet{Bernard10}. The seven maps are point-source subtracted in order to avoid contamination at high spatial frequencies. Point sources were identified and removed using two software tools, one tailored for Spitzer and the other for \textit{Herschel}. For the Spitzer data, i.e. IRAC 8 $\mu$m and MIPS 24 $\mu$m, we used the Spitzer Science Center APEX package \citep{Makovoz05} under the MOPEX tools{\footnote{http://irsa.ipac.caltech.edu/data/SPITZER/docs/dataanalysistools/tools/mopex/}}. For the \textit{Herschel} data, for which we had to build our own Point Spread Functions from the data themselves, we used ``Starfinder"{\footnote{http://www.bo.astro.it/StarFinder/paper6.htm}} \citep{Diolaiti00}. In both cases the removal is carried out in such a way that the residuals from the point source subtraction match the noise properties of the original images as closely as possible. The seven IR maps are shown in Figure \ref{fig:IR_maps}. \subsection{Ancillary data}\label{sec:ancillary_data} \subsubsection{Atomic hydrogen}\label{sec:atomic_column} The atomic gas phase (HI) is traced with the 21 cm line, which can be easily detected across the Galaxy \citep{Stil06}. We used the VLA Galactic Plane Survey (VGPS) data, covering the longitude region $18^{\circ}\leq l\leq67^{\circ}$ for $\vert b\vert\leq1.3^{\circ}$ up to $\vert b\vert\leq 2.3^{\circ}$ \citep{Stil06}. These data are part of the International Galactic Plane Survey (IGPS), which includes the VGPS, the Canadian Galactic Plane Survey (CGPS) and the Southern Galactic Plane Survey (SGPS), and covers $\sim$90 percent of the 21 cm line Galactic disk emission. The VGPS brightness temperature (\textit{T}$_{b}$ ) maps have an angular resolution of 1 arcmin with a velocity resolution of $1.56$ km s$^{-1}$. The survey covers the velocity range $-113\leq \mathrm{V_{LSR}}\leq 167$ km s$^{-1}$. The \textit{r.m.s.} noise per channel is $1.8$ K on average, depending on the location and velocity \citep{Stil06}. Since the VGPS 21 cm line data are continuum subtracted, a few pixels towards bright HII regions have very low (even negative) values, due to HI absorption of the strong continuum emission arising from the HII regions. In order to identify and flag these pixels, we have used the VGPS continuum data \citep{Stil06}. We masked the pixels in all velocity channels of the 21 cm data cube with continuum temperature T$_{c}\ge 50$ K. This threshold allows us to mask pixels towards bright HII regions without including the cold diffuse regions, which also appear as absorption features in the HI 21 cm data (see Section \ref{sec:missing_column}). In total, less than 1 percent of the pixels have been masked, the majority of them towards W43. The same pixels flagged in the 21 cm data cube were also flagged in the data cubes of the other gas tracers. \subsubsection{Molecular hydrogen}\label{sec:H2_column} The molecular gas (H$_{2}$) has no easily observable transitions under the conditions typical of molecular clouds, but it can be indirectly traced with carbon monoxide (CO) emission. H$_{2}$ is primarily traced by measuring the $J=1\rightarrow0$ rotational transition line of the most abundant CO isotope in the Galaxy, $^{12}$CO.
However, SDPF1 is characterized by the presence of many molecular clouds, IRDCs and HI absorption features (see Section \ref{sec:l30_field}). These potentially cold, dense regions are better traced with the $J=1\rightarrow0$ line emitted by the $^{13}$CO isotope (see Section \ref{sec:molecular_column}). The $J=1\rightarrow0$ lines of $^{12}$CO and $^{13}$CO used in this work were observed with the Massachusetts-Stony Brook Galactic plane CO survey \citep[UMSB,][]{Sanders86} and the Galactic Ring Survey \citep[GRS,][]{Jackson06}, respectively. Both surveys were carried out with the Boston-FCRAO 14m telescope. The UMSB survey covers the range $8^{\circ}\leq l\leq 90^{\circ}$, $\vert b\vert\leq 1.05^{\circ}$. The spatial resolution is 44 arcsec, even though the sky was sampled with a 3 arcmin step. The pixel size of the map is 3 arcmin, which undersamples the telescope beam and does not preserve the spatial information below this scale. In order to alleviate the effect of the undersampling, we first regridded the data to 44 arcsec, and then assigned each pixel value in the initial map to a pixel in the new map. We then interpolated consecutive pixel values using a cubic interpolation routine, and finally smoothed and re-binned the map to the original 3 arcmin spatial resolution. This approach ensures better continuity in surface brightness across the map. However, we notice that the arcminute-scale structures present in the data are smeared out at our final working resolution of 14.8 arcmin (see Section 4.1), and the results of the analysis do not vary significantly if we do not apply the cubic interpolation step described above. The spectral resolution is 1 km s$^{-1}$ with a \textit{r.m.s.} sensitivity of 0.4 K per velocity channel. The GRS has mapped the first quadrant of the Milky Way for $18^{\circ}\leq l\leq 55.7^{\circ}$, $\vert b\vert\leq 1^{\circ}$. The sensitivity is $\leq$0.4 K per velocity channel, the spectral resolution is 0.212 km s$^{-1}$ and the angular resolution is 46 arcsec, sampled on a 22 arcsec grid. Since we use both CO isotopes to evaluate the molecular hydrogen column density (see Section \ref{sec:molecular_column}), we rebinned the $^{12}$CO and $^{13}$CO data onto the same grid and, using a Gaussian kernel, convolved both datasets to 9 arcmin. The convolved maps have a pixel size of 3 arcmin, which is the same spatial resolution as the $^{12}$CO map and allows us to properly sample the kernel. \subsubsection{Ionized hydrogen} The Warm Ionized Medium (WIM) is generally traced using either free-free continuum emission or optical (e.g. H$\alpha$) and radio (RRL) transition lines. In the Introduction, we have already discussed the limitations of using free-free emission in the context of inversion works. Both the H$\alpha$ line and RRLs are part of the cascade of recombination transitions. However, the H$\alpha$ line is emitted when a transition occurs between relatively low quantum levels, i.e. from $n$ = 3 to 2, while RRLs correspond to transitions between high quantum levels, with $n$ typically $>$ 40. With respect to H$\alpha$, RRLs have the advantage of not being affected by dust absorption \citep[e.g.][]{Dickinson03}, thus providing a {\em{clean}} view of the ionized gas across the Galaxy. In this work, to trace HII\ and evaluate the corresponding column density, we make use primarily of RRL data. In particular, we use RRLs corresponding to three different transitions, i.e. H166$\alpha$, H167$\alpha$ and H168$\alpha$.
These have been observed by the HI Parkes All-Sky Survey \citep[HIPASS,][]{Staveley-Smith96} and the associated Zone of Avoidance (ZOA) survey, aimed at detecting galaxies in the local Universe. The integration time of the ZOA survey, 2100 s per beam, is five times higher than that of HIPASS. The data from the two surveys have been combined and the three RRLs have been stacked to achieve a final \textit{r.m.s.} sensitivity of 3 mK/beam/channel. The beam is 14.8 arcmin, the pixel dimension is set to 4 arcmin and the spectral resolution is 20.0 km s$^{-1}$ \citep{Alves11}. Details of the data reduction are given in \citet{Alves10}. We complement the information provided by the RRL data, especially in terms of the electron density of the observed ionized gas, with free-free radio continuum data. The free-free data for our region are available from the observations made at 5 GHz (i.e. 6 cm) with the Parkes 64 m telescope \citep{Haynes78}, digitised by the Max Planck Institute in Bonn{\footnote{http://www3.mpifr-bonn.mpg.de/survey.html}}. The survey covers the range $-170^{\circ}\leq l\leq40^{\circ}$, $\vert b\vert\leq1.5^{\circ}$ at an angular resolution of 4.1 arcmin and a sensitivity of around 10 mJy. We apply baseline/offset removal (of 0.1 K and 1 K, respectively) and estimate/subtract the synchrotron contribution according to Figure 7 in \citet{Paladini05}. \section{The analysis method}\label{sec:model_data} In the following we describe the inversion method used to disentangle the dust properties in each gas phase and at different Galactocentric positions. The model requires a subdivision of the Galaxy into Galactocentric bins (hereafter referred to as \textit{rings}) and the evaluation of the gas column density of each phase (Section \ref{sec:dataset}). \subsection{The 3D-inversion model}\label{sec:model} We follow the prescription of the inversion model of \citet{Bloemen90} by assuming that dust emissivities vary with Galactocentric distance, while remaining constant within fixed Galactocentric rings. This working hypothesis, although not strictly true, has some observational support. For instance, \citet{Dunham11} find, by comparing the emission at 1.1 mm with that of different transitions ((1,1), (2,2) and (3,3)) of NH$_{3}$, that the properties of cold clumps seem to vary almost exclusively as a function of Galactocentric radius. In our inversion model, we further assume a single dust temperature and gas-to-dust mass ratio, hence a constant emissivity value per H atom, for each gas phase and Galactocentric ring. We emphasise that this hypothesis implies that dust emits (or absorbs) coherently within each gas phase and Galactocentric ring, thus ignoring local effects which may alter the emissivities along specific LOS; this is an intrinsic limitation of the model.
Under these assumptions, however, we can express the IR emission at a wavelength $\lambda$ in a pixel $j$, $I^{j}(\lambda)$, as a linear combination of gas column densities and dust emissivities associated with each gas phase in each ring $i$: \begin{centering} \begin{eqnarray}\label{eq:decomposition} I^{j}(\lambda) & = & \sum_{i=1}^{n}\epsilon_{\mathrm{H_{I}}}(\lambda,R_{i})N_{\mathrm{H_{I}}}^{j}(R_{i}) + \epsilon_{\mathrm{H_{2}}}(\lambda,R_{i})N_{\mathrm{H_{2}}}^{j}(R_{i}) + \nonumber \\ & & + \epsilon_{\mathrm{HII}}(\lambda,R_{i})N_{\mathrm{HII}}^{j}(R_{i}) \end{eqnarray} \end{centering} In the expression above, $\epsilon_{\alpha}$ are the dust emissivities (in units of MJy sr$^{-1}$ ($10^{20}$ H atoms cm$^{-2}$)$^{-1}$) for the three gas phases ($\alpha=(\mathrm{H_{I}}, \mathrm{H_{2}}, \mathrm{HII})$) in each ring $R_{i}$, and $N_{\alpha}^{j}$ are the corresponding column densities in pixel $j$. The summation is taken over the $n$ rings adopted for the decomposition. Once the gas column densities have been evaluated for each ring, the model allows us to disentangle the fraction of the integrated IR emission which is generated within each ring and gas phase, and to estimate the corresponding dust emissivities by solving Equation \ref{eq:decomposition} with a least-squares fit analysis. The column density map of each gas component and ring is convolved with a Gaussian kernel to the lowest available resolution, 14.8 arcmin (the RRL data resolution), and the maps are rebinned with a pixel scale of 4 arcmin. We note that this spatial resolution is four times higher than in previous inversion works ($\simeq1^{\circ}$). Due to the reduced size of the region under investigation (4 square degrees), the number of available pixels that we can use to estimate the dust emissivities is only $\sim 700$ \citep[i.e. a factor $\sim10^{2}$--$10^{3}$ lower with respect to, e.g.,][]{Paladini07,Planck_Marshall11}. Since the error bars associated with the emissivities in our model are estimated with a bootstrap procedure, a small number of samples increases the random sampling errors arising from the bootstrap method itself \citep{Efron79}. In Section \ref{sec:no_HII} we will show that, if we do not include the RRL data in the inversion matrix, we can afford to work with a smaller pixel size, although at the expense of not being able to correctly account for the contribution of dust emission associated with the ionized medium. Notably, the IDL{\footnote{Interactive Data Language}} code which evaluates the gas column densities and solves Equation \ref{eq:decomposition} is based on the code of \citet{Paladini07}. The code has been fully re-designed and now takes only $\simeq 4$ minutes to complete the inversion of a 2$^{\circ}\times$2$^{\circ}$ Hi-GAL tile, rather than the $\simeq$ 100 minutes required by the old version. The test has been performed on an Intel Core 2 Duo at 2.2 GHz. This code optimization turned out to be critical for this work, as it allowed us to test a large number of model configurations. \subsection{Selection of Galactocentric rings}\label{sec:ring_selection} Figure \ref{fig:radius_vs_intensity} presents the median brightness temperature \textit{T}$_{b}$ values of each gas tracer as a function of Galactocentric radius. Radial velocities are converted into Galactocentric distances by applying the \citet{Fich89} rotation curve and by assuming that circular motion is the dominant component of the velocity.
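To make this conversion concrete, the sketch below translates LSR velocities into Galactocentric radii under the same circular-motion assumption. It is written in Python purely for illustration (our pipeline is in IDL) and, for simplicity, adopts a flat rotation curve with an assumed local circular speed instead of the \citet{Fich89} curve used in the analysis; the array names are hypothetical. \begin{verbatim}
import numpy as np

R0 = 8.5        # Sun-Galactic centre distance [kpc], as in Section 2
THETA0 = 220.0  # local circular speed [km/s]; an assumed value

def galactocentric_radius(v_lsr, glon_deg):
    """R [kpc] of gas emitting at v_lsr [km/s] towards longitude glon_deg.

    Assumes purely circular motion, b ~ 0 (so cos b is neglected) and a
    flat rotation curve: v_lsr = THETA0 * (R0/R - 1) * sin(l).
    """
    sinl = np.sin(np.radians(glon_deg))
    return R0 * THETA0 * sinl / (v_lsr + THETA0 * sinl)

# assign each velocity channel of a cube to one of the four rings
ring_edges = np.array([4.25, 5.6, 7.4, 8.5, 16.0])  # kpc (Section 4.2)
channels = np.arange(-113.0, 167.0, 1.56)           # VGPS velocity axis
ring_index = np.digitize(galactocentric_radius(channels, 30.0),
                         ring_edges) - 1   # 0..3; -1 or 4 fall outside
\end{verbatim} For the tangent point of the $\textit{l}=30^{\circ}$ LOS this relation gives R $=$ R$_{0}\sin l=4.25$ kpc, matching the inner ring boundary adopted below.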
Under this assumption, the emission in a given radial velocity interval maps onto a single Galactocentric bin. We also note that, since the Galactocentric distance is a function of both the radial velocity and the longitude (see the \citet{Fich89} model), some velocity channels can have part of their emission belonging to one ring and part belonging to the adjacent ring. The longitude at which the separation occurs is the one for which the Galactocentric distance equals the ring edge. In order to allow comparison among the different data sets, all data cubes have been rebinned to the RRL velocity resolution, i.e. 20.0 km s$^{-1}$. In addition, the mean profiles of each gas phase have been normalized to the peak of the HI profile in the first Galactocentric ring. From Figure \ref{fig:radius_vs_intensity}, we see that HI has two prominent peaks, one at R$\simeq$ 5 kpc and the other at R$\simeq$7 kpc. There is no clear correlation between HI emission and the spiral arm pattern of our Galaxy \citep[e.g.][]{Gibson05}; however, the first peak is likely due to emission from the Scutum-Crux arm, being located at the tangent point (see Figure \ref{fig:Galactic_model}). HI data can be used to trace the Galactic edge up to distances of $\sim20-25$ kpc from the Galactic centre, depending on the LOS \citep[e.g.,][]{Weaver74}. In our case, we truncate the HI data at R=16 kpc since, beyond this value, the S/N drops significantly. The molecular gas peaks at approximately R$\simeq$4.5 kpc and R$\simeq$8 kpc. The first peak coincides with the Scutum-Crux arm. The second peak in the CO data is produced by H$_{2}$ located either in the local Sagittarius-Carina arm or, alternatively, in the interarm region between the Sagittarius-Carina arm (at its second passage through the $\textit{l}=30^{\circ}$ LOS) and the Perseus arm. It is commonly accepted that interarm regions, despite being characterized by an average gas density $\sim3.6$ times lower than in the spiral arms \citep{Ferriere01}, are not devoid of molecular gas. We truncate both the $^{12}$CO and $^{13}$CO data at R$\simeq$8.5 kpc since the H$_{2}$ column density is known to drop off rapidly beyond R=$R_{\sun}$ \citep[e.g.][]{Combes91,Ferriere01}. The RRL data also peak around R$\simeq4.5-5$ kpc, in the Scutum-Crux arm region. This arm contains 80 percent (i.e. 23 out of a total of 29) of the known HII regions in SDPF1 \citep{Anderson09}, including the W43 complex and the bright source G29.944-0.04. This trend mimics the overall radial distribution of the ionized medium across the Galaxy \citep{Ferriere01}. The RRL signal decreases exponentially in the outer Galaxy, being consistent with pure noise at R$\simeq$8.5 kpc. To perform the 3D-inversion, we must subdivide the Galaxy into Galactocentric rings by minimizing their cross-correlations. In previous analyses, this step has been performed following a purely mathematical approach, i.e. using the eigenvalues of the cross-correlation matrix. In our case, we use the physical properties of the gas tracers to guide our decomposition. In Figure \ref{fig:radius_vs_intensity}, four distinct regions of emission can be clearly identified. The first region (Ring 1, 4.25 kpc $\leq$R$\leq$ 5.6 kpc) is characterized by a peak of emission in all tracers (HI, $^{12}$CO and $^{13}$CO, RRLs) and defines the edges of the Scutum-Crux arm. The second region (Ring 2, 5.6 kpc $\leq$R$\leq$ 7.4 kpc) presents a prominent feature in HI.
The third region (Ring 3, 7.4 kpc $\leq$R$\leq$ 8.5 kpc) contains significant CO emission and cold HI features (HICA and HISA, see Section \ref{sec:missing_column}), while the HII emission is entirely accounted for by a diffuse component. Finally, the fourth region (Ring 4, 8.5 kpc $\leq$R$\leq$ 16 kpc) corresponds to the outer Galaxy and is visible in HI emission only. \begin{figure*} \centering \includegraphics[width=12cm]{figure_3.ps} \caption{Median brightness temperature, as a function of Galactocentric radius, for HI (solid line), $^{12}$CO (dashed line), $^{13}$CO (dotted line) and RRLs (dash-dotted line), respectively. Each datacube has been rebinned to the same velocity resolution, namely 20.0 km s$^{-1}$. For each gas phase and velocity channel, a median brightness temperature has been computed. All datasets are normalized to the amplitude of the first HI peak. The boundaries of the Galactocentric rings are R=[4.25, 5.6, 7.4, 8.5, 16.0] kpc.} \label{fig:radius_vs_intensity} \end{figure*} \subsection{Column density evaluation}\label{sec:dataset} In the following we describe how we evaluate the column density of each gas phase using the tracers described in Section \ref{sec:ancillary_data}. \subsubsection{Atomic hydrogen column density}\label{sec:HI_data} Due to the blending of cold and warm HI in SDPF1, the 21 cm line brightness temperature, \textit{T}$_{b}$ , in each ring cannot be converted into an HI column density, N(HI), with a fixed spin temperature, \textit{T}$_{s}$ , following the standard approach of previous inversion works \citep[see e.g.][]{Paladini07,Planck_Marshall11}. In fact, since in the cold neutral medium (CNM) 10 K $\leq T_{s}\leq100$ K, while in the warm neutral medium (WNM) $T_{s}$ ranges from $\sim500$ K to $\sim5000$ K \citep[e.g.][]{Strasser04,Heiles03}, the assumption of a constant $T_{s}$ is too simplistic. However, the HI column density can be evaluated by assuming a single emission component in the \textit{optically thin} limit \citep{Dickey90}: \begin{equation}\label{eq:hi_column_opt_thin} \mathrm{N(HI)}_{\tau << 1} \simeq C_{0} \int{T_{b}(\mathrm{v}) d\mathrm{v}} \qquad 10^{20} \ \mathrm{H\ atoms / cm}^{2} \end{equation} where $C_{0}=1.823\times 10^{18}$ cm$^{-2} \ \mathrm{(K\ km\ s^{-1})}^{-1}$ and \textit{T}$_{b}$ $(\mathrm{v})$ is the observed 21 cm line brightness temperature at velocity $\mathrm{v}$. The integral is taken over the ensemble of velocity channels corresponding to each ring. The integrated \textit{T}$_{b}$ map in Ring 3 shows offsets around \textit{l}=29.5$^{\circ}$ and \textit{l}=30.8$^{\circ}$, at the longitudes where some velocity channels have their emission split between Ring 3 and Ring 2. These offsets are also present in Ring 2. However, the steps amount to only a few K and induce a column density step of $\simeq 1\times 10^{20}\ \mathrm{H\ atoms / cm}^{2}$ between the two regions, lower than both the mean and the \textit{r.m.s.} value of the map ($\simeq 20\times 10^{20}\ \mathrm{H\ atoms / cm}^{2}$ and $\simeq 2.5\times 10^{20}\ \mathrm{H\ atoms / cm}^{2}$, respectively); in Ring 2 they are not visible (see Figure \ref{fig:HI_column_density}). The HI column density evaluated under this hypothesis could underestimate the ``true" column density by a factor of 1.3 -- 1.5, depending on the location \citep{Strasser04}. As a simple test, we rescaled the HI column density in each ring by a fixed factor of 1.3.
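For illustration, a minimal sketch of the two column density estimators used in this subsection follows (Python for illustration, with hypothetical array names): the first function implements the optically thin relation above together with the continuum-based masking of Section \ref{sec:atomic_column}, and the second the standard isothermal (fixed \textit{T}$_{s}$) relation used for the comparison tests described next. \begin{verbatim}
import numpy as np

C0 = 1.823e18   # cm^-2 (K km/s)^-1
DV = 1.56       # VGPS channel width [km/s]

def nhi_thin(tb_cube, ring_channels, tc_map, tc_max=50.0):
    """Optically thin N(HI) [10^20 cm^-2] for one ring.

    tb_cube[channel, y, x]: continuum-subtracted 21 cm cube [K];
    tc_map: VGPS continuum map [K], used to flag bright HII regions.
    """
    col = C0 * tb_cube[ring_channels].sum(axis=0) * DV / 1e20
    col[tc_map >= tc_max] = np.nan   # masked pixels (Section 3.2.1)
    return col

def nhi_isothermal(tb_cube, ring_channels, ts=160.0):
    """Fixed-Ts variant: N(HI) = -C0 * Ts * int ln(1 - Tb/Ts) dv."""
    tb = np.clip(tb_cube[ring_channels], None, ts - 1.0)  # keep log finite
    tau = -np.log(1.0 - tb / ts)                          # optical depth
    return C0 * ts * tau.sum(axis=0) * DV / 1e20
\end{verbatim}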
With this rescaling, the estimated emissivities do not change appreciably from the results shown in Section \ref{sec:results_discussion}, most likely because the regions in which the single-component, optically thin assumption fails have their own spatial structure along the LOS. Since in previous inversion studies the HI column density was evaluated by assuming a fixed spin temperature, we compared the HI column densities and the dust emissivities of the three phases estimated with the two methods: the optically thin limit and the assumption of a fixed spin temperature. Setting a unique value for the spin temperature is not straightforward. However, considering that \textit{T}$_{s}$ has to be higher than the observed \textit{T}$_{b}$ , we can fix \textit{T}$_{s}$ at the highest \textit{T}$_{b}$ value measured along the LOS \citep[e.g.][]{Johannesson10}. In SDPF1, \textit{T}$_{b}$ peaks at 160 K. We then adopt this value to compute N(HI) for all pixels. In doing so, we need to remember that this operation can lead to an overestimate of \textit{T}$_{s}$ in regions far from the peak. The total HI column density estimated with this approach is $\simeq30$ percent higher than the value obtained in the optically thin limit, with peaks of $\sim80$ percent for pixels towards very bright HI emission. Interestingly, such a column density increase is accompanied by a change in the morphology of the HI emission in the rings. We also ran a series of tests adopting a spin temperature in the range $160\leq T_{s}\leq250$ K, following \citet[][]{Planck_Marshall11}. All these tests show that the dust emissivities do not change significantly over this temperature range. Median emissivity differences (over all rings) between the isothermal and optically thin models in HI, H$_{2}$ and HII are, respectively, 55, 2 and 8 percent for \textit{T}$_{s}$ = 160 K, and 29, 3 and 6 percent for \textit{T}$_{s}$ = 250 K. In addition, the dust temperatures derived from the emissivities are the same for the two methods, as is the convergence of the code in the different rings (see Section \ref{sec:results_discussion}); these are the main results of this work. Neither model defines the temperature distribution along the LOS accurately, but they are useful for describing a limiting case (the optically thin assumption) or a weighted average (the spin temperature assumption) when obtaining column densities. At the same time, assuming the optically thin condition does not require the evaluation of the optical depth or of the spin temperature along the LOS. It provides a well-defined lower limit to the total HI column density, despite the fact that it cannot correctly reproduce the amount of gas along the LOS in cold, dense regions. We therefore decided to apply Equation \ref{eq:hi_column_opt_thin} without any further assumptions. \subsubsection{Molecular hydrogen column density}\label{sec:molecular_column} The H$_{2}$ column density is evaluated in the Local Thermodynamic Equilibrium (LTE) approximation by assuming a constant $[\textrm{CO}/\textrm{H}_{2}]$ ratio and by using both the $^{12}$CO and $^{13}$CO isotopes. In the denser and colder parts of molecular clouds, $^{12}$CO becomes optically thick while $^{13}$CO can still be considered optically thin. On the other hand, in the external regions of the clouds $^{12}$CO is optically thin but the $^{13}$CO emission is too weak to be detected. Thus, we use $^{12}$CO where only this isotope is observed, and both $^{13}$CO and $^{12}$CO otherwise.
We emphasise that this approach, although it allows us to better recover the molecular hydrogen column density, is nevertheless affected by some limitations. Notably, CO lines can be self-absorbed, and H$_{2}$ cannot be traced in optically thick regions \citep{Pineda08}. For each velocity channel, we searched for pixels where both $^{12}$CO and $^{13}$CO are detected following these steps (see the sketch below): \begin{enumerate} \item[1.] we compute the median value over all the pixels; \item[2.] we compute the sigma ($\sigma$) with respect to the median; \item[3.] we isolate the pixels with a value greater than the median value plus 2.5$\sigma$ in both the $^{12}$CO and $^{13}$CO maps; \item[4.] we flag the pixels isolated in both maps as common regions. \end{enumerate} The sigma evaluated with respect to the median, rather than the mean, ensures more stability against fluctuations induced by the signal, especially in channels where bright star forming regions strongly contribute to the overall emission. The 2.5$\sigma$ threshold can, in some cases, lead to the inclusion of noisy outliers. A value higher than 2.5$\sigma$, however, results in missing pixels where $^{13}$CO emission is clearly seen, while a lower value leads to the identification of too many outliers. With a threshold of 2.5$\sigma$ we identify an average of 5 percent of common pixels in each velocity channel, concentrated in the densest regions. Assuming only residual white noise, the outliers are randomly distributed across the map, so the probability that the same outliers are falsely detected in both the $^{12}$CO and $^{13}$CO maps is $<0.1$ percent. In pixels where only $^{12}$CO is observed, \textit{T}$_{b}$ is converted into a molecular hydrogen column density N(H$_{2}$) using the optically thin hypothesis. Then, for a given ring and in each pixel, N(H$_{2}$) is proportional to $\int{T_{b}(\mathrm{v})}d\mathrm{v}$, where the integration is taken over the velocity range corresponding to that ring. We set the conversion factor $X_{\mathrm{CO}}$, that is, the ratio of the molecular hydrogen column density (in units of $10^{20}$ H\ atoms cm$^{-2}$) to the velocity-integrated CO intensity (K km s$^{-1}$), equal to 1.8 $\times\ 10^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$, in agreement with the value recommended by \citet{Bolatto13}. Recent studies suggest that $X_{\mathrm{CO}}$ may vary with metallicity, in particular increasing with Galactocentric distance by a factor of 5 -- 10, with an estimated value at R$\simeq 1.5$ kpc of $X_{\mathrm{CO}}=0.3$ \citep[e.g.,][]{Strong04,Abdo10}. If this is true, our working hypothesis of $X_{\mathrm{CO}}$ = 1.8 has the effect of producing an overestimate of the effective molecular gas column densities. When $^{13}$CO is also observed, in order to evaluate its column density, $\textrm{N}(^{13}\textrm{CO})$, we need to know the excitation temperature $T_{ex}$ and the $^{13}$CO optical depth $\tau_{13}$ at each velocity $v$ \citep[][]{Duval10,Pineda10}. The details of this analysis are given in Appendix \ref{sec:appendix}. N($^{13}$CO) is then converted into N(H$_{2}$) by applying \citep{Stahler05} \begin{equation} \mathrm{N({H_{2}})}=7.5\times 10^{5}\ \mathrm{N(^{13}CO)} \qquad (10^{20} \mathrm{atoms}\ \mathrm{cm}^{-2}) \end{equation} A total of $\sim$27 percent of the pixels in the column density maps are observed in both the $^{12}$CO and $^{13}$CO maps.
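The sketch below illustrates the common-pixel selection (steps 1--4 above) and the assembly of N(H$_{2}$) from the two isotopes. It is a simplified, illustrative Python version: \texttt{lte\_n13co} is a hypothetical stand-in for the LTE evaluation of N($^{13}$CO) from $T_{ex}$ and $\tau_{13}$ detailed in Appendix \ref{sec:appendix}, and the per-ring bookkeeping is condensed. \begin{verbatim}
import numpy as np

X_CO = 1.8   # 10^20 cm^-2 (K km/s)^-1, as adopted above

def common_mask(t12, t13, nsigma=2.5):
    """Pixels detected in both isotopes for one channel (steps 1-4)."""
    def above(t):
        med = np.median(t)
        sig = np.sqrt(np.mean((t - med) ** 2))  # sigma w.r.t. the median
        return t > med + nsigma * sig
    return above(t12) & above(t13)

def nh2_ring(t12_cube, t13_cube, dv12, lte_n13co):
    """N(H2) [10^20 cm^-2] for one ring from both CO isotopes."""
    w12 = t12_cube.sum(axis=0) * dv12        # K km/s
    nh2 = X_CO * w12                         # optically thin 12CO
    common = np.zeros(w12.shape, dtype=bool)
    for t12, t13 in zip(t12_cube, t13_cube):  # channel by channel
        common |= common_mask(t12, t13)
    # lte_n13co: hypothetical routine returning N(13CO) in cm^-2
    nh2[common] = 7.5e5 * lte_n13co(t13_cube)[common] / 1e20
    return nh2
\end{verbatim}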
For $\sim$ 20 percent of these common pixels, the estimated column density is $\geq$20 percent higher than the one calculated with the $^{12}$CO data only, confirming the necessity of combining the information from the two isotopes when available. \subsubsection{Ionized hydrogen column density}\label{sec:ionized_column} At low Galactic latitudes, the Warm Ionized Medium (WIM) is characterized by a thin layer which consists of both individual HII regions and diffuse emission \citep{Paladini05}. RRLs allow us to trace these two components simultaneously. The RRL brightness temperature peaks in the Scutum-Crux ring, where the majority of the HII regions in SDPF1, in particular W43, are located (Figure \ref{fig:radius_vs_intensity}), and is significantly lower in the other rings. The ionized hydrogen column density, N(HII), is proportional to the RRL brightness temperature, as we show in the following; therefore, the HII column density also peaks in Ring 1. The HII column density can be defined as the mean electron density $\langle n_{e}\rangle$ integrated along the LOS: \begin{equation} \mathrm{N(HII)} = \int {\langle n_{e}\rangle} ds \end{equation} The quantity above can be estimated from the Emission Measure (EM), if we assume a standard effective electron density, $n_{eff}$, along each LOS: \begin{equation}\label{eq:EM_Ne} \textrm{EM}=\int n_{e}^{2}ds=n_{eff}\times\int \langle n_{e}\rangle ds=n_{eff}\times \mathrm{N(HII)} \end{equation} EM is evaluated in turn from the integral of the RRL line temperature $T_{L}$ \citep{Alves10}: \begin{equation}\label{eq:TL_EM} \int T_{L}d\nu=1.92\times10^{3}T_{e}^{-1.5}\ \textrm{EM} \end{equation} where $T_{e}$ is the mean electron temperature and $d\nu$ is the frequency interval in kHz. The continuum emission brightness temperature \textit{T}$_{b}$ is also proportional to EM \citep{Mezger67}: \begin{equation}\label{eq:Tb_EM} T_{b} = 8.235 \times 10^{-2} a(T_{e})T_{e}^{-0.35}\nu_{\mathrm{GHz}}^{-2.1}(1+0.08)\ \mathrm{EM} \end{equation} where the factor $(1+0.08)$ takes into account the additional contribution to \textit{T}$_{b}$ from helium, and $\nu_{\mathrm{GHz}}$ is the frequency expressed in GHz. Combining this equation with Equation \ref{eq:TL_EM} we obtain: \begin{equation}\label{eq:TL_Tb} \frac{\int{T_{L}}dV}{T_{b}} = 6.985\times10^{3}\frac{1}{a(T_{e})}\frac{1}{1+n\mathrm{(He)}/n\mathrm{(H)}}T_{e}^{-1.15}\nu_{\mathrm{GHz}}^{1.1} \end{equation} In the expression above, $a$(\textit{T}$_{e}$ ) is a slowly varying function of \textit{T}$_{e}$ and $dV$ is expressed in km s$^{-1}$. The factor $1+n\mathrm{(He)}/n\mathrm{(H)}$ is equivalent to the factor (1+0.08) in Equation \ref{eq:Tb_EM}. In order to derive N(HII) from Equations \ref{eq:EM_Ne} to \ref{eq:Tb_EM}, we need to know both \textit{T}$_{e}$ and $n_{eff}$\ for each of the physical regimes we are considering, that is, HII regions and the diffuse medium. At the RRL data angular resolution (14.8 arcmin), each of the 29 HII regions in SDPF1 falls within a single pixel, except for the W43 complex. Therefore, apart from W43, which we treat as a special case, we assume that the HII emission in the field originates exclusively in the diffuse component, thus neglecting the contribution from individual HII regions. \begin{itemize} \item\textit{Mean electron temperature:} For HII regions, $T_{e}$ is known to increase with Galactocentric radius \citep[e.g.,][]{Shaver83, Paladini04}, varying in the range 4000 to 8000 K.
\citet{Alves11} have recently shown that the diffuse ionized gas along the Galactic Plane has an electron temperature similar to that of HII regions. Therefore, for simplicity, we assume a constant $T_{e}$ for both W43 and the diffuse HII component and, following \citet{Alves10}, we take $T_{e}$ = 5500 K.\\ \item\textit{Effective electron density:} In general, $n_{eff}$\ strongly depends on the physical conditions of a given environment. Previous inversion works have often used values close or equal to 10 cm$^{-3}$ \citep[e.g.][]{Paladini07,Planck_Marshall11}. This value is in agreement with measurements in compact HII regions. In the diffuse ionized medium, $n_{eff}$\ is typically lower, i.e. $0.03<n_{eff}<0.08$ cm$^{-3}$ \citep[e.g.,][]{Haffner09}, especially at intermediate and high Galactic latitudes. To estimate $n_{eff}$ for both the diffuse medium in SDPF1 and W43, we use the Parkes 5 GHz data. To this end, we note that the observed free-free emission is proportional to EM and follows Equation \ref{eq:Tb_EM}. If we assume that $\langle n_{e}\rangle=n_{eff}$, it follows that \begin{equation}\label{eq:EM_free_free} n_{eff} = \left(\frac{T_{b}} {8.235} \frac{10^{2}}{a(T_{e})} T_{e}^{0.35}\nu_{\mathrm{GHz}}^{2.1}\frac{1}{(1+0.08)}\frac{1}{d}\right)^{1/2} \end{equation} where $d$ is the linear size of the parcel of emitting gas. We emphasize that Equation \ref{eq:EM_free_free} holds true only if the electron density does not vary significantly along the LOS. To ensure that this condition is satisfied, we solve Equation \ref{eq:EM_free_free} independently for W43 and the diffuse medium: \begin{itemize} \item[1.] \textit{W43 complex}: In this case, the assumption that the emission is dominated by the star forming complex, with minor contributions from the foreground/background emission, is motivated by the fact that \textit{T}$_{L}$ towards the HII region is several orders of magnitude higher than in its surroundings. In the RRL data cube, W43 is at V$_{LSR}$=107 km s$^{-1}$, which corresponds to a Galactocentric radius of $R\simeq$ 4.65 kpc. The complex is known to be on the near side of the Galaxy \citep{Wilson70}, so its distance from the Sun is D$\simeq$ 5.5 kpc, in agreement with \citet{Bally10}. We measure an angular size of $\simeq29.1$ arcmin which, at a distance of 5.5 kpc, is equivalent to an effective size d$\simeq$47.8 pc. Assuming a spherical geometry, we adopt this value to evaluate $n_{eff}$\ from Equation \ref{eq:EM_free_free}. For \textit{T}$_{b}$ = $T_{b, W43peak}$, we obtain $n_{eff, W43peak}=43.8\ \textrm{cm}^{-3}$. \item[2.] \textit{Diffuse emission}: For the diffuse component, the definition of the edges of the emitting region is less straightforward. In Section 4.2, we have seen that the HII emission detected through RRLs drops dramatically beyond $R\simeq$ 8.5 kpc. At the far side of the Galaxy, the $R\simeq$ 8.5 kpc circle intersects the $\textit{l}=30^{\circ}$ LOS at a distance of D$\simeq$ 14.7 kpc from the Sun. We take this as the outer boundary of the emitting region. To derive $n_{eff}$, we first mask the pixels across the W43 complex, and then compute the median of the values obtained by applying Equation \ref{eq:EM_free_free}. The resulting effective electron density is $n_{eff, diffuse}=0.39\ \textrm{cm}^{-3}$. \end{itemize} At this stage, in order to obtain a continuous solution, we scale $n_{eff, W43peak}$ to $n_{eff, diffuse}$.
For this purpose, we use a 2D Gaussian profile with a full width at half maximum equal to the measured angular size of W43. Having estimated $n_{eff}$, we derive N(HII) by combining Equations \ref{eq:EM_Ne}, \ref{eq:TL_EM} and \ref{eq:TL_Tb}. \end{itemize} \subsubsection{Column density distribution}\label{sec:total_density} Comparing the contribution of each ring to the total gas column density, we find that most of the gas ($\sim$ 65 percent) is located in Ring 1. Ring 2 accounts for another $\sim$ 20 percent, while the remaining $\sim$15 percent is distributed between Rings 3 and 4. The column density maps for each gas phase and ring are shown in Figures \ref{fig:HI_column_density}, \ref{fig:H2_column_density} and \ref{fig:HII_column_density}. \section{Statistical correlation analysis}\label{sec:pearsons} The inversion model is based on the hypothesis that the total IR emission at a given wavelength can be decomposed into dust emission associated with different gas phases and Galactocentric rings. This hypothesis implies that the integrated IR dust emission at each given wavelength linearly correlates with the dust emission associated with each gas component and Galactocentric ring, and that the degree of correlation depends on the contribution of dust associated with each gas component in each ring to the total IR emission. A low degree of correlation can be the natural result of a gas phase being either less abundant (hence with a lower column density) or intrinsically less emissive with respect to the other phases. Two caveats of this approach have to be kept in mind: \begin{table*} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multicolumn{14}{c}{\textit{Pearson's coefficient}} \\ & & & & & & & & & & & & &\\ \hline \textit{Band} & \multicolumn{4}{c}{\textit{Ring 1}} & \multicolumn{4}{c}{\textit{Ring 2}} & \multicolumn{4}{c}{\textit{Ring 3}} & \textit{Ring 4} \\ ($\mu$m) & \multicolumn{4}{c}{(4.25-5.6 kpc)} & \multicolumn{4}{c}{(5.6-7.4 kpc)} & \multicolumn{4}{c}{(7.4-8.5 kpc)} & (8.5-16.0 kpc) \\ \\ \textit{gas phase} & & HI & H$_{2}$ & HII & & HI & H$_{2}$ & HII & & HI & H$_{2}$ & HII & HI\\ \hline \\ 8 & & 0.51 & 0.74 & 0.82 & & 0.38 & 0.52 & 0.41 & & -0.44 & 0.28 & -0.01 & -0.10\\ \\ 24 & & 0.27 & 0.52 & 0.65 & & 0.21 & 0.34 & 0.28 & & -0.44 & 0.14 & -0.04 & -0.06\\ \\ 70 & & 0.11 & 0.51 & 0.67 & & 0.19 & 0.31 & 0.40 & & -0.53 & 0.10 & 0.02 & -0.02\\ \\ 160 & & 0.42 & 0.79 & 0.86 & & 0.39 & 0.55 & 0.53 & & -0.46 & 0.21 & 0.07 & -0.08\\ \\ 250 & & 0.47 & 0.85 & 0.87 & & 0.45 & 0.61 & 0.56 & & -0.41 & 0.21 & 0.15 & -0.04\\ \\ 350 & & 0.48 & 0.86 & 0.87 & & 0.47 & 0.62 & 0.57 & & -0.41 & 0.22 & 0.14 & -0.06\\ \\ 500 & & 0.50 & 0.86 & 0.86 & & 0.48 & 0.62 & 0.56 & & -0.40 & 0.21 & 0.15 & -0.05\\ \hline \end{tabular} \end{center} \caption{Pearson's correlation coefficients for the input IR maps and the column density maps for each gas phase and ring. In Ring 1 (which includes the Scutum-Crux arm), there is a high degree of correlation between the IR maps - at all wavelengths - and the column density maps of HI, H$_{2}$ and HII. A lower degree of correlation, even a signature of anticorrelation, is instead revealed by the Pearson's coefficients between the column densities in Rings 2, 3 and 4 and the input IR maps.} \label{tab:scatterplot_columns_IR} \end{table*} \begin{itemize} \item[1.] in the short IR bands (e.g.
8 $\mu$m), the emission is proportional to the product N$_{H}$ $\times$ G$_{0}$, where N$_{H}$ is the total hydrogen column density and G$_{0}$ represents a scaling of the \citet{Mathis83} radiation field in the solar neighbourhood \citep[see e.g.][]{Compiegne10}. This is also true at 24 $\mu$m and, partly, at 70 $\mu$m, given that these bands include emission from very small grains (VSGs) which, like PAHs, are stochastically heated by the local radiation field. Conversely, the emission at wavelengths longer than 70 $\mu$m\ is produced by big grains (BGs) which are in thermal equilibrium with the radiation field. The G$_{0}$ intensity defines the BG equilibrium temperature, while the total hydrogen column density determines the absolute level of the BG emission. Hence, if the radiation field (and therefore the BG equilibrium temperature) varies smoothly across the region, the intensity variations in the FIR emission are dominated by the column density variations across the field \citep{Compiegne10}. As a result, the net effect is an increasing degree of correlation going from shorter to longer wavelengths. \item[2.] A second limitation of the inversion approach lies in the fact that, in order to solve Equation \ref{eq:decomposition}, the column densities associated with each gas phase along each LOS have to be accurately recovered. However, this might not occur, either when part of the gas in a given phase is not traceable by the standard methods (e.g. warm H$_{2}$), or when a fraction of the gas along the LOS absorbs rather than emits (e.g. cold HI). In the first case, the morphology of the column density maps is artificially altered, leading to an ``excess'' of IR emission and to a consequently lower degree of correlation between the column density maps and the input IR maps. In the second case, if the optically thick regions coincide with strong IR emission features, the column density maps, estimated in the optically thin limit, and the IR maps will show an anti-correlation. \end{itemize} Such correlations can be investigated in terms of Pearson's coefficients \citep[e.g.,][]{Edwards76}. Given two vectors \textit{X} and \textit{Y}, the Pearson's coefficient $\rho$ is defined as the ratio between the covariance $cov(X,Y)=\frac{1}{n}\sum_{i=1}^{n}(X_{i}-\bar{X})(Y_{i}-\bar{Y})$ and the product of their standard deviations $\sigma_{X}$ and $\sigma_{Y}$ \begin{equation} \rho(X,Y)=\frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}} \end{equation} $\rho$ takes values in the range $\vert \rho \vert\leq 1$; $\rho \simeq1$ indicates a strong correlation, while $\rho \simeq$ -1 indicates a strong anti-correlation. Table \ref{tab:scatterplot_columns_IR} provides a summary of the computed Pearson's coefficients ($\rho$). Figure \ref{fig:scatterplot} shows two examples of correlation plots, obtained by comparing the 500 $\mu$m emission with the molecular and atomic column densities. These plots correspond to the highest (N(H$_{2}$), Ring 1) and lowest (N(HI), Ring 3) degree of correlation among all the considered cases for the 500 $\mu$m emission. \begin{figure*} \centering \includegraphics[width=8cm]{figure_4.ps} \includegraphics[width=8cm]{figure_5.ps}\\ \caption{Correlation plots between the Hi-GAL 500 $\mu$m map and the N(H$_{2}$) Ring 1 map (left) and between the Hi-GAL 500 $\mu$m map and the N(HI) Ring 3 map (right).
The Pearson's coefficients are $\rho=0.86$ and $\rho=-0.40$ respectively, the highest and the lowest between this IR map and the column density maps in this region.} \label{fig:scatterplot} \end{figure*} The correlations in Ring 1 are the highest, at all wavelengths and for each gas phase. The correlation is significant (up to 0.87) for both the H$_{2}$ and HII column densities, while it is noticeably lower for the atomic phase. Averaging across wavelengths, we obtain: $\rho_{HI,Ring1}$ = 0.39, $\rho_{H{_{2}},Ring1}$ = 0.73 and $\rho_{H{II},Ring1}$ = 0.80. In general, the aromatic infrared bands (AIB, 8 $\mu$m\ and 24 $\mu$m) and the 70 $\mu$m band display a lower degree of correlation with respect to the longer wavelengths, hence corroborating the hypothesis that they trace the intensity of the radiation field as well as the total hydrogen column density. For Ring 2, the average correlations are: $\rho_{HI,Ring2}$ = 0.37, $\rho_{H{_{2}},Ring2}$ = 0.51 and $\rho_{HII,Ring2}$ = 0.48. These values are lower than for Ring 1, especially for H$_{2}$ and HII, indicating either that Ring 2 contributes less than the Scutum-Crux region to the total IR emission or that part of its gas is not entirely seen in emission with the standard tracers. In Ring 3, the IR maps are poorly correlated with the molecular and ionized gas column densities and partially anticorrelated with the atomic gas column density. This is possibly due to the presence of gas not entirely traced by the standard tracers, as supported by observations of a cold layer of HI and of strong individual absorption features located at 7.4 kpc$\leq$R$\leq$8.5 kpc (see Section \ref{sec:missing_column}). A weak anticorrelation is also present in Ring 4. In this case, the negative Pearson's coefficients can be explained through a combination of absorption features and intrinsically weak emission. \begin{table*} \begin{center} \begin{tabular}{c|c|c|c} \hline \hline \multicolumn{4}{c}{\textit{Pearson's coefficient}} \\ & & &\\ \cline{1-4} \textit{Gas} & \textit{Ring 1}& \textit{Ring 2}& \textit{Ring 3} \\ \textit{phases} & (4.25-5.6 kpc) & (5.6-7.4 kpc) & (7.4-8.5 kpc) \\ \hline \\ HI - H$_{2}$ & 0.34 & 0.61 & -0.21\\ \hline HI - HII & 0.24 & 0.38 & 0.14\\ \hline H$_{2}$ - HII & 0.77 & 0.64 & -0.09\\ \hline \end{tabular} \end{center} \caption{Pearson's coefficients for the correlation between the different gas phases in each ring. In Rings 1 and 2, the molecular component is well correlated with the ionized component, while the atomic component, which is more widespread, is less correlated with both H$_{2}$ and HII. The low/negative coefficients in Ring 3 are likely the consequence of an underestimate of the total column density.} \label{tab:scatterplot_HI_CO_HII} \end{table*} The Pearson's coefficients also measure the correlation among the various gas phases in each ring. These coefficients are shown in Table \ref{tab:scatterplot_HI_CO_HII}. The correlation between atomic and molecular gas in Ring 1 is not very strong ($\rho_{HI,H_{2},Ring1}$ = 0.34), likely a consequence of the ubiquitous distribution of HI with respect to H$_{2}$. Conversely, molecular and ionized hydrogen appear to be strongly correlated ($\rho_{H_{2},HII,Ring1}$ = 0.77), as expected in star forming regions. The correlation between HI and HII ($\rho_{HI,HII,Ring1}$ = 0.24) likely reflects the fact that both phases have a diffuse component.
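For reference, the coefficients in Tables \ref{tab:scatterplot_columns_IR} and \ref{tab:scatterplot_HI_CO_HII} can be computed with a few lines of code; the sketch below (Python for illustration, with hypothetical map names) assumes masked pixels are set to NaN. \begin{verbatim}
import numpy as np

def pearson(x, y):
    """Pearson's rho between two maps; non-finite pixels are ignored."""
    good = np.isfinite(x) & np.isfinite(y)
    x, y = x[good], y[good]
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return cov / (x.std() * y.std())

# e.g. pearson(ir500.ravel(), nh2_ring1.ravel()) -> 0.86 (Table 1)
\end{verbatim}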
In Ring 2, the Pearson's coefficients follow the same trend as in Ring 1, with a high correlation between molecular and ionized components ($\rho_{H_{2},HII,Ring2}$ = 0.64), confirming the tight spatial correlation between these two phases, and a partial correlation between HI and HII. In Ring 3, there is evidence of anticorrelation between the HI and H$_{2}$ components, indicating the presence of pixels with detected H$_{2}$ coincident with HI absorption features. In Section \ref{sec:missing_column}, we investigate the possible reasons behind this effect. The ionized gas in this region is dominated by the diffuse component, as there are no cataloged HII regions at 7.4 kpc$\leq$R$\leq$8.5 kpc towards this LOS, and it appears not to be associated with either HI or H$_{2}$, as illustrated by the poor correlations (and even anticorrelation) with these phases. \section{Discussion and results}\label{sec:results_discussion} In the following, we solve Equation \ref{eq:decomposition} to recover the emissivities for dust associated with the different gas phases in each ring. The code converges and retrieves positive and reliable emissivity coefficients only in Ring 1 (Section \ref{sec:dustem}) which, as we show in Section \ref{sec:ring1_dominate} and Appendix \ref{app:model_test}, dominates the IR emission at all wavelengths. For Ring 2, 3 and 4, where more than 50 percent of the Pearson's coefficients across all the wavelengths are lower than 0.5, the code either does not converge or returns negative emissivities (Section \ref{sec:missing_column}). While these results are difficult to interpret at first, they acquire a clearer meaning once we investigate the presence of matter not entirely accounted for by the gas tracers (Sections \ref{sec:missing_column} and \ref{sec:extinction}). Finally, we analyze the consequences of excluding the ionized gas component from the decomposition, a procedure often adopted in past inversion works (Section \ref{sec:no_HII}). \subsection{Ring 1: fitting the emissivities with DustEM}\label{sec:dustem} The emissivities for dust associated with the different gas phases in Ring 1 are shown in Table \ref{tab:emissivities}. We fit these with the DustEM model \citep{Compiegne11} which incorporates three populations of dust grains: PAHs, hydrogenated small amorphous carbons (VSGs), and a combination of large amorphous carbons and amorphous silicates (BGs). The DustEM fits are shown in Figure \ref{fig:dustem}. We note that the dust properties in SDPF1 are evaluated with respect to a reference SED obtained for the diffuse ISM at high ($|b| >$ 15$^{\circ}$) Galactic latitudes (hereafter referred to as DHGL).
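For readers wishing to reproduce the decomposition, a minimal sketch of the per-band inversion of Equation \ref{eq:decomposition} is given below. It assumes the column density maps have already been flattened into a design matrix with one column per (gas phase, ring) component, and uses an ordinary least-squares solver which, as noted above, can return negative coefficients; the uncertainties quoted in this work are instead derived by bootstrap resampling, so the formal errors below are only indicative.
\begin{verbatim}
import numpy as np

def invert_band(i_nu, columns):
    # i_nu    : flattened IR map at one wavelength, shape (n_pix,)
    # columns : list of flattened column density maps, one per
    #           (gas phase, ring) component
    A = np.column_stack(columns)                  # (n_pix, n_comp)
    eps, _, _, _ = np.linalg.lstsq(A, i_nu, rcond=None)
    # Formal 1-sigma errors, assuming uniform pixel noise.
    resid = i_nu - A @ eps
    sigma2 = np.sum(resid**2) / (A.shape[0] - A.shape[1])
    errors = np.sqrt(np.diag(np.linalg.inv(A.T @ A)) * sigma2)
    return eps, errors
\end{verbatim}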
\begin{table*} \begin{center} \begin{tabular}{c|c|c|c} \hline \hline \textit{Band} & & $\epsilon_{\nu}$ & \\ ($\mu$m) & & [ MJy sr$^{-1}$ (10$^{20}$ H atoms cm$^{-2}$)$^{-1}$ ] & \\ \hline & HI & H$_{2}$ & HII \\ \hline \\ 8 & 0.839 $\pm$ 0.073 & 0.033 $\pm$ 0.012 & 0.116 $\pm$ 0.018\\ \\ 24 & 0.380 $\pm$ 0.141 & 0.014 $\pm$ 0.010 & 0.101 $\pm$ 0.016\\ \\ 70 & 3.443 $\pm$ 0.629 & 0.400 $\pm$ 0.217 & 2.546 $\pm$ 0.401\\ \\ 160 & 13.387 $\pm$ 1.687 & 1.736 $\pm$ 0.463 & 5.743 $\pm$ 0.670 \\ \\ 250 & 11.611 $\pm$ 1.308 & 1.470 $\pm$ 0.204 & 3.343 $\pm$ 0.235\\ \\ 350 & 4.922 $\pm$ 0.312 & 0.606 $\pm$ 0.092 & 1.168 $\pm$ 0.132\\ \\ 500 & 1.820 $\pm$ 0.162 & 0.249 $\pm$ 0.026 & 0.388 $\pm$ 0.058\\ \hline \end{tabular} \end{center} \caption{Emissivities, and corresponding uncertainties, of dust associated with the three gas phases in Ring 1.} \label{tab:emissivities} \end{table*} \begin{figure*} \centering \includegraphics[width=10cm]{figure_6.eps}\\ \includegraphics[width=10cm]{figure_7.eps}\\ \includegraphics[width=10cm]{figure_8.eps} \caption{DustEM fit of the emissivities evaluated with the inversion model for dust associated with the HI (top panel), H$_{2}$ (middle panel) and HII (bottom panel) gas components. The PAH, VSG and BG contributions to each SED are plotted with dotted, dashed and dash-dotted lines, respectively. G$_{0}$ and Y$_{\mathrm{PAHs,VSGs,BGs}}$ are defined in the text. The results are for Ring 1 only.} \label{fig:dustem} \end{figure*} From DustEM we estimate the intensity of the radiation field associated with each phase of the gas. For the ratio between the local radiation field and the \citet{Mathis83} value for the solar neighbourhood, G$_{0}$, we obtain: G$_{0}$(HI)=1.54$\pm$0.23, G$_{0}$(H$_{2}$)=1.55$\pm$0.51 and G$_{0}$(HII)=4.47$\pm$0.75. In the ionized phase, the radiation field is markedly higher than in the other phases and consistent with a star formation scenario. Dust temperatures are evaluated by DustEM separately for each of the three gas phases by applying \citep{Bernard08} \begin{equation} T_{d} = 17.5 \times \mathrm{G}_{0} ^ {1/(4+\beta)} \end{equation} with the spectral emissivity index $\beta=1.9$, in agreement with the result of \citet{Paradis10} for this field. We find: $T_{d,\ \mathrm{HI}}=18.82\pm0.47$ K, $T_{d,\ \mathrm{H_{2}}}=18.84\pm1.06$ K and $T_{d,\ \mathrm{HII}}=22.56\pm0.64$ K. As expected, dust in the ionized gas phase is warmer compared to dust in the atomic and molecular phases since, in the proximity of star forming complexes, the radiation causing ionization of interstellar hydrogen also induces heating of dust particles. In DustEM, under the assumption of a constant gas-to-dust mass ratio in each gas phase, dust abundances are expressed relative to H atoms \citep{Compiegne10}, $[\mathrm{M}_{PAH}/\mathrm{M}_{H}]$, $[\mathrm{M}_{VSG}/\mathrm{M}_{H}]$ and $[\mathrm{M}_{BG}/\mathrm{M}_{H}]$. In addition, the code retrieves abundances normalized to the values in the DHGL \citep[][]{Compiegne10}. We refer to these normalized abundances as: Y$_{\mathrm{PAH}}$, Y$_{\mathrm{VSG}}$\ and Y$_{\mathrm{BG}}$. From the fits of the emissivities in Ring 1, we obtain: Y$_{\mathrm{PAH}}$(HI)=$22.51\pm$3.35, Y$_{\mathrm{PAH}}$(H$_{2}$)=0.86$\pm$0.35, Y$_{\mathrm{PAH}}$(HII)=1.07$\pm$0.22 and Y$_{\mathrm{BG}}$(HI)=$9.70\pm$0.85, Y$_{\mathrm{BG}}$(H$_{2}$)=1.27$\pm$0.21, Y$_{\mathrm{BG}}$(HII)=1.64$\pm$0.17.
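As a cross-check of the dust temperatures quoted above, the \citet{Bernard08} relation can be evaluated directly; a minimal sketch, assuming $\beta=1.9$ as adopted in the text:
\begin{verbatim}
# Dust equilibrium temperature from the radiation field scaling G_0.
def dust_temperature(g0, beta=1.9):
    return 17.5 * g0 ** (1.0 / (4.0 + beta))

for phase, g0 in [("HI", 1.54), ("H2", 1.55), ("HII", 4.47)]:
    print(phase, dust_temperature(g0))
# -> approximately 18.8, 18.8 and 22.6 K, matching the quoted values
#    to within rounding of G_0.
\end{verbatim}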
Surprisingly, both in the atomic and molecular phases, DustEM is able to reproduce the emissivity values without invoking a VSG contribution. Only in the ionized phase do we find a non-zero value, Y$_{\mathrm{VSG}}$(HII)=1.14$\pm$0.58. Notably, the apparent lack of VSGs in the atomic and molecular gas phases could be ascribed to the limitation of the 3D-inversion model in accounting for the dependence of the AIB emission on the intensity of the radiation field, as discussed in the previous section. Furthermore, for very low values of Y$_{\mathrm{BG}}$, DustEM is not able to distinguish between a PAH and a VSG contribution, and typically tends to increase Y$_{\mathrm{PAH}}$\ at the expense of Y$_{\mathrm{VSG}}$. Most importantly, the result of the fit reveals a significant decrease of the relative abundance of PAHs in the molecular and ionized phases with respect to the neutral phase. In fact, while [Y$_{\mathrm{PAH}}$(HI)/Y$_{\mathrm{BG}}$(HI)] = 2.3, we find [Y$_{\mathrm{PAH}}$(H$_{2}$)/Y$_{\mathrm{BG}}$(H$_{2}$)] = 0.67 and [Y$_{\mathrm{PAH}}$(HII)/Y$_{\mathrm{BG}}$(HII)] = 0.65, suggesting that PAHs are somehow depleted in these two environments. The destruction of PAHs in the ionized gas has been investigated from a theoretical standpoint by \citet{Draine11}, and observational evidence for these predictions is reported by, e.g., \citet{Povich07} and \citet{Paradis11}. For PAH depletion in the molecular gas component, although no theoretical prescription is readily available to interpret this result, we speculate, along the lines of \citet{Paradis11}, that this effect could be attributed to the interstellar and local radiation fields, which penetrate the cloud and cause partial destruction of the aromatic molecules. \subsection{Ring 2-3-4: missing column density and \textit{dark gas}}\label{sec:missing_column} In Ring 2, 3 and 4 the code either retrieves only partial positive solutions or does not converge. The emissivities and the associated errors for these rings are listed in Table \ref{tab:emissivities_ring234}. Although the results in the Table do not have a direct physical interpretation (i.e. the negative emissivities), they are an indication of the limitations of the basic assumptions of our model. From Equation \ref{eq:decomposition} and under the hypothesis of a single emissivity value for each gas component, the integrated IR emission is expected, by construction, to positively correlate with the gas column densities (see Section \ref{sec:pearsons}). In this framework, the negative emissivities obtained in Ring 2-3-4 suggest that in these regions the model fails to adequately reproduce the details of the emission associated with each gas phase. In the following, we investigate whether the failure of the model can be attributed to a mismatch between the column densities retrieved by the tracers described in Section \ref{sec:dataset} and the actual total amount of gas located in these rings.
\begin{table*} \begin{center} \begin{tabular}{c|c|c|c|c|c|c|c} \hline \hline \textit{Band} & \multicolumn{7}{c}{$\epsilon_{\nu}$}\\ ($\mu$m) & \multicolumn{7}{c}{[ MJy sr$^{-1}$ (10$^{20}$ H atoms cm$^{-2}$)$^{-1}$ ]}\\ \hline & \multicolumn{3}{c}{HI} & \multicolumn{2}{c}{H$_{2}$} & \multicolumn{2}{c}{HII} \\ & \textit{Ring 2} & \textit{Ring 3} & \textit{Ring 4} & \textit{Ring 2} & \textit{Ring 3} & \textit{Ring 2} & \textit{Ring 3} \\ \hline \\ 8 & 0.038 $\pm$ 0.166 & -1.766 $\pm$ 0.507 & 0.035 $\pm$ 0.198 & 0.086 $\pm$ 0.059 & 0.076 $\pm$ 0.043 & -0.059 $\pm$ 0.090 & -0.366 $\pm$ 0.134\\ \\ 24 & 0.194 $\pm$ 0.112 & -0.945 $\pm$ 0.416 & 0.147 $\pm$ 0.157 & -0.003 $\pm$ 0.048 & 0.055 $\pm$ 0.057 & 0.013 $\pm$ 0.076 & -0.484 $\pm$ 0.156 \\ \\ 70 & 2.219 $\pm$ 2.240 & -7.109 $\pm$ 2.517 & -3.600 $\pm$ 0.892 & -0.577 $\pm$ 0.326 & 1.686 $\pm$ 0.676 & 1.849 $\pm$ 0.343 & 0.439 $\pm$ 1.841 \\ \\ 160 & 4.611 $\pm$ 7.196 & -14.980 $\pm$ 10.86 & -16.330 $\pm$ 2.316 & 2.189 $\pm$ 0.909 & 0.332 $\pm$ 2.241 & 3.318 $\pm$ 1.644 & 4.331 $\pm$ 5.278 \\ \\ 250 & 2.374 $\pm$ 3.456 & -23.792 $\pm$ 6.040 & -6.929 $\pm$ 2.683 & 2.298 $\pm$ 0.624 & -3.470 $\pm$ 1.806 & 0.038 $\pm$ 1.276 & 3.154 $\pm$ 1.276 \\ \\ 350 & 0.970 $\pm$ 1.409 & -6.615 $\pm$ 3.321 & -2.587 $\pm$ 0.604 & 0.698 $\pm$ 0.450 & -1.346 $\pm$ 0.746 & 0.202 $\pm$ 0.447 & 1.681 $\pm$ 0.739 \\ \\ 500 & 0.198 $\pm$ 0.391 & -3.122 $\pm$ 1.627 & -0.555 $\pm$ 0.203 & 0.268 $\pm$ 0.174 & -0.570 $\pm$ 0.401 & -0.090 $\pm$ 0.153 & 0.701 $\pm$ 0.377 \\ \hline \end{tabular} \end{center} \caption{Emissivities, and corresponding uncertainties, of dust associated with the three gas phases in Ring 2-3-4.} \label{tab:emissivities_ring234} \end{table*} \citet{Grenier05} have shown, by comparing HI/CO data with $\gamma$-ray emission, the existence in the solar neighbourhood of the so-called \textit{dark gas}, a mixture of cold HI, which is optically thick in the 21cm line, and warm H$_{2}$, which cannot be observed with the standard tracers. They claim that, although it is not clear which component (cold HI or warm H$_{2}$) dominates, the contribution from dark H$_{2}$ must be considerable, up to 50 percent of the total column density in regions where there is no CO detection. An important dark gas contribution to the overall IR emission is also reported in \citet{Planck_Bernard11}. In this inversion analysis, applied to the entire Galactic Plane at 1$^{\circ}$ resolution, the authors find that \textit{dark gas} is mostly distributed around major molecular cloud complexes. Theoretical predictions from \citet{Wolfire10} indicate that a significant amount of warm H$_{2}$ is located in the exterior of photodissociation regions, where the transition of atomic into molecular gas occurs. Recent \textit{Herschel}-HIFI [CII] observations by \citet{Pineda13} have allowed a slight revision of the early estimate by \citet{Grenier05} of the total amount of dark H$_{2}$ in the Galaxy. These authors estimate that warm H$_{2}$ is likely to account for $\simeq30$ percent of the total molecular gas, and find that the fraction of dark H$_{2}$ increases with Galactocentric distance. Therefore, with the standard CO tracers, we have an intrinsic limitation in estimating all the H$_{2}$ along the LOS. In light of the considerations above, we have re-analyzed the content of Ring 2-3-4 and checked whether the negative emissivities derived for these rings can be due to the presence of untraced gas, specifically warm H$_{2}$ and cold HI.
In Ring 2, we note (Figure \ref{fig:radius_vs_intensity}) that both CO isotopes are characterized by a pronounced peak of emission, indicating the presence of molecular clouds which, following the results from \citet{Planck_Marshall11}, might be associated with warm H$_{2}$. The same peak of emission in $^{12}$CO and $^{13}$CO is also observed towards Ring 3. However, in this case, as mentioned in Section \ref{sec:l30_field} and Section \ref{sec:HI_data}, a cold HI layer is also found, as reported by \citet{Gibson04}. The existence of cold (i.e. optically thick) HI could explain the anti-correlation and the lack of convergence of the code (Section \ref{sec:pearsons}). From the inspection of the VGPS data cube for SDPF1, we have identified two additional prominent features (either HISA or HICA) in the same range of Galactocentric radii: \begin{itemize} \item[1.] H30.74-0.05 at $11\leq\mathrm{V_{LSR}}\leq15$ km s$^{-1}$ \item[2.] H30.39-0.24 at $8\leq\mathrm{V_{LSR}}\leq14.5$ km s$^{-1}$ \end{itemize} For these cold regions, we have attempted to derive an indicative upper limit on their spin temperatures by measuring the maximum of the median \textit{T}$_{b}$ values evaluated in the pixels surrounding the absorption regions (see the sketch below). We recall that, as discussed in Section \ref{sec:HI_data}, the spin temperature of cold HI is noticeably lower than in the WNM, and that \textit{T}$_{s}$ is meaningful only in the case of warm HI in optically thin conditions. For this reason, it cannot be used to derive HI column densities in regions populated by different HI components \citep[e.g.,][]{Strasser04}. Figure \ref{fig:HISA_profiles} shows the brightness temperature profiles for the cold features H30.74-0.05 and H30.39-0.24, for which we obtain mean values across each feature of $\mathrm{T_{b}}$=48.6 K and $\mathrm{T_{b}}$=38.7 K, respectively. Interestingly, HISA features have been found mixed with molecular clouds. For instance, in the Perseus star forming region, a significant fraction of cold HI is undergoing the transition to the molecular phase, and the HISA features appear to be non-gravitationally bound regions of molecular material not detected in CO \citep{Klaassen05}. In summary, we speculate that in Ring 2 the derivation of dust emissivities possibly fails due to a significant amount of warm H$_{2}$ not properly accounted for by the $^{12}$CO and $^{13}$CO data, while in Ring 3 the lack of convergence could be ascribed to both (untraced) warm H$_{2}$ and (badly traced) cold HI. Regarding Ring 4, our current hypothesis is that, after subtracting the contributions from the other rings and given the small amount of HI (only a few percent, see Section \ref{sec:total_density}), the code has little to no leverage to return meaningful values. \begin{figure} \centering \includegraphics[width=8cm]{figure_9.ps}\\ \includegraphics[width=8cm]{figure_10.ps} \caption{Maximum of the median brightness temperature (red line) in the surroundings of H30.74-0.05 and H30.39-0.24. Shown is the mean of these values across each feature. For comparison, the black line denotes the maximum of the median brightness temperatures measured in the same region at the nearby Galactocentric positions.} \label{fig:HISA_profiles} \end{figure}
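One plausible reading of this procedure, assuming the features and their surroundings have been isolated as small brightness temperature sub-cubes (the cube slicing and variable names are hypothetical):
\begin{verbatim}
import numpy as np

def tb_upper_limit(cube):
    # cube: T_b sub-cube of shape (n_vel, n_y, n_x), restricted to
    # pixels in the proximity of the absorption region.
    median_profile = np.nanmedian(cube, axis=(1, 2))
    return np.nanmax(median_profile)

# e.g. tb_upper_limit(cube_h30p74) and tb_upper_limit(cube_h30p39)
# for the two features discussed in the text.
\end{verbatim}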
\subsubsection{Solving for the distance ambiguity of cold HI features} As a by-product of our study, we can solve for the distance ambiguity of some of these cold features by comparing the HI data with the maps for the other tracers as well as with the input IR maps. In Figure \ref{fig:PSW_HISA} we show the absorption features, seen in silhouette against the warm background provided by the HII regions in SDPF1, compared to the VGPS 21-cm map integrated along Ring 3. H30.74-0.05 is dominated by absorption of the continuum background provided by W43, and it is therefore primarily an HICA feature. Since W43 is at the tangent point of the Scutum-Crux arm, we can solve the distance ambiguity for this cold structure and locate it at 0.7 kpc $\leq\mathrm{d}\leq$ 1.1 kpc, i.e. on the Sagittarius-Carina arm (see Figure \ref{fig:Galactic_model}). In addition, the 29 HII regions in SDPF1 are distributed between Ring 1 and 2 \citep{Anderson09}, thus all the absorption features, including H30.39-0.24, observed in Ring 3 towards these HII regions are also likely associated with the Sagittarius-Carina arm. \begin{figure*} \centering \includegraphics[width=8.5cm]{figure_11.eps} \qquad \includegraphics[width=8.5cm]{figure_12.eps} \caption{HI brightness temperature map integrated over Ring 3 (left panel) and Hi-GAL 500 $\mu$m data (right panel). The white contours in the HI map denote HICA and HISA features. The same regions at 500 $\mu$m (the black contours) overlap with either the W43 complex or with other HII regions in the field. In the HI map, some of the dark pixels within the cold features are pixels where \textit{T}$_{b}$ is negative due to strong absorption. The majority are found associated with the brightest HII regions, especially W43 and G29.944-0.04.} \label{fig:PSW_HISA} \end{figure*} \subsection{The dominant contribution of Ring 1 to the integrated intensity maps}\label{sec:ring1_dominate} In Section \ref{sec:total_density} we have seen that roughly 65 percent of the total gas column density of SDPF1 is located in Ring 1. In this Section, we want to show that dust emission associated with this ring indeed accounts for the bulk of the dust emission in the input maps. To this end, we compare the longitude profiles of the output model with those of the input maps, and in doing so we consider the model contribution from Ring 1 only. The longitude intensity profiles are generated by averaging, for a given Galactic longitude, all latitude pixel values. These profiles, as well as the residuals obtained by subtracting the model from the input maps, are shown in Figure \ref{fig:ring1_profile_8_160}. The model intensities obtained from Ring 1 appear to satisfactorily reproduce the input intensities at 24, 70, and 160 $\mu$m, with a discrepancy of less than 10 percent. At longer wavelengths, the model tends to overestimate the input profiles and the discrepancy is more pronounced, of the order of 30 percent. This effect is likely related to the fact that dust associated with the cold untraced gas, which would emit at long wavelengths (e.g. $>$ 160 $\mu$m), is mostly located outside Ring 1 (see Section \ref{sec:dustem}). Therefore its contribution is visibly missing when only Ring 1 is considered, and this produces the observed model overshooting. Conversely, in the range 24 $\mu$m $< \lambda <$ 160 $\mu$m\ both dust and gas are properly accounted for and the model is able to reproduce the input emission. We note that at 8 $\mu$m\ the residuals are higher compared to other wavelengths, of the order of 40 percent.
This is expected: at this wavelength the model's coarse assumption of a one-to-one correlation between input intensity and column density reveals its limitations, due to the degenerate dependence of the PAH emission on both the radiation field amplitude and the column density (Section \ref{sec:pearsons}). The residuals obtained from our analysis are comparable to earlier results. For instance, from the longitude profiles of \citet{Paladini07}, in the region of overlap with this work, i.e. 29$^{\circ} <$ l $<$ 31$^{\circ}$, and between 60 $\mu$m\ and 240 $\mu$m, the residuals appear to be in the range 10 to 30 percent. \citet{Planck_Marshall11} provide residuals only at 1.4 GHz, 30 GHz and 857 GHz (350 $\mu$m). No longitude profiles are given. At the common 350 $\mu$m\ wavelength and for 29$^{\circ} <$ l $<$ 31$^{\circ}$, the residuals are of the order of 15 - 20 percent. Considering that both \citet{Paladini07} and \citet{Planck_Marshall11} work on the full sky at a 1$^{\circ}$ resolution, the residuals obtained for our 2$\times$2 square degree decomposition at 14.8 arcmin are in excellent agreement with these previous analyses. In summary, the test shows that Ring 1 accounts for approximately 70 to 90 percent of the total emission in the input maps at all wavelengths, while Ring 2, 3 and 4 combined contribute only the remaining 30 to 10 percent, therefore strongly corroborating our earlier statement that, despite the lack of meaningful convergence of the method in the other rings, the results of Ring 1 are robust. In Appendix \ref{app:model_test} we return to this point, by further investigating possible biases introduced by Ring 2, 3 and 4. \begin{figure*} \centering \includegraphics[width=8cm]{figure_13.eps} \qquad \includegraphics[width=8cm]{figure_14.eps}\\ \includegraphics[width=8cm]{figure_15.eps} \qquad \includegraphics[width=8cm]{figure_16.eps} \\ \includegraphics[width=8cm]{figure_17.eps} \qquad \includegraphics[width=8cm]{figure_18.eps} \\ \includegraphics[width=8cm]{figure_19.eps} \caption{Comparison of the intensity longitude profiles for the input maps (red line) and for the Ring 1 inversion model (black dotted line). Overlaid are the $\pm 1\sigma$ bootstrap errors (magenta dotted line). The separate panels at the bottom of the longitude profiles show the residuals (in percentage) obtained by subtracting the inversion model for Ring 1 from the input profiles (dashed green line).} \label{fig:ring1_profile_8_160} \end{figure*} \begin{figure*} \centering \includegraphics[width=16cm]{figure_20.eps} \caption{K$_{s}$ vs J-K$_{s}$ diagrams. The black line in the left panel corresponds to the cut applied to remove contaminating dwarfs from the sample. Identified dwarfs and giants are shown in the middle and right panels, respectively.} \label{fig:color_magnitude} \end{figure*} \begin{figure*} \centering \includegraphics[width=15cm, height=10cm]{figure_21.eps}\\ \caption{Extinction map obtained from giant stars in the UKIDSS, 2MASS and GLIMPSE catalogues. The resolution is 1 arcmin. Units are magnitudes of A$_{\mathrm{V}}$. The contours correspond to A$_{\mathrm{V}}$\ of 10, 20 and 30 mag. Regions with the highest extinction (dark red) are associated with cold HI features.} \label{fig:extinction_Laurent} \end{figure*} \subsection{Total column density and extinction maps}\label{sec:extinction} An alternative method to recover the missing hydrogen column density is through extinction.
We have attempted to generate an extinction map for SDPF1 using colour excess templates derived from observations of giant stars. For this purpose, we use UKIDSS \citep{Lawrence07} and 2MASS \citep{Skrutskie06} $JHK_{s}$ and GLIMPSE \citep{Churchwell09} 3.6 $\mu$m and 4.5 $\mu$m data. The first step is to minimize contamination from sources other than giants, in particular from dwarf stars. To this end, we first build colour-magnitude $J-K_{s}/K_{s}$ and $J-[3.6]/[3.6]$ diagrams using the photometric measurements provided in the UKIDSS/2MASS/GLIMPSE catalogues. Then we compare these with the predictions from the Besan\c{c}on Stellar Population Synthesis Model \citep{Robin03}. From this model, we derive the colour criteria to separate dwarf from giant stars, i.e.: $K_{s}\geq(J-K_{s})*3.8-7.8$ and $[3.6]\geq(J-[3.6])*3.2+7.4$ (see Figure \ref{fig:color_magnitude}). Applying these criteria, we select 45 percent of the sources in the original catalogues, in practice 4.4$\times10^{5}$ out of $\simeq10^{6}$ sources. The colour excess in the $H$ and $K_{s}$ bands measured for the selected sources is converted into extinction using the extinction law derived by \citet{Rieke85}. At longer wavelengths, the correct extinction law is still a matter of debate. Variations have been reported to occur from one molecular cloud to another, and even within the same cloud \citep[][and references therein]{Cambresy11}. Also, the extinction law appears to change with Galactocentric radius \citep{Zasowski09}. For this work, we adopt the extinction law derived by \citet{Cambresy11} in the Trifid Nebula. The extinction law values are: A$_{K_{s}}$/A$_{\mathrm{V}}$=0.112, A$_{H}$/A$_{K_{s}}$=1.56, A$_{[3.6]}$/A$_{K_{s}}$=0.611 and A$_{[4.5]}$/A$_{K_{s}}$=0.500. The resulting A$_{\mathrm{V}}$\ map is a combination of two maps: for visual extinctions $<10$ mag, it is generated using the $H$ and $K_{s}$ bands, while for A$_{\mathrm{V}}$$>15$ mag it is obtained using the 3.6 $\mu$m and 4.5 $\mu$m bands. In the range $10 \mathrm{\ mag}\leq$A$_{\mathrm{V}}$$\leq15 \mathrm{\ mag}$, it is a linear combination of the two. The zero point (A$_{\mathrm{V}}$=6 mag) is evaluated from the 2MASS data for $\vert l\vert\leq10^{\circ}$. The final extinction map, with a resolution of 1 arcmin, is shown in Figure \ref{fig:extinction_Laurent}. This map is converted into total hydrogen column density using the relation \citep{Guver09} \begin{equation} \mathrm{N(H)} = 22.1\ A_{\mathrm{V}} \qquad [10^{20}\ \mathrm{atoms\ cm}^{-2}] \end{equation} We then compare the total column density map derived from extinction with the one derived from the gas tracers, obtained by summing up the individual contributions from the three gas phases and from each ring (see Figure \ref{fig:extinction_laurent_column}). From this comparison, we find that the extinction-based column density map, after removing an offset of roughly 100 $\times$ 10$^{20}$ atoms cm$^{-2}$, is lower than its gas-tracer counterpart by $\sim$ 70 percent. We formulate the hypothesis that this discrepancy is mainly due to the fact that the gas tracers allow us to probe much larger distances than the catalogued giants. To verify this scenario, we first note that Figure \ref{fig:radius_vs_intensity} shows that the bulk of the material in SDPF1 is within $R <$ 8.5 kpc, corresponding, for this LOS, to a far distance of 14.7 kpc. Therefore, the extinction map has to reach at least this distance in order to compare in column density with the gas tracers.
Although we do not have an individual distance estimate for all our giants, we can use the Besan\c{c}on model to get a rough idea of their distance distribution. Figure \ref{fig:distance_extinction} illustrates the number of sources as a function of solar distance for the selected giants and dwarfs. Most of the sources appear to be located around 10 kpc, setting the approximate limit of our extinction map and showing that, at least with the available data, extinction is not yet a viable route for estimating accurate column densities towards the inner Galaxy. \begin{figure} \centering \includegraphics[width=8cm]{figure_22.eps}\\ \caption{Number of sources as a function of solar distance for both the selected giant stars and the dwarfs from the Besan\c{c}on model.} \label{fig:distance_extinction} \end{figure} \begin{figure} \centering \includegraphics[width=8.4cm]{figure_23.eps} \includegraphics[width=8.4cm]{figure_24.eps} \includegraphics[width=8.4cm]{figure_25.eps} \caption{Top panel: column density map obtained by summing up the contributions of the three gas phases in each ring. Middle panel: column density map derived from the extinction map generated using giant stars. Both maps are convolved to 14.8 arcmin. Bottom panel: pixel-to-pixel correlation plot of the two column density maps. The blue line shows the best fit to the distribution. The black dot-dashed line denotes the y=x relation.} \label{fig:extinction_laurent_column} \end{figure} We have also explored an alternative approach, based on the model of \citet{Marshall06} which allows building extinction maps in 3D. This model makes use of the combined 2MASS Point Source Catalog and the Besan\c{c}on model to calculate the extinction at increasing solar distances. The maximum distance reached towards any given LOS depends on the completeness in the J and K$_{s}$ bands in the 2MASS catalogue as well as on the effective column density. After generating an extinction map for SDPF1 following this prescription, we have compared the corresponding column density map with the one derived from the gas tracers. The comparison is not straightforward. In fact, although the \citet{Marshall06} extinction map has a resolution of 15 arcmin, which is comparable to our working resolution (14.8 arcmin), the data are not Nyquist sampled, as the pixel size is also set to 15 arcmin. Therefore, for consistency, the spatial resolution of the gas-tracer column density map is also downgraded to 15 arcmin using a 15 arcmin pixel size. With this procedure, we obtain a total of 50 pixels in each map. The analysis is limited to the Galactocentric ring containing the Scutum-Crux arm intersection, which is the only region accurately reconstructed by the \citet{Marshall06} model. Figure \ref{fig:extinction_map} shows the comparison of the column density estimated from the \citet{Marshall06} extinction map with the column density evaluated from the tracers of the gas phases. We notice, as for Figure \ref{fig:extinction_laurent_column}, that the two column density maps are separated by an offset which, in this case, is of the order of $\simeq$500 $\times$ 10$^{20}$ atoms cm$^{-2}$. Moreover, the column densities computed from the 3D extinction map are even lower than those obtained from the previous extinction map, accounting only for $\sim$ 9 percent of the gas column densities.
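For reference, the extinction-to-column-density conversion used throughout this subsection reduces to a single scaling of the A$_{\mathrm{V}}$\ map; a minimal sketch, where the offset handling and map names are our assumptions:
\begin{verbatim}
import numpy as np

def av_to_nh(av_map):
    # N(H) = 22.1 * A_V in units of 1e20 atoms cm^-2
    # (Guver & Ozel 2009), with A_V in magnitudes.
    return 22.1 * np.asarray(av_map, dtype=float)

# Hypothetical comparison with the tracer-based column density map,
# after removing a constant offset (both maps on the same grid):
# ratio = np.nanmedian((av_to_nh(av_map) - offset) / nh_tracers)
\end{verbatim}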
\begin{figure} \centering \includegraphics[width=8.4cm]{figure_26.eps} \includegraphics[width=8.4cm]{figure_27.eps} \includegraphics[width=8.4cm]{figure_28.eps} \caption{Same as Figure \ref{fig:extinction_laurent_column} but for the extinction map obtained from the \citet{Marshall06} model (middle panel). Both the column density map from the gas tracers (top panel) and the extinction map are downgraded to 15 arcmin, using a pixel size of 15 arcmin.} \label{fig:extinction_map} \end{figure} \subsection{Testing the exclusion of the ionized phase from the inversion model}\label{sec:no_HII} The Pearson's coefficient study described in Section \ref{sec:pearsons} shows the existence, at every wavelength, of a high degree of correlation between the ionized gas column density and the IR emission, in particular in Ring 1. In this section, we explore the consequences of performing the inversion analysis without taking into account the ionized gas phase. For this purpose, we carry out a simple test, consisting of deriving the emissivity coefficients for Ring 1 while ignoring the RRL data. We note that, if we do not make use of these data, we can afford to work at a higher resolution, hence with a larger number of pixels. By including only the atomic and molecular gas components, the pixel size is set by the $^{12}$CO data, i.e. 3 arcmin, and we obtain $\sim1200$ pixels in each map, almost twice the previous number. We now solve Equation \ref{eq:decomposition} (setting N(HII)=0 in all rings and pixels) and analyse the recovered emissivities, focusing on the long wavelengths ($>$ 70 $\mu$m) which we can model with a simple grey-body. The fitted SEDs for both the atomic and molecular phases are shown in Figure \ref{fig:no_RRLs}. We fix the grey-body spectral emissivity index to $\beta=1.9$ to be consistent with the results obtained when also including the RRLs in the analysis (see Section \ref{sec:dustem}). From the fit we obtain: T$_{d,HI}=19.81\pm0.70$ K and T$_{d,H_{2}}=22.11\pm0.21$ K. We note that the temperature of dust in the molecular phase is now higher compared to what we obtained in Section \ref{sec:dustem}, and comparable to our previous result for HII. This result can be explained once again in light of the Pearson's coefficients displayed in Table \ref{tab:scatterplot_columns_IR}: the correlation analysis shows that, after the ionized gas, the component most correlated with the IR templates at all wavelengths is the molecular gas phase. This, mathematically, translates into artificially boosted emissivities for dust associated with H$_{2}$, as the molecular phase compensates for the absence of the HII component. \begin{figure} \centering \includegraphics[width=8cm]{figure_29.ps}\\ \includegraphics[width=8cm]{figure_30.ps} \caption{Grey-body fits to the dust emissivities associated with the atomic (top panel) and molecular (bottom panel) gas phases. Results refer to Ring 1 when the ionized gas component (i.e. the RRL data) is not included in the inversion equation.} \label{fig:no_RRLs} \end{figure} \section{Conclusions}\label{sec:conclusions} We have investigated dust properties in a 2$\times$2 square degree Hi-GAL field (SDPF1) centred on (\textit{l,b})=(30$^{\circ}$,0$^{\circ}$), in the wavelength range 8 $\mu$m\ $\leq\lambda\leq500$ $\mu$m.
For this purpose, we have used an inversion technique, first introduced by \citet{Bloemen86}, to decompose the observed integrated IR emission into the individual contributions associated with dust in the atomic, molecular and ionized phases of the gas, located at different Galactocentric distances. We have used, for the first time in an inversion analysis, Radio Recombination Lines (RRLs) to trace the ionized gas. In addition, the decomposition into Galactocentric bins (or {\em{rings}}) is performed by exploiting the natural boundaries of the structures (i.e. segments of spiral arms) as they appear in the gas data cubes. We have solved the inversion equation for all the decomposition rings (i.e. Ring 1, 2, 3 and 4), and obtained positive solutions only for Ring 1. Pearson's coefficient and longitude profile analyses reveal that Ring 1, which covers Galactocentric distances between 4.25 and 5.6 kpc and hosts the mini-starburst W43, dominates the total IR emission towards SDPF1. For this ring only, we have fitted with DustEM the emissivities retrieved by the inversion method. These fits allow us to estimate, for each phase of the gas, dust temperatures and abundances, as well as the intensity of the local radiation field normalized to that in the solar neighbourhood. In particular, for the ionized gas phase we find, with respect to the other gas phases, an indication of PAH depletion and a local radiation field intensity roughly three times higher, which translates into a higher dust temperature. For the other rings (Ring 2, 3 and 4), the inversion equation either cannot be solved or returns negative emissivities. The Pearson's coefficients suggest a weak degree of correlation with the IR templates and, in a few cases, even an anti-correlation. For Ring 2 and 3, this result might be ascribed to the presence of a large amount of \textit{untraced gas}, in the form of warm H$_{2}$ and/or cold HI. This hypothesis is supported by the fact that cold HI structures are indeed found in Ring 3. In this scenario, the column densities derived from the standard tracers would not be able to fully account for the observed IR emission, hence the assumptions of the inversion model would break down and the resulting (e.g. negative) emissivities would be unreliable. In Ring 4, which covers the outer Galaxy, the slight degree of anti-correlation with the input IR maps is probably indicative of the intrinsically low emissivity of the region, due to the combined drastic decrease of both the hydrogen column density and the intensity of the interstellar radiation field. We have investigated the role of extinction in evaluating total column densities along the LOS as an alternative method to gas tracers. For this purpose, we have attempted to build an extinction map for SDPF1 in two independent ways, i.e. using deep catalogues of giant stars and with a 3D-extinction model. Although both methods appear to be promising, they currently face the severe limitation of not being able to trace extinction beyond $\sim$ 10 -- 15 kpc from the Sun. Finally, we have explored the impact of neglecting the ionized gas phase in the inversion analysis, as often done in the past. We have shown that, by not including this gas component, the temperature of dust associated with the molecular phase is artificially increased, due to its high degree of correlation with the input IR templates. We conclude with a general remark.
In this work we have improved, with respect to previous inversion studies, the estimation of the HI, H$_{2}$ and HII column densities. However, we believe that this analysis has shown, above all, that local effects, such as departures from circular motion and the presence of cold HI structures (and, likely, warm H$_{2}$), become important when a 3D-inversion is performed on small sections of the Plane and with an angular resolution comparable to or higher than the angular scale on which these effects dominate the total emission. Conversely, when the entire Galactic Plane is {\em{inverted}} at low angular resolution, the peculiarities of each LOS are averaged out. Further developments of 3D-inversion models will have to take these limitations into account, both by including non-radial motions, i.e. the grand spiral design of the Galaxy, and by estimating total column densities that account for the blending of cold and warm material along the LOS. \section*{Acknowledgements} The authors want to thank Mark Calabretta and Lister Staveley-Smith for their work on the RRL data. AT is supported by an STFC grant to JBCA. CD acknowledges an STFC Advanced Fellowship and an EU Marie-Curie grant under FP7. MIRA acknowledges the support of the European Research Council grant MISTIC (ERC-267934). \textit{Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. \label{lastpage} \bibliographystyle{mn2e}
\section{Introduction}\label{s:intro} The discovery of strong correlations between the masses of supermassive black holes (SMBHs) and several properties of their host galaxies (stellar velocity dispersion, bulge mass, and bulge luminosity; e.g., \citealt{gebhardt00a, ferrarese00, tremaine02, gultekin09}) has led to various suggestions about the connection between SMBH growth and stellar mass growth in galaxies (e.g., \citealt{silk98, kauffmann00, zubovas14}). Active galactic nuclei (AGN) feedback, in the form of powerful outflows, has been suggested as a way to couple the energy released by the accreting SMBH with the ISM of its host galaxy, providing the necessary link between the SMBH and its host galaxy growth. In particular, several galaxy formation theories suggest that during the AGN phase, the AGN drives galactic-scale winds that expel gas from its host galaxy, shut down additional gas accretion onto the SMBH, terminate the star formation (SF) in the galaxy, and enrich its circumgalactic medium (CGM) with metals \citep{silk98, fabian99, benson03, king03, dimatteo05, hopkins06, gaspari11}. However, more recent zoom-in simulations suggest that these winds escape along the path of least resistance, with little impact on the bulk of the gas in the galaxy (e.g., \citealt{gabor14, hartwig18, nelson19}). Galactic-scale winds are ubiquitous and span a large range of host galaxy properties (see the recent review by \citealt{veilleux20}). They are detected in systems at different evolutionary stages, from star-bursting ultra-luminous infrared galaxies (ULIRGs), through typical main sequence galaxies, to quenched elliptical galaxies (e.g., \citealt{rupke05b, rupke05c, mullaney13, veilleux13, cazzoli16, cheung16, fiore17, rupke17, forster_schreiber18, baron19a, baron19b}). They are detected on different physical scales, from the vicinity of the SMBH (e.g., \citealt{blustin03, reeves03, tombesi10}), to galactic scales, at hundreds of parsecs (\citealt{husemann16, fischer18, tadhunter18, baron19a}) to 1-10 kpc (\citealt{cano12, liu13a, harrison14, rupke17, leung19}). They are detected through different gas phases, from high velocity X-ray and UV absorption lines (e.g. \citealt{blustin03, reeves03, tombesi10, arav13}), through ionized emission lines \citep{mullaney13, harrison14, perna17, rupke17, baron19b, mingozzi19, rojas19, shimizu19}, to atomic and molecular emission and absorption lines \citep{feruglio10, veilleux13, cicone14, burillo15, cazzoli16, rupke17}. The nature of multi-phased (molecular, atomic, and ionized gas) outflows remains largely unconstrained, and different phases \emph{detected in different systems} show different outflow velocities, covering factors, and masses (e.g., \citealt{fiore17, cicone18, veilleux20}). It is unclear whether these phases are connected, and multi-phased outflow studies have so far been conducted in only a handful of sources (see the comprehensive review by \citealt{cicone18}). For example, IC5063 is the only galaxy where the ionized, neutral, and molecular phases of the outflow show similar kinematics and spatial extents \citep{tadhunter14}. \citet{rupke05c} and \citet{rupke17} conducted a detailed comparison between the neutral and ionized outflow phases in their sample, finding similarities in some of the systems.
Using the infrared emission of dust that is mixed with the ionized outflow (\citealt{baron19a}), we suggested the existence of a significant amount of neutral atomic gas at the back of the outflowing ionized gas clouds in a large sample of type II AGN (\citealt{baron19b}), which has significant implications for the estimated mass and energetics of such flows. There are two major uncertainties concerning AGN-driven feedback in active galaxies. The first is related to the relative contribution of the AGN versus supernovae to the observed winds, where the latter is directly related to SF activity in the galaxy. This is due to the fact that most systems showing AGN activity also show significant SF activity, with some correlation between the AGN bolometric luminosity and the SF luminosity (e.g. \citealt{netzer09}). Many systems with more powerful AGN, which are capable of producing stronger AGN-driven winds, also undergo powerful SF episodes capable of driving stronger supernova-driven winds. Studies typically assume that the source that photoionizes the stationary and outflowing gas is also the main driver of the observed winds (outflows that are observed in systems with emission line ratios consistent with AGN photoionization are considered to be driven by the AGN; e.g., \citealt{mullaney13, karouzos16b, forster_schreiber18, leung19}). However, in \citet{baron19b}, using a large sample of type II AGN, we found evidence that although the AGN dominates the ionization of the outflowing gas, the ionized outflows are more likely to be driven by supernovae (see also \citealt{husemann19}). The second uncertainty is related to the timescale of the observed winds. To quantify the effect of these outflows on the host galaxy, it is important to map the evolutionary stages in which the feedback is most prominent. In addition, the total energy that is transferred from the accreting SMBH to its host galaxy ISM depends on the mass outflow rate and on the duration of these flows. Unfortunately, in the large majority of systems where outflows are currently detected, it is difficult to estimate the total duration of the feedback episode (see, however, the example in \citealt{baron18}). Post starburst E+A galaxies offer an advantage over other galaxy samples in dealing with these uncertainties. Their optical spectra show a narrow stellar age distribution, with a significant contribution from A-type stars, and no contribution from O and B stars, indicating a recent starburst that was quenched abruptly (e.g., \citealt{dressler99, poggianti99, goto04, dressler04, french18}). The estimated SFRs during the burst range from 50 to 300 $\mathrm{M_{\odot}/yr}$ \citep{poggianti00, kaviraj07}, and the mass fractions formed in the burst are high, 10\%--80\% of the total stellar mass \citep{liu96, bressan01, norton01, yang04, kaviraj07, french18}. Many of these systems show bulge-dominated morphologies, with tidal features or close companions, which suggests a late-stage merger \citep{canalizo00, yang04, goto04, cales11}. Since the SF in these systems is completely quenched, any observed outflows can be attributed solely to the AGN. In addition, due to their narrow stellar age distribution, their starburst age can be used as a clock (see e.g., \citealt{wild10, french18}), where one can study the evolution of outflow properties over hundreds of Myrs.
Various studies suggest that post starburst E+A galaxies are the evolutionary link between gas-rich major mergers (ULIRGs) and quiescent, early-type galaxies \citep{yang04, yang06, kaviraj07, wild09, cales11, cales13, yesuf14, cales15, french15, wild16, baron17b, baron18, li19}. According to simulations, a gas-rich major merger triggers a powerful starburst, and gas is funneled to the vicinity of the SMBH, triggering an AGN. Soon after, the AGN launches nuclear winds which sweep up the gas in the galaxy, shutting down the current starburst abruptly and removing the gas from the host galaxy (e.g., \citealt{springel05, hopkins06}). The more recent simulations suggest that AGN-driven outflows have a limited impact on the ISM of the host galaxy (e.g., \citealt{gabor14, hartwig18}). Instead, it is suggested that a more significant effect is heating of the CGM, which prevents further gas accretion and thus quenches SF in the host galaxy (e.g., \citealt{bower17, pillepich18}). In addition, recent observational evidence challenges this simple picture, with the discovery of large molecular gas reservoirs in such systems (e.g., \citealt{french15, french18}) and of a significant delay between the onset of the starburst and the peak of AGN activity (e.g., \citealt{wild10, cales15, yesuf14}). For example, \citet{yesuf17} show that post starburst galaxies have a wide range of gas fractions, some being gas rich and some gas poor. \citet{french18} discover a statistically significant decline in the molecular gas to stellar mass fraction with the post starburst age. They argue that the implied rapid gas depletion rate of 2--150 $\mathrm{M_{\odot}/yr}$ cannot be due to current SF or supernova feedback, but must rather be due to AGN feedback (see also \citealt{li19}). We have recently detected the first evidence of an AGN-driven outflow, traced by ionized gas, in a post starburst E+A galaxy (\citealt{baron17b}; see also \citealt{tremonti07}, \citealt{tripp11}, and \citealt{yesuf17b}). SDSS J132401.63+454620.6 was discovered as an outlier by the anomaly detection algorithm of \citet{baron17}, and our ESI/Keck spectroscopy revealed a post starburst system with powerful ionized outflows, with a mass outflow rate in the range 4--120 $\mathrm{M_{\odot}/yr}$. Since then, we have constructed a sample of such galaxies, with fully-quenched starbursts and powerful AGN-driven winds. In a second post starburst E+A galaxy, observed with the Keck Cosmic Web Imager (KCWI) on Keck \citep{baron18}, we found that while the stellar continuum is detected out to about 3 kpc from the BH, we detect gas outflows to a distance of at least 17 kpc. Our models suggest that the ionized gas outside the galaxy forms a continuous flow, and its mass, roughly $10^{9}\,\mathrm{M_{\odot}}$, exceeds the total gas mass within the galaxy. This suggests that we are witnessing a short-lived phase, in which the AGN is successfully removing most of the gas from its host galaxy. In this work we present spatially resolved spectroscopy, obtained with MUSE/VLT, of a third E+A galaxy at z=0.090, SDSS J124754.95-033738.6. Our observations reveal galactic-scale ionized and neutral outflows, with a remarkable similarity between the spatial extents and kinematics of the two phases. We describe the observations in section \ref{s:observations}, and discuss the general observed properties of the system in section \ref{s:physical_props}. We then describe a single model that accounts for both the neutral and the ionized outflow phases in section \ref{s:models}.
We summarize and conclude in section \ref{s:concs}. Throughout this paper we assume a cosmology with $\Omega_{\mathrm{M}}=0.3$, $\Omega_{\Lambda}=0.7$, and $h=0.7$; thus 1'' corresponds to 1.68 kpc for the system in question. This paper is concerned only with the optical properties of the source; further study of the infrared properties is deferred to future publications. \section{Observations}\label{s:observations} \subsection{1D spectroscopy from SDSS}\label{label:s_sdss_1d} SDSS J124754.95-033738.6 was observed as part of the general SDSS survey \citep{york00}. The spectrum was obtained using a 3'' fiber with the SDSS spectrograph, covering the wavelength range 3800\AA--9200\AA, with a resolving power of 1500 at 3800\AA\, and 2500 at 9000\AA. The publicly-available spectrum of the galaxy is combined from three 15 min sub-exposures, with the signal-to-noise ratio (SNR) per resolution element of the combined spectrum ranging from 10 to 50. We did not find spectral differences between the three sub-exposures. In figure \ref{f:sdss_1d_spectrum_and_fit} we show the 1D SDSS spectrum and its best-fitting stellar model (see section \ref{s:stellar_props} for details about the stellar modeling). \begin{figure*} \includegraphics[width=0.95\textwidth]{figures_for_paper/sdss_1d_spectrum_and_fit.pdf} \caption{Top panel: the SDSS 1D spectrum of SDSS J124754.95-033738.6 (black) and the best-fitting stellar population synthesis model (red) using pPXF (see section \ref{s:stellar_props} for details). Bottom panel: residual spectrum showing the emission and absorption line spectrum.}\label{f:sdss_1d_spectrum_and_fit} \end{figure*} \subsection{Spatially-resolved spectroscopy with MUSE}\label{label:s_muse_2d} MUSE is a second generation integral field spectrograph on the VLT \citep{bacon10}. It consists of 24 integral field units that cover the wavelength range 4650\AA--9300\AA, achieving a spectral resolution ranging from 1750 at 4650\AA\, to 3750 at 9300\AA. In its Wide Field Mode (WFM), MUSE splits the 1$\times$1 arcmin$^{2}$ field of view (FOV) into 24 channels which are further sliced into 48 15''$\times$0.2'' slitlets. SDSS J124754.95-033738.6 was observed as part of our program "Mapping AGN-driven outflows in quiescent post starburst E+A galaxies" (0100.B-0517(A); P.I. R. Davies). The observations were performed over two observing blocks (OBs) on March 9 and April 7 2018, in a seeing-limited WFM with a pixel scale of 0.2''. The total exposure time of the combined MUSE cube is 2.0 hours, with an average spatial resolution of FWHM=0.71''. We downloaded the data from the ESO phase 3 online interface, which provides fully reduced, calibrated, and combined MUSE data for all targets with multiple OBs. Given the spaxel scale of the WFM and the spatial resolution of the combined cube, we obtain our final data cube by summing $3 \times 3$ spaxel regions in the original cube. At the redshift of the system ($z = 0.09$), each 0.6'' spaxel in the final cube represents 1 kpc in the galaxy. The resulting SNR of the spaxels where the source is detected ranges between 5 and 25 per spatial and wavelength resolution element. In figure \ref{f:muse_continuum_emission} we show the integrated stellar continuum emission in the rest-frame wavelength range 5300--5700\AA\, using the final MUSE data cube. The image shows a spiral galaxy with two clear spiral arms, extending to distances of more than 20 kpc from the center.
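The angular-to-physical scale quoted above follows directly from the adopted cosmology; a short sketch using astropy (the package choice is ours, not a statement about the tools used in the analysis):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # Omega_Lambda = 0.7 via flatness
scale = cosmo.kpc_proper_per_arcmin(0.09).to(u.kpc / u.arcsec)
print(scale)                     # ~1.68 kpc / arcsec at z = 0.09
print((0.6 * u.arcsec) * scale)  # a binned 0.6'' spaxel ~ 1 kpc
\end{verbatim}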
\begin{figure} \includegraphics[width=3.5in]{figures_for_paper/continuum_emission.pdf} \caption{\textbf{Integrated stellar continuum emission in the rest-frame wavelength range 5300--5700\AA\, obtained from the MUSE data cube.} The yellow line at the bottom left represents 10 kpc at the redshift of the galaxy ($z = 0.09$). The SNR of the pixels in this image ranges between 100 and 500.}\label{f:muse_continuum_emission} \end{figure} \subsection{X-ray observations with XMM-Newton}\label{label:s_xray} SDSS J124754.95-033738.6 was observed by \emph{XMM-Newton} (\citealt{jansen01}) on August 14 2018, as part of our program "X-ray properties of quiescent post starburst galaxies with AGN-driven winds" (AO17 82008; P.I. H. Netzer), in which we observed six post starburst E+A galaxies with massive AGN-driven winds. The \emph{XMM-Newton} instrument modes were full-frame, with a thin filter for the three EPIC cameras, and the optical/UV filters U, UVW1, and UVM2 for the OM\footnote{See additional details at: \url{https://xmm-tools.cosmos.esa.int/external/xmm_user_support/documentation/uhb/omfilters.html}}. The total observation duration for EPIC was 12 ks, and the exposure time for OM was 6 ks. The raw data files were processed, using standard procedures, with version 15.0 of the \emph{XMM-Newton} Scientific Analysis Software (SAS) package, with the 2017 release of the Current Calibration File. Source events were extracted from a circular region of 20 arcsec in radius, and background events from a region of 1 arcmin in radius, which was away from the source and free of other sources. \section{Physical properties}\label{s:physical_props} In this section we present the physical properties of SDSS J124754.95-033738.6. We study the stellar properties of the system in section \ref{s:stellar_props}, the AGN properties in section \ref{s:agn_props}, and the gas properties (ionized and neutral) in section \ref{s:gas_props}. \begin{figure*} \includegraphics[width=0.95\textwidth]{figures_for_paper/stellar_properties.pdf} \caption{\textbf{Spatially-resolved stellar properties, obtained from {\sc pPXF} fitting of individual spaxels.} The top panels show (from left to right): integrated stellar continuum in the rest-frame wavelength range 5300--5700\AA, mass-weighted age of the best-fitting stellar model, and mass-weighted age of stars younger than 1 Gyr. The bottom panels show (from left to right): dust reddening towards the stars, stellar velocity with respect to systemic velocity, and stellar velocity dispersion. The red contours represent different integrated stellar continuum values in the rest-frame wavelength range 5300--5700\AA, ranging from $10^{7}$ to $10^{4}$ in logarithmic steps of 0.5 dex, with physical units of $10^{-20}\, \mathrm{erg/(sec\, cm^{2} \AA)}$. }\label{f:2d_stellar_properties} \end{figure*} \subsection{Stellar properties}\label{s:stellar_props} In this section we use the 1D and spatially-resolved spectra to study the stellar properties of the source, to constrain the star formation history, and to measure the stellar kinematics and the reddening towards the stars. The 1D SDSS spectrum of SDSS J124754.95-033738.6 (figure \ref{f:sdss_1d_spectrum_and_fit}) is dominated by A-type stars and shows strong Balmer absorption lines, a clear signature of post starburst E+A galaxies.
We used the 1D SDSS spectrum to measure the equivalent width (EW) of the H$\delta$ absorption line and found EW(H$\delta$)=6.7\AA, which is above the typical 5\AA\, threshold used to select post starburst E+A galaxies (e.g., \citealt{goto07, alatalo16a}). We then fitted a stellar population synthesis model using the {\sc python} implementation of the Penalized Pixel-Fitting stellar kinematics extraction code ({\sc pPXF}; \citealt{cappellari12}). {\sc pPXF} is a public code for the extraction of the stellar kinematics and stellar population from absorption line spectra of galaxies \citep{cappellari04}. Its output includes the best-fitting stellar model, the relative contribution of stars with different ages, the stellar velocity dispersion, and the dust reddening towards the stars (assuming a \citealt{calzetti00} extinction law). The code uses the MILES library, which contains single stellar population synthesis models that cover the entire optical wavelength range with a FWHM resolution of 2.3\AA\,\citep{vazdekis10}. We used models produced with the Padova 2000 stellar isochrones assuming a Chabrier initial mass function (IMF; \citealt{chabrier03}). The stellar ages range from 0.03 to 14 Gyr, thus allowing the analysis of systems with different star formation histories. The best-fitting stellar model is marked in red in figure \ref{f:sdss_1d_spectrum_and_fit}. Its stellar age distribution consists of two star formation episodes: a recent short episode that started 70 Myrs ago and was quenched 30 Myrs ago, and an older long episode that started 14 Gyrs ago and ended 6 Gyrs ago. The older episode is poorly constrained and we find similarly reasonable fits when forcing the code to use templates with different ages within the range 3--10 Gyrs. According to the best-fitting model, the stellar velocity dispersion is 185 km/sec and the dust reddening towards the stars is $E_{\mathrm{B-V}} = 0.188$ mag. The total stellar mass is $\mathrm{M_{*}} = 10^{10.8}\,M_{\odot}$, which is consistent with other estimates (e.g., \citealt{chen12}). About 2\% of this mass was formed during the recent burst. The integrated stellar continuum emission (using the MUSE cube; figure \ref{f:muse_continuum_emission}) indicates that the system is a spiral galaxy, with at least two spiral arms which extend to distances of more than 20 kpc from the center of the galaxy. The ordered and symmetric stellar distribution in SDSS J124754.95-033738.6 argues against the system being a product of a major merger, since such systems usually show disturbed and asymmetric stellar distributions. One might argue that this is at odds with our spectral classification of this galaxy as an E+A galaxy, since many E+A galaxies show tidal features or close companions, which are suggestive of a late-stage merger (e.g., \citealt{canalizo00, yang04, goto04, cales11}). However, \citet{cales11}, who studied a sample of 29 post starburst quasars using images from the Hubble Space Telescope, found an equal number of spiral (13/29) and early-type (13/29) host galaxies. Thus, post starburst E+A spectral signatures can also be present in spiral galaxies that show no sign of a major morphological disturbance, where the starburst might have been triggered by a minor merger (see section \ref{s:gas_props} and figure \ref{f:ionized_gas_emission} for an additional indication of a minor merger). To obtain the spatially-resolved stellar properties of the system, we fitted stellar population synthesis models to the individual spaxels in the MUSE data cube.
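A schematic per-spaxel fit is sketched below; the template preparation is omitted, and the keyword choices (moments, polynomial degrees, initial guesses) are indicative rather than a record of our exact settings.
\begin{verbatim}
import numpy as np
from ppxf.ppxf import ppxf

# `templates` are MILES SSP spectra, log-rebinned to the same velocity
# scale `velscale` as the (log-rebinned) spaxel spectrum.
def fit_spaxel(spectrum, noise, templates, velscale):
    start = [0.0, 150.0]            # initial (V, sigma) guess, km/s
    pp = ppxf(templates, spectrum, noise, velscale, start,
              moments=2,            # fit V and sigma
              degree=-1,            # no additive polynomials when
              reddening=0.1)        # fitting a Calzetti-like E(B-V)
    # kinematics, SSP weights (-> ages), best-fitting E(B-V)
    return pp.sol, pp.weights, pp.reddening
\end{verbatim}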
In figure \ref{f:2d_stellar_properties} we present the results of our {\sc pPXF} fits. The top row shows the integrated stellar continuum in the rest-frame wavelength range 5300--5700\AA, the mass-weighted age of the best-fitting stellar model, and the mass-weighted age of stars younger than 1 Gyr. Similar to the 1D SDSS spectrum, all spaxels show contributions from two separate star formation episodes, one considerably younger than 1 Gyr and one older than 1 Gyr. The top middle panel in figure \ref{f:2d_stellar_properties} represents the distribution of all stellar ages, and is thus more sensitive to the older stellar population. The top right panel in the figure represents the distribution of stars younger than 1 Gyr, and is therefore sensitive only to the recent star formation episode. The bottom panels of figure \ref{f:2d_stellar_properties} show the dust reddening towards the stars, the stellar velocity with respect to the systemic velocity, and the stellar velocity dispersion. We also use red contours to mark steps of 0.5 dex in stellar continuum emission in all the panels. To facilitate the comparison between the different diagrams, we use the same contours in most of the figures shown in the paper. The stellar continuum emission shows no evidence for ongoing SF throughout the FOV. However, in section \ref{s:gas_props} we perform emission line decomposition and show that the emission line ratios in the central spaxels are consistent with composite spectra, suggesting that the gas is ionized by a combination of AGN and SF radiation. To estimate the SFR and its relative contribution to the ionization of the different lines, we use the prescription by \citet{wild10} and the narrow kinematic components of the emission lines: [OIII], H$\beta$, [NII], and H$\alpha$ (see additional details in section \ref{s:gas_props}). We find that the relative contribution of SF to the H$\alpha$ luminosity ranges from 0.6 to 1 in the few central spaxels. Assuming a Kroupa IMF and using the galaxy-integrated dust-corrected $\mathrm{L_{SF}(H\alpha)}$, we find a SFR of $2.2\,\mathrm{M_{\odot}/yr}$. Although non-negligible, this SFR places the system 0.4 dex (roughly 1$\sigma$) below the SF main sequence at z=0.1 \citep{whitaker12}. Without additional mid- and far-infrared data, we cannot say anything about the infrared properties of the outflow \citep{baron19a} or the possibility of heavily obscured SF regions. \subsection{AGN type and bolometric luminosity}\label{s:agn_props} The EPIC PN spectrum of SDSS J124754.95-033738.6 is a low-SNR spectrum that is consistent with a typical unobscured type I AGN. To fit the X-ray spectrum, we used {\sc XSPEC} \citep{arnaud96} version 12.8.2. To obtain a similar SNR for the different spectral bins, we grouped the PI channels of the source spectra using the task \emph{grppha} of the {\sc FTOOLS} package \citep{blackburn95} version 6.10. We fitted the spectrum with an absorbed power-law with photon index $\Gamma$ and an obscuring column of $\mathrm{N_{H}}$. The best-fitting power-law has $\Gamma=1.801 \pm 0.094$ and a normalization of $(1.66 \pm 0.20) \times 10^{-5}\,\mathrm{photons\,cm^{-2}\,sec^{-1}\,keV^{-1}}$. The best-fitting column density is $\mathrm{N_{H} = (5.6 \pm 3.6) \times 10^{20}\, cm^{-2}}$, consistent with Galactic obscuration. Using the best-fitting model, we get $\mathrm{f(2-10\,keV) = 3 \times 10^{-13}\, erg/cm^{2}/sec}$, corresponding to $\mathrm{L(2-10\,keV) = 6 \times 10^{42}\, erg/sec}$. 
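For reference, the quoted flux and luminosity are related through the standard relation $L = 4\pi d_{L}^{2} f$; adopting a luminosity distance of $d_{L} \approx 400$ Mpc $\approx 1.2 \times 10^{27}$ cm (an assumed value, appropriate for a source at $z \approx 0.09$):
\begin{equation*}
\mathrm{L(2-10\,keV)} = 4\pi d_{L}^{2}\, \mathrm{f(2-10\,keV)} \approx 4\pi \left(1.2 \times 10^{27}\,\mathrm{cm}\right)^{2} \times 3 \times 10^{-13}\,\mathrm{\frac{erg}{cm^{2}\,sec}} \approx 6 \times 10^{42}\,\mathrm{erg/sec}.
\end{equation*}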
We estimate the bolometric luminosity of the AGN as follows. The accretion disk (AD) calculations by \citet{netzer19} suggest a 2--10 keV bolometric correction factor in the range 10--30, with a mean value of $\mathrm{k_{bol}(2-10\,keV)} \sim 11$. However, the resulting bolometric luminosity is an order of magnitude lower than the AGN luminosity derived from the narrow emission lines [OIII], H$\beta$, and [OI] (see \citealt{netzer09}, and additional details in section \ref{s:origin_of_narrow_lines}). As a compromise, we choose $\mathrm{k_{bol}(2-10\,keV)} \sim 25$, which is within the range suggested by \citet{netzer19} and is also consistent with possible variations of the X-ray continuum, given that the emission line luminosity represents the long-term average of the optical-UV and X-ray continuum. This gives $\mathrm{L_{AGN} = 1.5 \times 10^{44}\, erg/sec}$, which is probably correct to within a factor of 5. The optical spectrum of SDSS J124754.95-033738.6 is dominated by stellar continuum, with no evidence for AGN continuum emission at these wavelengths. We considered the possibility of a Compton-thick type II AGN, where the observed spectrum is dominated by reflection. However, the reddening-corrected narrow [OIII] luminosity is $\mathrm{L([OIII]) = 3 \times 10^{41}\,\mathrm{erg/sec}}$, resulting in a luminosity ratio $\mathrm{L_{2-10\,keV}/L_{[OIII]} \sim 20}$. This contradicts the results in figure 1 of \citet{bassani99}, which shows that Compton-thick type II AGN have $\mathrm{L_{2-10\,keV}/L_{[OIII]} < 1}$. We thus conclude that this source is consistent with a typical unobscured type I AGN. The reddening we derive towards the stars and gas in the central spaxel is $\mathrm{E}_{B-V} = 0.5$--$0.8$ mag (see details in section \ref{s:gas_props}), which, for a standard gas-to-dust ratio, would give $\mathrm{N_{H} > 3 \times 10^{21}\, cm^{-2}}$. Furthermore, \citet{maiolino01} suggested that the gas-to-dust ratio in some AGN is larger than the standard value, leading to an even larger obscuring column (see also \citealt{burtscher16}, who suggested that this is due to additional obscuration by BLR clouds, and references therein). These obscuring columns are inconsistent with the one derived from the X-ray fitting. The different columns might be related to different components originating from different lines of sight: since the diameter of the central spaxel is $\sim$1 kpc, it is possible that the vicinity of the BH is unobscured, while the stars and gas in the central 1 kpc are obscured. Since the source is classified as an unobscured type I AGN, we expect to detect blue AD continuum and broad Balmer emission lines. We calculated the expected contributions from these components using the derived X-ray luminosity and the expression given in \citet{netzer19}. We find $\mathrm{k_{bol}(5100\,\AA)}=28$. Therefore, the AD continuum emission is expected to be $\mathrm{\nu L_{\nu} (5100\AA) = 5.3 \times 10^{42}\, erg/sec}$. The dust-corrected stellar continuum luminosity at the central spaxel at $\lambda=5100$\AA\, is larger than this value by an order of magnitude. This suggests that the AD cannot be detected even at the bluest wavelengths of the optical spectrum. Next, we examine whether we expect to detect the wings of a broad H$\alpha$ line that originates in the BLR. We assume that the broad H$\alpha$ is described by a Gaussian profile with a typical FWHM=4\,500 km/sec\footnote{The conclusions remain unchanged for FWHM=3\,000 km/sec.}. 
We further assume that the emission line is obscured by dust, with a reddening of $\mathrm{A_{V}}=0.3$ mag, consistent with the obscuring column derived from the X-ray fitting. We do not detect any broad wings that originate from a broad H$\alpha$ line in the central source. This places an upper limit on the broad H$\alpha$ luminosity of $3 \times 10^{41}$ erg/sec. We compare this limit to the expected line luminosities using scaling relations from the literature. These relations tie the optical continuum emission or the X-ray luminosity to the broad H$\alpha$ luminosity. Using the relations by \citet{greene05}, \citet{panessa06}, and \citet{stern12}, we expect the broad H$\alpha$ luminosity to be $5 \times 10^{40}$, $1 \times 10^{41}$, or $5 \times 10^{41}$ erg/sec, respectively. The upper limit we deduce is consistent with most of the relations cited above, suggesting that the broad lines are not detected due to the significant stellar continuum emission. To summarize, we suggest that the system hosts an unobscured type I AGN. We do not detect blue AD continuum or broad Balmer emission lines at optical wavelengths due to the significant stellar radiation in the central spaxel of our source. \begin{figure} \includegraphics[width=3.5in]{figures_for_paper/ionized_gas_emission.pdf} \caption{\textbf{Ionized gas and stellar emission distribution.} The colors represent the integrated line emission in the rest-frame wavelength range 6530--6600\AA, which includes the H$\alpha$ and [NII]~$\lambda \lambda$ 6548,6584\AA\, emission lines. The red contours represent steps of 0.5 dex in stellar continuum emission (see figure \ref{f:2d_stellar_properties}). The green contours represent different integrated H$\alpha$+[NII] flux values, ranging from $10^{5}$ to $10^{3}$ in logarithmic steps of 0.5 dex, with physical units of $10^{-20}\, \mathrm{erg/(sec\, cm^{2} \AA)}$. Clearly, the ionized gas is disturbed and does not follow the stellar emission.}\label{f:ionized_gas_emission} \end{figure} \subsection{Gas properties}\label{s:gas_props} In this section we study the properties of the ionized and neutral gas in the system. In section \ref{s:model_inedpendent_props} we present general observed properties. In particular, we present evidence for a disturbed ionized gas morphology that is suggestive of a minor merger, and evidence of spatially-resolved NaID emission and absorption. We further discuss the kinematic connection between the neutral and the ionized gas phases. In section \ref{s:emis_decomp} we present our method of emission and absorption line decomposition, and suggest a novel treatment of the NaID emission and absorption complex. In section \ref{s:derived_gas_props} we present the derived ionized and neutral gas properties. For the ionized gas, we study its spatial distribution and kinematics, the main source of ionization, the gas ionization state, reddening, and electron density. For the neutral gas, we study its spatial extent and kinematics, its emission luminosity and absorption equivalent width (EW), and its optical depth and covering factor. Finally, in section \ref{s:connection_between_gas_phases} we discuss the connection between the neutral and ionized gas phases in the system. \subsubsection{General observed properties}\label{s:model_inedpendent_props} We detect various emission and absorption lines, tracing both the ionized and neutral gas phases, throughout the entire FOV. 
To examine the emission lines, we subtract the best-fitting stellar model from each spaxel, resulting in an emission line cube. In figure \ref{f:ionized_gas_emission} we show the integrated flux of the emission line cube in the rest-frame wavelength range 6530--6600\AA, which includes the H$\alpha$ and [NII]~$\lambda \lambda$ 6548,6584\AA\, emission lines. Contrary to the ordered and symmetric stellar distribution we found in section \ref{s:stellar_props}, the line emitting gas appears to be disturbed and asymmetric, with a clear tidal tail extending towards the north-east (top left). In figure \ref{f:oiii_and_hbeta_chanel_maps} in the appendix we show the integrated H$\beta$ and [OIII]~$\lambda \lambda$ 4959,5007\AA\, flux in different velocity channels, where we detect both narrow and broad emission lines throughout the FOV. The disturbed gas morphology observed in both of these diagrams reinforces the suggestion that SDSS J124754.95-033738.6 is an E+A galaxy that has experienced a minor merger in its recent past. The post starburst E+A spectral signatures might be the result of a star formation burst that was triggered during the accretion of a companion galaxy (see an additional example in \citealt{cheung16}). \begin{figure*} \includegraphics[width=0.95\textwidth]{figures_for_paper/naid_emission_example_1.pdf} \caption{\textbf{Evidence of NaID emission.} Four representative spaxels (out of a total of 80 spaxels) in which we detect NaID emission. Each row represents a different spaxel. The left panels show the observed spectra, centered around the stellar MgIb line, and their best-fitting stellar population synthesis models. The middle panels show the observed spectra in the NaID region with their corresponding best-fitting stellar models. The right panels show the spectra after subtracting the best-fitting stellar models, centered around the NaID region. We mark the systemic wavelengths of the HeI emission and NaID absorption in the right panels. }\label{f:naid_emission_example_1} \end{figure*} \begin{figure*} \includegraphics[width=1\textwidth]{figures_for_paper/naid_emission_example_2} \caption{\textbf{Spatial distribution of the NaID emission and absorption.} The different panels show different velocity channels, ranging from $-$1400 km/sec to $+$1400 km/sec. Since we integrate over the residual flux, emission lines are represented by positive values (purple colors) and absorption lines by negative values (orange colors). The NaID-emitting gas reaches very large velocities of $-$800 km/sec and $+$1400 km/sec. }\label{f:naid_emission_example_2} \end{figure*} We detect resonant NaID absorption and \emph{emission} throughout a large fraction of the FOV. Blueshifted NaID absorption that traces cool neutral outflows has been detected and analyzed in numerous systems with and without an AGN (e.g., \citealt{rupke05c, martin06, shih10, rupke13, cazzoli16, rupke17} and references therein). While such features have been predicted to exist \citep{prochaska11}, resonant NaID emission has only been detected in a handful of sources so far (\citealt{phillips93, rupke15, perna19}; see also \citealt{rubin10, rubin11} for detections of resonant MgII emission). SDSS J124754.95-033738.6 is the third reported system in which this emission is spatially-resolved. In figure \ref{f:naid_emission_example_1} we show four representative spaxels (out of a total of 80) in which we detect NaID emission. 
The left panels show the observed spectra, centered around the stellar MgIb line, and their best-fitting stellar population synthesis models. The stellar population synthesis model fits the stellar continuum and the stellar absorption lines very well. The middle panels show the observed spectra in the NaID region, where there is a significant disagreement between the spectrum and the best-fitting stellar model. This disagreement remains unchanged for different choices of stellar model parameters, such as the stellar ages, the stellar initial mass function, and the isochrones. Since the strengths of the stellar MgIb and stellar NaID lines are proportional to each other (\citealt{heckman00, rupke02}), the agreement in the MgIb region and the disagreement in the NaID region suggest an additional NaID absorption and emission component. In the right panels of figure \ref{f:naid_emission_example_1} we show the spectra after subtracting the best-fitting stellar models, and mark the systemic velocities of the HeI emission line and the NaID absorption lines. Clearly, there is redshifted NaID emission, mixed with broad blueshifted NaID absorption, that cannot be mistaken for HeI emission. In section \ref{s:emis_decomp} below, we perform a global decomposition of the emission and absorption into all of these components. It is informative to examine the spatial distribution of the NaID emission and absorption with respect to the stellar continuum, in a way that is independent of our adopted line decomposition technique. To do so, we subtract the best-fitting stellar models from all the spaxels in our cube, resulting in an emission line cube. We then integrate the residual flux in wavelength bins that correspond to different velocity channels with respect to the systemic NaID absorption. In figure \ref{f:naid_emission_example_2} we show the result of this process, for velocity channels in the range $-$1400 km/sec to $+$1400 km/sec. Since we integrate over the residual flux, emission lines are represented by positive integrated flux values and absorption lines are represented by negative integrated flux values\footnote{We note that such a representation is not physically meaningful, since an absorption line is a multiplicative, rather than additive, effect, and the negative flux values have no clear physical meaning. A physically-meaningful representation requires emission and absorption line decomposition, which requires model assumptions and is subject to various fitting degeneracies.}. Inspecting the panels from left to right and from top to bottom reveals: HeI emission close to the systemic velocity (panels 1--2), blueshifted NaID absorption (panels 3--4), a combination of NaID emission and absorption close to the systemic velocity (panels 5--6), and redshifted NaID emission (panels 7--10). Finally, one can see that the NaID absorption and emission occupy roughly the same spatial regions in the galaxy, suggesting that most of the spaxels in the cube contain a combination of NaID absorption and emission, which we further discuss below. Most of the spaxels show a combination of narrow and broad HeI emission (similar to the other ionized emission lines), narrow and broad NaID emission, and broad blueshifted NaID absorption (see e.g. figure \ref{f:emis_and_abs_fitting_example_neutral_only_with_explanations}). We also find a surprising similarity between the kinematics of the different NaID components and the kinematics of the different ionized emission components. 
We find similar velocities and velocity dispersions for the narrow NaID emission lines and the narrow ionized lines. The broad blueshifted NaID absorption shows a kinematic profile similar to the blueshifted wing of the broad ionized emission lines, and the broad redshifted NaID emission shows a profile similar to the redshifted wing of the broad ionized emission lines. These similarities suggest that the NaID and the ionized lines originate from the \emph{same} outflowing clouds, which are distributed in a double-cone or an outflowing shell-like geometry. In such a case, the approaching side of the outflow will contribute blueshifted ionized emission and blueshifted NaID absorption, and the receding part of the outflow, if not too extincted by dust, will contribute redshifted ionized emission (which we observe) and redshifted NaID emission. Here, and in other cases where NaID is observed in emission, the redshifted NaID emission is the result of absorption of continuum photons by NaI atoms in the receding side of the outflow, followed by isotropic reemission. Such photons are redshifted with respect to the systemic velocity and are not absorbed by NaI atoms in the approaching side of the outflow (e.g., \citealt{prochaska11}). This results in a classical P-Cygni profile. Therefore, the observed NaID emission and absorption profile in our source suggests a neutral gas outflow, where we observe both the approaching gas through blueshifted absorption and the receding gas through redshifted emission. In this configuration, the gas that produces the blueshifted NaID absorption is \emph{in front} of the gas that produces the redshifted emission. Since the neutral and ionized lines appear to originate from the same outflowing clouds, this also suggests that the gas that produces the NaID absorption is \emph{in front} of the ionized gas that produces the redshifted emission. We use these observations in section \ref{s:emis_decomp} below to construct a model for the emission and absorption in this system. \subsubsection{Emission and absorption line decomposition}\label{s:emis_decomp} We detect ionized line emission in 259 spaxels throughout the FOV. Out of these, we detect additional broad ionized kinematic components in 73 spaxels, and NaID emission and absorption in 80 spaxels. The MUSE rest-frame wavelength range allows us to observe the following ionized emission lines: $\mathrm{H\beta}$~$\lambda$ 4861\AA, $\mathrm{HeI}$~$\lambda$ 5876\AA, $\mathrm{H\alpha}$~$\lambda$ 6563\AA, [OIII]~$\lambda \lambda$ 4959,5007\AA, [OI]~$\lambda \lambda$ 6300,6363\AA, [NII]~$\lambda \lambda$ 6548,6584\AA, and [SII]~$\lambda \lambda$ 6717,6731\AA\, (hereafter $\mathrm{H\beta}$, $\mathrm{HeI}$, $\mathrm{H\alpha}$, $\mathrm{[OIII]}$, $\mathrm{[OI]}$, $\mathrm{[NII]}$, and $\mathrm{[SII]}$). We start our emission line decomposition with the H$\alpha$ and [NII] emission lines, since these are the strongest emission lines we observe in our emission line cube\footnote{Many studies start the emission line decomposition from the [OIII] line, since it is one of the strongest optical emission lines and since it is not blended. In our source, the [OIII] emission line is much weaker than the H$\alpha$ line due to significant dust extinction, and thus its SNR is smaller than that of the H$\alpha$ line.}. We model each of the emission lines using one or two Gaussians, where the first represents the narrow kinematic component and the second represents the broader kinematic component. 
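A minimal sketch of such a decomposition for the H$\alpha$+[NII] complex is given below in {\sc python}, using the publicly available {\sc lmfit} package; the package choice and the parameter bounds are illustrative, and this is not the exact implementation used in our analysis. The kinematic ties and the fixed [NII] intensity ratio anticipate the constraints described next, and \texttt{wave}, \texttt{flux}, and \texttt{err} are placeholders for a single stellar-subtracted spaxel spectrum.
\begin{verbatim}
from lmfit.models import GaussianModel

C = 2.998e5                                    # speed of light [km/sec]
lines = {'ha': 6562.8, 'n2a': 6548.0, 'n2b': 6583.5}  # rest-frame [A]

# one narrow Gaussian per line; all lines share a single velocity
# and velocity dispersion through parameter ties ('expr')
model = (GaussianModel(prefix='ha_') + GaussianModel(prefix='n2a_')
         + GaussianModel(prefix='n2b_'))
pars = model.make_params()
pars.add('v', value=0.0, min=-500, max=500)    # velocity [km/sec]
pars.add('disp', value=100.0, min=20, max=500) # dispersion [km/sec]
for name, lam in lines.items():
    pars[name + '_center'].set(expr='%.1f*(1 + v/%.3e)' % (lam, C))
    pars[name + '_sigma'].set(expr='%.1f*disp/%.3e' % (lam, C))
# fix the [NII] doublet to its theoretical ~1:3 intensity ratio
pars['n2a_amplitude'].set(expr='n2b_amplitude/3')

result = model.fit(flux, pars, x=wave, weights=1.0/err)
\end{verbatim}
A broad component is added by repeating the same construction with a second set of tied Gaussians, and is retained only if it passes the significance criterion described below.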
We tie the central wavelengths and widths of the narrow Gaussians to have the same velocity and velocity dispersion, and do the same for the broader Gaussians. We force the [NII] intensity ratio to its theoretical value. The broad kinematic component is kept only if its flux in H$\alpha$ and [NII] is detected at more than $3\sigma$. Otherwise, we perform an additional fit with a single narrow kinematic component. Once we obtain the best-fitting model for the H$\alpha$+[NII] complex, we use the best-fitting parameters to constrain the fit of the other ionized emission lines. We fit narrow and broad (if present in the H$\alpha$+[NII] fit) kinematic components to the H$\beta$, [OI], [OIII], and [SII] lines, where we force their central wavelengths and widths to match the velocities and velocity dispersions we found for the H$\alpha$ and [NII] lines. We further force the [OIII] intensity ratio to its theoretical value, and limit the [SII] intensity ratio, $\mathrm{[SII]\lambda 6717\AA / [SII]\lambda 6731\AA}$, to the range 0.44--1.44. We show examples of the best-fitting models in figure \ref{f:emis_and_abs_fitting_example_ionized_only} in the appendix, where the total model is marked with a red line, and the narrow and broad kinematic components are marked with green and blue lines, respectively. One can see that the two-Gaussian model provides an adequate representation of the ionized emission lines. Out of the 259 spaxels we fitted, 186 require only one kinematic component and 73 require two kinematic components. One can see in figure \ref{f:emis_and_abs_fitting_example_ionized_only} that the broad ionized emission lines often show both redshifted and blueshifted wings (with respect to the narrow kinematic component), suggesting that we detect both the approaching and the receding sides of the outflow. One could, in principle, fit these spectra with three kinematic components: one that represents the narrow core of the line, and two that represent the blueshifted and redshifted wings of the line, which are associated with the two sides of the outflow. We attempted such fits and found that the model is too complex, with the different components being somewhat degenerate with each other. In particular, the best-fitting three-component models showed a large variation in emission line ratios and widths between neighboring spaxels, which we believe is not physical. In the case of two kinematic components (narrow and broad), we found a continuous change of the different line properties (such as line ratios and line widths) across spaxels, without imposing such constraints explicitly. We therefore chose the two-component model over the three-component one. Next, we model the HeI+NaID complex. The observed spectrum in this wavelength range includes contributions from the stellar continuum (which contributes to the NaID absorption), narrow and broad HeI emission, narrow and broad redshifted NaID emission, and broad blueshifted NaID absorption. 
The full model can be expressed as: \begin{equation}\label{eq:1} \begin{split} & \mathrm{f_{total}(\lambda) = \Big[ f_{stars}(\lambda) + f_{HeI}(\lambda) + f_{narrow\,NaID\,emis}(\lambda) + } \\ & \mathrm{f_{broad\,NaID\,emis}(\lambda) \Big] \times f_{broad\,NaID\,abs}(\lambda)}, \end{split} \end{equation} where $\mathrm{f_{stars}(\lambda)}$ is the stellar continuum, $\mathrm{f_{HeI}(\lambda)}$ is the narrow and broad HeI emission, $\mathrm{f_{narrow\,NaID\,emis}(\lambda)}$ is the narrow NaID emission, $\mathrm{f_{broad\,NaID\,emis}(\lambda)}$ is the broad NaID emission, and $\mathrm{f_{broad\,NaID\,abs}(\lambda)}$ is the broad NaID absorption. In our adopted model, the NaID absorption affects all the emission components in the system. This is justified by our observation that the gas that produces the blueshifted NaID absorption is in front of the gas that produces the redshifted NaID and ionized emission lines (see section \ref{s:model_inedpendent_props}). As written, this model is too complex for its parameters to be constrained by our observations. In particular, we expect various parameters to be degenerate with each other (e.g., the strength of the HeI emission is degenerate with the strength of the blueshifted NaID absorption). We therefore made several simplifying assumptions that result in a simpler model with fewer free parameters. We use the best-fitting stellar population synthesis model (see section \ref{s:stellar_props}) as the stellar model, $\mathrm{f_{stars}(\lambda)}$. We model the narrow+broad HeI emission using the best-fitting parameters from the H$\alpha$ line fitting: the velocity and the velocity dispersion of the HeI line are taken to be exactly equal to those of the H$\alpha$ line, which is a very reasonable assumption for these two recombination lines. For case-B recombination, the HeI to H$\alpha$ intensity ratio depends only slightly on the electron temperature, and is 0.033 for $T_{e}=10^{4}$ K \citep{osterbrock06, draine11}. Therefore, the narrow+broad HeI model, $\mathrm{f_{HeI}(\lambda)}$, is determined completely by the best-fitting narrow+broad H$\alpha$ line. Thus, $\mathrm{f_{stars}(\lambda)}$ and $\mathrm{f_{HeI}(\lambda)}$ are determined by other observations and have no free parameters. \begin{figure*} \includegraphics[width=0.9\textwidth]{figures_for_paper/naid_fitting_explained.pdf} \caption{\textbf{Illustration of the process used to construct the NaID absorption and emission templates.} The left panel shows an example of a best-fitting emission line profile of one of the ionized lines (black dashed line), which is a combination of a narrow (green line) and a broad (cyan and blue line) kinematic component. The broad emission component is split into its redshifted (blue line) and blueshifted (cyan line) parts with respect to the velocity threshold used to model the NaID emission and absorption. These two represent the receding and approaching sides of the ionized outflow, respectively. The right panel shows the resulting templates for the NaID emission and absorption, where the narrow NaID emission is marked with green, the broad NaID emission is marked with blue, and the broad NaID absorption is marked with cyan. The red dotted line in both panels represents the velocity threshold that is used to split the ionized emission profile into a template P-Cygni-like profile for the broad NaID emission and absorption (see the text for additional details). }\label{f:naid_fitting_explained} \end{figure*} Next, we model the various NaID components. 
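Before doing so, we note for reference that the HeI template described above is fully specified by the H$\alpha$ fit: writing both profiles as functions of velocity relative to their respective rest wavelengths,
\begin{equation*}
\mathrm{f_{HeI}(v) = 0.033\, f_{H\alpha}(v)},
\end{equation*}
with the narrow and broad components inherited directly from the best-fitting narrow and broad H$\alpha$ Gaussians.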
As noted previously, we find a surprising similarity between the kinematics of the narrow and broad NaID and the kinematics of the narrow and broad ionized emission lines. If the neutral and the ionized phases are indeed connected in our source, we can use the best-fitting kinematic parameters of the ionized emission lines to constrain the kinematics of the NaID absorption and emission. We visualize this process in figure \ref{f:naid_fitting_explained}, where the left panel shows a best-fitting profile for the narrow and broad ionized emission, and the right panel shows the NaID emission and absorption templates built using the ionized emission kinematics. The full NaID profile is a combination of two such templates, separated by the appropriate wavelength difference of the two NaID doublet lines. The narrow NaID emission, which is marked with a green line in figure \ref{f:naid_fitting_explained}, is given by: \begin{equation}\label{eq:2} \begin{split} \mathrm{f_{narrow\,NaID}(\lambda)} = & \, \mathrm{A_{n,K}\, e^{-(\lambda - \lambda_{n,K})^2 / 2 \sigma_{n}^2}} \\ & + \mathrm{2 A_{n,K}\, e^{-(\lambda - \lambda_{n,H})^2 / 2 \sigma_{n}^2}}, \end{split} \end{equation} where $\lambda_{n,K}$ and $\lambda_{n,H}$ are the central wavelengths of the narrow NaID $K$ and $H$ components, which are determined from the narrow ionized emission line velocity with respect to systemic, and $\mathrm{\sigma_{n}}$ is the velocity dispersion of the narrow emission lines, which is the best-fitting velocity dispersion of the narrow ionized lines\footnote{The assumption that the NaID emission shares similar kinematics with the H$\alpha$ emission implies that the cool neutral gas and the warm ionized gas share the same kinematics. While this is clearly what we observe in this particular system, it is certainly not a general statement, and different gas phases can show very different kinematics (see e.g., \citealt{rupke05c, rupke13, cazzoli16, rupke17}).}. The intensity ratio of the two NaID doublet lines can range between 1 and 2, corresponding to optically-thick and optically-thin gas, respectively. Throughout our experiments, when we let the intensity ratio vary during the fit, we found best-fitting values that are close to 2. We therefore force the intensity ratio of the two lines to be 2. Thus, our model of the narrow NaID emission, $\mathrm{f_{narrow\,NaID}(\lambda)}$, has only one free parameter, which is the amplitude of the NaID~$\lambda$ 5897\AA\, component, $\mathrm{A_{n,K}}$. We note that the narrow NaID emission shows two well-resolved doublet lines, and thus this component is not significantly degenerate with the broad NaID emission we describe below. To model the broad NaID emission and absorption, we use the best-fitting broad emission line profile, which includes contributions from both the receding and approaching sides of the outflow, and convert it into a template P-Cygni-like profile for the NaID absorption and emission. The broad NaID absorption and emission templates are constructed by splitting the broad emission line into two parts using a velocity threshold, marked with red in figure \ref{f:naid_fitting_explained}. The redshifted part of the emission line (with respect to the velocity threshold) serves as the broad redshifted NaID emission template; both are marked with a blue line in figure \ref{f:naid_fitting_explained}. 
The blueshifted part of the emission line (with respect to the \emph{same} velocity threshold) serves as the broad blueshifted NaID absorption template; both are marked with a cyan line in figure \ref{f:naid_fitting_explained}. The velocity threshold used for the splitting is, in principle, a free parameter of the model. Throughout our experiments we found that the best-fitting velocity threshold is close to the velocity of the narrow emission. We therefore force the velocity threshold to be equal to the velocity of the narrow emission, which is given by the best-fitting velocity of the narrow H$\alpha$ emission line. \begin{figure*} \includegraphics[width=1\textwidth]{figures_for_paper/emis_and_abs_fitting_example_neutral_only.pdf} \caption{\textbf{Three representative spectra around the NaID region and their best-fitting models according to equation \ref{eq:1}.} Each row represents one spaxel in the FOV. The first column shows the observed spectrum (dashed black line), the best-fitting total model $\mathrm{f_{total}(\lambda)}$ (red line), and the stellar model $\mathrm{f_{stars}(\lambda)}$ (grey dotted line) of this spaxel. To illustrate the individual best-fitting emission and absorption components, it is more convenient to subtract the stellar model from the observed spectrum. The second column shows the observed spectrum after the stellar model subtraction (black solid line), and presents the individual best-fitting emission and absorption components: narrow NaID emission (green solid line), broad NaID emission (blue solid line), narrow and broad HeI emission (grey solid line), and broad NaID absorption (cyan solid line). The third column shows the sum of the different components from the second column, where the observed spectrum after the stellar model subtraction is marked with a black solid line. The contribution of all the emission lines is marked with a grey dotted line, and the contribution of all the emission and absorption components, which is essentially the best-fitting model $\mathrm{f_{total}(\lambda) - f_{stars}(\lambda)}$, is marked with a red solid line. }\label{f:emis_and_abs_fitting_example_neutral_only_with_explanations} \end{figure*} The broad NaID emission, which is marked with a blue line in figure \ref{f:naid_fitting_explained}, is given by: \begin{equation}\label{eq:3} \begin{split} \mathrm{f_{broad\,NaID\,emis}(\lambda)} = & \begin{cases} \mathrm{A_{b,K}\, e^{-(\lambda - \lambda_{b,K})^2 / 2 \sigma_{b}^2}} & \lambda > \lambda_{n,K} \\ 0 & \mathrm{otherwise} \end{cases} \\ + & \begin{cases} \mathrm{2 A_{b,K}\, e^{-(\lambda - \lambda_{b,H})^2 / 2 \sigma_{b}^2}} & \lambda > \lambda_{n,H} \\ 0 & \mathrm{otherwise}, \end{cases} \end{split} \end{equation} where $\lambda_{b,K}$ and $\lambda_{b,H}$ are the central wavelengths of the broad NaID components, which are determined from the broad emission line velocities, and $\lambda_{n,K}$ and $\lambda_{n,H}$ are the central wavelengths of the narrow NaID components, which are used as the velocity threshold. $\mathrm{\sigma_{b}}$ is the best-fitting velocity dispersion of the broad ionized lines, and $\mathrm{A_{b,K}}$ is the amplitude of the $\lambda_{K}=5897$ \AA\, Gaussian, which is a free parameter of the model. The broad NaID absorption is modeled as part of the P-Cygni profile we constructed, with the same velocity threshold we used for the broad NaID emission. We assume a Gaussian optical depth, which is a common choice when modeling the NaID absorption (\citealt{rupke05a, rupke15, perna19}). 
Therefore, the broad NaID absorption, which is marked with a cyan line in figure \ref{f:naid_fitting_explained}, is given by: \begin{equation}\label{eq:4} \mathrm{f_{broad\,NaID\,abs}(\lambda)} = \mathrm{e^{-\tau_{b, K}(\lambda) - \tau_{b, H}(\lambda)}}, \end{equation} and the optical depths are given by: \begin{equation}\label{eq:5} \mathrm{\tau_{b, K}(\lambda)} = \begin{cases} \mathrm{\tau_{0,K} e^{-(\lambda - \lambda_{b,K})^2 / 2 \sigma_{b}^2}} & \lambda < \lambda_{n,K} \\ 0 & \mathrm{otherwise} \end{cases} \end{equation} and: \begin{equation}\label{eq:6} \mathrm{\tau_{b, H}(\lambda)} = \begin{cases} \mathrm{2 \tau_{0,K} e^{-(\lambda - \lambda_{b,H})^2 / 2 \sigma_{b}^2}} & \lambda < \lambda_{n,H} \\ 0 & \mathrm{otherwise}, \end{cases} \end{equation} where $\mathrm{\tau_{0,K}}$ is the absorption optical depth at line center of the $\lambda_{K}=5897$ \AA\, component, which is a free parameter of the model. Here too, we forced the ratio of the optical depths at line center of the two NaID components to be 2. By doing so, we assume that the absorbing gas is optically-thin. This is justified by the facts that (1) when we allow the ratio to vary, the best-fitting ratio is close to 2, and (2) the best-fitting absorption optical depths, $\mathrm{\tau_{0,K}}$ and $\mathrm{\tau_{0,H}}$, are always smaller than 1. Several studies modeled the NaID absorption as a combination of clouds with different optical depths and different covering factors (see e.g., \citealt{rupke05a, rupke15, perna19}). We examined a more general model for the blueshifted NaID absorption, in which we allowed the covering factor to vary, and found that the best-fitting covering factor is always close to 1. In addition, in section \ref{s:derived_gas_props} below we examine the physical properties of the neutral gas, and show that the combination of NaID emission and absorption suggests a covering factor that is close to 1. Therefore, our final adopted model has only three free parameters: $\mathrm{A_{n,K}}$, $\mathrm{A_{b,K}}$, and $\mathrm{\tau_{0,K}}$. In figure \ref{f:emis_and_abs_fitting_example_neutral_only_with_explanations} we show the best-fitting models for three representative spectra, where we break the model into all of its individual components for clarity. Despite having only a few free parameters, the model tracks the observed spectra well. Our modeling of the NaID complex is not fully self-consistent. First, in a classical P-Cygni profile, the transition between the redshifted emission and the blueshifted absorption is expected to be gradual, while we have modeled it using step functions. This results in zero absorption (emission) above (below) a certain velocity threshold. This is a direct result of our model of the broad ionized emission lines, where we use a single broad Gaussian component to describe both the receding and the approaching sides of the outflow (rather than two separate kinematic components). This forces us to construct a P-Cygni-like profile for the NaID lines by splitting the broad ionized emission with step functions. A more complete approach would be to model the approaching and receding parts of the ionized outflow with two Gaussians, and to use each of the components to model the broad NaID absorption and emission separately. As discussed above, such a decomposition is not possible for our source. Second, we suggest (see section \ref{s:origin_of_narrow_lines}) that the narrow redshifted NaID emission originates in the receding part of the outflow as well. A symmetric outflow would then require an additional narrow blueshifted NaID absorption. 
We found no evidence for such a component, and thus did not include it in our model. Such a component may exist, but its contribution would be negligible compared with the broad blueshifted NaID absorption. \subsubsection{Derived ionized and neutral gas properties}\label{s:derived_gas_props} \begin{figure*} \includegraphics[width=0.99\textwidth]{figures_for_paper/ionized_line_properties_extended.pdf} \caption{\textbf{Derived ionized gas properties in SDSS J124754.95-033738.6.} The top row represents the narrow emission line properties: H$\alpha$ flux, dust reddening towards the emission lines, centroid velocity with respect to the systemic velocity, and velocity dispersion. The bottom row represents the broad emission line properties: H$\alpha$ flux, dust reddening towards the emission lines, and the minimal and maximal velocities in each spaxel, which are defined as $\Delta v - 2\sigma$ and $\Delta v + 2\sigma$, respectively. The red contours represent steps of 0.5 dex in stellar continuum emission (see figure \ref{f:2d_stellar_properties}). The black bar represents a distance of 10 kpc at the redshift of the system. While the narrow lines are distributed over tens of kpc, and their centroids roughly follow the stellar rotation pattern, the broad lines are detected out to 5 kpc and show very large positive and negative velocities, suggestive of a fast outflow. }\label{f:ionized_line_properties_extended} \end{figure*} Having decomposed the ionized and neutral emission and absorption, we can now examine the physical properties of the neutral and ionized gas. We start by presenting the properties of the ionized gas. In particular, we examine its spatial distribution and kinematics, its main source of photoionization, the dust reddening, and the electron density in the outflow. We then present the derived properties of the neutral gas, such as the emission line luminosity, absorption EW, optical depth, and covering factor. Finally, we discuss the observed connection between the NaID absorption and emission in this system. In figure \ref{f:ionized_line_properties_extended} we present several properties of the narrow and broad ionized emission lines. For the narrow lines, we show the integrated H$\alpha$ flux, the dust reddening towards the emission lines (see equation 1 in \citealt{baron19b}), the centroid velocity with respect to the systemic velocity, and the velocity dispersion. For the broad lines, we show the H$\alpha$ flux, the dust reddening towards the emission lines, and the minimal and maximal outflow velocities, which are defined as $\Delta v - 2\sigma$ and $\Delta v + 2\sigma$, respectively. Similar to figure \ref{f:ionized_gas_emission}, the narrow H$\alpha$ flux appears to be disturbed and asymmetric, and it extends to distances larger than 30 kpc. The broad H$\alpha$ flux appears to be more symmetric, and it extends to distances of 5 kpc from the center of the galaxy. Similar to other post starburst E+A galaxies with ionized outflows (\citealt{baron17b, baron18}), the dust reddening towards the narrow and broad lines is high, $\mathrm{E}_{B-V} \sim 0.6$ mag, which is higher than the dust reddening typically observed in type II AGN that host ionized outflows \citep{baron19b}. The centroid velocity of the narrow lines is consistent with rotation, and appears to be roughly similar to the rotation pattern of the stars (see figure \ref{f:2d_stellar_properties}). 
We find approximately the same rotational axis for the narrow emission lines and the stars, but we find some inconsistency in the velocity field at a distance of 6--10 kpc in the north-south (up-down) direction. This might suggest that some of the narrow emission lines are not emitted by stationary NLR-like gas, but rather by the outflowing gas, which we further discuss in section \ref{s:origin_of_narrow_lines}. The velocity dispersion of the narrow lines is rather low and uniform throughout the FOV, except for two regions with a somewhat lower velocity dispersion, both of which coincide with an upturn of a spiral arm. Since we fit a single broad Gaussian to describe both the approaching and the receding sides of the outflow, the centroid velocity and velocity dispersion of the broad lines are difficult to interpret. Instead, it is useful to examine the minimal and maximal velocities in each spaxel. The maximal positive and negative velocities are quite similar in all spaxels: $\sim \pm$1\,000 km/sec. The symmetry between the positive and negative maximal velocities suggests that if this is indeed a double-cone outflow, it is in a nearly face-on configuration. An alternative scenario is a full expanding shell, where the opening angle is close to $90^{\circ}$. However, in such a scenario, one would expect to find lower velocities in spaxels at the edges of the outflow due to projection effects. We find no evidence for this. In section \ref{s:outflow_geometry} we discuss the geometry of the outflow in detail, and summarize the evidence for a face-on double-cone outflow. Next, we examine the main source of ionizing radiation in the regions emitting the narrow and broad emission lines, as well as their degree of ionization. In figure \ref{f:emission_line_ratios_new} we show the narrow and the broad emission components on several standard line diagnostic diagrams. The narrow and broad kinematic components are consistent with AGN photoionization in the large majority of spaxels, with some contribution from HII regions in the central regions of the galaxy. We estimate the relative contribution of SF to the ionization of the narrow and broad emission lines using the prescription by \citet{wild10}. Outside the few central spaxels, we find a negligible contribution of SF to the ionization of the narrow lines, with a relative contribution of 0--0.2. For the broad lines, we find a contribution of roughly 0.3. We therefore conclude that the AGN is the main source of ionizing radiation for both the narrow and broad kinematic components in the majority of the spaxels, with a significant contribution from SF only in the central region. The degree of ionization of the gas can be roughly estimated using the observed $\mathrm{[OIII]/H\beta}$ line ratio, since this ratio strongly depends on the ionization parameter of the ionized gas (see \citealt{baron19b} and references therein). In particular, for the ionization parameters observed in this system, we expect the $\mathrm{[OIII]/H\beta}$ line ratio to be larger in more ionized regions and smaller in less ionized regions. Figure \ref{f:emission_line_ratios_new} shows a continuous change in the \emph{narrow} $\mathrm{[OIII]/H\beta}$ line ratio as a function of distance from the center of the galaxy, with smaller $\mathrm{[OIII]/H\beta}$ ratios close to the center and larger ratios farther out. 
This suggests that there is a continuous change in the degree of ionization of the line emitting gas, with lower ionization closer to the center of the galaxy and higher ionization farther out. Since the ionization parameter scales as $U \propto L/(n_{\mathrm{H}} r^{2})$, an ionization state that increases outwards requires the density to fall faster than $\mathrm{n_{H}(r) \propto r^{-2}}$. In contrast to the narrow lines, the \emph{broad} $\mathrm{[OIII]/H\beta}$ line ratio remains nearly constant and consistent with LINER-like emission throughout the FOV, suggesting that the degree of ionization of the gas that emits the broad lines is roughly the same in the different spaxels. Since the degree of ionization depends on the gas density and on the distance from the central ionizing source, a constant degree of ionization, for a constant gas density, implies that the gas that emits the broad lines is located at roughly the same distance from the central source in all spaxels. Finally, we estimated the ionization parameter of the narrow and broad kinematic components using the expression from \citet{baron19b}. The ionization parameter of the narrow line-emitting gas ranges from $\log{U} = -3.7$ in the center to $\log{U} = -3.2$ at 15 kpc. The ionization parameter of the broad line-emitting gas is roughly constant throughout the FOV at $\log{U} = -3.7$. These estimates are used in the photoionization models in section \ref{s:photoionization_models}. An alternative scenario to account for the radial variation in the narrow [OIII]/H$\beta$ ratio is a mixing of SF and AGN ionization throughout the FOV, where regions that are dominated by SF show lower [OIII]/H$\beta$ ratios, and regions dominated by the AGN show larger [OIII]/H$\beta$ ratios. We find this scenario to be unlikely since: (1) the relative contribution of SF to the ionization of the narrow lines is negligible outside the central region, and (2) a varying contribution of SF and AGN to the gas ionization results in diagonal ``mixing tracks'' on the BPT diagram, where both the [OIII]/H$\beta$ and [NII]/H$\alpha$ line ratios increase with increasing AGN contribution (see e.g., \citealt{wild10, davies14}). In our source we find a vertical trend, where the [OIII]/H$\beta$ ratio varies dramatically while the [NII]/H$\alpha$ ratio remains roughly constant. This is in line with the photoionization models of \citet{baron19b}, which show that for gas that is photoionized by an AGN, a variation of the ionization parameter results in a vertical trend of increasing [OIII]/H$\beta$ ratio at roughly constant [NII]/H$\alpha$ ratio. We therefore conclude that the radial variation in the narrow [OIII]/H$\beta$ ratio is most likely due to a variation of the ionization parameter. \begin{figure*} \includegraphics[width=1\textwidth]{figures_for_paper/emission_line_ratios_new.pdf} \caption{\textbf{Line diagnostic diagrams showing the main source of photoionization in the warm ionized gas, and its degree of ionization.} The left column shows the emission line ratios $\mathrm{\log{} [OIII]/H\beta}$ versus $\mathrm{\log{} [NII]/H\alpha}$ (top) and $\mathrm{\log{} [OIII]/H\beta}$ versus $\mathrm{\log{} [SII]/H\alpha}$ (bottom). We mark the two criteria that are used to separate star forming from AGN-dominated galaxies (\citealt{kewley01, kauff03a}; Ke01 and Ka03, respectively), and the two LINER-Seyfert separating lines from \citet[CF10]{cidfernandes10} and \citet[Ke06]{kewley06}. We mark the narrow lines with circles and the broad lines with triangles. 
The middle and right columns show the spaxels in which we detected narrow (middle column) and broad (right column) kinematic components, color-coded according to their location in the line-diagnostic diagrams. The narrow and broad kinematic components are consistent with AGN photoionization in the large majority of spaxels, with some contribution from HII regions in the central regions of the galaxy. The narrow $\mathrm{[OIII]/H\beta}$ line ratio suggests that the ionization of the gas increases with increasing distance. The roughly constant broad $\mathrm{[OIII]/H\beta}$ line ratio suggests a constant ionization parameter, and hence gas that is at a similar distance from the central source. }\label{f:emission_line_ratios_new} \end{figure*} \begin{figure*} \includegraphics[width=0.9\textwidth]{figures_for_paper/electron_densities_all.pdf} \caption{\textbf{Two methods to measure the electron density in the warm ionized gas.} The first is based on the commonly-used [SII] line ratio (top panels), and the second on the $\mathrm{\log{} [OIII]/H\beta}$ and $\mathrm{\log{} [NII]/H\alpha}$ line ratios and on the known distance of the gas from the central source (bottom panels). The left panels show the spatial distribution of the electron densities measured for the narrow lines, the middle panels show the spatial distribution for the broad lines, and the right panels compare the histograms of the electron densities of the two kinematic components. The two methods give consistent electron densities throughout the FOV. This suggests that the projected distance is similar to the actual distance of the gas, which indicates that the opening angle of the outflowing cone is not small ($\gtrsim 45^{\circ}$; see text for additional details). }\label{f:electron_densities_all} \end{figure*} Next, we estimate the electron densities in the warm ionized gas using two methods. The first method is based on the commonly-used [SII] line ratio and is sensitive to electron densities in the range $10^{2}\, \mathrm{cm^{-3}} < n_{e} < 10^{4} \, \mathrm{cm^{-3}}$ (e.g., \citealt{osterbrock06}). The top panels in figure \ref{f:electron_densities_all} show the results of this method, where we show the spatial distribution of the electron densities in the narrow line-emitting gas in the left panel, the distribution in the broad line-emitting gas in the middle panel, and a comparison of the electron density histograms of the two components in the right panel. In \citet{baron19b} we discussed the uncertainties that are associated with electron densities based on the [SII] method. In particular, we showed that the ratio is poorly constrained for the majority of the objects in our type II sample, and the resulting electron densities were consistent with the entire range of possible electron densities. For SDSS J124754.95-033738.6, the exceptional quality of the MUSE observations allows us to place tight constraints on the [SII] line ratios in most of the spaxels, resulting in robust electron density estimates. \citet{baron19b} proposed a novel method to estimate the electron density in ionized gas, which is based on the bolometric luminosity of the AGN, the optical $\mathrm{[OIII]/H\beta}$ and $\mathrm{[NII]/H\alpha}$ line ratios, and the distance of the gas from the central source (see sections 4.2 and 5.2.2 in \citealt{baron19b}). 
The electron density is given by: \begin{equation}\label{eq:7} {n_{\mathrm{e}} \approx 3.2 \Big(\frac{L_{\mathrm{bol}}}{10^{45}\, \mathrm{erg/sec}}\Big) \Big( \frac{r}{1\,\mathrm{kpc}} \Big)^{-2} \Big(\frac{1}{U}\Big) \, \mathrm{cm^{-3}} }, \end{equation} where $L_{\mathrm{bol}}$ is the AGN bolometric luminosity and $r$ is the distance of the gas from the central source. The ionization parameter, $U$, is given by (\citealt{baron19b}): \begin{equation}\label{eq:8} \begin{split} & \log U = a_{1} + a_{2}\Big[\log\mathrm{\Big(\frac{[OIII]}{H\beta}\Big)}\Big] + a_{3}\Big[\log\mathrm{\Big(\frac{[OIII]}{H\beta}\Big)}\Big]^{2} + \\ & + a_{4}\Big[\log\mathrm{\Big(\frac{[NII]}{H\alpha}\Big)}\Big] + a_{5}\Big[\log\mathrm{\Big(\frac{[NII]}{H\alpha}\Big)}\Big]^{2}, \end{split} \end{equation} where the constants $(a_{1}, a_{2}, a_{3}, a_{4}, a_{5})$ are given by $(-3.766, 0.191, 0.778, -0.251, 0.342)$. In our case, we only know the projected distance, which can be very different from the actual distance. In particular, if the broad emission lines originate from a double-cone outflow that is viewed face-on, the difference between the projected distance and the actual distance is related to the opening angle of the cones: for small opening angles, which imply elongated cones, the projected distance can be much smaller than the actual distance, while for a large opening angle, the projected distance is roughly similar to the actual distance. Since the ionization parameter-based method depends on the location of the outflow while the [SII]-based method does not, we can estimate the de-projected distance of the gas by comparing the densities derived with the two methods. We estimate the electron densities using equations \ref{eq:7} and \ref{eq:8}, taking the distance to be the measured projected distance. If the opening angle of the cone is large, and the projected distance is roughly similar to the actual distance, we expect to find similar electron densities using the two methods: $\mathrm{n_{e}([SII]) = n_{e}(U;r=r_{projected})}$. On the other hand, if the opening angle is small, and the projected distance is much smaller than the actual distance, we expect the electron densities estimated using our method to be larger than those estimated using the [SII] method. For example, for an opening angle of 6$^{\circ}$, the projected distance is ten times smaller than the actual distance, and we would expect a factor of 100 difference between the two electron density estimates: for $\mathrm{n_{e}([SII]) = 10^{2}\, cm^{-3}}$, the estimated electron density using the ionization parameter method would be $\mathrm{n_{e}(U; r=r_{projected}) = 10^{4}\, cm^{-3}}$. \citet{baron19b} noted that the [SII]-based electron densities can be lower than the [OIII] and H$\alpha$-based densities because the [OIII] and H$\alpha$ lines are emitted throughout most of the ionized cloud, while the [SII] lines are emitted close to the ionization front, where the electron density can be significantly smaller. This effect can account for a difference in densities of up to a factor of 10, but not larger. Due to the degeneracy between the two scenarios (a small opening angle versus the photoionization considerations mentioned above), this comparison cannot be used to accurately determine the de-projected distance of the gas. However, it can be used to check whether the de-projected distance is close to the observed projected one, as we do here. 
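A schematic {\sc python} implementation of this comparison is sketched below (illustrative only; not the exact code used in our analysis). The two functions implement equations \ref{eq:8} and \ref{eq:7}, and the example numbers are taken from the text: $\mathrm{L_{AGN}} = 1.5 \times 10^{44}$ erg/sec, a projected distance of 5 kpc, and $\log{U} \approx -3.7$ for the broad line-emitting gas.
\begin{verbatim}
import numpy as np

def log_U(oiii_hb, nii_ha):
    """Ionization parameter, equation (8), with the quoted constants."""
    a = (-3.766, 0.191, 0.778, -0.251, 0.342)
    x, y = np.log10(oiii_hb), np.log10(nii_ha)
    return a[0] + a[1]*x + a[2]*x**2 + a[3]*y + a[4]*y**2

def n_e(L_bol, r_kpc, logU):
    """Electron density [cm^-3], equation (7)."""
    return 3.2 * (L_bol / 1e45) * r_kpc**(-2) * 10**(-logU)

# broad line-emitting gas: L_AGN = 1.5e44 erg/sec, r_proj = 5 kpc,
# log U = -3.7  ->  n_e ~ 1e2 cm^-3
print(n_e(1.5e44, 5.0, -3.7))   # ~96 cm^-3
\end{verbatim}
When $\mathrm{n_{e}(U; r=r_{projected})}$ computed in this way agrees with $\mathrm{n_{e}([SII])}$, the projected distance is close to the true one; a large excess of the former would instead point to a small opening angle.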
In the bottom panels of figure \ref{f:electron_densities_all} we show the electron densities estimated using our method. One can see that the electron densities inferred for the narrow lines are roughly consistent between the two methods throughout the FOV. Nevertheless, we do find a significant difference between the electron density estimates on the east (right) side of the narrow line-emitting region. Inspection of the best-fitting spectra reveals that the SNR of the [SII] lines is small in these regions, which might explain the discrepancy. We find that the electron densities inferred for the broad lines are of the same order of magnitude for the two methods. This suggests that the projected distance is not very different from the actual distance, and that the opening angle of the cone is not small ($\gtrsim 45^{\circ}$, where $90^{\circ}$ represents a full shell). We therefore adopt $r=5\,\mathrm{kpc}$, which is the projected distance of the spaxels located on the edge of the broad line-emitting region, as the representative distance of the outflowing gas from the central source. The electron densities we infer for the ionized outflow in this object are $n_{e} \sim 10^{2}\,\mathrm{cm^{-3}}$, which is two orders of magnitude lower than the electron densities we estimated for ionized outflows in type II AGN \citep{baron19b}. This difference can be attributed to the facts that: (1) SDSS J124754.95-033738.6 is a post starburst E+A galaxy, while the type II AGN sample studied in \citet{baron19b} consists of typical star forming galaxies, and (2) we estimate the location of the ionized outflow in SDSS J124754.95-033738.6 to be $r \sim$5 kpc, while the typical location of the outflows in the type II AGN sample is much smaller, $r \sim$ 200 pc (\citealt{baron19a}). Finally, we use the best-fitting models of the NaID complex to study the neutral gas properties. In figure \ref{f:naid_properties_all_with_explanations} we present a summary of the NaID emission and absorption properties. The first row shows the dust-corrected narrow and broad NaID emission line luminosities, where we used the reddening derived from the narrow and broad ionized emission lines, respectively. This correction is justified since, according to our proposed model, which we further discuss in section \ref{s:photoionization_models}, the NaID and the ionized emission lines are emitted by the same clouds, and we therefore expect them to be obscured by roughly the same dust column densities. The second row represents the broad NaID absorption properties, where we show the EW and the optical depth at line center of the NaID$_{K}$ component. The optical depth is low, $\tau_{0} < 0.1$, suggesting that the absorbing gas is optically-thin. The NaID emission and absorption coincide in most of the spaxels throughout the FOV, with stronger emission and absorption closer to the center of the galaxy, and weaker emission and absorption farther out. This reinforces our suggestion of a face-on outflow that produces a P-Cygni-like profile. \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{figures_for_paper/naid_properties_all.pdf} \caption{\textbf{Properties of the NaID emission and absorption lines.} \textbf{The first row} represents the emission properties, where we show the dust-corrected NaID$_{K}$ luminosities for the narrow and broad kinematic components (left and right panels, respectively). 
\textbf{The second row} represents the absorption properties, where we show the EW and optical depth at line center of the broad NaID$_{K}$ component (left and right panels respectively). The insets of the first two rows summarize the fitted kinematic components and the constraints chosen during the fit. \textbf{The third row} presents the observed connection between the broad NaID emission and absorption. The panel shows L$^{*}_{\mathrm{emis}}$(broad NaID$_{K}$), which is the NaID$_{K}$ luminosity prior to dust correction, versus $\mathrm{ EW_{abs}(broad\,\, NaID_{K}) \times L_{\lambda; \, stars}(\lambda=5897\,\AA) }$, where $\mathrm{EW_{abs}(broad\,\, NaID_{K})}$ is the absorption EW, and $\mathrm{L_{\lambda; \, stars}(\lambda=5897\,\AA)}$ is the stellar continuum at the NaID$_{K}$ wavelength. The dashed grey line represents the expected relation between the absorption and emission for a symmetric neutral wind without dust. Dusty outflows are expected to occupy the region beneath the dashed line (grey background color). The area above the line is excluded for symmetric outflows (purple background color). The distribution in the different spaxels suggests that the wind is dusty and that its approaching and receding sides are roughly symmetrical. }\label{f:naid_properties_all_with_explanations} \end{figure*} According to our model, the broad NaID emission and absorption originate from a double-cone neutral outflow, where the redshifted emission is due to the receding part of the outflow, while the blueshifted absorption is due to the approaching side. We can therefore compare the emission and absorption properties and examine how symmetric the outflow is. If the approaching and receding sides of the outflow are completely symmetric (same density, spatial extent, covering factor, etc.), and assuming no dust, we expect the blueshifted absorption and the redshifted emission to have the same EW (NaID is a resonant transition, and every absorption results in reemission). Therefore, the NaID emission line luminosity can be expressed as: $\mathrm{L_{emis}(NaID) = EW_{abs}(NaID) \times L_{\lambda; stars}(NaID)}$, where $\mathrm{EW_{abs}(NaID)}$ is the absorption EW, and $\mathrm{L_{\lambda; stars}(NaID)}$ is the stellar continuum at the NaID wavelength. For dusty gas, the EW of the redshifted emission will be smaller than that of the blueshifted absorption, since photons that are emitted by the receding side of the outflow go through a larger column density of dust. As a result, $\mathrm{L_{emis}(NaID) < EW_{abs}(NaID) \times L_{\lambda; stars}(NaID)}$. We show these two scenarios in the bottom row of figure \ref{f:naid_properties_all_with_explanations}, where a symmetric outflow with no dust is marked with a dashed line. Dusty outflows will occupy the region beneath the dashed line (grey color). In case of a symmetric outflow, the area above the line (purple color) is excluded. We compare the broad NaID emission to the broad NaID absorption in the bottom rows of figure \ref{f:naid_properties_all_with_explanations}. Since we want to examine the effect of dust reddening, the y-axis shows the broad NaID$_{K}$ luminosity without correcting for dust reddening (hereafter L$^{*}$(NaID$_{K}$)). The x-axis shows $\mathrm{ EW_{abs}(broad\,\, NaID_{K}) \times L_{\lambda; stars}(\lambda=5897\,\AA) }$, where $\mathrm{EW_{abs}(broad\,\, NaID_{K})}$ is the absorption EW, and $\mathrm{L_{\lambda; stars}(\lambda=5897\,\AA)}$ is the stellar continuum at the NaID$_{K}$ wavelength, which we take from the best-fitting stellar model.
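This diagnostic reduces to a one-line comparison per spaxel. The sketch below assumes hypothetical per-spaxel measurements (all numbers are illustrative); only the relation $\mathrm{L_{emis}(NaID) = EW_{abs}(NaID) \times L_{\lambda;\,stars}(NaID)}$ from the text is used.
\begin{verbatim}
# Symmetric-outflow test for a single (hypothetical) spaxel.
ew_abs = 1.0            # absorption EW of broad NaID_K, Angstrom
L_lambda_stars = 1e40   # stellar continuum at 5897 A, erg/sec/Angstrom
L_emis_obs = 6e39       # observed (not dust-corrected) NaID_K emission

# Expected emission from a symmetric, dust-free outflow:
L_expected = ew_abs * L_lambda_stars

# Below the dashed line (True): consistent with a dusty, roughly
# symmetric outflow; above it: excluded for symmetric outflows.
print(L_emis_obs < L_expected)
\end{verbatim}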
The product of these two quantities represents the expected NaID$_{K}$ luminosity produced by the approaching side of the outflow (via scattering of the absorbed photons), as would have been observed by an observer from the opposite direction. By comparing the luminosities of the receding (x-axis) and approaching (y-axis) sides of the outflow, we can examine how symmetric the outflow is. One can see that most of the spaxels occupy the $\mathrm{E}_{B-V} > 0$ region in the diagram, and only a small minority occupy the ``excluded'' region. Furthermore, if instead we show the \emph{dust-corrected} broad NaID$_{K}$ luminosity on the y-axis, most of the measurements cluster around the $\mathrm{E}_{B-V} = 0$ line. While these arguments are somewhat simplistic and the measurements are subject to significant uncertainties, they are consistent with the suggestion that the neutral outflow gas is dusty, and that the approaching and receding parts of the outflow are roughly symmetric. \subsubsection{Connection between the neutral and ionized gas phases}\label{s:connection_between_gas_phases} \begin{figure*} \includegraphics[width=0.8\textwidth]{figures_for_paper/connection_between_the_neutral_and_ionized_phases.pdf} \caption{\textbf{Connection between the neutral and ionized outflow phases.} The top row compares the dust-corrected NaID$_{K}$ luminosity with the H$\alpha$ luminosity, where we show the narrow and broad kinematic components in the left and right panels respectively. The solid blue lines represent the case where L(NaID$_{K}$)=0.1L(H$\alpha$). The bottom row compares the spatial extents of the NaID and H$\alpha$ emission, where we show the narrow and broad kinematic components in the left and right panels respectively. We mark the rescaled stellar continuum luminosity, $\mathrm{\nu L_{\nu}(5897 \AA)/100}$, with grey rectangles, the NaID emission with blue crosses, and the H$\alpha$ emission with black crosses. The narrow and broad kinematic components show a constant and similar L(NaID)/L(H$\alpha$) ratio throughout the FOV. In addition, the NaID and H$\alpha$-emitting gas show similar spatial extents. These observations suggest that the neutral and ionized gas phases originate from the same outflowing clouds. }\label{f:connection_between_the_neutral_and_ionized_phases} \end{figure*} In section \ref{s:model_inedpendent_props} we already noted the similarity between the kinematics of the NaID emission and absorption and the kinematics of the ionized emission lines. We used this similarity to model the NaID emission and absorption complex, where we set the kinematic parameters of the different NaID components to be the same as the best-fitting kinematic parameters of the ionized emission lines. The success of these fits in all the spaxels (see e.g., figure \ref{f:emis_and_abs_fitting_example_neutral_only_with_explanations}) reinforces the suggestion of a kinematic connection between the two phases. In figure \ref{f:connection_between_the_neutral_and_ionized_phases} we further compare the neutral and ionized line luminosities and spatial extents. In the top row we show the dust-corrected NaID and H$\alpha$ luminosities for the narrow and broad kinematic components. The two luminosities show a strong correlation in both cases, with a constant luminosity ratio of L(NaID$_{K}$)/L(H$\alpha$)$\sim 0.1$.
This correlation remains just as significant when we compare the NaID$_{K}$ and H$\alpha$ luminosities prior to the reddening correction, suggesting that the correlation is not driven by the dust distribution. In the bottom row we examine how these luminosities change as a function of projected distance from the center. The panels show the reddening-corrected narrow and broad NaID$_{K}$ and H$\alpha$ luminosities, and the reddening-corrected stellar continuum at $\lambda = 5897$\AA\, as a function of projected distance from the center of the galaxy. The neutral and ionized emission lines show similar spatial extents in the broad kinematic component throughout the FOV, and similar extents and behavior in the narrow kinematic component up to a distance of 5 kpc. These similarities suggest a common origin of the neutral and ionized emission lines. We use these findings in our proposed model in section \ref{s:models} below. \begin{figure*} \includegraphics[width=1\textwidth]{figures_for_paper/doodle_interpretation.pdf} \caption{\textbf{The proposed outflow geometry.} The ionized emission and neutral emission and absorption originate from the same outflowing clouds, which are embedded in a face-on double-cone outflow with a moderate opening angle ($\sim 45^{\circ}$), at a distance of $\sim$5 kpc from the center. The central AGN ionizes the inner parts of the outflowing clouds, creating an ionization structure that produces the observed ionized lines. The observed ionized emission line profile shows a combination of redshifted and blueshifted emission, where the redshifted emission is due to the receding part of the outflow, while the blueshifted emission is due to the approaching side. The hydrogen column density in these clouds is large enough to make the clouds radiation bounded. As a result, the outer parts of the clouds are neutral, where NaI atoms can absorb and reemit stellar photons. This results in a P-Cygni-like profile for the NaID, where the redshifted NaID emission originates in the receding side of the outflow, and the blueshifted absorption originates in the approaching side. This model naturally explains the observed similarities between the neutral and ionized gas phases. }\label{f:doodle_interpretation} \end{figure*} \section{Modeling the outflow}\label{s:models} In this section we construct a model for the outflow in the system. We propose that the ionized emission, and the neutral emission and absorption, originate from the \emph{same} clouds, which are embedded in a double-cone outflow. We present our proposed model in section \ref{s:outflow_geometry}, where we describe all the observational evidence that supports the suggested geometry. Then, in section \ref{s:photoionization_models} we construct photoionization models and show that our suggested model is consistent with the various observational constraints. We then use the observed NaID$_{K}$/H$\alpha$ ratio to constrain the Sodium neutral fraction, the size, and the neutral-to-ionized gas mass ratios in the outflowing clouds. In section \ref{s:mass_and_energy} we use the ionized and neutral gas properties to estimate the mass outflow rate and energetics in the ionized and neutral phases of the observed outflow, and in section \ref{s:origin_of_narrow_lines} we present evidence that much of the narrow emission line luminosity also originates in the outflow, rather than from a stationary NLR. \subsection{Outflow geometry}\label{s:outflow_geometry} In figure \ref{f:doodle_interpretation} we present an illustration of the suggested outflow geometry.
In this model, the system hosts a face-on double-cone outflow, with a moderate opening angle, at a distance of $\sim$5 kpc from the center. The central AGN ionizes the inner parts of the outflowing clouds, creating an ionization structure that produces the observed ionized lines. The emission line profiles are a combination of redshifted and blueshifted emission, where the redshifted emission is due to the receding part of the outflow, and the blueshifted emission is due to the approaching side. The hydrogen column density in these clouds is large enough to make the clouds radiation bounded. As a result, the outer parts of the clouds are neutral, and the local NaI atoms can absorb and reemit stellar photons. The blueshifted NaID absorption is due to absorption of stellar photons in the approaching side of the outflow, and the redshifted emission is due to absorption of stellar photons in the receding side of the outflow, which are then reemitted isotropically. These photons are not absorbed by the approaching side because of their redshifted central wavelength. This results in the observed P-Cygni-like profile. In this model, the NaID absorption affects both the NaID and the ionized line emission (see section \ref{s:emis_decomp}). The roughly constant NaID$_{K}$/H$\alpha$ ratio throughout the FOV suggests that all the outflowing clouds have roughly the same hydrogen column densities, sizes, and neutral-to-ionized gas mass ratios. The above model is supported by all the observations presented earlier. The neutral and ionized gas phases are believed to be part of the same outflowing clouds due to the observed correspondence in gas kinematics, spatial extent, and the constant NaID$_{K}$/H$\alpha$ ratio throughout the FOV (see section \ref{s:connection_between_gas_phases} for details). The full photoionization model presented in section \ref{s:photoionization_models} below is consistent with all these observations. The suggested double-cone outflow is supported by the fact that we detect redshifted and blueshifted kinematic components in both the ionized and neutral lines (see figures \ref{f:emis_and_abs_fitting_example_neutral_only_with_explanations}, \ref{f:ionized_line_properties_extended}, and \ref{f:emis_and_abs_fitting_example_ionized_only}). We argue that the cones are seen face-on since (i) the maximal positive and negative velocities we observe in the ionized emission lines are similar ($\pm$1\,000 km/sec) and are constant throughout the FOV (see figure \ref{f:ionized_line_properties_extended}), (ii) the NaID emission and absorption coincide throughout the entire FOV, where spaxels with strong emission also show strong absorption and vice versa (see figure \ref{f:naid_properties_all_with_explanations}), and (iii) the central source is classified as an unobscured type I AGN (see section \ref{s:agn_props}). Finally, in section \ref{s:derived_gas_props} we used two different methods to estimate the electron density in the outflow, one which depends on the de-projected distance of the gas and one that does not. By comparing the two methods, we argued that the projected and de-projected locations of the outflow are roughly similar, and are $r \sim $5 kpc. This also suggests that the opening angle of the cone is not small ($\gtrsim 45^{\circ}$). On the other hand, very large opening angles are ruled out since we do not observe lower outflow velocities at the edges of the FOV (see figure \ref{f:ionized_line_properties_extended} and section \ref{s:derived_gas_props}). We therefore suggest that the opening angle of the cone is moderate.
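As a toy illustration of this projection argument, the following sketch lists the line-of-sight velocities spanned by a face-on cone of a given half-opening angle (the angles and the $1\,000$ km/sec speed are illustrative):
\begin{verbatim}
import numpy as np

def los_velocity_range(v, half_opening_deg):
    """Line-of-sight velocities spanned by a face-on cone moving at
    speed v: from v*cos(theta) at the cone edge to v along the axis."""
    theta = np.radians(half_opening_deg)
    return v * np.cos(theta), v

print(los_velocity_range(1000.0, 45.0))  # ~(707, 1000) km/sec: mild spread
print(los_velocity_range(1000.0, 80.0))  # ~(174, 1000): strong edge slowdown
\end{verbatim}
A very wide cone would thus produce markedly lower projected velocities toward the cone edges, which is not observed.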
\subsection{Photoionization modeling}\label{s:photoionization_models} We model the central source of ionizing radiation using standard assumptions about the spectral energy distribution (SED) of AGN. The SED consists of a combination of an optical-UV continuum emitted by an optically-thick geometrically-thin accretion disk, and an additional X-ray power-law source that extends to 50 keV with a photon index of $\Gamma = 1.9$. The normalization of the UV (2500 \AA) to X-ray (2 keV) flux is defined by $\alpha_{OX}$, which we take to be 1.37. We choose an SED with a mean ionizing photon energy of 2.56 Ryd. The AGN bolometric luminosity is $1.5\times 10^{44}\,\mathrm{erg/sec}$, which results in a rate of ionizing photons of $\mathrm{Q(Lyman) = 1.7 \times 10^{54}\,sec^{-1}}$. The results discussed below depend primarily on the ionization parameter, and are less affected by the specific choices of SED shape and bolometric luminosity. The ionization parameter is defined as: $\mathrm{U = Q(Lyman)/4\pi r^2 n_{H} c}$, where $\mathrm{r}$ is the distance of the gas from the central source, $\mathrm{n_{H}}$ is the hydrogen number density, and $\mathrm{c}$ is the speed of light. We estimated the ionization parameter as a function of projected distance from the center of the galaxy, and found ionization parameters in the range $\log{\mathrm{U}}=-3.7$ to $\log{\mathrm{U}}=-3.2$, with the broad emission lines showing a constant $\log{\mathrm{U}}=-3.7$. We therefore focus on three models, with ionization parameters of $\log{\mathrm{U}}=-3.7$, $\log{\mathrm{U}}=-3.4$, and $\log{\mathrm{U}}=-3.2$\footnote{For these low ionization parameters, we do not expect Radiation Pressure Confinement (RPC; e.g., \citealt{dopita02, stern12}) to have a significant effect on the cloud structure, since gas pressure is larger than radiation pressure. Therefore, for the ionization parameters discussed here, constant-pressure models are effectively similar to constant-density models.}. Our models consist of optically-thick shells of dusty gas, with ISM-type grains, at a distance of 5 kpc from the central ionizing source. The hydrogen density is determined from the required ionization parameter, and is $100\,\mathrm{cm^{-3}}$, $50\,\mathrm{cm^{-3}}$, and $30\,\mathrm{cm^{-3}}$ respectively. To ensure that the column density is large enough to include neutral Na atoms, we set the total hydrogen column density to be $\mathrm{N_{H}=10^{24}\,cm^{-2}}$. The stellar mass of SDSS J124754.95-033738.6 is estimated to be $\mathrm{M_{*} = 10^{10.8}\, M_{\odot}}$ (e.g., \citealt{chen12}). Therefore, the stellar mass-metallicity relation suggests a metallicity which is roughly twice solar (e.g., \citealt{t04}). However, it is unclear whether the metallicity of the outflowing gas should be similar to that of the stationary ISM in the host galaxy. Therefore, and to allow for a straightforward comparison with previous studies (e.g., \citealt{shih10, rupke15, perna19}), we report results for solar and twice solar metallicity. We note that the main results presented below do not depend strongly on the metallicity. \begin{figure*} \includegraphics[width=0.95\textwidth]{figures_for_paper/photoionization_models_selected.pdf} \caption{\textbf{Photoionization model results.} Photoionization models with solar metallicity, where the rows represent the different ionization parameters, logU=-3.7, logU=-3.4, and logU=-3.2 respectively.
The left panels show the ionization fraction of H and Na as a function of the hydrogen column density. The second panels from the left show the optical depth at line center of the NaID$_{K}$ component, and the column density of NaI, as a function of the hydrogen column density in the cloud. The third panels show the expected H$\alpha$ and NaID$_{K}$ line luminosities as a function of hydrogen column density, and the right panels the L(H$\alpha$)/L(NaID$_{K}$) line ratio. We mark the hydrogen column density for which L(H$\alpha$)/L(NaID$_{K}$)=10 using a red line in all the panels. Using these models and the observed L(H$\alpha$)/L(NaID$_{K}$) ratio, we can constrain the hydrogen column density, the Sodium neutral fraction, and the neutral-to-ionized gas mass ratio in the outflow. }\label{f:photoionization_models_selected} \end{figure*} In figure \ref{f:photoionization_models_selected} we show the results of the photoionization modeling for solar metallicity, where each row represents a different ionization parameter. In the left panels we show the ionized fractions of H and Na as a function of the hydrogen column density in the cloud. The second panels from the left show the optical depth at line center of the NaID$_{K}$ component, $\mathrm{\tau_{0} (NaID_{K})}$, and the column density of neutral Na, $\mathrm{N_{NaI}}$, as a function of the hydrogen column density in the cloud. For solar metallicity, the column density of neutral Na is calculated using: \begin{equation}\label{eq:9} \mathrm{N_{NaI} = N_{H} (1-y) 10^{a + b}}, \end{equation} where $\mathrm{N_{H}}$ is the hydrogen column density, and $\mathrm{y=1- n(NaI)/n(Na)}$ is the fraction of ionized Na, which is taken from the model. To be consistent with \citet{shih10}, we adopt $\mathrm{a = \log{} \big[ N_{Na}/N_{H} \big]} = -5.69$, which is the solar Na abundance, and $\mathrm{b = \log{} \big[ N_{Na}/N_{H} \big]_{gas} - \log{} \big[ N_{Na}/N_{H} \big]_{total} = -0.95 }$, which is the ISM depletion of Na. For twice solar metallicity, the right-hand side of equation \ref{eq:9} is multiplied by two. The optical depth at line center of the $K$ component is then \citep{draine11}: \begin{equation}\label{eq:10} \begin{split} & \mathrm{ \tau_{0}(NaID_{K}) = 0.7580 \times } \\ & \times \mathrm{\Big( \frac{N_{NaI}}{10^{13}\, cm^{-2}} \Big) \Big(\frac{f_{lu}}{0.4164} \Big) \Big(\frac{\lambda_{lu}}{1215\,\AA}\Big) \Big(\frac{10\,km/sec}{b}\Big)}, \end{split} \end{equation} where $\mathrm{f_{lu}}=0.32$, $\mathrm{\lambda_{lu}=5897}$ \AA, and we take $b=500$ km/sec, which is close to the median velocity width we infer from the broad ionized emission lines and the broad NaID emission and absorption lines. We calculate the expected H$\alpha$ and NaID emission line luminosities as a function of hydrogen column density. To calculate the H$\alpha$ luminosity, we estimate the emissivity using the expression given in \citet{draine11} for case B recombination, and using the electron temperature and electron density directly from the models. We calculate the expected NaID luminosity using: $\mathrm{L(NaID_{K}) = EW_{abs}(NaID_{K}) \times L_{\lambda; stars}(\lambda = 5897\,\AA)}$, where $\mathrm{EW_{abs}(NaID_{K})}$ is the EW of the NaID$_{K}$ absorption, which we calculate using the optical depth at line center ($\mathrm{\tau_{0}(NaID_{K})}$, equation \ref{eq:10}), and $\mathrm{L_{\lambda; stars}(\lambda = 5897\,\AA)}$ is the stellar continuum emission at $\lambda = 5897$\AA, which we take directly from observations.
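The following sketch implements equations \ref{eq:9} and \ref{eq:10} and cross-checks the ionization parameters of the model grid; all inputs are the values quoted above, and the final single-zone estimate of $\tau_{0}$ is only expected to agree with the full models to a factor of a few, since the models integrate the depth-dependent ionized fraction through the cloud.
\begin{verbatim}
import numpy as np

KPC_CM = 3.086e21   # cm per kpc
C_CM_S = 2.998e10   # speed of light, cm/sec

def ionization_parameter(Q, r_kpc, n_H):
    """U = Q(Lyman) / (4 pi r^2 n_H c)."""
    r = r_kpc * KPC_CM
    return Q / (4.0 * np.pi * r**2 * n_H * C_CM_S)

# Model grid: Q = 1.7e54 photons/sec, r = 5 kpc.
for n_H in (100.0, 50.0, 30.0):
    print(np.log10(ionization_parameter(1.7e54, 5.0, n_H)))
# -> approximately -3.7, -3.4, -3.2

def N_NaI(N_H, one_minus_y, Z=1.0, a=-5.69, b=-0.95):
    """Equation (9): neutral-Na column, metallicity Z in solar units."""
    return N_H * one_minus_y * 10.0**(a + b) * Z

def tau0_NaID_K(N, f_lu=0.32, lam=5897.0, b_kms=500.0):
    """Equation (10): line-center optical depth of NaID_K."""
    return (0.7580 * (N / 1e13) * (f_lu / 0.4164)
                   * (lam / 1215.0) * (10.0 / b_kms))

# Single-zone illustration with the solar-metallicity, logU=-3.7 entry
# of the table below (N_H = 9e20 cm^-2, 1-y = 0.05):
print(tau0_NaID_K(N_NaI(9.0e20, 0.05)))   # ~0.06, vs ~0.03 in the model
\end{verbatim}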
The dust-corrected stellar continuum emission at $\mathrm{\lambda = 5897}$\AA\, in the central spaxel is $3\times 10^{43}\,\mathrm{erg/sec}$, and the integrated stellar continuum emission at $\mathrm{\lambda = 5897}$\AA\, is $2\times 10^{44}\,\mathrm{erg/sec}$. These are one and two orders of magnitude larger than the expected AGN continuum emission at the same wavelength. Therefore, the NaID emission is powered by the stellar continuum emission and not by the AGN. Thus, according to our suggested model, while the AGN is responsible for the ionization structure in the cloud, the NaID line luminosity is powered by the stellar continuum. In the third panels of figure \ref{f:photoionization_models_selected} we show the H$\alpha$ and NaID$_{K}$ line luminosities, for a unit covering factor, as a function of the hydrogen column density. The H$\alpha$ luminosity increases with the ionized column all the way to the ionization front, at $\mathrm{N_H}$ of a few $10^{19}\, \mathrm{cm^{-2}}$, beyond which very few line photons are created. The NaID$_{K}$ line luminosity increases as a function of depth into the cloud, with the steepest rise beyond the ionization front. The NaID$_{K}$ luminosity saturates when the absorption line transitions from the optically-thin regime to the flat part of the curve of growth. In the right panels of figure \ref{f:photoionization_models_selected} we show the L(H$\alpha$)/L(NaID$_{K}$) line ratio as a function of the hydrogen column density, where we mark the observed ratio of L(H$\alpha$)/L(NaID$_{K}$)=10 using a red line. This sets the physical size of the cloud, the total dust column density and extinction, the ionized fraction of Na atoms, and the optical depth and EW of the NaID$_{K}$ absorption. In table \ref{tab:natbib} we list several of the resulting properties of the clouds, for solar and twice solar metallicities. Interestingly, these models reproduce, to within a factor of 2--3, all the observed properties of the NaID$_{K}$ and the dust in our system. In particular, for the same ionization parameter as that observed, the resulting NaID$_{K}$ absorption optical depth and EW are roughly similar to those we measure. More importantly, the dust reddening derived from the models is roughly consistent with the reddening estimated from the \emph{ionized emission lines}. This correspondence suggests that the proposed model, where the ionized and neutral phases are part of the same outflowing clouds, is consistent with the observations. One can see that the main difference between the solar and twice solar metallicity models is in the hydrogen column density for which L(H$\alpha$)/L(NaID$_{K}$)=10: models with solar metallicity require twice as much $\mathrm{N_{H}}$ to reach this ratio. This also sets the different neutral-to-ionized gas mass ratios, where models with solar metallicity yield a ratio twice as large. \begin{table*} \caption{Results of the photoionization models with solar and twice solar metallicities.
The columns from left to right are: (1) the ionization parameter of the ionized gas, (2) gas metallicity, (3) hydrogen column density at the ionization front, (4) hydrogen column density at L(H$\alpha$)/L(NaID$_{K}$)=10, (5) the neutral-to-ionized gas mass ratio at L(H$\alpha$)/L(NaID$_{K}$)=10, (6) the inferred dust reddening at L(H$\alpha$)/L(NaID$_{K}$)=10, (7) the inferred optical depth at line center at L(H$\alpha$)/L(NaID$_{K}$)=10, (8) the inferred EW at L(H$\alpha$)/L(NaID$_{K}$)=10, and (9) the neutral fraction of Sodium at L(H$\alpha$)/L(NaID$_{K}$)=10. }\label{tab:natbib} \begin{tabular}{ccccccccc} \hline (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ $\mathrm{log{U}}$ & $Z$ & $\mathrm{N_{H}}$(H$_{0}$=H$^{+}$) & $\mathrm{N_{H}(L_{H\alpha}/L_{NaID_{K}}=10)}$ & $\mathrm{\frac{M_{neutral}}{M_{ionized}}}$ & $\mathrm{E}_{B-V}$ & $\tau_{0, K}$ & EW(NaID$_{K}$) & $\mathrm{(1 - y) }$ \\ & [$\mathrm{Z_{\odot}}$] & [$\mathrm{cm^{-2}}$] & [$\mathrm{cm^{-2}}$] & & [mag] & & [\AA\,] & \\ \hline -3.7 & 1 & $2.5\times10^{19}$ & $9.0 \times 10^{20}$ & 36 & 0.16 & 0.03 & 0.87 & 0.05 \\ -3.7 & 2 & $2.1\times10^{19}$ & $4.0 \times 10^{20}$ & 19 & 0.14 & 0.02 & 0.87 & 0.05 \\ \hline -3.4 & 1 & $5.4\times10^{19}$ & $1.8 \times 10^{21}$ & 34 & 0.32 & 0.05 & 2.0 & 0.05 \\ -3.4 & 2 & $4.4\times10^{19}$ & $8.2 \times 10^{20}$ & 19 & 0.29 & 0.05 & 1.9 & 0.06 \\ \hline -3.2 & 1 & $7.8\times10^{19}$ & $2.3 \times 10^{21}$ & 32 & 0.42 & 0.06 & 2.7 & 0.07 \\ -3.2 & 2 & $6.1\times10^{19}$ & $1.1 \times 10^{21}$ & 18 & 0.38 & 0.06 & 2.6 & 0.07 \\ \hline \end{tabular} \end{table*} Having a consistent photoionization model, we can use it to put constraints on additional properties of the clouds. First, we find that the mass of the outflowing neutral gas is 20--40 times larger than that of the outflowing ionized gas, depending on the gas metallicity. This is somewhat larger than the neutral-to-ionized gas mass ratio we suggested for typical ionized outflows in type II AGN \citep{baron19b}. Secondly, we can use the observed narrow and broad L(H$\alpha$) to constrain the covering factor of the outflowing gas. For the assumed AGN bolometric luminosity and for a covering factor of 1, the expected luminosity is L(H$\alpha$)$=2.25\times 10^{42}\,\mathrm{erg/sec}$. We measure dust-corrected L(H$\alpha$)$=1.57\times 10^{42}\,\mathrm{erg/sec}$ and L(H$\alpha$)$=8\times 10^{41}\,\mathrm{erg/sec}$ for the broad and narrow components respectively, suggesting a covering factor of 0.7 for the broad outflowing gas and 0.35 for the narrow component. The large covering factor we find for the outflow is in line with the large covering factor required in the NaID fitting. The covering factor of the narrow component, 0.35, is higher than the covering factor of a typical NLR in AGN (e.g., \citealt{baskin05, mor09}). This reinforces our suggestion that a large fraction of the narrow component is not stationary NLR, but rather part of the outflow. We can also estimate the size of the outflowing clouds. Assuming a distance of 5 kpc for the outflowing clouds and a solar (twice solar) metallicity, the size of the clouds is 3 pc, 12 pc, and 25 pc (1.5 pc, 6 pc, 12 pc) for ionization parameters of $\log{\mathrm{U}}=-3.7$, $\log{\mathrm{U}}=-3.4$, and $\log{\mathrm{U}}=-3.2$ respectively; a short numerical check of the covering factors and cloud sizes is given below. Finally, we can use the photoionization models and the required L(H$\alpha$)/L(NaID$_{K}$)=10 to constrain the neutral fraction of Sodium in the cloud, $(1 - y)$.
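The covering factors and cloud sizes quoted above follow from simple arithmetic on the stated numbers, as the following sketch shows (all inputs are taken from the text and the table):
\begin{verbatim}
PC_CM = 3.086e18  # cm per parsec

# Covering factors: measured dust-corrected L(Halpha) divided by the
# luminosity expected for a covering factor of 1 (2.25e42 erg/sec).
L_expected = 2.25e42
print(1.57e42 / L_expected)   # broad component  -> ~0.7
print(8.0e41 / L_expected)    # narrow component -> ~0.35

# Cloud sizes: hydrogen column at L(Halpha)/L(NaID_K)=10 divided by the
# hydrogen density of each model (solar-metallicity table entries).
for N_H, n_H in [(9.0e20, 100.0), (1.8e21, 50.0), (2.3e21, 30.0)]:
    print(N_H / n_H / PC_CM)  # -> ~3, ~12, ~25 pc
\end{verbatim}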
In earlier works of this type, the neutral fraction was assumed to be $(1 - y)=0.1$, which is similar to the value measured in the Milky-Way (e.g., \citealt{rupke05a, rupke05b, shih10, rupke15, rupke17}). We measure $(1 - y) \sim 0.05$. To the best of our knowledge, this is the first time that $(1- y)$ is estimated, rather than assumed, in an outflow. In section \ref{s:mass_and_energy} below we use our best-fitting value of $y$ to estimate the mass and energetics of the observed outflows. \textbf{In summary,} the photoionization models account for both the ionized and the neutral gas phases. We find L(H$\alpha$)/L(NaID) emission line ratios, ionization parameters and ionized line ratios, NaID absorption optical depths and EWs, and dust reddening values, all consistent with those observed. In addition, we used the photoionization model and the observed L(H$\alpha$)/L(NaID) line ratio to constrain, for the first time, the size, the Sodium neutral fraction, and the neutral-to-ionized gas mass ratio in the outflowing clouds. The size of each of the outflowing clouds is 1.5--3 pc, the Sodium neutral fraction is 0.05, and the neutral-to-ionized gas mass ratio is 20--40. To the best of our knowledge, this is one of the first cases in which all these properties were estimated in a single outflow. \subsection{Mass outflow rate and energetics}\label{s:mass_and_energy} Our model allows us to estimate the outflowing gas mass and its kinetic energy. For the ionized gas we use the expressions given in section 5 in \citet{baron19b}. These expressions require the dust-corrected H$\alpha$ (or [OIII]) line luminosity\footnote{Note that the broad H$\alpha$ and [OIII] line luminosities are corrected for dust reddening using the extinction derived towards the \emph{broad} lines, not the narrow lines.}, the electron density in the outflow, the gas emissivity, the location of the outflow, and the outflow velocity. It is important to note that to estimate the ionized gas mass using recombination or forbidden lines, we do not need knowledge of the outflow geometry (such as covering factor, filling factor, and the entire 3D spatial distribution of the gas), since the measured line luminosity, which is used to estimate the mass, already encompasses this information (see e.g., \citealt{cano12}). This is contrary to the estimate of the neutral gas mass using absorption lines, where one must assume a geometry, and have knowledge of the covering factor and filling factor of the wind. To estimate the mass and energetics of the ionized outflows, we use the measured broad H$\alpha$ luminosities and the electron densities derived using the ionization parameter method. We assume the emissivities calculated in section 4.3 in \citet{baron19b}, and assume that the outflow location is 5 kpc from the central BH. We take the velocity of the outflow to be the maximal positive velocity, $\Delta v + 2\sigma$ (see bottom row of figure \ref{f:ionized_line_properties_extended}). We find an ionized outflowing gas mass of $\mathrm{M_{ion} = 8 \times 10^6 \, M_{\odot}}$, a mass outflow rate of $\mathrm{\dot{M}_{ion} = 1 \, M_{\odot}/yr}$, a kinetic power of $\mathrm{\dot{E}_{ion} = 1.4 \times 10^{41}\, erg/sec}$, and a kinetic coupling efficiency of $\mathrm{\epsilon_{ion} = 10^{-3}}$ (a simple consistency check of these numbers is sketched below). Next, we estimate the \emph{neutral} gas mass, mass outflow rate, kinetic power, and kinetic coupling efficiency, using the expressions given in section 5.3 in \citet{shih10}, pertaining to the time-averaged thin-shell model.
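The quoted ionized-gas numbers can be checked for internal consistency with the simple scalings $\mathrm{\dot{E} = \dot{M} v^{2}/2}$ and $\mathrm{\dot{M} \sim M v / r}$; this is a sketch using only the numbers quoted above, not the exact expressions of \citet{baron19b}.
\begin{verbatim}
import numpy as np

MSUN_G, KPC_CM, YR_S = 1.989e33, 3.086e21, 3.156e7

M_ion = 8e6 * MSUN_G              # ionized gas mass, g
Mdot_ion = 1.0 * MSUN_G / YR_S    # mass outflow rate, g/sec
Edot_ion = 1.4e41                 # kinetic power, erg/sec
L_bol = 1.5e44                    # AGN bolometric luminosity, erg/sec

# Velocity implied by Edot = Mdot v^2 / 2:
v = np.sqrt(2.0 * Edot_ion / Mdot_ion)
print(v / 1e5)                          # ~670 km/sec

# Flow time M/Mdot vs crossing time r/v -- both ~7-8 Myr, as expected
# for a time-averaged rate Mdot ~ M v / r at r = 5 kpc:
print(M_ion / Mdot_ion / YR_S / 1e6)    # ~8 Myr
print(5.0 * KPC_CM / v / YR_S / 1e6)    # ~7 Myr

# Kinetic coupling efficiency, Edot / L_bol:
print(Edot_ion / L_bol)                 # ~1e-3
\end{verbatim}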
This thin-shell model has been widely used to estimate the outflowing gas mass from absorption line spectroscopy (e.g., \citealt{rupke05a, rupke05b, rupke05c, shih10, cazzoli16, rupke17}). It requires the hydrogen column density in the outflow, the gas covering factor, the location of the outflow, and the outflow velocity. The hydrogen column density is estimated using equations \ref{eq:9} and \ref{eq:10}, combined with the measured properties of the NaID \emph{absorption}. We assume twice solar metallicity, and use the measured velocity width $\mathrm{b = \sqrt{2} \sigma_{broad}}$, where $\mathrm{\sigma_{broad}}$ is the velocity dispersion of the broad emission and absorption lines (see figure \ref{f:ionized_line_properties_extended}). Since we estimate the mass and energetics of the neutral gas with the NaID absorption, we take the covering factor to be 0.7, similar to what we found in our photoionization model. As for the ionized outflow, we assume that the outflow location is 5 kpc and take the velocity to be the maximal positive velocity, $\Delta v + 2\sigma$. We find a neutral outflowing gas mass of $\mathrm{M_{neutral} = 1.8 \times 10^8 \, M_{\odot}}$, a mass outflow rate of $\mathrm{\dot{M}_{neutral} = 26 \, M_{\odot}/yr}$, a kinetic power of $\mathrm{\dot{E}_{neutral} = 4.4 \times 10^{42}\, erg/sec}$, and a kinetic coupling efficiency of $\mathrm{\epsilon_{neutral} = 0.03}$. The measured neutral-to-ionized gas mass ratio is 23, and the neutral-to-ionized mass outflow rate ratio is 26. These values are very close to the neutral-to-ionized gas mass ratio we calculated using the photoionization models in section \ref{s:photoionization_models}. This consistency is not a trivial consequence of our assumptions. To estimate the ionized and neutral gas mass and outflow rate, we used the dust-corrected H$\alpha$ luminosity and the NaID absorption line optical depth respectively. For the former, it is not required to assume a geometry, while for the latter we assumed the time-averaged thin-shell model with a covering factor of 0.7. Furthermore, the expressions for the ionized and neutral gas mass depend \emph{differently} on $r$, the outflow location. Therefore, the consistent neutral-to-ionized mass ratio justifies our assumptions about the geometry and the outflow location. These estimates pertain only to the broad emission lines. \subsection{Evidence that the narrow emission lines are part of the outflow}\label{s:origin_of_narrow_lines} A standard assumption in studies of ionized outflows in AGN is that the emission lines can be decomposed into two components: a narrow core which originates from stationary gas in the galaxy, and a broad component which is emitted by fast-moving outflowing clouds (see e.g., \citealt{mullaney13, karouzos16a, baron17b, rupke17, baron19b}). This is obviously a simplified assumption. For example, in \citet{baron18} we detected an outflow in an almost edge-on configuration, where the emission lines that were emitted by the outflow were narrow due to projection effects. More generally, the 3D biconical outflow models of \citet{bae16} show that emission lines that originate in the outflow present complex kinematics, and depending on the geometry, can exhibit both narrower and broader components. Thus, a decomposition into narrow and broad kinematic components, where the former represents the stationary gas and the latter represents the outflow, may underestimate the total mass outflow rate.
Regardless of the question of whether typical NLRs host stationary or outflowing gas\footnote{Here, ``typical NLR'' is defined as the region that emits the narrow emission lines in AGN. This region is not necessarily stationary, and could host an outflow component.}, we find several differences between the properties of the narrow line-emitting gas in our source and the properties of many other well studied NLRs. First, we detect narrow NaID emission with the same kinematics as the narrow ionized emission lines, suggesting that both are emitted by the same clouds. This is very different from typical NLRs, where narrow NaID \emph{emission} has not been detected so far. Second, we found in section \ref{s:photoionization_models} that the covering factor of the narrow line-emitting gas is 0.35, which is significantly higher than the covering factor observed in typical NLRs (e.g., \citealt{baskin05, mor09}). Moreover, if we use the dust-corrected narrow optical line luminosities to estimate the AGN bolometric luminosity (see \citealt{netzer09, netzer19}), we find a bolometric luminosity which is $\sim$5 times higher than the bolometric luminosity we infer from the 2--10 keV luminosity (section \ref{s:agn_props}). These findings suggest that the narrow line luminosities in this source are significantly higher than those expected from a typical NLR of an AGN with the same bolometric luminosity. Therefore, the narrow lines in our source are not emitted by a typical NLR. The most direct observational evidence that the narrow emission originates in the outflow is the detection of a narrow redshifted NaID emission line. Since the emitted NaI radiation is only due to the scattering of the absorbed stellar continuum, this emission cannot be explained by stationary NLR-like gas (unless a special geometry is invoked) and requires an outflow. Since we argue that the neutral and ionized emission lines are emitted by the same clouds, this suggests that the narrow ionized emission lines are emitted by the outflow as well. To estimate the exact contribution of the outflow to the narrow emission lines, it is necessary to perform full dynamical modeling of the system, which is beyond the scope of this paper. However, we can estimate the mass and energetics of the observed outflows in the case where most of the narrow line emission originates in the outflow. The dust-corrected narrow line luminosities are roughly half of the broad line luminosities. Combining the two kinematic components we find that the outflowing gas mass is $\mathrm{M_{ion} = 1.2 \times 10^7 \, M_{\odot}}$, the mass outflow rate is $\mathrm{\dot{M}_{ion} = 1.5 \, M_{\odot}/yr}$, the kinetic power is $\mathrm{\dot{E}_{ion} = 2.1 \times 10^{41}\, erg/sec}$, and the kinetic coupling efficiency is $\mathrm{\epsilon_{ion} = 1.5 \times 10^{-3}}$. For the neutral gas, the gas mass is $\mathrm{M_{neutral} = 2.7 \times 10^8 \, M_{\odot}}$, the mass outflow rate is $\mathrm{\dot{M}_{neutral} = 39 \, M_{\odot}/yr}$, the kinetic power is $\mathrm{\dot{E}_{neutral} = 6.6 \times 10^{42}\, erg/sec}$, and the kinetic coupling efficiency is $\mathrm{\epsilon_{neutral} = 0.045}$. \section{Summary and conclusions}\label{s:concs} This work is part of a long-term project to map and analyze multi-phase AGN-driven winds at the specific evolutionary stage of post starburst E+A galaxies. In this paper, we present new MUSE/VLT observations of SDSS J124754.95-033738.6, at z=0.09.
The spatially-resolved spectroscopy presented here allowed us to study the stellar and the neutral and ionized gas properties throughout the entire FOV. Our results can be summarized as follows: \begin{enumerate} \item SDSS J124754.95-033738.6 is a spiral galaxy with spiral arms extending to distances of $\sim$20 kpc. Its optical spectrum is that of a post starburst E+A galaxy. While its stars show typical stellar rotation, we detect spatially and kinematically disturbed gas throughout the entire FOV, suggestive of a minor merger that probably triggered the recent burst. \item Using stellar population synthesis modeling, we find that the system had two SF episodes, with a recent short episode that started 70 Myr ago and ended 30 Myr ago, and an older long episode that started 14 Gyr ago and ended 6 Gyr ago. The total stellar mass of the system is $\mathrm{M_{*} = 10^{10.8}\,M_{\odot}}$, where 2\% of the mass was formed in the recent burst. The estimated peak SFR during the recent burst is about 150 $\mathrm{M_{\odot}}$/yr. The ionized emission lines in the central spaxels suggest residual SF that is obscured by dust, with an estimated SFR of $2.2\,\mathrm{M_{\odot}/yr}$. This SFR places the system roughly 0.4 dex below the SF main sequence. \item We detect two kinematic components that trace ionized gas, both of which are ionized by the central AGN. Our analysis of the broad kinematic component suggests a fast-moving, $v \sim 1\,000$ km/sec, ionized outflow, in a face-on double-cone configuration with a moderate opening angle of about $45^{\circ}$. We further suggest that most of the narrow line emission originates in the outflow as well, where the total emission line profile is shaped by projection effects and dust reddening. \item We detect NaID \emph{emission} and absorption throughout a large fraction of the FOV. We detect narrow and broad kinematic components in emission, and broad kinematic components in absorption, suggesting a fast-moving, $v \sim 1\,000$ km/sec, neutral outflow. This is the third reported case of resolved NaID emission in an outflow. \item We find a remarkable similarity between the kinematics and spatial extents of the ionized and the neutral gas. Furthermore, we find a constant L(H$\alpha$)/L(NaID) luminosity ratio throughout the entire FOV, suggesting that the H$\alpha$-emitting and NaID-emitting gas are part of the \emph{same} clouds. \item We propose a model in which the ionized line emission and the neutral line emission and absorption originate in the same clouds, which are embedded in a fast-moving double-cone outflow. These outflowing clouds are exposed to the AGN radiation, which results in ionized gas that emits the observed emission lines. The back of each cloud, which is neutral, is exposed to the stellar radiation field and produces the observed NaID emission and absorption lines. This model naturally explains the observed kinematic connection between the two phases, and the detection of both NaID emission and absorption. \item We present photoionization models of the outflowing clouds, which account for both the ionized and the neutral gas phases. The models successfully reproduce various observed properties of the emission and absorption lines. In particular, we find L(H$\alpha$)/L(NaID) emission line ratios, ionization parameters and ionized line ratios, NaID absorption optical depths and EWs, and dust reddening values, all consistent with those we observe.
\item Using the photoionization model, we are able to estimate the properties of the outflowing clouds. The predicted size of the clouds is 1.5--3 pc, and the neutral-to-ionized gas mass ratio is 20--40. To the best of our knowledge, this is one of the first cases in which these properties are directly measured in an outflow. \item The best-fitting photoionization model allowed us to constrain, for the first time, the neutral fraction of Sodium atoms within the clouds, $(1- y) = 0.05$. The neutral fraction is of particular importance in the context of neutral outflows, since the estimated outflow mass and mass outflow rate depend on it. \item We estimate the mass and energetics of the ionized and neutral outflows in our system. For the ionized gas, the outflowing gas mass is $\mathrm{M_{ion} = 1.2 \times 10^7 \, M_{\odot}}$, the mass outflow rate is $\mathrm{\dot{M}_{ion} = 1.5 \, M_{\odot}/yr}$, the kinetic power is $\mathrm{\dot{E}_{ion} = 2.1 \times 10^{41}\, erg/sec}$, and the kinetic coupling efficiency is $\mathrm{\epsilon_{ion} = 1.5 \times 10^{-3}}$. For the neutral gas, the gas mass is $\mathrm{M_{neutral} = 2.7 \times 10^8 \, M_{\odot}}$, the mass outflow rate is $\mathrm{\dot{M}_{neutral} = 39 \, M_{\odot}/yr}$, the kinetic power is $\mathrm{\dot{E}_{neutral} = 6.6 \times 10^{42}\, erg/sec}$, and the kinetic coupling efficiency is $\mathrm{\epsilon_{neutral} = 0.045}$. \item We measure a neutral-to-ionized gas mass ratio of 23, and a neutral-to-ionized mass outflow rate ratio of 26, both very close to the ratios we calculated using our photoionization models. This consistency is not a trivial consequence of our assumptions. It supports the accuracy of our estimates (outflow extent, dust reddening, covering factor, etc.) and of our assumed thin-shell geometry. \end{enumerate} The E+A galaxy SDSS J124754.95-033738.6 shows one of the most direct connections between the neutral and ionized outflow phases, and thus allowed us to estimate several previously-unknown properties of the outflowing clouds. It remains to be seen whether additional post starburst E+A galaxies show similar connections between the different outflowing phases. We are currently involved in a detailed analysis of similar objects and results will be reported in forthcoming publications. \section*{Acknowledgments} We thank the referee, R. Maiolino, for useful comments and suggestions that helped improve this manuscript. We thank S. Cazzoli, M. Perna, T. Shimizu, and H. Yesuf for useful discussions regarding different aspects presented in this manuscript. D. Baron is supported by the Adams Fellowship Program of the Israel Academy of Sciences and Humanities. This research made use of {\sc Astropy}\footnote{http://www.astropy.org}, a community-developed core Python package for Astronomy \citep{astropy2013, astropy2018}. This work made use of SDSS-III\footnote{www.sdss3.org} data. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science.
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. \bibliographystyle{mn2e}
\section{Introduction} In $\setR^n$, the classical Dirichlet energy is the functional defined on $H^1$ by $$E(u):= \int_{\setR^n} |\nabla u|^2$$ for any $u \in H^1$. As is well known, it is related to the Laplace operator $\Delta:=\sum_{k=1}^{n}\partial_{kk}$ by the integration by parts formula, namely $$ E(u,v)=-\int_{\setR^n} (\Delta u) v $$ for any $u,v \in H^1$ such that $\nabla u\in H^1$, where $E(u,v):=\int_{\setR^n} \langle \nabla u, \nabla v\rangle$. Standard tools from spectral theory show that $\Delta$ generates a semi-group of operators $(e^{t\Delta})_{t>0}$ sending any $u_0 \in L^2$ to the family $(u_t)_{t>0}\subset H^1$ satisfying the heat equation $\partial_t u_t = \Delta u_t$ with $u_0$ as initial condition. The semi-group $(e^{t\Delta})_{t>0}$ admits a smooth kernel $p$, so that for any $f \in L^2$, $x \in \setR^n$ and $t>0$, $$ e^{t\Delta} f(x) = \int_{\setR^n} p(x,y,t)f(y) \di y. $$ The explicit expression of this heat kernel is well-known: for any $x,y \in \setR^n$ and $t>0$, $$ p(x,y,t) = \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{|x-y|^2}{4t}}. $$ In the more general context of a measure space $(X,\mu)$, the Dirichlet energy possesses abstract analogues called Dirichlet forms. Associated with any such form $\cE$ is a self-adjoint operator $L$ whose properties are similar to those of the Laplace operator; in particular, the spectral theorem applies to it and provides a semi-group $(P_t)_{t>0}$ delivering the solution of the equation $\partial_t u_t = L u_t$ starting from any square integrable initial condition. Under suitable assumptions, this semi-group admits a kernel. When the space $X$ is equipped with a metric $\dist$ generating the $\sigma$-algebra on which $\mu$ is defined, this kernel is often compared with the Gaussian term $$ \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4t}} $$ through upper and lower estimates: see \cite{Sturm2}, for instance. From this perspective, a natural question arises: what happens when the heat kernel of $\cE$ coincides with this Gaussian term? In this article, we answer this question by showing that the unique metric measure space admitting such a kernel is the Euclidean space. The precise statement of our main result is the following: \begin{theorem}\label{th:main} Let $(X,\dist)$ be a complete metric space equipped with a non-negative regular Borel measure $\mu$. Assume that there exists a symmetric Dirichlet form $\cE$ on $(X,\mu)$ admitting a heat kernel $p$ such that for some $\alpha>0$, \begin{equation}\label{eq:heatkernel} p(x,y,t) = \frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\dist^2(x,y)}{4t}} \end{equation} holds for any $x,y \in X$ and any $t>0$. Then $\alpha$ is an integer, $(X,\dist)$ is isometric to $(\setR^\alpha,\dist_e)$ where $\dist_e$ stands for the classical Euclidean distance, and $\mu$ is the $\alpha$-dimensional Hausdorff measure. \end{theorem} Then we show that this rigidity result can be turned quantitative via a suitable contradiction argument.
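To give a first flavour of the rigidity phenomenon, let us point out an elementary consequence of \eqref{eq:heatkernel}; this is a sketch, assuming in addition that the semi-group is stochastically complete, i.e.~$\int_X p(x,y,t) \di \mu(y) = 1$ for all $x \in X$ and $t>0$. In that case, \eqref{eq:heatkernel} gives $$ \int_X e^{-\frac{\dist^2(x,y)}{4t}} \di \mu(y) = (4\pi t)^{\alpha/2} \qquad \forall x \in X, \, \, \forall t>0, $$ and the layer-cake formula turns the left-hand side into $$ \int_0^{+\infty} \mu(B_r(x)) \, \frac{r}{2t} \, e^{-\frac{r^2}{4t}} \di r. $$ Since the function $r \mapsto \omega_\alpha r^\alpha$, where $\omega_\alpha:=\pi^{\alpha/2}/\Gamma(\alpha/2+1)$, produces the same right-hand side for all $t>0$, injectivity of the Laplace transform (in the variable $1/(4t)$, after the substitution $u=r^2$) yields $\mu(B_r(x)) = \omega_\alpha r^\alpha$ for any $x \in X$ and $r>0$: the measure of balls is thus completely determined by the heat kernel.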
Denoting by $\dist_{GH}$ the Gromov-Hausdorff distance and by $\setB^n_r$ any Euclidean ball in $\setR^n$ with radius $r>0$, we obtain the following: \begin{theorem}\label{th:almostrigidity} For any $\upepsilon>0$, there exists $\updelta>0$ depending only on $\upepsilon$ and $n$ such that if $(X,\dist,\mu)$ is a complete metric measure space endowed with a symmetric Dirichlet form $\cE$ admitting a heat kernel $p$ satisfying \begin{equation}\label{eq:almostheatkernel} (1-\updelta) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1-\updelta)t}}\le p(x,y,t) \le (1+\updelta) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1+\updelta)t}} \end{equation} for any $x,y \in X$ and $t \in (0,T]$, for some given $T>0$, then for any $x \in X$ and $r \in (0,\sqrt{T})$, \begin{equation}\label{eq:Reifenberg} \dist_{\mathrm GH}\left( B_r(x), \setB^n_r\right)< \upepsilon r. \end{equation} \end{theorem} The intrinsic Reifenberg theorem of Cheeger and Colding \cite[Theorem A.1.1.]{CheegerColding} provides the following immediate topological consequence, where $\Psi(\cdot|n)$ is a function depending only on $n$ with $\Psi(r|n) \to 0$ when $r \to 0^+$. \begin{corollary}\label{cor:topo} There exists $\updelta_n>0$ depending only on $n$ such that if $(X,\dist,\mu)$ is a complete metric measure space endowed with a symmetric Dirichlet form $\cE$ admitting a heat kernel $p$ such that for some numbers $\updelta \in (0,\updelta_n)$ and $T>0$, $$ (1-\updelta)\frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1-\updelta) t}}\le p(x,y,t) \le (1+\updelta)\frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1+\updelta) t}} $$ holds for all $x, y \in X$ and $t \in (0,T)$, then for any $x \in X$, there exists a topological embedding of $\mathbb{B}_{\sqrt{T}}^{n}$ into $B_{\sqrt{T}}(x)$ whose image contains $B_{(1-\Psi(\updelta|n))\sqrt{T}}(x)$.\end{corollary} We point out that the two previous results also hold in the case $T=+\infty$. Moreover, Theorem \ref{th:almostrigidity} can be used to give an alternative proof of a celebrated result established by T.~H.~Colding \cite[Theorem 0.8]{Colding}, namely the almost rigidity of the volume for Riemannian manifolds with non-negative Ricci curvature. Let us recall this statement: \begin{theorem}[Colding]\label{th:Colding} For any $\upepsilon>0$, there exists $\updelta>0$ depending only on $\upepsilon$ and $n$ such that if $(M^n,g)$ is a complete Riemannian manifold with non-negative Ricci curvature such that for any $x \in M$ and $r>0$, \begin{equation}\label{eq:vol}\vol B_r(x) \ge \left(1-\updelta\right) \omega_n\, r^n,\end{equation} then for any $x\in M$ and $r>0$, $$\dist_{\mathrm GH}\left( B_r(x), \setB^n_r\right)\le \upepsilon r.$$ \end{theorem} This theorem is a direct consequence of our almost rigidity theorem coupled with an intermediary result, Theorem \ref{th:estimate}, which states, roughly speaking, that a complete Riemannian manifold satisfying the volume estimate \eqref{eq:vol} necessarily has an almost Euclidean heat kernel. Our proof of this result is based on previous works by J.~Cheeger and S.-T.~Yau \cite{CheegerYau}, P.~Li and S.-T.~Yau \cite{LiYau} and especially P.~Li, L.-F.~Tam and J.~Wang \cite{LiTamWang}. Finally, in the last section of this paper, we investigate the case of a metric measure space equipped with a spherical heat kernel.
To be precise, the sphere $\mathbb{S}^n$ has a heat kernel which can be written as $$ K_t^{(n)}(\dist_{\mathbb{S}^n}(x,y)) $$ where $K_t^{(n)}$ is an explicit function and $\dist_{\mathbb{S}^n}$ is the classical round Riemannian distance. We show that if a metric measure space $(X,\dist,\mu)$ is equipped with a Dirichlet form admitting a heat kernel $p$ such that $$p(x,y,t)=K_t^{(n)}(\dist(x,y))$$ for all $x, y \in X$ and $t>0$, then $(X,\dist)$ is isometric to $(\mathbb{S}^n,\dist_{\mathbb{S}^n})$. Let us now say a few words about our proof of Theorem \ref{th:main}. A key point is the celebrated result of T.~H.~Colding and W.~P.~Minicozzi II asserting that on any complete Riemannian manifold satisfying the doubling and Poincaré properties, the space of harmonic functions with linear growth is finite-dimensional \cite{ColdingMinicozzi}. As already observed in non-smooth contexts \cite{Hua, HuaKellXia}, the proof of this result can be carried out on any complete metric measure space satisfying the doubling and Poincaré properties. It turns out that admitting a Dirichlet form with an Euclidean heat kernel forces the metric measure space to satisfy these two properties, see Proposition \ref{prop:important}. Then we consider the functions $$B(x,\cdot):=\frac{1}{2}(\dist^2(o,x)+\dist^2(o,\cdot)-\dist^2(x,\cdot)) \qquad x \in X$$ which are easily shown to have linear growth. When $(X,\dist,\mu)$ is equipped with a Dirichlet form $\cE$ satisfying the assumptions of Theorem \ref{th:main}, these functions are locally $L$-harmonic: this follows from establishing $$ L\textbf{1}=0 \qquad \text{and} \qquad L\dist^2(x,\cdot) = 2 \alpha. $$ Therefore, the vector space $\cV$ generated by the functions $B(x,\cdot)$ is finite-dimensional, say of dimension $n$. Choosing a suitable basis $(h_1,\ldots,h_n)$ of this space, we embed $X$ into $\setR^n$ by setting $$ H(x)=(h_1(x),\ldots,h_n(x)) $$ for any $x \in X$. More precisely, there exist $x_1,\ldots,x_n \in X$ such that $(\delta_{x_1},\ldots,\delta_{x_n})$ is a basis of $\cV^{*}$, and $(h_1,\ldots,h_n)$ is chosen as the dual of this basis. Setting $Q(\xi) := \sum_{i,j} B(x_i,x_j)\xi_i \xi_j$ for any $\xi=(\xi_1,\cdots,\xi_n) \in \setR^n$, we easily get \begin{equation}\label{eq:isometry} Q(H(x)-H(y))=\dist^2(x,y) \end{equation} for any $x, y \in X$: indeed, writing $B(x,\cdot)=\sum_{j} B(x,x_j) h_j$ and using the symmetry of $B$ yields $B(x,y)=\sum_{i,j}B(x_i,x_j)h_i(x)h_j(y)$, so that \eqref{eq:isometry} follows from the identity $B(x,x)+B(y,y)-2B(x,y)=\dist^2(x,y)$. In particular, $H$ is an embedding. To conclude, we establish $\alpha = n$ and show that $Q$ is non-degenerate, so that $\dist_Q(\xi,\xi'):=\sqrt{Q(\xi-\xi')}$ defines a distance on $\setR^n$ that is isometric to the Euclidean distance: then \eqref{eq:isometry} shows that $H$ is an isometric embedding onto its image, which a final argument proves to be the whole of $\setR^n$. We prove these two concluding assertions by studying asymptotic cones at infinity of $(X,\dist,\mu)$. It is worth mentioning that in the case of $(X,\dist,\mu,\cE)$ equipped with a spherical heat kernel, we embed $X$ into $E_1:=\mathrm{Ker}(-L-\lambda_1 I)$, where $\lambda_1$ is the first non-zero eigenvalue of $-L$, and show that $H(X)$ is isometric to $\Sigma:=\{Q=1\}$ for some suitable quadratic form $Q$. The paper is organized as follows. Our proof of Theorem \ref{th:main} relies on several notions and results from different areas that we collect in the preliminary Section 2. Then in Section 3 we establish simple rigidity results for metric measure spaces with an Euclidean heat kernel. We use these results in Section 4, which is dedicated to the proof of Theorem \ref{th:main}.
Section 5 is devoted to the almost rigidity result, namely Theorem \ref{th:almostrigidity}, and Section 6 explains our new proof of Colding's volume almost rigidity theorem. Finally, Section 7 contains our study of the case of metric measure spaces equipped with a spherical heat kernel. \smallskip\noindent \textbf{Acknowledgement.} The first author thanks the Centre Henri Lebesgue ANR-11-LABX-0020-01 for creating an attractive mathematical environment; he was also partially supported by the ANR grants: {\bf ANR-17-CE40-0034}: {\em CCEM} and {\bf ANR-18-CE40-0012}: {\em RAGE}. The second author thanks S.~Honda for interesting remarks and questions at a late stage of this work and for the good working conditions in Tohoku University that greatly helped in completing this article. \section{Preliminaries} \quad \, Throughout the article, we shall call metric measure space any triple $(X,\dist,\mu)$ where $(X,\dist)$ is a $\sigma$-compact metric space and $\mu$ is a non-negative $\sigma$-finite Radon measure on $(X,\dist)$ such that $\supp \mu = X$. Here $\supp \mu$ denotes the support of $\mu$. We shall keep fixed a number $\alpha>0$ and denote by $\omega_\alpha$ the quantity \begin{equation}\label{eq:omega_n} \omega_\alpha = \frac{\pi^{\alpha/2}}{\Gamma(\alpha/2+1)} \end{equation} where $\Gamma$ denotes the usual Gamma function $\{\mathrm{Re}>0\} \ni z \mapsto \int_0^{+\infty} t^{z-1}e^{-t} \di t$. Note that when $\alpha$ is an integer $n$, then $\omega_n$ is the volume of the unit Euclidean ball in $\setR^n$. We shall use classical notations for the functional spaces defined on $(X,\dist, \mu)$, like $C(X)$ (resp.~$C_c(X)$) for the space of continuous (resp.~compactly supported continuous) functions, $\Lip(X)$ (resp.~$\Lip_c(X)$) for the space of Lipschitz (resp.~compactly supported Lipschitz) functions, $L^p(X,\mu)$, where $p \in [1,+\infty)$, for the space of (equivalence classes of) $\mu$-measurable functions whose $p$-th power is $\mu$-integrable, $L^\infty(X,\mu)$ for the space of $\mu$-essentially bounded functions, and so on. We shall write $\supp f$ for the support of a function $f$ and $1_A$ for the characteristic function of a set $A \subset X$. A generic open ball in $(X,\dist)$ will be denoted by $B$, and we will write $\lambda B$ for the ball with the same center as $B$ but radius multiplied by $\lambda>0$. We will extensively make use of the following definition. \begin{definition} We say that a metric measure space $(X,\dist,\mu)$ has an $\alpha$-dimensional volume whenever $\mu(B)=\omega_\alpha r^\alpha$ for any metric ball $B\subset X$ with radius $r>0$. \end{definition} \textbf{Dirichlet forms.} Let us recall some basic facts about Dirichlet forms, referring to e.g. \cite{FukushimaOshidaTakeda, Sturm1, KoskelaZhou} for more details. Let $(X,\tau)$ be a topological space equipped with a $\sigma$-finite Borel measure $\mu$.
A Dirichlet form $\cE$ on $(X,\mu)$ is a non-negative definite bilinear map $\cE : \cD(\cE) \times \cD(\cE) \to \setR$, with $\cD(\cE)$ being a dense subset of $L^2(X,\mu)$, satisfying closedness, meaning that the space $\cD(\cE)$ is a Hilbert space once equipped with the scalar product \begin{equation}\label{eq:scalE} \langle f, g \rangle_{\cE} := \int_{X} f g \di \mu + \cE(f,g)\qquad \forall f, g \in \cD(\cE), \end{equation} and the Markov property: for any $f \in \cD(\cE)$, the function $f_{0}^{1} = \min ( \max(f,0),1)$ belongs to $\cD(\cE)$ and $\cE(f_{0}^{1},f_{0}^{1}) \le \cE(f,f)$. We denote by $|\cdot|_\cE$ the norm associated with $\langle \cdot, \cdot \rangle_\cE$. We focus only on symmetric Dirichlet forms, i.e.~those $\cE$ for which $\cE(f,g)=\cE(g,f)$ holds for all $f,g \in \cD(\cE)$. Therefore, in the rest of the article, by Dirichlet form we will always tacitly mean \textit{symmetric} Dirichlet form. Finally, let us recall that any Dirichlet form is associated with a non-negative definite self-adjoint operator $L$ with dense domain $\cD(L) \subset L^2(X,\mu)$ characterized by the following: $$ \cD(L):=\left\{f \in \cD(\cE) \, : \, \exists h=:Lf \in L^2(X,\mu)\, \, \text{s.t.}\, \, \cE(f,g)= -\int_X h g \di \mu \, \, \, \forall g \in \cD(\cE)\right\}. $$ \hfill We now additionally assume that $(X,\tau)$ is locally compact and separable and that $\mu$ is a Radon measure such that $\supp \mu = X$. A Dirichlet form $\cE$ on $(X,\mu)$ is called \textit{strongly local} if $\cE(f,g)=0$ for any $f, g \in \cD(\cE)$ such that $f$ is constant on a neighborhood of $\supp g$, and \textit{regular} if $C_c(X) \cap \cD(\cE)$ contains a subset (called a \textit{core}) which is both dense in $C_c(X)$ for $\|\cdot\|_{\infty}$ and in $\cD(\cE)$ for $|\cdot|_\cE$. A celebrated result by A.~Beurling and J.~Deny \cite{BeurlingDeny} implies that any strongly local regular Dirichlet form $\cE$ on $(X,\mu)$ admits a non-negative definite symmetric bilinear map $\Gamma : \cD(\cE) \times \cD(\cE) \to \mathrm{Rad}$, where $\mathrm{Rad}$ denotes the set of signed Radon measures on $(X,\tau)$, such that $$ \cE(f,g) = \int_X \di \Gamma(f,g) \qquad \forall f,g \in \cD(\cE), $$ where $\int_X \di \Gamma(f,g)$ denotes the total mass of the measure $\Gamma(f,g)$. From now until the end of this paragraph, we assume that $\cE$ is strongly local and regular. Let us mention that the map $\Gamma$ is concretely given as follows: for any $f \in \cD(\cE) \cap L^\infty(X,\mu)$, the measure $\Gamma(f):=\Gamma(f,f)$ is defined by its action on test functions: \begin{equation}\label{eq:2000} \int_X \phi \di \Gamma(f) := \cE(f,f\phi) - \frac{1}{2}\cE(f^2, \phi) \qquad \forall \phi \in \cD(\cE) \cap C_c(X). \end{equation} Regularity of $\cE$ allows to extend \eqref{eq:2000} to the set of functions $\phi \in C_c(X)$, providing a well-posed definition of $\Gamma(f)$ by duality between $C_c(X)$ and $\Rad$. In case $f \in \cD(\cE)$ is not essentially bounded, $\Gamma(f)$ is obtained as the limit of the increasing sequence of measures $(\Gamma(f_{-n}^{n}))_{n\in\setN}$ where $f_{-n}^n:=\min(\max(f,-n),n)$ for any $n \in \setN$. The general expression of $\Gamma(f,g)$ for any $f,g \in \cD(\cE)$ is then obtained by polarization: $$ \Gamma(f,g) := \frac{1}{4}(\Gamma(f+g,f+g) - \Gamma(f-g,f-g)). $$ Strong locality of $\cE$ implies locality of $\Gamma$, that is $$ \int_A \di \Gamma(u,w) = \int_A \di \Gamma(v,w) $$ for any open set $A\subset X$ and any functions $u,v,w \in \cD(\cE)$ such that $u=v$ on $A$. 
This property allows to extend $\Gamma$ to the set $\cD_{loc}(\cE)$ made of those $\mu$-measurable functions $f$ for which for any compact set $K\subset X$ there exists $g \in \cD(\cE)$ such that $f=g$ $\mu$-a.e.~on $K$. Then $\Gamma$ satisfies the Leibniz rule: \begin{equation}\label{eq:Leibniz} \Gamma(fg,h)=f \Gamma(g,h) + g \Gamma(f,h) \qquad \forall u,v \in \cD_{loc}(\cE) \cap L^{\infty}_{loc}(X,\mu), \, \, \forall h \in \cD_{loc}(\cE), \end{equation} and the chain rule: \begin{align}\label{eq:chain} \Gamma(\eta \circ f,g)=(\eta'\circ f) \Gamma(f,g) \qquad \qquad & \forall \eta \in C^1_{b,bd}(\setR), \, \, \forall f \in \cD_{loc}(\cE),\nonumber \\ \text{and} \quad & \forall \eta \in C^1(\setR), \, \, \forall f \in \cD_{loc}(\cE)\cap L^{\infty}(X,\mu), \end{align} where $C^1_{b,bd}(\setR)$ stands for the set of bounded $C^1$ functions on $\setR$ with bounded derivative. For our purposes, we also need to define $\cD_{loc}(\Omega,\cE)$ as the set of functions $f\in L^2_{\loc}(\Omega)$ for which for any compact set $K\subset \Omega$ there exists $g \in \cD(\cE)$ such that $f=g$ $\mu$-a.e.~on $K$; here $\Omega$ is an open subset of $X$. The so-called \textit{intrinsic} extended pseudo-distance $\dist_\cE$ associated with $\cE$ is defined by: \begin{equation}\label{eq:defdist} \dist_\cE(x,y):=\sup \{|f(x)-f(y)| \, : \, f \in C(X) \cap \cD_{loc}(\cE) \, \, \, \text{s.t.} \, \, \Gamma(f) \le \mu\}\quad \forall x,y \in X. \end{equation} Here $\Gamma(f) \le \mu$ means that $\Gamma(f)$ is absolutely continuous with respect to $\mu$ with density lower than $1$ $\mu$-a.e.~on $X$, and ``extended'' refers to the fact that $\dist_\cE(x,y)$ may be infinite. When the topology $\tau$ is generated by a distance $\dist$ on $X$, we call asumption (A) the following statement: \begin{equation}\label{eq:A}\tag{A} \text{$\dist_\cE$ is a distance inducing the same topology as $\dist$}. \end{equation} A final consequence of strong locality and regularity is that the operator $L$ canonically associated to $\cE$ satisfies the classical chain rule: \begin{equation}\label{eq:chainrule} L(\phi \circ f) = (\phi' \circ f) Lf + (\phi''\circ f) \Gamma(f) \qquad \forall f \in \mathbb{G}, \,\, \, \forall \phi \in C^\infty([0,+\infty),\setR), \end{equation} where $\mathbb{G}$ is the set of functions $f \in \cD(L)$ such that $\Gamma(f)$ is absolutely continuous with respect to $\mu$ with density also denoted by $\Gamma(f)$. In particular: \begin{equation}\label{eq:chainrulesquare} Lf^2 = 2fLf + 2 \Gamma(f) \qquad \forall f \in \mathbb{G}. \end{equation} \vspace{4mm} \textbf{Heat kernel associated to a Dirichlet form.} Let $(X,\tau)$ be a topological space equipped with a $\sigma$-finite Borel measure $\mu$. The spectral theorem (see e.g.~\cite[Th.~VIII.5]{ReedSimon}) implies that the operator $L$ associated to any Dirichlet form $\cE$ on $(X,\tau)$ defines an analytic sub-Markovian semi-group $(P_t)_{t>0}$ acting on $L^2(X,\mu)$ where for any $f \in L^2(X,\mu)$, the map $t \mapsto P_tf$ is characterized as the unique $C^1$ map $(0,+\infty)\to L^2(X,\mu)$, with values in $\cD(L)$, such that $$ \begin{cases} \frac{\di}{\di t} P_tf = L(P_t f) \qquad \forall t>0,\\ \lim\limits_{t \to 0} \|P_t f - f\|_{L^2(X,\mu)}=0. 
\end{cases} $$ One can then recover $\cD(L)$ and $L$ from $(P_t)_{t>0}$ in the following manner: $$\cD(L)=\left\{f \in L^2(X,\mu) \, : \, \left(\frac{P_t f - f}{t}\right)_{t>0} \, \text{converges in the $L^2$-norm when $t\downarrow 0$}\right\} ,$$ \begin{equation}\label{eq:carL} Lf = \lim\limits_{t \downarrow 0} \frac{P_t f-f}{t} \qquad \forall f \in \cD(L). \end{equation} We say that $\cE$ admits a heat kernel if there exists a family of $(\mu \otimes \mu)$-measurable functions $(p(\cdot, \cdot, t))_{t>0}$ on $X\times X$ such that for all $t>0$ and $f\in L^2(X,\mu)$, one has $$P_t f (x) = \int_X p(x,y,t) f(y) \di \mu(y) \qquad \text{for $\mu$-a.e.~$x \in X$};$$ the function $p=p(\cdot,\cdot,\cdot)$ is then called the heat kernel of $\cE$. In this case, the semi-group property (namely $P_{s+t}=P_s \circ P_t$ for any $s,t>0$) implies that $p$ satisfies the so-called Chapman-Kolmogorov property: \begin{equation}\label{eq:ChapmanKolmogorov} \int_X p(x,z,t)p(z,y,s) \di \mu(z) = p(x,y,t+s), \qquad \forall x,y \in X, \, \, \forall s,t>0. \end{equation} Moreover, for any $t>0$, $p(\cdot,\cdot,t)$ is symmetric and uniquely determined up to a $(\mu\otimes \mu)$-negligible subset of $X\times X$. When $\cE$ admits a heat kernel, the space $(X,\tau,\mu,\cE)$ is called stochastically complete whenever $$ \int_X p(x,y,t) \di \mu(y) = 1 \qquad \forall x \in X, \forall t >0. $$ Under stochastic completeness, one can show that the domain of $\cE$ coincides with $$ \left\{f \in L^2(X,\mu) \, : \, t \mapsto \frac{1}{2t} \iint_{X\times X} (f(x)-f(y))^2 p(x,y,t) \di \mu(x) \di \mu(y) \, \, \text{is bounded} \right\} $$ and that \begin{equation}\label{eq:Grigor'yan} \cE(f,g) = \lim\limits_{t \downarrow 0} \frac{1}{2t} \iint_{X\times X} (f(x)-f(y))(g(x)-g(y)) p(x,y,t) \di \mu(x) \di \mu(y) \end{equation} for any $f, g \in \cD(\cE)$ and \begin{equation}\label{eq:Grigor'yan2} \int_X \phi \di \Gamma(f) = \lim\limits_{t \downarrow 0} \frac{1}{2t} \iint_{X\times X} \phi(x)(f(x)-f(y))^2 p(x,y,t) \di \mu(x) \di \mu(y) \end{equation} for any $f \in \cD_{loc}(\cE)$ and $\phi \in C_c(X)$: see \cite[2.2]{Grigor'yan}, for instance. As well-known, the classical Dirichet energy on $\setR^n$ admits the Gaussian heat kernel $$ p(x,y,t) = \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist_e^2(x,y)}{4t}} \qquad \forall x,y \in X, \, \forall t >0, $$ where $\dist_e$ is the usual Euclidean distance. This motivates the next definition. \begin{definition}\label{def:Euclideanheatkernel} Let $(X,\dist,\mu)$ be a metric measure space and $\cE$ a Dirichlet form on $(X,\mu)$. We say that $(X,\dist,\mu,\cE)$ has an $\alpha$-dimensional Euclidean heat kernel if $\cE$ admits a heat kernel $p$ such that: $$ p(x,y,t) = \frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\dist^2(x,y)}{4t}}\qquad \forall x,y \in X, \, \forall t>0.$$ \end{definition} \hfill \textbf{Harnack inequalities.} Let $(X,\dist,\mu)$ be a metric measure space equipped with a Dirichlet form $\cE$ with associated operator $L$. Let $\leb^1$ be the Lebesgue measure on $\setR$. In order to properly state what a Harnack inequality means for $(X,\dist,\mu,\cE)$, let us introduce some notions. We refer e.g.~to \cite{Sturm2} and the references therein for more details. Note first that any element $f \in L^2(X,\mu)$ uniquely defines a continuous linear form on $\cD(\cE)$, namely $g \mapsto \int_X fg \di \mu$. Thus $L^2(X,\mu)$ embeds into $\cD(\cE)^*$ whose norm we denote $|\cdot|_{\cE,*}$. 
For any open interval $I \subset \setR$, we consider the following functional spaces: \begin{itemize} \item $L^2(I,\cD(\cE))$ is the space of $\leb^1$-measurable functions $u:I\to \cD(\cE)$, $u_t:=u(t)$, equipped with the Hilbert norm $\|u\|_{L^2(I,\cD(\cE))}:=(\int_I |u_t|_{\cE}^2 \di t)^{1/2}$; \item $H^1(I,\cD(\cE)^*)$ is the space of $\leb^1$-measurable functions $u:I\to \cD(\cE)^*$ admitting a distributional derivative $\partial_t u \in L^2(I,\cD(\cE)^*)$ on $I$ equipped with the Hilbert norm $\|u\|_{H^1(I,\cD(\cE)^*)}:=(\int_I |u_t|_{\cE,*}^2 \di t + \int_I |(\partial_t u)_t |_{\cE,*}^2 \di t )^{1/2}$, where $(\partial_t u)_t := \partial_t u(t)$; \item $\cD_{par,I}(\cE):=L^2(I,\cD(\cE))\cap H^1(I,\cD(\cE)^*)$ equipped with the Hilbert norm $\|u\|_{par,I}:=(\int_I |u_t|_{\cE}^2 \di t + \int_I |(\partial_t u)_t |_{\cE,*}^2 \di t )^{1/2}$. \end{itemize} We can define a Dirichlet form $\cE_I$ on $\cD_{par,I}(\cE)$ by setting $$ \cE_I(u,v):=\int_I\cE(u_t,v_t) \di t - \int_I (\partial_t u)_t \cdot v_t \di t \qquad \forall u, v \in \cD_{par,I}(\cE). $$ Let $\Omega \subset X$ be an open set. Denote by $Q$ the parabolic cylinder $I \times \Omega$. Let $\cD_{Q}(\cE)$ be the set of $(\leb^1 \otimes \mu)$-measurable functions defined on $Q$ such that for every relatively compact open set $\Omega' \Subset \Omega$ and every open interval $I' \Subset I$ there exists a function $u' \in \cD_{par,I}(\cE)$ such that $u=u'$ on $I' \times \Omega'$. We also define $\cD_{Q,c}(\cE)$ as the set of functions $u \in \cD_{Q}(\cE)$ such that for any $t \in I$, the function $u_t$ has compact support in $\Omega$. \begin{definition} We call local solution on $Q$ of the equation $(\partial_t+L)u = 0$ any function $u \in \cD_Q(\cE)$ such that $\cE_I(u,\phi)=0$ holds for any $\phi \in \cD_{Q,c}(\cE)$. \end{definition} When $\cE$ admits a heat kernel $p$, one can show that for any $x \in X$ and $t>0$ the function $p(x,\cdot,t)$ is a local solution of the equation $(\partial_t+L)u=0$. The next important proposition is a combination of several famous results \cite{Grigor'yan92,Saloff-Coste,Sturm3}. \begin{proposition}\label{prop:important} Let $(X,\dist,\mu)$ be a metric measure space equipped with a strongly local and regular Dirichlet form $\cE$ satisfying assumption \eqref{eq:A}. Let $L$ be the operator canonically associated to $\cE$. 
Then the following statements are equivalent: \begin{enumerate} \item the combination of a) the doubling property: there exists a constant $C_D>0$ such that for any ball $B \subset X$, \begin{equation}\label{eq:doubling} \mu(2B)\le C_D \mu(B), \end{equation} b) the local Poincaré inequality: there exists a constant $C_P>0$ such that for any $f \in \cD(\cE)$ and any ball $B \subset X$ with radius $r>0$, setting $f_B := \mu(B)^{-1}\int_B f \di \mu$, \begin{equation}\label{eq:Poincare} \int_B |f-f_B|^2 \di \mu \le C_P r^2 \cE(f), \end{equation} \item the existence of a heat kernel $p$ for $\cE$ satisfying double-sided Gaussian estimates: there exists $C_G>0$ such that for any $x,y \in X$ and any $t>0$, \begin{equation}\label{eq:LiYau} \frac{C_G^{-1}}{\mu(B_{\sqrt{t}}(x))} e^{-\frac{\dist^2(x,y)}{4t}} \le p(x,y,t) \le \frac{C_G}{\mu(B_{\sqrt{t}}(x))} e^{-\frac{\dist^2(x,y)}{4t}}, \end{equation} \item the parabolic Harnack inequality: there exists a constant $C_H>0$ such that {\color{blue} for any $s\in \setR$}, any ball $B$ with radius $r>0$ and any non-negative local solution $u$ on $(s-r^2,s)\times B$ of the parabolic equation $(\partial_t + L)u=0$, we have \begin{equation}\label{eq:parabolicHarnack} \essssup_{Q_-} (u) \le C_H \esssinf_{Q_+} (u) \end{equation} where $Q_-:=(s-(3/4)r^2,s-(1/2)r^2) \times (1/2)B$ and $Q_+:=(s-(1/4)r^2,s)\times (1/2)B$. \end{enumerate} \end{proposition} Note that the parabolic Harnack inequality \eqref{eq:parabolicHarnack} implies the elliptic one introduced below in Lemma \ref{lem:ellipticHarnack}.\\ \textbf{Locally $L$-harmonic functions.} Let $(X,\dist,\mu)$ be a metric measure space equipped with a strongly local and regular Dirichlet form $\cE$ with associated operator $L$. We set $$ \cD_c(\cE):=\{ \phi \in \cD(\cE) \, \text{with compact support}\}.$$ \begin{definition}\label{def:localsol} Let $\Omega \subset X$ be an open set. 1. We call local solution on $\Omega$ of the Laplace equation $Lu=0$ any function $u \in \cD_{loc}(\Omega,\cE)$ such that $\cE(u,\phi)=0$ holds true for any $\phi \in \cD_c(\cE)$ with $\supp \phi \subset \Omega$. 2. We call locally $L$-harmonic function any function $u \in \cD(\cE)$ such that $\cE(u,\phi)=0$ holds true for any $\phi \in \cD_c(\cE)$. 3. For any $f \in L^1_{loc}(X,\mu)$, we call local solution on $\Omega$ of the Poisson equation $Lu=f$ any function $u \in \cD_{loc}(\Omega, \cE)$ such that $\cE(u,\phi)=-\int_X f \phi \di \mu$ holds true for any $\phi \in \cD_c(\cE)$ with $\supp \phi \subset \Omega$. \end{definition} We shall often simply write ``$Lu=f$ on $\Omega$'' to mean that $u \in \cD_{loc}(\Omega, \cE)$ is a local solution on $\Omega$ of the equation $Lu=f$, and ``$Lv=0$'' to express that $v \in \cD_{loc}(\cE)$ is locally $L$-harmonic. Lastly, we point out that strong locality directly implies that constant functions are locally $L$-harmonic, i.e. $$ L\mathbf{1}=0. $$ Let us state a classical lemma (Liouville theorem under elliptic Harnack inequality) whose proof is omitted here (see e.g.~\cite[Lem.~6.3]{AldanaCarronTapie}). \begin{lemma}\label{lem:ellipticHarnack} Let $(X,\dist,\mu)$ be a metric measure space equipped with a Dirichlet form $\cE$ whose associated operator $L$ satisfies an elliptic Harnack inequality, meaning that there exists a constant $C_E>0$ such that for any ball $B \subset X$ and any non-negative local solution $h$ of $Lu=0$ on $B$, we have \begin{equation}\label{eq:ellipticHarnack} \essssup_{(1/2)B} \, h \le C_E \esssinf_{(1/2)B} \, h. 
\end{equation} Then any non-negative locally $L$-harmonic function is constant.\\ \end{lemma} \textbf{Strongly harmonic functions.} Let $(X,\dist,\mu)$ be a metric measure space. Following the terminology adopted in \cite{GaczkowskiGorka, AGG19}, for any open set $\Omega \subset X$ we call strongly harmonic function on $\Omega$ any function $h:\Omega\to\setR$ satisfying the mean value property: $$h(x) = \fint_{B_r(x)} h \di \mu \qquad \forall x \in \Omega, \, \, \forall \, r\in(0, \dist(x,^c \Omega)).$$ \begin{remark}\label{rem:defalternative} It can easily be checked that a function $h:\Omega\to\setR$ is strongly harmonic if and only if for any $x\in \Omega$ and any $u\in C_c^1\left( \left[0, \dist(x,^c \Omega)\right]\right)$ with $\int_X u(\dist(x,y))\di \mu(y) = 1$ one has $$h(x) = \int_{X}u\left(\dist(x,y)\right) h(y) \di \mu(y).$$ \end{remark} Under mild assumptions on $(X,\dist,\mu)$, an elliptic Harnack inequality holds true for strongly harmonic functions, provided the doubling condition \eqref{eq:doubling} is satisfied: see \cite[Lemma 4.1]{AGG19}. The next lemma is an easy consequence of this fact. We recall that a metric space is called proper if any closed ball is compact, and that proper metric spaces are complete and locally compact. \begin{lemma}\label{lem:Harnackstrongly} Let $(X,\dist)$ be a proper metric space equipped with a regular Borel measure $\mu$ such that $0<\mu(B)<+\infty$ for any metric ball $B \subset X$. Assume that $(X,\dist,\mu)$ satisfies the doubling condition \eqref{eq:doubling}. Then any non-negative strongly harmonic function on $X$ is constant. \end{lemma} \begin{comment} Moreover, under stronger assumptions, strongly harmonic functions are locally Hölder \cite[Th.~4.1]{AGG19}. We recall that a minimizing geodesic in $(X,\dist)$ is a continuous map $c:[0,L]\to X$ such that $\dist(c(t),c(t'))=|t-t'|\dist(0,L)$ for any $t,t' \in [0,L]$ and that $(X,\dist)$ is called geodesic whenever two points can be joined by a minimizing geodesic. \begin{proposition}\label{prop:AGGHölder} Let $(X,\dist)$ be a geodesic metric space equipped with a doubling measure $\mu$ with doubling constant $C_D$. Then any strongly harmonic function $f$ on $(X,\dist,\mu)$ satisfies the following local Hölder estimate: for any $x \in X$ and $r>0$, $$ |f(x)-f(y)|\le C \bigg(\sup_{B_r(x)} f - \inf_{B_r(x)}f \bigg) \left( \frac{\dist(x,y)}{r}\right)^{\beta} $$ holds for any $y \in B_r(x)$, where $C>0$ and $\beta \in (0,1)$ depend only on $C_D$. \end{proposition} \end{comment} When $(X,\dist,\mu)$ has an $\alpha$-dimensional volume, strongly harmonic functions satisfy the following two properties: \begin{lemma}\label{lem:0109} Let $(X,\dist,\mu)$ be with an $\alpha$-dimensional volume and $h:X\to\setR$ be strongly harmonic. Then: \begin{enumerate} \item[(i)] if $h$ has linear growth -- meaning that there exists $C>0$ such that $|h|\le C(1+\dist(o,\cdot))$ for some $o \in X$ -- then $h$ is Lipschitz; \item[(ii)] if $h$ is continuous and such that $\sup_{\partial B_{r_i}(o)} |h| =o(r_i)$ for some point $o \in X$ and some sequence $\{r_i\}_i \subset (0,+\infty)$ such that $r_i \to +\infty$, then $h$ is constant. \end{enumerate} \end{lemma} \begin{proof} Let us first prove $(i)$. Assuming $h$ to have linear growth, we know that there exists $o \in X$, $r_o>0$ and $M>0$ such that $|h(z)| \le M\dist(o,z)$ for all $z \in X \backslash B_{r_o}(o)$. 
Since $h$ is strongly harmonic, we have $$ \mu(B_{r+d}(x)) h(x) - \mu(B_r(y)) h(y) = \int_{B_{r+d}(x) \backslash B_r(y)} h \di \mu $$ for all $r>0$ and any given $x,y \in X$, where we have set $d:=\dist(x,y)$. Since $\mu(B_{r+d}(x)) = \omega_\alpha (r+d)^\alpha$ and $\mu(B_r(y)) = \omega_\alpha r^\alpha$, we obtain \begin{align}\label{eq:D4.1} |\omega_\alpha (r+d)^\alpha h(x) - \omega_\alpha r^\alpha h(y)| & \le \omega_\alpha ((r+d)^\alpha - r^\alpha)\, \mathrm{sup}_{B_{r+d}(x){\color{blue}\backslash B_r(y)}} \, |h| \nonumber \\ & \le \omega_\alpha ((r+d)^\alpha - r^\alpha) \, \mathrm{sup}_{B_{r+d+\dist(o,x)}(o){\color{blue}\backslash B_r(y)}} \, |h| \end{align} since $B_{r+d}(x) \subset B_{r+d+\dist(o,x)}(o)$. Choosing $r>r_o{\color{blue}+\dist(o,y)}$ in order to ensure that $B_r(y)$ contains $B_{r_o}(o)$, we get $ \mathrm{sup}_{B_{r+d+\dist(o,x)}(o){\color{blue}\backslash B_r(y)}} \, |h| \le M(r+d+\dist(o,x)),$ hence $$|(1+d/r)^\alpha h(x) - h(y)| \le ((1+d/r)^\alpha-1)M(r+d+\dist(o,x)).$$ Letting $r \to +\infty$ and applying $(1+d/r)^\alpha-1 =\alpha d/r +o(1/r)$ yields to $|h(x) - h(y)| \le \alpha dM$. To prove $(ii)$, apply \eqref{eq:D4.1} with $r=R_i:=r_i-d-\dist(o,x)$ to get $$ \left| h(x) (1+d/R_i)^\alpha - h(y)\right| \le ((1+d/R_i)^\alpha-1) \,\mathrm{sup}_{B_{r_i}(o)} |h|. $$ By the weak maximum principle \cite[Cor.~4.3]{AGG19}, we have $\sup_{B_{r_i}(o)} |h| = \sup_{\partial B_{r_i}(o)} |h|$. Since $(1+d/R_i)^\alpha-1= \alpha d/ R_i + o(1/R_i) = O(1/r_i)$ when $i\to +\infty$, then there exists $i_o$ and $C>0$ such that $$\left| h(x) (1+d/R_i)^\alpha - h(y)\right| \le C r_i^{-1}\mathrm{sup}_{\partial B_{r_i}(o)} |h| $$ for all $i \ge i_o$. This implies $h(x)=h(y)$ by letting $i$ tend to $+\infty$. \end{proof} \hfill \textbf{Tangent cones at infinity.} We refer to \cite{Gromov} for a definition of the Gromov-Hausdorff distance $\dist_{GH}$ between compact metric spaces and only mention here that a sequence of compact metric spaces $\{(X_i,\dist_i)\}$ converges to another compact metric space $(X,\dist)$ with respect to the Gromov-Hausdorff distance (what we denote by $\dist_{GH}(X_i,X) \to 0$) if and only if there exists an infinitesimal sequence $\{\eps_i\}_i \subset (0,+\infty)$ and functions $\phi_i : X_i \to X$ called $\eps_i$-isometries such that $|\dist(\phi_i(x),\phi_i(x'))-\dist_i(x,x')|\le \eps_i$ for any $x,x' \in X_i$ and any $i$. If $x_i \in X_i$ for any $i$ and $x \in X$ are such that $\dist(\phi_i(x_i),x) \to 0$, we write $x_i \stackrel{GH}{\to} x$. When dealing with non-compact spaces, we say that a sequence of pointed metric spaces $\{(X_i, \dist_i, x_i)\}_i$ converges in the pointed Gromov-Hausdorff topology to $(X, \dist, x)$ if there exist sequences of positive numbers $\eps_i \downarrow 0$, $R_i \uparrow \infty$, and of Borel maps $\phi_i:B_{R_i}(x_i) \to X$, also called $\eps_i$-isometries, such that such that for any $i$ the ball $B_{R_i}(x)$ is included in the $\eps_i$-neighborhood of $\phi_i(B_{R_i}(x_i))$, namely $ \bigcup_{y \in \phi_i(B_{R_i}(x_i))} B_{\eps_i}(y)$, $|\dist_i(y, z)-\dist(\phi_i(y), \phi_i(z))|<\eps_i$ for any $y,\, z \in B_{R_i}(x_i)$, and $\dist(\phi_i(x_i),x)\to 0$ (which we also abbreviate to $x_i \stackrel{GH}{\to} x$). 
Pointed measured Gromov-Hausdorff convergence of a sequence of pointed metric measure spaces $\{(X_i, \dist_i, \mu_i, x_i)\}$ to $(X, \dist,\mu,x)$ is set as pointed Gromov-Hausdorff convergence of $\{(X_i, \dist_i, x_i)\}$ to $(X, \dist, x)$ with the additional requirement $(\phi_i)_{\sharp}\mu_i \stackrel{C_\bs(X)}{\rightharpoonup} \mu$ where $C_\bs(X)$ is the space of continuous functions with bounded support and $ f_\sharp $ is the push forward operator between measures induced by a Borel map $f$. A metric space $(X,\dist)$ is called metric doubling if there exists a positive integer $N$ such that any ball in $(X,\dist)$ can be covered by at most $N$ balls with half its radius. Whenever $(X,\dist)$ is a doubling space, for any $o \in X$, the family of pointed spaces $\{(X,r^{-1}\dist,o)\}_{r>1}$ satisfies the assumptions of Gromov's precompactness theorem \cite[Prop.~5.2]{Gromov}, henceforth it admits limit points in the pointed Gromov-Haudorff topology as $r\uparrow +\infty$. These pointed metric spaces are called tangent cones at infinity of $(X,\dist)$ in $o$. It is well-known that when $(X,\dist,\mu)$ is satisfies the doubling property \eqref{eq:doubling}, then the metric space $(X,\dist)$ is metric doubling: see e.g.~\cite[Section 2.5]{AmbrosioColomboDiMarino}. When a metric measure space $(X,\dist,\mu)$ has an $\alpha$-dimensional volume, a simple computation shows that it is measure doubling, with $C_D=2^\alpha$. Moreover, one can equip any of its tangent cones at infinity $(\uX,\udist,\uo)$ with a limit measure $\umu$ in the following way. Let $\{r_i\}_i$ be a sequence of positive real numbers diverging to $+\infty$ such that $(\uX,\udist,\uo)$ is the pointed Gromov-Hausdorff limit of $\{(X,r_i^{-1}\dist,o)\}_i$. Set $\mu_i:=r_i^{-\alpha}\mu$ for any $i$, and note that $$ \mu_i(B_r^{\dist_i}(x)) = \mu_i(B_{r r_i}(x)) = \omega_\alpha r^\alpha \qquad \forall x \in X, r>0. $$ Set $\underline{V}(x,r):=\omega_\alpha r^\alpha$ for any $x \in X$ and $r>0$. Then for any $\delta>0$ and any Borel set $A$ of $(\uX,\udist)$, setting $$ \umu_\delta(A) := \inf\left\{ \sum_{i} \underline{V}(z_i,r) \, : \, \{B_{r_i}(z_i)\}_i\, \, \text{s.t. $A \subset \bigcup_i B_{r_i}(z_i)$ and $r_i \le \delta$} \right\} $$ and then $\umu(A) = \lim\limits_{\delta \to 0} \umu_\delta(A)$ defines a metric outer measure $\umu$ on $(\uX,\udist)$ whose canonically associated measure, still denoted by $\umu$, is a Radon measure satisfying $\umu(B_r(\ux)) = \omega_\alpha r^\alpha$ for any $\ux \in \uX$ and $r>0$. This shows that $(\uX,\udist,\umu)$ has an $\alpha$-dimensional volume. Moreover, we obviously have $\umu(B_r(\ux))=\lim_{i \to +\infty} \mu_i(B_r^{\dist_i}(x_i))$ for any $r>0$ and any sequence $x_i \stackrel{GH}{\to} x$; by density in $C_\bs(\uX)$ of the space spanned by the collection of characteristic functions of balls, this implies the pointed measured Gromov-Hausdorff convergence $(X,r_i^{-1}\dist,\mu_i,o) \to (\uX,\udist,\umu,\uo)$.\\ \textbf{Ascoli-Arzelà type theorems} Let $\{(X_i,\dist_i,x_i)\}_i, (X,\dist,x)$ be pointed proper metric spaces such that $$(X_i,\dist_i,x_i) \to (X,\dist,x)$$ in the pointed Gromov-Hausdorff topology and $\phi_i:B_{R_i}(x_i) \to X$ be $\eps_i$-isometries, where $\{\eps_i\}_i, \{R_i\}_i \subset (0,+\infty)$ are such that $\eps_i \downarrow 0$ and $R_i \uparrow +\infty$. For any $i$, let $K_i$ be a compact subset of $X_i$, and assume that there exists $K \subset X$ compact such that $\dist_{GH}(K_i,K) \to 0$. 
We say that functions $f_i:X_i\to\setR$ converge to $f:X\to\setR$ uniformly over $K_i \to K$ if $\sup_{K_i} |f_i - f \circ \phi_i| \to 0$. Note that this definition depends on the choice of the $\eps_i$-isometries $\phi_i$ that we keep fixed for the rest of this paragraph. \begin{remark} In the rest of the article, whenever we consider a convergent sequence of pointed metric spaces $(X_i,\dist_i,x_i) \to (X,\dist,x)$, we always implicitly assume that sequences $\{\eps_i\}_i, \{R_i\}_i \subset (0,+\infty)$ with $\eps_i\downarrow 0, R_i \uparrow +\infty$ and $\eps_i$-isometries $\phi_i:B_{R_i}(x_i)\to X$ have been chosen a priori and that the statements ``$x_i \stackrel{GH}{\to} x$'' and ``$f_i \to f$ uniformly on compact sets'' are meant with these $\eps_i$-isometries. \end{remark} In this context, we have the following Ascoli-Arzelà theorem \begin{proposition}\label{prop:AA1} Let $\{(X_i,\dist_i,x_i)\}_i, (X,\dist,x)$ be as above, and $r>0$. For any $i$, let $f_i \in C(X_i)$ be such that: \begin{itemize} \item $\sup_i \|f_i\|_{L^{\infty}(\overline{B}_r(x_i))} < +\infty$, \item the sequence $\{f_i\}_i$ is asymptotically uniformly continuous on $\overline{B}_r(x)$ (see \cite[Def.~3.2]{Honda}). \end{itemize} Then $\{f_i\}_i$ admits a subsequence $(f_{i(j)})_j$ which converges to $f$ uniformly over $\overline{B}_r(x_i) \to \overline{B}_r(x)$. \end{proposition} \begin{proof} From \cite[Prop.~3.3]{Honda}, we know that for $\{f_i\}_i$ satisfying the above assumptions, there exists $f \in C(B_r(x))$ and a subsequence $(f_{i(j)})_j$ such that $f_{i(j)}(x_j) \to f(x)$ whenever $x_j \stackrel{GH}{\to} x \in B_r(x)$. With no loss of generality, we can assume that the subsequence is the whole sequence itself. By contradiction, assume that the uniform convergence $f_i\to f$ over $\overline{B}_r(x_i)\to \overline{B}_r(x)$ is not satisfied. Then there is some $\eps>0$ and a subsequence $(f_{i(\ell)})_\ell$ such that $\inf_\ell \{\sup_{\overline{B}_r(x_{i(\ell)})} |f_{i(\ell)}-f \circ \phi_{i(\ell)}|\} \ge \eps$. Again, we can assume that the subsequence is the whole sequence itself. For any $i$, choose $y_i \in \overline{B}_r(x_i)$ such that $|f_i(y_i)-f \circ \phi_i(y_i)| \ge \eps/2$ and set $z_i:=\phi_i(y_i) \in \overline{B}_{r+\eps_i}(x)$. Properness of $X$ implies that the sequence $\{z_i\}_i$ converges to some $z \in \overline{B}_r(x)$, up to extraction. In particular, $y_i \stackrel{GH}{\to} z$. Then in $$ \eps/2 \le |f_i(y_i)-f(z)| + |f(z)-f\circ \phi_i(y_i)|, $$ the first term in the right-hand side goes to $0$ when $i$ tend to $+\infty$. Since $f$ is continuous, we also have $|f(z)-f\circ \phi_i(y_i)|\to 0$ when $i$ tend to $+\infty$, hence a contradiction. \end{proof} Let $\{(X_i,\dist_i,x_i)\}_i, (X,\dist,x),\{\phi_i\}_i$ be as above. Let $(Y,\dist_Y)$ be another metric space. We say that $f_i : Y\to X_i$ converge to $f : Y\to X$ uniformly on compact subsets of $Y$ if $\sup_K \dist_i(\phi_i \circ f_i,f) \to 0$ for any compact set $K \subset Y$. An Ascoli-Arzelà theorem is also available in this context. We state it with an equi-Lipschitz assumption which is enough for our purposes. The proof is omitted for brevity. \begin{proposition}\label{prop:AA2} Let $\{(X_i,\dist_i,x_i)\}_i, (X,\dist,x)$ be as above. Let $(Y,\dist_Y)$ be a metric space and $f_i : Y\to X_i$ be Lipschitz functions such that: \begin{itemize} \item $L:=\sup_i \Lip(f_i) < +\infty$, \item there exists $y\in Y$ and $r>0$ such that $\dist_i(f_i(y),x_i)\le r$ for any $i$. 
\end{itemize} Then $\{f_i\}_i$ admits a subsequence converging uniformly on compact sets of $Y$ to some Lipschitz function $f:Y\to X$, and $\Lip(f) \le L$. \end{proposition} Let us conclude this paragraph with a stability result for strongly harmonic functions. \begin{proposition}\label{prop:stabstrongharmonic} Let $\{(X_i,\dist_i,\mu_i,x_i)\}_i, (X,\dist,\mu,x)$ be proper pointed metric measured spaces such that $(X_i,\dist_i,\mu_i,x_i) \to (X,\dist,\mu,x)$ in the pointed measured Gromov-Hausdorff topology. Let $f_i \in C(X_i)$ be converging to $f \in C(X)$ uniformly over $\overline{B}_r(x_i) \to \overline{B}_r(x)$ for any $r>0$. Assume that $f_i$ is strongly harmonic for any $i$. Then $f$ is strongly harmonic. \end{proposition} \begin{proof} By the characterization of strongly harmonic functions stated in Remark \ref{rem:defalternative}, it is enough to establish \begin{equation}\label{eq:fstrongharm} f(y) = \fint_{B_r(y)}u(\dist(y,z))f(z)\di \mu(z) \end{equation} for any given $r>0$, $y \in X$ and $u \in C^1_c([0,+\infty))$ such that $\int_X u(\dist(y,z)) \di \mu(z)=1$. Let $y_i \in X_i$ for any $i$ be such that $y_i \stackrel{GH}{\to} y$. For any $i$, set $$u_i:=\frac{u}{\int_{X_i} u(\dist_i(y_i,z)) \di \mu_i(z)}$$ and note that \begin{equation} \int_{X_i} u_i(\dist_i(y_i,z)) \di \mu_i(z) = 1 \end{equation} so that $f_i$ being strongly harmonic implies \begin{equation}\label{eq:fistronharm} f_i(y_i) = \fint_{B_r(y_i)}u_i(\dist_i(y_i,z))f_i(z)\di \mu_i(z). \end{equation} But \begin{align*} \int_{X_i} u(\dist_i(y_i,z)) \di \mu_i(z) & = - \int_0^{+\infty} u'(r) \mu_i(B_r^{\dist_i}(y_i)) \di r\\ & \to - \int_0^{+\infty} u'(r) \mu(B_r^{\dist}(y)) \di r = \int_{X} u(\dist(y,z)) \di \mu(z)=1, \end{align*} so $u_i \to u$ uniformly on $(0,+\infty)$: this implies that the functions $u_i(\dist_i(y_i,\cdot))f_i \in C(X_i)$ converge uniformly over all compact sets to $u(\dist(y,\cdot))f \in C(X)$. Therefore, letting $i$ tend to $+\infty$ in \eqref{eq:fistronharm} provides \eqref{eq:fstrongharm}. \begin{comment} Take $r>0$ and $y \in X$. Let $y_i \in X_i$ for any $i$ be such that $y_i \stackrel{GH}{\to} y$. Then $$ \left|\fint_{B_r(y)} f \di \mu - f(y)\right| \le \left|\fint_{B_r(y)} f \di \mu - \fint_{B_r(y_i)} f_i \di \mu_i\right| + \underbrace{\left|\fint_{B_r(y_i)} f_i \di \mu_i - f_i(y_i)\right|}_{=0} + \underbrace{\left|f_i(y_i) - f(y)\right|}_{\to 0} $$ and \begin{align*} \left|\fint_{B_r(y)} f \di \mu - \fint_{B_r(y_i)} f_i \di \mu_i \right| & \le \underbrace{\left|\fint_{B_r(y)} f \di \mu - \fint_{B_r(y)} f_i \circ \phi_i \di \mu \right|}_{\to 0}\\ & + \left|\fint_{B_r(y)} f_i \circ \phi_i \di \mu - \fint_{B_r(y_i)} f_i \di \mu_i\right| \end{align*} where the functions $\phi_i:B_r(y)\to B_{r_i}(y_i)$ are $\eps_i$-approximations for some infinitesimal sequence $\{\eps_i\}_i \subset (0,+\infty)$. Since $$ \fint_{B_r(y)} f_i \circ \phi_i \di \mu = \fint_{\phi_i(B_r(y))} f_i \di (\phi_i)_{\#}\mu, $$ it follows from the definition of measured Gromov-Hausdorff convergence that $$ \left|\fint_{B_r(y)} f_i \circ \phi_i \di \mu - \fint_{B_r(y_i)} f_i \di \mu_i\right| \to 0. $$ \end{comment} \begin{comment} {\color{green} Let $\phi_i:B_{R_i}(x_i)\to B_{R_i}(x)$ be $\eps_i$-approximations for some infinitesimal sequence $\{\eps_i\}_i \subset (0,+\infty)$ and $R_i \to\infty$. Take $y \in X$. 
Let $y_i \in X_i$ for any $i$ be such that $y_i \stackrel{GH}{\to} y$.} \end{comment} \begin{comment} Let $u\in C_c \left([0, \infty)\right)$ with $\int_0^{\infty} u(r)dr=1$ then $$f_i(y_i) = \int_{X}u\left(\dist_i(y_i,z)\right) f_i(z) \di \mu_i(z).$$ Assume that $\supp u\subset [0,R]$ and assume that $R_i> R+d(x,y)$. By assumptions, we have $\lim_i f_(y_i)=f(y)$. Hence we only need to check that $$\lim_i \int_{X_i}u\left(\dist_i(y_i,z)\right) f_i(z) \di \mu_i(z)=\int_{X}u\left(\dist(y,z)\right) f(z) \di \mu(z).$$ Defined $h_i(z)=u\left(\dist_i(y_i,z)\right) f_i(z)$ and $h(z)=u\left(\dist(y,z)\right) f(z)$ and noticed that these functions are continuous with compact support. Then we have $\lim_i \| h\circ\varphi_i-h_i\|_\infty=0$ so that \begin{align*} \left|\fint_{X)} h \di \mu - \fint_{X_i} h_i \di \mu_i \right| & \le \underbrace{\left|\fint_{X_i} \left(h\circ\varphi_i -h_i\right)\di \mu_i \right|}_{\to 0}\\ & + \left|\fint_{X_i} h \circ \phi_i \di \mu_i - \fint_{X} h \di \mu\right|; \end{align*} it follows from the definition of measured Gromov-Hausdorff convergence that $$ \lim_i \left|\fint_{X_i} h \circ \phi_i \di \mu_i - \fint_{X} h \di \mu\right|= 0. $$} \end{comment} \end{proof} \hfill \textbf{Length structures.} Let $(X,\dist)$ be a metric space. A path in $X$ is a continuous map $c:[0,1]\to X$. It is called rectifiable if its length \[ L_\dist(c):=\sup \left\{ \sum_{i=1}^n \dist(c(t_i),c(t_{i-1})) \, \, : \, \, 0 = t_0 < \ldots < t_n = 1, \, \, n \in \setN\backslash\{0\} \right\} \] is finite. $(X,\dist)$ is called length metric space if for any $x, y \in X$, $$ \dist(x,y) = \inf\{L_\dist (c) \, : \, c \in \Omega_{xy}\}, $$ where $\Omega_{xy}$ is the set of rectifiable paths in $X$ such that $c(0)=x$ and $c(1)=y$. A geodesic space is a trivial example of length space. Equivalently, $(X,\dist)$ is length if $\dist$ coincides with its associated length distance $\overline{\dist}$ defined by: $$ \overline{\dist}(x,y) := \inf\{L_\dist (c) \, : \, c \in \Omega_{xy}\}\qquad \forall x,y \in X, $$ in which case we say that $\dist$ is a length distance. Note that we always have $\dist \le \overline{\dist}$ and $L_{\dist}(c)=L_{\overline{\dist}}(c)$ whenever $c$ is a rectifiable path in $X$. Moreover, $$ L_\dist(c) = \lim\limits_{\alpha \to 0^+} L_{\dist,\alpha}(c) $$ where \[ L_{\dist,\alpha}(c) = \sup \left\{ \sum_{i=1}^n \dist(c(t_i),c(t_{i-1})) \, : \, 0 = t_0 < \ldots < t_n = 1, |t_i - t_{i+1}|<\alpha \, \, \forall i,\, \, n \in \setN\backslash\{0\} \right\}. \] In this context, we have the following lemma. \begin{lemma}\label{lem:length} Let $(X,\delta)$ be a length metric space. Assume that $\dist$ defined as $\dist:=2\sin(\delta/2)$ is a distance. Then its associated length distance $\overline{\dist}$ coincides with $\delta$. \end{lemma} \begin{proof} First note that $(2/\pi) \delta \le \dist \le \delta$ because $(2\pi)x\le2\sin(x/2)\le x$ for any $x\ge 0$. In particular, a map $c:[0,1]\to X$ is continous for $\delta$ if and only if it is for $\dist$. Moreover, since $\delta$ is a length distance, $\dist \le \delta$ implies $\overline{\dist} \le \delta$, so we are left with proving the converse inequality. Let $c$ be a path in $X$. Being continuous, $c$ is also uniformly continuous: for any $\epsilon\in (0,1)$, there exists $\alpha>0$ such that for any $t,s \in [0,1]$, $$ |t-s| < \alpha \quad \Rightarrow \quad \delta(c(t),c(s)) < \epsilon. 
$$ Since $x- 2\sin(x/2)\le x^2$ for any $x\ge0$, then $$\delta - \dist \le \delta^2,$$ so that for any $t,s \in [0,1]$: $$ |t-s| < \alpha \quad \Rightarrow \quad \delta(c(t),c(s)) - \dist(c(t),c(s)) \le \epsilon \delta(c(t),c(s)). $$ This implies $L_{\delta,\alpha}(c) - L_{\dist,\alpha}(c) \le \epsilon L_{\delta,\alpha}(c)$ and thus $(1-\eps)L_\delta(c) \le L_\dist(c)$ by letting $\alpha$ tend to $0$. Letting $\eps$ tend to $0$ provides $L_\delta(c) \le L_\dist(c)$. This implies $\delta \le \overline{\dist}$. \end{proof} \hfill \textbf{Busemann functions.} Let $(X,\dist)$ be a metric space. A geodesic ray in $X$ is a continuous function $\gamma:[0,+\infty)\to X$ such that $\dist(\gamma(t),\gamma(s))=|t-s|$ for any $s,t\ge 0$. The Busemann function associated to a geodesic ray $\gamma$ is defined by $$ b_\gamma(x)=\lim\limits_{t \to +\infty} t - \dist(x,\gamma(t)). $$ Note that this limit is well-defined for any $x \in X$ since the function $t \mapsto t - \dist(x,\gamma(t))$ is non-decreasing and bounded from above by $\dist(o,x)$. Note also that $b_\gamma$ is $1$-Lipschitz, since for any $x, y \in X$ and any $t>0$, one has $t - \dist(x,\gamma(t)) - (t - \dist(y,\gamma(t)) \le \dist(x,y)$, henceforth $b_\gamma(x) - b_\gamma(y) \le \dist(x,y)$ by letting $t \to +\infty$. Moreover, for any $s>0$, one can easily check that \begin{equation}\label{eq:Busemann} b_\gamma(\gamma(s))=s. \end{equation} We shall need the following lemma. \begin{lemma}\label{lem:preparatory} Let $(X,\dist,\mu)$ be a metric measure space equipped with a Dirichlet form $\cE$ with associated operator $L$, and $\alpha \in \setR$. Assume \[ L\textbf{1}=0, \qquad \dist(x,\cdot) \in \cD_{loc}(\cE) \quad \, \text{and} \quad \, L\dist(x,\cdot)=\alpha/\dist(x,\cdot) \, \, \text{on $X\backslash \{x\}$}, \] for any $x \in X$. Then any Busemann function on $(X,\dist)$ is locally $L$-harmonic. \end{lemma} \begin{proof} Let $b_\gamma$ be a Busemann function on $(X,\dist)$. Set $f_s:=s-\dist(\gamma(s),\cdot)$ for any $s>0$ and observe that the assumptions imply that $f_s$ is a local solution on $X \backslash \{x\}$ of $Lu = -\alpha/\dist(\gamma(s),\cdot)$. For any $\phi \in \cD_c(\cE)$, since $\gamma(s) \notin \supp \phi$ and thus $\dist(\gamma(s),\cdot)>0$ on $\supp \phi$ for $s$ large enough, then $$ |\cE(f_s,\phi)|= \left|\int_X (Lf_s) \phi \di \mu\right| \le \frac{|\alpha|}{\dist(\gamma(s),\supp \phi)} \int_X |\phi| \di \mu \to 0 \quad \text{when $s \to +\infty$}. $$ Moreover, as $(f_s)_s$ is increasing and converges pointwise to $b_\gamma$, then $f_s \to b_\gamma$ in $L^2(X,\mu)$. Since $\cD_{loc}(\cE)$ is a Fréchet space that can be equipped with the family of semi-norms $\{p_i(g):=(\|g\|_{L^2(K_i)}^2 +( \sup\{\cE(g,\phi):\phi \in \cD(\cE), \, \supp \phi \subset K_i\})^2)^{1/2}\}_i$, where $\{K_i\}_i$ is an exhaustion of $X$ by compact sets, then $b_\gamma \in \cD_{loc}(\cE)$ and $\cE(b_\gamma,\phi)=0$ for any $\phi \in \cD_c(\cE)$. In particular, $b_\gamma$ is locally $L$-harmonic.\\ \end{proof} \textbf{Laplace transform.} Let $F:[0,+\infty)\to \setR$ be a locally integrable function such that $F(t)=O(e^{\gamma t})$ when $t \to +\infty$ for some $\gamma \in \setR$. The Laplace transform of $F$ is the complex-valued function $\mathcal{L}\{F\}$ defined by: $$ \mathcal{L}\{F\}(z) = \int_0^{+\infty} F(\xi)e^{-z\xi}\di \xi, \qquad \forall z \in \{\mathrm{Re}(\cdot) > \gamma\}. 
$$ Lerch's theorem asserts that if $F_1, F_2 : [0,+\infty)\to\setR$ are two continuous, locally integrable functions satisfying $F_1(t), F_2(t)=O(e^{\gamma t})$ when $t \to +\infty$ and $\mathcal{L}\{F_1\} = \mathcal{L}\{F_2\}$ on $\{\mathrm{Re}>\gamma\}$, then $F_1=F_2$ (see e.g.~\cite[Th.~2.1]{Cohen}). This provides the following lemma. \begin{lemma}\label{lem:Laplace} Let $F:(0,+\infty)\to \setR$ be a continuous and locally integrable function such that $F(t)=O(e^{\eps t})$ when $t \to +\infty$ for any $\eps>0$ and \begin{equation}\label{eq:Laplace} \mathcal{L}\{F\}(\lambda) = \lambda^{-\alpha-1} \qquad \forall \lambda >\lambda_o \end{equation} for some $\alpha>0$ and $\lambda_o \ge 0$. Then $F(\xi)=\xi^\alpha/\Gamma(\alpha+1)$ for any $\xi\ge0$. \end{lemma} \begin{proof} Since $F$ is locally integrable, one can apply the classical theorem on holomorphy under the integral sign to get that $\mathcal{L}\{F\}$ is holomorphic on any compact subset of $\{\mathrm{Re}>0\}$. Therefore, by analytic continuation, \eqref{eq:Laplace} implies $\mathcal{L}\{F\}(z) = z^{-\alpha-1}$ for any $z \in \{ \mathrm{Re}(\cdot)>0\}$. Since the Laplace transform of $\xi \mapsto \xi^{\alpha}$ is $z\mapsto \Gamma(\alpha +1) z^{-\alpha-1}$, Lerch's theorem gives $F(\xi)=\xi^\alpha/\Gamma(\alpha+1)$ for any $\xi\ge0$. \end{proof} \section{First rigidity results for spaces with an Euclidean heat kernel} \quad \, In this section, we establish several properties of metric measure spaces equipped with a Dirichlet form admitting an $\alpha$-dimensional Euclidean heat kernel. We shall use most of these results in the next section to prove Theorem \ref{th:main}. \subsection{Stochastic completeness and consequences} We begin with stochastic completeness. \begin{lemma}\label{lem:stochastic} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then $(X,\dist,\mu,\cE)$ is stochastically complete. \end{lemma} \begin{proof} Take $t,s>0$ and $x \in X$. By \eqref{eq:ChapmanKolmogorov}, for any $y \in X$ we have \begin{equation}\label{eq:Lem3.1} \int_X p(x,z,t)e^{-\frac{\dist^2(z,y)}{4s}} \di \mu(z) = \left(\frac{s}{t+s}\right)^{\alpha/2} e^{-\frac{\dist^2(x,y)}{4(t+s)}}. \end{equation} Letting $s \to +\infty$ and applying the monotone convergence theorem, we get the result.\\ \end{proof} As a consequence of Lemma \ref{lem:stochastic}, we can show that spaces with an $\alpha$-dimensional Euclidean heat kernel have an $\alpha$-dimensional volume. \begin{lemma}\label{lem:euclideanvolume} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then $(X,\dist,\mu)$ has an $\alpha$-dimensional volume. \end{lemma} \begin{proof} Take $x \in X$. By Lemma \ref{lem:stochastic}, we have: $$ \int_X p(x,y,t) \di \mu(y) = 1 \qquad \forall \, t>0, $$ so that the hypothesis on the heat kernel implies: \begin{equation}\label{eq:001} \int_X e^{-\frac{\dist^2(x,y)}{4t}} \di \mu(y) = (4\pi t)^{\alpha/2} \qquad \forall \, t>0. \end{equation} By Cavalieri's principle (see for instance \cite[Lemma 5.2.1]{AmbrosioTilli}), we have $$ \int_X e^{-\frac{\dist^2(x,y)}{4t}}\di \mu(y) = \int_0^{+\infty} \mu(\{e^{-\frac{\dist^2(x,\cdot)}{4t}}>s\}) \di s. $$ Since for any $y \in X$, one has $e^{-\frac{\dist^2(x,y)}{4t}}>s$ if and only if $\dist^2(x,y) < - 4t \log(s)$, then $$ \bigg\{e^{-\frac{\dist^2(x,\cdot)}{4t}}>s\bigg\} = \begin{cases} \emptyset & \text{if $s\ge 1$},\\ B_{\sqrt{-4t\log(s)}}(x) & \text{if $s<1$}. 
\end{cases} $$ Therefore, the change of variable $\xi=-4t\log(s)$ yields to $$ \int_X e^{-\frac{\dist^2(x,y)}{4t}}\di \mu(y) = \frac{1}{4t} \int_0^{+\infty} e^{-\frac{\xi}{4t}} \mu(B_{\sqrt{\xi}}(x)) \di \xi. $$ Coupled with \eqref{eq:001}, and setting $\lambda=1/(4t)$, this leads to: $$ \int_0^{+\infty} e^{-\lambda \xi} \mu(B_{\sqrt{\xi}}(x)) \di \xi = \pi^{\alpha/2} \lambda^{-\alpha/2-1} \qquad \forall \lambda>0. $$ Applying Lemma \ref{lem:Laplace} and \eqref{eq:omega_n} provides the result. \end{proof} A second consequence is that complete spaces with an $\alpha$-dimensional Euclidean heat kernel are proper; in particular, they are locally compact. Note that the space $(\setR^n \backslash \{0\}, \dist_e,\leb^n)$ shows that completeness is a non-removable assumption. \begin{lemma} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel and such that $(X,\dist)$ is complete. Then any closed ball in $X$ is compact. \end{lemma} \begin{proof} Let $B$ be a closed ball in $X$ with center $o$ and radius $R$. Take $(x_k)_k \subset B$ and $t>0$. Set $u_k:=p(x_k,\cdot,t/2)$ for any $k$, and note that $\int_X u_k^2 \di \mu=p(x_k,x_k,t)=(4\pi t)^{-\alpha/2}$ by the Chapman-Kolmogorov property \eqref{eq:ChapmanKolmogorov}. Then the sequence $(u_k)_k$ is bounded in $L^2(X,\mu)$, so it weakly converges to some $u_\infty \in L^2(X,\mu)$. Let $L$ be the operator canonically associated to $\cE$. Set $v_k:=e^{-(t/2)L}u_k$ for any $k$ and $v_\infty = e^{-(t/2)L}u_\infty$. By the Chapman-Kolmogorov property, we have $v_k(y)=p(x_k,y,t)$ for any $k$ and $y \in X$, and since $\dist^2(y,o)/2 \le \dist^2(y,x_k) + \dist^2(x_k,o) \le \dist^2(y,x_k) + R^2$, then $$ |v_k(y)| = (4\pi t)^{-\alpha/2}e^{-\frac{\dist^2(x_k,y)}{4t}} \le (4\pi t)^{-\alpha/2}e^{-\frac{\dist^2(o,y)}{8t}} e^{\frac{R^2}{4t}}=:w(y). $$ The Chapman-Kolmogorov property implies easily that $w$ is in $L^2(X,\mu)$. Moreover, for any $y \in X$, the $L^2$ weak convergence $u_k \to u_\infty$ implies $$ v_k(y)=\int_X p(y,z,t/2)u_k(z)\di \mu(z) \to \int_X p(y,z,t/2)u_\infty(z)\di \mu(z) = v_\infty(y). $$ Then $v_k \to v_\infty$ in $L^2(X,\mu)$ by Lebesgue's dominated convergence theorem. Therefore, $(v_k)_k$ is a Cauchy sequence in $L^2(X,\mu)$. Since \begin{align*} \|v_k - v_l\|_{L^2}^2 & = \|v_k \|_{L^2}^2 + \|v_l\|_{L^2}^2 - 2 \int_X v_k v_l \di \mu \\ & = p(x_k,x_k,2t) + p(x_l,x_l,2t) - 2 p(x_k,x_l,2t) = \frac{2-2e^{-\frac{\dist^2(x_k,x_l)}{8t}}}{(8\pi t)^{\alpha/2}} \end{align*} for all $k,l$, we get that $(x_k)_k$ is a Cauchy sequence, hence the result. \end{proof} Let us conclude with an important lemma. \begin{lemma} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel such that $(X,\dist)$ is complete. Then $(X,\dist)$ is a geodesic space. \end{lemma} \begin{proof} Let us begin with showing that any two points $x,y \in X$ admit a midpoint, i.e.~a point $m \in X$ such that \begin{equation}\label{eq:middle} \dist(x,m)=\dist(y,m)=\frac{\dist(x,y)}{2}\, \cdot \end{equation} Set $F(z)=\dist^2(x,z)+\dist^2(z,y)$ for any $z \in X$. Since $F(z) \to +\infty$ when $\dist(x,z), \dist(z,y) \to +\infty$, then there exists a ball $B \subset X$ such that $\inf_X F = \inf_B F$. From the previous lemma, we know that balls in $X$ are compact, so $\inf_B F$ is attained in some $m \in B$. Therefore, setting \[\lambda:=F(m)=\dist^2(x,m)+\dist^2(m,y),\] we have $$ \|e^{-\frac{F}{4}}\|_{L^\infty(X,\mu)} = e^{-\frac{\lambda}{4}}. 
$$ By the Chapman-Kolmogorov identity \eqref{eq:ChapmanKolmogorov}, we have for any $t>0$ $$ \int_X e^{-\frac{\dist^2(x,z)+\dist^2(z,y)}{4t}}\di \mu(z) = e^{-\frac{\dist^2(x,y)}{8t}}(2\pi t)^{\alpha/2} $$ which can be raised to the power $t$ to provide $$ \|e^{-\frac{F}{4}}\|_{L^{1/t}(X,\mu)} = e^{\frac{\dist^2(x,y)}{8}}(2\pi t)^{\alpha t/2}. $$ Letting $t$ tend to $0$, this yields to $e^{-\frac{\lambda}{4}}=e^{-\frac{\dist^2(x,y)}{8}}$ hence $\lambda=\frac{\dist^2(x,y)}{2}$, thus \begin{equation}\label{eq:lambda2} \dist^2(x,m)+\dist^2(m,y) = \frac{\dist^2(x,y)}{2} \end{equation} by definition of $\lambda$. Since for any $z \in X$, \begin{align*} \dist^2(x,z)+\dist^2(z,y) & = \frac{1}{2}(\dist(x,z)+\dist(z,y))^2 + \frac{1}{2}(\dist(x,z)-\dist(z,y))^2\\ &\ge \frac{1}{2}(\dist(x,y))^2 + \frac{1}{2}(\dist(x,z)-\dist(z,y))^2, \end{align*} taking $z=m$ and using \eqref{eq:lambda2} implies $\dist(x,m)=\dist(m,y)$. The existence of midpoints implies that $(X,\dist)$ is a length space, see \cite[Th.~2.4.16, 1.]{BuragoBuragoIvanov}. Then the result follows from \cite[Th.~2.5.23]{BuragoBuragoIvanov} and \cite[Th.~2.5.9]{BuragoBuragoIvanov}. \end{proof} \begin{remark}\label{rem:geodesicplus} The previous proof can be adapted to show that if a complete proper metric measure space $(X,\dist,\mu)$ can be endowed with a symmetric Dirichlet form $\cE$ admitting a heat kernel $p$ such that for any $x,y\in X$, $$\lim_{t\to 0+}-4t\log p(x,y,t)=\dist^2(x,y),$$ where the convergence holds locally uniformly, then $(X,\dist)$ is geodesic. \end{remark} \begin{comment} Recall that the Hopf-Rinow theorem states that any complete locally compact length metric space is geodesic. This provides the following corollary. \begin{corollary} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel and such that $(X,\dist)$ is a complete length metric space. Then $(X,\dist)$ is geodesic, meaning that any two points $x,y \in X$ can be joined by a minimizing geodesic. \end{corollary} \end{comment} \subsection{Strong locality and regularity of the Dirichlet form} Let us show now that having an $\alpha$-dimensional Euclidean heat kernel forces a Dirichlet form to satisfy several properties. We start with the following. \begin{lemma} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then $\Lip_c(X) \subset \cD(\cE)$. \end{lemma} \begin{proof} Let $f \in \Lip_c(X)$ be with support $K$. Thanks to \eqref{eq:Grigor'yan}, we only need to show that $$ F \, : \, (0,+\infty) \ni t \mapsto \frac{1}{2t} \iint_{X\times X} (f(x)-f(y))^2 \frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\dist^2(x,y)}{4t}} \di \mu(x) \di \mu(y) $$ is a bounded function. Set $D(x,y)=f(x)-f(y)$ for any $(x,y) \in X \times X$. It is easily checked that $\supp(D) \subset (K \times X) \cup (K \times X)$, so for any $t>0$, using the symmetry in $x$ and $y$ of the integrand we get \begin{align*} F(t) & \le \frac{\Lip(f)^2}{t} \int_K\int_X \dist^2(x,y) \frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\dist^2(x,y)}{4t}} \di \mu(x) \di \mu(y)\\ & = \frac{4 \Lip(f)^2}{(4 \pi t)^{\alpha/2}} \int_K \int_X \frac{\dist^2(x,y)}{4t} e^{-\frac{\dist^2(x,y)}{4t}} \di \mu(x) \di \mu(y). 
\end{align*} For any measurable function $g:X\to [0,\infty)$ and any $C^1$ function $\phi : [0,\infty) \to[0,\infty)$ satisfying $\lim_{\lambda\to+\infty}\varphi(\lambda)=0$ and $\int_0^{+\infty} |\varphi'|(\lambda)\ \mu\left(\left\{g<\lambda\right\}\right)d\lambda<\infty$, writing $\varphi(g(x))=\int_{g(x)}^{+\infty} \varphi'(\lambda)d\lambda$ and applying Fubini's theorem leads to \begin{equation}\label{eq:Fub} \int_X \varphi(g(x))\di\mu(x)=-\int_0^{+\infty} \varphi'(\lambda)\ \mu\left(\left\{g<\lambda\right\}\right)\di\lambda. \end{equation} For any $y \in K$, using this fact with $g(x)=\dist^2(x,y)/(4t)$ and $\phi(\xi)=\xi e^{-\xi}$, we get $$ F(t) \le \frac{4 \Lip(f)^2}{(4 \pi t)^{\alpha/2}} \int_K \int_0^{+\infty} (\lambda - 1) e^{-\lambda} \mu(B_{\sqrt{4 t \lambda}}(y)) \di \lambda \di \mu(y). $$ Setting $C_o=C_o(\alpha):=4 \int_0^{+\infty}(\lambda-1)e^{-\lambda}\lambda^{\alpha/2} \di \lambda$ and recalling that $\mu(B_{\sqrt{4 t \lambda}}(y))= \omega_{\alpha} (4 t \lambda)^{\alpha/2}$, we obtain $F(t) \le \Lip(f)^2 \mu(K) C_o \omega_{\alpha} \pi^{-\alpha/2}$, thus $F$ is bounded. \end{proof} \begin{corollary}\label{cor:1} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then $\Lip(X) \subset \cD_{loc}(\cE)\cap C(X)$ and for some constant $C_1$ depending only on $\alpha$, we have: \begin{equation}\label{eq:Lip} \Gamma(f) \le C_1 \Lip(f)^2 \mu \qquad \forall f \in \Lip(X). \end{equation} \end{corollary} \begin{proof} Take $f \in \Lip(X)$. For any compact set $K \subset X$, the function $\phi_K:=\max(1-\dist(\cdot,K),0)$ is a compactly supported Lipschitz function constantly equal to $1$ on $K$. Therefore, $f \phi_K$ coincides with $f$ on $K$, and thanks to the previous lemma, $f \phi_K$ belongs to $\cD(\cE)$. This shows that $f \in \cD_{loc}(\cE)$. Moreover, for any non-negative $\phi \in C_c(X)$ and $t>0$, a direct computation like in the proof of the previous lemma implies $$ \frac{1}{2t} \iint_{X\times X} \phi(x)(f(x)-f(y))^2 p(x,y,t) \di \mu(x) \di \mu(y) \le C_1 \Lip(f)^2 \int_X \phi(x) \di \mu(x) $$ with $C_1$ depending only on $\alpha$, so that letting $t$ tend to $0$ and applying formula \eqref{eq:Grigor'yan2} yields to \eqref{eq:Lip}. \end{proof} We are now in a position to show the following crucial result. \begin{proposition}\label{prop:slandreg} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then $\cE$ is strongly local and regular. \end{proposition} \begin{proof} In the proof of \cite[Th.~4.1]{Grigor'yan}, it is shown that if $(X,\dist,\mu,\cE)$ admits a stochastically complete heat kernel $p$ satisfying $$t^{-\gamma/\beta} \Phi_1(\dist(x,y)t^{-1/\beta})\le p(x,y,t)\le t^{-\gamma/\beta} \Phi_2(\dist(x,y)t^{-1/\beta})$$ for any $x,y \in X$ and $t>0$, where $\beta$ and $\gamma$ are positive constants and $\Phi_1, \Phi_2$ are monotone decreasing functions from $[0,+\infty)$ to itself such that $\Phi_1>0$ and $\int^{+\infty}s^{\beta + \gamma-1}\Phi_2(s)\di s<+\infty$, then \begin{equation}\label{eq:Grigor'yan} \cE(f) \simeq \limsup_{t\to 0} t^{-\frac{(\beta + \gamma)}{2}} \iint_{\{\dist(x,y)<t^{1/2}\}}(f(x)-f(y))^2\di \mu(x)\di \mu(y) \end{equation} holds for all $f \in \cD(\cE)$, what in turn implies strong locality of $\cE$. Here we have used $A\simeq B$ to denote the existence of a constant $c>1$ such that $c^{-1}A\le B \le cA$. Choosing $\Phi_1(s)=\Phi_2(s)=e^{-s^2/4}$, $\beta=2$ and $\gamma = \alpha$, we can apply this result in our context to get strong locality of $\cE$. 
To prove regularity, let us show that $\Lip_c(X)$ is a core for $\cE$. \textit{Density in $C_c(X)$.} Let $f \in C_c(X)$ be with support $K$. For any $R>0$ and $x \in X$, set $f_R(x)=\inf_y \{f(y)+R\dist(x,y)\}$. Note that $f_R(x)\le f(x)$ for any $x \in X$, and since $f(y)+R\dist(x,y)\to +\infty$ when $\dist(x,y) \to +\infty$ and $(X,\dist)$ is proper, the infimum in the definition of $f_R$ is always attained at some $x' \in X$. Then $f_R(x)=f(x')+R\dist(x,x')$ implies $$ \dist(x,x') \le \frac{2\|f\|_{\infty}}{R}. $$ Being continuous with compact support, $f$ is uniformly continuous, so it admits a modulus of continuity $\omega$ which we can assume non-decreasing with no loss of generality. Then for any $x \in X$, $$ |f_R(x)-f(x)|=f(x)-f_R(x)=f(x)-f(x')+\underbrace{f(x')-f_R(x)}_{=-R\dist(x',x)\le 0} \le \omega(\dist(x,x')) $$ so that $$ \|f_R-f\|_{\infty} \le \omega\left( 2 \|f\|_{\infty}/R\right) \to 0$$ when $R \to +\infty$. Therefore, setting $\phi_K:=\max(1-\dist(\cdot,K),0)$ and $g_R:=\phi_K f_R$ for any $R>0$, we get a sequence of compactly supported Lipschitz functions $(g_R)_R$ converging uniformly to $f$. \textit{Density in $\cD(\cE)$.} Let $\Lip_o(X)$ be the set of Lipschitz functions $f$ on $X$ vanishing at infinity, i.e.~such that for some $o \in X$ one has $f(x)\to 0$ when $\dist(o,x)\to+\infty$. We are going to show that $\Lip_c(X)$ is dense in $\Lip_o(X)\cap \cD(\cE)$ for the norm $|\cdot|_{\cE}$. By \eqref{eq:Grigor'yan}, we know that that there exists a constant $C_\alpha>0$ such that if $f\in \cD(\cE)$, then $$ \frac{1}{C_\alpha}\limsup_{t\to 0}\int_X E(f,x,t)\di\mu(x)\le \cE(f) \le C_\alpha\limsup_{t\to 0}\int_X E(f,x,t)\di\mu(x)$$ where $$E(f,x,t)= t^{-\frac{(\alpha+2)}{2}} \int_{B_{\sqrt{t}}(x)}(f(x)-f(y))^2\di \mu(y).$$ Let $f\in \Lip_o(X)\cap \cD(\cE)$. For any $R>0$, we set $$\varphi_R(x):=\left(1-\frac{\dist(x,B_R(o)}{R}\right)_+$$for any $x \in X$, and $f_R := f \varphi_R$. By monotone convergence, we have $\lim_{R\to +\infty} \|f-f_R\|_2=0.$ We look now at $E(f_R,x,t)$ and we distinguish 3 cases: \begin{itemize} \item if $x\in B_{R-\sqrt{t}}(o)$, then $E(f_R,x,t)=E(f,x,t)$; \item if $x\not\in B_{2R+\sqrt{t}}(o)$, then $E(f_R,x,t)=0$; \item if $x\in B_{2R+\sqrt{t}}(o)\setminus B_{R-\sqrt{t}}(o)$, \end{itemize} then using $f_R(x)-f_R(y)=\varphi_R(x)(f(x)-f(y))+f(y)(\varphi_R(x)-\varphi_R(y))$ we get \begin{align*} E(f_R,x,t)&\le 2 t^{-\frac{(\alpha+2)}{2}} \left(\int_{B_{\sqrt{t}}(x)}(f(x)-f(y))^2\di \mu(y)+\int_{B_{\sqrt{t}}(x)}f^2(y)(\varphi_R(x)-\varphi_R(y))^2\di\mu(y)\right)\\ &\le 2E(f,x,t)+ 2 t^{-\frac{(\alpha+2)}{2}} \int_{B_{\sqrt{t}}(x)}f^2(y)\frac{t}{R^2}\di\mu(y) \end{align*} where we have used the fact that $\varphi_R$ is $1/R$-Lipschitz. By Fubini's theorem, \[ \int_X \int_{B_{\sqrt{t}}(x)}f^2(y)\di \mu(y) \di \mu(x) = \int_X f^2(y)\mu(B_{\sqrt{t}}(y))\di \mu(y) = \omega_\alpha t^{\frac{\alpha}{2}} \int_X f^2(y)\di \mu(y), \] thus \begin{align*}\cE(f_R)&\le C_\alpha \limsup_{t\to 0}\int_{X} E(f,x,t)\di\mu(x)+C_\alpha \frac{2}{R^2} \omega_\alpha \int_X f^2(y)\di\mu(y)\\ &\le C_\alpha^2 \cE(f)+C_\alpha \frac{2}{R^2} \omega_\alpha \int_X f^2(y)\di\mu(y). \end{align*} Hence $\{f_R\}_{R\ge 1}$ is bounded in $\cD(\cE)$. The fact that $\lim_{R\to +\infty} \|f-f_R\|_2=0$ implies that $f_R$ converges weakly to $f$ in $\cD(\cE)$ when $R\to+\infty$. 
By Mazur's lemma, there exists a sequence $\{u_\ell\}_\ell \subset \Lip_c(X)$ made of convex combination of $\{f_R\}_{R\ge 1}$ such that $$\lim_{\ell\to +\infty} \|f-u_\ell\|_2=0.$$ Therefore, it is enough to show that $\Lip_o(X)$ contains a subset that is dense in $\cD(\cE)$ for $|\cdot|_\cE$. Let $L^2_c$ be the set of compactly supported functions $f$ in $L^2(X,\mu)$. Then $P_t(L^2_c) \subset \Lip_o(X)$ for any fixed $t>0$. Indeed, for any $f \in L^2_c$ and $x, y \in X$, $$ |P_tf(x) - P_tf(y)|\le \frac{1}{(4\pi t)^{\alpha/2}} \int_X \Big|e^{-\frac{\dist^2(x,z)}{4t}}-e^{-\frac{\dist^2(y,z)}{4t}}\Big| |f(z)| \di \mu(z). $$ Setting $\phi(s)=e^{-\frac{s^2}{4t}}$ and noting that $|\phi'(s)|\le |\phi'(\sqrt{2t})|=:c_t$ for all $s>0$, we get from the mean value theorem, the triangle inequality and Hölder's inequality, that $|P_tf(x) - P_tf(y)|\le C(t,f) \dist(x,y)$ with $C(t,f):=c_t(4 \pi t)^{-\alpha/2} \mu(\supp f)^{1/2} \| f \|_{L^2}$. Moreover, $$ |P_tf(x)| = \left|\frac{1}{(4\pi t)^{\alpha/2}} \int_{\supp f} f(y) e^{-\frac{\dist^2(x,y)}{4t}} \di \mu(y)\right| \le \frac{ e^{-\frac{\dist^2(o,x)}{8t}} }{ (4\pi t)^{\alpha/2} } \int_{\supp f} |f(y)| e^{\frac{\dist^2(o,y)}{4t}} \di \mu(y) \to 0 $$ when $\dist(o,x) \to +\infty$. Let us show now that $P_t(L^2_c)$ is dense in $\cD(\cE)$ by proving that its $\langle \cdot , \cdot \rangle_{\cE}$-orthogonal complement $F$ in $\cD(\cE)$ reduces to $\{0\}$. For any $v\in F$, we have: \begin{equation}\label{eq:18.07} \int_X v P_t f \di \mu + \cE(v,P_t f)=0 \qquad \forall f \in L^2_c. \end{equation} Since $P_t$ maps $L^2(X,\mu)$ into $\cD(L)$ and is self-adjoint, then \begin{align*} \cE(v,P_t f) & = - \int_X v L (P_t f) \di \mu = -\int_X v \frac{\di}{\di t} P_t f \di \mu = -\frac{\di}{\di t} \int_X v P_t f \di \mu\\ & = - \frac{\di}{\di t} \int_X (P_tv) f \di \mu = -\int_X \frac{\di}{\di t}(P_tv) f \di \mu = - \int_X L(P_tv) f \di \mu = \cE(P_tv,f) \end{align*} for any $f \in L^2_c$, so \eqref{eq:18.07} becomes: $$ \int_X (P_tv) f \di \mu + \cE(P_t v,f)=0 \qquad \forall f \in L^2_c. $$ This implies $P_t v \in \cD(L)$ with $L (P_t v) = P_t v$. Since $L$ is a non-positive operator, $1$ cannot be an eigenvalue of $L$, so we necessarily have $P_t v = 0$. This implies $v=0$ since the spectral theorem ensures that $0$ cannot be an eigenvalue of $P_t$. \end{proof} By the Beurling-Deny theorem, Proposition \ref{prop:slandreg} ensures the existence of a $\Gamma$ operator for any Dirichlet form $\cE$ with an $\alpha$-dimensional Euclidean heat kernel defined on a metric measure space $(X,\dist,\mu)$. We can then define the associated pseudo-distance $\dist_\cE$ as recalled in Section 2. It turns out that in this case, $\dist_\cE$ is equivalent to the initial distance $\dist$, as shown in the next proposition. \begin{proposition}\label{eq:propA} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then there exists $C_2>0$ depending only on $\alpha$ such that $C_2 \dist \le \dist_{\cE} \le \dist$. In particular, the assumption \eqref{eq:A} is satisfied. \end{proposition} \begin{comment} stochastically complete and with $\cE$ strongly local and regular. Assume that the following dimensional Gaussian estimates hold: there exists $C>0$ such that for any $x,y \in X$ and $t>0$, \begin{equation}\label{eq:dimensionalGaussian} \frac{C^{-1}}{t^{n/2}} e^{-\frac{\dist^2(x,y)}{ct}} \le p(x,y,t) \le \frac{C}{t^{n/2}} e^{-\frac{\dist^2(x,y)}{c^{-1}t}}. 
\end{equation} \end{comment} \begin{proof} Let us first show that $C_2\dist \le \dist_\cE$ for some $C_2>0$. Set $\Lambda:=\{f \in \Lip(X) \, : \, C_1\Lip(f)^2 \le 1\}$ where $C_1$ is as in \eqref{eq:Lip}. It follows from Corollary \ref{cor:1} that $\Lambda$ is included in the set of test functions in \eqref{eq:defdist}. Noting that $f:=C_1^{-1/2}\dist(x,\cdot)$ is in $\Lambda$ for all $x \in X$ and that $|f(x)-f(y)|=C_1^{-1/2}\dist(x,y)$ for all $x,y \in X$, with $C_2:=C_1^{-1/2}$ we get: $$\dist_\cE(x,y)\ge C_2\dist(x,y) \qquad \forall x,y \in X.$$ Let us show now that $\dist_\cE \le \dist$. To this aim, we follow the lines of \cite{Grigor'yanEd}. Let $v \in \cD_{loc}(\cE)\cap C(X)$ be bounded and such that $\Gamma(v)\le \mu$. For any $a \in \setR$, $t>0$ and $x \in X$, set $\xi_a(x,t):=a v(x) - \frac{a^2}{2}t$. \begin{claim} For any $f \in L^2(X,\mu)$, the quantity $$ I(t):=\int_X f_t^2 e^{\xi_a(\cdot,t)} \di \mu, $$ where $f_t:=P_t f$, does not increase when $t>0$ increases. \end{claim} Indeed, for any $t>0$, writing $\xi_a$ for $\xi_a(\cdot,t)$ and $\xi_a'$ for $\frac{\di}{\di t} \xi_a(\cdot,t)$, we have \begin{align*} \frac{\di}{\di t} (f_t^2e^{\xi_a}) & = 2 f_t \left(\frac{\di}{\di t} f_t \right)e^{\xi_a} + f_t^2\xi_a' e^{\xi_a} = 2 f_t Lf_t e^{\xi_a}-\frac{a^2}{2}f_t^2 e^{\xi_a}. \end{align*} Since $e^{\xi_a} \le e^{a \|v\|_\infty}$, this implies $$ \left|\frac{\di}{\di t} (f_t^2e^{\xi_a}) \right| \le e^{a \|v\|_\infty}(2|f_t||Lf_t| + a^2|f_t|^2/2) \, \in \, L^1(X,\mu), $$ so we can differentiate under the integral sign to get $$ I'(t) = 2 \int_X f_t Lf_t e^{\xi_a} \di \mu - \frac{a^2}{2} \int_X f_t^2 e^{\xi_a} \di \mu. $$ The Leibniz rule \eqref{eq:Leibniz} implies \begin{align*} \int_X f_t Lf_t e^{\xi_a} \di \mu = & \, \, - \cE(f_t,f_te^{\xi_a}) = - \int_X \Gamma(f_t,f_t e^{\xi_a}) \di \mu \\ = & - \int_X f_t \underbrace{\Gamma(f_t,e^{\xi_a})}_{=\Gamma(f_t,e^{\xi_a/2}e^{\xi_a/2})} \di \mu - \int_X e^{\xi_a} \Gamma(f_t) \di \mu\\ = & - 2 \int_X f_t e^{\xi_a/2} \Gamma(f_t,e^{\xi_a/2})\di \mu - \int_X e^{\xi_a} \Gamma(f_t) \di \mu \end{align*} and, starting from $\Gamma(f_te^{\xi_a/2})$, $$- 2 f_t e^{\xi_a/2} \Gamma(f_t,e^{\xi_a/2}) = - \Gamma(f_te^{\xi_a/2}) + f_t^2 \Gamma(e^{\xi_a/2}) + e^{\xi_a} \Gamma(f_t),$$ so that \begin{align*} I'(t) & = 2 \int_X f_t^2 \Gamma(e^{\xi_a/2}) - 2 \int_X \Gamma(f_t e^{\xi_a/2})\di \mu - \frac{a^2}{2}\int_X f_t^2 e^{\xi_a} \di \mu \\ & \le \int_X f_t^2 \left( 2 \Gamma(e^{\xi_a/2})-\frac{a^2}{2} e^{\xi_a} \right) \di \mu. \end{align*} Since $v$ is bounded, we can apply the chain rule \eqref{eq:chain} with $\eta(\xi):=e^{\xi/2}$ to get $\Gamma(e^{\xi_a/2})=(1/4)e^{\xi_a}\Gamma(\xi_a)$ and thus $$ 2 \Gamma(e^{\xi_a/2}) - \frac{a^2}{2}e^{\xi_a} = \frac{1}{2}e^{\xi_a}\Gamma(\xi_a) - \frac{a^2}{2}e^{\xi_a} = \frac{1}{2}e^{\xi_a} a^2 \Gamma(v)-\frac{a^2}{2}e^{\xi_a}\le 0, $$ so $I'(t) \le 0$. \begin{comment} \begin{proof} Since \begin{align*} \left| \frac{\di}{\di t} (f_t^2(x)e^{\xi_a(t,x)})\right| & \le 2 |L f_t(x)||f_t(x)|e^{\xi_a(t,x)} + f_t^2(x)\frac{a^2}{2} e^{\xi_a(t,x)}\\ & \le e^{a \|v\|_\infty}(|L f_t(x)|^2 + |f_t(x)|^2(1+a^2/2)) \end{align*} and $f_t, Lf_t \in L^2(X,\mu)$, we can differentiate under the integral sign. Thus $$ \int_X f_t L (f_t e^{\xi_a(t,\cdot)}) \di \mu = \int_X f_t L (f_t) e^{\xi_a(t,\cdot)} \di \mu + \int_X f_t^2 L e^{\xi_a(t,\cdot)} \di \mu $$ {\color{red} To be finished...}\end{proof} \end{comment} \hfill From now on, assume $a \ge 0$. Apply the claim to $f=1_A$, where $A$ is some Borel subset of $X$. 
Then for any $t>0$ and any Borel subset $B$ of $X$, $$ \int_B f_t^2 e^{\xi_a(\cdot,t)} \di \mu \le \int_X f_t^2 e^{\xi_a(\cdot,t)} \di \mu =I(t)\le I(0) = \int_{A} e^{a v} \di \mu, $$ hence $$ \int_B f_t^2 e^{\xi_a(\cdot,t)} \di \mu \le \mu(A) e^{a \sup_A v}. $$ Moreover, since the heat kernel is Euclidean, we have \begin{align*} \int_B f_t^2 e^{\xi_a(\cdot,t)} \di \mu & = \int_{B} \left(\int_A p(x,y,t) \di \mu(y)\right)^2 e^{\xi_a(x,t)} \di \mu(x)\\ & \ge \frac{e^{-\frac{\sup_{A\times B} \dist^2}{2t}}}{(4\pi t)^\alpha} \mu(B) \mu(A)^2 e^{a \inf_B v - a^2 \frac{t}{2}} \end{align*} thus \[ \frac{e^{-\frac{\sup_{A\times B} \dist^2}{2t}}}{(4\pi t)^\alpha} \mu(B) \mu(A) e^{a (\inf_B v - \sup_A v) - a^2 \frac{t}{2}} \le 1. \] Take $x,y \in X$. With no loss of generality we can assume $v(y)-v(x)>0$. Choose $t$ such that $\sqrt{t}<\dist(x,y)/3$. Apply the previous inequality with $A=B_{\sqrt{t}}(x)$ and $B=B_{\sqrt{t}}(y)$. In this case, $\sup_{A\times B} \dist^2 = \dist^2(x,y) + 2 \sqrt{t}$. Moreover, since $v$ is continuous, we have $\inf_B v - \sup_A v = v(y)-v(x) + \eps(t)$ where $\eps(t)\to 0$ when $t \to 0^+$. Then \[ \frac{e^{-\frac{\dist^2(x,y) + 2\sqrt{t}}{2t}}}{(4\pi)^\alpha} \omega_\alpha^2 e^{a (v(y)-v(x) + \eps(t)) - a^2 \frac{t}{2}} \le 1 \] for any $t \in (0,\dist^2(x,y)/9)$ and any $a \ge 0$. Now for $t \in (0,\dist^2(x,y)/9)$, choose $a=a(t)=(v(y)-v(x) + \eps(t))/t$ to get \[ \frac{e^{-\frac{\dist^2(x,y) + 2\sqrt{t}}{2t}}}{(4\pi)^\alpha} \omega_\alpha^2 e^{\frac{1}{2t}(v(y)-v(x) + \eps(t))^2} \le 1 \] Apply the logarithm function, multiply the resulting inequality by $2t$ and then add $\dist^2(x,y)$ to get \[ -2\sqrt{t} + 2t \ln(\omega_\alpha^2/(4\pi)^\alpha) + (v(y)-v(x) + \eps(t))^2 \le \dist^2(x,y). \] Letting $t$ tend to $0$ gives \begin{equation}\label{16sept2} (v(x)-v(y))^2 \le \dist^2(x,y). \end{equation} Since for any $u \in \cD_{loc}(\cE)$ and any $R>0$, the function $u_R:=\max(u,R)$ is in $\cD_{loc}(\cE)$ with $\Gamma(u_R)\le \Gamma(u)$, approximating any possibly unbounded $v \in \cD_{loc}(\cE)\cap C_c(X)$ with $\Gamma(v) \le \mu$ by $(v_R)_{R>0}$ provides \eqref{16sept2} for any $x,y \in X$ for such a $v$. This implies $\dist_\cE \le \dist$. \end{proof} \begin{remark} Though we will not use it in the sequel, let us point out that Proposition \ref{eq:propA} can be upgraded into $\dist=\dist_\cE$ provided a suitable technical assumption holds: see I. in \cite[Th.~2.5]{terElstRobinsonSikora}. \end{remark} \subsection{Evaluation of $L$ on squared distance functions} Let us show now that the operator associated to the Dirichlet form of a space with an $\alpha$-dimensional Euclidean heat kernel behaves on squared distance functions as the Laplacian does in $\setR^n$. \begin{lemma}\label{lem:Ld^2} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then $L\dist^2(x,\cdot)=2\alpha$, $L\dist(x,\cdot)=(\alpha-1)/\dist(x,\cdot)$ on $X\backslash \{x\}$ and $\Gamma(\dist(x,\cdot))=1$ $\mu$-a.e.~on $X$. \end{lemma} \begin{proof} Take $x\in X$. Note first that Corollary \ref{cor:1} guarantees that $\dist^2(x,\cdot), \dist(x,\cdot) \in \cD_{loc}(\cE)$. For any $t>0$, a direct computation relying on the chain rule \eqref{eq:chainrule} and starting from the equation $$ \left( \frac{\di}{\di t} - L \right) \frac{e^{-\frac{\dist^2(x,\cdot)}{4t}}}{(4 \pi t)^{\alpha/2}} = 0 $$ (recall the remark after Definition 2.3) provides $$ \frac{\dist^2(x,\cdot)}{4t^2} + \frac{\alpha}{2t} - \frac{1}{4t}L\dist^2(x,\cdot) - \frac{1}{(4t)^2}\Gamma(\dist^2(x,\cdot)) =0. 
$$ Multiplying by $(4t)^2$ and letting $t$ tend to $0$ gives $\Gamma(\dist^2(x,\cdot))=4\dist^2(x,\cdot)$ hence $\Gamma(\dist(x,\cdot))=1$ by \eqref{eq:chain}, while multiplying by $4t$ and letting $t$ tend to $+\infty$ gives $L\dist^2(x,\cdot)=2\alpha$, from which follows $L\dist(x,\cdot)=(n-1)/\dist(x,\cdot)$ by \eqref{eq:chainrulesquare}. \begin{comment} Take $t,s>0$ and $x,y \in X$. For the sake of brevity, let us write $\dist^2$ instead of $\dist^2(x,y)$. Thanks to \eqref{eq:carL}, subtracting $e^{-\frac{\dist^2}{4s}}$ in \eqref{eq:Lem3.1}, dividing by $t$ and letting $t \to 0$ provides $$ -Le^{-\frac{\dist^2}{4s}} = \lim\limits_{t\to 0} \frac{1}{t} \left( \left(\frac{s}{t+s}\right)^{n/2} e^{-\frac{\dist^2}{4(t+s)}} - e^{-\frac{\dist^2}{4s}} \right). $$ Since \begin{align*} \left(\frac{s}{t+s}\right)^{n/2} e^{-\frac{\dist^2}{4(t+s)}} - e^{-\frac{\dist^2}{4s}} & = e^{-\frac{\dist^2}{4s}} \left[ \left( \frac{1}{1+t/s}\right)^{n/2} e^{-\frac{\dist^2}{4s}\left(\frac{1}{1+t/s}-1\right)} - 1\right]\\ & = e^{-\frac{\dist^2}{4s}} \left[ \left( 1-t/s+o(t)\right)^{n/2} e^{-\frac{\dist^2}{4s}\left(t/s+o(t)\right)} - 1\right]\\ & = e^{-\frac{\dist^2}{4s}} \left[ \left( 1-(n/2)(t/s)+o(t)\right)\left(1 - \frac{\dist^2}{4s}(t/s)+o(t)\right) - 1\right] \\ & = e^{-\frac{\dist^2}{4s}} \left[ - \left(\frac{n}{2s} - \frac{\dist^2}{4s^2}\right) t + o(t) \right], \end{align*} we get $$ -Le^{-\frac{\dist^2}{4s}} = - e^{-\frac{\dist^2}{4s}} \left(\frac{n}{2s} - \frac{\dist^2}{4s^2}\right). $$ On the other hand, the chain rule \eqref{eq:chainrule} provides $$ Le^{-\frac{\dist^2}{4s}} = -\frac{1}{4s} e^{-\frac{\dist^2}{4s}} L\dist^2 - \frac{1}{16 s^2}e^{-\frac{\dist^2}{4s}}\Gamma(\dist^2), $$ and since $\Gamma(\dist^2) = 4 \dist$\footnote{{\color{red} comment justifie-t-on que $\Gamma(\dist) =1$ $\mu$-p.p.?}}, we get $$ \frac{1}{4s} e^{-\frac{\dist^2}{4s}} L\dist^2 + \frac{1}{4 s^2}e^{-\frac{\dist^2}{4s}}\dist = - e^{-\frac{\dist^2}{4s}} \left(\frac{n}{2s} - \frac{\dist^2}{4s^2}\right) $$ thus $$ L\dist^2 + \frac{\dist}{s} = -2n +\frac{\dist^2}{s}\, \cdot $$ Letting $s$ tend to $+\infty$ gives the first equality, and the second equality follows by \eqref{eq:chainrulesquare}. \end{comment} \end{proof} As a consequence of Lemma \ref{lem:Ld^2}, we can show that locally $L$-harmonic functions on spaces with an $\alpha$-dimensional Euclidean heat kernel are necessarily strongly harmonic. \begin{lemma}\label{lem:Lstrong} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Let $\Omega \subset X$ be an open set and $h$ a locally integrable local solution of $Lh=0$ on $\Omega$. Then for any $x \in \Omega$, the function defined on $(0,\dist(x,^{c}\Omega))$ by $$ r\mapsto \fint_{B_r(x)}h \di \mu$$ is a constant. Therefore, $h$ has a continuous representative strongly harmonic in $\Omega$. \end{lemma} \begin{proof} Take $x \in \Omega$ and set $R:=\dist(x,^c\Omega)$. From $Lh=0$ we get that for any $\phi \in \cD(L)$ with compact support in $\Omega$, $$\langle h,L\phi\rangle_{L^2}=0.$$ Take $\phi \in C_c^\infty((0,R))$ and set $u=\phi \circ \dist(x,\cdot)$ on $X$. Then $u$ belongs to $\cD(L)$ and has compact support included in $\Omega$. The chain rule \eqref{eq:chainrule} and Lemma \ref{lem:Ld^2} yields to \[ Lu = \chi\circ \dist(x,\cdot) \] where we have set $$ \chi(r) := \phi''(r) + \frac{\alpha - 1}{r} \phi'(r) = r^{1-\alpha} \left(r^{\alpha-1} \phi'\right)' $$ for any $r \in (0,R)$. 
Since $\chi \circ \dist(x,\cdot)=-\int_{\dist(x,\cdot)}^R \chi'(s)ds$, we have \begin{align*} -\int_0^R \chi'(s)\left(\int_{B_s(x)}h\di\mu\right)\di s & = -\int_0^R \chi'(s)\left(\int_X h1_{B_s(x)}\di\mu\right)\di s\\ & = \int_X \left( - \int_0^R \chi'(s) 1_{(\dist(x,\cdot),+\infty)}(s) \di s \right) h \di \mu\\ & = \int_X (\chi \circ \dist(x,\cdot)) h \di \mu = \langle h,Lu\rangle_{L^2} = 0. \end{align*} This implies that the function $s\mapsto I(s):= \int_{B_s(x)}h\di \mu$ satisfies the equation $$ \left[r^{\alpha-1}\left(r^{1-\alpha}y'\right)'\right]'=0$$ in the distributional sense on $(0,R)$. Then there exists real-valued constants $a$, $b$ and $c$ such that for any $s \in (0,R)$, $$I(s)=as^\alpha+bs^2+c.$$ Since $h$ is locally integrable, $c=\lim_{s\to 0} I(s) = 0$. Then $s^{-\alpha}I(s)= a+b s^{2-\alpha}$ for any $s \in (0,R)$. Using test functions $\phi$ that are constantly equal to $1$ in a neighborhood of $0$, we can get $(2-\alpha)b=0$, from which follows that $s \mapsto s^{-\alpha} I(s)$ is a constant. Since $(X,\dist,\mu)$ has an $\alpha$-dimensional volume, this provides the result.\\ \end{proof} \subsection{Spaces of locally $L$-harmonic functions with polynomial growth} Let us conclude with a result that shall be crucial in the next section. We recall that for any positive integer $m$, a function $h:X \to \setR$ has polynomial growth of rate $m$ if there exists $C>0$ such that $|h|\le C(1+\dist^m(o,\cdot))$ holds for some $o \in X$. The case $m=1$ corresponds to a linear growth. Note that functions with a fixed polynomial growth rate form a vector space. \begin{proposition}\label{prop:finitedim} Let $(X,\dist,\mu,\cE)$ be with an $\alpha$-dimensional Euclidean heat kernel. Then for any $m \in \setN\backslash \{0\}$, the space of locally $L$-harmonic functions $h:X\to\setR$ with polynomial growth of rate $m$ is finite dimensional. \end{proposition} \begin{proof} Having an $\alpha$-dimensional Euclidean heat kernel, $(X,\dist,\mu,\cE)$ trivially satisfies the Gaussian estimate \eqref{eq:LiYau}. Moreover, we know from Proposition \ref{eq:propA} that it satisfies the assumption \eqref{eq:A}. Consequently, by Proposition \ref{prop:important}, $(X,\dist,\mu,\cE)$ has the doubling and Poincaré properties. Therefore, the arguments of \cite{ColdingMinicozzi} carry over. \end{proof} \section{Construction of the isometry} \quad \, In this section, we construct an isometry between a given metric measure space $(X,\dist,\mu)$ equipped with a Dirichlet form $\cE$ admitting an $\alpha$-dimensional Euclidean heat kernel and an Euclidean space $\setR^l$ equipped with a distance $\dist_Q$ associated to a suitable quadratic form $Q$. Let us recall that a quadratic form on a $\setR$-vector space $V$ is a map $Q : V \to \setR$ for which there exists a bilinear symmetric form $\beta : V \times V \to\setR$ such that $Q(u)=\beta(u,u)$ for any $u \in V$, in which case one has $\beta(u,v)=\frac{1}{2}(Q(u+v)-Q(u)-Q(v))$ and then $Q(u+v)=Q(u) +2\beta(u,v) + Q(v)$ for any $u,v \in V$. Moreover, when $Q$ is positive definite, setting\begin{equation}\label{eq:dist_Q}\dist_Q(u,v):=\sqrt{Q(u-v)}\end{equation} for any $u,v \in V$ defines a distance on $V$ canonically associated to $Q$. When $V=\setR^l$, Sylvester's law of inertia states that $Q$ can be transformed into $(v_1,\ldots,v_l) \mapsto \sum_i v_i^2$ via a suitable change of basis. This implies that $(\setR^l,\dist_Q)$ and $(\setR^l,\dist_e)$ are isometric, so that the construction made in this section proves Theorem \ref{th:main}. 
\subsection{The quadratic form $Q$ and the coordinate function $H$} Let us explain how to define $Q$ in our context. We first fix a base point $o \in X$ and set $$B(x,y):=\frac{1}{2}(\dist^2(o,x)+\dist^2(o,y)-\dist^2(x,y))$$for any $x,y \in X$. Note that \begin{equation}\label{eq:D6.1} \dist^2(x,y)= B(x,x)+B(y,y)-2B(x,y). \end{equation} \begin{remark} In case $(X,\dist)=(\setR^l,\dist_{e})$ and $o$ is the origin in $\setR^l$, the law of cosines gives $B(x,y)=\scal{x}{y}$ for any $x,y \in \setR^l$, where $\scal{\cdot}{\cdot}$ is the usual Euclidean scalar product in $\setR^l$. \end{remark} For any $x \in X$, it follows from Lemma \ref{lem:Ld^2} and the fact that constant functions are locally $L$-harmonic that $B(x,\cdot)$ is locally $L$-harmonic. Moreover, for any $x,y \in X$, since $\dist^2(o,y)-\dist^2(x,y) = (\dist(o,y)-\dist(x,y))(\dist(o,y)+\dist(x,y))$, $\dist(o,y)-\dist(x,y)\le \dist(o,x)$ and $\dist(x,y)\le \dist(x,o)+\dist(o,y)$, we have \begin{align*} B(x,y) & \le \frac{1}{2}(\dist^2(o,x)+\dist(o,x)(\dist(o,x)+2\dist(o,y)))\\ & = \dist^2(o,x)+\dist(o,x)\dist(o,y) \le C_x(1+\dist(o,y)) \end{align*} with $C_x:=\max(\dist^2(o,x),\dist(o,x))$. This shows that $B(x,\cdot)$ has linear growth for any $x \in X$. Then $\cV:=\Span\{B(x,\cdot): x \in X\}$ is a subspace of the space of locally $L$-harmonic functions with linear growth. Using Proposition \ref{prop:finitedim}, we know that this space has a finite dimension, so $\cV$ has a finite dimension which we denote by $l$. Let us then consider the subspace $\mathcal{D}:=\Span\{\delta_x : x \in X\}$ of the algebraic dual $\cV^*$ of $\cV$. If $f \in \cV$ is such that $\theta(f)=0$ for any $\theta \in \cD$, then $f=0$; since the duality pairing $\cV\times\cV^* \to\setR$ is non-degenerate, this implies $\cD = \cV^*$. Therefore, there exist $x_1, \cdots, x_l \in X$ such that $\{\delta_{x_1},\cdots,\delta_{x_l}\}$ is a basis of $\mathcal{V}^*$. Let $\{h_1,\cdots,h_l\}$ be the associated basis of $\mathcal{V}$. Then for any $x \in X$, \begin{equation}\label{eq:basis} B(x,\cdot) = \sum_{i=1}^l \delta_{x_i}(B(x,\cdot)) h_i = \sum_{i=1}^l B(x,x_i) h_i \end{equation} and for any $i \in \{1,\cdots,l\}$, $$ B(x,x_i) = B(x_i,x) = \sum_{j=1}^l \delta_{x_j}(B(x_i,\cdot)) h_j(x)= \sum_{j=1}^l B(x_i,x_j)h_j(x). $$ Therefore, we have \begin{equation}\label{eq:D6.2} B(x,y) = \sum_{i,j=1}^l B(x_i,x_j)h_j(x)h_i(y) \end{equation} for any $x,y \in X$. We now define $Q$ on $\setR^l$ by setting: $$ Q(\xi) := \sum_{i,j=1}^lB(x_i,x_j)\xi_i \xi_j \qquad \forall \, \xi=(\xi_1,\cdots,\xi_l) \in \setR^l. $$ Then $Q$ is a quadratic form whose associated symmetric form $\beta$ is given by $$\beta(\xi,\xi') = \sum_{i,j=1}^l B(x_i,x_j)\xi_i \xi_j' $$ for any $\xi, \xi' \in \setR^l$. Note that $\beta$ is non-degenerate. Indeed, if $\xi \in \setR^l $ is such that $\beta(\xi,\cdot)=0$, then for any $y \in X$ we have $\sum_{i=1}^l \xi_i B(x_i,y)=0$ because $$\sum_{i=1}^l \xi_i B(x_i,y)=\sum_{i,j=1}^l \xi_i h_j(y)B(x_i,x_j)=\beta(\xi,(h_1(y),\ldots,h_l(y)))=0.$$ But $\{B(x_i,\cdot)\}_i$ is a basis of $\cV$ since $$ B(x,y)=\sum_{i=1}^l \delta_{x_i}(B(\cdot,y))h_i(x)= \sum_{i=1}^l B(x_i,y)h_i(x) $$ for any $x,y \in X$, thus $\xi=0_l$, where $0_l$ denotes the origin in $\setR^l$. We are now in a position to introduce our ``coordinate'' function $H:X\to\setR^l$ which we define as $$ H:=(h_1,\cdots,h_l). $$ This function $H$ is continuous because $h_1,\cdots,h_l$ are so. 
Moreover, for any $x,y \in X$, we have \begin{equation}\label{eq:D6.3} \beta(H(x),H(y)) = B(x,y) \end{equation} thanks to \eqref{eq:D6.2} and \begin{equation}\label{eq:D6.4} \dist^2(x,y) = Q(H(x)-H(y)) \end{equation} thanks to \eqref{eq:D6.1}. Note that $H(o)=0_l$ because $B(x,o)=0$ for any $x \in X$. Moreover, \eqref{eq:D6.4}, the continuity of $H$ and the completeness of $(X,\dist)$ imply that $H(X)$ is a closed set of $\setR^l$. \begin{claim}\label{claim} $H$ is an injective map. Moreover, $\Span(H(X)) = \setR^l$ -- in fact, the closed convex hull of $H(X)$ is $\setR^l$. \end{claim} \begin{proof} If $H(x)=H(y)$ then \eqref{eq:D6.4} gives $\dist^2(x,y)=Q(0)=0$, so $x=y$. Then $H$ is injective. For the second statement, let us recall that the closed convex hull $\overline{\mathrm{conv}}(E)$ of a closed set $E \subset \setR^l$ is defined as the smallest convex subset of $\setR^l$ containing $E$; moreover, $\overline{\mathrm{conv}}(E)$ coincides with $$ \bigcap_{\lambda \in \mathcal{A}(E)} \{\lambda \ge 0\} $$ where $\mathcal{A}(E)$ is the set of affine functions on $\setR^l$ that are non-negative on $E$. Note that being closed and convex, $\Span(E)$ contains $\overline{\mathrm{conv}}(E)$. Take $\lambda \in \mathcal{A}(H(X))$. Then $\lambda \circ H : X \to \setR$ is an affine combination of locally $L$-harmonic functions, hence it is a locally $L$-harmonic function too. Since $\lambda \circ H$ is non-negative on $X$, Lemma \ref{lem:ellipticHarnack} implies that it is a constant. Therefore, $\overline{\mathrm{conv}}(H(X))=\setR^l$, what brings the result. \end{proof} Note that the right-hand side in \eqref{eq:D6.4} does not define any squared distance on $\setR^l$ unless $Q$ is shown to be positive definite, see Subsection 4.3. \subsection{Conical structure of tangent cones at infinity} Let $(\uX,\udist,\uo)$ be a tangent cone at infinity of $(X,\dist)$ at $o$. We denote by $\{(X_i,\dist_i:=r_i^{-1}\dist,o)\}_i$, where $\{r_i\}_i \subset (0,+\infty)$ converges to $+\infty$, the sequence of rescalings of $(X,\dist,o)$ converging in the pointed Gromov-Hausdorff topology to $(\uX,\udist,\uo)$. Note that whenever $\ux \in \uX$, there exists a sequence $\{x_i\}_i\subset X$ such that $x_i \stackrel{GH}{\to} \ux$; in particular, $\dist_i(o,x_i) \to \udist(\uo,\ux)$ and $\dist(o,x_i)\to +\infty$.\\ \textbf{\underline{Step 1.}} [Construction of a Busemann function $h_\infty$ associated with a divergent sequence] Let $\{x_i\}_i \subset X$ be a sequence such that $\dist(o,x_i) \to +\infty$. For any $i$, setting $$D_i:=\dist(o,x_i),$$ we define $$h_i(y):=D_i-\dist(x_i,y)$$ for any $y \in X$ and call $c_i:[0,D_i] \to X$ a minimizing geodesic joining $o$ to $x_i$. On one hand, the triangle inequality implies that the functions $h_i$ are all $1$-Lipschitz, so by the Ascoli--Arzelà theorem, up to extracting a subsequence, we can assume that the sequence $\{h_i\}_i$ converges uniformly on compact subsets of $X$ to a $1$-Lipschitz function $h_\infty$. On the other hand, the minimizing geodesics $c_i$ being $1$-Lipschitz too, we can use again the Ascoli-Arzelà theorem to assume, up to extraction, that they converge uniformly on compact sets of $[0,+\infty)$ to a geodesic ray $\gamma$. \begin{claim}\label{claim2} The function $h_\infty$ constructed as above coincides with the Busemann function $b_\gamma$ associated with $\gamma$. 
\end{claim} \begin{proof} Thanks to Lemma \ref{lem:Ld^2} and the fact that constant functions are locally $L$-harmonic, we know that for any $i$ we have $h_i \in \cD_{loc}(\cE)$ and $$ Lh_i = \frac{\alpha-1}{\dist(x_i,\cdot)} \qquad \text{on $X \backslash \{x_i\}$}. $$ Therefore, for any $R\in(0,\dist(o,x_i))$, since $\dist(x_i,y)\ge D_i-\dist(y,o)\ge D_i-R$ holds for any $y \in B_R(o)$, we get \begin{equation}\label{eq:D5.1} |Lh_i| \le \frac{\alpha-1}{D_i-R} \qquad \text{on $B_R(o)$.} \end{equation} Then \begin{equation}\label{eq:wesh} |\cE(h_i,\phi)| = |\langle \phi, L h_i\rangle_{L^2}| \le \frac{\alpha-1}{D_i-R}\|\phi\|_1 \to 0 \qquad \text{when $i \to +\infty$} \end{equation} for any $\phi \in \cD_c(\cE)$. Since $h_i \to h_\infty$ uniformly on compact sets, then $h_i \to h_\infty$ in $L^2_{loc}(X,\mu)$. As in the proof of Lemma \ref{lem:preparatory}, this implies $h_\infty \in \cD_{loc}(\cE)$ with $Lh_\infty=0$. Moreover, $Lb_\gamma=0$ by Lemma \ref{lem:preparatory}, so $h_\infty - b_\gamma$ is a locally $L$-harmonic function. Let us show that it is non-negative. For any $i$ and $s \in [0,D_i]$, set: $$h_{i,s}(y)=s-\dist(y,c_i(s)) \qquad \forall y \in X.$$ Since $\dist(x_i,y) \le \dist(x_i,c_i(s))+\dist(y,c_i(s))=D_i -s + \dist(y,c_i(s))$ for all $y \in X$, we have $$ h_{i,s} \le h_i. $$ As the curves $c_i$ pointwise converge to $\gamma$, the functions $h_{i,s}$ pointwise converge to $g_s:y \mapsto s - \dist(y,\gamma(s))$, so that letting $i$ tend to $+\infty$ provides $$ g_s \le h_\infty $$ and then letting $s$ tend to $+\infty$ gives $$ b_\gamma \le h_{\infty}. $$ Then $h_\infty - b_\gamma$ is a non-negative locally $L$-harmonic function on $X$ hence it is a constant because of Lemma \ref{lem:ellipticHarnack}. But $b_\gamma(o)=0=h_i(o)$ for any $i$, so this constant is equal to $0$. \end{proof} \hfill \textbf{\underline{Step 2.}} [Behavior of $H$ in the convergence $(X,\dist_i) \to (\uX,\udist)$ and link with $h_\infty$] Recall that for any $1 \le j \le n$, the function $h_j$ has linear growth: $|h_j(x)| \le C_j (1+\dist(o,x))$ for any $x \in X$, where $C_j>0$ is some constant. Then the rescalings $h_j^{i}:=r_i^{-1}h_j$ are such that $|h_j^{i}(x)| \le C_j(r_i^{-1}+ \dist_i(o,x))$ for any $x \in X$, hence $$ \|h_j^{i}\|_{L^\infty(B_r^{\dist_i}(o))} \le C_j(r_i^{-1}+r)\le C_j(1+r) $$ holds for any $r>0$ and any $i$ such that $r_i>1$. Moreover, since $h_j$ is locally $L$-harmonic, it is strongly harmonic by Lemma \ref{lem:Lstrong} and then Lipschitz by Lemma \ref{lem:0109}: there is some constant $C_j'>0$ such that $$ |h_j(x)-h_j(y)|\le C_j' \dist(x,y) $$ for any $x,y \in X$. This implies that the sequence $\{h_j^{i}\}_i$ is asymptotically uniformly continuous on $\overline{B}_r(x)$. It is immediate to check that the rescalings $h_j^{i}$ are also strongly harmonic in $(X,\dist_i,\mu_i)$ where $\mu_i:=r_i^{-\alpha}\mu$. Then Proposition \ref{prop:AA1} and Proposition \ref{prop:stabstrongharmonic} imply that up to extracting a subsequence, we can assume that for any $j=1,\dots, l$, the functions $h_j^{i}$ converge uniformly on all compact sets to a strongly harmonic function $\uh_j : \uX \to \setR$. We set $$ \uH := (\uh_1,\ldots,\uh_l) : \uX \to \setR^l. $$ \begin{claim} For any given $\ux \in \uX$, the function $X \ni y \mapsto \beta(\uH(\ux),H(y))$ is a multiple of a Busemann function. \end{claim} \begin{proof} Let $\{x_i\}_i \subset X$ be such that $x_i \stackrel{GH}{\to} \ux$. Denote by $h_\infty$ the Busemann function associated to $\{x_i\}_i$ as in the previous step. 
Then for any $y \in X$, \begin{align*} \beta(\uH(\ux),H(y)) & = \lim\limits_{i \to +\infty} \beta(H_i(x_i),H(y))\\ & = \lim\limits_{i \to +\infty} \frac{1}{r_i} \beta(H(x_i),H(y))\\ & = \lim\limits_{i \to +\infty} \frac{1}{2r_i} (\dist^2(o,x_i)+\dist^2(o,y)-\dist^2(x_i,y)) \qquad \text{by \eqref{eq:D6.3}}\\ & = \lim\limits_{i \to +\infty} \frac{(\dist(o,x_i)-\dist(x_i,y))(\dist(o,x_i)+\dist(x_i,y))}{2r_i}\\ & = h_\infty(y) \left(\frac{\udist(\uo,\ux)}{2} + \lim\limits_{i \to +\infty} \frac{\dist(x_i,y)}{2r_i} \right) \end{align*} since $\dist(o,x_i)-\dist(x_i,y) \to h_\infty(y)$. Now $$ \underbrace{\frac{\dist(x_i,o)-\dist(o,y)}{r_i}}_{\to \udist(\ux,\uo)} \le \frac{\dist(x_i,y)}{r_i} \le \underbrace{\frac{\dist(x_i,o)+\dist(o,y)}{r_i}}_{\to \udist(\ux,\uo)}\, , $$ so \begin{equation} \beta(\uH(\ux),H(y)) = \udist(\uo,\ux) h_\infty(y). \end{equation} \end{proof} Note that we also have the following. \begin{claim}\label{uclaim} $\uH$ is an injective map. Moreover, $\Span(\uH(\uX)) = \setR^l$. \end{claim} \begin{proof} Dividing $\eqref{eq:D6.4}$ by $r_i^2$ and taking $i\to+\infty$ implies $\udist^2(x,y) = Q(\uH(x)-\uH(y))$ for any $x,y \in X$, hence the injectivity of $\uH$. To prove the second part of the statement, let us show that $\uH(\uX)$ is contained in no hyperplan of $\setR^l$. Take a linear form $\lambda :\setR^l\to\setR$ vanishing on $\uH(\uX)$. Considering the convergent sequence $(X_i,\dist_i,o_i)\to(\uX,\udist,\uo)$, Proposition \ref{prop:AA1} implies that up to extraction the equi-Lipschitz functions $r_i^{-1} \lambda \circ H : X \to \setR$ converge to $0=\lambda \circ \uH : \uX \to \setR$ over $\overline{B}^{\dist_i}_r(o) \to\overline{B}_r(o)$ for any $r>0$. Therefore, we have $$\sup_{\partial B_{r_i}(o)} |\lambda \circ H| = \sup_{\partial B_1^{\dist_i}(o)} |\lambda \circ H|= o(r_i).$$ Being a linear combination of locally $L$-harmonic functions, $\lambda \circ H$ is locally $L$-harmonic hence strongly harmonic by Lemma \ref{lem:Lstrong}. Thus Lemma \ref{lem:0109} implies that $\lambda \circ H$ is constantly equal to $0$. Since $\Span(H(X))=\setR^l$, this implies $\lambda = 0$. \end{proof} \hfill \textbf{\underline{Step 3.}} [Construction of the bijection] Our goal now is to construct a natural bijection between $\uX \backslash \{\uo\}$ and $\uS \times (0,+\infty)$, where $\uS:=\{\ux \in \uX : \udist(\uo,\ux)=1\}$.\\ Let us start with some heuristics. For $\ux \in \uX \backslash \{\uo\}$ given, we look for $\usigma \in \uS$ and $t\in(0,+\infty)$ uniquely determined by $\ux$. Here is how we are going to proceed: 1. prove that there exists only one minimizing geodesic $\uc$ joining $\uo$ to $\ux$; 2. show that $\uc$ extends in an unique way into a geodesic ray $\ugamma$. \noindent Indeed, the unique geodesic ray $\ugamma$ that we are going to construct necessarily crosses $\uS$ at a single point $\usigma$ (otherwise $\ugamma$ would fail to be a minimizing geodesic) and it is such that $\ugamma(t)=\ux$ for a unique time $t>0$. Conversely, a pair $(\usigma,t)$ would uniquely determine a point $\ugamma(t) \in \uX$.\\ Let us proceed now with the construction. Take $\ux \in \uX \backslash \{\uo\}$. Let $\{x_i\}_i \subset X$ be such that $x_i \stackrel{GH}{\to} \ux$. For any $i$, let $c_i$ be the minimizing $\dist_i$-geodesic joining $o$ to $x_i$. As done previously, up to extracting a subsequence we can assume that $\{c_i\}_i$ converges uniformly on compact subsets of $[0,+\infty)$ to a geodesic ray $\gamma : [0,+\infty) \to X$. 
We know from Claim \ref{claim2} and the previous step that \begin{equation}\label{eq:asabove} \beta(\underline{H}(\ux),H(y)) = \udist(\uo,\ux) b_\gamma(y) \end{equation} holds for any $y \in X$, where $b_\gamma$ is the Busemann function associated with $\gamma$. Consider now a minimizing geodesic $\uc$ in $(\uX,\udist)$ joining $\uo$ to $\ux$ and set $$\uD:=\udist(\uo,\ux).$$ For any $s \in [0,\uD]$, acting as we did to establish \eqref{eq:asabove}, we can prove that $$\beta(\underline{H}(\uc(s)),H(y)) = s b_\gamma(y)$$ holds for any $y \in X$. Subtracting \eqref{eq:asabove} to this latter equality yields to \begin{equation}\label{eq:draft23} \beta\left( \underline{H}(\uc(s)) - \frac{s}{\uD}\uH(\ux) , H(y)\right) = 0 \end{equation} for any $y \in X$. By Claim \ref{claim}, this implies \begin{equation}\label{eq:draft23} \beta\left( \underline{H}(\uc(s)) - \frac{s}{\uD}\uH(\ux) , \xi \right) = 0 \end{equation} for any $\xi \in \setR^n$ and then \begin{equation}\label{eq:H} \underline{H}(\uc(s)) = \frac{s}{\uD}\uH(\ux) \end{equation} since $\beta$ is non-degenerate. Uniqueness of $\uc$ follows: if $\uc_1$ and $\uc_2$ are two minimizing geodesics joining $\uo$ to $\ux$, for any $s \in [0,\uD]$ one has $$ \uH(\uc_1(s)) = \frac{s}{\uD}\uH(\ux) =\uH(\uc_2(s)) $$ and thus $\uc_1(s)=\uc_2(s)$ since $\uH$ is injective.\\ Let us show now that $\uc$ extends in an unique way into a geodesic ray. Our argument is inspired by the analysis done by J. Cheeger about generalized linear functions \cite[Section 8]{CheegerRademacher}. For any $i$, set $D_i:=\dist(o,x_i)$ and write $\gamma_i:[0,+\infty) \to X$ for the geodesic ray in $(X,\dist_i)$ defined by: $$ \gamma_i(s) = \gamma(sr_i + D_i) \qquad \forall s>0. $$ On one hand, by Proposition \ref{prop:AA2}, we know that up to extracting a subsequence we can assume that $\{\gamma_i\}_i$ converges uniformly on compact subsets of $[0,+\infty)$ to a geodesic ray $\tilde{\gamma}:[0,+\infty] \to \uX$ whose associated Busemann function we denote by $b_{\tilde{\gamma}}.$ On the other hand, if we write $b_{\gamma_i}$ for the Busemann function associated with $\gamma_i$, we can proceed as in Step 2 with $\dist_{i}, H_{i},\gamma_{i}$ in place of $\dist, H, \gamma$ respectively to get \begin{equation} b_{\gamma_{i}}(y) = \frac{\beta(\uH(\ux),H_{i}(y))}{\uD} \end{equation} for any $y \in X$. Then the sequence $(b_{\gamma_i})$ converges pointwise to the function $$ F:\uX \ni \uy \mapsto \frac{\beta(\uH(\ux),\uH(\uy))}{\uD} $$ and we have the following: \begin{claim} \begin{equation}\label{eq:uniqueness} F=\underline{b}_{\tilde{\gamma}}+\uD. \end{equation} \end{claim} \begin{proof} Observe first that $F$ is strongly harmonic since it is a linear combination of the strongly harmonic functions $\uh_1,\ldots,\uh_l$. Let us show that $b_{\tilde{\gamma}}$ is strongly harmonic too. For any $i$, set $$ p_i(x,y,t):=\frac{1}{(4\pi t)^{\alpha/2}} e^{-\frac{\dist_i^2(x,y)}{4t}}= r_i^\alpha p(x,y,r_i^2 t)$$ for any $x,y \in X$ and $t>0$, and $$ \underline{p}(\ux,\uy,t) := \frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\udist^2(\ux,\uy)}{4t}} $$ for any $\ux,\uy \in X$ and $t>0$. Then for any $x,y \in X$ and $s,t>0$, \begin{align*} p_i(x,y,t+s) & = r_i^\alpha p(x,y,r_i^2 t + r_i^2 s) = r_i^\alpha \int_X p(x,z,r_i^2 t)p(z,y,r_i^2 s) \di \mu(z)\\ & = \int_X p_i(x,z,t)p(z,y,r_i^2s) r_i^\alpha \frac{\di \mu(z)}{r_i^\alpha} = \int_X p_i(x,z,t)p_i(z,y,s) \di \mu_i(z). 
\end{align*} For any $\ux, \uy \in \uX$ and $\{x_i\}_i, \{y_i\}_i \in X$ such that $x_i \stackrel{GH}{\to} \ux$ and $y_i \stackrel{GH}{\to} \uy$, the convergence $\dist_i(x_i,y_i) \to \udist(\ux,\uy)$ implies $p_i(x_i,y_i,t) \to \up(\ux,\uy,t)$ for any $t>0$, hence: $$ \underline{p}(\ux,\uy,t+s) = \int_X \underline{p}(\ux,\uz,t) \underline{p}(\uz,\uy,s) \di \umu(\uz) \qquad \forall \ux, \uy \in \uX, \, \forall \, t,s>0. $$ By a standard procedure described for instance in \cite[Section 2]{Grigor'yan}, we can construct a Dirichlet form $\underline{\cE}$ on $(\uX,\udist,\umu)$ admitting a heat kernel given by $\underline{p}$. In particular, $(\uX,\udist,\umu,\underline{\cE})$ has an $\alpha$-dimensional Euclidean heat kernel. Writing $\underline{L}$ for the associated self-adjoint operator, we deduce from Lemma \ref{lem:preparatory} that $b_{\tilde{\gamma}}$ is locally $\underline{L}$-harmonic, then Lemma \ref{lem:Lstrong} implies that $b_{\tilde{\gamma}}$ is strongly harmonic. Let us show now that $F - \underline{b}_{\tilde{\gamma}} \ge 0$. Take $i\in \setN$, $s>0$ and $y \in X$. Then $b_\gamma(y) \ge r_i s + D_i - \dist(\gamma(r_i s+D_i),y)$ by definition of a Busemann function, hence $r_i^{-1}b_\gamma(y) \ge s+ r_i^{-1} D_i - \dist_i(\gamma_i(s),y)$. Since \begin{align*} r_i^{-1}b_\gamma(y) & = \lim\limits_{s \to +\infty} r_i^{-1} s - r_i^{-1} \dist(\gamma(s),y)\\ & = D_i r_i^{-1} + \lim\limits_{s' \to +\infty} s' - \dist_i(\gamma(D_i+r_i s'),y)\qquad \text{where $s'=(s-D_i)r_i^{-1}$}\\ & = D_i r_i^{-1} + b_{\gamma_i}(y), \end{align*} we get $b_{\gamma_i}(y) \ge s - \dist_i(\gamma_i(s),y)$. Letting $i$ tend to $+\infty$ provides $F(y)\ge s - \udist(\tilde{\gamma}(s),y)$, after what letting $s$ tend to $+\infty$ gives $F\ge b_{\tilde{\gamma}}$. By Lemma \ref{lem:ellipticHarnack}, we get that $F-b_{\tilde{\gamma}}$ is a constant function. Since $F(\ux)=\uD=\udist(\uo,\ux) $ and $b_{\tilde{\gamma}}(\ux)=0$, the claim is proved. \end{proof} Let $\ugamma:[0,+\infty)\to \uX$ be the concatenation of $\uc$ and $\tilde{\gamma}$, i.e. $$ \ugamma(t):= \begin{cases} \uc(t) & \text{if $0<t\le \uD$,}\\ \tilde{\gamma}(t-\uD) & \text{if $t\ge\uD$.} \end{cases} $$ By construction, $\ugamma$ is $1$-Lipschitz: $\udist(\ugamma(t),\ugamma(s))\le |t-s|$ for any $s,t>0$. Moreover \eqref{eq:H} implies $F(\ugamma(t))=t$ when $0<t\le \uD$ while \eqref{eq:uniqueness} implies $F(\ugamma(t))=t$ when $t\ge \uD$. Since the function $F$ is $1$-Lipschitz we get $|t-s| \le \udist(\ugamma(t),\ugamma(s))$ for any $s,t>0$, thus $\ugamma$ is a geodesic ray that extends $\uc$. Let us show that this extension $\ugamma$ is unique. By \eqref{eq:uniqueness}, we have \begin{equation}\label{eq:un} \beta(\uH(\ux),\uH(\uy)) = \uD( \ub_{\tilde{\gamma}}(\uy)-\uD) \end{equation} for any $\uy \in \uX$ and we can obtain \begin{equation} \beta(\uH(\tilde{\gamma}(t)),\uH(\uy)) = t( \ub_{\tilde{\gamma}}(\uy)-\uD) \end{equation} for any $\uy \in \uX$ and $t>\uD$ by a similar reasoning. Then if $\ugamma'$ is another extension of $c$ obtained from a geodesic ray $\tilde{\gamma}'$ emanating from $\ux$, we get $$ \beta(\uH(\tilde{\gamma}(t))-\uH(\tilde{\gamma}'(t)),\uH(\uy)) =0 $$ for any $\uy \in \uX$ and $t>\uD$, from which Claim $\ref{uclaim}$ yields $\tilde{\gamma}(t)=\tilde{\gamma}'(t)$. \begin{remark} Note that \eqref{eq:un} implies $\beta(\uH(\tilde{\gamma}(t)),\uH(\uy)) = t\beta(\uH(\tilde{\gamma}(1)),\uH(\uy))$ for all $\uy \in \uX$, hence \begin{equation}\label{eq:conicalstructure} \uH(\tilde{\gamma}(t)))= t\uH( \tilde{\gamma}(1)). 
\end{equation} \end{remark} \hfill \textbf{\underline{Step 4.}} [Construction of the isometry] Let $\Phi$ be the inverse of the bijection constructed in the previous step, i.e.~$$\Phi : \begin{array}{ccr} (0,+\infty) \times \uS & \to & \uX \backslash\{\uo\} \\ (t,\usigma) & \mapsto & \ugamma_{\usigma}(t), \end{array} $$ where $\ugamma_{\usigma}$ is the geodesic ray obtained by extending the minimizing geodesic joining $\uo$ to $\usigma$. Note that \eqref{eq:conicalstructure} implies \begin{equation}\label{eq:conicalstructure2} \uH(\Phi(t,\usigma))= t\uH( \Phi(1,\usigma)) \end{equation} for any $(t,\usigma)\in (0,+\infty) \times \uS$. Let $\dist_C$ be the cone distance on $(0,+\infty)\times \uS$ defined by $$ \dist_C^2((t,\usigma),(t',\usigma')):=(t-t')^2 + 2 t t' \sin^2\left(\frac{\udist_{\uS}(\usigma,\usigma')}{2}\right) $$ for any $(t,\usigma),(t',\usigma') \in (0,+\infty) \times \uS$, where $\udist_{\uS}$ is the length distance associated with the distance on $\uS$ obtained by restricting $\udist$ to $\uS \times \uS$. We are going to establish \begin{equation}\label{eq:isometry} \udist(\ux,\ux') = \udist_C((t,\usigma),(t',\usigma')) \end{equation} for any $\ux = \Phi(t,\usigma), \, \ux' = \Phi(t',\usigma') \in \uX \backslash \{\uo\}$. \begin{claim}\label{claim:isometrywithdelta} There exists $\delta(\usigma,\usigma') \in [0,\pi]$ such that \begin{equation}\label{eq:coeur2} \udist^2(\ux,\ux') = (t-t')^2 + 4 tt' \sin^2\left(\frac{\delta(\usigma,\usigma')}{2}\right)\,\cdot \end{equation} \end{claim} \begin{proof} Choose $\{x_i\}_i, \{x_i'\}_i \subset X$ such that $x_i \stackrel{GH}{\to} \ux$ and $x_i' \stackrel{GH}{\to} \ux'$. For any $i$, divide \eqref{eq:D6.4} by $r_i^2$ to get $ \udist_i^2(x_i,x_i') = Q(H_i(x_i)-H_i(x_i')).$ Letting $i$ tend to $+\infty$ implies $ \udist^2(\ux,\ux') = Q(\uH(\ux)-\uH(\ux'))$, hence $$ \udist^2(\ux,\ux') = Q(t \uH(\Phi(1,\usigma)) - t' \uH(\Phi(1,\usigma'))) $$ thanks to \eqref{eq:conicalstructure2}. To compute $Q(t \uH(\Phi(1,\usigma)) - t' \uH(\Phi(1,\usigma')))$, let us use $\underline{h}_i(\usigma)$ as a shorthand for $\underline{h}_i(\Phi(1,\usigma))$. Then: \begin{align} \udist^2(\ux,\ux') & = Q(t\uh_1(\usigma) - t' \uh_1(\usigma'), \ldots, t\uh_l(\usigma) - t' \uh_l(\usigma')) \nonumber \\ & = \sum_{i,j=1}^l B(x_i,x_j) (t\underline{h}_i(\usigma)-t'\underline{h}_j(\usigma'))(t \underline{h}_j(\usigma)-t'\underline{h}_j(\usigma'))\nonumber \\ & = \left(\sum_{i,j=1}^l B(x_i,x_j) \uh_i(\usigma)\uh_j(\usigma)\right) t^2 + \left(\sum_{i,j=1}^l B(x_i,x_j) \uh_i(\usigma')\uh_j(\usigma')\right) (t')^2\nonumber\\ & - 2 t t' \left(\sum_{i,j=1}^l B(x_i,x_j) \uh_i(\usigma')\uh_j(\usigma)\right)\nonumber\\ & = Q(\uH(\usigma)) t^2 + Q(\uH(\usigma')) (t')^2 - 2 t t' \beta(\uH(\usigma),\uH(\usigma'))\nonumber\\ & = \udist^2(\usigma,\uo)t^2 + \udist^2(\usigma',\uo) (t')^2 - 2 t t' \beta(\uH(\usigma),\uH(\usigma'))\nonumber\\ & = t^2 + (t')^2 - 2 t t' \beta(\uH(\usigma),\uH(\usigma'). \end{align} Set $\uB(\ux,\ux'):=\frac{1}{2}(\udist^2(\uo,\ux)+\udist^2(\uo,\ux')-\udist^2(\ux,\ux'))$. Write \eqref{eq:D6.3} with $x=x_i$, $x'=x_i'$, divide by $r_i^2$ and let $r_i$ tend to $+\infty$ to get $\ubeta(\uH(\ux),\uH(\ux'))=\uB(\ux,\ux')$ and then \begin{equation}\label{eq:coeur} \udist^2(\ux,\ux') = t^2 + (t')^2 - 2 t t' \uB(\usigma,\usigma'). \end{equation} Assuming $t=t'=1$ provides $\uB(\usigma,\usigma')=1-\frac{1}{2}\udist^2(\usigma,\usigma')$. 
The triangle inequality implies $\udist^2(\usigma,\usigma')\le 4$, thus $\uB(\usigma,\usigma') \in [-1,1]$, so we can set $$ \cos(\delta(\usigma,\usigma')):=\uB(\usigma,\usigma') $$ for some $\delta(\usigma,\usigma') \in [0,\pi]$. \end{proof} In particular, \eqref{eq:coeur2} implies $\udist(\usigma_0,\usigma_1)=2\sin(\delta(\usigma_0,\usigma_1)/2)$ for any $\usigma_0,\usigma_1 \in \uS$. \begin{claim}\label{claim:geodesicdelta} The function $\delta$ defines a geodesic distance on $\uS$. \end{claim} \begin{proof} Let us first show that $\delta$ defines a distance on $\uS$. We only prove the triangle inequality since the two other properties are immediate. For given $\usigma_0, \usigma_1,\usigma_2 \in \uS$, let us set $\alpha := \delta(\usigma_0,\usigma_1)$, $\beta:=\delta(\usigma_1,\usigma_2)$ and $\gamma:=\delta(\usigma_0,\usigma_2)$. We can assume $\alpha + \beta \le \pi$ because otherwise we would have $\alpha + \beta > \pi \ge \gamma$, thus nothing to prove. For any $t,s,r>0$, the triangle inequality for $\udist$ written with \eqref{eq:coeur2} gives $$ \sqrt{(t-s)^2 + 4 ts\sin^2(\alpha/2)} + \sqrt{(s-r)^2 + 4 sr\sin^2(\beta/2)} \ge \sqrt{(t-r)^2 + 4tr\sin^2(\gamma/2)}. $$ Considering the three complex numbers $z_0=t$, $z_1 = se^{i\alpha}$ and $z_2 = re^{i(\alpha + \beta)}$, this can be rewritten as $$ |z_0-z_1| + |z_1-z_2| \ge \sqrt{(t-r)^2 + 4tr\sin^2(\gamma/2)}. $$ Choosing $s$ so that $z_0, z_1, z_2$ are aligned implies $|z_0-z_2| = |z_0-z_1| + |z_1 - z_2|$ thus $$ \sqrt{(t-r)^2 + 4tr\sin^2((\alpha+\beta)/2)} = |z_0-z_2| \ge \sqrt{(t-r)^2+ 4tr\sin^2(\gamma/2)} $$ which yields to $\alpha + \beta \ge \gamma$. Let us show now that $\delta$ is geodesic. For given $\usigma_0, \usigma_1 \in \uS$ with $\usigma_0 \neq \usigma_1$, we aim at finding $\usigma_m \in \uS$ such that $$\delta(\usigma_0,\usigma_m)=\delta(\usigma_m,\usigma_1)=\frac12\delta(\usigma_0,\usigma_1).$$ Let $c:[0,\udist(\usigma_0,\usigma_1)]\to X$ be the minimizing $\udist$-geodesic between $\usigma_0$ and $\usigma_1$. Assume first $\delta(\usigma_0,\usigma_1)<\pi$ so that $c(\udist(\usigma_0,\usigma_1)/2)\neq 0$. Then $c(\udist(\usigma_0,\usigma_1)/2)$ writes as $\Phi(s,\usigma_m)$ for some $(s,\sigma_m) \in (0,1) \times \uS$. We have $$ \udist(\usigma_0,\usigma_m) = \udist(\usigma_m,\usigma_1) = \frac{1}{2}\udist(\usigma_0,\usigma_1) $$ from which follows \begin{equation}\label{eq:star} (1-s)^2 + 4 s \sin^2\left(\frac{\alpha_0}{2}\right)=(1-s)^2 + 4 s \sin^2\left(\frac{\alpha_1}{2}\right)= \sin^2\left(\frac{\beta}{2}\right) \end{equation} thanks to \eqref{eq:coeur2}, where we have set $\alpha_0:=\delta(\usigma_0,\usigma_m)$, $\alpha_1:=\delta(\usigma_m,\usigma_1)$ and $\beta:=\delta(\usigma_0,\usigma_1).$ Note first that \eqref{eq:star} immediately implies $\alpha_0=\alpha_1$. Moreover, for any $t>0$, $$ \udist\left(\usigma_0,\Phi(t,\sigma_m)\right)+\udist\left(\usigma_1,\Phi(t,\sigma_m)\right)\ge \udist\left(\usigma_0,\usigma_1\right) $$ implies $$ \sqrt{(1-t)^2 + 4t \sin^2(\alpha_0/2)} + \sqrt{(1-t)^2 + 4t \sin^2(\alpha_1/2)} \ge 2 \sin\left(\frac{\beta}{2}\right) $$ thus $$ (1-t)^2 + 4t \sin^2(\alpha_0/2) \ge \sin^2\left(\frac{\beta}{2}\right). $$ Therefore, the polynomial function $F : t \mapsto (1-t)^2 + 4t\sin^2(\alpha_o/2) - \sin^2(\beta/2)$ is non-negative and vanishes only at $t=s$, so $F'(s)=0$ hence $$ 2(1-s) = 4 \sin^2\left( \frac{\alpha_0}{2}\right). $$ Plugging this in \eqref{eq:star} leads to $\sin^2(\beta/2) = \sin^2(\alpha_0/2)$ hence $\alpha_0=\beta_2$. 
\end{proof} Claim \ref{claim:geodesicdelta} and Lemma \ref{lem:length} implies $\delta=\udist_{\uS}$ which yields \eqref{eq:isometry} by Claim \ref{claim:isometrywithdelta}. \subsection{Equality $l=\alpha$ and positive definiteness of $Q$} Since $\beta$ is non-degenerate, we can write \begin{equation}\label{eq:decomposition} \setR^l = E_+\oplus E_- \end{equation} where $E_+$ is a subspace of $\setR^l$ with maximal dimension where $\beta$ is positive definite and $E_-$ is its $\beta$-orthogonal complement; $\beta$ is negative definite on $E_-$. We call $p_+$ the dimension of $E_+$ and $p_-$ the dimension of $E_-$. Note that $l = p_+ + p_-$ so in particular, $l \ge p_+$. Let us prove $p_+ \ge \alpha$, then $l = \alpha$, in order to reach our conclusion that is $l=p_+=\alpha$. \\ \textbf{\underline{Step 1.}} [$p_+ \ge \alpha$] Let us write $\uH=(\uH_+,\uH_-)$ where $\uH_+:=\mathrm{proj}_{E_+} \circ \uH$ and $\uH_-:=\mathrm{proj}_{E_-} \circ \uH$, and $\mathrm{proj}_{E_+}, \mathrm{proj}_{E_-}$ are the projections associated to the decomposition \eqref{eq:decomposition}. Moreover, we set $q_+(v_+):=\beta(v_+,v_+)$ for any $v_+ \in E_+$ and $q_-(v_-):=\beta(v_-,v_-)$ for any $v_- \in E_-$. Then for any $\ux, \uy \in \uX$, $$ Q(\uH(\ux)-\uH(\uy)) = q_+(\uH_+(\ux)-\uH_+(\uy)) + q_-(\uH_-(\ux)-\uH_-(\uy)), $$ thus \begin{equation}\label{eq:D9.1} \udist^2(\ux,\uy) - q_-(\uH_-(\ux)-\uH_-(\uy)) = q_+(\uH_+(\ux)-\uH_+(\uy)). \end{equation} Since $q_- \le 0$, it follows from \eqref{eq:D9.1} that $q_+(\uH_+(\ux)-\uH_+(\uy)) \ge \udist^2(\ux,\uy)$. Moreover, $- q_-(\uH_-(\ux)-\uH_-(\uy))$ is bounded from above by $\lambda \udist^2(\ux,\uy)$ where $\lambda$ is the largest modulus an eigenvalue of $Q$ can have, so $q_+(\uH_+(\ux)-\uH_+(\uy)) \le (1+\lambda)\udist^2(\ux,\uy)$. Finally, since $\uH$ is injective, then $\uH_+$ is injective too. Therefore, the map $\uH_+$ is a bi-Lipschitz embedding of $(\uX,\udist)$ into $(E_+,\dist_{q_+})$ where $\dist_{q_+}(v_+,v_+'):=\sqrt{q_+(v_+-v_+')}$ for any $v_+, v_+' \in E_+$. This implies that $p_+$ is greater than or equal to the local Hausdorff dimension of $\uX$ which is equal to $\alpha$.\\ \textbf{\underline{Step 2}.} [$l=\alpha$] Set $\tilde{\mu} := (\Phi^{-1})_{\#}(\umu \measrestr \uX \backslash \{\uo\})$. Then $\tilde{\mu}$ is a Borel measure on $(0,+\infty)\times \uS$ equipped with $\dist_C$. We complete $(0,+\infty)\times \uS$ by adding the point $\uo$ corresponding to the tip of this metric cone. Let $\di t \otimes \nu_t$ be the disintegration of $\tilde{\mu}$ with respect to the first variable $t$ (we refer to \cite[2.5]{AmbrosioFuscoPallara} for the definition of disintegration of a measure). Since for any $\lambda,r>0$, we have $\umu(h_{\lambda}(B_r(\uo)))=\lambda^{\alpha} \umu(B_r(\uo))$ where $h_\lambda : (t,\usigma)\mapsto (\lambda t, \usigma)$, then $\di \nu_t=t^{\alpha-1}\nu_1$ for any $t>0$ and $\unu_1(\uS)=\alpha \omega_\alpha$. Let us write $\unu$ instead of $\unu_1$. \begin{comment} For any $t>0$, one has $\tilde{\mu}((0,t)\times \uS) = \int_0^t \nu_s(\uS) \di s$ and thus $\nu_t(\uS)=\alpha \omega_{\alpha}t^{\alpha-1}$ by differentiation. Moreover, for any Borel set $A \subset \uS$, a direct computation shows that $f_A:t\mapsto \tilde{\mu}((0,t) \times A)$ satisfies the differential equation $ (t^{-(\alpha-1)} y)' = (\alpha-1)t^{-\alpha}y$, so we have $f_A(t)=f_A(1)t^{2(\alpha-1)}$ for any $t>0$. 
Therefore, setting $\unu(A):=f_A(1)$ for any Borel set $A \subset \uS$, we get $\di t \otimes \nu_t = t^{\alpha-1} \di t \otimes \unu$ and $\unu(\uS)=\alpha \omega_\alpha$. \end{comment} \begin{claim} For any $\usigma \in \uS$ and $\uh \in \ucV:=\Span(\uh_1,\cdots,\uh_l)$, \begin{equation}\label{proK} \uh(\usigma) = \frac{\alpha}{\unu(\uS)} \int_{\uS} \cos(\udist_{\uS}(\usigma,\uphi)) \uh(\uphi) \di \unu(\uphi). \end{equation} \end{claim} \begin{proof} Take $\uh \in \ucV$ and $t>0$. Since $\uh_1,\cdots,\uh_l$ are locally $\uL$-harmonic, then for any $\ux \in \uX$, \begin{equation}\label{eq:13.56} \uh(\ux) = \int_{\uX} \up(\ux,\uy,t)\uh(\uy)\di\umu(\uy). \end{equation} Use the notation $\ux = \Phi(r,\usigma)$ and $\uy=\Phi(s,\uphi)$ and note that $\uh(\Phi(t,\usigma)) = r \uh(\usigma)$ and $\uh(\Phi(s,\uphi)) = s \uh(\uphi)$ thanks to \eqref{eq:conicalstructure}. Then \eqref{eq:13.56} writes \begin{equation}\label{eq:draft21} r \uh(\usigma)=\frac{1}{(4\pi t)^{\alpha/2}}\int_0^{+\infty} \int_{\uS} e^{\frac{-r^2-s^2+2rs\cos(\udist_{\uS}(\usigma,\phi))}{4t}}\uh(\uphi)s^\alpha\di s \di \unu(\uphi). \end{equation} Since $$ \frac{\di}{\di r} \left( e^{\frac{-r^2+2rs\cos(\udist_{\uS}(\usigma,\uphi))}{4t}}\right) = \left( - \frac{r}{2t} + \frac{s\cos(\udist_{\uS}(\usigma,\uphi)}{2t}\right)e^{\frac{-r^2+2rs\cos(\udist_{\uS}(\usigma,\uphi))}{4t}}, $$ differentiating \eqref{eq:draft21} with respect to $r$ and evaluating at $r=0$ gives $$ \uh(\usigma) = \frac{1}{(4\pi t)^{\alpha/2}} \int_0^{+\infty} e^{-\frac{s^2}{4t}}\frac{s^{\alpha+1}}{2t} \di s \int_{\uS} \cos(\udist_{\uS}(\usigma,\uphi)) \uh(\uphi) \di \unu(\uphi). $$ A direct computation using the change of variable $\xi=\frac{s^2}{4t}$ shows that $$ \frac{1}{(4\pi t)^{\alpha/2}} \int_0^{+\infty} e^{-\frac{s^2}{4t}}\frac{s^{\alpha+1}}{2t} \di s = \frac{1}{\omega_\alpha} = \frac{\alpha}{\unu(\uS)}\, . $$ \end{proof} Set $\cW:=\{\uh_{|S} \, : \, \uh \in \ucV\}$. Note that \eqref{eq:conicalstructure} implies that the restriction map $\ucV\rightarrow \cW$ is a bijection, hence $\dim \cW=l$. We introduce the operator $\mathcal{K}\colon L^2(\uS,\di\unu)\rightarrow L^2(\uS,\di\unu)$ defined by $$\mathcal{K}(f)(\usigma):= \frac{\alpha}{\unu(\uS)} \int_{\uS} \cos(\udist_{\uS}(\usigma,\uphi)) f(\uphi) \di \unu(\uphi) $$ for any $f \in L^2(\uS,\di\unu)$ and $\umu$-a.e.~$\usigma \in \uS$. Since for any $\usigma,\usigma' \in \uS$, \begin{equation}\label{expcos}\cos(\dist_S(\usigma,\usigma')=\uB(\usigma,\usigma')=\beta\left(\uH(\Phi(1,\usigma)),\uH(\Phi(1,\usigma'))\right)=\sum_{i,j} B(x_i,x_j) \uh_i(\usigma)\uh_j(\usigma'),\end{equation} then the image of $\mathcal{K}$ is contained in $\cW$ and according to \eqref{proK}, we have $$\mathcal{K}f=f \qquad \text{for every $f \in \cW$.}$$ Hence $\mathcal{K}$ is the orthogonal projection onto $\cW$ and if $\underline{k}_1,\ldots, \underline{k}_l$ form an orthonormal basis of $\cW$ for the $L^2(\uS,\unu)$ scalar product, then for any $f\in L^2(\uS,\di\nu):$ $$\mathcal{K}(f)(\usigma)= \sum_{i=1}^l \underline{k}_i(\usigma) \int_{\uS} \underline{k}_i(\uphi) f(\uphi) \di \unu(\uphi) $$ This implies \begin{equation}\label{eq:0409} \frac{\alpha}{\unu(\uS)} \cos(\udist_{\uS}(\usigma,\uphi)) = \sum_{i=1}^l \underline{k}_i(\usigma) \underline{k}_i(\uphi) \end{equation} for $\unu\otimes \unu$-a.e.~$(\usigma,\uphi) \in \uS \times \uS$. 
Since for any $i$, the function $\underline{k}_i$ admits a continuous representative -- still denoted by $\underline{k}_i$ -- defined by $$ \underline{k}_i (\usigma) = \int_{\uS} \cos(\udist_{\uS}(\usigma,\uphi)) \uh(\uphi) \di \unu(\uphi) $$ for any $\usigma \in \uS$, then \eqref{eq:0409} holds for all $(\usigma,\uphi) \in \uS \times \uS$. In particular, we can take $\usigma = \uphi$ in \eqref{eq:0409} to get $$ \frac{\alpha}{\unu(\uS)} = \sum_{i=1}^l \underline{k}_i(\usigma)^2. $$ Integrating over $\uS$ with respect to $\unu$ gives $\alpha = l.$\\ \subsection{Conclusion} From the previous subsections, we get that $H$ is an isometric embedding of $(X,\dist)$ into $(\setR^l,\dist_Q)$ or, as explained at the beginning of this section, into $(\setR^l,\dist_e)$. Therefore, $H(X)$ equipped with the restriction of $\dist_e$ is geodesic. Minimizing geodesics in $(\setR^l,\dist_e)$ being straight lines, this implies that $H(X)$ is convex. Being also closed, $H(X)$ is equal to its closed convex hull that is equal to $\setR^l$ by the proof of Claim \ref{claim}, hence Theorem \ref{th:main} is proved. \section{Almost rigidity result for the heat kernel} In this section, we show how our rigidity result (Theorem \ref{th:main}) provides an almost rigidity result (Theorem \ref{th:almostrigidity}). We fix a positive constant $T>0$, a positive integer $n$, and we recall that $\setB^n_r$ stands for an Euclidean ball in $\setR^n$ with radius $r>0$ (where this ball is centered as no importance), and $\dist_{GH}$ for the Gromov-Hausdorff distance. \begin{comment} \begin{theorem}\label{th:almostrigidity} For any $\upepsilon>0$, there exists $\updelta=\updelta(\upepsilon,n)>0$ such that if $(X,\dist,\mu)$ is a complete length metric measure space endowed with a symmetric Dirichlet form $\cE$ admitting a heat kernel $p$ such that \begin{equation}\label{eq:almostheatkernel} (1-\updelta) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1-\updelta)t}}\le p(x,y,t) \le (1+\updelta) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1+\updelta)t}}, \end{equation} for all $x,y \in X$ and $t \in (0,T]$, then for any $x\in X$ and $r\in (0, \sqrt{T}]$, $$\dist_{\mathrm GH}\left( B_r(x), \setB^n_r\right)< \upepsilon r.$$\end{theorem} Note that using the intrinsic Reifenberg theorem of Cheeger and Colding \cite[Theorem A.1.1.]{CheegerColding}, we get the following topological consequence. \begin{corollary}There is some $\updelta_n>0$ such that if $(X,\dist,\mu)$ is a complete length metric measure space endowed with a symmetric Dirichlet form $\cE$ admitting a heat kernel $p$ satisfying $$ (1-\updelta_n) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1-\updelta_n) t}}\le p(x,y,t) \le (1+\updelta_n) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4(1+\updelta_n) t}} \qquad \forall x,y \in X, \, \forall t\in (0,T], $$ then any ball $B\subset X$ with radius $r \in (0,\sqrt{T}]$ is homeomorphic to $\setB^n_r$. 
\end{corollary} \end{comment} We begin with the following lemma: \begin{lemma}\label{lem:voleu} If $(X,\dist,\mu)$ is a complete metric measure space endowed with a symmetric Dirichlet form $\cE$ admitting a heat kernel $p$ such that for some $\gamma >1$, \begin{equation}\label{eq:estheatkernel} \frac{\upgamma ^{-1}}{(4 \pi t)^{n/2}} e^{-\upgamma\frac{\dist^2(x,y)}{4t}}\le p(x,y,t) \le \frac{\upgamma }{(4 \pi t)^{n/2}} e^{-\frac{\dist^2(x,y)}{4\upgamma t}} \end{equation} for all $x, y \in X$ and $t \in (0,T]$, then there exists positive constants $c(n,\upgamma), C(n,\upgamma)$ such that for any $x\in X$ and $r\le \sqrt{T}$, $$c(n,\upgamma)\, r^n\le \mu(B_r(x)))\le C(n,\upgamma)\, r^n.$$ \end{lemma} \begin{remark} The upper bound is quite classical, the novelty is the lower bound which was nonetheless known for stochastically complete spaces (see \cite[Th.~2.11]{Grigor'yan}). \end{remark} \proof For any $x \in X$ and $r>0$, integrating the lower bound in \eqref{eq:estheatkernel} gives $$e^{-\upgamma\frac{r^2}{4t}} \mu\left(B_r(x)) \right)\le \int_{B_r(x)} e^{-\upgamma\frac{\dist^2(x,y)}{4t}}\di\mu(y)\le \gamma (4 \pi t)^{n/2}$$ hence $\mu\left(B_r(x)) \right)\le \gamma (4 \pi t)^{n/2} e^{\upgamma\frac{r^2}{4t}}$ for any $t \in (0,T]$. Consequently, when $r\le \sqrt{T}$, choosing $t=r^2$ provides \begin{equation}\label{eq:uppervolume} \mu\left(B_r(x) \right)\le e^{\frac \upgamma 4} \upgamma(4 \pi )^{n/2}\, r^n, \end{equation} while when $r\ge \sqrt{T}$, choosing $t=T$ gives \begin{equation}\label{eq:uppervolumeT} \mu\left(B_r(x) \right)\le \upgamma\,(4 \pi T)^{n/2}\, e^{\upgamma\frac{r^2}{4T}}\,. \end{equation} Note that \eqref{eq:uppervolume} is the desired upper bound. Take $t \in (0,T/2]$. Combining \eqref{eq:estheatkernel} with the Chapman-Kolmogrov formula, we get $$ \frac{\upgamma^{-1}}{(8\pi t)^{n/2}} \le p(x,x,2t) = \int_X p(x,y,t)^2 \di \mu(y) \le \frac{\upgamma^2}{(4 \pi t)^n} \int_X e^{-\frac{\dist^2(x,y)}{2 \gamma t}} \di \mu(y) $$ hence \begin{equation}\label{eq:est4} \upgamma^{-3}\,(2\pi)^{n/2}\, t^{n/2} \le \int_X e^{-\frac{\dist^2(x,y)}{2\upgamma t}}\di\mu(y) \le \mu(B_r(x))+\int_{X\setminus B_r(x)} e^{-\frac{\dist^2(x,y)}{2\upgamma t}}\di\mu(y). \end{equation} From now on, assume $r \le \sqrt{T}$. By Cavalieri's principle and the estimates \eqref{eq:uppervolume} and \eqref{eq:uppervolumeT}, we get \begin{align}\label{eq:est1} & \int_{X\setminus B_r(x)} e^{-\frac{\dist^2(x,y)}{2\upgamma t}}\di\mu(y)=\int_r^{+\infty} e^{-\frac{\rho^2}{2\upgamma t}}\frac{\rho}{\upgamma t}\mu(B_\rho(x))\di\rho \nonumber\\ \le & \, \, (4 \pi )^{n/2}e^{\frac{\upgamma}{4}} \int_r^{\sqrt{T}} e^{-\frac{\rho^2}{2\upgamma t}}\frac{\rho}{ t} \, \rho^n \di \rho+(4 \pi T)^{n/2}\int_{\sqrt{T}}^{+\infty} e^{-\frac{\rho^2}{2\upgamma t}}\frac{\rho}{ t}\, e^{\upgamma\frac{\rho^2}{4T}}\di\rho.\end{align} A direct computation shows that for any $n \in \setN$, there exists $C_0>0$ depending only on $n$ such that for any $A\ge 1$, $$ \int_A^{+\infty} e^{-\frac{\xi^2}{2}} \xi^{n+1} \di \xi \le C_0 A^n e^{-\frac{A^2}{2}}. 
$$ Therefore, using the change of variable $\xi= \rho/\sqrt{\upgamma t}$ to get $$ \int_r^{\sqrt{T}} e^{-\frac{\rho^2}{2\upgamma t}}\frac{\rho}{ t} \rho^n \di \rho\le \int_r^{+\infty} e^{-\frac{\rho^2}{2\upgamma t}}\frac{\rho}{ t}\, \rho^n \di\rho = \upgamma^{\frac n2+1} t ^{n/2} \int_{r/\sqrt{\upgamma t}}^{+\infty} e^{-\frac{\xi^2}{2}}\xi^{n+1} \di \xi, $$ we obtain that $r\ge \sqrt{\upgamma t}$ implies \begin{equation}\label{eq:est2} \int_r^{\sqrt{T}} e^{-\frac{\rho^2}{2\upgamma t}}\frac{\rho}{\ t}\, \rho^n \di \rho\le C_0 \upgamma r^n e^{-\frac{r^2}{2\upgamma t}}. \end{equation} To bound the second term in \eqref{eq:est1}, assume $t\le T/\upgamma^2$. Then a straightforward computation shows that $-\frac{\rho^2}{2\gamma t} + \frac{\gamma \rho^2}{4T} \le - \frac{\rho^2}{4 \gamma t}$ holds, thus \begin{equation}\label{eq:est3} \int_{\sqrt{T}}^{+\infty} e^{-\frac{\rho^2}{2\upgamma t}}\frac{\rho}{ t}\, e^{\upgamma\frac{\rho^2}{4T}}\di \rho \le \int_{\sqrt{T}}^{+\infty} e^{-\frac{\rho^2}{4\upgamma t}}\frac{\rho}{ t}\, \di \rho = 2\upgamma e^{-\frac{T}{4\upgamma t}}. \end{equation} Combining \eqref{eq:est1}, \eqref{eq:est2} and \eqref{eq:est3}, we get existence of a constant $C>0$ depending only on $n$ such that if $r^2 \ge \gamma^2 t$ (this implies both $r\ge \sqrt{\gamma t}$ and $t \le T/\gamma^2$), then $$\int_{X\setminus B_r(x)} e^{-\frac{\dist^2(x,y)}{2\upgamma t}}\di\mu(y)\le C\left(\upgamma e^{\frac{\upgamma}{4}}r^n e^{-\frac{r^2}{2\upgamma t}}+\upgamma T^{n/2} e^{-\frac{T}{4\upgamma t}}\right).$$ Then \eqref{eq:est4} implies \begin{equation}\label{eq:lem} \upgamma^{-3}\,(2\pi)^{n/2}\, t^{n/2} \le \mu(B_r(x))+C\left(\upgamma e^{\frac{\upgamma}{4}}r^n e^{-\frac{r^2}{2\upgamma t}}+\upgamma T^{n/2} e^{-\frac{T}{4\upgamma t}}\right) \end{equation} for any $t \in (0,r^2/\gamma^2)$, what can be rewritten as $$ c'(n,\gamma)t^{n/2}\le \mu(B_r(x))+Ct^{n/2}\left(\upgamma e^{\frac{\upgamma}{4}} F(r^2/t)+\upgamma G(T/t)\right) $$ where $c'(n,\gamma):=\upgamma^{-3}\,(2\pi)^{n/2}$ and $F(s):=s^{n/2}e^{-\frac{s}{2\gamma}}$, $G(s):=s^{n/2}e^{-\frac{s}{4\gamma}}$ for any $s\ge 0$. The function $G$ is decreasing on $(2 n\gamma,+\infty)$ so if $r^2/t \ge 2 n\gamma$, since $r^2 \le T$, we get $G(T/t) \le G(r^2/t)$. As $\lim\limits_{s \to +\infty}F(s)=\lim\limits_{s \to +\infty} G(s)=0$, then there exists $s(n,\gamma)>0$ such that if $s\ge s(n,\gamma)$, $$ C\left(\upgamma e^{\frac{\upgamma}{4}} F(s)+\upgamma G(s)\right) \le \frac{c'(n,\gamma)}{2}\, \cdot $$ Then for any $t >0$ such that $r^2/t \ge \max(\gamma^2,2 n\gamma,s(n,\gamma))=:\theta(n,\gamma)$, we get $$ \frac{c'(n,\gamma)}{2}t^{n/2} \le \mu(B_r(x)). $$ Choosing $t=t(r)$ such that $\theta(n,\gamma) t \le r^2 \le 2 \theta(n,\gamma)t$, we get $$ \frac{c'(n,\gamma)}{2^{n/2+1}\theta(n,\gamma)^{n/2}} r^{n} \le \mu(B_r(x)). 
$$ \endproof We shall also need the next proposition. \begin{proposition}\label{prop:5.3} Let $(X,\dist,\mu)$ be a metric measure space satisfying the local doubling condition, namely there exist $r_o>0$ and $C_D>0$ such that $\mu(B_{2r}) \le C_D \mu(B_r)$ for any $r \in (0,r_o)$, and such that for some $\alpha >0$, we have $$\int_X \frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\dist^2(x,z)}{4t}}\frac{1}{(4 \pi s)^{\alpha/2}} e^{-\frac{\dist^2(z,y)}{4s}} \di \mu(z)=\frac{1}{(4 \pi (t+s))^{\alpha/2}} e^{-\frac{\dist^2(x,y)}{4(t+s)}}$$ for all $x,y \in X$ and $t,s \in (0,T)$. Then there exists a symmetric Dirichlet form $\cE$ on $(X,\dist,\mu)$ admitting an $\alpha$-Euclidean heat kernel. \end{proposition} \proof By \cite[Lem.~3.9]{Carron}, the space $(X,\dist,\mu)$ satisfies $ \mu(B_R(x))/\mu(B_r(x)) \le c_o e^{c_1 R/r}$ for any $x \in X$, $r\in(0,r_o)$ and $R\ge r$, where $c_o$ and $c_1$ depend only on $C_D$. For any $C>0$ and $z \in X$, applying \eqref{eq:Fub} with $\phi(\lambda)=\lambda^2 e^{-C\lambda^2}$ and $g(y)=\dist(z,y)$ yields \begin{align}\label{eq:exp} \int_X e^{-C\dist^2(z,y)}\di \mu(y) & = \int_0^{+\infty} 2 C \lambda e^{-C\lambda^2} \mu(B_\lambda(z))\di \lambda \nonumber \\ & \le \left(\int_0^r 2 C \lambda e^{-C\lambda^2} \di \lambda + \int_r^{+\infty} 2 C \lambda c_o e^{-C\lambda^2 + c_1 \lambda/r} \di \lambda\right)\mu(B_r(z)) \nonumber \\ & \le c_2 \mu(B_r(z)) \end{align} for any $r \in(0,r_o)$, where $c_2$ depends only on $C_D$ and $C$. For any $x,y\in X$ and $t\in \setC\setminus \{0\}$, we set $$\bP_\alpha(x,y,t):=\frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\dist^2(x,y)}{4t}}\, .$$ Take $x,y \in X$ and $t\in(0,T)$. By assumption, the identity $$\int_X \bP_\alpha(x,z,t)e^{-\frac{\dist^2(z,y)}{4s}}\di \mu(z)= \left(\frac{s}{t+s}\right)^{\frac \alpha2}e^{-\frac{\dist^2(x,y)}{4(t+s)}}$$ is valid for any $s \in (0,T-t)$. But both expressions are holomorphic in $s\in \setC_+:=\{z\in \setC \, : \, \mathrm{Re}\, z>0\}$ (the left-hand side can be proved holomorphic by a suitable application of the dominated convergence theorem using \eqref{eq:exp}), thus the identity holds for any $s\in \setC_+$. Freezing $s$ and letting $t$ vary, we can apply the same reasoning to get the identity for any $s, t \in \setC_+$. In particular, we obtain: $$\int_X \bP_\alpha(x,z,t) \bP_\alpha(z,y,s)\di\mu(z)= \bP_\alpha(x,y,t+s) \qquad \forall s,t >0.$$ Thus for any $x \in X$ and $t>0$, \begin{align*} \int_X \bP_\alpha(x,z,t) \di \mu(z) & = (4 \pi t)^{\alpha/2} \int_X \left( \frac{1}{(4 \pi t)^{\alpha/2}} e^{-\frac{\dist^2(x,z)}{8t}} \right)^2 \di \mu(z)\\ & = (16 \pi t)^{\alpha/2} \int_X \bP_\alpha(x,z,2t)^2 \di \mu(z)\\ & =(16 \pi t)^{\alpha/2} \bP_\alpha(x,x,4t) = 1.
\end{align*} This easily implies that for any $f\in L^2(X,\mu)$, if $f_t(x)=\int_X \bP_\alpha(x,z,t)f(z) \di\mu(z)$, then $$\lim_{t\to 0+} \|f_t-f\|_{L^2}=0.$$ Then by a standard procedure described for instance in \cite[Section 2]{Grigor'yan}, we can build a symmetric Dirichlet form whose heat kernel is $\bP_\alpha$. \endproof We can now prove Theorem \ref{th:almostrigidity}. \proof The metric spaces considered in this proof are all complete. Assume that the result is not true. Then there exists some $\upepsilon>0$ such that for any $\updelta>0$ we can find:\begin{itemize} \item $T_\updelta>0$, \item a metric measure space $(X_\updelta,\dist_\updelta, \mu_\updelta)$ endowed with a symmetric Dirichlet form $\cE_\updelta$ admitting a heat kernel $p_\updelta$ satisfying $$(1-\updelta) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist_\updelta^2(x,y)}{4(1-\updelta)t}}\le p_\updelta(x,y,t) \le (1+\updelta) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist_\updelta^2(x,y)}{4(1+\updelta) t}} $$ for any $x,y \in X_\updelta$ and $t\in (0,T_\updelta]$, \item $x_\updelta\in X_\updelta$ and $r_\updelta\in (0,\sqrt{T_\updelta}]$ such that $\dist_{\mathrm GH}\left( B_{r_\updelta}(x_\updelta), \setB^n_{r_\updelta}\right) \ge \upepsilon r_\updelta$. \end{itemize} By a rescaling of the distance and of the measure, we can assume that $r_\updelta=1$ and $T_\updelta=1$. It follows from Lemma \ref{lem:voleu} that the set of pointed metric measure spaces $$\left\{(X_\updelta,\dist_\updelta,\mu_\updelta,x_\updelta)\right\}_{\updelta\in (0,1/2)}$$ satisfies a uniform local doubling condition, thus it is precompact for the pointed measured Gromov-Hausdorff topology. Therefore, we can consider a sequence $\{\updelta_\ell\}_\ell \subset (0,1/2)$ with $\updelta_\ell \to 0$ and a sequence of pointed metric measure spaces $$\left\{(X_\ell,\dist_\ell,\mu_\ell,x_\ell)\right\}_{\ell}$$ converging to some pointed metric measure space $(X_\infty,\dist_\infty,\mu_\infty,x_\infty)$ such that for any $\ell$: \begin{itemize} \item the space $(X_\ell,\dist_\ell,\mu_\ell)$ is endowed with a symmetric Dirichlet form $\cE_\ell$ admitting a heat kernel $p_\ell$ satisfying \begin{equation}\label{eq:14.17} (1-\updelta_\ell) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist_\ell^2(x,y)}{4(1-\updelta_\ell)t}}\le p_\ell(x,y,t) \le (1+\updelta_\ell) \frac{1}{(4 \pi t)^{n/2}} e^{-\frac{\dist_\ell^2(x,y)}{4(1+\updelta_\ell)t}}\end{equation} for any $x,y \in X_\ell$ and $t\in (0,1]$, \item $\dist_{\mathrm GH}\left( B_1(x_\ell), \setB^n_1\right) \ge \upepsilon. $ \end{itemize} In particular, letting $\ell$ tend to $+\infty$ gives: \begin{equation}\label{disHG}\dist_{\mathrm GH}\left( B_1(x_\infty), \setB^n_1\right) \ge \upepsilon.\end{equation} Since for any $\ell$ we have $$\int_{X_\ell}p_\ell(x,z,t)p_\ell(z,y,s)\di\mu_\ell(z)=p_\ell(x,y,t+s)$$ for all $x,y \in X_\ell$ and $t,s>0$, we deduce from \eqref{eq:14.17} that when $t+s<1$, $$\frac{(1-\updelta_\ell)^{\frac n2+1}}{(1+\updelta_\ell)^{n+1}}\ \bP_n\left(x,y,\frac{1-\updelta_\ell}{1+\updelta_\ell}(t+s)\right)\le \int_{X_\ell}\bP_n(x,z,t)\bP_n(z,y,s)\di\mu_\ell(z)$$ and $$\int_{X_\ell}\bP_n(x,z,t)\bP_n(z,y,s)\di\mu_\ell(z)\le \frac{(1+\updelta_\ell)^{\frac n2+1}}{(1-\updelta_\ell)^{n+1}}\ \bP_n\left(x,y,\frac{1+\updelta_\ell}{1-\updelta_\ell}(t+s)\right).$$ From this, we obtain for any $x,y\in X_\infty$ and any $t,s>0$ with $t+s<1$, $$\int_{X_\infty}\bP_n(x,z,t)\bP_n(z,y,s)\di\mu_\infty(z)= \bP_n(x,y,t+s).$$ Then Proposition \ref{prop:5.3} and Theorem \ref{th:main} imply that $(X_\infty,\dist_\infty)$ is isometric to $(\setR^n,\dist_e)$.
But this is in contradiction with \eqref{disHG}. \endproof \section{A new proof of Colding's almost rigidity theorem} In this section, we show how our almost rigidity result (Theorem \ref{th:almostrigidity}) can be used to give an alternative proof of the almost rigidity theorem for the volume of Riemannian manifolds with non-negative Ricci curvature (Theorem \ref{th:Colding}). Here again $n$ is a fixed positive integer and $\mathbb{B}_r^n$ is a Euclidean ball in $\setR^n$ with radius $r>0$. We recall that whenever $(M^n,g)$ has non-negative Ricci curvature, the Bishop-Gromov comparison theorem states that the function $r\mapsto \omega_n^{-1} r^{-n}\vol (B_r(x))$ is non-increasing for any $x \in M$ and the quantity \begin{equation}\label{eq:theta} \uptheta=\lim_{r\to +\infty} \frac{\vol (B_r(x))}{\omega_n\, r^n} \end{equation} does not depend on $x$. When $\uptheta>0$, we say that $(M^n,g)$ has Euclidean volume growth, in which case one has \begin{equation}\label{eq:volumelowerbound} \vol (B_r(x)) \ge \uptheta \omega_n r^n \end{equation} for any $x \in M$ and $r>0$. Note that a manifold satisfying the volume assumption of Theorem \ref{th:Colding} has Euclidean volume growth with $\uptheta \ge 1-\updelta$. Our proof of Theorem \ref{th:Colding} is a direct application of Theorem \ref{th:almostrigidity} together with the following heat kernel estimate. \begin{theorem}\label{th:estimate} There exists a function $\upgamma\colon (0,1]\to [1,\infty)$ satisfying $\lim_{\uptheta\to 1^-} \upgamma(\uptheta)=1$ such that whenever $(M^n,g)$ is a complete Riemannian manifold with non-negative Ricci curvature and Euclidean volume growth, then the heat kernel $p$ of $(M^n,g)$ satisfies $$\frac{1}{(4\pi t)^{\frac n2}}e^{-\frac{\dist^2(x,y)}{4t}}\le p(x,y,t)\le \upgamma(\uptheta) \frac{1}{(4\pi t)^{\frac n2}}e^{-\frac{\dist^2(x,y)}{4\upgamma(\uptheta) t}} $$ for all $x,y \in M$ and $t>0$, where $\uptheta$ is given by \eqref{eq:theta}. \end{theorem} \begin{remark} Our proof of the above heat kernel upper bound follows the arguments of P. Li, L-F. Tam and J. Wang \cite{LiTamWang}. \end{remark} \begin{proof} The lower bound is the comparison theorem of J.~Cheeger and S-T.~Yau \cite{CheegerYau}: for any $t>0$ and $x,y\in M$, we have \begin{equation}\label{eq:CheegerYau}\bP_n(x,y,t)\le p(x,y,t)\end{equation} where $\displaystyle \bP_n(x,y,t)=(4\pi t)^{-\frac n2}e^{-\frac{\dist^2(x,y)}{4t}}.$ Consequently we only need to prove the upper bound. Take $x,y \in M$ and $t>0$. We shall need the following estimates of P. Li and S-T. Yau (see \cite[Formula (2.1)]{LiTamWang}): for any $r,\tau>0$, \begin{equation}\label{eq:LY1}\int_{B_r(x)}p(x,z,\tau)\di\vol(z)\ge \int_{\mathbb{B}^n_r}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di \xi \end{equation} and \begin{equation}\label{eq:LY2} \int_{M\setminus B_r(x)}p(x,z,\tau)\di \vol(z)\le \int_{\setR^n\setminus \mathbb{B}_r^n}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di\xi. \end{equation} For $\delta>0$ to be chosen later, set $r:=(1+\delta)^{-1}\dist(x,y)$ and $\tau := (1+\delta)t$.
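For later use, let us record some elementary consequences of this choice of $r$ and $\tau$; this is a short worked check (added here for the reader's convenience) of identities used repeatedly below:
\[
\tau-t=\delta t,\qquad \dist(x,y)=(1+\delta)r,\qquad \dist(x,y)-\delta r=r,\qquad \frac{r^2}{4\tau}=\frac{\dist^2(x,y)}{4(1+\delta)^3t}.
\]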
Note that $B_{r}(x) \cap B_{\delta r}(y) = \emptyset$, since $r+\delta r=\dist(x,y)$. By the Harnack inequality of P.~Li and S-T.~Yau \cite{LiYau}, we have \begin{equation}\label{LY} p(x,y,t)\le \left(\frac{\tau}{t}\right)^{\frac n2}\, e^{\frac{\dist(z,y)^2}{4(\tau-t)}} p(x,z,\tau)\end{equation} for every $z\in M$, so that averaging over the ball $B_{\delta r}(y)$ gives \begin{align}\label{eq:Colding111}p(x,y,t)&\le e^{\frac{\delta^2r^2}{4(\tau-t)} } \left(\frac{\tau}{t}\right)^{\frac n2}\fint_{B_{\delta r}(y)}p(x,z,\tau)\di\vol(z) \nonumber \\ &\le e^{\frac{\delta \dist^2(x,y)}{4(1+\delta)^2t}} \left(\frac{\tau}{t}\right)^{\frac n2}\fint_{B_{\delta r}(y)}p(x,z,\tau)\di\vol(z). \end{align} Now \begin{align*} \int_{B_{\delta r}(y)}p(x,z,\tau)\di \vol(z)&=\int_{M\setminus B_r(x)}p(x,z,\tau)\di \vol(z)- \int_{M\setminus (B_r(x)\cup B_{\delta r}(y))} p(x,z,\tau)\di \vol(z)\\ &\le \int_{\setR^n\setminus \mathbb{B}_r^n}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di \xi - \int_{M\setminus (B_r(x)\cup B_{\delta r}(y))}\bP_n(x,z,\tau)\di \vol(z) \end{align*} thanks to \eqref{eq:LY2} and \eqref{eq:CheegerYau}. Continuing, \begin{align*} \int_{B_{\delta r}(y)}p(x,z,\tau)\di \vol(z) \le \int_{\setR^n\setminus \setB_r^n}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di \xi & - \int_{M\setminus B_r(x)}\bP_n(x,z,\tau)\di \vol(z)\\ &+ \int_{B_{\delta r}(y)}\bP_n(x,z,\tau)\di \vol(z)\\ \le \int_{\setR^n\setminus \setB_r^n}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di \xi & - \int_{M\setminus B_r(x)}\bP_n(x,z,\tau)\di \vol(z)\\ &+\vol(B_{\delta r}(y)) \frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{(\dist(x,y)-\delta r)^2}{4\tau}}. \end{align*} By Cavalieri's principle and the annulus version of the Bishop-Gromov comparison (which gives $\vol(B_s(x))-\vol(B_r(x)) \ge \uptheta\,\omega_n(s^n-r^n)$ for any $s \ge r$), we have \begin{align*} \int_{M\setminus B_r(x)}\bP_n(x,z,\tau)\di \vol(z) & =\int_{r}^{+\infty} \frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{s^2}{4\tau}}\frac{s}{2\tau}\left(\vol(B_s(x))-\vol(B_r(x))\right)\di s\\ & \ge \uptheta \int_{r}^{+\infty} \frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{s^2}{4\tau}}\frac{s}{2\tau}\, \omega_n (s^n-r^n) \di s = \uptheta \int_{\setR^n\setminus \setB_r^n}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di \xi, \end{align*} hence \begin{equation}\label{eq:Colding222} \int_{B_{\delta r}(y)}p(x,z,\tau)\di \vol(z) \le (1-\uptheta)\int_{\setR^n\setminus \setB_r^n}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di \xi +\vol(B_{\delta r}(y)) \frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{(\dist(x,y)-\delta r)^2}{4\tau}}. \end{equation} As pointed out in \cite[Formula (2.6)]{LiTamWang}, direct computations show that there exists a constant $C=C(n)>0$ such that $$ \int_{\setR^n\setminus \setB_r^n}\frac{1}{(4\pi \tau)^{\frac n2}}e^{-\frac{\|\xi\|^2}{4\tau}}\di \xi \le C\left(1+\left(\frac{r}{\sqrt{4\pi\tau}}\right)^n\right)e^{-\frac{r^2}{4\tau}}. $$ This together with \eqref{eq:Colding222} and \eqref{eq:Colding111} yields \begin{align*} p(x,y,t)&\le (1-\uptheta) C\left(1+\left(\frac{r}{\sqrt{4\pi\tau}}\right)^n\right)e^{-\frac{r^2}{4\tau}}e^{\frac{\delta \dist^2(x,y)}{4(1+\delta)^2t}} \left(\frac{\tau}{t}\right)^{\frac n2}\frac{1}{\vol (B_{\delta r}(y))}\\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad +\frac{1}{(4\pi t)^{\frac n2}}e^{-\frac{(\dist(x,y)-\delta r)^2}{4\tau}}e^{\frac{\delta \dist^2(x,y)}{4(1+\delta)^2t}} \, .
\end{align*} It is easily checked that $ -\frac{r^2}{4\tau} =-\frac{(\dist(x,y)-\delta r)^2}{4\tau} = - \frac{\dist^2(x,y)}{4(1+\delta)^3 t}$, hence \begin{align*} p(x,y,t)&\le\left[(1-\uptheta) C\left(1+\left(\frac{r}{\sqrt{4\pi\tau}}\right)^n\right)\left(1+\delta\right)^{\frac n2}\underbrace{\frac{1}{\vol (B_{\delta r}(y))}}_{\le 1/(\uptheta \omega_n \delta^n r^n)}+\frac{1}{(4\pi t)^{\frac n2}}\right]e^{-\frac{(1-\delta-\delta^2)\dist^2(x,y)}{4(1+\delta)^3t}}\\ & \le\left[(\uptheta^{-1}-1) C\left(1+\left(\frac{r}{\sqrt{4\pi\tau}}\right)^n\right)\frac{(4\pi \tau)^{\frac n2}}{(4\pi t)^{\frac n2}}\frac{1}{\omega_n \delta^n r^n}+\frac{1}{(4\pi t)^{\frac n2}}\right]e^{-\frac{(1-\delta-\delta^2)\dist^2(x,y)}{4(1+\delta)^3t}}\\ & = \left[(\uptheta^{-1}-1)\left(1+\left(\frac{r}{\sqrt{4\pi\tau}}\right)^n\right) \frac{C}{\omega_n}\frac{(4\pi \tau)^{\frac n2}}{\delta^n r^n} +1\right]\frac{1}{(4\pi t)^{\frac n2}}e^{-\frac{(1-\delta-\delta^2)\dist^2(x,y)}{4(1+\delta)^3t}} \, . \end{align*} Now we distinguish two cases. According to \cite[Formula (2.4)]{LiTamWang}, if $\dist(x,y)\le \delta \sqrt{t}$, then \begin{equation}\label{esti1}p(x,y,t)\le \frac{1}{\uptheta} \frac{1}{(4\pi t)^{\frac n2}} e^{-\frac{\dist^2(x,y)}{4t}} e^{\frac{\delta^2}{4}}.\end{equation} If $\dist(x,y)\ge \delta \sqrt{t}$, then $\frac{r}{\sqrt{\tau}}\ge \frac{\delta}{(1+\delta)^{2}}$, thus \[ \left(1+\left(\frac{r}{\sqrt{4\pi\tau}}\right)^n\right) \frac{(4\pi \tau)^{\frac n2}}{\delta^n r^n} = \left( \frac{(4\pi \tau)^{\frac n2}}{r^n} + 1 \right)\delta^{-n} \le (4\pi)^{\frac n2}\left(\frac{\delta+ 1}{\delta}\right)^{2n}+\left(\frac{1}{\delta}\right)^n. \] Therefore, if $\delta<1/2$, we get $$\left(1+\left(\frac{r}{\sqrt{4\pi\tau}}\right)^n\right) \frac{(4\pi \tau)^{\frac n2}}{\delta^n r^n} \le C'\delta^{-2n}$$ where $C'$ depends only on $n$, which yields $$ p(x,y,t) \le \left[(\uptheta^{-1}-1)\Lambda \delta^{-2n}+1\right]\frac{1}{(4\pi t)^{\frac n2}}e^{-\frac{(1-\delta-\delta^2)\dist^2(x,y)}{4(1+\delta)^3t}}, $$ where $\Lambda:=CC'/\omega_n$ depends only on $n$. Now we choose \begin{equation}\label{eq:delta} \delta=\delta(\uptheta):=\min\left\{ \frac 12\, , \left(\left(\uptheta^{-1}-1\right)\Lambda\right)^{\frac{1}{2n+1}}\right\},\end{equation} so that when $\left(\left(\uptheta^{-1}-1\right)\Lambda\right)^{\frac{1}{2n+1}} < 1/2$ then $$\delta(\uptheta)= \left(\uptheta^{-1}-1\right)\Lambda\delta(\uptheta)^{-2n},$$ hence \begin{equation}\label{eq:end} p(x,y,t)\le \frac{\delta(\uptheta)+1}{(4\pi t)^{\frac n2}}e^{-\frac{(1-\delta(\uptheta)-\delta(\uptheta)^2)\dist^2(x,y)}{4(1+\delta(\uptheta))^3t}},\end{equation} and when $\left(\left(\uptheta^{-1}-1\right)\Lambda\right)^{\frac{1}{2n+1}} \ge 1/2$ -- which corresponds to the case $\uptheta \le 1-\eps_n$ with $\eps_n:=(1+2^{2n+1}\Lambda)^{-1}$ depending only on $n$ -- then $\delta(\uptheta)=1/2$ implies \begin{equation}\label{eq:end2} p(x,y,t)\le \frac{(\uptheta^{-1} - 1)\Lambda 2^{2n} + 1}{(4\pi t)^{\frac n2}}e^{-\frac{(1-\delta(\uptheta)-\delta(\uptheta)^2)\dist^2(x,y)}{4(1+\delta(\uptheta))^3t}}\,.\end{equation} Note that $\delta(\uptheta) \to 0$ when $\uptheta \to 1$. Therefore, setting $F(\uptheta):=(1+\delta(\uptheta))^3/(1-\delta(\uptheta)-\delta(\uptheta)^2)$ and \[ \upgamma(\uptheta):= \begin{cases} \max\left(\uptheta^{-1}e^{\frac{\delta(\uptheta)^2}{4}},\ 1+\delta(\uptheta),\ F(\uptheta)\right) & \text{if $1-\eps_n<\uptheta<1$,}\\ \max\left(\uptheta^{-1}e^{\frac{\delta(\uptheta)^2}{4}},\ 2^{2n}\Lambda (\uptheta^{-1} - 1)+1,\ F(\uptheta)\right) & \text{if $0<\uptheta \le 1-\eps_n$,} \end{cases} \] where the first term in each maximum accounts for the constant in \eqref{esti1}, which covers the case $\dist(x,y)\le \delta\sqrt{t}$, we get the result.
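For completeness, let us also record why $\upgamma$ has the limit required in the statement (a short check, added here): since $\delta(\uptheta)\to 0$ and $\uptheta^{-1}-1\to 0$ as $\uptheta \to 1^-$, we get
\[
\lim_{\uptheta\to 1^-}\uptheta^{-1}e^{\frac{\delta(\uptheta)^2}{4}}=1,\qquad \lim_{\uptheta\to 1^-}\bigl(1+\delta(\uptheta)\bigr)=1,\qquad \lim_{\uptheta\to 1^-}F(\uptheta)=1,
\]
and only the first case in the definition of $\upgamma$ is relevant for $\uptheta$ close to $1$, so that $\lim_{\uptheta\to 1^-}\upgamma(\uptheta)=1$.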
\end{proof} For completeness, let us provide a short proof of Theorem \ref{th:Colding}. \begin{proof} Take $\upepsilon>0$. By Theorem \ref{th:almostrigidity}, there exists $\updelta'=\updelta'(n,\upepsilon)>0$ such that if $(M^n,g)$ is complete, satisfies $\Ric \ge 0$ and \eqref{eq:almostheatkernel} with $\updelta$ replaced by $\updelta'$, then any ball with radius $r$ in $M$ is $(\upepsilon r)$-GH close to a ball with the same radius in $\setR^n$. But Theorem \ref{th:estimate} implies that there exists $\updelta=\updelta(n,\updelta')=\updelta(n,\upepsilon)>0$ such that if $1-\updelta\le \uptheta$ holds, then $\upgamma(\uptheta)-1 \le \updelta'$ and thus \eqref{eq:almostheatkernel} holds with $\updelta'$. The result follows. \end{proof} \section{Case of a spherical heat kernel} In this section, for any Riemannian manifold $(M^n,g)$, we define the operator $L$ acting on $L^2(M)$ as the Friedrichs extension of the operator $\tilde{L}$ defined by the formula: \[ - \int_M (\tilde{L}u)v = \int_M \langle \nabla u, \nabla v \rangle \qquad \forall u,v \in C^\infty_c(M). \] The spectral theorem implies that $L$ generates a semi-group $(e^{tL})_{t>0}$ which admits a smooth heat kernel. The heat kernel of $(e^{tL})_{t>0}$ on the sphere $\mathbb{S}^n$ equipped with the canonical spherical metric $g_{\mathbb{S}^n}$ admits a well-known expression, namely $$ K_t^{(n)}(\dist_{\mathbb{S}^n}(x,y)) $$ for any $x,y \in \mathbb{S}^n$ and $t>0$, where $\dist_{\mathbb{S}^n}$ is the Riemannian distance canonically associated with $g_{\mathbb{S}^n}$ and \begin{equation}\label{eq:K_t} K_t^{(n)}(r):=\sum_{i=0}^{+\infty} e^{\lambda_i t} C_i^{(n)}(r)\end{equation} for any $r>0$, with $\lambda_i := -i(i+n-1)$ and $ C_i^{(n)}(\cdot):=(2i+n-1)(n-1)^{-1}\sigma_n^{-1} G_i^{\frac{n-1}{2}}(\cos(\cdot))$ for any $i \in \setN$. Here the functions $G_i^\alpha$ are the Gegenbauer polynomials (see e.g.~\cite{AtkinsonHan}). For our purposes, it is worth mentioning that $$C_0^{(n)}(r)=\frac{1}{\sigma_n} \quad \text{and} \quad C_1^{(n)}(r)=\frac{n+1}{\sigma_n}\cos(r)$$ for any $r>0$. Moreover, for any $t>0$, the sum in \eqref{eq:K_t} converges uniformly on $[0,+\infty)$. \begin{theorem} Let $(X,\dist,\mu)$ be a complete metric measure space equipped with a Dirichlet form $\cE$ admitting a spherical heat kernel $p$, that is \begin{equation}\label{eq:spher} p(x,y,t) = K_t^{(n)}(\dist(x,y)) \end{equation} for any $x, y \in X$ and $t>0$. Then $(X,\dist)$ is isometric to $(\mathbb{S}^n,\dist_{\mathbb{S}^n})$. \end{theorem} \begin{proof} Let $L$ be the self-adjoint operator canonically associated with $\cE$ and $(P_t)_{t>0}$ the associated semi-group. In the sequel we write $C_i$ for $C_i^{(n)}$. Assumption \eqref{eq:spher} implies that for any $t>0$ and $f \in L^2(X,\mu)$, $$ \int_{-\infty}^{+\infty} e^{t \lambda} \di (f,E_\lambda f) = \sum_{i=0}^{+\infty} e^{\lambda_i t} \iint_{X\times X} C_i(\dist(x,y)) f(x) f(y) \di \mu(x) \di \mu(y) $$ holds, where $(f,E_\lambda f)$ is the projection-valued measure of $L$ associated with $f$, see e.g.~\cite[p.~262-263]{ReedSimon}.
Uniqueness of the map $f \mapsto (f,E_\lambda f)$ implies that the spectrum of $L$ is given by $\lambda_0, \lambda_1, \lambda_2, \ldots$ and that the projection operators $P_i : L^2(X,\mu) \to E_i:=\mathrm{Ker}(L-\lambda_i \mathrm{Id})$, for any $i \in \setN$, have a kernel $p_i$ such that for any $x,y \in X$, $$ p_i(x,y)=C_i(\dist(x,y)). $$ Since $P_i$ commutes with $L$ for any $i$, we have $P_i Lg = \lambda_i P_i g$ for any $g \in \cD(L)$, thus $ \langle p_i(x,\cdot), Lg \rangle_{L^2} = \lambda_i \langle p_i(x,\cdot), g \rangle_{L^2}$ for any $x \in X$. This implies $p_i(x,\cdot) \in \cD(L)$ with $$Lp_i(x,\cdot)=\lambda_i p_i(x,\cdot)$$ for any $x \in X$. In case $i=0$, as $\lambda_0=0$ and $p_0(x,y)=C_0(\dist(x,y))=1/\sigma_n$ for any $x, y \in X$, we get $L\textbf{1}=0$, thus $P_0 \textbf{1}=\textbf{1}$. This implies $\int_Xp_0(x,y) \di \mu(y)=1$ for any $x \in X$, hence \begin{equation}\label{eq:volsphere} \mu(X) = \sigma_n\, . \end{equation} In case $i=1$, we have $\lambda_1=-n$ and $p_1(x,y)=C_1(\dist(x,y))=\frac{n+1}{\sigma_n} \cos(\dist(x,y))$ for any $x,y \in X$, hence \begin{equation}\label{eq:cosdist} L_x\cos(\dist(x,y)) = - n \cos(\dist(x,y)). \end{equation} Let $\phi_1,\ldots, \phi_l$ be continuous functions forming an $L^2(X,\mu)$-orthonormal basis of $E_1$. Observe that \begin{equation}\label{eq:p1} P_1f(x) = \int_X p_1(x,y) f(y) \di \mu(y) = \int_X\frac{n+1}{\sigma_n} \cos(\dist(x,y)) f(y) \di \mu(y) \end{equation} and $$ P_1f(x)=\sum_{j=1}^l \left(\int_X \phi_j(y) f(y) \di \mu(y)\right)\phi_j(x)= \int_X \left[\sum_{j=1}^l \phi_j(x)\phi_j(y)\right] f(y)\di \mu(y) $$ hold for any $f \in L^2(X,\mu)$ and $x \in X$. This implies $$\sum_{j=1}^l\phi_j(x)^2 = \frac{n+1}{\sigma_n}$$ for any $x \in X$, hence integration over $X$ and \eqref{eq:volsphere} give $$l=n+1.$$ Setting $$\cV:=\Span\{\cos(\dist(x,\cdot)) : x \in X\}$$ we get $\cV \subset E_1$ thanks to \eqref{eq:cosdist}. Since $E_1$ is the image of $L^2(X,\mu)$ by $P_1$, the reverse inclusion follows from \eqref{eq:p1}, hence $$\cV=E_1.$$ Proceeding as in Subsection~4.1, we can show that there exist $x_1, \ldots, x_{n+1} \in X$ such that $\{\delta_{x_1},\ldots,\delta_{x_{n+1}}\}$ is a basis of $\cV^*$ whose associated basis $\{h_1,\ldots,h_{n+1}\}$ of $\cV$ allows us to write \begin{equation}\label{eq:cos1} \cos(\dist(x,y)) = \sum_{i,j=1}^{n+1} c_{ij} h_i(x) h_j(y) \end{equation} for any $x,y \in X$, where $c_{ij}:=\cos(\dist(x_i,x_j))$ for any $i,j$. Let $\beta$ be the bilinear form defined by $$ \beta(\xi,\xi') = \sum_{i,j=1}^{n+1} c_{ij}\xi_i \xi_j'$$ for any $\xi=(\xi_1,\ldots,\xi_{n+1}), \xi'=(\xi_1',\ldots,\xi_{n+1}') \in \setR^{n+1}$ and $Q$ the associated quadratic form. Set $$ H: \begin{array}{ccl} X & \to & \setR^{n+1}\\ x & \mapsto & (h_1(x),\ldots,h_{n+1}(x)). \end{array} $$ Then \eqref{eq:cos1} reads \begin{equation}\label{eq:cos2} \cos(\dist(x,y)) = \beta(H(x),H(y)). \end{equation} Choosing $y=x$ shows that $H(x) \in \Sigma:=\{\xi \in \setR^{n+1} \, : \, \beta(\xi,\xi) =1\}$, so $H(X)$ is a subset of $\Sigma$. A direct computation provides: \begin{equation}\label{eq:cos3} Q(H(x)-H(y))=4\sin^2\left( \frac{\dist(x,y)}{2}\right) \qquad \forall x,y \in X, \end{equation} from which it follows that $H$ is an injective map.
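For the reader's convenience, we record the short computation behind \eqref{eq:cos3}; it only uses the bilinearity of $\beta$, identity \eqref{eq:cos2} and the fact that $H(X)\subset \Sigma$:
\begin{align*}
Q(H(x)-H(y)) &= Q(H(x)) - 2\beta(H(x),H(y)) + Q(H(y))\\
&= 1 - 2\cos(\dist(x,y)) + 1 = 4\sin^2\left(\frac{\dist(x,y)}{2}\right).
\end{align*}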
Writing $ \setR^{n+1} = E_+ \oplus E_- \oplus \mathrm{Ker}\, \beta$, where $E_+, E_-$ are subspaces of $\setR^{n+1}$ on which $\beta$ is positive definite and negative definite respectively, we can proceed as in Subsection~4.3, Step 1 (with the same notation) to get that $H_+$ is a bi-Lipschitz embedding of $(X,\dist)$ onto its image in $(E_+,q_+)$. Therefore, $\dim(E_+)$ is greater than the Hausdorff dimension of $X$. \begin{claim} The Hausdorff dimension of $X$ is $n$. \end{claim} \begin{proof} The short-time expansion of the heat kernel on Riemannian manifolds \cite{MinakshisundaramPleijel} and the Cheeger-Yau estimate \cite{CheegerYau} imply that for some $C>0$ and $t_0>0$, $$ \frac{1}{(4\pi t)^{n/2}} e^{-\frac{r^2}{4t}} \le K_t^{(n)}(r) \le \frac{C}{(4\pi t)^{n/2}} e^{-\frac{r^2}{5t}} $$ holds for any $r \in (0,\pi)$ and $t\in (0,t_0)$. Therefore, proceeding as in the proof of Lemma \ref{lem:voleu}, we obtain a positive constant $C$ such that for any $x \in X$ and any $r\in (0,\sqrt{t_0})$, $$C^{-1}r^n\le \mu(B_r(x))\le C r^n.$$ Hence the claim is proved. \end{proof} Thus $n+1 \ge \dim(E_+)> n$, so $\dim(E_+)=n+1$. This shows that $\beta$ is positive definite, thus the distance $\dist_Q$ is well-defined. The associated length distance $\delta$ on $\Sigma$ is then given by: $$ \dist_Q(\xi,\xi') = 2 \sin\left( \frac{\delta(\xi,\xi')}{2} \right) \qquad \forall \xi, \xi' \in \Sigma, $$ so that one eventually has: $$ \delta(H(x),H(y)) = \dist(x,y) \qquad \forall x,y \in X, $$ i.e.~$H$ is an isometric embedding of $(X,\dist)$ into $\Sigma$ equipped with $\delta$. Since $$\lim_{t\to 0^+} -4t \log K_t^{(n)}(r)=r^2,$$ we get from Remark \ref{rem:geodesicplus} that $(X,\dist)$ is a geodesic space. Then $H(X)$ is a closed totally geodesic subset of $\Sigma$, meaning that minimizing geodesics joining two points in $H(X)$ are all contained in $H(X)$. Assume, for the sake of contradiction, that there exists $p \in \Sigma \backslash H(X)$, and set $r:=\delta(p,H(X))$. \begin{claim} We have $r< \pi/2$. \end{claim} \begin{proof} Assume $r\ge \pi/2$. Then $H(X)$ is contained in the hemisphere $\{\sigma \in \Sigma : \beta(\sigma,p)\le 0\}$. Set $\lambda(\xi)=\beta(\xi,p)$ for any $\xi \in \setR^{n+1}$. Then $\lambda \circ H:X\to\setR$ is non-positive. But $\lambda \circ H$ is a linear combination of $h_1,\ldots,h_{n+1}$, thus it is an element of $\cV$. Since functions in $\cV = E_1$ are $L^2$-orthogonal to constant functions, we get $\int_X \lambda \circ H \di \mu = 0$; since $\lambda \circ H$ is continuous and non-positive, and every ball in $X$ has positive measure by the volume estimate above, this forces $\lambda \circ H \equiv 0$. As $h_1,\ldots,h_{n+1}$ are linearly independent, this implies $\beta(\xi,p)=0$ for all $\xi \in \setR^{n+1}$, hence $p=0$ because $\beta$ is positive definite; this contradicts $\beta(p,p)=1$. \end{proof} In fact, the same reasoning can be used to prove that $H(X)$ is contained in no hemisphere of $\Sigma$. We are now in a position to conclude. Since $H(X)$ is closed there exists $q\in H(X)$ such that $\delta(p,q)=r$. The convexity of $H(X)$ implies that any minimizing geodesic of length $<\pi$ starting at $q$ and passing through the open ball $B_r(p)$ cannot meet $H(X)$ again.
But the union of these minimizing geodesics is an open hemisphere, so $H(X)$ is contained in the complementary closed hemisphere, contradicting the fact that $H(X)$ is contained in no hemisphere of $\Sigma$. \end{proof}
\section{Introduction} \label{intro} Traditional machine learning (ML) approaches require that all data and learning processes gather in a central entity. This limits their capability to deal with real-world applications where data are isolated across different organizations and data privacy is being emphasized. Federated learning (FL), a distributed and privacy-preserving ML paradigm, is well suited for such scenarios and has been attracting growing attention. Existing FL approaches mostly focus on horizontal federated learning (HFL) \cite{yang2019federated}, which assumes that datasets from different participants share the same feature space but may not share the same sample ID space (Figure \ref{fig:1}-Left). Most existing HFL approaches aim to train a single global model for all participants \cite{mcmahan2016communication,konevcny2016federated}, while a few focus on learning separate models for each participant \cite{smith2017federated}. Vertical federated learning (VFL) \cite{yang2019federated} assumes that datasets from different participants do not share the same feature space but may share the same sample ID space. Furthermore, label information is assumed to be held by one participant. For example, two e-commerce companies and a bank which all serve users from the same city can train a model to recommend personalized loans for users based on their online shopping behaviours through VFL \cite{FL:2019}. In this case, only the bank holds label information for the intended VFL task. A key challenge in VFL is how to enable local label information from one participant to be used for training an FL model in a privacy-preserving manner. \begin{figure}[!t] \centering \includegraphics[width=1\linewidth]{FL.png} \caption{HFL vs. VFL \protect\cite{yang2019federated}.} \label{fig:1} \end{figure} VFL is currently less well explored compared to HFL \cite{Kairouz-et-al:2019}. Existing VFL approaches can only handle two VFL participants, and are generally focused on binary classification tasks \cite{hardy2017private,nock2018entity}. This makes them unsuitable for complex classification tasks in VFL applications involving multiple participants. To address this limitation, in this paper, we propose the Multi-participant Multi-class Vertical Federated Learning (MMVFL) framework. It extends the idea of multi-view learning (MVL) \cite{xu2013survey}, which jointly learns multiple models for tasks of multiple separate views of the same input data, to establish a VFL framework that is suitable for multi-class problems with multiple participants. Like the multi-task FL framework proposed in \cite{smith2017federated}, MMVFL learns a separate model for each participant, instead of a single global model for all participants, to make the learning process more personalized. Furthermore, MMVFL enables label sharing from the label owner to other participants to facilitate federated model training. It is worth mentioning that MMVFL is privacy-preserving, which means data and labels do not leave their owners during the training process. In addition, we propose a feature importance evaluation scheme based on MMVFL. It can quantify the contribution of different features from each participant to the FL model. By discarding redundant and harmful features in initial training periods, the communication, computation and storage costs of a VFL system can be reduced for subsequent training under incremental learning settings. 
To the best of our knowledge, MMVFL is the first VFL framework for multi-class problems with multiple participants. Through extensive experimental evaluation, we demonstrate that MMVFL can effectively share label information among multiple VFL participants and match the multi-class classification performance of existing approaches. \section{Related Work} \label{related} VFL is suited to FL scenarios in which participants' datasets share the same sample ID space but have different feature spaces. The idea of VFL was first proposed in \cite{hardy2017private}, where a federated logistic regression scheme is designed with messages encrypted with an additively homomorphic scheme. It also provided a formal analysis of the impact of entity resolution mistakes on learning. \cite{nock2018entity} then extended \cite{hardy2017private} to provide a formal assessment of the impact of errors in entity resolution on learning across a wide set of losses. \cite{yang2019quasi} and \cite{yang2019parallel} are two extensions of \cite{hardy2017private} that assume sample IDs are already matched. The former focused on reducing the rounds of communication required by proposing a privacy-preserving optimization framework based on a limited-memory BFGS algorithm. The latter built a parallel distributed system by removing the third-party coordinator to decrease the risk of data leakage and reduce the complexity of the system. In \cite{Wang-et-al:2019}, the authors proposed an approach to evaluate feature importance in VFL participants' local datasets. The approach dynamically removes different groups of features to assess the impact on FL model performance following a Shapley Value-based method. However, it can only evaluate feature importance at the granularity of feature groups. In addition, the computation of Shapley Values incurs exponential computational complexity, making it hard to scale up. Nevertheless, these approaches are only able to deal with two VFL participants, and are generally focused on binary classification tasks. This limits the applicability of these methods in real-world application scenarios. The proposed MMVFL is more advantageous than these state-of-the-art approaches as it is designed to support multi-class multi-participant VFL settings, which makes it possible for more complex collaborations among businesses via VFL to emerge. \section{Preliminaries} \subsubsection{Multi-View Learning} MVL approaches aim to learn one function to model each view and jointly optimize all the functions to improve generalization performance \cite{xu2013survey}. Data from each view are assumed to share the same sample ID space but with heterogeneous features, making MVL well-suited for the VFL scenario. Unfortunately, existing MVL methods require raw data from different views to interact during learning, making them unsuitable for direct application in FL, as this would violate the privacy preservation requirement. \subsubsection{Feature Selection} Feature selection is a set of frequently used dimensionality reduction approaches for selecting a subset of useful features from a dataset for a given learning task \cite{li2017feature}. It can help FL save communication cost by compressing the data based on feature importance. A common practice of feature selection is to first measure the importance of each feature to the learning task and discard features that are less important \cite{zhao2010efficient,yang2011l2}. \section{The Proposed \textnormal{MMVFL} Framework} The pipeline of MMVFL is shown in Fig.
\ref{fig:MMVFL}. By design, only the locally predicted labels cross the privacy barriers to reach the VFL Server. The global FL model can be trained without raw data, labels or local models leaving their owners' machines. In this section, we present the problem definition and the details of MMVFL. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{MMVFL_Figure} \caption{The pipeline of MMVFL.}\label{fig:MMVFL} \end{figure} \subsection{Notations and Problem Definition} Throughout this paper, matrices are denoted as bold upper-case letters. For a matrix $\mathbf{A} \in \mathbb{R}^{R \times C}$, $\| \mathbf{A} \|_{2,1} = \sum_{i=1}^{R} \| \mathbf{A}^{(i)} \|_2$ denotes the $\ell_{2,1}$-norm of $\mathbf{A}$, where $\mathbf{A}^{(i)}$ denotes the vector corresponding to the $i^\textrm{th}$ row of $\mathbf{A}$. For a VFL task on an $N_c$-class problem involving $K$ participants, each participant owns a dataset $\mathbf{X}_k \in \mathbb{R}^{N \times d_k}$ stored locally for FL model training, where $d_k$ denotes the dimensionality of the dataset and $N$ denotes the number of samples in it. Following the setup in \cite{hardy2017private}, label information is assumed to be owned by one participant. Without loss of generality, we assume that the first participant owns the labels. The research problem here is how to transfer label information from the first participant to the others for VFL model training, while performing feature importance evaluation for each participant. We assume that sample IDs are already matched in this paper. \subsection{Sparse Learning-based Unsupervised Feature Selection} \label{ufl} For participants who do not have access to the label information, unsupervised feature selection is adopted to select features that are representative of the underlying subspace structure of the data \cite{du2015unsupervised}. A transformation matrix is designed to project data to a new space and to guide feature selection based on the sparsity of the transformation matrix. MMVFL performs feature selection on the $k^\textrm{th}$ participant by optimizing the following objective function: \begin{equation} \label{featSelect} \begin{aligned} &\min_{\mathbf{W}_k, \mathbf{Z}_k} \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Z}_k \|_F^2 + \beta_k \| \mathbf{W}_k \|_{2,1} \\ &s.t.\ \mathbf{Z}_k^T\mathbf{Z}_k = \mathbf{I},\ \mathbf{Z}_k \geqslant 0 \end{aligned} \end{equation} where $\beta_k$ is a balance parameter, $\mathbf{W}_k \in \mathbb{R}^{d_k \times N_c}$ is the transformation matrix, and $\mathbf{Z}_k \in \mathbb{R}^{N \times N_c}$ is an embedding matrix in which each row denotes the representation of the corresponding data point. The second term is a regularizer that promotes row sparsity of $\mathbf{W}_k$ and thereby sharpens the feature importance measure. The two constraints enable $\mathbf{Z}_k$ to serve as a pseudo-label matrix for $\mathbf{X}_k$. Once $\mathbf{W}_k$ is produced, a feature importance score for each feature is computed as the $\ell_2$-norm of the corresponding row of $\mathbf{W}_k$, following \cite{yang2011l2}. Although sophisticated sparse learning-based unsupervised feature selection algorithms have been proposed in recent years, we adopt the linear transformation method for its simplicity, as our focus is to provide a proof-of-concept rather than to exhaust all possible feature selection schemes.
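As a concrete illustration, the following minimal NumPy sketch (ours, not from the original implementation; all function and variable names are hypothetical) shows how the row-norm importance scores and the top-$p\%$ selection described above can be computed from a learned transformation matrix $\mathbf{W}_k$:
\begin{verbatim}
import numpy as np

def feature_importance(W):
    """Row-wise l2 norms of the transformation matrix W (d_k x N_c):
    one importance score per feature, as described above."""
    return np.linalg.norm(W, axis=1)

def select_top_features(W, p):
    """Indices of the top p% of features ranked by importance."""
    scores = feature_importance(W)
    n_keep = max(1, int(np.ceil(p / 100.0 * W.shape[0])))
    return np.argsort(scores)[::-1][:n_keep]

# Example: one view with 240 features and 10 classes
rng = np.random.default_rng(0)
W_k = rng.standard_normal((240, 10))
kept = select_top_features(W_k, p=10)  # keep the top 10% of features
\end{verbatim}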
\subsection{Privacy-Preserving Label Sharing} \label{labelshare} Since most MVL approaches assume that all views share the same label space and are correlated through it, following \cite{tang2013unsupervised}, the local feature selection scheme in Eq. \eqref{featSelect} can be adapted to MVL as follows: \begin{equation} \label{featSelectMVL} \begin{aligned} &\min_{\mathbf{W}_k, \mathbf{Z}}\ \sum_{k=1}^{K} \left( \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Z} \|_F^2 + \beta_k \| \mathbf{W}_k \|_{2,1} \right) \\ &s.t.\ \mathbf{Z}^T\mathbf{Z} = \mathbf{I}, \ \mathbf{Z} \geqslant 0. \end{aligned} \end{equation} However, the optimization of $\mathbf{Z}$ needs access to raw data from different views. Thus, it cannot be directly applied to VFL. To adapt Eq. \eqref{featSelectMVL} to VFL, we propose the following objective function: \begin{equation} \label{obj_raw} \begin{aligned} &\min_{\mathbf{W}_k, \mathbf{Z}_k, \mathbf{Z}}\ \sum_{k=1}^{K} \left( \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Z}_k \|_F^2 + \beta_k \| \mathbf{W}_k \|_{2,1} \right) \\ &s.t.\ \mathbf{Z}_1=\mathbf{Y}, \ \mathbf{Z}_k=\mathbf{Z}, \ \mathbf{Z}_k \geqslant 0,\ \mathbf{Z}_k^T\mathbf{Z}_k = \mathbf{I} \end{aligned} \end{equation} where $\mathbf{Y} \in \{0,1\}^{N \times N_c}$ is a one-hot matrix containing the label information that is owned by the first participant. Following Eq. \eqref{obj_raw}, each participant trains a pseudo-label matrix $\mathbf{Z}_k$ locally. The constraint condition $\mathbf{Z}_k=\mathbf{Z}$ ensures that these locally learned matrices are equal ($\mathbf{Z}$ implements the assumption that data from all participants share the same label space). The constraint condition $\mathbf{Z}_1=\mathbf{Y}$ ensures that the pseudo-labels learned by the first participant are equal to the true labels. Note that the combination of the two constraint conditions $\mathbf{Z}_k=\mathbf{Z}$ and $\mathbf{Z}_1=\mathbf{Y}$ indirectly ensures that $\mathbf{Z}_k$ is equal to $\mathbf{Y}$. This achieves label sharing without direct access to raw data from different participants, making it suitable for VFL operations. \subsection{Optimization} Following \cite{feng2012adaptive}, we relax the constraints $\mathbf{Z}_k=\mathbf{Z}$ and $\mathbf{Z}_1=\mathbf{Y}$ by adding penalty terms with large enough coefficients $\zeta_k$ and $\eta$, respectively. Eq. \eqref{obj_raw} can be rewritten as: \begin{equation} \label{obj} \begin{aligned} \min_{\mathbf{W}_k, \mathbf{Z}_k, \mathbf{Z}}\ &\sum_{k=1}^{K} \left( \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Z}_k \|_F^2 + \beta_k \| \mathbf{W}_k \|_{2,1} + \zeta_k \| \mathbf{Z}_k - \mathbf{Z} \|_F^2 \right) \\ &+ \eta \| \mathbf{Z}_1 - \mathbf{Y} \|_F^2 \end{aligned} \end{equation} Note that the constraints $\mathbf{Z}_k^T\mathbf{Z}_k = \mathbf{I}$ and $\mathbf{Z}_k \geqslant 0$ are dropped because the large values of $\zeta_k$ and $\eta$ ensure that $\mathbf{Z}_k$ is close to $\mathbf{Y}$, and the fact that $\mathbf{Y}$ satisfies $\mathbf{Y}^T\mathbf{Y} = \mathbf{I}$ and $\mathbf{Y} \geqslant 0$ makes the two constraints redundant. The closed-form solution of the optimization problem in Eq. \eqref{obj} is hard to obtain due to the $\ell_{2,1}$-norm regularization term. To solve it, we design an alternating optimization approach in which all parameters are iteratively updated until the objective function value in \eqref{obj} converges or a maximum number of iterations is reached. When $\mathbf{Z}_k$ and $\mathbf{Z}$ are fixed, $\mathbf{W}_k$ can be solved locally. Eq.
\eqref{obj} becomes: \begin{equation} \label{objWk} \min_{\mathbf{W}_k} \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Z}_k \|_F^2 + \beta_k \| \mathbf{W}_k \|_{2,1}. \end{equation} Following \cite{hou2014joint}, Eq. \eqref{objWk} can be re-written as: \begin{equation} \label{objWk2} \min_{\mathbf{W}_k,\mathbf{A}_k} \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Z}_k \|_F^2 + \beta_k {\rm Tr} \left( \mathbf{W}_k^T\mathbf{A}_k\mathbf{W}_k \right) \end{equation} where $\mathbf{A}_k \in \mathbb{R}^{d_k \times d_k}$ is a diagonal matrix whose $i^\textrm{th}$ element on the diagonal is \begin{equation} \label{optAk} \mathbf{A}_k^{(i,i)} = 1 / \left[ 2 \left( \|{\mathbf{W}_k}_{(i)}\|_2 + \epsilon \right) \right], \end{equation} where $\epsilon$ is a small constant that keeps the denominator nonzero even when $\|{\mathbf{W}_k}_{(i)}\|_2 = 0$. Therefore, when $\mathbf{A}_k$ is fixed, the optimal value of $\mathbf{W}_k$ can be obtained through \begin{equation} \label{optWk} \mathbf{W}_k^* = \left( \mathbf{X}_k^T\mathbf{X}_k + \beta_k\mathbf{A}_k \right)^{-1} \mathbf{X}_k^T\mathbf{Z}_k. \end{equation} We can update $\mathbf{A}_k$ through Eq. \eqref{optAk} when $\mathbf{W}_k$ is fixed, and update $\mathbf{W}_k$ through Eq. \eqref{optWk} when $\mathbf{A}_k$ is fixed, iterating this scheme until convergence. When $\mathbf{W}_k$ is fixed, the optimization problem for solving $\mathbf{Z}_k$ and $\mathbf{Z}$ is \begin{equation} \label{optZkandZ} \min_{\mathbf{Z}_k, \mathbf{Z}}\ \sum_{k=1}^K \left( \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Z}_k \|_F^2 + \zeta_k \| \mathbf{Z}_k - \mathbf{Z} \|_F^2 \right) + \eta \| \mathbf{Z}_1 - \mathbf{Y} \|_F^2 \end{equation} When $\mathbf{Z}_k$, $k = 2,3,\cdots,K$, and $\mathbf{Z}$ are fixed, $\mathbf{Z}_1$ can be solved locally through \begin{equation} \label{optZ1_v2} \begin{aligned} \min_{\mathbf{Z}_1}\ \| \mathbf{X}_1\mathbf{W}_1 - \mathbf{Z}_1 \|_F^2 + \zeta_1 \| \mathbf{Z}_1 - \mathbf{Z} \|_F^2 + \eta \| \mathbf{Z}_1 - \mathbf{Y} \|_F^2 \end{aligned} \end{equation} It is straightforward to obtain the optimal $\mathbf{Z}_1$ as \begin{equation} \label{optZ1_final} \mathbf{Z}_1^* = \left( \mathbf{X}_1\mathbf{W}_1 + \zeta_1\mathbf{Z} + \eta\mathbf{Y} \right) / \left( 1 + \zeta_1 + \eta \right) \end{equation} When $\mathbf{Z}_1$ and $\mathbf{Z}$ are fixed, the optimization of $\mathbf{Z}_k$ for $k = 2,3,\cdots,K$ can be carried out in a similar way, and the optimal $\mathbf{Z}_k$ is: \begin{equation} \label{optZk_v2} \mathbf{Z}_k^* = \left( \mathbf{X}_k\mathbf{W}_k + \zeta_k\mathbf{Z} \right) / \left( 1 + \zeta_k \right) \end{equation} \begin{algorithm}[t] \caption{\small \sl MMVFL} \label{mmvfl_alg} \begin{algorithmic}[1] \REQUIRE Each participant's own local dataset $\{\mathbf{X}_k\}$, $k = 1, 2, \cdots, K$; \ENSURE Transformation matrix for each participant $\{\mathbf{W}_k\}$, $k = 1, 2, \cdots, K$ \STATE Initialize each $\mathbf{W}_k$ randomly; initialize each $\mathbf{Z}_k$ and $\mathbf{Z}$ randomly such that $\mathbf{Z}_k^T\mathbf{Z}_k=\mathbf{I}$ and $\mathbf{Z}^T\mathbf{Z}=\mathbf{I}$; \WHILE{not converged} \FOR{participant $k \in \{1, 2, \cdots, K\}$ in parallel over $K$ nodes} \WHILE{not converged} \STATE Update $\mathbf{A}_k$ according to Eq. \eqref{optAk}; \STATE Update $\mathbf{W}_k$ according to Eq. \eqref{optWk}; \ENDWHILE \IF{$k=1$} \STATE Update $\mathbf{Z}_k$ according to Eq. \eqref{optZ1_final}; \ELSE \STATE Update $\mathbf{Z}_k$ according to Eq. \eqref{optZk_v2}; \ENDIF \ENDFOR \STATE Update $\mathbf{Z}$ according to Eq.
\eqref{optZ_final}; \ENDWHILE \end{algorithmic} \end{algorithm} When $\mathbf{Z}_k$ ($k = 1, 2, \cdots, K$) are fixed, $\mathbf{Z}$ can be optimized by solving the following problem: \begin{equation} \label{optZ_v2} \min_{\mathbf{Z}}\ \sum_{k=1}^K \zeta_k \| \mathbf{Z}_k - \mathbf{Z} \|_F^2. \end{equation} The optimal value of $\mathbf{Z}$ is: \begin{equation} \label{optZ_final} \mathbf{Z}^* = \sum_{k=1}^K \zeta_k \mathbf{Z}_k / \sum_{k=1}^K \zeta_k. \end{equation} The details of MMVFL are summarized in Algorithm \ref{mmvfl_alg}. \section{Analysis} \label{discussion} \subsection{Convergence} The optimization problems for $\mathbf{Z}_1$, $\mathbf{Z}_k$ ($k = 2, 3, \cdots, K$), and $\mathbf{Z}$, with the other parameters fixed, are all simple convex optimization problems with global minima. It can easily be shown that the optimization scheme for $\mathbf{W}_k$ makes Eq. \eqref{objWk} decrease monotonically until convergence, following the same analysis as in \cite{hou2014joint}. Interested readers can refer to \cite{hou2014joint} for details. In this way, the objective function is non-increasing throughout the optimization. \subsection{Time Complexity} For the $k^\textrm{th}$ participant in VFL, the most time-consuming part of local training under MMVFL is the optimization of $\mathbf{W}_k$ following Eq. \eqref{optWk}. Its time complexity is $O(d_k^3)$. Since the proposed optimization scheme requires per-iteration communications among all participants, the time complexity of each iteration of the federated learning is $O((\max_k (d_k))^3)$, which means the time taken for FL training under MMVFL depends on the slowest participant in each round (referred to as a straggler). Techniques such as those reported in \cite{Liu-et-al:2019Com} can be used to improve the communication efficiency. We do not delve into the details of such techniques here. \subsection{Privacy Preservation} The main idea of MMVFL is that each participant learns its own model parameters $\mathbf{W}_k$ and $\mathbf{Z}_k$ locally, while $\mathbf{Z}$ is updated in a federated manner as expressed in Eq. \eqref{optZ_final}. In this process, only the $\mathbf{Z}_k$ values from all participants are transmitted to the FL server, while the $\mathbf{X}_k$ and $\mathbf{Y}$ values are stored locally by their owners. Therefore, MMVFL provides privacy-preserving label sharing, as the pseudo-label matrices alone are not enough to derive the original data, even when they are intercepted by a malicious entity over multiple rounds. \section{Experimental Evaluation} In this section, we evaluate the performance of MMVFL in terms of its effectiveness in label sharing. Experiments are conducted on two benchmark computer vision datasets. \subsection{Real-world Data} We perform experiments on two benchmark MVL datasets: Handwritten and Caltech7 \cite{li2015large}\footnote{Both datasets downloaded from \url{https://drive.google.com/drive/folders/1O\_3YmthAZGiq1ZPSdE74R7Nwos2PmnHH}}. The former contains 5 views\footnote{\textit{Handwritten} originally contains 6 views in \cite{li2015large}. We remove the one with morphological features because it only contains 6 features, which makes feature selection insignificant.} and the latter contains 6 views, which can be regarded as coming from 5 and 6 VFL participants respectively, each owning the features of one view.
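Before detailing the experimental protocol, we illustrate the alternating scheme of Algorithm \ref{mmvfl_alg} with a minimal NumPy sketch. This is our own illustrative rendering of Eqs. \eqref{optAk}, \eqref{optWk}, \eqref{optZ1_final}, \eqref{optZk_v2} and \eqref{optZ_final} (all function and variable names are ours, and it omits convergence checks and the actual federated communication layer):
\begin{verbatim}
import numpy as np

def update_W(X, Z, beta, n_inner=10, eps=1e-8):
    """Local solve: alternate between the diagonal matrix A_k
    (Eq. optAk) and the transformation matrix W_k (Eq. optWk)."""
    d = X.shape[1]
    A = np.eye(d)
    XtX, XtZ = X.T @ X, X.T @ Z
    for _ in range(n_inner):
        W = np.linalg.solve(XtX + beta * A, XtZ)      # Eq. (optWk)
        A = np.diag(1.0 / (2.0 * (np.linalg.norm(W, axis=1) + eps)))
    return W

def mmvfl_round(Xs, Z, Y, beta, zeta, eta):
    """One outer round of Algorithm 1. Only the pseudo-label
    matrices Z_k would be sent to the server; X_k, Y, W_k stay local."""
    Ws, Zs = [], []
    for k, X in enumerate(Xs):
        W = update_W(X, Z, beta)
        Ws.append(W)
        XW = X @ W
        if k == 0:  # label owner, Eq. (optZ1_final)
            Zs.append((XW + zeta * Z + eta * Y) / (1.0 + zeta + eta))
        else:       # Eq. (optZk_v2)
            Zs.append((XW + zeta * Z) / (1.0 + zeta))
    Z = sum(Zs) / len(Zs)  # Eq. (optZ_final) with equal zeta_k
    return Ws, Zs, Z
\end{verbatim}
In a deployment, only the $\mathbf{Z}_k$ matrices computed in this loop would be transmitted to the server, matching the privacy analysis above.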
In order to eliminate side effects caused by imbalanced classes, for each dataset we ensure that the number of instances from each class is the same for both the training and the validation sets. The properties of the datasets used in our experiments are summarized in Table \ref{dataProperty}. \begin{table}[ht] \caption{\small \sl Properties of the Datasets.} \label{dataProperty} \small \centering \begin{tabular}{|l|c|c|} \hline & Handwritten & Caltech7 \\\hline Data Dimensionalities & 240, 76, 216, & 48, 40, 254, \\ of All Views & 47, 64 & 1984, 912, 528 \\\hline Training Samples / Class & 120 & 20 \\\hline Validation Samples / Class & 40 & 5 \\\hline Number of Classes & 10 & 7 \\\hline \end{tabular} \end{table} \subsection{Comparison Baselines} MMVFL is compared against the following relevant state-of-the-art approaches: \begin{enumerate} \item \textit{supFL} \cite{zhao2010efficient}: which performs independent supervised feature selection on each of the $K$ participants, assuming that they all have access to label information. It optimizes the following objective function: \begin{equation} \label{comp1} \min_{\mathbf{W}_k} \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Y} \|_F^2 + \beta_k \| \mathbf{W}_k \|_{2,1}. \end{equation} Note that the notation $\mathbf{Y}$ in Eq. \eqref{comp1} refers to the one-hot matrix that contains the label information as defined in Section \ref{labelshare}, which is different from the same notation used in \cite{zhao2010efficient}. \item \textit{supMVLFL}: which performs supervised multi-view feature selection under a linear transformation framework. It is a direct extension of supFL \cite{zhao2010efficient} into an MVL architecture, which optimizes the following objective function: \begin{equation} \label{comp2} \min_{\mathbf{W}_k} \sum_{k=1}^{K} \left( \| \mathbf{X}_k\mathbf{W}_k - \mathbf{Y} \|_F^2 + \beta_k \| \mathbf{W}_k \|_{2,1} \right). \end{equation} \end{enumerate} According to \cite{tang2013unsupervised}, MVL can improve learning performance for each view compared to learning each view separately, as multiple views put together can complement each other and reduce the effect of noisy and partial data in single-view learning. The above two approaches are distributed machine learning approaches capable of sharing information across multiple participants, but they do not preserve data privacy in this process. \subsection{Experiment Settings} We fix some parameters and tune others according to a ``grid search'' strategy. For all algorithms, we set the balance parameters $\beta_k = \beta$ and $\zeta_k = \zeta$, $\forall k$ for simplicity, where $\beta \in \{ 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10 \}$ and $\zeta = 1000$. We also set $\eta = 1000$. We perform 5-fold cross validation for classification. That is, for each view on a given dataset, the samples from each class are divided equally into 5 parts. Five training/validation processes are conducted separately: four out of the five parts are used together as the training set, while the remaining part is used as the validation set. For each specific fold and each specific view on a given dataset, after the transformation matrix is obtained for each participant, we first perform feature importance evaluation based on the scheme proposed in Section \ref{ufl}. Then, we keep the top $p\%$ of the features with the highest importance during validation. We select the top $p$ percent of all the features from each dataset, with $p \in \{ 2, 4, 6, 8, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 \}$.
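The following short sketch (ours) makes the per-view evaluation protocol concrete. Since the paper does not fix the classifier applied to the selected features during validation, a 1-nearest-neighbour classifier is used here purely as a placeholder assumption:
\begin{verbatim}
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

def evaluate_view(X, y, scores, p, n_folds=5, seed=0):
    """Mean 5-fold CV accuracy when keeping the top p% of features.
    In the paper the importance scores come from the training phase
    of each fold; they are passed in precomputed here for brevity."""
    n_keep = max(1, int(np.ceil(p / 100.0 * X.shape[1])))
    kept = np.argsort(scores)[::-1][:n_keep]
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True,
                          random_state=seed)  # class-balanced folds
    accs = []
    for tr, va in skf.split(X, y):
        clf = KNeighborsClassifier(n_neighbors=1)  # placeholder
        clf.fit(X[tr][:, kept], y[tr])
        accs.append(clf.score(X[va][:, kept], y[va]))
    return float(np.mean(accs))
\end{verbatim}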
For each specific value of $p$, each specific fold, and each specific view on a given dataset, we tune the parameters of each algorithm in order to achieve the best results among all possible parameter combinations. Finally, we report the classification accuracy averaged over the 5-fold cross validation for each view of each dataset. \subsection{Results and Discussion} We present the classification results of MMVFL and the comparison algorithms on the Handwritten dataset and the Caltech7 dataset in Fig. \ref{handwritten} and Fig. \ref{caltech7}, respectively. The averaged differences ($\%$) between the performance of MMVFL and that of supFL and supMVLFL across all choices of selected features are listed in the first and second row of each dataset in Table \ref{accdiff}, respectively, where a positive number means better performance achieved by MMVFL. It can be observed that the performance of MMVFL is comparable with its supervised counterparts in most cases, and sometimes even better. On the Handwritten dataset, MMVFL outperforms supFL and supMVLFL by 1.42\% and 2.31\%, respectively, when averaged over the 5 participants. On the Caltech7 dataset, the accuracy of MMVFL is lower than that of supFL and supMVLFL by 1.21\% and 0.88\%, respectively, when averaged over the 6 participants. The fact that the classification performance of MMVFL is comparable with that of the two competitors demonstrates that it is able to effectively share label information from the label-owning participant to the other participants under VFL settings to train a global FL model. As a side note, the comparison between supFL and supMVLFL shows that MVL helps improve learning performance in this experiment. Meanwhile, in some cases MMVFL can achieve comparable or even better performance using a smaller number of important features than the other approaches using all the features. As discussed in Section \ref{discussion}, by discarding features that are less important to the FL system based on the feature importance evaluation scheme proposed in Section \ref{ufl}, the required resources, such as communication bandwidth, computing devices and memory space, can be reduced. This is especially advantageous for VFL systems under incremental learning settings.
\begin{table}[ht] \caption{\small \sl Performance Differences ($\%$) between MMVFL and the baselines (first row of each dataset: vs. supFL; second row: vs. supMVLFL).} \label{accdiff} \small \centering \resizebox{1\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \diagbox[width=6.5em]{Dataset}{Participant} & 1 & 2 & 3 & 4 & 5 & 6 & Avg\\ \hline \multirow{2}{*}{Handwritten} & 1.46 & -2.39 & 0.76 & 6.48 & 0.77 & \diagbox{}{} & 1.42\\ \cline{2-8} & 1.99 & -2.31 & 1.03 & 9.67 & 1.16 & \diagbox{}{} & 2.31\\ \hline \multirow{2}{*}{Caltech7} & 0.69 & 2.16 & 1.55 & -1.22 & -6.29 & -4.12 & -1.21\\ \cline{2-8} & 0.41 & 2.82 & 2.61 & -1.18 & -5.71 & -4.20 & -0.88\\ \hline \end{tabular} } \end{table} \begin{figure*} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view0_handwritten.pdf}} \centerline{(a) Participant 1} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view1_handwritten.pdf}} \centerline{(b) Participant 2} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view2_handwritten.pdf}} \centerline{(c) Participant 3} \end{minipage} \vfill \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view3_handwritten.pdf}} \centerline{(d) Participant 4} \end{minipage} \hfill \begin{minipage}{0.5\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view4_handwritten.pdf}} \centerline{(e) Participant 5} \end{minipage} \caption{Performance of MMVFL and competing algorithms on \textit{Handwritten} in classification as a function of the percentage of features selected $p$ ($\%$).} \label{handwritten} \end{figure*} \begin{figure*} \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view0_caltech101_7.pdf}} \centerline{(a) Participant 1} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view1_caltech101_7.pdf}} \centerline{(b) Participant 2} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view2_caltech101_7.pdf}} \centerline{(c) Participant 3} \end{minipage} \vfill \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view3_caltech101_7.pdf}} \centerline{(d) Participant 4} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view4_caltech101_7.pdf}} \centerline{(e) Participant 5} \end{minipage} \hfill \begin{minipage}{0.33\linewidth} \centerline{\includegraphics[width=6.0cm]{acc_view5_caltech101_7.pdf}} \centerline{(f) Participant 6} \end{minipage} \caption{Performance of MMVFL and competing algorithms on \textit{Caltech7} in classification as a function of the percentage of features selected $p$ ($\%$).} \label{caltech7} \end{figure*} \section{Conclusions and Future Work} In this paper, we proposed a multi-participant multi-class vertical federated learning (MMVFL) framework, which shares the label information from its owner to all the other participants without data leakage. Unlike similar existing techniques that can only support two participants, MMVFL can work in more complex scenarios, making it suited for a wider range of applications. To the best of our knowledge, it is the first attempt to transfer a multi-view learning approach into the VFL setting. Experimental results on feature selection demonstrate that MMVFL can achieve performance comparable to its supervised counterparts. In subsequent research, we will focus on three major directions to further enhance MMVFL.
Firstly, we plan to explore how to incorporate more sophisticated classification techniques into this framework to expand its applicability. Secondly, we will improve the communication efficiency of MMVFL and explore ways for it to handle stragglers more effectively. Last but not least, we will study how the relationships across tasks among different participants in VFL affect the performance of the overall FL model. \bibliographystyle{named}
\subsubsection{Question 1: Is the Delta algorithm really more efficient than the \emph{Na\"ive}\xspace algorithm in practice?} \paragraph{How to test this?} \begin{itemize} \item Queries that have the distributivity property. \item MonetDB/XQuery; other engines? \end{itemize} \subsubsection{Question 2: Does the IFP operator give an advantage over general recursive functions?} \paragraph{Show that the IFP operator can do better when the recursion is deep, because IFP is tail-recursive.} \paragraph{How to test this?} \begin{itemize} \item Queries that generate deep recursion. Possible data: XMark, artificial. \item MonetDB/XQuery, SaxonB and others? \end{itemize} \paragraph{Show performance advantages due to relational algebra optimizations that are now possible in MonetDB/XQuery, since there is an operator to describe recursion.} (So far, recursion was implemented only in the non-algebra-based version of MonetDB/XQuery, so only ad-hoc optimizations were applied.) \paragraph{How to test this?} \begin{itemize} \item For the data, we need to ask Jan/Jens/Torsten. \item MonetDB/XQuery without the underlying algebra (milprint-summer) versus MonetDB/XQuery with algebra optimizations. \end{itemize} \SubSection{Experimental set-up} \SubSection{Results}
\section{Acknowledgments} We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) via WE5386/4-1, WE5386/5-1 and Germany’s Excellence Strategy EXC-2111-390814868 and the Research Council of Norway through its Centers of Excellence funding scheme, project 262633, ``QuSpin''.
\section{Introduction} \label{sec:intro} This paper is the first of three devoted to bilattices, the other two being \cite{CCP,CP2}. Taken together, our three papers provide a systematic treatment of dual representations via natural duality theory, showing that this theory applies in a uniform way to a range of varieties having bilattice structure as a unifying theme. The representations are based on hom-functors and hence the constructions are inherently functorial. The key theorems on which we call are easy to apply, in black-box fashion, without the need to delve into the theory. Almost all of the natural duality theory we employ can, if desired, be found in the text by Clark and Davey~\cite{CD98}. The term bilattice, loosely, refers to a set $L$ equipped with two lattice orders, $\leq_t$ and $\leq_k$, subject to some compatibility requirement. The subscripts have the following connotations: $t$ measuring `degree of truth' and $k$ `degree of knowledge'. As an algebraic structure, then, a bilattice carries two pairs of lattice operations: $\land_t$ and $\lor_t$; $\land_k$ and $\lor_k$. The term distributive is applied when all possible distributive laws hold amongst these four operations; distributivity imposes strictly stronger compatibility between the two lattice structures than the condition known as interlacing. Distributive bilattices may, but need not, also be assumed to have universal bounds for each order which are treated as distinguished constants (or, in algebraic terms, as nullary operations). In addition, a bilattice is often, but not always, assumed to carry an involutory unary operation $\neg$, thought of as modelling a negation. Historically, the investigation of bilattices (of all types) has been tightly bound up with their potential role as models in artificial intelligence and with the study of associated logics. We note, by way of a sample, the pioneering papers of Ginsberg \cite{Gin} and Belnap \cite{ND1,ND2} and the more recent works \cite{AA1,RPhD,BR11}. We do not, except to a very limited extent in Section~\ref{Sec:Conclude}, address logical aspects of bilattices in our work. In this paper we focus on distributive bilattices, with or without bounds and with or without negation. In \cite{CP2} we consider varieties arising as expansions of those considered here, in particular distributive bilattices with both negation and a conflation operation. In \cite{CCP} we move outside the realm of distributivity, and even outside the wider realm of interlaced bilattices, and study certain quasivarieties generated by finite non-interlaced bilattices arising in connection with default logics. The present paper is organised as follows. Section~\ref{sec:DBilat} formally introduces the varieties we shall study and establishes some basic properties. Sections~\ref{sec:DB},~\ref{sec:DBU} and \ref{sec:DPB} present our natural dualities for these varieties. We preface these sections by accounts of the relevant natural duality theory, tailored to our intended applications (Sections~\ref{piggyonesorted} and~\ref{sec:multi}). Theory and practice are brought together in Sections~\ref{Sec:DisPiggyDual} and~\ref{sec:prodrep}, in which we demonstrate how our representation theory relates to, and illuminates, results in the existing literature.
Section~\ref{Sec:Applications} is devoted to applications: we exploit our natural dualities to establish a range of properties of bilattices which are categorical in nature, for instance the determination of free objects and of unification~type. We emphasise that our approach differs in an important respect from that adopted by other authors. Bilattices have been very thoroughly studied as algebraic structures (see for example~\cite{RPhD} and the references therein). Central to the theory of distributive bilattices, and more generally interlaced ones, is the theorem showing that such algebras can always be represented as products of pairs of lattices, with the structure determined from the factors (see \cite{P00} and \cite{BR11} for the bounded and unbounded cases, respectively, and the informative historical survey by Davey \cite{BD13} of the evolution of this oft-rediscovered result). The product representation is normally derived by performing quite extensive algebraic calculations. It is then used in a crucial way to obtain, for those bilattice varieties which have bounded distributive lattice reducts, dual representations which are based on Priestley duality \cite{MPSV,JR}. Our starting point is different. For each class ${\cat A}$ of algebras we study here and in \cite{CP2}, we first establish, by elementary arguments, that ${\cat A}$ takes the form $\ISP(\alg{M})$, where~$\alg{M}$ is finite, or, more rarely, $\ISP(\class{M})$, where $\class{M}$ is a set of two finite algebras. (In \cite{CCP} we assume at the outset that ${\cat A}$ is the quasivariety generated by some finite algebra in which we are interested.) This gives us direct access to the natural duality framework. From this perspective, the product representation is a consequence of the natural dual representation, and closely related to it. For a reconciliation, in the distributive setting, of our approach and that of others and an explanation of how these approaches differ, see Sections~\ref{sec:prodrep} and~\ref{Sec:Conclude}. We may summarise as follows what we achieve in this paper and in \cite{CCP,CP2}. For different varieties we call on different versions of the theory of natural dualities. Accordingly our account can, {\it inter alia}, be read as a set of illustrated tutorials on the natural duality methodology presented in a self-contained way. The examples we give will also be new to natural duality aficionados, but for such readers we anticipate that the primary interest of our work will be its contribution to the understanding of the interrelationship between natural and Priestley-style dualities for finitely generated quasivarieties of distributive lattice-based algebras. For this we exploit the piggybacking technique, building on work initiated in our paper \cite{CPcop} and our constructions elucidate precisely how product representations come about. All our natural dual representations are new, as are our Priestley-style dual representations in the unbounded cases. Finally we draw attention to the remarks with which we end the paper drawing parallels between the special role the knowledge order plays in our theory and the role this order plays in Belnap's semantics for a four-valued logic. \section{Distributive pre-bilattices and bilattices} \label{sec:DBilat} We begin by giving basic definitions and establishing the terminology we shall adopt henceforth. We warn that the definitions (bilattice, pre-bilattice, etc.) are not used in a consistent way in the literature, and that notation varies. 
Our choice of symbols for lattice operations enables us to keep overt which operations relate to truth and which to knowledge. Alternative notation includes $\vee$ and $\wedge$ in place of $\lor_t$ and $\land_t$, and $\oplus$ and $\otimes$ in place of $\lor_k$ and~$\land_k$. We define first the most general class of algebras we shall consider. We shall say that an algebra $\alg{A} = (A; \lor_t,\land_t,\lor_k,\land_k)$ is an \defn{unbounded distributive pre-bilattice} if each of the reducts $(A; \lor_t,\land_t)$ and $(A;\lor_k,\land_k)$ is a lattice and each of $\lor_t$, $\land_t$, $\lor_k$ and $\land_k$ distributes over each of the other three. The class of such algebras is a variety, which we denote by $\unbounded{\DPB}$. Each of the varieties we consider in this paper and in \cite{CP2} will be obtained from $\unbounded{\DPB}$ by expanding the language by adding constants, or additional unary or binary operations. Given $\alg{A} \in \unbounded{\DPB}$, we let $\alg{A}_t = (A;\lor_t, \land_t)$ and refer to it as the \defn{truth lattice reduct} of $\alg{A}$ (or $t$-lattice for short); likewise we have a \defn{knowledge lattice reduct} (or $k$-lattice) $\alg{A}_k = (A;\lor_k,\land_k)$. The following lemma is an elementary consequence of the definitions. We record it here to emphasise that no structure beyond that of an unbounded distributive pre-bilattice is involved. \begin{lem} \label{lem:cong} Let $\alg{A}= (A;\lor_t,\land_t, \lor_k,\land_k) \in \unbounded{\DPB}$. Then, for $a,b,c \in A$, \begin{newlist} \item[{\upshape (i)}] $ a \leq _k b \leq _k c $ implies $ a \land_t c \leq_t b \leq_ t a \lor_t c $; \item[{\upshape (ii)}] $a \land_t b \leq_t a \star_k b \leq_t a \lor_t b$, where $\star_k$ denotes either $\land_k$ or $\lor_k$. \end{newlist} Corresponding statements hold with $k$ and $t$ interchanged. \end{lem} As we have indicated in the introduction, we shall wish to prove, for each bilattice variety~${\cat A}$ we study, that ${\cat A}$ is finitely generated as a quasivariety. This amounts to showing that there exists a finite set $\class{M}$ of finite algebras in~${\cat A}$ such that, for each $\alg{A}\in {\cat A}$ and $a \ne b $ in $\alg{A}$, there is $\alg{M} \in \class{M}$ and a ${\cat A}$-homomorphism $h \colon \alg{A} \to \alg{M}$ with $h(a) \ne h(b)$. ($\class{M}$ will consist of a single subdirectly irreducible algebra or at most two such algebras.) This separation property is linked to the existence of particular quotients of the algebras in~${\cat A}$. Accordingly we are led to investigate congruences. We start with a known result. Our proof is direct and elementary: it uses nothing more than the distributivity properties of the $t$- and $k$-lattice operations, together with Lemma~\ref{lem:cong} and basic facts about lattice congruences given, for example, in \cite[Chapter~6]{ILO2}. (Customarily the lemma would be obtained as a spin-off from the product representation theorem as this applies to distributive bilattices.) \begin{prop} \label{lem:cong2} Let $\alg{A} = (A;\lor_t,\land_t, \lor_k, \land_k)$ be an unbounded distributive pre-bilattice. Let $\theta\subseteq A^2$ be an equivalence relation. Then the following statements are equivalent: \begin{newlist} \item[{\upshape (i)}] $\theta$ is a congruence of $\alg{A}_t =(A;\lor_t,\land_t)$; \item[{\upshape (ii)}] $\theta$ is a congruence of $\alg{A}_k = (A;\lor_k,\land_k)$; \item[{\upshape (iii)}] $\theta$ is a congruence of $\alg{A}$. 
\end{newlist} \end{prop} \begin{proof} It will suffice, by symmetry, to prove (i) $\Rightarrow$ (ii). So assume that~(i) holds. Since $\theta$ is a congruence of $(A;\lor_t,\land_t)$, the $\theta$-equivalence classes are convex sublattices with respect to the $\leq_t$ order. We first observe that from Lemma~\ref{lem:cong}(i) each equivalence class is convex with respect to the $\leq_k$ order, and from Lemma~\ref{lem:cong}(ii) that each equivalence class is a sublattice of $(A;\lor_k,\land_k)$. Finally we need to establish the quadrilateral property: \[ a\, \theta \, ( a\land_k b ) \Longleftrightarrow b \, \theta \, (a \lor_k b). \] For the forward direction observe that the distributive laws and Lemma~\ref{lem:cong}(ii) (swapping $t$ and $k$) imply \begin{multline*} a\land_t b=(a\lor_k b)\land_k(a\land_t b)=(a\land_t (a\land_k b))\lor_k (b\land_t (a\land_k b)) \\ =(a\land_k (a\land_t b))\lor_k (b\land_k (a\land_t b)) =(a\lor_k b )\land_k (a\land_t b). \end{multline*} Combining this with $a\, \theta \, ( a\land_k b )$ and with the fact that $\theta$ is a congruence of $(A;\lor_t,\land_t)$, we have $a\land_t b\, \theta \, (a\lor_k b)\land_t a$. Replacing $\land_t$ by $\lor_t$ in the previous argument, we obtain $a\lor_t b\, \theta \, (a\lor_k b)\lor_t a$. This proves that \[ [a]_{\theta}\land_t [b]_{\theta}= [a]_{\theta}\land_t [a\lor_k b]_{\theta}\quad \mbox{and} \quad [a]_{\theta}\lor_t [b]_{\theta}= [a]_{\theta}\lor_t [a\lor_k b]_{\theta}. \] Since $(A;\land_t, \lor_t)/\theta$ is distributive, $[b]_{\theta}=[a\lor_k b]_{\theta}$, that is, $b\,\theta\, a\lor_k b$. \end{proof} The following consequences of Proposition~\ref{lem:cong2} will be important later. Take an unbounded distributive pre-bilattice $\alg{A}$ and a filter $F$ of $\alg{A}_t$. Then $F$ is a convex sublattice of $\alg{A}_k$. If a map $h\colon A \to \{0,1\}$ acts as a lattice homomorphism from $\alg{A}_t$ into the two-element lattice~$\boldsymbol 2$, then $h$ is a lattice homomorphism from $\alg{A}_k$ into either~$\boldsymbol 2$ or its dual lattice $\boldsymbol 2^{\partial}$. Hence each prime filter for $\alg{A}_t$ is either a prime filter or a prime ideal for $\alg{A}_k$ and vice versa. These results were first proved in \cite[Lemma 1.11 and Theorem 1.12]{JR} and underpin the development of the duality theory presented there. We now wish to consider the situation in which a distributive pre-bilattice has universal bounds with respect to its $\leq_t$ and $\leq_k$ orders. We recall a classic result, known as the $90^\circ$~Lemma. The result has its origins in \cite{BK47} (see the comments in \cite[Section 3]{JM} and also \cite[Theorem~3.1]{P00}). \begin{lem} \label{90deg} Let $(L; \lor_t, \land_t, \lor_k,\land_k)$ be an unbounded distributive pre-bilattice. Assume that $(L; \leq_k)$ has a bottom element, $0_k$, and a top element, $1_k$. \begin{newlist} \item[{\rm (i)}] For all $a,b \in L$, \begin{align*} a \vee_k b &= ((a \wedge_t b)\wedge_t 0_k ) \vee_t ((a \vee_t b)\wedge_t 1_k ),\\ a \wedge _k b &= ((a \wedge_t b)\wedge_t 1_k ) \vee_t ((a \vee_t b)\wedge_t 0_k ). \end{align*} \item[{\rm (ii)}] For all $a \in L$, \[ 0_k \land_t 1_k \leq_t a \leq_t 0_k \lor_t 1_k, \] so that $(L,\leq_t)$ also has universal bounds, and in the lattice $(L;\lor_t,\land_t)$, the elements $0_k$ and $1_k$ form a complemented pair. 
\end{newlist} \end{lem} The import of Lemma~\ref{90deg}(i) is that $\lor_k$ and $\land_k$ are term-definable from $\lor_t$ and $\land_t$ and the universal bounds of the $k$-lattice; henceforth when these bounds are included in the type we shall exclude $\lor_k$ and $\land_k$ from it. When we refer to an algebra $\alg{A} = (A;\lor_t,\land_t,\lor_k,\land_k)$ as being an unbounded distributive pre-bilattice we do not exclude the possibility that one, and hence both, of $\alg{A}_k$ and~$\alg{A}_t$ has universal bounds; we are simply saying that bounds are not included in the algebraic language. We say an algebra $(A; \lor_t,\land_t, 0_t,1_t,0_k,1_k)$ is a \defn{distributive pre-bilattice} if $0_t$, $1_t$, $0_k$ and~$1_k$ are nullary operations, and the algebra $(A; \lor_t,\land_t,\lor_k,\land_k)$ belongs to $\unbounded{\DPB}$, where $\lor_k$ and $\land_k$ are defined from $\lor_t$, $\land_t$,~$0_k$ and~$1_k$ as in Lemma~\ref{90deg}(i), and $0_t$, $1_t$ and $0_k$, $1_k$ act as $0$, $1$ in the lattices $\alg{A}_t$ and~$\alg{A}_k$, respectively. We now add a negation operation. If ${\alg{A} = (A;\lor_t,\land_t, \lor_k, \land_k)}$ belongs to $\unbounded{\DPB}$ and carries an involutory unary operation $\neg$ which is interpreted as a dual endomorphism of $(A; \lor_t,\land_t)$ and an endomorphism of $(A; \lor_k,\land_k)$, then we call $(A; \lor_t,\land_t,\lor_k,\land_k,\neg)$ an \defn{unbounded distributive bilattice}. Similarly, an algebra $(A;\lor_t,\land_t, \neg, 0_t,1_t,0_k,1_k)$ is a \defn{distributive bilattice} if the negation-free reduct is a distributive pre-bilattice, and $\neg$ is an involutory dual endomorphism of the bounded $t$-lattice reduct and endomorphism of the bounded $k$-lattice reduct. These conditions include the requirements that~$\neg$ interchanges $0_t$ and~$1_t$ and fixes $0_k$ and $1_k$. For ease of reference we present a list of the varieties we consider in this paper, in the order in which we shall study them. \begin{longnewlist} \item[$\cat{DB}$:] {\bf distributive bilattices}, for which we include in the type $\lor_t$, $\land_t$, $\neg$, $0_t$, $1_t$, $0_k$, $1_k$; \item[$\unbounded{\DB}$:] {\bf unbounded distributive bilattices}, having as basic operations $\lor_t$, $\land_t$, $\lor_k$, $\land_k$, $\neg$; \item[$\cat{DPB}$:] {\bf distributive pre-bilattices}, having as basic operations $\lor_t$, $\land_t$, $0_t$, $1_t$, $0_k$, $1_k$; \item[$\unbounded{\DPB}$:] {\bf unbounded distributive pre-bilattices}, having as basic operations $\lor_t, \land_t$, $\lor_k$, $\land_k$. \end{longnewlist} We shall denote by $\cat D$ the variety of distributive lattices in which universal bounds are included in the type, and by $\unbounded{\CCD}$ the variety of unbounded distributive lattices. For any $\alg{A} \in \cat{DB}$ or $\cat{DPB}$, its bounded truth lattice $\alg{A}_t = (A; \lor_t,\land_t, 0_t,1_t)$ is a $\cat D$-reduct of $\alg{A}$. Likewise the truth lattice $\alg{A}_t = (A;\lor_t,\land_t)$ provides a reduct in $\unbounded{\CCD}$ for any $\alg{A} \in \unbounded{\DB}$ or $\unbounded{\DPB}$. We remark also that each member of $\cat{DB}$ has a reduct in the variety~$\cat{DM}$ of De Morgan algebras, and that each algebra in $\unbounded{\DB}$ has a reduct in the variety of De Morgan lattices; in each case the reduct is obtained by suppressing the knowledge operations. This remark explains the preferential treatment we always give to truth over knowledge when forming reducts.
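As a quick sanity check of this term-definability, consider the four-element pre-bilattice $\mathcal{FOUR}$ recalled below (see Figure~\ref{strings}), in which $0_k=01$ and $1_k=10$. The first formula of Lemma~\ref{90deg}(i) gives
\[
00 \lor_k 11 = \bigl((00\land_t 11)\land_t 0_k\bigr)\lor_t\bigl((00\lor_t 11)\land_t 1_k\bigr)
= (00\land_t 01)\lor_t(11\land_t 10) = 00\lor_t 10 = 10 = 1_k,
\]
in agreement with the knowledge order, in which $00$ and $11$ are incomparable elements with join $1_k$.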
Throughout we shall when required treat any variety as a category, by taking as morphisms all homomorphisms. Given a variety~${\cat A}$ whose algebras have reducts (or more generally term-reducts) in $\cat D$ obtained by deleting certain operations, we shall make use of the associated forgetful functor from ${\cat A}$ into $\cat D$, defined to act as the identity map on morphisms. (We shall later refer to ${\cat A}$ as being \defn{$\cat D$-based}.) Specifically we define a forgetful functor $\fnt U \colon \cat{DB} \to \cat D$, for which $\fnt U(\alg{A})=\alg{A}_t$ for any $\alg{A} \in \cat{DB}$. We also have a functor, again denoted~$\fnt U$ and defined in the same way, from $\cat{DPB}$ to $\cat D$. Likewise there is a functor $\unbounded{\U}$ from $\unbounded{\DB}$ or from $\unbounded{\DPB}$ into $\unbounded{\CCD}$ which sends an algebra to its truth lattice. \begin{figure} [ht] \includegraphics[scale=1, trim=26 670 26 50]{dbilat-figfour.pdf} \caption{The $t$- and $k$-lattice reducts of $\boldsymbol 4$ and $\unbounded{\four}$} \label{strings} \end{figure} We now recall the best-known (pre-)bilattice of all, that known as $\mathcal{FOUR}$. We consider the set $V=\{0,1\}^2$ and, to simplify later notation, shall denote its elements by binary strings. We define lattice orders $\leq_t$ and $\leq_k$ on $V$ as shown in Figure~\ref{strings}; we draw lattices in the manner traditional in lattice theory. (In the literature of bilattices, the four-element pre-bilattice is customarily depicted via an amalgam of the lattice diagrams in Figure~\ref{strings}, with the two orders indicated vertically (for knowledge) and horizontally (for truth); virtually every paper on bilattices contains this figure and we do not reproduce it here.) We may add truth constants $0_t = 00$ and $1_t = 11$ and knowledge constants $0_k = 01$ and $1_k = 10$ to $\mathcal{FOUR}$ to obtain a member of $\cat{DPB}$. The structure $\mathcal{FOUR}$ also supports a negation $\neg$ which switches $11$ and $00$ and fixes $01$ and $10$. The four-element distributive bilattice and its unbounded counterpart play a distinguished role in what follows. Accordingly we define \begin{align*} \boldsymbol 4 & = (\{ 00,11,01,10\}; \lor_t, \land_t, \neg, 0_t,1_t,0_k,1_k) \text{ and} \\ \unbounded{\four} &= (\{00,11,01,10\}; \lor_t, \land_t,\lor_k,\land_k, \neg). \end{align*} These belong, respectively, to $\cat{DB}$ and to $\unbounded{\DB}$. There are two non-isomorphic two-element distributive pre-bilattices without bounds. One, denoted $\twoU^{+}$, has underlying set $\{0,1\}$, and the $t$-lattice structure and the $k$-lattice structure both coincide with that of the two-element lattice $\boldsymbol 2= (\{ 0,1\}; \lor, \land)$ in which $0 < 1$. The other, denoted $\twoU^{-}$, has $\boldsymbol 2$ as its $t$-lattice reduct and the order dual $\boldsymbol 2^\partial$ as its $k$-lattice reduct. If we include bounds, we must have $0_t = 0_k = 0$ and $1_t = 1_k=1$ if $\leq_k$ and $\leq_t$ coincide, and $0_t = 1_k = 0$ and $1_t = 0_k=1$ if $\leq_k$ coincides with $\geq_t$. In neither the bounded nor the unbounded case do we have a two-element algebra which supports an involutory negation which preserves $\land_k$ and $\lor_k$ and interchanges $\lor_t$ and $\land_t$. Hence neither $\unbounded{\DB}$ nor $\cat{DB}$ contains a two-element algebra. Similarly, if either variety contained a three-element algebra, having universe $\{ 0,a,1\}$, with $0 <_t a <_t 1$, then~$\leq_k$ would have to coincide with either $\leq_t$ or $\geq_t$.
The only involutory dual endomorphism of the $t$-reduct of the chain swaps $0$ and $1$ and fixes $a$, and this map is not order-preserving with respect to~$\leq_k$. We conclude that, whether or not bounds are included in the type, there is no non-trivial distributive bilattice of cardinality less than four. Hence, the $90^\circ$ Lemma implies that $\boldsymbol 4$ and $\unbounded{\four}$ are the only four-element algebras in $\cat{DB}$ and $\unbounded{\DB}$, respectively. As noted above, to derive a natural duality for any one of the varieties in which we are interested, we need to express the variety ${\cat A}$ in question as a finitely generated quasivariety. Specifically, we need to find a finite set $\class{M}$ of finite algebras such that ${\cat A} = \ISP(\class{M})$. We shall prove in subsequent sections, with the aid of Proposition~\ref{lem:cong2}, that \begin{alignat*}{2} \cat{DB} &= \ISP(\boldsymbol 4), \qquad \qquad & \cat{DPB} &= \ISP(\boldsymbol 2^+,\boldsymbol 2^-),\\ \unbounded{\DB} &= \ISP(\unbounded{\four}), & \unbounded{\DPB} & = \ISP(\twoU^+,\twoU^-). \end{alignat*} Corresponding results hold for the varieties we consider in \cite{CP2}. Such results are central to our enterprise. All are elementary in that the proofs use a minimum of bilattice theory and none of the algebraic structure theorems for bilattices is needed. (There is a close connection between our assertions above and the identification of the subdirectly irreducible algebras in the varieties concerned. The latter has traditionally been handled by first proving a product representation theorem. We reiterate that we prove our claims directly, by elementary~means.) \section{The natural duality framework}\label{piggyonesorted} As indicated in Section~\ref{sec:intro}, we shall introduce natural duality machinery in the form that is simplest to apply to each of the varieties we consider. We first consider ${\cat A} = \ISP(\alg{M})$, where $\alg{M}$ is a finite algebra with a lattice reduct. We shall aim to define an \defn{alter ego} $\twiddle{\spc{M}}$ for $\alg{M}$ which will serve to generate a category $\CX$ dually equivalent to~${\cat A}$. The alter ego will be a discretely topologised structure $\twiddle{\spc{M}}$ on the same universe $M$ as~$\alg{M}$ and will be equipped with a set $R$ of relations which are \defn{algebraic} in the sense that each member of $R$ is a subalgebra of some finite power $\alg{M}^n$ of~$\alg{M}$. (Later we shall need also to allow for nullary operations, but relations suffice in the simplest cases we consider.) We define~$\CX$ to be the topological quasivariety $\IScP (\twiddle{\spc{M}})$ generated by~$\twiddle{\spc{M}}$, that is, the class of isomorphic copies of closed substructures of non-empty powers of $\twiddle{\spc{M}}$; the empty structure is included. The structure of the alter ego is lifted pointwise in the obvious way. We denote the lifting of $r \in R$ to a member $\spc{X} $ of $ \CX$ by $r^{\spc{X}}$. 
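For completeness we record the coordinatewise description of this lifting: for an $n$-ary relation $r \in R$ and a closed substructure $\spc{X}$ of $\twiddle{\spc{M}}^S$,
\[
(x_1,\dots,x_n)\in r^{\spc{X}} \quad\Longleftrightarrow\quad \bigl(x_1(s),\dots,x_n(s)\bigr)\in r \ \text{ for all } s \in S.
\]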
We then have well-defined contravariant functors $\fnt D \colon {\cat A} \to \CX$ and $\fnt E \colon \CX \to {\cat A}$ defined as follows: \begin{alignat*}{3} &\text{on objects:} & \hspace*{2.5cm} &\fnt D \colon \alg{A} \mapsto {\cat A}(\alg{A},\alg{M}), \hspace*{2.5cm} \phantom{\text{on objects:}}&&\\ &\text{on morphisms:} & & \fnt D \colon x \mapsto - \circ x,&& \\ \shortintertext{where ${\cat A}(\alg{A},\alg{M})$ is seen as a closed substructure of $\twiddle{\spc{M}}^{\alg{A}}$, and} &\text{on objects:} & & \fnt E \colon \spc{X} \mapsto \CX(\spc{X},\twiddle{\spc{M}}), \phantom{\text{on objects:}}&&\\ &\text{on morphisms:} & &\fnt E \colon \phi \mapsto - \circ \phi, \phantom{\text{on morphisms:}}&& \end{alignat*} where $\CX(\spc{X},\twiddle{\spc{M}})$ is seen as a subalgebra of $\alg{M}^{\spc{X}}$. Given $\alg{A} \in {\cat A}$, we shall refer to $\fnt D(\alg{A})$ as the \defn{natural dual} of $\alg{A}$. We have, for each~$\alg{A} \in {\cat A}$, a natural evaluation map $e_{\alg{A}}\colon \alg{A} \to \fnt{ED}(\alg{A})$, given by $e_{\alg{A}}(a)(x) = x(a)$ for $a \in A$ and $x \in \fnt D(\alg{A})$, and likewise there exists an evaluation map $\varepsilon_{\spc{X} }\colon \spc{X} \to \fnt{DE}(\spc{X})$ for $\spc{X} \in \CX$. We say that \defn{$\twiddle{\spc{M}}$ yields a duality on ${\cat A}$} if $e_{\alg{A}}$ is an isomorphism for each $\alg{A} \in {\cat A}$, and that \defn{$\twiddle{\spc{M}}$ yields a full duality on~${\cat A}$} if in addition $\varepsilon_{\spc{X}}$ is an isomorphism for each $\spc{X} \in \CX$. Formally, if we have a full duality then $\fnt D$ and $\fnt E$ set up a dual equivalence between ${\cat A}$ and $\CX$ with the unit and co-unit of the adjunction given by the evaluation maps. All the dualities we shall present in this paper are full and, moreover, in each case we are able to give a precise description of the dual category~$\CX$. Better still, the dualities have the property that they are strong dualities. For the definition of a strong duality and a full discussion of this notion we refer the reader to \cite[Section~3.2]{CD98}. Strongness implies that $\fnt D$ takes injections to surjections and surjections to embeddings, facts which we shall exploit in Section~\ref{Sec:Applications}. Before proceeding we indicate, for the benefit of readers not conversant with natural duality theory, how Priestley duality fits into this framework. We have \allowdisplaybreaks \begin{alignat*}{2} {\cat A} &= \cat D, \qquad && \text{the class of distributive lattices with $0,1$}, \\ \alg{M} &= \boldsymbol 2, \quad &&\text{the two-element chain in $\cat D$}; \\ \CX &= \CP, \quad &&\text{the category of Priestley spaces}, \\ \twiddle{\spc{M}} &= \twiddle 2, \quad && \text{the discretely topologised two-element chain}; \\ R &= \{ \leq\}, \quad && \text{where $\leq $ is the subalgebra $\{ (0,0), (0,1), (1,1)\}$ of $\boldsymbol 2^2$}. \end{alignat*} This duality is strong \cite[Theorem 4.3.2]{CD98}. We later exploit it as a tool when dealing with bilattices having reducts in~$\cat D$ and it is convenient henceforth to denote the hom-functors $\fnt D$ and $\fnt E$ setting it up by~$\fnt{H}$ and $\fnt{K}$. When expedient, we view~$\fnt{KH}(\alg{L})$ as the family of clopen up-sets of~$\alg{L}$, for $\alg{L} \in \cat D$. In accordance with our black-box philosophy we shall present without further preamble the first of the duality theorems we shall use. It addresses both the issue of the existence of an alter ego yielding a duality and that of finding one which is conveniently simple. 
Theorem~\ref{genpigoneM} comes from specialising \cite[Theorem~7.2.1]{CD98} and the fullness assertion from \cite[Theorem~7.1.2]{CD98}. We deal with a quasivariety of algebras ${\cat A}$ generated by an algebra~$\alg{M}$ with a reduct in~$\cat D$ and denote by $\fnt U$ the associated forgetful functor from~${\cat A}$ into~$\cat D$. For $\omega_1,\omega_2 \in \Omega = \cat D(\fnt U (\alg{M}),\boldsymbol 2)$, we let $R_{\omega_1,\omega_2}$ be the collection of maximal ${\cat A}$-subalgebras of sublattices of the form \[(\omega_1,\omega_2 )^{-1}(\leq) = \{\, (a,b) \in \alg{M}^2 \mid \omega_1(a) \leq \omega_2 (b)\,\}. \] \begin{thm} \label{genpigoneM} {\rm(Piggyback Duality Theorem for $\cat D$-based algebras, single generator case)} Let ${\cat A} = \ISP (\alg{M}) $, where $\alg{M} $ is a finite algebra with a reduct in $\cat D$, and ${\Omega=\cat D(\fnt U (\alg{M}),\boldsymbol 2)}$. Let $\twiddle{\spc{M}} = (M ; R, {\mathscr{T}})$ be the topological relational structure on the underlying set~$M$ of $\alg{M}$ in which ${\mathscr{T}} $ is the discrete topology and $R$ is the union of the sets $R_{\omega_1,\omega_2}$ as $\omega_1,\omega_2$ run over $ \Omega$. Then $\twiddle{\spc{M}}$ yields a natural duality on ${\cat A}$. Moreover, if $\alg{M} $ is subdirectly irreducible, has no proper subalgebras and no endomorphisms other than the identity, then $\twiddle{\spc{M}}$ as defined above determines a strong duality. So the functors $\fnt D= {\cat A}(-, \alg{M})$ and $\fnt E = \CX(-, \twiddle{\spc{M}})$ set up a dual equivalence between ${\cat A}= \ISP(\alg{M})$ and $\CX=\IScP(\twiddle{\spc{M}})$. \end{thm} We now turn to the study of algebras which have reducts in $\unbounded{\CCD}$ rather than in~$\cat D$. We consider a class ${\cat A}$ of algebras for which we have a forgetful functor $\unbounded{\U} $ from ${\cat A}$ into~$\unbounded{\CCD}$. The natural duality for~$\unbounded{\CCD}$ will take the place of Priestley duality for $\cat D$. This duality is less well known to those who are not specialists in duality theory, but it is equally simple. We have $\unbounded{\CCD} = \ISP(\twoU)$, where $\twoU= (\{ 0,1\}; \land,\lor)$. The alter ego is $\twiddle 2_{01} =(\{0,1\}; 0, 1, \leq, {\mathscr{T}})$, where $0$ and $1$ are treated as nullary operations. It yields a strong duality between~$\unbounded{\CCD}$ and the category~$\CP_{01} = \IScP(\twiddle 2_{01} )$ of doubly-pointed Priestley spaces (bounded Priestley spaces in the terminology of~\cite[Theorem~4.3.2]{CD98}, where validation of the strong duality can also be found). The duality is set up by well-defined hom-functors $\unbounded{\fnt H} = \unbounded{\CCD}(-,\twoU) $ and $\unbounded{\fnt K} = \CP_{01} (-, \twiddle 2_{01})$. A member $\alg{L}$ of $\unbounded{\CCD}$ is isomorphic to $\unbounded{\fnt K}\unbounded{\fnt H}(\alg{L})$ and may be identified with the lattice of proper non-empty clopen up-sets of the doubly-pointed Priestley space $\unbounded{\fnt H} (\alg{L})$. Most previous applications of the piggybacking theory have been made over~$\cat D$ (see \cite[Section~7.2]{CD98}), or over the variety of unital semilattices. But one can equally well piggyback over~$\unbounded{\CCD}$; see \cite[Theorem~2.5]{DP87} and \cite[Section~3.3 and Subsection~4.3.1]{CD98}. (In \cite{CP2} we extend the scope further: we handle bilattices with conflation by piggybacking over $\cat{DB}$ and $\unbounded{\DB}$.)
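In concrete terms, unpacking the definitions, for $\alg{L}\in\unbounded{\CCD}$ the dual space takes the form
\[
\unbounded{\fnt H}(\alg{L}) = \bigl(\unbounded{\CCD}(\alg{L},\twoU);\ \overline{\boldsymbol 0},\ \overline{\boldsymbol 1},\ \leq,\ {\mathscr{T}}\bigr),
\]
in which the nullary operations of $\twiddle 2_{01}$ lift to the constant maps $\overline{\boldsymbol 0}$ and $\overline{\boldsymbol 1}$; these constant maps are $\unbounded{\CCD}$-homomorphisms, precisely because no bounds have to be preserved, and they serve as the distinguished bottom and top elements of the doubly-pointed Priestley space.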
\begin{thm} \label{genpigoneMu} {\rm(Piggyback Duality Theorem for $\unbounded{\CCD}$-based algebras, single generator case)} Suppose that ${\cat A} = \ISP (\alg{M}) $, where $\alg{M} $ is a finite algebra with a reduct in $\unbounded{\CCD}$ but no reduct in~$\cat D$. Let $\Omega=\unbounded{\CCD}(\unbounded{\U}(\alg{M}),\twoU)$ and $\twiddle{\spc{M}} = (M ; R, {\mathscr{T}})$ be the topological relational structure on the underlying set~$M$ of $\alg{M}$ in which~${\mathscr{T}} $ is the discrete topology and $R$ contains the relations of the following types: \begin{newlist} \item[{\rm (a)}] the members of the sets $R_{\omega_1,\omega_2}$, as $\omega_1,\omega_2$ run over $ \Omega$, where $R_{\omega_1,\omega_2}$ is the set of maximal ${\cat A}$-subalgebras of sublattices of the form \[(\omega_1,\omega_2 )^{-1}(\leq) = \{\, (a,b) \in \alg{M}^2 \mid \omega_1(a) \leq \omega_2 (b)\,\}; \] \item[{\rm (b)}] the members of the sets $ R^i_{\omega}$, as $\omega$ runs over $ \Omega$ and $i\in\{0,1\}$, where $R^i_{\omega}$ is the set of maximal ${\cat A}$-subalgebras of sublattices of the form \[ \omega^{-1}(i) = \{\, a\in \alg{M} \mid \omega(a)=i\,\}. \] \end{newlist} Then $\twiddle{\spc{M}}$ yields a natural duality on ${\cat A}$. Assume moreover that $\alg{M}$ is subdirectly irreducible, that $\alg{M}$ has no non-constant endomorphisms other than the identity on $\alg{M}$ and that the only proper subalgebras of~$\alg{M}$ are one-element subalgebras. Then the duality above can be upgraded to a strong, and hence full, duality by including in the alter ego~$\twiddle{\spc{M}}$ all one-element subalgebras of~$\alg{M}$, regarded as nullary operations. If $\CX=\IScP(\twiddle{\spc{M}})$, where $\twiddle{\spc{M}}$ is upgraded as indicated, then the functors $\unbounded{\fnt D}= {\cat A}(-, \alg{M})$ and~$\unbounded{\fnt E}= \CX(-, \twiddle{\spc{M}})$ yield a dual equivalence between ${\cat A}$ and $\CX$. \end{thm} \begin{proof} Our claims regarding the duality follow from \cite[Section~2]{DP87}. For a discussion of the role played by the nullary operations in yielding a strong duality, we refer the reader to \cite[Section~3.3]{CD98}, noting that our assumptions on $\alg{M}$ ensure that any non-extendable partial endomorphisms would have to have one-element domains. Hence it suffices to include these one-element subalgebras as nullary operations in order to obtain a strong duality. \end{proof} We conclude this section with remarks on the special role of piggyback dualities. For quasivarieties to which either Theorem~\ref{genpigoneM} or Theorem~\ref{genpigoneMu} applies, we could have taken a different approach, based on the NU Strong Duality Theorem \cite[Theorems~2.3.4 and~3.3.8]{CD98}, as it applies to a quasivariety ${\cat A}= \ISP(\alg{M})$, where $\alg{M}$ is a finite algebra with a lattice reduct. This way, the set of piggybacking subalgebras would have been replaced by the set of all subalgebras of $\alg{M}^2$. But this has two disadvantages, one well known, the other revealed by our work in \cite[Section~2]{CPcop}. Firstly, the set of all subalgebras of $\alg{M}^2$ may be unwieldy, even when $\alg{M}$ is small. In part to address this, a theory of entailment has been devised, which allows superfluous relations to be discarded from a duality; see \cite[Section~2.4]{CD98}. The piggybacking method, by contrast, provides alter egos which are much closer to being optimal. 
Secondly, as we reveal in Section~\ref{Sec:DisPiggyDual}, the piggyback relations play a special role in translating natural dualities to ones based on the Priestley dual spaces of the algebras in $\fnt U({\cat A})$ or $\unbounded{\U}({\cat A})$, as appropriate. We shall also see that, even when certain piggyback relations can be discarded from an alter ego without destroying the duality, these relations do make a contribution in the translation process. \section{A natural duality for distributive bilattices} \label{sec:DB} In this section we set up a duality for the variety $\cat{DB}$ and reveal the special role played on the dual side by the knowledge order. \begin{prop} \label{sep-prop-bdd} $\cat{DB} = \ISP(\boldsymbol 4)$. \end{prop} \begin{proof} Let $\alg{A} \in \cat{DB}$. Let $a \ne b $ in $\alg{A}$ and choose $x \in \cat D (\alg{A}_t,\boldsymbol 2)$ such that ${x(a) \ne x(b)}$. Define an equivalence relation~$\theta$ on $\alg{A}$ by $p \, \theta \, q$ if and only if ${x(p) = x(q)}$ and $x(\neg p ) = x(\neg q)$. Clearly~$\theta$ is a congruence of $\alg{A}_t$. By Proposition~\ref{lem:cong2} it is also a congruence of $\alg{A}_k$, and by its definition it preserves~$\neg$. In addition, $A/\theta$ is a non-trivial algebra (since $x(a) \ne x(b)$) and of cardinality at most four. Since the only such algebra in $\cat{DB}$, up to isomorphism, is $\boldsymbol 4$, the image of the associated $\cat{DB}$-homomorphism $h \colon \alg{A} \to \alg{A}/\theta $ is (isomorphic to)~$\boldsymbol 4$, and separates~$a$ and~$b$. \end{proof} It is instructive also to present $h \colon \alg{A} \to \boldsymbol 4$, as above, more directly. We take \[ h(c) =\begin{cases} x(c) (1-x(\neg c)) &\text{if } x(0_k) = 0, \\ (1-x(\neg c))x(c)&\text{if } x(0_k) = 1, \end{cases} \] for all $c$; here we are viewing the image $h(c)$ as a binary string. In the case that $x(0_k) = 0$, observe that $h(0_k) = 01 =0_k^{\boldsymbol 4}$ (note that $\neg 0_k = 0_k$). Since $x(0_k)\wedge x(1_k)=x(0_k \land _t 1_k) = x(0_t) =0$ and $x(0_k)\vee x(1_k)=x(0_k \lor _t 1_k) = x(1_t) =1$, we have $x(1_k) =x(\neg 1_k) = 1$ and $h(1_k)=10=1_k^{\boldsymbol 4}$. It is routine to check that $h$ preserves $\lor_t$, $\land_t$ and~$\neg$. Hence~$h$ is a $\cat{DB}$-morphism and, by construction, $h(a) \ne h(b)$. The argument for the case that $x(0_k) = 1$ is similar. In the following result we make use of the $\cat D$-morphisms from the $t$-lattice reduct of~$\boldsymbol 4$ into $\boldsymbol 2$. These are the maps $\alpha$ and $\beta$ given respectively by $\alpha^{-1}(1) = \{ 10, 11\}$ and ${\beta^{-1}(1) = \{ 01, 11\}}$. Observe that $\alpha$ and $\beta$ correspond to the maps that assign to a binary string its first and second elements, respectively. \begin{thm} \label{DBnatdual} {\rm (Natural duality for distributive bilattices)} There is a dual equivalence between the category $\cat{DB}$ and the category $\CP$ of Priestley spaces set up by hom-functors. Specifically,~let \[ \boldsymbol 4= \bigl( \{ 00, 11,01, 10\}; \lor_t, \land_t, \neg, 0_t, 1_t, 0_k, 1_k\bigr) \] be the four-element bilattice in the variety $\cat{DB}$ of distributive bilattices and let its alter ego be \[ \twiddle 4 = \bigl( \{ 00, 11, 01,10\}; \leq_k, {\mathscr{T}}\bigr). \] Then \[ \cat{DB} = \ISP(\boldsymbol 4) \quad \text{and} \quad \CP = \IScP (\twiddle 4) \] and the hom-functors $\fnt D = \cat{DB}(-, \boldsymbol 4)$ and $\fnt E = \CP(-, \twiddle 4)$ set up a dual equivalence between $\cat{DB}$ and $\CP$. Moreover, this duality is strong.
\end{thm} \begin{proof} The proof involves three steps. \noindent{\bf Step 1: setting up the piggyback duality.}\newline We must identify the subalgebras of $\boldsymbol 4^2$ involved in the piggyback duality supplied by Theorem~\ref{genpigoneM} when ${\cat A} = \cat{DB}$ and $\alg{M} = \boldsymbol 4$. Define $\alpha$ and $\beta$ as above. We claim that the knowledge order $\leq_k$ is the unique maximal $\cat{DB}$-subalge\-bra of $(\alpha,\alpha)^{-1}(\leq)$. We first observe that it is immediate from order properties of lattices that $\leq_k$ is a sublattice for the $k$-lattice structure. It also contains the elements $01 \, 01$ and $10\, 10$. By the $90^\circ$ Lemma (with~$k$ and $t$ switched), $\leq_k$ is also closed under $\land_t$ and $\lor_t$ (or this can be easily checked directly). Since~$\neg$ preserves $\leq_k$, we conclude that $\leq_k$ is a subalgebra of $\boldsymbol 4^2$. Now note that, for $a = a_1a_2$ and $b = b_1b_2$ binary strings in $\boldsymbol 4$, we have $\alpha(a) \leq \alpha(b)$ if and only if $a_1\leq b_1$ and that $\alpha (\neg a) \leq \alpha(\neg b)$ if and only if $1-a_2 \leq 1-b_2$, that is, if and only if $b_2 \leq a_2$. It follows that if $(a,b)$ belongs to a $\cat{DB}$-subalgebra of $(\alpha,\alpha)^{-1}(\leq)$ then $(a,b)$ belongs to the relation $\leq_k$. Since we have already proved that $\leq_k$ is a $\cat{DB}$-subalgebra of $(\alpha,\alpha)^{-1}(\leq)$ we deduce that $\leq_k$ is the unique maximal subalgebra contained in this sublattice. Likewise, the unique maximal $\cat{DB}$-subalgebra of $(\beta,\beta)^{-1}(\leq)$ is $\geq_k$. We claim that no subalgebra of $\boldsymbol 4^2$ is contained in $(\alpha,\beta)^{-1}(\leq)$. To see this we observe that $\alpha(1_k) = \alpha(10)= 1 \nleqslant 0 = \beta(10) = \beta(1_k)$. Likewise, consideration of $0_k$ shows that there is no $\cat{DB}$-subalgebra contained in $(\beta,\alpha)^{-1}(\leq)$. Following the Piggyback Duality Theorem slavishly, we should include both $\leq_k$ and~$\geq_k$ in our alter ego. But it is never necessary to include a binary relation and also its converse in an alter ego, so $\leq_k$ suffices. \noindent{\bf Step 2: describing the dual category.}\newline To prove that $\IScP(\twiddle 4)$ is the category of Priestley spaces it suffices to note that $\twiddle 2 \in \ope{I}\Su_c (\twiddle 4)$ and that $\twiddle 4 \in \ope{IP}(\twiddle 2)$. It follows that $\IScP (\twiddle 2) \subseteq \IScP(\twiddle 4)$ and $\IScP(\twiddle 4) \subseteq \IScP(\twiddle 2)$. \noindent{\bf Step 3: confirming the duality is strong.}\newline We verify that the sufficient conditions given in Theorem~\ref{genpigoneM} for the duality to be strong are satisfied by $\alg{M} = \boldsymbol 4$. We proved in Section~\ref{sec:DBilat} that there is no non-trivial algebra in $\cat{DB}$ of cardinality less than four. Hence $\boldsymbol 4$ has no non-trivial quotients and no proper subalgebras. This implies, too, that $\boldsymbol 4$ is subdirectly irreducible. Since every element of $\boldsymbol 4$ is the interpretation of a nullary operation, the only endomorphism of~$\boldsymbol 4$ is the identity. \end{proof} We might wonder whether there are alternative choices for the structure of the alter ego $\twiddle 4$ of $\boldsymbol 4$. We now demonstrate that, within the realm of binary algebraic relations at least, there is no alternative: it is inevitable that the alter ego contains the relation~$\leq_k$ (or its converse).
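For the reader's convenience, and as a concrete check, we list $\leq_k$ explicitly (an easy calculation from the orders in Figure~\ref{strings}):
\[
{\leq_k} = \{(01,01),(01,00),(01,11),(01,10),(00,00),(00,10),(11,11),(11,10),(10,10)\}.
\]
This nine-element set contains the diagonal, and in particular the pairs interpreting the nullary operations of $\cat{DB}$, and is closed under $\neg$ since $\neg$ is an endomorphism of the $k$-lattice reduct.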
\begin{prop} \label{DBsub} The subalgebras of $\boldsymbol 4^2$ are $ \boldsymbol 4^2$, $ \Delta_{\boldsymbol 4^2}$, $\leq_k $ and $\geq_k$. Here $\Delta_{\boldsymbol 4^2} $ denotes the diagonal subalgebra $\{ \, (a,a) \mid a \in \boldsymbol 4\,\}$. \end{prop} \begin{proof} We merely outline the proof, which is routine, but tedious. Assume we have a proper subalgebra $r$ of $\boldsymbol 4^2$, necessarily containing $\Delta_{\boldsymbol 4^2}$ (since all the elements of $\boldsymbol 4$ are constants in the language of $\cat{DB}$), and assume that $r$ is neither $\Delta_{\boldsymbol 4^2}$ nor~$\leq_k$. We must then check that~$r$ has to be $\geq_k$. The proof relies on two facts: (i) an element belongs to $r$ if and only if its negation does, and (ii) if $a = b \star c$, where $\star \in \{ \lor_t,\land_t,\lor_k,\land_k\}$ and $c \in r$, then $a \notin r$ implies $b \notin r$. \end{proof} The proposition allows us, if we prefer, to arrive at Theorem~\ref{DBnatdual} without recourse to the piggyback method. As noted at the end of Section~\ref{piggyonesorted}, it is possible to obtain a duality for a finitely generated lattice-based quasivariety ${\cat A}=\ISP(\alg{M})$ by including in the alter ego all subalgebras of~$\alg{M}^2$. Applying this to $\cat{DB}=\ISP(\boldsymbol 4)$, we obtain a duality by equipping the alter ego with the four relations listed in Proposition~\ref{DBsub}. The subalgebras $\boldsymbol 4^2$ and $\Delta_{\boldsymbol 4^2}$ qualify as `trivial relations' and can be discarded and we need only one of $\leq_k$ and $\geq_k$; see \cite[Subsection~2.4.3]{CD98}. Therefore the piggyback duality we presented earlier is essentially the only natural duality based on binary algebraic relations. (To have included relations of higher arity instead would have been possible, but would have produced a duality which is essentially the same, but artificially complicated.) We remark that the situation for $\cat{DB}$ is atypical, thanks to the very rich algebraic structure of $\boldsymbol 4$. \section{A natural duality for unbounded distributive bilattices} \label{sec:DBU} We now focus on the variety $\unbounded{\DB}$, to which we shall apply Theorem~\ref{genpigoneMu}. We first need to represent $\unbounded{\DB}$ as a finitely generated quasivariety. \begin{prop} \label{sep-prop-unbdd} $\unbounded{\DB} = \ISP(\unbounded{\four})$. \end{prop} \begin{proof} We take $\alg{A} \in \unbounded{\DB}$ and $a \ne b $ in $\alg{A}$ and use the Prime Ideal Theorem for unbounded distributive lattices to find $x \in \unbounded{\CCD}(\alg{A}_t,\twoU)$ with $x(a) \ne x(b)$. We may then argue exactly as we did in the proof of Proposition~\ref{sep-prop-bdd}, but now using the fact that $\unbounded{\four}$ is, up to isomorphism, the only non-trivial algebra in $\unbounded{\DB}$ of cardinality at most four. \end{proof} We are ready to embark on setting up a piggyback duality for $\unbounded{\DB}$. We find the piggybacking relations by drawing on the description of $\Su (\boldsymbol 4^2)$ given in Proposition~\ref{DBsub} to describe $\Su(\unbounded{\four}^2)$. As a byproduct, we shall see that among dualities whose alter egos contain relations which are at most binary, the knowledge order plays a distinguished role, just as it does in the duality for~$\cat{DB}$. Below, to simplify the notation, the elements of~$\boldsymbol 4^2$ are written as pairs of binary strings. For example, $01\, 11$ is our shorthand for $(01,11)$.
\begin{prop} \label{fourUsubs} The subalgebras of $\unbounded{\four}^2$ are of two types: \begin{newlist} \item[{\rm (a)}] the subalgebras of $\boldsymbol 4^2$, as identified in Proposition~{\upshape\ref{DBsub}}; \item[{\rm (b)}] decomposable subalgebras, in which each factor is $\{01\}$, $\{10\}$ or $\unbounded{\four}$. \end{newlist} \end{prop} \begin{proof} The subalgebras of $\unbounded{\four}$ are $\{ 01\}$, $\{10\}$ and $\unbounded{\four}$. Any indecomposable subalgebra of $\unbounded{\four}^2$ must then be such that the projection maps onto each coordinate have image $\unbounded{\four}$. We claim that any indecomposable $\unbounded{\DB}$-subalgebra~$r$ of $\unbounded{\four}^2$ is a $\cat{DB}$-subalgebra of $\boldsymbol 4^2$. Suppose that $r\ne \Delta_{\unbounded{\four}^2}$, the diagonal subalgebra of $\unbounded{\four}^2$, and~$r$ is indecomposable. Then~$r$ would contain elements $a\, 01$ and $a'\, 10$ for some $a, a'\in\unbounded{\four}$. If $a=01$ and $a'=10$, then $11\, 11$ and $00\, 00$ are in $r$ and hence~$r$ is a subalgebra of $\boldsymbol 4^2$. If $a\ne 01$, then also $(a \lor_k \neg a) \, 01 \in r$, and each of the possibilities $a = 00, 11, 10$ yields $10\, 01 \in r$; likewise $01\, 10 \in r$. Then, considering $\lor_t$ and $\land_t$, we get that $11\, 11$ and $00\, 00$ are in $r$. But this implies $01\, 01\in r$, by considering~$\land_k$. Similarly $10\, 10 \in r$. The case $a' \ne 10$ follows by the same argument. \end{proof} Figure~\ref{fig:DBUsubs} shows the lattice of subalgebras of $\unbounded{\four}^2$. In the figure the indecomposable subalgebras are unshaded and the decomposable ones are shaded. \begin{figure} [ht] \includegraphics[scale=1, trim=0 650 22 50]{dbilat-figDBU.pdf} \caption{The subalgebras of $\unbounded{\four}^2$} \label{fig:DBUsubs} \end{figure} To list the piggybacking relations for $\unbounded{\DB}$ we first need to establish some notation. For $\omega, \omega_1,\omega_2\in\unbounded{\fnt H}\unbounded{\U}(\unbounded{\four})$ and $i\in\{0,1\}$, let $R_{\omega_1,\omega_2}$ and $R_{\omega}^i$ be as defined in Theorem~\ref{genpigoneMu}. We write $r_{\omega_1,\omega_2}$, respectively $r_{\omega}^i$, for the unique element of $R_{\omega_1,\omega_2}$, respectively $R_{\omega}^i$, whenever this set is a singleton. The set $\unbounded{\fnt H}\unbounded{\U}(\unbounded{\four})$ contains four elements: the maps $\alpha$ and $\beta$ defined earlier, and the constant maps onto $0$ and $1$, which we shall denote by $\overline{\boldsymbol 0}$ and~$\overline{\boldsymbol 1}$, respectively. The following result is an easy consequence of Proposition~\ref{fourUsubs}. \begin{prop} \label{DBUpigsub} Consider $\alg{M}= \unbounded{\four}$.
Then \begin{newlist} \item[{\rm (i)}] for the cases in which $R_{\omega_1,\omega_2}$ is a singleton, \begin{newlist} \item[{\rm (a)}] $r_{\alpha,\alpha} $ is $\leq_k$ and $r_{\beta,\beta} $ is $\geq_k$, \item[{\rm (b)}] $r_{\omega_1,\omega_2} = \alg{M}^2$ whenever $\omega_1 = \overline{\boldsymbol 0}$ or $\omega_2 = \overline{\boldsymbol 1}$, \item[{\rm (c)}] $r_{\alpha,\overline{\boldsymbol 0}} = \{01\} \times \alg{M}$, $r_{\beta,\overline{\boldsymbol 0}} = \{10\} \times \alg{M}$, $r_{\overline{\boldsymbol 1},\alpha} = \alg{M} \times \{10\}$, and ${r_{\overline{\boldsymbol 1}, \beta} = \alg{M} \times \{ 01\}}$; \end{newlist} \item[{\rm (ii)}] for the cases in which $R_{\omega_1,\omega_2}$ is not a singleton, \begin{newlist} \item[{\rm (a)}] $R_{\alpha,\beta} = \bigl\{\{01\} \times \alg{M},\ \alg{M} \times \{01\}\bigr\}$, \item[{\rm (b)}] $R_{\beta,\alpha} = \bigl\{ \{10\}\times\alg{M},\ \alg{M}\times\{10\}\bigr\}$, \item[{\rm (c)}] $R_{\overline{\boldsymbol 1},\overline{\boldsymbol 0}}=\emptyset$; \end{newlist} \item[{\rm (iii)}] \begin{newlist} \item[{\rm (a)}] $r_\alpha^0 = r_\beta^1 = \{ 01\}$ and $r_\alpha^1= r_\beta^0 =\{10\}$, \item[{\rm (b)}] $r_{\overline{\boldsymbol 0}}^0 = r_{\overline{\boldsymbol 1}}^{1} = \alg{M}$ and $R_{\overline{\boldsymbol 0}}^1= R_{\overline{\boldsymbol 1}}^0 = \emptyset$. \end{newlist} \end{newlist} \end{prop} Below, when we describe the connections between the natural and Priestley-style dualities for $\unbounded{\DB}$, we shall see that the subalgebras listed in Proposition~\ref{DBUpigsub} are exactly the relations we would expect to appear. We now present our duality theorem for $\unbounded{\DB}$. \begin{thm} \label{DBUnatdual} {\rm (Natural duality for unbounded distributive bilattices)} There is a strong, and hence full, duality between the category $\unbounded{\DB}$ and the category $\CP_{01}$ of doubly-pointed Priestley spaces set up by hom-functors. Specifically, let \[ \unbounded{\four}= \bigl( \{ 00, 11, 01, 10\}; \lor_t, \land_t, \lor_k, \land_k, \neg\bigr) \] be the four-element bilattice in the variety $\unbounded{\DB}$ of distributive bilattices without bounds and let its alter ego be \[ \twiddle{\unbounded{4}} = \bigl( \{ 00, 11, 01, 10\}; 01, 10, \leq_k, {\mathscr{T}}\bigr), \] where the elements $01$ and $10$ are treated as nullary operations. Then \[ \unbounded{\DB} = \ISP(\unbounded{\four}) \quad \text{and} \quad \CP_{01} = \IScP (\twiddle{\unbounded{4}}) \] and the hom-functors $\fnt D = \unbounded{\DB}(-, \unbounded{\four})$ and $\fnt E = \CP_{01}(-, \twiddle{\unbounded{4}})$ set up the required dual equivalence between $\unbounded{\DB}$ and $\CP_{01}$. \end{thm} \begin{proof} Here we have included in the alter ego fewer relations than the full set of piggybacking relations as listed in Proposition~\ref{DBUpigsub} and we need to ensure that our restricted list suffices. To accomplish this we use simple facts about entailment as set out in \cite[Subsection~2.4.3]{CD98}. We have included as nullary operations both $01$ and $10$ and these entail the two one-element subalgebras $\{01\}$ and $\{10\}$ of $\unbounded{\four}$. It then follows from Theorem~\ref{genpigoneMu} and Proposition~\ref{DBUpigsub} that~$\twiddle{\unbounded{4}}$ yields a duality on $\unbounded{\DB}$ (see \cite[Section~2.4]{CD98}). We now invoke the $\twiddle{\spc{M}}$-Shift Strong Duality Lemma \cite[3.2.3]{CD98} to confirm that changing the alter ego by removing entailed relations does not result in a duality which fails to be strong.
Finally, we note that $\twiddle{\unbounded{4}}$ is a doubly-pointed Priestley space and hence a member of $\CP_{01}$. In the other direction, $\twiddle 2_{01}$ is isomorphic to a closed substructure of $\twiddle{\unbounded{4}}$ and so belongs to $\IScP(\twiddle{\unbounded{4}})$. Hence the dual category for the natural duality is indeed the category of doubly-pointed Priestley spaces. \end{proof} \section{How to dismount from a piggyback ride}\label{Sec:DisPiggyDual} The piggyback method, applied to a class ${\cat A}=\ISP(\alg{M})$ of $\cat D$-based algebras, supplies an alter ego $\twiddle{\spc{M}}$ yielding a natural duality for ${\cat A}$, as described in Section~\ref{piggyonesorted}. The relational structure of $\twiddle{\spc{M}}$ is constructed by bringing together ~$\twiddle 2$ (the alter ego for Priestley duality for $\ISP(\boldsymbol 2)$) and $\fnt{HU}(\alg{M})$ (the Priestley dual space of the distributive lattice reduct of the generating algebra of ${\cat A}$). This characteristic of the piggyback method has a significant consequence: it allows us, in a systematic way, to recover the Priestley dual spaces $\fnt{HU}(\alg{A})$ of the $\cat D$-reducts of the algebras $\alg{A}\in{\cat A}$. The procedure for doing this played a central role in \cite{CPcop}, where it was used to study coproducts in quasivarieties of $\cat D$-based algebras. Below, in Theorem~\ref{Theo:RevEng}, we shall strengthen Theorem~2.3 of \cite{CPcop} by proving that the construction given there is functorial and is naturally equivalent to $\fnt{HU}$. Traditionally, dualities for $\cat D$-based (quasi)varieties have taken two forms: natural dualities, almost always for classes ${\cat A}$ which are finitely generated, and dualities which we dubbed ${\cat D\text{-}\CP\text{-based}}$ dualities in \cite[Section~2]{CPcop}. In the latter, at the object level, the Priestley spaces of the $\cat D$-reducts of members of ${\cat A}$ are equipped with additional structure so that the operations of each algebra $\alg{A}$ in ${\cat A}$ may be captured on $\fnt{KHU}(\alg{A})$ (an isomorphic copy of $\fnt{U}(\alg{A}))$ from the structure imposed on the Priestley space $\fnt{HU}(\alg{A})$. Now assume that ${\cat A} = \ISP(\alg{M})$, where $\alg{M}$ is finite, so that a rival, natural, duality can be obtained by the piggyback method. Reconciliations of the two approaches appear rather rarely in the literature; we can however draw attention to \cite[Section~3]{DP87} and the remarks in \cite[Section~7.4]{CD98}. There are two ways one might go in order to effect a reconciliation. Firstly, we could use the fact that an algebra $\alg{A}$ in ${\cat A}$ determines and is determined by its natural dual $\fnt D(\alg{A})$ and that $\fnt U(\alg{A})$ determines and is determined by $\fnt{HU}(\alg{A})$. Given that, as we have indicated, we can determine $\fnt{HU}(\alg{A})$ from $\fnt D(\alg{A})$, we could try to capitalise on this to discover how to enrich the Priestley spaces $\fnt{HU}(\alg{A})$ to recapture the algebraic information lost in passage to the reducts. But this misses a key point about duality theory. The reason Priestley duality is such a useful tool is that it allows us concretely and in a functorial way to represent distributive lattices in terms of Priestley spaces. Up to categorical isomorphism, it is immaterial how the dual spaces are actually constructed. An alternative strategy now suggests itself for obtaining a duality for~${\cat A}$ based on enriched Priestley spaces. 
What we shall do in this section is to work with a version of Priestley duality based on structures directly derived from the natural duals $\fnt D(\alg{A})$ of the algebras $\alg{A}$, rather than one based on traditional Priestley duality applied to the class $\fnt{U}({\cat A})$. This shift of viewpoint allows us to tap into the information encoded in the natural duality in a rather transparent way. We can hope thereby to arrive at a `Priestley-style' duality for ${\cat A}=\ISP(\alg{M})$. We shall demonstrate how this can be carried out in cases where the operations suppressed by the forgetful functor interact in a particularly well-behaved way with the operations which are retained. At the end of the section we also record how the strategy extends to $\unbounded{\CCD}$-based algebras. In summary, we propose to base Priestley-style dualities on dual categories more closely linked to natural dualities rather than, as in the literature, seeking to enrich Priestley duality per se. The two approaches are essentially equivalent, but ours has several benefits. By staying close to a natural duality we are well placed to profit from the good categorical properties such a duality possesses. Moreover morphisms are treated alongside objects. Also, setting up a piggyback duality is an algorithmic process in a way that formulating a Priestley-style duality ab initio is not. Although we restrict attention in this paper to the special types of operation present in bilattice varieties, and these could be handled by more traditional means, we note that our analysis has the potential to be adapted to other situations. We now recall the construction of \cite[Section~2]{CPcop} as it applies to the particular case of the piggyback theorem for the bounded case as stated in Theorem~\ref{genpigoneM}. Assume that $\alg{M}$ and $R$ are as in that theorem. For a fixed algebra $\alg{A} \in \ISP(\alg{M})$, we define $ Y_{\alg{A}}= \fnt D(\alg{A})\times\Omega,$ where $\Omega=\cat D(\fnt U (\alg{M}),\boldsymbol 2)$, and equip it with the topology ${\mathscr{T}}_{Y_{\alg{A}}}$ having as a base of open sets \[ {\mathscr{T}}_{Y_{\alg{A}}}= \{\,U\times V\mid U\mbox{ open in } \fnt D(\alg{A})\mbox{ and }V\subseteq\Omega\,\} \] and with the binary relation $\preccurlyeq\, \,\subseteq Y_{\alg{A}}^2$ defined by \[ (x,\omega_1)\preccurlyeq(y,\omega_2)\mbox{ if }(x,y)\in r^{\fnt D(\alg{A})} \mbox{ for some }r\in R_{\omega_1,\omega_2}. \] In \cite[Theorem~2.3]{CPcop}, we proved that the binary relation $\preccurlyeq$ is a pre-order on $Y_{\alg{A}}$. Moreover, if $\approx\,=\,\preccurlyeq\cap\succcurlyeq$ denotes the equivalence relation on $Y_{\alg{A}}$ determined by $\preccurlyeq$ and ${\mathscr{T}}_{Y_{\alg{A}}}/_{\approx}$ is the quotient topology, then $ (\, Y_{\alg{A}}/_{\approx};{\preccurlyeq}/_{\approx},{\mathscr{T}}_{Y_{\alg{A}}}/_{\approx}) $ is a Priestley space isomorphic to $\fnt{HU}(\alg{A})$. This isomorphism is determined by the map $\Phi_{\alg{A}}$ given by $\Phi_{\alg{A}}([(x,\omega)]_{\approx})=\omega\circ x$. \begin{thm}\label{Theo:RevEng} Let ${\cat A} = \ISP(\alg{M})$, where $\alg{M}$ is a finite algebra with a reduct in $\cat D$.
Then there exists a well-defined contravariant functor $\fnt{L}\colon{\cat A}\to\CP$ given~by \begin{alignat*}{3} &\text{on objects:} & \hspace*{1.1cm} & \alg{A} \longmapsto\ \fnt{L}(\alg{A}) = (\, Y_{\alg{A}}/_{\approx};{\preccurlyeq}/_{\approx},{\mathscr{T}}_{Y_{\alg{A}}}/_{\approx}), \hspace*{1.1cm} \phantom{\text{on objects:}}&&\\ &\text{on morphisms:} & & \, \,h \longmapsto\ \fnt{L}(h) \colon [(x,\omega)]_{\approx} \mapsto [(\fnt D(h)(x),\omega)]_{\approx}. && \end{alignat*} Moreover, $\Phi$, defined on each $\alg{A}$ by $\Phi_{\alg{A}} \colon [(x,\omega)]_{\approx} \mapsto \omega\circ x$, determines a natural isomorphism between $\fnt{L}$ and $\fnt{HU}$. \end{thm} \begin{proof} We have already noted that $\fnt{L} ( \alg{A})\in\CP $. We confirm that $\fnt{L}$ is a functor. Let $h\colon\alg{A}\to \alg{B}$ and $(x,\omega),(y,\omega')\in Y_{\alg{B}}$ be such that $(x,\omega) \preccurlyeq (y,\omega')$. Then there exists $r\in R_{\omega,\omega'}$ with $(x,y)\in r^{\fnt D(\alg{B})}$. Hence ${(\fnt D(h)(x),\fnt D(h)(y))\in r^{\fnt D(\alg{A})}}$ and $(\fnt D(h)(x),\omega)\preccurlyeq (\fnt D(h)(y),\omega')$. Thus $\fnt{L}(h)$ is well defined and order-preserv\-ing. Since $\fnt D(h)$ is continuous and $Y_{\alg{A}}/_{\approx}$ carries the quotient topology, and since $\fnt{L}(h)^{-1}(U\times V)=\fnt D(h)^{-1}(U)\times V$, the map $\fnt{L}(h)$ is also continuous. Theorem~3.1(c) in \cite{CPcop} proves that $\Phi_{\alg{A}}\colon \fnt{L}(\alg{A})\to \fnt{HU}(\alg{A})$ is an isomorphism of Priestley spaces. We prove that $\Phi$ is natural in ${\cat A}$. Let $\alg{A},\alg{B}\in{\cat A}$, $x\in\fnt D(\alg{B})$, $h\in{\cat A}(\alg{A},\alg{B})$ and $\omega\in\Omega$. Then \begin{multline*} \Phi_{\alg{A}}(\fnt{L}(h)([(x,\omega)]_{\approx}))=\Phi_{\alg{A}}([(\fnt{D}(h)(x),\omega)]_{\approx})=\Phi_{\alg{A}}([(x\circ h,\omega)])\\ =\omega\circ x\circ h=\fnt{H}(h)(\omega\circ x)=\fnt{HU}(h)(\omega\circ x) =\fnt{HU}(h)(\Phi_{\alg{B}}([(x,\omega)]_{\approx})). \end{multline*} Therefore $\Phi$ is a natural isomorphism between the functors $\fnt{L}$ and~$\fnt{HU}$. \end{proof} We take as before a $\cat D$-based quasivariety ${\cat A}=\ISP(\alg{M})$, with forgetful functor $\fnt U \colon {\cat A} \to \cat D$, for which we have set up a piggyback duality. Theorem~\ref{Theo:RevEng} tells us how, given an algebra $\alg{A}\in{\cat A}$, to obtain from the natural dual $\fnt D(\alg{A})$ a Priestley space $Y_{\alg{A}}/_{\approx}$ serving as the dual space of $\fnt{U}(\alg{A})$. But it does not yet tell us how to capture on $Y_{\alg{A}}/_{\approx}$ the algebraic operations not present in the reducts. However it should be borne in mind that the maps $\omega$ in $\Omega=\fnt{HU}(\alg{M})$ are an integral part of the natural duality construction and it is therefore unsurprising that these maps will play a direct role in the translation to a Priestley-style duality, if we can achieve this. We consider in turn operations of each of the types present in the bilattice context. Assume first that $f$ is a unary operation occurring in the type of algebras in ${\cat A}$ which interprets as a $\cat D$-endomorphism on each $\alg{A} \in {\cat A}$. Then $\fnt{H}(f^{\alg{A}})\colon\fnt{HU}(\alg{A})\to\fnt{HU}(\alg{A})$ is a continuous order-preserving map, given by $\fnt{H}(f^{\alg{A}})(x)=x\circ f^{\alg{A}}$, for each $x\in \fnt{HU}(\alg{A})$. 
Conversely,~$f^{\alg{A}}$ can be recovered from $\fnt{H}(f^{\alg{A}})$ by setting $f^{\alg{A}}(a)$ for each $a\in\alg{A}$ to be the unique element of $\alg{A}$ for which $x(f^{\alg{A}}(a))=\bigl(\fnt{H}(f^{\alg{A}})(x)\bigr)(a)$ for each $x\in\fnt{HU}(\alg{A})$. Denote $\fnt{H}(f^{\alg{A}})$ by~$\widehat{f^{\alg{A}}}$. Then for each $\alg{A}\in{\cat A}$ the operation $f^{\alg{A}}$ is determined by $f^{\alg{M}}$. Dually, $\widehat{f^{\alg{M}}}$ should encode enough information to enable us, with the aid of Theorem~\ref{Theo:RevEng}, to recover $\widehat{f^{\alg{A}}}$. Define a map $f_{Y_{\alg{A}}} \colon Y_{\alg{A}} \to Y_{\alg{A}}$ by ${f_{Y_\alg{A}}(x,\omega) = (x,\omega \circ f^{\alg{M}})}$, for $x \in \fnt D(\alg{A})$ and $\omega \in \Omega$; here $Y_{\alg{A}}=\fnt{D}(\alg{A})\times \Omega$, as in Theorem~\ref{Theo:RevEng}. By definition of $(Y_{\alg{A}}; \preccurlyeq,{\mathscr{T}}_{Y_{\alg{A}}})$, the map $f_{Y_\alg{A}}$ is continuous. By Theorem~\ref{Theo:RevEng}, for every $x,x' \in \fnt D(\alg{A})$ and $\omega,\omega' \in \Omega$,
\begin{align*}
(x,\omega)\approx (x',\omega')&\Longleftrightarrow \omega\circ x=\omega'\circ x' \\
&\Longrightarrow \omega\circ f^{\alg{M}}\circ x= \omega\circ x\circ f^{\alg{A}}=\omega'\circ x'\circ f^{\alg{A}}=\omega'\circ f^{\alg{M}}\circ x' \\
&\Longleftrightarrow f_{Y_\alg{A}}(x,\omega)\approx f_{Y_\alg{A}}(x',\omega').
\end{align*}
Then the map $\bar{f}_{\alg{A}}\colon Y_{\alg{A}}/_{\approx}\to Y_{\alg{A}}/_{\approx}$ determined by $\bar{f}_{\alg{A}}([(x,\omega)]_{\approx})=[f_{Y_{\alg{A}}}(x,\omega)]_{\approx}$ is well defined and continuous. For each $(x,\omega) \in Y_{\alg{A}}$ and $a\in\alg{A}$ we have
\begin{multline*}
\widehat{f^{\alg{A}}} (\Phi_{\alg{A}}([(x,\omega)]_{\approx})) (a)=\omega\circ x(f^{\alg{A}}(a)) = \omega (f^{\alg{M}}(x(a)) ) = (\omega \circ f^{\alg{M}})(x(a)) \\
=\Phi_{\alg{A}} ([(x,\omega \circ f^{\alg{M}})])(a) = \Phi_{\alg{A}}(\bar{f}_{\alg{A}}([(x,\omega)]))(a).
\end{multline*}
We have proved that $\widehat{f^{\alg{A}}}\circ\Phi_{\alg{A}}=\Phi_{\alg{A}}\circ \bar{f}_{\alg{A}}$. We now consider a unary operation $h$ which interprets as a dual $\cat D$-endo\-morph\-ism on each~$\fnt U(\alg{A})$. As above, $\fnt{H}(h^{\alg{A}})\colon\fnt{HU}(\alg{A})\to\fnt{HU}(\alg{A}^{\partial})$ is a continuous order-preserving map. Using the fact that the assignment $x\mapsto \cnst{1}-x$ defines an isomorphism between the Priestley spaces $\fnt{HU}(\alg{A})^{\partial}$ and $\fnt{HU}(\alg{A}^{\partial})$, it is possible to define a map $\widehat{h^{\alg{A}}}\colon\fnt{HU}(\alg{A})\to\fnt{HU}(\alg{A})$ by $\widehat{h^{\alg{A}}}(x) =\cnst{1}-\fnt{H}(h^{\alg{A}})(x)=\cnst{1}-(x\circ h^{\alg{A}})$. Then $\widehat{h^{\alg{A}}}$ is continuous and order-reversing. Conversely, $h^{\alg{A}}$ is obtained from $\widehat{h^{\alg{A}}}$ by setting $h^{\alg{A}}(a)$ to be the unique element of $\alg{A}$ that satisfies $x(h^{\alg{A}}(a))=(\cnst{1}-(\widehat{h^{\alg{A}}}(x)))(a)$ for each $x\in\fnt{HU}(\alg{A})$. In the same way as before, we define a map $h_{Y_\alg{A}} \colon Y_{\alg{A}} \to Y_{\alg{A}}$ given by $h_{Y_\alg{A}}(x,\omega) = (x,\cnst{1}-\omega \circ h^{\alg{M}})$. Again we have an associated continuous (now order-reversing) map on $(Y_{\alg{A}}; \preccurlyeq,{\mathscr{T}}_{Y_{\alg{A}}})$ given by
\[\bar{h}_{\alg{A}}([(x,\omega)]_{\approx}) = [h_{Y_\alg{A}}(x,\omega) ]_{\approx} = [(x,\cnst{1}-\omega \circ h^{\alg{M}})]_{\approx}.\]
Furthermore, $\widehat{h^{\alg{A}}}\circ\Phi_{\alg{A}}=\Phi_{\alg{A}}\circ \bar{h}_{\alg{A}}$.
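Before turning to nullary operations, we pause to make the two lifts just defined concrete. The Python fragment below is an informal computational sketch and is not part of the formal development: the element labels for $\boldsymbol 4$, the particular choice of a (dual) endomorphism, and the names given to the two piggybacking maps are ours, fixed only for illustration. It verifies mechanically that both lifts leave the $\fnt D(\alg{A})$-coordinate untouched and act on $\Omega$ alone.
\begin{verbatim}
# Informal sketch (illustration only).  We model M = 4 with hypothetical
# labels 0t, 1t, 0k, 1k and t-order 0t < 0k, 1k < 1t.  OMEGA consists of
# the two {0,1}-valued t-lattice homomorphisms on 4; which of them the
# paper calls alpha is immaterial for the point being made here.
OMEGA = {
    'w1': {'0t': 0, '0k': 0, '1k': 1, '1t': 1},
    'w2': {'0t': 0, '0k': 1, '1k': 0, '1t': 1},
}
NEG  = {'0t': '1t', '1t': '0t', '0k': '0k', '1k': '1k'}  # a dual endomorphism
CONF = {'0t': '0t', '1t': '1t', '0k': '1k', '1k': '0k'}  # a toy endomorphism

def lift_endo(omega, f):
    """The Omega-part of f_Y: omega |-> omega o f."""
    return {m: omega[f[m]] for m in omega}

def lift_dual_endo(omega, h):
    """The Omega-part of h_Y: omega |-> 1 - (omega o h)."""
    return {m: 1 - omega[h[m]] for m in omega}

# Both lifts merely permute OMEGA, here swapping its two elements.
assert lift_dual_endo(OMEGA['w1'], NEG) == OMEGA['w2']
assert lift_dual_endo(OMEGA['w2'], NEG) == OMEGA['w1']
assert lift_endo(OMEGA['w1'], CONF) == OMEGA['w2']
\end{verbatim}
For negation on $\boldsymbol 4$ this swapping of the two piggybacking maps is precisely the behaviour recorded in Section~\ref{sec:prodrep} below.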
Nullary operations are equally simple to handle. Suppose the algebras in ${\cat A}$ contain a nullary operation $\cnst c$ in the type. Then for each $\alg{A}\in{\cat A}$ the constant~$\cnst c^{\alg{A}}$ determines a clopen up-set $\widehat{\cnst c^{\alg{A}}}= \{\,x\in\fnt{HU}(\alg{A})\mid x(\cnst c^{\alg{A}})=1\,\}$ in $\fnt{HU}(\alg{A})$. Conversely, $\cnst c^{\alg{A}}$ is the unique element $a$ of $\alg{A}$ such that $x(a)=1$ if and only if $x\in \widehat{ \cnst c^{\alg{A}}}$. Now let $\cnst c_{Y_\alg{A}}=\fnt D(\alg{A})\times \{\,\omega\in\Omega\mid \omega(\cnst c^{\alg{M}})=1\,\}$. In the same way as above we can move down to the Priestley space level and define \[ \bar{\cnst c}_{\alg{A}} =\{\, [(x,\omega)]_{\approx} \mid (x,\omega) \in \cnst c_{Y_\alg{A}}\,\}=\{\, [(x,\omega)]_{\approx} \mid \omega( \cnst c^{\alg{M}})=1\,\}. \] Then, for each $(x,\omega) \in Y_{\alg{A}}$, we have \begin{multline*} \Phi_{\alg{A}}([(x,\omega)]_{\approx})\in\widehat{\cnst c^{\alg{A}} }\Longleftrightarrow 1=(\omega\circ x) (\cnst c^{\alg{A}})=\omega(\cnst c^{\alg{M}})\\ \Longleftrightarrow (x,\omega)\in\cnst c_{Y_\alg{A}} \Longleftrightarrow [(x,\omega)]_{\approx}\in \bar{\cnst c}_{\alg{A}}. \end{multline*} That is, $\Phi_{\alg{A}}$ and its inverse interchange the sets $\bar{\cnst c}_{\alg{A}}$ and $\widehat{\cnst c^{\alg{A}}}$. The following theorem sums up what we have shown about how enriched Priestley spaces may be obtained which encode the non-lattice operations of an algebra $\alg{A}$ with a reduct $\fnt U(\alg{A})$ in $\cat D$. Following common practice in similar situations, we shall simplify the presentation by assuming that only one operation of each kind is present. To state the theorem we need a definition. Let $\CY$ be the category whose objects are the structures of the form $(\spc{Y}; p,q,S)$, where~$\spc{Y}$ is a Priestley space, $p$ and~$q$ are continuous self-maps on~$\spc{Y}$ which are respectively order-preserving and order-reversing, and $S$ is a distinguished clopen subset of~$\spc{Y}$. The morphisms of~$\CY$ are continuous order-preserving maps that commute with $p$ and $q$, and preserve $S$. \begin{thm} \label{Theo:RevEngOps} Let ${\cat A} =\ISP(\alg{M})$ be a finitely generated quasivariety for which the language is that of $\cat D$ augmented with two unary operation symbols, $f$ and~$h$, and a nullary operation symbol~$c$ such that, for each $\alg{A} \in {\cat A}$, \begin{newlist} \item[{\rm (i)}] $f^{\alg{A}}$ acts as a $\cat D$-endomorphism of $\fnt U(\alg{A})$, and \item[{\rm (ii)}] $h^{\alg{A}}$ acts as a dual $\cat D$-endomorphism of $\fnt U(\alg{A})$. \end{newlist} Then there exist well-defined contravariant functors $\fnt{L}^+$ and $\fnt{HU}^+$ from ${\cat A}$ to~$\CY$ given by \begin{alignat*}{3} &\text{on objects:} & \hspace*{1.72cm} & \fnt{L}^+ \colon \alg{A} \mapsto (\fnt{L}(\alg{A}); \bar{f}_{\alg{A}}, \bar{h}_{\alg{A}}, \bar{\cnst c}_{\alg{A}}), \hspace*{1.72cm} \phantom{\text{on objects:}} &&\\ &\text{on morphisms:} & &\fnt{L}^+\colon \,\, u \mapsto\fnt{L}(u);&& \\ \shortintertext{and} &\text{on objects:} & &\fnt{HU}^+ \colon \alg{A} \mapsto (\fnt{HU}(\alg{A}); \widehat{f^{\alg{A}}}, \widehat{h^{\alg{A}}}, \widehat{\cnst c^{\alg{A}}}),&&\\ &\text{on morphisms:} & &\fnt{HU}^+\colon \,\, u \mapsto\fnt{HU}(u).&& \end{alignat*} Moreover, $\Phi$, as defined in Theorem~{\upshape\ref{Theo:RevEng}}, is a natural equivalence between the functor $\fnt{L}^+$ and the functor $\fnt{HU}^+$.
Let $\CY'$ denote the full subcategory of $\CY$ whose objects are isomorphic to topological structures of the form $\fnt{L}^+(\alg{A})$ (or equivalently~$\fnt{HU}^+(\alg{A})$) for some $\alg{A}\in{\cat A}$. Then the categories ${\cat A}$ and $\CY'$ are dually equivalent, with the dual equivalence determined by either $\fnt{L}^+$ or $\fnt{HU}^+$. \end{thm} We now indicate the modifications that we have to make to Theorem~\ref{Theo:RevEng} to handle the unbounded case. In Theorem~\ref{Theo:RevEngu}, the sets of relations arising are as specified in Theorem~\ref{genpigoneMu}. Let ${\cat A} = \ISP(\alg{M})$, where $\alg{M}$ is a finite algebra having a reduct $\unbounded{\U}(\alg{M})$ in $\unbounded{\CCD}$ and let $\Omega=\unbounded{\fnt H}\unbounded{\U}(\alg{M})$. For each $\alg{A}\in {\cat A}$, let $ \textstyle Y_{\alg{A}}= \unbounded{\D}(\alg{A})\times\Omega $ with the topology ${\mathscr{T}}_{Y_{\alg{A}}}$ having as a base of open sets $ \{\,U\times V\mid U\mbox{ open in } \unbounded{\D}(\alg{A})\mbox{ and }V\subseteq\Omega\,\}, $ and the binary relation $\preccurlyeq\, \subseteq Y_{\alg{A}}^2$ given by \[ (x,\omega_1)\preccurlyeq(y,\omega_2)\mbox{ if }(x,y)\in r^{\unbounded{\D}(\alg{A})}\mbox{ for some }r\in R_{\omega_1,\omega_2}. \] \begin{thm}\label{Theo:RevEngu} Let ${\cat A} = \ISP(\alg{M})$, where $\alg{M}$ is a finite algebra with a reduct in $\unbounded{\CCD}$. Then there exists a well-defined contravariant functor $\unbounded{\fnt{L}}\colon{\cat A}\to\CP_{01}$ given by \begin{alignat*}{3} &\text{on objects:} & \hspace*{.55cm} & \alg{A} \longmapsto\ \unbounded{\fnt{L}} (\alg{A}) = (\, Y_{\alg{A}}/_{\approx}; {\preccurlyeq}/_{\approx}, c_0, c_1, {\mathscr{T}}_{Y_{\alg{A}}}/_{\approx}),\hspace*{.55cm} \phantom{\text{on objects:}} && \\ &\text{on morphisms:} & & h \,\, \longmapsto\ \unbounded{\fnt{L}}(h)\colon [(x,\omega)]_{\approx} \mapsto [(\unbounded{\D}(h)(x),\omega)]_{\approx}. && \end{alignat*} Moreover, $\Phi$, defined on each $\alg{A}$ by $\Phi_{\alg{A}}([(x,\omega)]_{\approx})=\omega\circ x$, determines a natural isomorphism between $\unbounded{\fnt{L}}$ and $\unbounded{\fnt H}\unbounded{\U}$. \end{thm} \begin{proof} The only new ingredient here as compared with the proof of Theorem~\ref{Theo:RevEng} concerns the role of the constants. The argument used in the proof of that theorem, as given in~\cite[Theorem~2.3]{CPcop}, can be applied directly to prove that $\Phi_{\alg{A}}\colon(\, Y_{\alg{A}}/_{\approx}; {\preccurlyeq}/_{\approx}, {\mathscr{T}}_{Y_{\alg{A}}}/_{\approx})\to \unbounded{\fnt H}\unbounded{\U}(\alg{A})$ defined by $\Phi_{\alg{A}}([(x,\omega)]_{\approx})=\omega\circ x$ is a well-defined homeomorphism which is also an order-isomorphism. To confirm that $\unbounded{\fnt{L}}$ is well defined we shall show simultaneously that $\left(\bigcup \{\,R^{i}_{\omega}\mid \omega\in\Omega\,\}\right)/_{\approx}$ is a singleton and that $\Phi_{\alg{A}}$ maps its unique element to the corresponding constant map in $\unbounded{\fnt H}\unbounded{\U}(\alg{A})$. Thus $\textstyle \{c_i\}=\left(\bigcup \{\,R^{i}_{\omega}\mid \omega\in\Omega\,\}\right)/_{\approx} $ for $i\in\{0,1\}$. Below we write~$r$ rather than $r^{\unbounded{\D}(\alg{A})}$ for the lifting of a piggybacking relation~$r$ to $\unbounded{\D}(\alg{A})$. Let $\omega_1,\omega_2\in\Omega$ and $r_1\in R^{1}_{\omega_1}$, $r_2\in R^{1}_{\omega_2}$, $x\in r_1$, and $y\in r_2$. For each $a\in \alg{A}$, we have $\omega_1 (x(a))=1=\omega_2 (y(a))$.
Then $\Phi_{\alg{A}}([(x,\omega_1)]_{\approx})=\Phi_{\alg{A}}([(y,\omega_2)]_{\approx})=\cnst 1$, where $\cnst 1\colon A\to \{0,1\}$ denotes the constant map $a\mapsto 1$. Since~$\Phi_{\alg{A}}$ is injective, $[(x,\omega_1)]_{\approx}=[(y,\omega_2)]_{\approx}$. This proves that $|\bigcup \{\, R^1_{\omega}\mid \omega\in\Omega\,\}/_{\approx}|\leq 1$ and that $\Phi_{\alg{A}}((\bigcup \{R^{1}_{\omega}\mid \omega\in\Omega\})/_{\approx})\subseteq\{\cnst 1\}$. Similarly, we obtain $|\bigcup \{R^0_{\omega}\mid \omega\in\Omega\}/_{\approx}|\leq 1$ and $\Phi_{\alg{A}}((\bigcup \{R^{0}_{\omega}\mid \omega\in\Omega\})/_{\approx})\subseteq\{\cnst 0\}$. Because~$\Phi_{\alg{A}}$ is surjective, there exist $x\in \unbounded{\D}(\alg{A})$ and $\omega\in\Omega$ such that $\omega \circ x =\cnst 1$. Then $x\in R^1_{\omega}$, which proves that $\bigcup \{R^1_{\omega}\mid \omega\in\Omega\}\neq\emptyset$. The same argument applies to $\bigcup \{R^0_{\omega}\mid \omega\in\Omega\}$. \end{proof} The arguments for handling additional operations in the bounded case carry over to piggyback dualities over~$\unbounded{\CCD}$ with only the obvious modifications. \section{From a natural duality to the product representation} \label{sec:prodrep} The natural dualities in Theorems~\ref{DBnatdual} and~\ref{DBUnatdual} combined with the Priestley dualities for bounded and unbounded distributive lattices, respectively, prove that $\cat{DB}$ is categorically equivalent to $\cat D$ and that $\unbounded{\DB}$ is categorically equivalent to $\unbounded{\CCD}$. These equivalences are set up by the functors $\fnt{KD}\colon \cat{DB}\to\cat D$ and $\fnt{EH}\colon \cat D\to\cat{DB}$, and $\unbounded{\fnt K}\unbounded{\D}\colon \unbounded{\DB}\to\unbounded{\CCD}$ and $\unbounded{\E}\unbounded{\fnt H}\colon \unbounded{\CCD}\to\unbounded{\DB}$: \begin{center} \begin{tikzpicture} [auto, text depth=0.25ex, move up/.style= {transform canvas={yshift=1.9pt}}, move down/.style= {transform canvas={yshift=-1.9pt}}, move left/.style= {transform canvas={xshift=-2.5pt}}, move right/.style={transform canvas={xshift=2.5pt}}] \matrix[row sep= 1cm, column sep= 1.3cm] { \node (DB) {$\cat{DB}$}; & \node (P) {$\CP$}; & \node (D) {$\cat D$};&\node (DBU) {$\unbounded{\DB}$}; & \node (PU) {$\CP_{01}$}; & \node (DU) {$\unbounded{\CCD}$.};\\ }; \draw [->, move up] (DB) to node [yshift=-2pt] {$\fnt D$}(P); \draw [<-,move down] (DB) to node [swap] {$\fnt E$}(P); \draw [->,move up] (P) to node [yshift=-2pt] {$\fnt{K}$}(D); \draw [<-,move down] (P) to node [swap] {$\fnt{H}$}(D); \draw [->, move up] (DBU) to node [yshift=-2pt] {$\unbounded{\D}$}(PU); \draw [<-,move down] (DBU) to node [swap] {$\unbounded{\E}$}(PU); \draw [->,move up] (PU) to node [yshift=-2pt] {$\unbounded{\fnt K}$}(DU); \draw [<-,move down] (PU) to node [swap] {$\unbounded{\fnt H}$}(DU); \end{tikzpicture} \end{center} With the aid of Theorem~\ref{Theo:RevEng} we can give explicit descriptions of $\fnt{EH}$ and~$\fnt{KD}$. \begin{thm} \label{Theo:RevEngDB} Let $\fnt D\colon\cat{DB}\to\CP$ and $\fnt E\colon\CP\to\cat{DB} $ be the functors setting up the duality presented in Theorem~{\upshape\ref{DBnatdual}}. Then for each $\alg{A}\in \cat{DB}$ the Priestley dual $\fnt H(\alg{A}_t)$ of the $t$-lattice reduct of $\alg{A}$ is such that \[ \textstyle\fnt H(\alg{A}_t)\cong\fnt D(\alg{A})\coprod_{\CP}\fnt D(\alg{A})^\partial, \] where $\cong$ denotes an isomorphism of Priestley spaces.
\end{thm} \begin{proof} Adopting the notation of Theorems~\ref{genpigoneM} and \ref{DBnatdual}, we note that in the proof of the latter we observed that \[ R_{\alpha,\beta} = R_{\beta,\alpha}=\emptyset, \qquad r_{\alpha,\alpha}\mbox{ is } \leq_{k} \ \text{ and }\ r_{\beta,\beta}\mbox{ is } \geq_{k} \] (here we have written $r_{\omega,\omega}$ for the unique element of $R_{\omega,\omega}$). As a result, for $\alg{A}\in\cat{DB}$, with $\fnt D(\alg{A})=(X;\leq,{\mathscr{T}})$, we have \[ R^{\fnt D(\alg{A})}_{\alpha,\beta} = R^{\fnt D(\alg{A})}_{\beta,\alpha}=\emptyset, \qquad r^{\fnt D(\alg{A})}_{\alpha,\alpha}\mbox{ is } \leq \ \text{ and } \ r^{\fnt D(\alg{A})}_{\beta,\beta}\mbox{ is } \, \geq. \] From this and the definition of $\preccurlyeq\, \subseteq Y_{\alg{A}}^2$ it follows that \[(x,\omega_1)\preccurlyeq (y,\omega_2) \Longleftrightarrow \begin{cases} x\leq y \mbox{ and } \omega_1=\omega_2=\alpha, \mbox{ or }\\ x\geq y \mbox{ and } \omega_1=\omega_2=\beta. \end{cases} \] Then $Y_{\alg{A}}=(\fnt D(\alg{A})\times\Omega;\preccurlyeq,{\mathscr{T}}_{Y_{\alg{A}}})$ is already a poset (no quotienting is required) for each $\alg{A}\in\cat{DB}$. And, order theoretically and topologically, $Y_{\alg{A}}$ is the disjoint union of ordered spaces $Y_\alpha$ and~$Y_\beta$, where $Y_\alpha$ and $Y_\beta$ are the subspaces of $Y_{\alg{A}}$ determined by $\fnt D(\alg{A})\times\{\alpha\}$ and $\fnt D(\alg{A})\times\{\beta\}$, respectively. With this notation we also have $Y_{\alpha}\cong\fnt D(\alg{A})$ and $Y_\beta\cong\fnt D(\alg{A})^\partial$. The rest of the proof follows directly from Theorem~\ref{Theo:RevEng} and the fact that finite coproducts in $\CP$ correspond to disjoint unions \cite[Theorem 6.2.4]{CD98}. \end{proof} \begin{figure} [ht] \begin{center} \begin{tikzpicture}[scale=.55, inner sep=10mm, auto, text depth=0.25ex, wiggly/.style={decorate,decoration={snake,amplitude=1pt,segment length=5pt, pre length=3pt,post length=3pt}}] \draw [thin] (-4.5,-2)--(-2.5,-2) --(-2.5,1)--(-4.5,1)--(-4.5,-2); \draw [thin] (-1,-2)--(1,-2) --(1,1)--(-1,1)--(-1,-2); \draw [thin] (10,-2)--(12,-2) --(12,1)--(10,1)--(10,-2); \draw [thin] (12,-2)--(14,-2) --(14,1)--(12,1)--(12,-2); \node (YA) at (-1.5,-3) {$(Y_{\alg{A}};\preccurlyeq)$}; \node (HUA) at (12.5,-3) {$\fnt{HU}(\alg{A})$}; \node (Yalpha) at (-5.2,0) {$Y_{\alpha}$}; \node (Ybeta) at (-1.6,0) {$Y_{\beta}$}; \node (geqk) at (0,-1) {$\geq_k$}; \node (leqk) at (-3.5,-1) {$\leq_k$}; \draw [->,decorate,decoration={snake,amplitude=1pt,segment length=5pt, pre length=3pt,post length=3pt}] (3.5,-1)-- node [yshift=.5cm] {$z \mapsto [z]_{\approx}$} (8,-1); \end{tikzpicture} \end{center} \caption{Obtaining $\fnt{HU}(\alg{A})$ from $\fnt D(\alg{A}) $ \label{fig:bddquot}} \end{figure} Figure~\ref{fig:bddquot} shows the very simple way in which Theorem~\ref{Theo:RevEngDB} tells us how to pass from the natural dual $\fnt D(\alg{A})$ of $\alg{A} \in \cat{DB}$ to the Priestley space ${\fnt{HU}(\alg{A})= \fnt{H}(\alg{A}_t)}$. We start from copies~$Y_\alpha$ and $Y_\beta$ of $\fnt D(\alg{A})$, indexed by the points $\alpha$ and $\beta$ of $\Omega =\fnt{HU}(\boldsymbol 4)$. The relation $\preccurlyeq$ gives us the partial order on $Y_\alpha \cup Y_\beta$ which restricts to $\leq_k$ on $Y_\alpha$ and $\geq_k$ on $Y_\beta$. The relation $\approx$ makes no identifications; in the right-hand diagram the two ordered components are regarded as subspaces of a single Priestley space; in the left-hand diagram they are regarded as two copies of the natural dual space.
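The passage depicted in Figure~\ref{fig:bddquot} is easily animated on a toy example. The following Python fragment is an informal sketch, not part of the formal development; a hypothetical two-element chain stands in for $\fnt D(\alg{A})$. It checks that the relation $\preccurlyeq$ built from the piggybacking relations is already a partial order and decomposes exactly as described in the proof above.
\begin{verbatim}
# Informal sketch: D(A) modelled by the two-element chain x <= y.
P = ['x', 'y']
LEQ = {('x', 'x'), ('x', 'y'), ('y', 'y')}        # the order on D(A)

Y = [(p, w) for p in P for w in ('alpha', 'beta')]

def preceq(u, v):
    (p, w1), (q, w2) = u, v
    if w1 != w2:                 # R_{alpha,beta} = R_{beta,alpha} = empty
        return False
    if w1 == 'alpha':            # r_{alpha,alpha} is <=
        return (p, q) in LEQ
    return (q, p) in LEQ         # r_{beta,beta} is >=

# preceq is antisymmetric, so ~ makes no identifications ...
assert all(not (preceq(u, v) and preceq(v, u)) or u == v
           for u in Y for v in Y)
# ... and Y_A is the disjoint union of D(A) and D(A) with order reversed.
assert preceq(('x', 'alpha'), ('y', 'alpha'))
assert preceq(('y', 'beta'), ('x', 'beta'))
\end{verbatim}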
This very simple picture should be contrasted with the somewhat more complicated one we obtain below for the unbounded case; see Figure~\ref{fig:unbddquot}. Theorem~\ref{Theo:RevEngDB} shows us how to obtain $\fnt H(\alg{A}_t)$ from $\fnt D(\alg{A})$. We conclude that for each $\alg{A}\in{\cat A}$, the $t$-lattice reduct of $\alg{A}$ is isomorphic to $\alg{L}\times \alg{L}^{\partial}$ where $\alg{L}=\fnt{KD}(\alg{A})$. We will now see how to capture in $\fnt H(\alg{A}_t)$ the algebraic operations suppressed by $\fnt U$. Drawing on Theorem~\ref{Theo:RevEngOps} we have \begin{alignat*}{2} \bar{\neg}_{\alg{A}}([(x,\alpha)])&=[(x,\beta)], &\bar{\neg}_{\alg{A}}([(x,\beta)])&=[(x,\alpha)];\\ \widehat{\neg^{\alg{A}}}(\alpha\circ x)&=\beta\circ x, &\widehat{\neg^{\alg{A}}}(\beta\circ x) &=\alpha\circ x;\\ \bar{1_k}\,_{\alg{A}}&=Y_{\alpha}, & \bar{0_k}\,_{\alg{A}}&=Y_{\beta}; \\ \widehat{1_k^{\alg{A}}}& =\{\,\alpha\circ x\mid x\in\fnt D(\alg{A})\,\}, &\widehat{0_k^{\alg{A}}}&=\{\,\beta\circ x\mid x\in\fnt D(\alg{A})\,\}. \end{alignat*} From this and Theorem~\ref{Theo:RevEngDB}, we obtain ${\fnt{KD}(\alg{A})\cong\alg{A}_t/\theta}$ for each $\alg{A}\in\cat{DB}$, where $\theta$ is the congruence defined by $a\, \theta\, b$ if and only if $a\land_t 1_k=b\land_t 1_k$. Clearly~$\alg{A}_t/\theta$ is also isomorphic to the sublattice of~$\alg{A}_t$ determined by the set $\{\,a\in A\mid a\leq_t 1_k\,\}$. Since the duality we developed for $\cat{DB}$ was based on the piggyback duality using $\alg{A}_t$ as the $\cat D$-reduct, Theorem~\ref{Theo:RevEng} does not give us direct access to the $k$-lattice operations. Lemma~\ref{90deg} tells us that with knowledge of the constants and the $t$-lattice operations we can access the $k$-lattice operations. But there is a way to recover the $k$-lattice operations directly from the dual space, and this can be adapted to cover the unbounded case too. Take, as before, ${\cat A} =\cat{DB}$, $\alg{M}=\boldsymbol 4$ and $\Omega=\{\alpha,\beta\}$. Let $\alg{A}\in{\cat A}$ and $ Y_{\alg{A}}=\fnt D(\alg{A})\times\Omega$. Define a partial order $\preccurlyeq'\, \subseteq Y_{\alg{A}}^2$ by $(x,\omega)\preccurlyeq'(y,\omega')$ if $\omega=\omega'$ and $x\leq y$ in $\fnt D(\alg{A})$. It is clear that $(Y_{\alg{A}};\preccurlyeq',{\mathscr{T}}_{Y_{\alg{A}}})\cong \fnt D(\alg{A})\coprod_{\CP}\fnt D(\alg{A})$. We claim that $\fnt{H}(\alg{A}_k)\cong (Y_{\alg{A}};\preccurlyeq',{\mathscr{T}}_{Y_{\alg{A}}})$. To prove this, observe that, since $\alpha^{-1}(1)=\{11,01\}$ is a filter of the lattice $\boldsymbol 4_k$, the map $\alpha$ is a lattice homomorphism from $\boldsymbol 4_k$ into~$\boldsymbol 2$. And since $\beta^{-1}(1)=\{11,10\}$ is an ideal in $\boldsymbol 4_k$, the map $\beta'=\cnst 1 -\beta$ is a lattice homomorphism from $\boldsymbol 4_k$ into $\boldsymbol 2$. It follows that we have a well-defined map $\eta_{\alg{A}}\colon Y_{\alg{A}}\to \fnt{H}(\alg{A}_k)$ given by \[ \eta_{\alg{A}}(x,\omega)=\begin{cases} \omega\circ x&\mbox{if } \omega=\alpha,\\ \cnst 1-\omega\circ x&\mbox{if } \omega=\beta.\\ \end{cases} \] Assume that $(x,\omega)\preccurlyeq'(y,\omega')$. Then $\omega=\omega'$ and for each $a\in\alg{A}$ we have $x(a)\leq_k y(a)$ in $\boldsymbol 4$. Since $\alpha$ is a $k$-lattice homomorphism, if $\omega=\omega'=\alpha$, then \[\eta_{\alg{A}}(x,\alpha)(a)=\alpha(x(a))\leq \alpha(y(a))=\eta_{\alg{A}}(y,\alpha)(a),\] for each $a\in\alg{A}$.
If instead $\omega=\omega'=\beta$, we have $\beta(x(a))\geq \beta(y(a))$ for each $a\in \alg{A}$, and so $\eta_{\alg{A}}(x,\beta)(a)=1-\beta(x(a))\leq 1-\beta(y(a))=\eta_{\alg{A}}(y,\beta)(a)$. Therefore $\eta_{\alg{A}}$ preserves $\preccurlyeq'$. To see that~$\eta_{\alg{A}}$ also reflects the order, assume $\eta_{\alg{A}}(x,\omega)\leq \eta_{\alg{A}}(y,\omega')$. Then $\eta_{\alg{A}}(x,\omega)(a)\leq \eta_{\alg{A}}(y,\omega')(a)$ in $\boldsymbol 2$, for each $a\in\alg{A}$. Since $\alpha(1_t)=1\not\leq 0=1-\beta(1_t)$ and $1-\beta(0_t)=1\not\leq 0=\alpha(0_t)$, it follows that $\omega=\omega'$. Now assume that $\omega=\omega'=\alpha$. Then $\alpha(x(a))\leq \alpha(y(a))$, for each $a\in \alg{A}$, equivalently $(x(a),y(a))\in r_{\alpha,\alpha}=\leq_k$ for each $a\in\alg{A}$. By Theorem~\ref{DBnatdual}, $x\leq y$ in $\fnt D(\alg{A})$. We obtain $(x,\omega)\preccurlyeq'(y,\omega')$. If $\omega=\omega'=\beta$ we argue in the same way, using the fact that $r_{\beta,\beta}$ is $\geq_k$. Finally, observe that for each $a\in\alg{A}$, $b\in\boldsymbol 4$ and $i\in\boldsymbol 2$, \allowdisplaybreaks \begin{align*} \eta_{\alg{A}}(\{\,x\in\fnt D(\alg{A})\mid x(a)=b\}\times\{\alpha\,\})&=\{\,z\in\fnt{H}(\alg{A}_k)\mid z(a)=\alpha(b)\,\}\\ &\ \ \cap\{\,z\in\fnt{H}(\alg{A}_k)\mid z(\neg^{\alg{A}}a)=\alpha(\neg^{\boldsymbol 4} b)\,\};\\ \eta_{\alg{A}}(\{\,x\in\fnt D(\alg{A})\mid x(a)=b\,\}\times\{\beta\})&=\{\,z\in\fnt{H}(\alg{A}_k)\mid z(a)\neq\beta(b)\,\}\\ &\ \ \cap\{\,z\in\fnt{H}(\alg{A}_k)\mid z(\neg^{\alg{A}}a)\neq\beta(\neg^{\boldsymbol 4} b)\,\};\\ (\eta_{\alg{A}})^{-1}(\{\,z\in\fnt{H}(\alg{A}_k)\mid z(a)=i\,\})&=\{\,x\in\fnt D(\alg{A})\mid x(a)\in\alpha^{-1}(i)\,\}\times\{\,\alpha\,\}\\ &\ \ \cup\{\,x\in\fnt D(\alg{A})\mid x(a)\in\beta^{-1}(1-i)\,\}\times\{\beta\,\}. \end{align*} Then $\eta_{\alg{A}}$ is a homeomorphism. Hence, as claimed, $\fnt{H}(\alg{A}_k)\cong (Y_{\alg{A}};\preccurlyeq',{\mathscr{T}}_{Y_{\alg{A}}})$. Since $(Y_{\alg{A}};\preccurlyeq',{\mathscr{T}}_{Y_{\alg{A}}})\cong \fnt D(\alg{A})\coprod_{\CP}\fnt D(\alg{A})$, we conclude that $\alg{A}_k\cong\alg{L}\times\alg{L}$, where~$\alg{L}$ denotes the lattice~$\fnt{KD}(\alg{A})$. Theorem~\ref{Theo:RevEngDB} can be seen as the product representation theorem for distributive bilattices expressed in dual form. We recall that, given a distributive lattice $\alg{L} =(L; \lor,\land,0,1)$, the algebra $\alg{L} \odot \alg{L} $ denotes the distributive bilattice with universe $L \times L$ and lattice operations given by \begin{alignat*}{2} (a_1,a_2) \lor_t (b_1,b_2) & = (a_1 \lor b_1, a_2 \land b_2), \quad & (a_1,a_2) \lor_k (b_1,b_2) & = (a_1 \lor b_1, a_2 \lor b_2), \\ (a_1,a_2) \land_t (b_1,b_2) & = (a_1 \land b_1, a_2 \lor b_2), \quad & (a_1,a_2) \land _k (b_1,b_2) & = (a_1 \land b_1, a_2 \land b_2); \end{alignat*} negation is given by $\neg(a_1,a_2) = (a_2,a_1)$ and the constants by $0_t = (0,1)$, $1_t = (1,0)$, $0_k = (0,0)$ and $1_k = (1,1)$. A well-known example is the representation of $\boldsymbol 4$ as $\boldsymbol 2\odot\boldsymbol 2$. More precisely, $h\colon\boldsymbol 4\to \boldsymbol 2\odot\boldsymbol 2$ defined by $h(ij)=(i,1-j)$, for $i,j\in\{0,1\}$, is an isomorphism. As a consequence of Theorem~\ref{Theo:RevEngOps} we obtain the following result.
\begin{thm}\label{Theo:ProdFunDB} Let $\fnt{V}\colon \cat{DB}\to \cat D$ and $\fnt{W}\colon \cat D\to \cat{DB}$ be the functors defined~by: \begin{alignat*}{3} &\text{on objects:} & \hspace*{2.25cm} &\alg{A}\longmapsto \fnt{V}(\alg{A})=[0_k,1_t], \hspace*{2.2cm} \phantom{\text{on objects:}} &&\\ &\text{on morphisms:} & & \, \,h \longmapsto \fnt{V}(h) =h{\restriction}_{[0_k,1_t]}, && \\ \shortintertext{where $[0_k,1_t]$ is considered as a sublattice of $\alg{A}_{t}$ with bounds $0_k$ and $1_t$, and} &\text{on objects:} & &\alg{L}\longmapsto \fnt{W}(\alg{L}) = \alg{L}\odot\alg{L}, &&\\ &\text{on morphisms:} & &\,\, g \longmapsto \fnt{W}(g)\colon (a,b)\mapsto (g(a),g(b)). && \end{alignat*} \noindent Then $\fnt{V}$ and $\fnt{W}$ are naturally equivalent to $\fnt{KD}$ and $\fnt{EH}$, respectively. \end{thm} \begin{coro} {\rm (The Product Representation Theorem for distributive bilattices)} \label{DBprodrep} Let $\alg{A} \in \cat{DB}$. Then there exists $\alg{L}= (L; \lor,\land,0,1) \in \cat D$ such that $\alg{A} \cong \alg{L} \odot \alg{L}$. \end{coro} We can now see the relationship between our natural duality for $\cat{DB}$ and the dualities presented for this class in \cite{MPSV,JR}. In \cite{MPSV}, the duality for $\cat{DB}$ is obtained by first proving that the product representation is part of an equivalence between the categories $\cat{DB}$ and $\cat D$. The duality assigns to each $\alg{A}$ in $\cat{DB}$ the Priestley space $\fnt{H}([0_t,1_k])$, where the interval $[0_t,1_k]$ is considered as a sublattice of $\alg{A}_t$. Then the functor from $\cat{DB}$ to $\CP$ defined in \cite[Corollaries~12 and~14]{MPSV} corresponds to $\fnt{HV}$ where $\fnt{V}\colon\cat{DB}\to\cat D$ is as defined in Theorem~\ref{Theo:ProdFunDB}. The duality in \cite{JR} is arrived at by a different route. At the object level, the authors consider first the De Morgan reduct of a bilattice and then enrich its dual structure by adding two clopen up-sets of the dual which represent the constants $0_k$ and $1_k$. In the notation of Theorem~\ref{Theo:RevEngOps} their duality is based on the functor $\fnt{HU}^+$ by considering ${\cat A}=\cat{DB}$ with only one lattice dual-endomorphism and two constants. The connection between their duality and ours follows from Theorems~\ref{Theo:RevEng} and~\ref{Theo:RevEngOps}. Firstly, Theorem~\ref{Theo:RevEng} tells us how to obtain $\fnt{L}$ from $\fnt D$. Then Theorem~\ref{Theo:RevEngOps} shows how to enrich this functor to obtain $\fnt{L}^+$ and confirms that the latter is naturally equivalent to $\fnt{HU}^+$.
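As a sanity check on the $\odot$ construction recalled above, the following informal Python fragment (not part of the formal development; the three-element chain is a hypothetical choice of $\alg{L}$) builds $\alg{L}\odot\alg{L}$ from the displayed operations and verifies mechanically that negation is an involution which preserves the $k$-lattice operations and dualises the $t$-lattice operations, and that $\boldsymbol 2\odot\boldsymbol 2$ carries the four constants of $\boldsymbol 4$.
\begin{verbatim}
# Informal sketch: L odot L for a toy chain L = {0 < 1 < 2}.
L = [0, 1, 2]
B = [(a, b) for a in L for b in L]        # universe of L odot L

def join_t(x, y): return (max(x[0], y[0]), min(x[1], y[1]))
def meet_t(x, y): return (min(x[0], y[0]), max(x[1], y[1]))
def join_k(x, y): return (max(x[0], y[0]), max(x[1], y[1]))
def meet_k(x, y): return (min(x[0], y[0]), min(x[1], y[1]))
def neg(x):       return (x[1], x[0])

for x in B:
    assert neg(neg(x)) == x                               # involution
    for y in B:
        assert neg(meet_t(x, y)) == join_t(neg(x), neg(y))  # t-dual
        assert neg(join_t(x, y)) == meet_t(neg(x), neg(y))
        assert neg(meet_k(x, y)) == meet_k(neg(x), neg(y))  # k-endo
        assert neg(join_k(x, y)) == join_k(neg(x), neg(y))

# For L = 2 the universe is that of 4, with 0_t, 1_t, 0_k, 1_k as stated.
four = [(a, b) for a in [0, 1] for b in [0, 1]]
assert {(0, 1), (1, 0), (0, 0), (1, 1)} == set(four)
\end{verbatim}
None of this replaces the proofs; it merely animates the definitions on a finite example.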
\begin{figure} [ht] \begin{center} \begin{tikzpicture} [scale=.35] [inner sep=10mm, [auto, text depth=0.25ex, wiggly/.style={decorate,decoration={snake,amplitude=1pt,segment length=5pt, pre length=3pt,post length=3pt}}, ] \path (-3.5,2) node (topL) [scale=.4,shape=circle,draw] {} (-3.5,-3) node (botL) [scale=.4,shape=circle,draw,gray,fill=gray] {} (2.5,-3) node (botR) [scale=.4,shape=circle,draw,gray,fill=gray] {} (2.5,2) node (topR) [scale=.4,shape=circle,draw] {} (-.5,8) node (top1) [scale=.4,shape=circle,draw] {} (-.5,3) node (bot1) [scale=.4,shape=circle,draw] {} (-.5,-4) node (top0) [scale=.4,shape=circle,draw,gray,fill=gray] {} (-.5,-9) node (bot0) [scale=.4,shape=circle,draw,gray,fill=gray] {}; ; \path (-4.5,-2) node (swL) [inner sep=0mm] {} (-2.5,-2) node (seL) [inner sep = 0mm] {} (-2.5,1) node (neL) [inner sep=0mm] {} (-4.5,1) node (nwL) [inner sep=0mm] {} (1.5,-2) node (swR) [inner sep=0mm] {} (3.5,-2) node (seR) [inner sep=0mm] {} (3.5,1) node (neR) [inner sep=0mm] {} (1.5,1) node (nwR) [inner sep=0mm]{} (-1.5,4) node (sw1) [inner sep=0mm] {} (.5,4) node (se1) [inner sep=0mm] {} (.5,7) node (ne1) [inner sep=0mm] {} (-1.5,7) node (nw1) [inner sep=0mm] {} (-1.5,-8) node (sw0) [inner sep=0mm] {} (.5,-8) node (se0) [inner sep=0mm] {} (.5,-5) node (ne0) [inner sep=0mm] {} (-1.5,-5) node (nw0) [inner sep=0mm] {}; \draw [thin] (topL) to (nwL); \draw [thin] (topL) to (neL); \draw [thin] (botL) to (swL); \draw [thin] (botL) to (seL); \draw [thin] (top1) to (nw1); \draw [thin] (top1) to (ne1); \draw [thin] (bot1) to (sw1); \draw [thin] (bot1) to (se1); \draw [thin] (top0) to (nw0); \draw [thin] (top0) to (ne0); \draw [thin] (bot0) to (sw0); \draw [thin] (bot0) to (se0); \draw [thin] (topR) to (nwR); \draw (topR) to (neR); \draw (botR) to (swR); \draw (botR) to (seR); \draw [thin] (-4.5,-2)--(-2.5,-2) --(-2.5,1)--(-4.5,1)--(-4.5,-2); \draw [thin] (1.5,-2)--(3.5,-2) --(3.5,1)--(1.5,1)--(1.5,-2); \draw [thin] (-1.5,4)--(.5,4) --(.5,7)--(-1.5,7)--(-1.5,4); \draw [thin] (-1.5,-8)--(.5,-8) --(.5,-5)--(-1.5,-5)--(-1.5,-8); \draw [thin] (12,-2)--(14,-2) --(14,1)--(12,1)--(12,-2); \draw [thin] (14,-2)--(16,-2) --(16,1)--(14,1)--(14,-2); \path (12,-2) node (swQ) [inner sep=0mm] {} (12,1) node (nwQ) [inner sep=0mm] {} (16,1) node (neQ) [inner sep=0mm] {} (16,-2) node (seQ) [inner sep=0mm] {} (14,4) node (topQ) [scale=.4,shape=circle,draw,gray] {} (14,-4) node (botQ) [scale=.4,shape=circle,draw,gray,fill=gray] {}; \draw [thin] (seQ) to (botQ); \draw [thin] (swQ) to (botQ); \draw [thin] (neQ) to (topQ); \draw [thin] (nwQ) to (topQ); \draw [ultra thin,rounded corners] (-4.15,1.5) --(-2.2,8.5)--(.9,8.5)--(3.5,1.5)--(-4.3,1.5)--(-2.2,8.5); \draw [ultra thin,rounded corners] (-4.15,-2.5) --(-2.2,-9.5)--(.9,-9.5)--(3.5,-2.5)--(-4.3,-2.5)--(-2.2,-9.5); \draw [very thin,->] (top0) to (botL); \draw [very thin,->] (top0) to (botR); \draw [very thin,->] (topL) to (bot1); \draw [very thin,->] (topR) to (bot1); \draw [->,decorate,decoration={snake,amplitude=1pt,segment length=5pt, pre length=3pt,post length=3pt}] (5,0)-- node [yshift=.5cm] {$z \mapsto [z]_{\approx}$} (10.5,0); \node (YA) at (4.5,-6) {$(Y_{\alg{A}};\preccurlyeq)$}; \node (HUA) at (14,-6) {$\unbounded{\fnt H}\unbounded{\U}(\alg{A})$}; \node (Yalpha) at (-5.2,0) {$Y_{\alpha}$}; \node (Ybeta) at (.8,0) {$Y_{\beta}$}; \node (Yone) at (-2.1,6) {$Y_{\overline{\boldsymbol 1}}$}; \node (Yzero) at (-2.1, -6) {$Y_{\overline{\boldsymbol 0}}$}; \node (leqk) at (-3.5,-1) {$\leq_k$}; \node (geqk) at (2.5,-1) {$\geq_k$}; \end{tikzpicture} \end{center} 
\caption{Obtaining $\unbounded{\fnt H}\unbounded{\U}(\alg{A})$ from $\unbounded{\D}(\alg{A})$\label{fig:unbddquot}} \end{figure} We now turn to the unbounded case, noting that, as regards dual representations, our results are entirely new, since neither \cite{MPSV} nor \cite{JR} considers duality for unbounded distributive bilattices. We shall rely on Theorem~\ref{Theo:RevEngu} to obtain a suitable description of $\unbounded{\fnt K}\unbounded{\D}$ and $\unbounded{\E}\unbounded{\fnt H}$. Fix $\alg{A} \in \unbounded{\DB}$ and let $Y_\omega = \unbounded{\D}(\alg{A}) \times \{ \omega\}$, for $\omega \in \Omega = \{\alpha,\beta,\overline{\boldsymbol 0},\overline{\boldsymbol 1}\}$. Let $X$ be the doubly-pointed Priestley space obtained as in Theorem~\ref{Theo:RevEngu} by quotienting the pre-order $\preccurlyeq$ to obtain a partial order. Note that $\unbounded{\D}(\alg{A})$ ordered by the pointwise lifting of $\leq_k$ has top and bottom elements, {\it viz.}~the constant maps onto~$10$ and onto~$01$, respectively. Hence, by Proposition~\ref{DBUpigsub}(i)(c)--(d), $Y_{\overline{\boldsymbol 0}}$ collapses to a single point and is identified with the bottom point of $Y_\alpha$ and the top point of $Y_\beta$. In the same way, $Y_{\overline{\boldsymbol 1}}$ collapses to a point and is identified with the top point of $Y_\alpha$ and with the bottom point of $Y_\beta$. No additional identifications are made. This argument proves the following theorem. \begin{thm} \label{Theo:RevEngDBU} Let $\unbounded{\D}\colon\unbounded{\DB}\to\CP_{01}$ and $\unbounded{\E}\colon\CP_{01}\to\unbounded{\DB} $ be the functors setting up the duality presented in Theorem~{\upshape\ref{DBUnatdual}}. Then for each $\alg{A}\in \unbounded{\DB}$ the Priestley dual $\unbounded{\fnt H}(\alg{A}_t)$ of the $t$-lattice reduct of $\alg{A}$ is such that \[ \textstyle\unbounded{\fnt H}(\alg{A}_t)\cong\unbounded{\D}(\alg{A})\coprod_{\CP_{01}}\unbounded{\D}(\alg{A})^\partial, \] where $\cong$ denotes an isomorphism of doubly-pointed Priestley spaces. \end{thm} Figure~\ref{fig:unbddquot} illustrates the passage from $(\unbounded{\D}(\alg{A}) \times \Omega;\preccurlyeq,{\mathscr{T}})$ to $\unbounded{\fnt H}\unbounded{\U}(\alg{A})$, including the way in which the union of the full set of piggybacking relations supplies a pre-order. The pre-ordered set $(Y_{\alg{A}};\preccurlyeq)$ has as its universe four copies of $\unbounded{\D}(\alg{A})$. Each copy is depicted in the figure by a linear sum of the form $\boldsymbol 1 \oplus P \oplus \boldsymbol 1$; the top and bottom elements are depicted by circles. For $Y_{\alpha}$, $P$ carries the lifting of the partial order $r_{\alpha,\alpha}$, that is, $\leq_k$ lifted to $\unbounded{\DB}(\alg{A},\unbounded{\four})$; for $Y_{\beta}$ the corresponding order is the lifting of $\geq_k$ to $\unbounded{\DB}(\alg{A},\unbounded{\four})$. Theorem~\ref{Theo:RevEngDBU} shows that the elements of $Y_{\overline{\boldsymbol 1}}$, together with the top elements of $(Y_\alpha; \leq_k)$ and of $(Y_\beta; \geq_k)$, form a single $\approx$-equivalence class, and likewise all elements of $Y_{\overline{\boldsymbol 0}}$ and the bottom elements of $Y_\alpha$ and of $Y_\beta$ form an $\approx$-equivalence class. These are the only $\approx$-equivalence classes with more than one element. Thus the quotienting map which yields $\unbounded{\fnt H}\unbounded{\U}(\alg{A})$ operates as shown.
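The identifications just described can be animated on a small example. In the Python fragment below (an informal sketch; a three-element chain, with $\leq_k$-top \texttt{p2} and $\leq_k$-bottom \texttt{p0}, is a hypothetical stand-in for $\unbounded{\D}(\alg{A})$) we form the four copies indexed by $\Omega$ and perform exactly the identifications stated above, confirming that the quotient has $2\,|\unbounded{\D}(\alg{A})|-2$ points, as the coproduct description in Theorem~\ref{Theo:RevEngDBU} predicts.
\begin{verbatim}
# Informal sketch of the collapsing in Figure fig:unbddquot.
P = ['p0', 'p1', 'p2']                    # <=_k-bottom p0, <=_k-top p2
BOT, TOP = 'p0', 'p2'
OMEGA = ['alpha', 'beta', 'zero', 'one']  # zero, one stand for 0-bar, 1-bar
Y = [(p, w) for p in P for w in OMEGA]

def cls(pt):
    p, w = pt
    # Y_one, the top of (Y_alpha, <=_k) and the top of (Y_beta, >=_k),
    # i.e. the <=_k-bottom of the beta-copy, form one class ...
    if w == 'one' or (w == 'alpha' and p == TOP) or (w == 'beta' and p == BOT):
        return 'c1'
    # ... and Y_zero with the remaining two extreme points form the other.
    if w == 'zero' or (w == 'alpha' and p == BOT) or (w == 'beta' and p == TOP):
        return 'c0'
    return (p, w)                         # all other classes are singletons

classes = {cls(pt) for pt in Y}
# |Y/~| = 2|P| - 2: two pointed classes plus the interiors of the two copies.
assert len(classes) == 2 * len(P) - 2
\end{verbatim}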
Topologically, the image $\unbounded{\fnt H}\unbounded{\U}(\alg{A})$ carries the quotient topology, so that the top and bottom elements will both be isolated points if and only if $\alg{A}_t$ is a bounded lattice. Theorem~\ref{Theo:RevEngDBU} states that $\unbounded{\fnt H}(\alg{A}_t)$ is obtained as the coproduct of the doubly-pointed Priestley spaces $\unbounded{\D}(\alg{A})$ and $\unbounded{\D}(\alg{A})^{\partial}$. This coproduct corresponds to the product of unbounded distributive lattices $\alg{L}=\unbounded{\fnt K}\unbounded{\D}(\alg{A})$ and $\alg{L}^{\partial}$, that is, $\alg{A}_t\cong \alg{L}\times \alg{L}^{\partial}$. By the same argument as in the bounded case, $\alg{A}_k\cong \alg{L}\times \alg{L}$. Moreover, using the analogue of Theorem~\ref{Theo:RevEngOps}, we have \begin{alignat*}{2} \bar{\neg}_{\alg{A}}([(x,\alpha)])& =[(x,\beta)], \quad \quad & \bar{\neg}_{\alg{A}}([(x,\beta)])& =[(x,\alpha)];\\ \widehat{\neg^{\alg{A}}}(\alpha\circ x)&=\beta\circ x, & \widehat{\neg^{\alg{A}}}(\beta\circ x)&=\alpha\circ x;\\ \widehat{\neg^{\alg{A}}}({\overline{\boldsymbol 1}}\circ x)&={\overline{\boldsymbol 0}\circ x}, & \widehat{\neg^{\alg{A}}}(\overline{\boldsymbol 0}\circ x)&=\overline{\boldsymbol 1}\circ x. \end{alignat*} The construction of $\alg{L}\odot\alg{L}$ for $\alg{L} \in \cat D$ applies equally well to $\alg{L}\in\unbounded{\CCD}$; in this case the unbounded distributive bilattice $\alg{L}\odot\alg{L}$ is defined on $L\times L$ by taking $(\alg{L}\odot\alg{L})_t=\alg{L}\times\alg{L}^{\partial}$, $(\alg{L}\odot\alg{L})_k=\alg{L}\times\alg{L}$ and $\neg^{\alg{L}\odot\alg{L}}(a,b)=(b,a)$, for each $a,b\in L$. Given $\alg{A}\in\unbounded{\DB}$, we define $\alg{L}=\unbounded{\fnt K}\unbounded{\D}(\alg{A})$. It follows from above that $\alg{A}\cong\alg{L}\odot\alg{L}$. Let $h\colon\alg{A}\to \alg{L}\odot\alg{L}$ denote the isomorphism between $\alg{A}$ and $\alg{L}\odot\alg{L}$. Then $\alg{L}\cong\alg{A}_t/\ker(\rho)$ where $\rho(a)=a_1$ if $h(a)=(a_1,a_2)$. Using the~$\odot$ construction we observe that $(a,b)\in\ker(\rho)$ if and only if $a\land_t b=a\lor_k b$. This can also be proved using the fact that closed subspaces of doubly-pointed Priestley spaces correspond to congruences and that \[ \unbounded{\fnt H}(\alg{L})\cong Y_{\alpha}=\unbounded{\D}(\alg{A})\times\{\alpha\}\cong Y_{\alpha}/_{\approx} \subseteq Y_{\alg{A}}/_{\approx}\cong\unbounded{\D}(\alg{A})\textstyle\coprod_{\CP_{01}}\unbounded{\D}(\alg{A})^\partial\cong \unbounded{\fnt H}(\alg{A}_t). \] Now observe that the isomorphism $Y_{\alg{A}}/_{\approx}\cong \unbounded{\fnt H}(\alg{A}_t)$ is determined by the unique $\CP_{01}$-morphism such that $(x,\omega) \mapsto \omega\circ x$, for $\omega \in \{\alpha,\beta\}$, and that $\alpha$ is a $\unbounded{\CCD}$-homomorphism from $\unbounded{\four}_t$ to $\twoU$ and also from $\unbounded{\four}_k$ to $\twoU$, so that $\alpha\circ x$ is a $\unbounded{\CCD}$-homomorphism from both $\alg{A}_t$ and $\alg{A}_k$ to $\twoU$. We deduce that $(\alpha\circ x)(a)=(\alpha\circ x)(b)$ if and only if $a\land_t b=a\lor_k b$. Our analysis yields the following theorem. \begin{thm}\label{Theo:ProdFunDBU} For $\alg{A} \in \unbounded{\DB}$ let $\theta_{\alg{A}}=\{\,(a,b)\in\alg{A}^2\mid a\land_t b=a\lor_k b\, \}$.
Let $\unbounded{\fnt{V}}\colon \unbounded{\DB}\to \unbounded{\CCD}$ and $\unbounded{\fnt{W}}\colon \unbounded{\CCD}\to \unbounded{\DB}$ be the functors defined as follows: \begin{alignat*}{3} &\text{on objects:} & \hspace*{1.9cm} & \alg{A} \longmapsto \unbounded{\fnt{V}}(\alg{A}) =\alg{A}_t/\theta_{\alg{A}}, \hspace*{1.9cm} \phantom{\text{on objects:}} &&\\ &\text{on morphisms:} & &\,\, h\longmapsto \unbounded{\fnt{V}}(h)\colon [a]_{\theta_{\alg{A}}} \mapsto [h(a)]_{\theta_{\alg{B}}}, \text{ where $h\colon \alg{A} \to \alg{B}$,} &&\\ \shortintertext{and} & \text{on objects:} & &\,\, \alg{L} \longmapsto \unbounded{\fnt{W}}(\alg{L}) =\alg{L}\odot\alg{L},&& \\ &\text{on morphisms:} & &\, \, g \longmapsto \unbounded{\fnt{W}}(g) \colon (a,b) \mapsto (g(a),g(b)). \end{alignat*} Then $\unbounded{\fnt{V}}$ and $\unbounded{\fnt{W}}$ are naturally equivalent to $\unbounded{\fnt K}\unbounded{\D}$ and $\unbounded{\E}\unbounded{\fnt H}$, respectively. \end{thm} We have the following corollary; cf.~\cite{RPhD,BJR11}. \begin{coro} \label{Theo:prodrep-nobounds} {\rm(Product Representation Theorem for unbounded distributive bilattices)} Let $\alg{A} \in \unbounded{\DB}$. Then there exists a distributive lattice $\alg{L}$ such that $\alg{A} \cong \alg{L} \odot \alg{L}$. Here the lattice $\alg{L}$ may be identified with the quotient $\alg{A}_t/\theta$, where~$\theta$ is the $\unbounded{\CCD}$-congruence given by $a\, \theta \, b$ if and only if $a\land_t b=a\lor_k b$. \end{coro} \begin{figure} [h] \begin{center} \begin{tikzpicture}[scale=.85, auto, text depth=0.25ex, move up/.style= {transform canvas={yshift=1.9pt}}, move down/.style= {transform canvas={yshift=-1.9pt}}, move left/.style= {transform canvas={xshift=-2.5pt}}, move right/.style={transform canvas={xshift=2.5pt}}] \matrix[row sep= 1cm, column sep= 1.42cm] { \node (DB) {$\cat{DB}$}; & \node (P) {$\CP$}; & \node (D) {$\cat D$}; & \node (DBU) {$\unbounded{\DB}$}; & \node (P01) {$\CP_{01}$}; & \node (DU) {$\unbounded{\CCD}$};\\ }; \draw [->, move up] (DB) to node [yshift=-2pt] {$\fnt D$}(P); \draw [<-,move down] (DB) to node [swap] {$\fnt E$}(P); \draw [->,move up] (P) to node [yshift=-2pt] {$\fnt{K}$}(D); \draw [<-,move down] (P) to node [swap] {$\fnt{H}$}(D); \draw [->, move up] (DBU) to node [yshift=-2pt] {$\unbounded{\D}$}(P01); \draw [<-,move down] (DBU) to node [swap] {$\unbounded{\E}$}(P01); \draw [->,move up] (P01) to node [yshift=-2pt] {$\unbounded{\fnt K}$}(DU); \draw [<-,move down] (P01) to node [swap] {$\unbounded{\fnt H}$}(DU); \draw [->] (DB) .. controls +(1,1) and +(-1,1) .. node {$\fnt{V}$} (D); \draw [<-] (DB) .. controls +(1,-1) and +(-1,-1) .. node [swap] {$\fnt{W}$} (D); \draw [->] (DBU) .. controls +(1,1) and +(-1,1) .. node {$\unbounded{\fnt{V}}$} (DU); \draw [<-] (DBU) .. controls +(1,-1) and +(-1,-1) .. node [swap] {$\unbounded{\fnt{W}}$} (DU); \end{tikzpicture} \end{center} \caption{The categorical equivalences in Theorems~\ref{Theo:ProdFunDB} and~\ref{Theo:ProdFunDBU}}\label{fig:Equiv} \end{figure} Figure~\ref{fig:Equiv} summarises the categorical equivalences and dual equivalences involved in our approach, for both the bounded and unbounded cases. As noted in the introduction, our approach leads directly to categorical dualities, without the need to verify explicitly that the constructions are functorial: compare our presentation with that in \cite[pp.~117--120]{MPSV} and note also the work carried out to set up categorical equivalences on the algebra side in \cite[Section~5]{BJR11}.
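To see the object-level content of these equivalences in miniature, the following informal Python fragment (not part of the formal development; the three-element chain is a hypothetical choice of $\alg{L}$) applies $\fnt{W}$ and then $\fnt{V}$, in the bounded setting of Theorem~\ref{Theo:ProdFunDB}, and checks that the interval $[0_k,1_t]$ of $(\alg{L}\odot\alg{L})_t$ returns a copy of $\alg{L}$ via $(a,0)\leftrightarrow a$.
\begin{verbatim}
# Informal sketch: V(W(L)) recovers L, for a toy chain L = {0 < 1 < 2}.
L = [0, 1, 2]
bot, top = min(L), max(L)
B = [(a, b) for a in L for b in L]        # W(L) = L odot L

def leq_t(x, y):                          # the t-order on L odot L
    return x[0] <= y[0] and x[1] >= y[1]

zero_k = (bot, bot)                       # 0_k = (0, 0)
one_t  = (top, bot)                       # 1_t = (1, 0), in the bounds of L

interval = [x for x in B if leq_t(zero_k, x) and leq_t(x, one_t)]
# V(W(L)) = [0_k, 1_t] consists exactly of the pairs (a, bot), a copy of L.
assert sorted(interval) == [(a, bot) for a in L]
\end{verbatim}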
\section{Applications of the natural dualities for $\cat{DB}$ and $\unbounded{\DB}$}\label{Sec:Applications} In this section we demonstrate how the natural dualities we have developed so far lead easily to answers to questions of a categorical nature concerning $\cat{DB}$ and $\unbounded{\DB}$. Using the categorical equivalence between $\cat{DB}$ and $\cat D$, and that between $\unbounded{\DB}$ and $\unbounded{\CCD}$, it is possible directly to translate certain concepts from one context to another. We shall concentrate on~$\cat{DB}$. Analogous results can be obtained for~$\unbounded{\DB}$ and we mention these explicitly only where this seems warranted. We shall describe the following, in more or less detail: limits and colimits; free algebras; and projective and injective objects. These topics are very traditional, and our aim is simply to show how our viewpoint allows descriptions to be obtained, with the aid of duality, from corresponding descriptions in the context of distributive lattices. The results we obtain here are new, but unsurprising. We shall also venture into territory less explored by duality methods and consider unification type, and also admissible quasi-equations and clauses; here substantially more work is involved. It will be important for certain of the applications that we are dealing with strong, rather than merely full, dualities. Specifically we shall make use of the fact that if functors $\fnt D\colon {\cat A}\to \CX$ and $\fnt E\colon \CX \to {\cat A}$ set up a strong duality then surjections (injections) in ${\cat A}$ correspond to embeddings (surjections) in $\CX$; see \cite[Lemma~3.2.6]{CD98}. On a technical point, we note that we always assume that an algebra has a non-empty universe. \subsection*{Limits and colimits} \ Since $\cat{DB}$ is a variety, the forgetful functor into the category $\cat{SET}$ of sets has a left adjoint. As a consequence all limits in $\cat{DB}$ are calculated as in $\cat{SET}$ (see \cite[Section~V.5]{McL}), and this renders them fairly easy to handle, with products being cartesian products and equalisers being calculated in $\cat{SET}$. (We refer the reader to \cite[Section~V.2]{McL} where the procedure to construct arbitrary limits from products and equalisers is fully explained.) The calculation of colimits is more involved. The categorical equivalence between~$\cat{DB}$ and~$\cat D$ implies that if $\fnt{S}$ is a diagram in $\cat{DB}$ then \[ \mathrm{Colim\,} \fnt{S} \cong\fnt E\fnt{H}\bigl(\mathrm{Colim\,}\fnt{KDS} \bigr)\cong \fnt{W}\bigl(\mathrm{Colim\,}\fnt{VS} \bigr). \] This observation transfers the problem from one category to the other, but does not by itself solve it. However we can then use the natural duality derived in Theorem~\ref{DBnatdual} in particular to compute finite colimits. We rely on the fact that colimits in $\cat{DB}$ correspond to limits in $\CP$. Such limits are easily calculated, since cartesian products and equalisers of Priestley spaces are again in $\CP$. (Corresponding statements hold for $\unbounded{\DB}$ and $\CP_{01}$~\cite[Section 1.4]{CD98}.) Congruences can be seen as particular cases of colimits, specifically as co-equalisers. This implies, on the one hand, that the congruences of an algebra in $\cat{DB}$ or in $\unbounded{\DB}$ are in one-to-one correspondence with those substructures of its natural dual that arise as equalisers. 
Since~$\cat{DB}$ is a variety and Theorem~\ref{DBnatdual} supplies a strong duality, the lattice of congruences of an algebra~$\alg{A}$ in $\cat{DB}$ is dually isomorphic to the lattice of closed substructures of its dual space (see \cite[Theorem III.2.1]{CD98}). Simultaneously, the lattice of congruences of ${\alg{A}\in\cat{DB}}$ is isomorphic to the lattice of congruences of $\fnt{KD}(\alg{A})\in\cat D$. Likewise, from Theorem~\ref{DBUnatdual}, for each ${\alg{A}\in\unbounded{\DB}}$ the congruence lattice of $\alg{A}$ is isomorphic to the congruence lattice of $\unbounded{\fnt K}\unbounded{\D}(\alg{A})\in\unbounded{\CCD}$. The latter result was proved for interlaced bilattices in \cite[Chapter II]{RPhD} using the product representation. \subsection*{Free algebras}\ A natural duality gives direct access to a description of free objects: If an alter ego $\twiddle{\spc{M}}$ yields a duality on ${\cat A} = \ISP(\alg{M})$, then the power $\twiddle{\spc{M}}^\lambda $ is the natural dual of the free algebra in ${\cat A}$ on~$\lambda$ generators (see \cite[Corollary II.2.4]{CD98}). We immediately obtain $ \mathbf{F}_{\cat{DB}}(\lambda)\cong \fnt E_{\cat{DB}}\bigl(\twiddle 4^{\lambda}\bigr) $ where $\lambda$ is a cardinal and $\mathbf{F}_{\cat{DB}}(\lambda)$ denotes the free algebra on $\lambda$ generators in $\cat{DB}$; the free generators correspond to the projection maps. Because $\twiddle 4=\twiddle 2^2$, we have $\fnt{KD}(\mathbf{F}_{\cat{DB}}(\lambda))\cong\mathbf{F}_{\cat D}(2\lambda)$. Therefore \[\mathbf{F}_{\cat{DB}}(\lambda)\cong\fnt{EH}(\mathbf{F}_{\cat D}(2\lambda))\cong \mathbf{F}_{\cat D}(2\lambda)\odot\mathbf{F}_{\cat D}(2\lambda).\] Hence $\mathbf{F}_{\cat D}(2\lambda)\odot\mathbf{F}_{\cat D}(2\lambda)$ is the free bounded distributive bilattice on~$\lambda$ generators, the free generators being the pairs $(x_{2i-1},x_{2i})$ where $\{\,x_i\mid i\in 2\lambda\,\}$ is the set of free generators of $\mathbf{F}_{\cat D}(2\lambda)$. Analogous results hold for $\unbounded{\DB}$. \subsection*{Injective and projective objects}\ Injective, projective and weakly projective objects in $\cat D$ have been described (see \cite{BD} and the references therein; the definitions are given in Chapter I and results in Sections~V.9 and V.10). The notions of injective and projective object are preserved under categorical equivalences. For categories which are classes of algebras with homomorphisms as the morphisms, weak projectives are also preserved under categorical equivalences. A distributive lattice $\alg{L}$ (with bounds) is injective in $\cat D$ (and in $\unbounded{\CCD}$ too) if and only if it is complete and each element of $\alg{L}$ is complemented (see \cite[Section~V.9]{BD}). This implies that a distributive bilattice $\alg{A}$ is injective in $\cat{DB}$ if and only if $\alg{A}_k$ is complete (or equivalently $\alg{A}_t$ is complete) and each element of $\alg{A}$ is complemented in $\alg{A}_k$ (or equivalently in $\alg{A}_t$). Moreover, since~$\cat D$ has enough injectives, the same is true of $\cat{DB}$. Corresponding statements can be made for $\unbounded{\DB}$. The algebra $\boldsymbol 2$ is the only projective of $\cat D$ \cite[Section V.10]{BD}. Hence $\boldsymbol 4$ is the only projective in $\cat{DB}$. The general description of weak projectives in $\cat D$ is rather involved (see \cite[Section V.10]{BD}).
But in the case of finite algebras there is a simple dual characterisation: a finite bounded distributive lattice is weakly projective in $\cat D$ if and only if its dual space is a lattice. This translates to bilattices: a finite distributive bilattice is weakly projective in $\cat{DB}$ if and only if its natural dual is a lattice, or equivalently if the family of homomorphisms into $\boldsymbol 4$, ordered pointwise by~$\leq_k$, forms a lattice. In the unbounded case we note that $\unbounded{\DB}$ has no projectives since $\unbounded{\CCD}$ has none, and that a finite member~$\alg{A}$ of $\unbounded{\DB}$ is weakly projective if and only if $\unbounded{\D}(\alg{A})$ is a lattice. \subsection*{Unification type} \ The notion of \defn{unification} was introduced by Robinson in \cite{Ro1965}. Loosely, (syntactic) unification is the process of finding substitutions that equalise pairs of terms. When terms are considered up to equivalence modulo an equational theory, rather than up to syntactic equality, the notion of unification evolves to encompass the concept of \defn{equational unification}. We refer the reader to \cite{BS2001} for the general definitions and background theory of unification. To study the unification type of bilattices we shall use the notion of algebraic unification developed by Ghilardi in \cite{Gh97}. Let $\alg{A}$ be a finitely presented algebra in a quasivariety ${\cat A}$. A \defn{unifier} for $\alg{A}$ in ${\cat A}$ is a homomorphism $u\colon\alg{A}\to\alg{P}$, where $\alg{P}$ is a finitely generated weakly projective algebra in ${\cat A}$. (In \cite{Gh97} weakly projective algebras are called regular projective or simply projective.) An algebra $\alg{A}$ is said to be \defn{solvable} in ${\cat A}$ if there exists at least one unifier for it. Let $u_i \colon \alg{A} \to \alg{P}_i$ for $i\in\{1,2\}$ be unifiers for $\alg{A}$ in ${\cat A}$. Then $u_1$ is \defn{more general} than~$u_2$, in symbols, $u_2 \leq u_1$, if there exists a homomorphism $f \colon \alg{P}_1 \to \alg{P}_2$ such that $f \circ u_1=u_2$. A unifier $u$ for $\alg{A}$ is said to be a \defn{most general} unifier (an mg-unifier) of $\alg{A}$ in ${\cat A}$ if $u \leq u'$ implies $u'\leq u$. For $\alg{A}$ solvable in ${\cat A}$ the \defn{type} of $\alg{A}$ is defined as follows: \begin{longnewlist} \item[\emph{nullary\!}] if there exists a unifier $u$ of $\alg{A}$ such that $u\not\leq v$ for each mg-unifier~$v$ of $\alg{A}$ (in symbols, $\mathrm{type}_{{\cat A}}(\alg{A})=0$); \item[\emph{unitary\!}] if there exists a unifier $u$ of $\alg{A}$ such that $v\leq u$ for each unifier $v$ of~$\alg{A}$ ($\mathrm{type}_{{\cat A}}(\alg{A})=1$); \item[\emph{finitary\!}] if there exists a finite set $U$ of mg-unifiers of $\alg{A}$ such that for each unifier $v$ of $\alg{A}$ there exists $u\in U$ with $v\leq u$, and for each unifier $v$ of $\alg{A}$ there exists a unifier $w$ of $\alg{A}$ with $w\not\leq v$ ($\mathrm{type}_{{\cat A}}(\alg{A})=\omega$); and \item[\emph{\ infinitary\!\!}] otherwise ($\mathrm{type}_{{\cat A}}(\alg{A})=\infty$). \end{longnewlist} In \cite{BC}, an algorithm to classify finitely presented bounded distributive lattices by their unification type was presented. Since the unification type of an algebra is a categorical invariant (see \cite{Gh97}), the results in \cite{BC} can be combined with the equivalence between $\cat{DB}$ and $\cat D$ to investigate the unification types of finite distributive bilattices.
Moreover, since the results in \cite{BC} were obtained using Priestley duality for~$\cat D$, we can directly translate the results to bilattices and their natural duals. This yields the following characterisation. Let $\alg{A}$ be a finitely presented (equivalently, finite) bounded distributive bilattice. Then $\alg{A}$ is solvable in $\cat{DB}$ if and only if it is non-trivial and \[ \mathrm{type}_{\cat{DB}}(\alg{A})=\begin{cases} 1&\mbox{ if }\fnt D_{\cat{DB}}(\alg{A})\mbox{ is a lattice, i.e., if }\alg{A}\mbox{ is weakly projective,}\\ \omega&\mbox{ if }\fnt D_{\cat{DB}}(\alg{A})\mbox{ is not a lattice and}\\ &\mbox{ \hspace*{.3cm} for each } x,y\in\fnt D_{\cat{DB}}(\alg{A})\mbox{ the interval } [x,y]\mbox{ is a lattice,}\\ 0&\mbox{ otherwise.} \end{cases} \] In \cite{BC} the corresponding theory for unbounded distributive lattices was not developed. With minor modifications to the proofs presented there, it is easy to extend the results to $\unbounded{\CCD}$. Its translation to $\unbounded{\DB}$ is as follows. Each finite algebra $\alg{A}$ in $\unbounded{\DB}$ is solvable and \[ \mathrm{type}_{\unbounded{\DB}}(\alg{A} )=\begin{cases} 1&\mbox{ if }\fnt D_{\unbounded{\DB}}(\alg{A})\mbox{ is a lattice, i.e., if }\alg{A}\mbox{ is weakly projective,}\\ 0&\mbox{ otherwise.} \end{cases} \] \subsection*{Admissibility}\ The concept of admissibility was introduced by Lorenzen for intuitionistic logic \cite{Lo1955}. Informally, a rule is admissible in a logic if adding it to the system does not modify the notion of theoremhood. The study of admissible rules for logics that admit an algebraic semantics has led to the investigation of admissible rules for equational logics of classes of algebras. For background on admissibility we refer the reader to \cite{Ry97}. A \defn{clause} in an algebraic language $\mathcal L$ is an ordered pair of finite sets of $\mathcal L$-identities, written $(\Sigma, \Delta)$. Such a clause is called a \defn{quasi-identity} if $\Delta$ contains only one identity. Let ${\cat A}$ be a quasivariety of algebras with language $\mathcal L$. We say that the $\mathcal L$-clause $(\Sigma,\Delta)$ is \defn{valid} in ${\cat A}$ (in symbols $\Sigma\vDash_{{\cat A}}\Delta$) if for every $\alg{A} \in {\cat A}$ and homomorphism $h \colon {\alg{Term_{\mathcal L}}} \to \alg{A}$, we have that $\Sigma \subseteq \ker h$ implies $\Delta \cap \ker h \not = \emptyset$, where ${\alg{Term_{\mathcal L}}}$ denotes the term (or absolutely free) algebra for~$\mathcal L$ over countably many variables (we are assuming that $\Sigma\cup\Delta\subseteq {\alg{Term_{\mathcal L}}}^2$). For simplicity we shall work with the following equivalent definition of admissible clause: the clause $(\Sigma,\Delta)$ is called \emph{admissible in~${\cat A}$} if it is valid in the free ${\cat A}$-algebra on countably many generators, ${\alg{F}_{{\cat A}}}(\aleph_0)$. Let ${\cat A}$ be a quasivariety. If a set of quasi-identities $\Lambda$ is such that an algebra $\alg{A}\in{\cat A}$ belongs to the quasivariety generated by ${\alg{F}_{{\cat A}}}(\aleph_0)$ if and only if $\alg{A}$ satisfies the quasi-identities in $\Lambda$, then $\Lambda$ is called a \defn{basis for the admissible quasi-identities of ${\cat A}$}.
\subsection*{Admissibility}\ The concept of admissibility was introduced by Lorenzen for intuitionistic logic \cite{Lo1955}. Informally, a rule is admissible in a logic if adding it to the system does not modify the notion of theoremhood. The study of admissible rules for logics that admit an algebraic semantics has led to the investigation of admissible rules for equational logics of classes of algebras. For background on admissibility we refer the reader to \cite{Ry97}. A \defn{clause} in an algebraic language $\mathcal L$ is an ordered pair of finite sets of $\mathcal L$-identities, written $(\Sigma, \Delta)$. Such a clause is called a \defn{quasi-identity} if $\Delta$ contains only one identity. Let ${\cat A}$ be a quasivariety of algebras with language $\mathcal L$. We say that the $\mathcal L$-clause $(\Sigma,\Delta)$ is \defn{valid} in ${\cat A}$ (in symbols $\Sigma\vDash_{{\cat A}}\Delta$) if for every $\alg{A} \in {\cat A}$ and homomorphism $h \colon {\alg{Term_{\mathcal L}}} \to \alg{A}$, we have that $\Sigma \subseteq \ker h$ implies $\Delta \cap \ker h \not = \emptyset$, where ${\alg{Term_{\mathcal L}}}$ denotes the term (or absolutely free) algebra for~$\mathcal L$ over countably many variables (we are assuming that $\Sigma\cup\Delta\subseteq {\alg{Term_{\mathcal L}}}^2$). For simplicity we shall work with the following equivalent definition of admissible clause: the clause $(\Sigma,\Delta)$ is called \emph{admissible in~${\cat A}$} if it is valid in the free ${\cat A}$-algebra on countably many generators, ${\alg{F}_{{\cat A}}}(\aleph_0)$. Let ${\cat A}$ be a quasivariety. If a set of quasi-identities $\Lambda$ is such that $\alg{A}\in{\cat A}$ belongs to the quasivariety generated by ${\alg{F}_{{\cat A}}}(\aleph_0)$ if and only if $\alg{A}$ satisfies the quasi-identities in $\Lambda$, then $\Lambda$ is called a \defn{basis for the admissible quasi-identities of ${\cat A}$}. Similarly, $\Lambda$ is called a \defn{basis for the admissible clauses of ${\cat A}$} if $\alg{A}$ satisfies the clauses in $\Lambda$ if and only if $\alg{A}$ is in the universal class generated by $\alg{F}_{{\cat A}}(\aleph_0)$, that is, if and only if $\alg{A}$ satisfies the same clauses as $\alg{F}_{{\cat A}}(\aleph_0)$ does. In the case of a locally finite quasivariety, checking that a set of clauses or quasi-identities is a basis can be restricted to finite algebras. \begin{lem} \normalfont{\cite{CM}} \label{Lem:CM} Let ${\cat A}$ be a locally finite quasivariety and let $\Lambda$ be a set of clauses in the language of ${\cat A}$. \begin{newlist} \item[{\rm (i)}] The following statements are equivalent: \begin{newlist} \item[{\rm (a)}] for each finite $\alg{A} \in {\cat A}$ it is the case that $\alg{A} \in \ope{IS}(\alg{F}_{{\cat A}}(\aleph_0))$ if and only if $\alg{A}$ satisfies~$\Lambda$; \item[{\rm (b)}] $\Lambda$ is a basis for the admissible clauses of ${\cat A}$. \end{newlist} \item[{\rm (ii)}] If the set $\Lambda$ consists of quasi-identities, then the following statements are equivalent: \begin{newlist} \item[{\rm (a)}] for each finite $\alg{A} \in {\cat A}$ it is the case that $\alg{A} \in \ISP(\alg{F}_{\cat A}(\aleph_0))$ if and only if $\alg{A}$ satisfies~$\Lambda$; \item[{\rm (b)}] $\Lambda$ is a basis for the admissible quasi-identities of~${\cat A}$. \end{newlist} \end{newlist} \end{lem} In \cite{CM}, using this lemma and the appropriate natural dualities, bases for admissible quasi-identities and clauses were presented for various classes of algebras---bounded distributive lattices, Stone algebras and De Morgan algebras, among others. Here we follow the same strategy using the dualities for $\cat{DB}$ and $\unbounded{\DB}$ developed in Sections~\ref{sec:DB} and~\ref{sec:DBU}. \begin{lem}\label{Lem:AdmDB} Let $\alg{A}$ be a finite distributive bilattice. \begin{newlist} \item[{\rm (i)}] $\alg{A}\in \ISP(\alg{F}_{\cat{DB}}(\aleph_0))$. \item[{\rm (ii)}] The following statements are equivalent: \begin{newlist} \item[{\rm (a)}] $\alg{A}\in \ope{IS}(\alg{F}_{\cat{DB}}(\aleph_0))$; \item[{\rm (b)}] $\fnt D_{\cat{DB}}(\alg{A})$ is a non-empty bounded poset; \item[{\rm (c)}] $\alg{A}$ satisfies the following clauses: \begin{newlist} \item[{\rm (1)}] $ ( \{ x\land_k y\approx 1_t \}, \{x\approx 1_t,\ y\approx1_t\} )$, \item[{\rm (2)}] $ ( \{ x\lor_k y\approx 1_t\}, \{x\approx 1_t, \ y\approx 1_t\} )$, \item[{\rm (3)}] $ ( \{0_t\approx 1_t\}, \emptyset )$. \end{newlist} \end{newlist} \end{newlist} \end{lem} \begin{proof} To prove (i) it is enough to observe that $\boldsymbol 4$ is a subalgebra of any non-trivial algebra in~$\cat{DB}$, and therefore $\cat{DB}=\ISP(\boldsymbol 4)\subseteq\ISP(\alg{F}_{\cat{DB}}(\aleph_0))\subseteq\cat{DB}$. To prove (ii)(a)$\Rightarrow$(ii)(b), let $h\colon\alg{A}\to \alg{F}_{\cat{DB}}(\aleph_0)$ be an injective homomorphism. Then the map $ \fnt D_{\cat{DB}}(h)\colon \fnt D_{\cat{DB}}(\alg{F}_{\cat{DB}}(\aleph_0))\to \fnt D_{\cat{DB}}(\alg{A}) $ is an order-preserving continuous map onto $\fnt D_{\cat{DB}}(\alg{A})$. Since $ \fnt D_{\cat{DB}}(\alg{F}_{\cat{DB}}(\aleph_0))\cong \twiddle 4^{\aleph_0}$ is bounded and non-empty, so is $\fnt D_{\cat{DB}}(\alg{A})$. We next prove the converse, namely (ii)(b) $\Rightarrow$ (ii)(a).
Let $\mathbf{t},\mathbf{b}\colon\alg{A}\to\boldsymbol 4$ be the top and bottom elements of $\fnt D_{\cat{DB}}(\alg{A})$ and let $\{\mathbf{t},\mathbf{b},x_1,\ldots, x_n\}$ be an enumeration of the elements of the finite set $\fnt D_{\cat{DB}}(\alg{A})$. Let $\spc{P}=\twiddle 4^{n}$; then $\fnt E_{\cat{DB}}(\spc{P})$ is the free bounded distributive bilattice on $n$ generators, and hence $\fnt E_{\cat{DB}}(\spc{P})$ belongs to $\ope{IS}(\alg{F}_{\cat{DB}}(\aleph_0))$. Now define $f\colon \spc{P}\to \fnt D_{\cat{DB}}(\alg{A})$ by \[ f(c_1,\ldots,c_{n})=\begin{cases} \mathbf{b} & \mbox{ if }c_i=0_k\mbox{ for each }i\in\{1,\ldots,n\},\\ x_i & \mbox{ if }c_i\neq0_k,\mbox{ and } c_j=0_k\mbox{ for each }j\in\{1,\ldots,n\}\setminus\{i\}, \\ \mathbf{t} & \mbox{ otherwise.}\\ \end{cases} \] It is easy to check that $f$ is order-preserving and maps $\spc{P}$ onto $\fnt D_{\cat{DB}}(\alg{A})$. Since the natural duality of Theorem~\ref{DBnatdual} is strong, the dual homomorphism $\fnt E_{\cat{DB}}(f)\colon\fnt{ED}(\alg{A})\to \fnt E_{\cat{DB}}(\spc{P})$ is injective. Hence \[\alg{A}\cong\fnt{ED}(\alg{A})\in\ope{IS}(\fnt E_{\cat{DB}}(\spc{P}))\subseteq\ope{IS}(\alg{F}_{\cat{DB}}(\aleph_0)).\] We now prove (ii)(b) $\Rightarrow$ (ii)(c). Let $\mathbf{t}\colon\alg{A}\to\boldsymbol 4$ be the top element of $\fnt D_{\cat{DB}}(\alg{A})$ and assume that $a,b\in\alg{A}$ are such that $a\land_k b=1_t$. If we assume that $a\neq 1_t \neq b$, then there exist $h_1,h_2\colon \alg{A}\to \boldsymbol 4$ such that $1_t<_k h_1(a)$ and $1_t<_k h_2(b)$. Since the order in $\fnt D_{\cat{DB}}(\alg{A})$ is determined pointwise by $\leq_k$, we then have $1_t <_k \mathbf{t}(a),\mathbf{t}(b)$. Hence $\mathbf{t}(a)=\mathbf{t}(b)=1_k$ and $\mathbf{t}(a\land_k b)=1_k\neq 1_t$, a contradiction. Therefore $a= 1_t$ or $b=1_t$. A similar argument proves that $\fnt D_{\cat{DB}}(\alg{A})$ having a lower bound implies that clause (2) is valid in $\alg{A}$. If $\alg{A}\in\cat{DB}$ is such that $0_t=1_t$, then $\alg{A}$ is trivial and $\fnt D_{\cat{DB}}(\alg{A})$ is empty. This proves that clause~(3) is valid in any algebra $\alg{A}$ whose natural dual $\fnt D_{\cat{DB}}(\alg{A})$ is non-empty. Finally we prove (ii)(c) $\Rightarrow$ (ii)(b). Let $F=\{\,c\in\alg{A}\mid 1_t\leq_k c\,\}$. By clause~(3), $\alg{A}$ is non-trivial, so $0_t\notin F$. By clause (2), $F$ is a prime $k$-filter and it contains~$1_t$. Thus it is a prime $t$-filter, as observed at the end of Section~\ref{sec:DBilat}. Let $x\colon\alg{A}\to\boldsymbol 2$ be the characteristic function of~$F$. Then the map $f\colon\alg{A}\to \boldsymbol 4$ defined for each $a\in A$ by $f(a)=\bigl(x(a),1-x(\neg a)\bigr)$ is a well-defined bilattice homomorphism, as observed after Theorem~\ref{sep-prop-bdd}. We shall prove that $f$ is the bottom element of $\fnt D_{\cat{DB}}(\alg{A})$. Let $h\in\fnt D_{\cat{DB}}(\alg{A})$ and $a\in A$. If $a\in F$ and $\neg a \notin F$, since $1_t\leq_k a$, then $f(a)=1_t\leq_k h(a)$. If $a,\neg a\in F$, then $1_t\leq_k h(a),h(\neg a)$, whence $h(a)=1_k=f(a)$. The other two cases follow by a similar argument. Then $f(a)\leq_k h(a)$ for each $a\in A$. This proves that $f\leq h$ in $\fnt D_{\cat{DB}}(\alg{A})$. By a similar argument the validity of clause (1) implies that $\fnt D_{\cat{DB}}(\alg{A})$ is upper-bounded. \end{proof} Combining Lemmas~\ref{Lem:CM} and~\ref{Lem:AdmDB} we obtain the following theorem. \begin{thm} Every admissible quasi-identity in $\cat{DB}$ is also valid in $\cat{DB}$. Moreover, the following clauses form a basis for the admissible clauses of $\cat{DB}$: \begin{multline*} ( \{ x\land_k y\approx 1_t \}, \{x\approx 1_t,\ y\approx1_t\} ), \ \ ( \{ x\lor_k y\approx 1_t\}, \{x\approx 1_t, \ y\approx 1_t\} ) \\ \mbox{ and }\ (\{0_t\approx 1_t\},\emptyset). \end{multline*} \end{thm}
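It is instructive to check the content of this theorem mechanically on small algebras. The brute-force Python sketch below (the pair encoding of $\boldsymbol 4$, with the knowledge order taken pointwise, and all names are ours) confirms that the basis clauses hold in $\boldsymbol 4$, whose natural dual is a one-element and hence bounded poset, while clause (1) fails in $\boldsymbol 4\times\boldsymbol 4$, whose natural dual is a two-element antichain; in line with Lemma~\ref{Lem:AdmDB}, the clauses are thus admissible but not valid in $\cat{DB}$.
\begin{verbatim}
from itertools import product

# FOUR encoded as pairs: 0_k=(0,0), 1_k=(1,1), 1_t=(1,0), 0_t=(0,1);
# in this (our) encoding <=_k is the pointwise order on pairs.
FOUR = [(a, b) for a in (0, 1) for b in (0, 1)]
ONE_T, ZERO_T, ONE_K = (1, 0), (0, 1), (1, 1)

def meet_k(x, y): return (min(x[0], y[0]), min(x[1], y[1]))
def join_k(x, y): return (max(x[0], y[0]), max(x[1], y[1]))

def clause(op, x, y):
    """(op(x, y) = 1_t) implies (x = 1_t or y = 1_t)."""
    return op(x, y) != ONE_T or x == ONE_T or y == ONE_T

# Clauses (1) and (2) hold in FOUR, and clause (3) holds because its
# premise 0_t = 1_t fails in the non-trivial algebra FOUR:
print(all(clause(meet_k, x, y) and clause(join_k, x, y)
          for x, y in product(FOUR, repeat=2)))   # True
print(ZERO_T != ONE_T)                            # True

# Clause (1) fails in FOUR x FOUR (componentwise operations):
# with x = (1_t, 1_k) and y = (1_k, 1_t) the premise holds, yet
# neither x nor y equals 1_t = (1_t, 1_t) of the product.
x, y = (ONE_T, ONE_K), (ONE_K, ONE_T)
print((meet_k(x[0], y[0]), meet_k(x[1], y[1])) == (ONE_T, ONE_T))  # True
\end{verbatim}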
The clauses presented in the previous theorem use the $k$-lattice operations in order to simplify the proof of Lemma~\ref{Lem:AdmDB}. We can use Lemma~\ref{90deg} to rewrite the clauses using only constants and $t$-lattice operations. \begin{lem}\label{Lem:AdmDBU} Every finite unbounded distributive bilattice $\alg{A}$ is isomorphic to a subalgebra of $\alg{F}_{\unbounded{\DB}}(\aleph_0)$. \end{lem} \begin{proof} Let $\unbounded{\D}(\alg{A})=(X;\leq,\top,\bot,{\mathscr{T}})$. Since we assume that every algebra is non-empty, $X$ is non-empty. Let $X=\{\top,\bot,x_1,\ldots,x_n\}$ be an enumeration of the elements of $X$. Let $\spc{Q}=(\twiddle{\unbounded{4}})^{n}$. Then $\unbounded{\E}(\spc{Q})$ is the free algebra in $\unbounded{\DB}$ on $n$ generators and it belongs to $\ope{IS}(\alg{F}_{\unbounded{\DB}}(\aleph_0))$. Define $f\colon \spc{Q}\to \unbounded{\D}(\alg{A})$ by \[ f(c_1,\ldots,c_{n})=\begin{cases} \bot & \mbox{ if }c_i=0_k\mbox{ for each }i\in\{1,\ldots,n\},\\ x_i & \mbox{ if }c_i\neq0_k\mbox{ and } c_j=0_k\mbox{ for each }j\in\{1,\ldots,n\}\setminus\{i\} ,\\ \top & \mbox{ otherwise.}\\ \end{cases} \] Then $f$ is a continuous order-preserving map from $\spc{Q}$ onto $\unbounded{\D}(\alg{A})$. Since the duality presented in Theorem~\ref{DBUnatdual} is strong, $\unbounded{\E}(f)\colon\unbounded{\E}\unbounded{\D}(\alg{A})\to \unbounded{\E}(\spc{Q})$ is injective. Then $\alg{A}\in\ope{IS}(\alg{F}_{\unbounded{\DB}}(\aleph_0))$. \end{proof} The following theorem follows directly from Lemmas~\ref{Lem:CM} and~\ref{Lem:AdmDBU}. \begin{thm} Every admissible clause in $\unbounded{\DB}$ is also valid in $\unbounded{\DB}$. \end{thm} \section{Multisorted natural dualities} \label{sec:multi} We have delayed presenting dualities for pre-bilattice varieties because, to fit $\cat{DPB}$ and $\unbounded{\DPB}$ into our general representation scheme, we shall draw on the multisorted version of natural duality theory. This originated in \cite{DP87} and is summarised in \cite[Chapter 7]{CD98}. It is applicable in particular to the situation that interests us, in which we have a quasivariety ${\cat A} = \ISP(\alg{M}_1,\alg{M}_2)$, where~$\alg{M}_1$ and $\alg{M}_2$ are non-isomorphic finite algebras of common type having a reduct in $\cat D$ or $\unbounded{\CCD}$. We require the theory only for algebras $\alg{M}_1$ and~$\alg{M}_2$ of size~two. We do not set up the machinery of piggybacking, opting instead to work with the multisorted version of the NU Duality Theorem, as given in \cite[Theorem~7.1.2]{CD98}, in a form adequate to yield strong dualities for $\cat{DPB}$ and $\unbounded{\DPB}$. We now give just enough information to enable us to formulate the results we require. The ideas parallel those presented in Section~\ref{piggyonesorted}. Given ${\cat A} = \ISP(\alg{M}_1,\alg{M}_2)=\ISP(\class{M})$, we shall initially consider an alter ego for $\class{M}$ which takes the form $ \twiddle{\CM} = (M_1 \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, M_2; R, {\mathscr{T}})$, where $R$ is a set of relations each of which is a subalgebra of some $\alg{M}_i \times \alg{M}_j$, where $i,j \in \{1,2\}$.
(To obtain a strong duality we may need to allow for nullary operations as well, but for simplicity we defer introducing this refinement.) The alter ego $\twiddle{\CM}$ is given the disjoint union topology derived from the discrete topology on $\alg{M}_1$ and $\alg{M}_2$. We may then form multisorted topological structures~$\spc{X} = \spc{X}_1 \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, \spc{X}_2 $ where each of the sorts $\spc{X}_i$ is a Boolean topological space,~$\spc{X}$ is equipped with the disjoint union topology and, regarded as a structure, $\spc{X}$ carries a set $R^\spc{X}$ of relations $r^\spc{X}$; if $r \subseteq M_i \times M_j$, then $r^\spc{X} \subseteq X_i \times X_j$. Given structures $\spc{X} $ and $\spc{Y}$ in $\CX$, a morphism $\phi \colon \spc{X} \to \spc{Y}$ is a continuous map which preserves the sorts, so that $\phi(X_i) \subseteq Y_i$, and which preserves the relational structure. The terms isomorphism, embedding, etc., extend in the obvious way to the multisorted setting. We define our dual category $\CX$ to have as objects those structures $\spc{X}$ which belong to $\IScP(\twiddle{\CM})$. Thus $\CX$ consists of isomorphic copies of closed substructures of powers of $\twiddle{\CM}$; here powers are formed `by sorts', and the relational structure is lifted pointwise to substructures of such powers in the expected way. We now define the hom-functors that will set up our duality. Given $\alg{A} \in {\cat A}$, we let $ \fnt D(\alg{A}) = {\cat A}(\alg{A},\alg{M}_1) \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, {\cat A}(\alg{A},\alg{M}_2)$, where ${\cat A}(\alg{A},\alg{M}_1) \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, {\cat A}(\alg{A},\alg{M}_2)$ is a (necessarily closed) substructure of $M_1^A \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, M_2^A$ with the relational structure defined pointwise. Given $\spc{X} = \spc{X}_1 \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, \spc{X}_2 \in \CX$, we may form the set $\CX(\spc{X},\twiddle{\CM})$ of $\CX$-morphisms from $\spc{X}$ into $\twiddle{\CM}$. This set acquires the structure of a member of ${\cat A}$ by virtue of viewing it as a subalgebra of the power $\alg{M}_1^{X_1} \times \alg{M}_2^{X_2}$. We define $\fnt E(\spc{X}) = \CX(\spc{X},\twiddle{\CM})$. Let $\fnt D$ and $\fnt E$ act on morphisms by composition in the obvious way. We then have well-defined functors $\fnt D \colon {\cat A} \to \CX$ and $\fnt E \colon \CX \to {\cat A}$. We say $\twiddle{\CM}$ \defn{yields a multisorted duality} if, for each $\alg{A} \in {\cat A}$, the natural multisorted evaluation map $e_{\alg{A}}$, given by $e_{\alg{A}} (a) \colon x \mapsto x(a)$, is an isomorphism from~$\alg{A}$ to $\fnt{ED}(\alg{A})$. The duality is \defn{full} if each evaluation map $\varepsilon_{\spc{X}} \colon \spc{X} \to \fnt D\fnt E(\spc{X})$ is an isomorphism. As before we do not present the definition of strong duality, noting only that a strong duality is necessarily full. The following very restricted form of \cite[Theorem~7.1.1]{CD98} will meet our needs. \begin{thm} {\rm(Multisorted NU Strong Duality Theorem, special case)}\label{Theo:NuTwoSorts} Let ${\cat A} = \ISP(\alg{M}_1,\alg{M}_2)$, where $\alg{M}_1,\alg{M}_2$ are two-element algebras of common type having lattice reducts.
Let $ \twiddle{\CM} = (\,M_1 \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, M_2 ; R, N,{\mathscr{T}}\, ) $ where $ N $ contains all one-element subalgebras of $\alg{M}_i$, for $i=1,2$, treated as nullary operations, $ R$ is the set $\bigcup \{\, \Su(\alg{M}_i\times\alg{M}_j)\mid i,j\in\{1,2\}\,\}$, and ${\mathscr{T}}$ is the disjoint union topology obtained from the discrete topology on $M_1$ and $M_2$. Then $\twiddle{\CM}$ yields a multisorted duality on ${\cat A}$ which is strong. \end{thm} \section{Dualities for distributive pre-bilattices} \label{sec:DPB} Paralleling our treatment of other varieties, we first record the result on the structure of $\unbounded{\DPB}$ and $\cat{DPB}$ that we shall require. \begin{prop} \label{sep-pre-bilat} {\rm (i)} $\unbounded{\DPB} = \ISP(\twoU^+,\twoU^-)$ and {\rm (ii)} $\cat{DPB} = \ISP(\boldsymbol 2^+,\boldsymbol 2^-)$. \end{prop} \begin{proof} Let $\alg{A} \in \unbounded{\DPB}$ and let $a \ne b$ in $\alg{A}$. Since $\unbounded{\CCD}= \ISP(\twoU)$, there exists $x \in \unbounded{\CCD}(\alg{A}_t,\twoU)$ with $x(a) \ne x(b)$. The relation $\theta$ given by $c \, \theta \, d$ if and only if $x(c) = x(d)$ is a $t$-lattice congruence and hence, by Proposition~\ref{lem:cong2}, a $\unbounded{\DPB}$-congruence. The associated quotient algebra has two elements, and is necessarily (isomorphic to) either $\twoU^+$ or $\twoU^-$. This proves~(i). The same form of argument works for (ii), the only difference being that the map~$x$ now also preserves bounds. \end{proof} The following two theorems are consequences of the Multisorted NU Duality Theorem. We consider $\cat{DPB}$ first since the absence of one-element subalgebras makes matters particularly simple. We tag elements with $\pm$ to indicate which $2$-element algebra they belong to. In both cases we could use either $\leq_k$ or $\leq_t$ as the subalgebra of the square in either component. The choice we make mirrors that forced when negation is present. The choice will affect how the translation to the Priestley-style duality operates, but not the resulting duality. \begin{thm} \label{Theo:DPBduality} A strong natural duality for $\cat{DPB} = \ISP(\boldsymbol 2^+, \boldsymbol 2^-)$ is obtained as follows. Take $\class{M} = \{ \boldsymbol 2^+,\boldsymbol 2^-\}$ and as the alter ego \[ \twiddle{\CM} = (\{ 0^+,1^+\} \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, \{ 0^-,1^-\}; r^+, r^-, {\mathscr{T}}), \] where $r^+$ is $\leq_k$ on $\boldsymbol 2^+$ and $r^-$ is $\leq_k $ on $\boldsymbol 2^-$. Moreover $\cat{DPB}$ is dually equivalent to the category $\CX=\IScP(\twiddle{\CM} )$. \end{thm} \begin{proof} The algebras $\boldsymbol 2^+$, $\boldsymbol 2^-$, $\boldsymbol 2^+\times\boldsymbol 2^-$ and $\boldsymbol 2^-\times\boldsymbol 2^+$ have no proper subalgebras. The proper subalgebras of $\boldsymbol 2^+\times\boldsymbol 2^+$ are the diagonal subalgebra $\{(0,0),(1,1)\}$, and $\leq_k$ and its converse, and likewise for $\boldsymbol 2^-\times\boldsymbol 2^-$. \end{proof} Let $\twiddle{\CM} $ and $\CX$ be as in Theorem~\ref{Theo:DPBduality}. Since $r^+$ and $r^-$ are partial orders on the respective sorts, $(X_1,X_2;\leq_1,\leq_2,{\mathscr{T}})$ belongs to $\IScP(\twiddle{\CM})$ if and only if the topological posets $(X_1,\leq_1,{\mathscr{T}}{\upharpoonright}_{X_1})$ and $(X_2,\leq_2,{\mathscr{T}}{\upharpoonright}_{X_2})$ are Priestley spaces.
Moreover, since the morphisms in $\CX$ are continuous maps that preserve the sorts and both relations, we deduce that a categorical equivalence between~$\CX$ and $\CP\times\CP$ is set up by the functors $\fnt{F}\colon \CX\to \CP\times\CP$ and $\fnt{G}\colon \CP\times\CP\to \CX$ defined~by \begin{alignat*}{3} &\text{on objects:} & \hspace*{.95cm} & \spc{X}= (X_1\smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, X_2;\leq_1,\leq_2,{\mathscr{T}})\, \longmapsto \fnt{F}(\spc{X}) = \hspace*{.95cm}\phantom{\text{on objects:}}&& \\ && & \hspace*{3.5cm} \bigl( (X_1;\leq_1,{\mathscr{T}}{\upharpoonright}_{X_1}), (X_2;\leq_2,{\mathscr{T}}{\upharpoonright}_{X_2}) \bigr),&& \\ &\text{on morphisms:} & & \phantom{\spc{X} =(X_1 \smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, X_2;\leq_1,\leq_2,) } \, h \longmapsto \fnt{F}(h) = (h{\upharpoonright}_{X_1},h{\upharpoonright}_{X_2}), &&\\ \shortintertext{and} &\text{on objects:} & & \Z = \bigl((X;\leq_X,{\mathscr{T}}_X),(Y;\leq_Y,{\mathscr{T}}_Y) \bigr)\,\longmapsto \fnt{G}(\Z) = && \\ & & & \hspace*{5.5cm} (X\smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, Y;\leq_X,\leq_Y, {\mathscr{T}}), && \\ &\text{on morphisms:} & & \phantom{ X;\leq_X,{\mathscr{T}}_X),(Y;\leq_Y,{\mathscr{T}}_Y)\,\,\, } (f_1,f_2)\longmapsto \fnt{G}(f_1,f_2) = f_1\smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, f_2, && \end{alignat*} where ${\mathscr{T}}$ is the topology on $X\smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\, Y$ generated by ${\mathscr{T}}_X\smash{\,\cup\kern-0.45em\raisebox{1ex}{$\cdot$}}\,\,{\mathscr{T}}_Y$. Then the diagram in Figure~\ref{fig:EquivDPBCCDCCD} proves that $\cat{DPB}$ is categorically equivalent to $\cat D\times\cat D$, where $\fnt{H}\times\fnt{H}$ and $\fnt{K}\times\fnt{K}$ are the corresponding product functors. \begin{figure} [ht] \begin{center} \begin{tikzpicture} [auto, text depth=0.25ex, move up/.style= {transform canvas={yshift=1.9pt}}, move down/.style= {transform canvas={yshift=-1.9pt}}, move left/.style= {transform canvas={xshift=-2.5pt}}, move right/.style={transform canvas={xshift=2.5pt}}] \matrix[row sep= 1cm, column sep= 1.7cm] { \node (DPB) {$\cat{DPB}$}; & \node (X) {$\CX$}; & \node (PP) {$\CP\times\CP$}; & \node (DD) {$\cat D\times\cat D$};\\ }; \draw [->, move up] (DPB) to node [yshift=-2pt] {$\fnt D$}(X); \draw [<-,move down] (DPB) to node [swap] {$\fnt E$}(X); \draw [->,move up] (X) to node [yshift=-2pt] {$\fnt{F}$}(PP); \draw [<-,move down] (X) to node [swap] {$\fnt{G}$}(PP); \draw [->,move up] (PP) to node [yshift=-2pt] {$\fnt{K}\times\fnt{K}$}(DD); \draw [<-,move down] (PP) to node [swap] {$\fnt{H}\times\fnt{H}$}(DD); \end{tikzpicture} \end{center} \caption{Equivalence between $\cat{DPB}$ and $\cat D \times \cat D$ }\label{fig:EquivDPBCCDCCD} \end{figure} To obtain a strong duality for $\unbounded{\DPB}$ we need first to determine $\Su(\alg{M})$ and $\Su(\alg{M}\times\alg{M}')$ where $\alg{M},\alg{M}'\in\{\twoU^+, \twoU^-\}$. To determine which binary relations to include we can argue in much the same way as for $\Su(\unbounded{\four}^2)$. Decomposable subalgebras of $\Su(\alg{M}\times\alg{M}')$ can be discounted. It is simple to confirm that all indecomposable $\unbounded{\DPB}$-subalgebras are $\cat{DPB}$-subalgebras, and such subalgebras have already been identified in the proof of Theorem~\ref{Theo:DPBduality}. We omit the details. \begin{thm} \label{Theo:DPBUduality} A strong, and hence full, duality for $\unbounded{\DPB} = \ISP(\twoU^+, \twoU^-)$ is obtained as follows. 
Take $\class{M} = \{ \twoU^+,\twoU^-\}$ and as the alter ego \[ \twiddle{\CM} = (\{ 0^+,1^+\} \cup \{ 0^-,1^-\}; r^+, r^-, 0^+, 1^+, 0^-, 1^-,{\mathscr{T}}), \] where $r^+$ is $ \leq_k$ on $\twoU^{+}$ and $r^-$ is $\leq_k$ on $\twoU^{-}$, and the constants are treated as nullary operations. \end{thm} Reasoning as in the bounded case, $\CX=\IScP(\twiddle{\CM})$ is categorically equivalent to $\CP_{01}\times\CP_{01}$. Hence $\unbounded{\DPB}$ is categorically equivalent to $\unbounded{\CCD}\times\unbounded{\CCD}$. We have an exactly parallel situation to that shown in the diagram in Figure~\ref{fig:EquivDPBCCDCCD}. As an aside, we remark that we could generate $\unbounded{\DPB}$ as a quasivariety using the single generator ${\twoU^+ \times \twoU^-}$ and apply Theorem~\ref{genpigoneMu}. But there are some merits in working with the pair of algebras $\twoU^+$ and $\twoU^-$. Less work is involved in formulating a strong duality and in confirming that it is indeed strong. More importantly for our purposes, the translation to a Priestley-style duality is more transparent in the multisorted framework. As was done in Theorems~\ref{Theo:ProdFunDB} and~\ref{Theo:ProdFunDBU} for $\cat{DB}$ and $\unbounded{\DB}$, respectively, it is possible to develop a different (naturally equivalent) presentation of the functors that determine the equivalences between $\cat D\times\cat D$ and $\cat{DPB}$ and between $\unbounded{\CCD}\times\unbounded{\CCD}$ and $\unbounded{\DPB}$. This would lead to the known product decomposition of distributive pre-bilattices with and without bounds. We choose not to develop this here, since we would need to introduce the multisorted version of the piggyback duality (see \cite[Theorem~7.2.1]{CD98}). The results could then be obtained just by modifying the arguments used to prove Theorems~\ref{Theo:ProdFunDB} and~\ref{Theo:ProdFunDBU}. Also the applications presented in Section~\ref{Sec:Applications} can be extended to $\cat{DPB}$ and $\unbounded{\DPB}$ with the corresponding modifications. \section{Concluding remarks} \label{Sec:Conclude} With our treatment of representation theory for distributive bilattices now complete, we can take stock of what we have achieved. The scope of our work is somewhat different from that of other investigators of bilattices. Throughout we have restricted attention to the distributive case. We have not ventured into the territory of logical bilattices in this paper, but we do observe that such bilattices are customarily assumed to be distributive. Nevertheless we should comment on the role of distributivity, as compared with the weaker condition of interlacing. Any interlaced (pre-)bilattice has a product representation and, conversely, such a representation is available only if the two lattice structures are linked by interlacing. Accordingly the product representation features very strongly in the literature. As indicated in Section~\ref{sec:prodrep}, the dual representations obtained in \cite{MPSV} and in \cite{JR} build on Priestley duality as it applies to the varieties $\cat D$ and $\cat{DM}$. The setting, perforce, is now that in which the bilattice structures are distributive and have bounds; the product representation is brought into play to handle the $k$-lattice operations. We next comment on the role of congruences. In this paper, the core result is Proposition~\ref{lem:cong2}, asserting that the congruences of any distributive pre-bilattice coincide with the congruences of the $t$-lattice reduct and with the congruences of the $k$-lattice reduct.
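For the smallest example this coincidence can be confirmed by brute force. The Python sketch below (our own illustration, independent of the proof of Proposition~\ref{lem:cong2}) enumerates all partitions of the four-element distributive pre-bilattice and keeps those compatible with the $t$-lattice operations, respectively the $k$-lattice operations; both computations return the same four congruences, since both reducts are copies of $\boldsymbol 2\times\boldsymbol 2$ over the same coordinates.
\begin{verbatim}
from itertools import product

# FOUR as pairs; here the t-order is (a,b) <=_t (c,d) iff a <= c
# and b >= d, and the k-order is the pointwise order.
ELEMS = [(a, b) for a in (0, 1) for b in (0, 1)]
T_OPS = [lambda x, y: (min(x[0], y[0]), max(x[1], y[1])),   # meet_t
         lambda x, y: (max(x[0], y[0]), min(x[1], y[1]))]   # join_t
K_OPS = [lambda x, y: (min(x[0], y[0]), min(x[1], y[1])),   # meet_k
         lambda x, y: (max(x[0], y[0]), max(x[1], y[1]))]   # join_k

def partitions(xs):
    """All partitions of the list xs, as lists of blocks."""
    if not xs:
        yield []
        return
    head, rest = xs[0], xs[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part

def congruences(ops):
    """Partitions compatible with every binary operation in ops."""
    result = []
    for part in partitions(ELEMS):
        cls = {x: i for i, block in enumerate(part) for x in block}
        if all(cls[f(a, b)] == cls[f(a2, b2)]
               for f in ops
               for a, a2, b, b2 in product(ELEMS, repeat=4)
               if cls[a] == cls[a2] and cls[b] == cls[b2]):
            result.append({frozenset(b) for b in part})
    return result

t_congs, k_congs = congruences(T_OPS), congruences(K_OPS)
print(len(t_congs), t_congs == k_congs)   # 4 True
\end{verbatim}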
For the interlaced case, this result is obtained with the aid of the product representation and leads on to a description of subdirectly irreducible algebras; see \cite{MPSV,RPhD,BJR11}. We exploited Proposition~\ref{lem:cong2} to obtain our $\ISP$ results for each of $\cat{DB}$, $\unbounded{\DB}$, $\cat{DPB}$ and $\unbounded{\DPB}$. These results are of course immediate once the subdirectly irreducible algebras are known, but our method of proof is much more direct. Conversely, our results immediately yield descriptions of the subdirectly irreducibles. From what is said above it might appear that, in certain aspects, our approach leads to the same principal results as previous approaches do, albeit by a different route. But we contend that we have done much more than this. In our setting we are able to harness the techniques of natural duality theory and to apply them in a systematic way to the best-known bilattice varieties. We thereby gain easy access to the applications presented, by way of illustration, in Section~\ref{Sec:Applications}. It is true that the dualities developed in \cite{MPSV} and in \cite{JR} can be described using our dualities and vice versa. However the deep connections between congruences of lattice reducts, our $\ISP$ presentations, and the topological representation theory only become clear using natural dualities. We end our paper with an interesting byproduct of our treatment which links back to the origins of bilattices. The theory of bilattices and the investigation of four-valued logics have been intertwined ever since the concept of a bilattice was first introduced. In his seminal paper \cite{ND1} (also available in \cite{ND2}), Belnap introduced two lattices over the same four-element set $\{F,T,Both,None\}$: the logical lattice $\alg{L4}$ and the approximation lattice $\alg{A4}$, the former also admitting a negation operation. With our notation, $\alg{L4}\cong(\{00,01,10,11\}; \lor_t,\land_t,\neg,00,11)$ and $\alg{A4}\cong \boldsymbol 4_k$. Belnap defines a \defn{set-up} as a map~$s$ from a set $X$ of atomic formulas into $\{F,T,Both,None\}$, and extends $s$ in a unique way to a homomorphism $\bar{s}\colon\alg{Fm} (X) \to \alg{L4}$, where $\alg{Fm}(X)$ denotes the set of formulas in the language $\{\land,\lor,\neg\}$. He then introduces a logic, understood as an entailment relation between formulas based on set-ups, and what is nowadays called a Gentzen system which is complete for this logic. The connection between Belnap's logic and De Morgan lattices and De Morgan algebras, hinted at in the definition of the former, was unveiled in detail by Font in \cite{F97} in the context of abstract algebraic logic. Belnap did more than just define his logic: he also presented a mathematical formulation of the epistemic dynamics of the logic. To do this, he defined \defn{epistemic states} as sets of set-ups and lifted the order on $\alg{A4}$ to a pre-order, $\sqsubseteq$ (the \defn{approximation order}), between epistemic states. He then considered the partial order obtained from $\sqsubseteq$ by quotienting by the equivalence relation $\sqsubseteq\cap\sqsupseteq$ and showed that the resulting poset is isomorphic to the family of upward-closed sets of set-ups; here set-ups are considered as elements of $\alg{A4}^{\alg{Fm}(X)}$ and are ordered pointwise. This emphasises the importance of the poset structure, as opposed to the algebraic structure, of $\alg{A4}$.
Furthermore, it is proved that, for each formula $A\in\alg{Fm}(X)$, the assignment $A\mapsto {\rm Tset}(A)=\{\, s\colon \bar{s}(A)\in\{T,Both\}\, \}$ maps conjunctions to intersections and disjunctions to unions, and that $\neg A\mapsto {\rm Fset}(A)=\{\,s\colon \bar{s}(A)\in\{F,Both\}\, \}$. So we could interpret Belnap's results as a representation of $\alg{Fm}(X)$ by upward-closed sets of homomorphisms from $\alg{Fm}(X)$ to $\alg{L4}$, ordered pointwise by the order of~$\alg{A4}$. Only a few steps are needed to connect Belnap's representation, as outlined above, with the natural duality for De Morgan algebras; see \cite[Section~4.3.15]{CD98} and the references therein. We adopt the notation of \cite{CD98} for the generating algebra, $\underline{\mathbf{dM}}$, and for the alter ego, $\twiddle{\mathbf{dM}}$. First observe that ${\alg{L4}\cong\underline{\mathbf{dM}}}$ is a De Morgan algebra. Therefore each homomorphism $h\colon\alg{Fm}(X)\to\alg{L4}$ factors through the free De Morgan algebra $\mathbf{F}_{\cat{DM}}(X)$. Hence the set of set-ups can be identified with $\cat{DM}(\mathbf{F}_{\cat{DM}}(X),\alg{L4})$. It is also necessary to check that for each formula $A$ the sets $\text{Tset}(A)$ and $\text{Fset}(A)$ are related by the involution of the dual space of a De Morgan algebra; more precisely, $g(\text{Tset}(A))=\text{Fset}(A)$. And finally, of course, topology plays its role by enabling one to characterise those upward-closed sets (represented by maps) that correspond to formulas. These observations serve to stress that $\alg{L4}$ and $\alg{A4}$ in Belnap's work play quite different roles. Moreover, these structures are intimately related to the roles of $\underline{\mathbf{dM}}$ and $\twiddle{\mathbf{dM}}$ in the natural duality for De Morgan algebras. The idea of combining two lattices into one structure originated with Ginsberg \cite{Gin}. The dualities presented in Theorems~\ref{DBnatdual} and~\ref{DBUnatdual} can be seen as a bridge reconciling Belnap's and Ginsberg's approaches, the former considering two separate lattice structures $\alg{L4}$ and $\alg{A4}$ with different roles but based on the same universe, and the latter combining them into a single algebraic structure. We, in like manner, work with two different structures, $\boldsymbol 4$ and $\twiddle 4$ (and $\unbounded{\four}$ and $\twiddle{\unbounded{4}}$ in the unbounded case), which have distinctive roles: one logical, with an algebraic structure; the other epistemic, with a poset structure. \bibliographystyle{amsplain} \renewcommand{\bibname}{References}
\section{Introduction} \label{sec:intro} Orion BN/KL is a complex massive star formation region that is associated with an explosive event that occurred some 500 years ago. In particular, it contains around 200 filamentary structures in H$_2$ emission known as the Orion fingers, which could be formed by the close encounters of young stellar objects \citep[][and references therein]{ZETAL09,BETAL11}. The most accepted interpretation of these fingers is that they were formed by the interaction of high velocity gas clumps with the environment \citep{BETAL17}; we will adopt this interpretation. The age of the event has been determined by several authors using different techniques. \cite{BETAL11} analyzed the projected positions and velocities of the heads of the H$_2$ fingers. For each finger, they found an individual age between 500 and 1000~yr. This is in contradiction with the idea that Orion BN/KL was produced by a single explosive event and that the expelled clumps are in ballistic motion, so they concluded that there must be some deceleration. \citet{ZETAL09} reported the counterparts of the H$_2$ fingers observing the J$=2\to1$ CO transition, called CO streamers. Each streamer has a radial velocity that increases linearly with the distance to a common origin and, assuming a simultaneous ejection, they determined the 3D structure and obtained a most probable age of approximately 500~yr. This is in agreement with the age estimated by \cite{RETAL17}, who used the proper motions and projected positions of the runaway objects I, n and BN to estimate a close encounter 544 years ago. Also, \cite{ZETAL11a} calculated the age of an expanding bubble in $^{13}$CO centered on the same possible origin of the region; the radial velocity and the size of this outflow result in an age of $\sim600$ years. The momentum and kinetic energy of this outflow are at least 160~M$_\odot$~km~s$^{-1}$ and between $4\times 10^{46}$ and $4\times10^{47}$~erg, respectively \citep{SETAL84,KS76}. There is a chance that the fingers could have originated at different moments. Perhaps there is an unexplored mechanism able to produce such an extended structure. The machine-gun model has been mentioned as a possible explanation, but previous models \citep{RB93}, even when they are not collimated, are far from being as isotropic as the Orion fingers. Thus, the runaway stars \citep{RETAL17}, the expansion of the molecular bubble \citep{ZETAL11b} and the age determined by the CO streamers \citep{ZETAL09} are strong evidence of a single and simultaneous event. The widespread ages could then be explained by a dynamical model that takes into account the deceleration of a dense clump by the surrounding environment. There are several attempts to describe the interaction of a moving cloud against a static medium. \cite{DYA67} (hereafter DA) analyzed the plasmon problem, which consists of a moving cloud that adopts a particular density structure, and derived its equation of motion. \cite{CETAL98} improved the plasmon solution including centrifugal pressure. Also, \cite{RETAL98} proposed the equation of motion of a static spherical cloud that is accelerated by a high velocity wind due to the ram pressure. More recently, \cite{ROETAL19} (hereafter RO19) proposed a modification to the plasmon problem, considering the mass lost by the clump, which can modify the plasmon's dynamical history if it is embedded in a high density environment.
The plasmon problem is based on the direct consideration of the balance between the ram pressure of the environment and the internal, stratified pressure of the decelerating clump. Fig.~\ref{fig:plasmon} represents the plasmon profile set by the pressure balance, the post-shock region, where the material is ionized, and the inner neutral region. A similar representation has been proposed by \cite{B97}. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F0.pdf} \caption{a) Schematic representation of the initial clump at the ejection moment. The ejected clump adopts the plasmon profile through the pressure balance between the internal pressure and the ram pressure produced by the velocity component $v\cos{\alpha}$, where $\alpha$ is the angle between the plasmon surface normal and the direction of motion. b) In our model (see RO19) the reverse shock deforms the initial clump, which becomes a plasmon in a negligible time. The environment has a density $\rho_a$; the plasmon has a velocity $v$ and a density $\rho (x)$ with the density structure studied in DA. The post-shock region that separates the environment and the plasmon structure has been exaggerated for clarity. An intermediate phase between these two cases was studied in detail by \cite{B97} and \cite{BETAL15}.} \label{fig:plasmon} \end{figure} The dynamical analysis of the motion of the Orion fingers could then lead to a better understanding of the conditions that formed such a structure. \cite{BETAL15} performed numerical simulations of the fingers using observational constraints and obtained a notable resemblance to the actual fingers. Nevertheless, as they described, the interpretation of such simulations is limited since they used an adiabatic system, while, in reality, the cooling length is much shorter than the total length of the longest fingers. Therefore, more detailed numerical solutions and an adequate analytic model can be helpful to determine the physical conditions and, perhaps, the ejection mechanism of the fingers, which in turn can help to understand the relevance and duration of similar events in star forming processes. Then, adopting an age of $t=544$~yr \citep{RETAL17}, we propose a model to obtain the physical conditions of the ejection. The mass-loss plasmon has an implicit dependence on its own size and it can be used to find better restrictions on the ejection mechanism. In Section~\ref{sec:parameters} we describe the sample of objects to be analyzed, and in Section~\ref{sec:analytic} we present the estimation of the properties of the clumps before the explosive event that generated the Orion fingers in Orion BN/KL. We summarize our conclusions in Section~\ref{sec:conclusions}. \section{Obtaining the physical parameters of the fingers} \label{sec:parameters} \subsection{Proper motions} \label{sec:observations} From \cite{LB00}, \cite{DETAL02} and \cite{BETAL11} we have obtained the proper motions of several features and the projected positions for the reported data. In the following paragraphs we describe in more detail how this was done. \begin{itemize} \item \cite{LB00} analyzed the proper motions of 27 bullets, with emission in [Fe {\small II}], and 11 H$_2$ knots, using a time baseline of 4.2~yr (see Figure~\ref{fig:rVdatos}). From these 38 objects only 19 have proper motion vectors aligned with the position vectors with respect to IRc2, the possible origin of the explosive event.
They used a distance to the Orion Nebula of $d=450$~pc \citep{GS89}, which is larger than the currently accepted $d=414$~pc \citep{METAL07} and leads to an overestimate of the projected distances and proper motions of the data. We have corrected this effect for this paper. In general, they conclude that the farther features have larger proper motions, which is consistent with, at least, some kind of impulse with an age shorter than 1000 yr. However, it is interesting to note that they reported some H$_2$ knots as almost stationary, but these are not included in the final analysis. \item \cite{DETAL02} measured the proper motions of several HH objects in the Orion nebula. For the Orion BN/KL region they found 21 HH objects moving away from IRc2. As \cite{LB00}, they found that the more distant objects are faster. HH 210 is also a prominent feature, with a proper motion of almost 400~km~s$^{-1}$. The uncertainties lead them to fit an age of $1010\pm140$~yr. Even in this case, several objects are not in the range of 870 to 1150~yr. Also, they used a distance of 450~pc, which has been corrected in this work to 414~pc. \item \cite{BETAL11} (see also \cite{CPHD06}) obtained the proper motions of 173 fingers in H$_2$, but in this case there is no clear evidence for a linear dependence of the velocity on the projected distance. They only mentioned that the age of the event could be between 500 and 1000~yr, if the simultaneous ejection assumption is maintained. The three data sets are represented in Figure~\ref{fig:rVdatos}. \item Also, \cite{ZETAL09} analyzed the CO streamers that seem to be related to the fingers. These streamers are $\sim$2 times shorter and narrower than the fingers and each one follows a Hubble law. The kinematic age of each one could be related to the projection angle with respect to the plane of the sky, and assuming that the explosion was isotropic they found that the most probable age is around 500~yr. \cite{BETAL17}, using ALMA, found more streamers and confirmed that these streamers have an isotropic extension. This means that some of the CO streamers do not have associated fingers. \end{itemize} \subsection{Mass, density and size} On the other hand, from \cite{RETAL17}, \cite{CPHD06} and \citet{BETAL17} we have obtained the mass, density and size of several features and the projected positions for the reported data. In the following paragraphs we also describe in more detail how this was done. \begin{itemize} \item Recently, \cite{RETAL17} measured, with high precision, the proper motions of the objects I, BN and n. They found that these objects had to be ejected from a common origin $544\pm6$~yr ago. This uncertainty does not take into account systematic effects, which can increase it up to $\pm25$~yr. In any case, 544 years is consistent with the age determined by the CO streamers of about 550 years. In this work, we assume this event to be the origin of the ejection of the material that created the fingers and the streamers. \item \cite{CPHD06} measured 8~$M_\odot$ as the mass of the moving gas. We can use this estimate to find the upper limits for either the mass of an individual clump or its size. Nevertheless, due to the complexity of the region there is an uncertainty of a factor of two in this mass estimate. \item For the mass, we assume that the observed moving gas corresponds, exclusively, to that of the ejected clumps. Since there are 200 fingers, the average mass of each clump is simply $8/200=0.04 M_\odot$.
A lower limit for the clump mass is that calculated by \cite{AB93} and \cite{BA94} of $10^{-5}M_\odot$, based on the [Fe {\small II}] 1.64$\mu$m line flux and size. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F1} \caption[Data sets used in this work to apply the dynamical models]{This figure shows the three data sets used for this work, with their respective uncertainties. The open circles stand for the H$_2$ fingers reported by \citet{BETAL11} (see also \cite{CPHD06}), the filled circles stand for the [Fe{\small II}] bullets \citep{LB00} and the crosses represent the HH objects reported by \cite{DETAL02}. The lines indicate an age consistent with no deceleration, $t_1=500$~yr (dashed) and $t_2=1000$~yr (dot-dashed).} \label{fig:rVdatos} \end{figure} \item On the other hand, an upper limit for the size of the initial clump is obtained by adopting the opposite assumption to the one above, that is, that all the moving mass comes from the swept-up environmental material and a negligible amount from the clumps themselves. To follow this idea we have to fix the density of the environment. Extinction observations of the region by \cite{OHETAL16} and \citet{BETAL17} indicate densities between $10^5$ and $10^7$cm$^{-3}$. We adopt the latter limit, $n_a=10^7$cm$^{-3}$. In reality, the density is highly structured \citep{KETAL18,BETAL87}. A better approximation would be to assume cylindrical symmetry for the Integral Spine Filament with a steep density gradient orthogonal to the spine. In this paper we assume a homogeneous environment; a cylindrical density profile would require improving the presented plasmon dynamics. \end{itemize} \section{Analytic Model} \label{sec:analytic} We now model a finger as a cylinder of radius $R_{\rm cl}$ and individual length $l_i$. Thus, the mass swept up by all the fingers (assuming the same radius) is \begin{equation} M_t=\pi R_{\rm cl}^2\mu m_h n_a \sum_i l_i, \end{equation} where $\mu=2$ is the mean molecular mass, $m_h$ is the mass of hydrogen and $n_a$ is the number density of the ambient medium. Considering, as a limit, that $M_t=8M_\odot$ is equal to the accelerated mass, we obtain $R_{\rm cl}\sim90$~au, which is the upper limit for the initial size of the ejected clumps. \subsection{Ballistic motion} The simplest model is to suppose that every ejected clump travels with constant velocity and, therefore, the motion is described by: \begin{equation} r=vt. \label{eq:vcons} \end{equation} Since the projected length, $r$, and the velocity, $v$, also in projection, are observational data, the age of each clump can be obtained straightforwardly: \begin{equation} t=\frac{r}{v}, \label{eq:age} \end{equation} which is independent of projection. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F2} \caption[Kinematic age assuming no deceleration]{Kinematic age assuming no deceleration. The symbol notation is the same as in Figure~\ref{fig:rVdatos}. The dashed line corresponds to an age of 1000~yr and the dot-dashed line represents an age of 500~yr.} \label{fig:rT0} \end{figure} Therefore each clump has an individual age and, if we assume that all of them were ejected in a single event, each age should be, at least, similar. This is far from what we observe. In Figure~\ref{fig:rT0} we show the result of Equation~\ref{eq:age} applied to each data point. The error propagation for the age was computed using the standard procedure.
The reported velocity errors are 10~km~s$^{-1}$ for all the HH objects \citep{DETAL02} and 25~km~s$^{-1}$ for all the H$_2$ fingers \citep{CPHD06}, while for the [Fe{\small II}] bullets individual errors are reported in \cite{LB00}. Figure~\ref{fig:rT0} then implies that there was no simultaneous event or that the ballistic motion model is not an appropriate assumption. Deceleration is the most likely interpretation. Notice that the plasmon model assumes an early interaction of the original clump with the environment that will quickly modify its initial characteristics (shape, density stratification or sound speed) to those of a plasmon. But the ram pressure prevents the plasmon's free expansion, and this effect gives shape to the material (see also \cite{ROETAL19}, Figure~1). \subsection{Dynamic model} In order to determine the fundamental parameters that control the dynamics of a high velocity clump, such as the ejection velocity $v_0$, the initial size of the clump $R_{\rm cl}$, the density of the ejected material $\rho_{cl}$ and the density of the environment $\rho_a$, or their initial density contrast $\beta=\sqrt{\rho_a/\rho_{cl}}$, we use an analysis based on the plasmon proposed by DA. Assuming a spherical clump at the ejection, the initial mass can be expressed as \begin{equation} M_0=\frac{4\pi R_{\rm cl}^3 \rho_{cl}}{3}=\frac{4\pi R_{\rm cl}^3 \rho_a}{3\beta^2}. \label{eq:mass} \end{equation} We assume that every clump was ejected with the same size ($R_{\rm cl}=90$~au) and that the environment density is $10^7$~cm$^{-3}$; therefore we can estimate the ejection conditions. The plasmon density is not constant because of the stretching of the plasmon along the traveled distance and the mass detachment included in the model. In this section we explore a model which takes into account the deceleration of the clump as it loses mass due to the interaction with the environment. This is the model developed in RO19. As stated in RO19, no matter the physical characteristics of the original clump (shape, size, density, velocity or temperature), the initial interaction of the clump with the surroundings will transform it into a plasmon as proposed by DA, \citet{CETAL98} and RO19. Mass, on the other hand, is preserved. RO19 shows that the mass $M$, velocity $v$, and position $R$ of the newly created plasmon after a time $t$ of ejection/formation are given by the parametric form \begin{equation} {M}=M_0 e^{-\alpha \left(1-\frac{v}{v_0} \right)}, \label{eqn:lnm2} \end{equation} \begin{equation} t={t_0}\int_{v/v_0}^{1} {u}^{-2/3}e^{-\frac{\alpha}{3} \left(1-{u} \right)} du, \label{eq:fulltau} \end{equation} and \begin{equation} R=v_0t_0\int_{v/v_0}^1 u^{1/3}e^{-\frac{\alpha}{3}\left(1-u \right)}du, \label{eq:r} \end{equation} respectively, where $M_0$ is the initial mass of the clump, $v_0$ is the ejection velocity, $u=v/v_0$ is a dimensionless velocity, $\alpha$ is a parameter given by \begin{equation} \label{eq:lambda} \alpha=\frac{8\lambda}{\pi+2}\sqrt{\frac{2}{\gamma-1}}\left(\frac{1}{\beta}\right), \end{equation} and $t_0$ is a time scale \begin{equation} t_0=\frac{R_{\rm cl}}{\beta^2}\left(\frac{16\pi}{3\xi_{DA}(\gamma-1)^2} \right)^{1/3}\frac{1}{v_0}, \label{eq:t0} \end{equation} with $\xi_{DA}=9.22$ from the DA model, $\lambda=0.0615$, and $\gamma=1.4$ the adiabatic coefficient for an ideal diatomic gas.
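The parametric solution (\ref{eqn:lnm2})--(\ref{eq:t0}) is straightforward to evaluate numerically. The following Python sketch is our own illustration (not code from RO19); it uses the conversion $1~{\rm au~yr^{-1}}\simeq4.74$~km~s$^{-1}$, and the example value of $\beta$ is our rough evaluation of Equation~(\ref{eq:mass}) for $M_0=0.04$~M$_\odot$, $R_{\rm cl}=90$~au and $n_a=10^7$~cm$^{-3}$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

XI_DA, LAM, GAMMA = 9.22, 0.0615, 1.4   # model constants
KMS_PER_AU_YR = 4.74                    # 1 au/yr in km/s

def alpha_from_beta(beta):
    """Eq. (8): alpha as a function of the density contrast beta."""
    return 8 * LAM / (np.pi + 2) * np.sqrt(2 / (GAMMA - 1)) / beta

def t0_yr(r_cl_au, beta, v0_kms):
    """Eq. (9) with R_cl in au and v0 in km/s, returning t0 in yr."""
    fac = (16 * np.pi / (3 * XI_DA * (GAMMA - 1) ** 2)) ** (1 / 3)
    return KMS_PER_AU_YR * r_cl_au * fac / (beta ** 2 * v0_kms)

def state_at(u, alpha, t0, v0_kms):
    """Eqs. (5)-(7): (M/M0, t [yr], R [au]) once v/v0 has dropped
    to the dimensionless velocity u (0 < u <= 1)."""
    m = np.exp(-alpha * (1 - u))
    ker = lambda p: lambda w: w ** p * np.exp(-alpha / 3 * (1 - w))
    t = t0 * quad(ker(-2 / 3), u, 1)[0]
    r = v0_kms * t0 / KMS_PER_AU_YR * quad(ker(1 / 3), u, 1)[0]
    return m, t, r

# Rough example: M0 = 0.04 Msun, R_cl = 90 au and n_a = 1e7 cm^-3
# give beta^2 ~ 4.3e-3 from Eq. (4); a 500 km/s clump then has:
beta = 4.3e-3 ** 0.5
a, t0 = alpha_from_beta(beta), t0_yr(90.0, beta, 500.0)
print(state_at(0.5, a, t0, 500.0))  # state once it has slowed to 250 km/s
\end{verbatim}
One can also verify from these definitions that $[v_0/{\rm km~s^{-1}}]\,[t_0/{\rm yr}]\simeq233\,[R_{\rm cl}/{\rm au}]\,\alpha^2$, i.e., Equation~(\ref{eq:v0t0alfa}) below.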
\begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F3} \caption[Trajectories for a constant ejection velocity or constant ejection mass at a fixed time]{In both panels, the gray dashed lines are the trajectories of RO19 plasmons with a) different masses and a constant ejection velocity (with lowest and highest mass trajectories of 2$\times$10$^{-2}$~M$_{\odot}$ and 2$\times$10$^{-1}$~M$_{\odot}$, respectively, divided into 10 equal intervals) and b) different ejection velocities and an initial clump mass fixed at $M_0=0.2M_\odot$ (with trajectories for velocities from 100~km~s$^{-1}$ to 1100~km~s$^{-1}$ in intervals of 100~km~s$^{-1}$). Given a fixed time $t=500$~yr, each trajectory reaches a position $R$ and a velocity $v$, marked as a black point on it.} \label{fig:curves} \end{figure} Combining Equations (\ref{eq:lambda}) and (\ref{eq:t0}), we obtain: \begin{equation} \left[\frac{v_0}{\rm{km~ s}^{-1}}\right] \left[\frac{t_0}{\rm{yr}}\right]=233 \left[\frac{R_{\rm cl}}{\rm{au}}\right] \alpha^2. \label{eq:v0t0alfa} \end{equation} The purpose of the present paper is to use Equations (\ref{eq:mass}) to (\ref{eq:v0t0alfa}) to estimate the physical parameters (mass, ejection velocity, density) of each of the original clumps that produced the fingers we see today, which were formed by the interaction of the clumps with the surrounding molecular cloud. We begin by assuming that all the clumps were ejected in a single explosive event that took place 544 years ago at the site of the close encounter that expelled the BN, n and I objects reported by \cite{RETAL17}. So, in Equation (\ref{eq:fulltau}) we set $t=544$~yr for all the clumps, although each clump had its own initial mass and ejection velocity. Next, for each clump we know, from observations, its distance to the origin of the explosion $R$ and its current velocity $v$. Both quantities are those on the plane of the sky. However, we take them as estimates of the real values, since there is no way to de-project them without making further assumptions. Even so, we need to make a further assumption, since we have more unknowns than equations. We might, for instance, choose to assume a fixed value of $\beta$, which means the same initial density for each clump, or, perhaps, the same initial mass, or any other reasonable constraint. We choose, however, to assume a unique initial radius for all the clumps of $R_{\rm cl}=90$~au, based on the assumption that all the clumps were produced by the close encounter of two protostellar objects that ripped off material with the same interaction cross section. Then, we have a set of equations (Equations~\ref{eqn:lnm2}, \ref{eq:fulltau} and \ref{eq:v0t0alfa}) that can be solved for $v_0$, $t_0$ and $\alpha$ simultaneously, and by Equation~(\ref{eq:mass}) we can also obtain the mass of each ejected clump. The number density of the surroundings was taken to be $n_a=10^7$cm$^{-3}$. In Figure \ref{fig:curves} we show the trajectories of clumps in the $v-R$ plane as calculated by our model, using Eqs. (5) to (10). A fixed clump radius $R_{\rm cl}=90$~au was assumed in all the calculations. In the upper panel, we have taken a fixed initial velocity $v_0=500$~km~s$^{-1}$ and varied the initial mass from $2\times10^{-2}$ (the lower dashed line) to $2\times10^{-1}M_\odot$ (the upper dashed line). The solid line marks the time $t=500$~yr after ejection.
In the bottom panel, the initial clump mass is instead fixed at $M_0=0.2M_\odot$ and each dashed line corresponds to a different initial velocity $v_0$, from $100$ to $1100$~km~s$^{-1}$. The solid line, again, marks the time $t=500$~yr after ejection. Note that these clumps stop at the same distance, in this case at $75000$~au. In Figure~\ref{fig:ROmodel} we can see that the model curves enveloping the data set do not require high mass ($>0.2$~M$_\odot$) or high velocity ($>800$~km~s$^{-1}$) clumps. We could expect clumps with low velocities at distances greater than $8\times10^4$~au, but there is no evidence of such clumps; in this case, 800~km~s$^{-1}$ is the fastest ejection velocity needed to match the longest features. Also, a plasmon with an ejected mass of $0.2$~M$_\odot$ will reach a final distance of $\sim8\times10^4$~au. This means that a less massive plasmon, with an ejection velocity below 800~km~s$^{-1}$, could be near the end of its lifetime or may have already stopped. This could explain the CO streamers that are not related to any H$_2$ finger. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F4} \caption[RO19 model applied to the data sets]{The data sets described in Figure~\ref{fig:rVdatos}, along with the fixed-time curves for constant ejection velocities of $v=200, 500 $ and $ 800$~km~s$^{-1}$ (dashed, dot-dashed and dotted lines, respectively) and for constant masses of $M_0=0.2$ and $0.1$~M$_\odot$ (black thick and thin lines, respectively), using the RO19 plasmon model.} \label{fig:ROmodel} \end{figure} Finally, the RO19 plasmon solution is applied to each of the objects in the data sets of Sect.~\ref{sec:observations}, and the initial mass, ejection velocity and lifetime are obtained and shown in Figures~\ref{fig:histomass}, \ref{fig:histovel} and \ref{fig:histolife}, respectively. The total mass (Figure~\ref{fig:histomass}) is $11.93$~M$_\odot$, with a mean mass of $0.06$~M$_\odot$, which is close to the limit of $4\times10^{-2}$~M$_\odot$ estimated in Section~\ref{sec:observations}. Figure~\ref{fig:histovel} shows the ejection velocity distribution. It is interesting to note that there are two peaks in this distribution, around 200 and 500~km~s$^{-1}$. Further analysis is required to propose a mechanism of explosion that could explain this characteristic. Also, the total kinetic energy of the model is $3\times10^{49}$~erg. Once the ejection parameters are obtained, we can infer the lifetime and stopping distance of each clump by setting $v=0$ in Equations~(\ref{eq:fulltau}) and (\ref{eq:r}). In Figure~\ref{fig:histolife} we show the distribution of the lifetimes of the clumps. This gives an idea of the duration of the observable signatures of the explosive event: in this case, 2000~yr after the explosion there will be just a few fingers left, which may be the reason why only a few cases of encounters of this kind are known.
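The per-object inversion described above, and the $v\to0$ limits, can be reproduced with a root finder on top of the Python sketch given after Equation~(\ref{eq:t0}). Again, this is our own illustration rather than the code used for the figures, and the starting guess of the solver may need tuning for individual objects.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve
# Reuses state_at, LAM and GAMMA from the earlier sketch.

def invert(r_obs_au, v_obs_kms, t_age_yr=544.0, r_cl_au=90.0):
    """Recover (v0 [km/s], alpha, t0 [yr]) for one object from its
    projected distance and velocity, eliminating t0 via Eq. (10),
    v0*t0 = 233*R_cl*alpha^2. A solution exists only when the
    ballistic age 4.74*r/v is shorter than t_age_yr, i.e. when
    deceleration is actually required."""
    def residuals(p):
        v0, alpha = p
        t0 = 233.0 * r_cl_au * alpha ** 2 / v0
        _, t, r = state_at(v_obs_kms / v0, alpha, t0, v0)
        return [t - t_age_yr, r - r_obs_au]
    v0, alpha = fsolve(residuals, x0=[2.0 * v_obs_kms, 2.0])
    return v0, alpha, 233.0 * r_cl_au * alpha ** 2 / v0

def initial_mass_msun(alpha, r_cl_au=90.0, n_a_cm3=1.0e7):
    """Eq. (4), with the density contrast beta read off Eq. (8)."""
    beta = 8 * LAM / (np.pi + 2) * np.sqrt(2 / (GAMMA - 1)) / alpha
    rho_a = 2.0 * 1.673e-24 * n_a_cm3     # g cm^-3, mu = 2
    r_cm = r_cl_au * 1.496e13             # au -> cm
    return 4 * np.pi * r_cm ** 3 * rho_a / (3 * beta ** 2) / 1.989e33

def endpoint(alpha, t0, v0_kms):
    """v -> 0 limit of Eqs. (6)-(7): lifetime (yr) and stopping
    distance (au). The substitution u = w**3 removes the integrable
    u**(-2/3) singularity from the lifetime integral."""
    life = 3 * t0 * quad(
        lambda w: np.exp(-alpha / 3 * (1 - w ** 3)), 0, 1)[0]
    stop = v0_kms * t0 / 4.74 * quad(
        lambda u: u ** (1 / 3) * np.exp(-alpha / 3 * (1 - u)), 0, 1)[0]
    return life, stop
\end{verbatim}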
\begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F5} \caption[Mass distribution]{The masses of the clumps calculated using the mass-losing plasmon model of RO19, for the data set presented in Section~\ref{sec:observations}.} \label{fig:histomass} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F6} \caption[Velocity distribution]{Velocity distribution according to the mass-losing plasmon model (see RO19), using the corresponding calculated ejection conditions.} \label{fig:histovel} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F7} \caption[Lifetime of the fingers]{Lifetime of each finger (for the sample used in this paper) using $R_{\rm cl}=90$~au.} \label{fig:histolife} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F8} \caption[Final length of the fingers]{Final length of each finger (for the sample used in this paper) using $R_{\rm cl}=90$~au.} \label{fig:histolength} \end{figure} Finally, in Figure~\ref{fig:rftf} we show the time and position of each clump compared with its own lifetime and stopping distance, respectively. Again, there is a tendency for most of the clumps to be at the end of their lives. This suggests that some fingers may have already ended their lives, explaining why there are H$_2$ features with no proper motion and CO streamers with no associated H$_2$ fingers. This characteristic could also be explained in terms of extinction, but the radial velocities of the H$_2$ fingers are needed in order to correctly associate them with the CO streamers. \begin{figure}[!ht] \centering \includegraphics[width=\columnwidth]{F9} \caption[Actual distance and age compared to the final distance and lifetime]{Distance normalized with the stopping distance versus time normalized with the lifetime for each data point from Figure~\ref{fig:rVdatos}. The black line corresponds to the prediction using a $0.025M_\odot$ plasmon.} \label{fig:rftf} \end{figure} \section{Conclusions} \label{sec:conclusions} The plasmon model is a useful tool for the analysis of the dynamics of a clump interacting with a dense environment. Using the dynamic models presented in DA and RO19 we estimate the physical features, initial velocities and masses, of the components (clumps, [Fe{\small II}] bullets and HH objects) reported in \cite{LB00}, \cite{DETAL02} and \cite{CPHD06}. We obtain that the maximum individual mass for the clumps is $0.2$~M$_\odot$, and the maximum ejection velocity of this sample is 800~km~s$^{-1}$. The total kinetic energy, in this case, is $\sim 3\times10^{49}$~erg, which represents $10^2$ times more energy than that inferred from the total luminosity in the Orion Fingers region. Two other consequences of the plasmon model are that larger ejection velocities produce shorter lifetimes, and that the initial mass of a clump determines its stopping distance. The RO19 plasmon predicts that the longest fingers in Orion BN/KL have almost reached the end of their lifetimes, but they are not far from their final lengths, and they require ejection velocities as high as 800~km~s$^{-1}$ to reproduce the observations. This implies that the slower fingers could have lifetimes as long as 3000~yr, and that the explosion signatures could disappear in 2000~yr. The mass-loss plasmon can explain why longer fingers are not visible: if there were clumps ejected with higher speeds or lower masses, they could have died by now.
Also, the required ejection velocities for most of the longest fingers are about 500~km~s$^{-1}$, which is less than twice their observed velocity. Therefore, using the RO19 model we obtained the initial mass of each clump; the resulting mass distribution shows that a large fraction of the clumps have masses in the interval $8\times 10^{-3} - 2\times 10^{-1}$~M$_\odot$, while the velocity distribution reveals two populations, one with a maximum at 200~km~s$^{-1}$ and another at 500~km~s$^{-1}$. Finally, comparing the calculated time and position of each clump with its expected lifetime and stopping distance, we see a tendency for most of the clumps to be at the end of their lives. We propose that some fingers have already ended their lives, which explains why there are H$_2$ features with no proper motion and CO streamers with no associated H$_2$ fingers. \acknowledgments We acknowledge support from PAPIIT-UNAM grants IN-109518 and IG-100218. P.R.R.-O. acknowledges a scholarship from CONACyT-M\'exico and financial support from COZCyT. L.A.Z. acknowledges financial support from DGAPA, UNAM, and CONACyT, M\'exico. The authors thank Dr. Bally for his useful comments to improve this manuscript.
\section{Building Blocks}\label{sec:preliminaries} \subsection{Biometric Features \& Template Matching} A Biometric Template (\ensuremath{\styleEntity{BT}}\xspace) is composed of features uniquely identifying an individual. In a biometric application (e.g., user authentication), a reference \ensuremath{\styleEntity{BT}}\xspace is usually sampled and stored as part of the enrollment procedure. During authentication, the feature extraction procedure is used to collect a real-time sample \ensuremath{\styleEntity{BT}}\xspace' from the purported user. If the similarity score between \ensuremath{\styleEntity{BT}}\xspace' and \ensuremath{\styleEntity{BT}}\xspace exceeds a pre-defined threshold, they are considered a matching pair. The method to evaluate the similarity score and the choice of the threshold depend on the particular biometric. A \ensuremath{\styleEntity{BT}}\xspace corresponding to user $U$ is represented by a set: \begin{equation}\label{bt_format} \ensuremath{\styleEntity{BT}}\xspace_U = \{p_1, ..., p_M\}\,, \end{equation} where $p_1, ..., p_M$ are data points (features) representing unique details of $U$'s biometric. For instance, $p_i \in \ensuremath{\styleEntity{BT}}\xspace_U$ for a fingerprint represents the location and orientation of one of the fingerprint's \textit{minutiae}. \textit{Minutiae} are regions in the fingerprint image where fingerprint lines start, end, merge and/or split. In turn, each \textit{minutia} $p_i$ is represented as: \begin{equation} p_i = (x_i, y_i, \theta_i)\, \end{equation} where $x_i$ and $y_i$ are Cartesian coordinates of the minutia's location in the fingerprint image and $\theta_i$ is the angle of the minutia's orientation. In this paper, we focus on the fingerprint biometric modality, since fingerprint sensors are commonly found on commodity devices, such as laptops and smartphones. Nevertheless, similar encoding techniques are applicable to other biometric templates, such as iris scans~\cite{snuse_journal}. \subsection{Fuzzy Extractors \& The Fuzzy Vault Scheme \label{sec:BG_FV}} A Fuzzy Extractor~\cite{dodis2004fuzzy} (\ensuremath{\styleEntity{FE}}\xspace) is a cryptographic primitive commonly used in biometric systems. An \ensuremath{\styleEntity{FE}}\xspace can successfully extract the same randomness from different noisy samples of the same biometric, as long as these samples are within a certain distance threshold. This fuzziness in the matching allows, for instance, matching biometric samples acquired using different sensors. One popular \ensuremath{\styleEntity{FE}}\xspace instantiation is the Fuzzy Vault scheme (\ensuremath{\styleEntity{FV}}\xspace)~\cite{JS06}, which is designed to work with \ensuremath{\styleEntity{BTs}}\xspace represented by data point sets as in Eq.~\ref{bt_format}. An \ensuremath{\styleEntity{FV}}\xspace scheme consists of two algorithms: $\ensuremath{\styleEntity{FV}}\xspace_{GEN}$ and $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$. Given a biometric template $\ensuremath{\styleEntity{BT}}\xspace_U$, the first algorithm generates the corresponding helper data \ensuremath{\styleEntity{HD}}\xspace, which hides a secret $k$. Given another biometric template $\ensuremath{\styleEntity{BT}}\xspace_U'$ and \ensuremath{\styleEntity{HD}}\xspace, the second algorithm can successfully recover $k$ from \ensuremath{\styleEntity{HD}}\xspace provided that $\ensuremath{\styleEntity{BT}}\xspace_U'$ matches $\ensuremath{\styleEntity{BT}}\xspace_U$. The notion of \ensuremath{\styleEntity{FV}}\xspace is captured in Definition~\ref{def:FV}.
Security of \ensuremath{\styleEntity{FV}}\xspace relies on the infeasibility of the polynomial reconstruction problem~\cite{kiayias2002cryptographic}. Definitions~\ref{def:completeness} and~\ref{def:security} formulate $\ensuremath{\styleEntity{FV}}\xspace$'s completeness and (information theoretic) security. \begin{figure}[ht] \begin{mdframed} \small \begin{Definition}[\ensuremath{\styleEntity{FV}}\xspace]\label{def:FV} A Fuzzy Vault is defined as $\ensuremath{\styleEntity{FV}}\xspace = (\ensuremath{\styleEntity{FV}}\xspace_{GEN}, \ensuremath{\styleEntity{FV}}\xspace_{OPEN}, \Phi)$, where $\Phi$ is a set of parameters $\Phi = (d, GF(2^\tau), \texttt{MS}, \texttt{dist}, \texttt{w})$:\\ - $d$ is the polynomial degree;\\ - $GF(2^\tau)$ is a Galois Field of size $2^\tau$;\\ - $\texttt{MS}$ is a metric space;\\ - $\texttt{dist}$ is a distance function defined over $\texttt{MS}$;\\ - $\texttt{w}$ is a distance threshold;\\ $\ensuremath{\styleEntity{FV}}\xspace_{GEN}$ and $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$ are algorithms defined as follows: \begin{compactitem} \item $\ensuremath{\styleEntity{FV}}\xspace_{GEN}$: \begin{compactitem} \item \textbf{Inputs}: $k$ and $\ensuremath{\styleEntity{BT}}\xspace_U$, s.t., $|k| = (d+1)\times\tau$. \item \textbf{Output}: $\ensuremath{\styleEntity{HD}}\xspace$ \end{compactitem} \item $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$: \begin{compactitem} \item \textbf{Inputs}: $\ensuremath{\styleEntity{HD}}\xspace$ and $\ensuremath{\styleEntity{BT}}\xspace_U'$ \item \textbf{Output}: $k'$, s.t., $|k'| = (d+1)\times\tau$. \end{compactitem} \end{compactitem} \end{Definition} \vspace{1mm} \hrule \vspace{1mm} \begin{Definition}[\ensuremath{\styleEntity{FV}}\xspace-Completeness]\label{def:completeness}~\\ $\ensuremath{\styleEntity{FV}}\xspace = (\ensuremath{\styleEntity{FV}}\xspace_{GEN}, \ensuremath{\styleEntity{FV}}\xspace_{OPEN}, \Phi)$ is complete with $w$-fuzziness if for every possible $k \in GF(2^\tau)^{d+1}$ and every pair $\ensuremath{\styleEntity{BT}}\xspace_U$, $\ensuremath{\styleEntity{BT}}\xspace_U'$ with $\texttt{dist}(\ensuremath{\styleEntity{BT}}\xspace_U,\ensuremath{\styleEntity{BT}}\xspace_U') \leq w$: \begin{equation} \ensuremath{\styleEntity{FV}}\xspace_{OPEN}(\ensuremath{\styleEntity{FV}}\xspace_{GEN}(k,\ensuremath{\styleEntity{BT}}\xspace_U),\ensuremath{\styleEntity{BT}}\xspace_U') = k \end{equation} with overwhelming probability. \end{Definition} \vspace{1mm} \hrule \vspace{1mm} \begin{Definition}[\ensuremath{\styleEntity{FV}}\xspace-Security]\label{def:security}~\\ $\ensuremath{\styleEntity{FV}}\xspace = (\ensuremath{\styleEntity{FV}}\xspace_{GEN}, \ensuremath{\styleEntity{FV}}\xspace_{OPEN}, \Phi)$ is $p$-information theoretically secure if any computationally unbounded adversary with access to \ensuremath{\styleEntity{HD}}\xspace is able to guess either \ensuremath{\styleEntity{BT}}\xspace or $k$ with success probability of at most $p$. \end{Definition} \end{mdframed}~ \end{figure} $\ensuremath{\styleEntity{FV}}\xspace_{GEN}$ can be implemented by selecting a polynomial $P$ of degree $d$ defined over a field $GF(2^\tau)$ and encoding (or splitting) the secret $k$ into the $d+1$ coefficients ($a_i$) of $P$. The resulting polynomial is defined as: \small \begin{equation} P_k(x) = \sum_{i=0}^{d}{a_ix^i} \end{equation} \normalsize where the coefficients $\{a_0, ..., a_d\}$ are generated from $k$ and can be used to reconstruct $k$.
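To make the coefficient encoding concrete, the following minimal sketch (Python, for illustration only) splits a $((d+1)\times\tau)$-bit secret into $d+1$ $\tau$-bit coefficients and back. The constants mirror our prototype parameters ($\tau=24$, $d=9$), and plain integers stand in for $GF(2^\tau)$ elements:
\begin{verbatim}
import secrets

TAU = 24   # bits per coefficient, i.e., field GF(2^tau)
D = 9      # polynomial degree, so d + 1 = 10 coefficients

def secret_to_coeffs(k, d=D, tau=TAU):
    """Split a ((d+1)*tau)-bit secret k into coefficients a_0..a_d."""
    assert k < 2 ** ((d + 1) * tau), "secret too large"
    mask = (1 << tau) - 1
    return [(k >> (i * tau)) & mask for i in range(d + 1)]

def coeffs_to_secret(coeffs, tau=TAU):
    """Inverse of secret_to_coeffs: reassemble k."""
    return sum(a << (i * tau) for i, a in enumerate(coeffs))

k = secrets.randbits((D + 1) * TAU)
assert coeffs_to_secret(secret_to_coeffs(k)) == k
\end{verbatim}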
Since $P_k$ is defined over $GF(2^\tau)$, each coefficient can encode $\tau$ bits; this implies that the size of a key that can be encoded is a function of the field size and the degree of the polynomial, given by: \begin{equation} |k| = (d+1) \times \tau \end{equation} After encoding $k$ as a polynomial $P_k$, each of the $M$ data points (features) in $\ensuremath{\styleEntity{BT}}\xspace_U$ is evaluated in the polynomial $P_k$, generating a list of points in a two-dimensional space: \begin{equation} L_P = \{(p_1, P_k(p_1)), ..., (p_{M}, P_k(p_{M}))\} \end{equation} Note that the field must also be large enough to encode a single feature from $\ensuremath{\styleEntity{BT}}\xspace_U$ as a single field element. The resulting set $L_P$ is formed only by points on the polynomial $P_k$. In addition to $L_P$, a set of chaff points $L_S$ of size $N \gg M$ is generated by randomly selecting pairs $(r_x,r_y) \sample GF(2^\tau)^2$, resulting in: \begin{equation} L_S = \{(r_{x,1},r_{y,1}), ..., (r_{x,N},r_{y,N})\} \end{equation} Finally, $L_P$ and $L_S$ are shuffled together using a random permutation $\pi_{\$}$ and the result is published as the helper data $\ensuremath{\styleEntity{HD}}\xspace$: \begin{equation} \ensuremath{\styleEntity{HD}}\xspace = \pi_{\$}(L_P \cup L_S) \end{equation} Note that $\ensuremath{\styleEntity{HD}}\xspace$ also includes the set of public parameters $\Phi = \{F, d, l_P, H(k)\}$, where $F$ is the field over which $P_k(x)$ is defined and $d$ is its degree, $l_P$ is the size of $\ensuremath{\styleEntity{BT}}\xspace_U$, i.e., the number of points in \ensuremath{\styleEntity{HD}}\xspace that belong to $P_k(x)$, and $H(k)$ is a cryptographic hash of $k$ allowing one to verify if the correct secret is reconstructed using $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$\footnote{Using a hash function simplifies the implementation, but makes \ensuremath{\styleEntity{FV}}\xspace's security computational in the size of the output of the hash. The scheme can be made fully information theoretically secure by using error correcting codes.}. The key idea behind the security of the \ensuremath{\styleEntity{FV}}\xspace scheme is that with $d+1$ distinct points $(p_i, P_k(p_i))$ (namely, points on $P_k(x)$), one can interpolate $P_k(x)$, retrieve its coefficients, and thus recover $k$. However, finding the \emph{right} $d+1$ points out of the $M+N$ points in \ensuremath{\styleEntity{HD}}\xspace by chance is very unlikely. With an appropriate choice of $M$, $N$, and $d$, the success probability can be made negligible with respect to a desired security parameter. To reconstruct $k$ from $\ensuremath{\styleEntity{HD}}\xspace$ using a new biometric template $\ensuremath{\styleEntity{BT}}\xspace_U'$, the $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$ algorithm applies a distance function (which must be defined according to the biometric type) to select the $M$ points from \ensuremath{\styleEntity{HD}}\xspace which have the shortest distance to the points in $\ensuremath{\styleEntity{BT}}\xspace_U'$. If, out of the $M$ selected points, no fewer than $d+1$ points are indeed on the original polynomial $P_k$, they can be used to interpolate $P_k$ and recover $k$. Otherwise, no combination of $d+1$ out of the $M$ points interpolates to $P_k$, and $k$ cannot be recovered. To determine whether the resulting $k$ is correct, the algorithm compares its hash to $H(k)$, which is in the public helper data. $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$ rejects $\ensuremath{\styleEntity{BT}}\xspace_U'$ if they differ, and accepts it otherwise.
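The end-to-end logic of $\ensuremath{\styleEntity{FV}}\xspace_{GEN}$ and $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$ can be illustrated with the following self-contained toy sketch. It makes several simplifying assumptions for readability: it works over a prime field $GF(P)$ rather than $GF(2^\tau)$, uses toy-sized parameters, a one-dimensional distance in place of the minutiae distance, and SHA-256 over the coefficient list in place of $H(k)$:
\begin{verbatim}
import hashlib
import random
from itertools import combinations

P = 2**31 - 1    # prime field modulus (illustrative)
D = 4            # polynomial degree; D + 1 points recover the secret
N_CHAFF = 40     # number of chaff points, N >> M

def poly_eval(coeffs, x):
    """Evaluate sum(a_i * x^i) mod P via Horner's rule."""
    acc = 0
    for a in reversed(coeffs):
        acc = (acc * x + a) % P
    return acc

def fv_gen(secret_coeffs, template):
    """FV_GEN: hide the polynomial among genuine and chaff points."""
    genuine = [(p, poly_eval(secret_coeffs, p)) for p in template]
    chaff = []
    while len(chaff) < N_CHAFF:
        rx, ry = random.randrange(P), random.randrange(P)
        if rx not in template and poly_eval(secret_coeffs, rx) != ry:
            chaff.append((rx, ry))
    hd = genuine + chaff
    random.shuffle(hd)   # the random permutation pi_$
    tag = hashlib.sha256(str(secret_coeffs).encode()).hexdigest()
    return hd, tag       # helper data plus H(k)

def lagrange_interp(points):
    """Coefficients (low to high) of the polynomial through the
    given D+1 points, via Lagrange interpolation mod P."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1   # l_i(x) = prod (x - x_j)/(x_i - x_j)
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            denom = denom * (xi - xj) % P
            new = [0] * (len(basis) + 1)
            for t, b in enumerate(basis):
                new[t] = (new[t] - b * xj) % P
                new[t + 1] = (new[t + 1] + b) % P
            basis = new
        scale = yi * pow(denom, P - 2, P) % P
        for t, b in enumerate(basis):
            coeffs[t] = (coeffs[t] + b * scale) % P
    return coeffs

def fv_open(hd, tag, template2, w=2):
    """FV_OPEN: select nearby points, try (D+1)-subsets, check H(k)."""
    cand = [pt for pt in hd
            if any(abs(pt[0] - q) <= w for q in template2)]
    for subset in combinations(cand, D + 1):
        coeffs = lagrange_interp(list(subset))
        if hashlib.sha256(str(coeffs).encode()).hexdigest() == tag:
            return coeffs
    return None

template = random.sample(range(1000), 8)              # enrolled features
secret = [random.randrange(P) for _ in range(D + 1)]  # k as coefficients
hd, tag = fv_gen(secret, template)
noisy = [p + random.choice([-1, 0, 1]) for p in template]  # re-sampled BT'
assert fv_open(hd, tag, noisy) == secret
\end{verbatim}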
The distance threshold $w$ can be used to tune the balance between the false acceptance rate (revealing $k$ to the wrong user) and the false rejection rate (refusing to reveal $k$ to the rightful user). \ensuremath{\styleEntity{FV}}\xspace does not require ordered data points in the templates, nor does it require all data points to be in both sets. Only $d+1$ data points in $\ensuremath{\styleEntity{BT}}\xspace_U'$ must be close enough to points in $\ensuremath{\styleEntity{BT}}\xspace_U$. The polynomial degree $d$ acts as an accuracy parameter allowing calibration of the scheme to reduce false acceptance by increasing the required number of matching data points. In this work we use \ensuremath{\styleEntity{FVs}}\xspace as a cryptographic building block to realize biometric-based {\ensuremath{\sf{RTI}}}\xspace protocols. As shown later in Section~\ref{sec:protocol}, \ensuremath{\styleEntity{FV}}\xspace is used to cryptographically bind a random challenge chosen by {\ensuremath{\sf{\mathcal Vrf}}}\xspace to the biometric input in {\ensuremath{\sf{RTI}}}\xspace execution. \subsection{Hardware Architecture for Biometric Sensing with TEEs}\label{sec:hw} An advantage of biometric-based {\ensuremath{\sf{RTI}}}\xspace is that in several types of modern devices, such as smart-phones and laptops, a biometric sensor (e.g., a fingerprint sensor) is directly connected (``hard-wired'') to the {\ensuremath{\sf{\mathcal RoT}}}\xspace exclusive memory itself, as depicted in Figure~\ref{fig:fp_tee}. Therefore, the user's biometric input is not visible to untrusted (and potentially malicious) software on that device, including the operating system. This means that an input biometric cannot be obtained by {\ensuremath{\sf{\mathcal Adv}}}\xspace-controlled malware or OS on {$\dev\text{-}A$}\xspace, obviating the need for a trusted software path to be verified by the {\ensuremath{\sf{\mathcal RoT}}}\xspace upon receiving the challenge. Nonetheless, our prototype implementation (see Section~\ref{sec:implementation}) also considers the case where this hardware channel is not readily available. In such a case, we show how to establish a secure channel between the biometric sensor and {\ensuremath{\sf{\mathcal RoT}}}\xspace with the help of a small trusted hypervisor. \begin{figure}[!htbp] \centering \fbox{\includegraphics[width=0.7\columnwidth]{./hardware.png}} \centering \caption{TEE-Biometric hardware architecture of a typical Android device (adapted from~\cite{hw-fig}).}\label{fig:fp_tee} \end{figure} \section{Prototype \& Evaluation} \label{sec:implementation} \subsection{\ensuremath{\styleEntity{BT}}\xspace Extraction \& \ensuremath{\styleEntity{FV}}\xspace Parameters} \ensuremath{\styleEntity{BT}}\xspace extraction generates a biometric template from a fingerprint image. As discussed in Section~\ref{sec:preliminaries}, each data point $p_i \in \ensuremath{\styleEntity{BT}}\xspace$ is the position and orientation $(x_i,y_i,\theta_i)$ of a fingerprint minutia. To extract the \ensuremath{\styleEntity{BT}}\xspace we use the NIST Biometric Image Software (NBIS)~\cite{nbis}. NBIS returns a set of identified minutiae points with corresponding confidence levels. From NBIS output, we select the 20 points with the highest confidence and encode them as data points in $GF(2^{24})$.
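For illustration, a plausible way to pack one minutia into a single 24-bit field element is sketched below. The exact bit layout is an assumption made for exposition, as is the shape of the NBIS output (a list of \texttt{(x, y, theta, confidence)} tuples):
\begin{verbatim}
def pack_minutia(x, y, theta_deg):
    """Pack (x, y, theta) into one 24-bit element of GF(2^24).

    Hypothetical layout: 9 bits for x, 9 bits for y, and 6 bits
    for the orientation quantized to 64 levels."""
    assert 0 <= x < 512 and 0 <= y < 512
    theta_q = int(theta_deg / 360.0 * 64) % 64
    return (x << 15) | (y << 6) | theta_q

def select_template(nbis_minutiae, m=20):
    """Keep the m highest-confidence minutiae and encode them.

    nbis_minutiae is assumed to be a list of tuples of the form
    (x, y, theta, confidence)."""
    best = sorted(nbis_minutiae, key=lambda p: p[3], reverse=True)[:m]
    return [pack_minutia(x, y, th) for (x, y, th, _conf) in best]
\end{verbatim}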
In our prototype, \ensuremath{\styleEntity{FV}}\xspace's \ensuremath{\styleEntity{HD}}\xspace is composed of 20 fingerprint data points mixed with 200 random chaff points. The \ensuremath{\styleEntity{FV}}\xspace polynomial degree is set to 9. Finite field operations are implemented using the Number Theory Library (NTL)~\cite{ntl}. In $\ensuremath{\styleEntity{FV}}\xspace_{OPEN}$, the candidate minutiae points are selected from the \ensuremath{\styleEntity{HD}}\xspace based on their distance to minutiae points in the new template $\ensuremath{\styleEntity{BT}}\xspace'$ sampled from the user. Similar to~\cite{nandakumar2007fingerprint}, we use a distance function between $p_i\in\ensuremath{\styleEntity{HD}}\xspace$ and $p'_j\in\ensuremath{\styleEntity{BT}}\xspace'$ defined as: \begin{equation} D(p_i, p'_j) = \sqrt{(x_i-x'_j)^2 + (y_i-y'_j)^2} + \beta \times \Delta(\theta_i,\theta'_j) \end{equation} where $p_i=(x_i,y_i,\theta_i)$, $p'_j=(x'_j,y'_j,\theta'_j)$, and $\Delta(\theta_i,\theta'_j) = min(|\theta_i - \theta'_j|,360 - |\theta_i - \theta'_j|)$. Parameter $\beta$ controls the degree of importance given to minutiae orientation, as compared to the Euclidean distance between the points. A data point $p_i$ is selected if $D(p_i,p'_j) < w$ for some point $p'_j\in\ensuremath{\styleEntity{BT}}\xspace'$. As described in~\cite{nandakumar2007fingerprint}, parameters $\beta$ and $w$ must be empirically calibrated to yield the best accuracy results. Our parameters are empirically calibrated to $\beta = 0.2$ and $w = 20$. To improve accuracy for noisy fingerprint readings, we run the fingerprint pre-alignment algorithm from~\cite{tarp} during biometric sampling, before extracting the template. Figure~\ref{fig:fp_extraction} illustrates the result of the template extraction for two pre-aligned fingerprint images. White squares highlight the minutiae points detected in these fingerprints. We discuss the accuracy of this implementation in Section~\ref{sec:accuracy}. \textit{\textbf{Remark:} We implement our own \ensuremath{\styleEntity{BT}}\xspace extraction to have a fully working prototype and report on its accuracy. We stress that accuracy of the underlying \ensuremath{\styleEntity{BT}}\xspace extraction technique is orthogonal to, and not affected by, the {\ensuremath{\sf{RTI}}}\xspace setting considered in this work.} \subsection{Prototype} \begin{figure*}[!hbtp] \centering \subfigure[Fingerprint pre-processing.] {\includegraphics[height=1.7in,width=0.32\textwidth]{./fingerprints.png}\label{fig:fp_extraction}} \subfigure[Hardware Setting] {\includegraphics[height=1.7in,width=0.32\textwidth]{exp}\label{fig:setup}} \subfigure[Hypervisor Based {\ensuremath{\sf{\mathcal RoT}}}\xspace. Arrows illustrate execution flow; shaded area denotes untrusted software.] {\includegraphics[height=1.7in,width=0.32\textwidth]{testing.png}\label{fig:tee}} \caption{Hardware and software components of our prototype.} \end{figure*} Since hardware-based TEEs closely integrated with fingerprint sensing are commonly found on mobile phones, we implement the prototype of FV-based {\ensuremath{\sf{RTI}}}\xspace on a development board connected to an external fingerprint sensor. The sensor collects user fingerprints and also provides an interface to export the data to secure storage inaccessible to applications and the operating system.
We build a hypervisor-based secure execution environment (software-based {\ensuremath{\sf{\mathcal RoT}}}\xspace) to run {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace steps in the FV-based {\ensuremath{\sf{RTI}}}\xspace protocol. \subsubsection{Hardware Setting} Figure~\ref{fig:setup} shows the hardware setting of our prototype. An FMP12 Optical Fingerprint sensor is connected to a Raspberry Pi 2 development board with four Cortex-A7 CPU cores at 800 MHz and 1 GB of main memory. It runs Debian Linux with kernel version 3.18.8. Software on the board can use a serial port mapped at physical address 0x3F201000 to issue commands to the fingerprint reader and read the collected data. \subsubsection{Virtualization Based {\ensuremath{\sf{\mathcal RoT}}}\xspace} We harness virtualization techniques to build an {\ensuremath{\sf{\mathcal RoT}}}\xspace secure against attacks from the operating system. Our secure environment, shown in Figure~\ref{fig:tee}, is implemented by following the approach proposed in \cite{fimce}, which designs a fully isolated minimal computing environment (FIMCE) on a multicore x86 platform. We develop a bare-metal ARM hypervisor running in the processor's Exception Level 2 (EL2), which is more privileged than the levels for the OS and applications. Once launched on the Raspberry Pi board, the hypervisor configures the permission bits in the Stage-II translation table to block the OS and applications from accessing the serial port used by the fingerprint sensor. Hence, the adversary cannot access the fingerprint sensor to issue commands or steal fingerprint images. When available, a secure boot module can be used to assure that this configuration is properly set at boot time. Upon receiving a request, the hypervisor creates a fully isolated computing environment consisting of a CPU core and a reserved physical memory region for the sensitive function to run. The CPU configuration ensures that maskable interrupts are not delivered to the core and that non-maskable interrupts (NMIs) are trapped to the hypervisor. Thus, the untrusted OS cannot tamper with the environment via memory accesses or interrupts. Running in the isolated environment is the code implementing the {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace logic in the {\ensuremath{\sf{RTI}}}\xspace protocol. Its signing key $sk$ is stored in the hypervisor memory. To generate the response, it requests the hypervisor to run $\mathsf{FV_{OPEN}}$ and $\mathsf{sign_{sk}}$ in the FIMCE environment at runtime. The code of these two functions is self-contained and issues no system calls, so that execution does not depend on any untrusted code or data outside of the isolated environment. Since these two functions perform memory-resident computations without involving I/O operations, system calls are avoided by statically allocating the needed memory buffers. Note that an ARM CPU does not allow user-privilege code to issue hypercalls. Hence, we retrofit the OS with a special system call handler which issues the hypercall on behalf of {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace. \subsubsection{Evaluation} Code complexity is shown in Table~\ref{tab:code}. We measured CPU execution time for $\mathsf{FV_{OPEN}}$ and $\mathsf{sign_{sk}}$ within the virtualization-based {\ensuremath{\sf{\mathcal RoT}}}\xspace and in normal user space on the Raspberry Pi board. Results are reported in Table~\ref{tab:time} below.
\begin{table}[!hbtp] \centering \scalebox{0.90}{ \begin{tabular}{|l|c|} \hline {\bf System Component} & {\bf LoC} \\ \hline ARM hypervisor & $456$ (C) and $906$ (Assembly) \\ \hline Self-contained $\mathsf{FV_{OPEN}}$ & $701$ \\ \hline Crypto library (incl. RSA and hash functions) & $5,032$ \\ \hline \end{tabular} } \vspace{1mm} \caption{Code Complexity (in LoC).} \label{tab:code} \end{table}% \begin{table}[!hbtp] \centering \scalebox{0.990}{ \begin{tabular}{|c|cc|} \cline{2-3} \multicolumn{1}{c|}{} & $\mathsf{FV_{OPEN}}$ & $\mathsf{sign_{sk}}$ \\ \hline Native environment & 848.7 & 79.2 \\ \hline Virtualization-based {\ensuremath{\sf{\mathcal RoT}}}\xspace & 1143.51 & 75.6 \\ \hline \end{tabular} } \vspace{1mm} \caption{CPU time comparison. Average over 1000 executions (time in {\em ms}); variance was negligible and is omitted.} \label{tab:time} \end{table} We note that the time differences are not large. In fact, the RSA signing operation has a slight performance advantage when running in the {\ensuremath{\sf{\mathcal RoT}}}\xspace. The reason might be its exclusive use of the CPU core, since interrupts are blocked. \section{Introduction}\label{sec:intro} In recent years, there has been a growing demand, from both industrial and research communities, for Trusted Execution Environments (TEEs) to aid security-critical applications. While TEEs vary widely in terms of architecture, implementation, and functionality, they provide (at least in the idealized model) an isolated execution space offering both code and data integrity, without relying on any assumptions about applications or operating systems. We refer to these functionalities as TEE services. Security of TEE services (among other trusted services) relies on dynamic Roots of Trust (RoTs) to prove their integrity. RoTs consist of minimal trusted components (e.g., trusted hardware as in TPM and Intel SGX, or trusted software as in hypervisors) used to bootstrap and dynamically verify trust in the system as a whole. Despite the popularity of such services, it is somewhat surprising that there are no ``off-the-shelf'' means to securely bind a given RoT to the specific physical device housing this RoT. In particular, it is easy to verify that a service is indeed performed by \textit{\textbf{some}} RoT. However, it remains a challenge to determine if the service is performed by \textit{the} RoT residing inside a specific physical device. We refer to this problem as Root of Trust Identification ({\ensuremath{\sf{RTI}}}\xspace). To further illustrate and motivate {\ensuremath{\sf{RTI}}}\xspace, consider the following sensor auditing scenario highlighted in~\cite{ditio}. A device (e.g., a smartphone) keeps a TEE-enabled secure log of its audio and video (camera and microphone) activity in order to allow after-the-fact auditing. For example, the host of a confidential meeting uses her trusted verifier device to verify that microphones and cameras of attendees' smartphones remain turned off. The technique proposed in~\cite{ditio} consists of using each attendee device's TEE to assure (e.g., via remote attestation) the verifier of the integrity of sensor usage logs on that device. We argue that -- even with TEE-based integrity assurance -- the attendee can still use his device's microphone/camera and fool the verifier by supplying logs from a remote accomplice device (also equipped with a TEE of the same type) that indeed turns off the sensor during the meeting.
The response appears to be valid and there is no means for the host to differentiate between replies from the accomplice device and the one presently held by the malicious attendee. Using a dedicated physical channel (e.g., a cable) between the verifier and the attendee's device does not solve the problem as the device may use another channel to communicate with its accomplice. Another scenario relevant to {\ensuremath{\sf{RTI}}}\xspace occurs whenever some malware has been found on a device. A natural course of action is to force one or more of: (i) re-set, (ii) update software, or (iii) erase the device. However, none of these is trivial since the same adversarial behavior can fool the user into believing that her device has been re-set/updated/erased, while in fact it is some other device that has performed those actions. \begin{table*}[!hbtp] \centering \Small \caption{Notation summary} \label{table:notation} \begin{tabular}{|l|p{12.3cm}|} \hline Notation & Description \\ \hline \hline {$\dev\text{-}A$}\xspace, {$\dev\text{-}B$}\xspace, {$\dev\text{-}C$}\xspace, ... & Physical devices (e.g., smart-phone, laptop) A, B, C, ... \\ &\\ {$\rot_A$}\xspace, {$\rot_B$}\xspace, {$\rot_C$}\xspace, ... & {\ensuremath{\sf{\mathcal RoT}}}\xspace residing on physical devices A, B, C, ... \\ &\\ $\text{$pk_i$}\xspace, \text{$sk_i$}\xspace \gets Gen(\text{{$\rot_A$}\xspace})$ & {$\rot_A$}\xspace issues $i$-th session public-key \text{$pk_i$}\xspace, and corresponding secret key \text{$sk_i$}\xspace. Anyone can verify that \text{$pk_i$}\xspace was generated by some {\ensuremath{\sf{\mathcal RoT}}}\xspace. However, {$\rot_A$}\xspace signs \text{$pk_i$}\xspace using its master secret key in a group signature scheme, thus one cannot tell whether \text{$pk_i$}\xspace was issued by {$\rot_A$}\xspace or not.\\ &\\ $Pr[A|B]$ & Probability of event $A$ given that event $B$ is true\\ &\\ $Pr[A|\neg B]$ & Probability of event $A$ given that event $B$ is \underline{not} true\\ &\\ $l$ & Security parameter \\ &\\ $negl(.)$ & a negligible function: $negl(l) \leq 1/2^l$\\ &\\ \hline \hline $\ensuremath{\styleEntity{BT}}\xspace \gets \mathsf{\ensuremath{\styleEntity{BT}}\xspace.Sample}_{}(U,\text{{$\dev\text{-}A$}\xspace})$ & Sampling of biometric template \ensuremath{\styleEntity{BT}}\xspace from user $U$ performed by biometric sensor on physical device {$\dev\text{-}A$}\xspace. \\ & \\ $\ensuremath{\styleEntity{HD}}\xspace \gets \mathsf{\ensuremath{\styleEntity{FV}}\xspace_{GEN}}(\ensuremath{\styleEntity{BT}}\xspace,{\ensuremath{\sf{\mathcal Chal}}}\xspace)$ & Generation of helper data \ensuremath{\styleEntity{HD}}\xspace from biometric template $\ensuremath{\styleEntity{BT}}\xspace$ and randomness {\ensuremath{\sf{\mathcal Chal}}}\xspace. \\ & \\ ${\ensuremath{\sf{\mathcal Chal}}}\xspace' \gets \mathsf{\ensuremath{\styleEntity{FV}}\xspace_{OPEN}}(\ensuremath{\styleEntity{BT}}\xspace',\ensuremath{\styleEntity{HD}}\xspace)$ & Reconstruction of randomness ${\ensuremath{\sf{\mathcal Chal}}}\xspace'$ from helper data \ensuremath{\styleEntity{HD}}\xspace and biometric \ensuremath{\styleEntity{BT}}\xspace'. \\ & \\ $\sigma \gets \mathsf{sign}_{sk}(M)$ & Signature result $\sigma$ of using $sk$ to sign message $M$. Implicitly we assume $\mathsf{sign}_{sk}$ to be a confidentiality preserving signature scheme, i.e., $M$ cannot be extracted from $\sigma$\\ & \\ $\mathsf{verify}_{pk}(\sigma) \equiv M$ & Verification of signature $\sigma$ on message $M$ for public key $pk$. 
\\ \hline \end{tabular} \normalsize \end{table*} Due to the lack of {\ensuremath{\sf{RTI}}}\xspace solutions, attacks of this type are applicable to any TEE-dependent application which assumes that the TEE indeed resides on the device of interest. More generally, it applies to any service relying on the physical presence of an RoT (either hardware-based or software-based) within a particular device. A successful {\ensuremath{\sf{RTI}}}\xspace verification can bind the public key used by the RoT for remote attestation with its hosting device. However, the binding only has a long-lasting effect for RoTs using a device-specific persistent public key. For those privacy-friendly TEEs that use short-lived public keys certified with a group signature (such as Intel SGX), the binding is ephemeral. Hence, it is imperative to conduct {\ensuremath{\sf{RTI}}}\xspace verification on a per-session basis for TEEs with privacy and unlinkability protection. We observe that many TEE-enabled devices (e.g., laptops, tablets and smartphones) are equipped with biometric sensors connected to the TEE via secure physical channels. Because biometric templates are sensitive and hard to revoke, this secure channel is used to protect the biometric template in case of a compromised application and/or operating system, while still allowing biometric authentication, as shown in FIDO~\cite{FIDO}. In this paper, we propose a low-burden user-aided approach to {\ensuremath{\sf{RTI}}}\xspace. The basic idea is that the TEE vouches for the biometric template securely obtained from the hard-wired sensor. We {\bf do not use biometrics} to authenticate the user. Instead, a biometric is used as a challenge. Security of our approach is based on the difficulty of cloning a human biometric (e.g., a fingerprint) in real-time during {\ensuremath{\sf{RTI}}}\xspace verification. However, prior enrollment of a user's biometric is not required. We also do not use the same biometric in different sessions. Because it is used as a challenge, the only properties the biometric needs are sufficient entropy and (real-time) unclonability, which biometrics used for user authentication are assumed to have. In the rest of this paper, after formalizing {\ensuremath{\sf{RTI}}}\xspace and describing the attack models, we construct a biometric-based {\ensuremath{\sf{RTI}}}\xspace scheme. We also prototype and evaluate our scheme using an {\ensuremath{\sf{\mathcal RoT}}}\xspace based on a trusted micro-hypervisor to demonstrate its practicality. We consider {\ensuremath{\sf{RTI}}}\xspace a subtle and important issue that has been mostly overlooked. \section{Practical Considerations}\label{sec:limitations} We now discuss some practical issues relevant to the proposed {\ensuremath{\sf{RTI}}}\xspace protocol. \subsection{Biometric Sensor Availability} One limitation of our general approach is the requirement for a biometric sensor hardwired to the {\ensuremath{\sf{\mathcal RoT}}}\xspace. Our prototype shows how this requirement can be circumvented -- the protocol can be securely deployed on devices not equipped with embedded biometric sensors by using a stand-alone biometric sensor and a trusted micro-hypervisor to emulate a direct hardware channel between the sensor and {$\rot_A$}\xspace. Nonetheless, we recognize that it might be beneficial to remove this hardware dependence.
In particular, it would be interesting to develop new {\ensuremath{\sf{RTI}}}\xspace protocols that use other types of physical challenges, conveyed through other sensors, that (similar to biometrics) are hard to clone or replay. Developing alternative {\ensuremath{\sf{RTI}}}\xspace protocols based on other sensors that might be available on commodity devices, and evaluating their usability trade-offs, is an interesting future direction. \subsection{Biometric Confidentiality} One concern with the proposed protocol is confidentiality of the biometric data used in the protocol. Even though {$\dev\text{-}A$}\xspace might be compromised, the biometric sample is read directly by the trusted {$\rot_A$}\xspace. In other words, confidentiality of the user's biometric vis-a-vis {$\dev\text{-}A$}\xspace is guaranteed, assuming that {\ensuremath{\sf{\mathcal RoT}}}\xspace hardware tamper-resistance is preserved. The same applies to {\ensuremath{\sf{\mathcal Vrf}}}\xspace, if it is also equipped with a {\ensuremath{\sf{\mathcal RoT}}}\xspace. Otherwise, the owner of {\ensuremath{\sf{\mathcal Vrf}}}\xspace should be the same as the user providing the biometric. \subsection{Fuzzy Extractor Issues} Statistical and reusability attacks are well-known issues of several \ensuremath{\styleEntity{FE}}\xspace constructions, including the fuzzy vaults used in our prototype. The former is the biometric analog of dictionary attacks on passwords: it analyzes the distribution of minutiae in human biometrics and uses this information to extract \ensuremath{\styleEntity{BT}}\xspace or {\ensuremath{\sf{\mathcal Chal}}}\xspace from \ensuremath{\styleEntity{HD}}\xspace. The latter applies to non-reusable \ensuremath{\styleEntity{FEs}}\xspace. In such cases, obtaining two instances $\ensuremath{\styleEntity{HD}}\xspace_1$ and $\ensuremath{\styleEntity{HD}}\xspace_2$, generated from the same biometric, allows reconstruction of \ensuremath{\styleEntity{BT}}\xspace in the clear. We note that these attacks are a serious concern for \ensuremath{\styleEntity{FE}}\xspace-based biometric authentication, where \ensuremath{\styleEntity{HD}}\xspace appears in the clear. In our case, by contrast, the problem is obviated by transmitting \ensuremath{\styleEntity{HD}}\xspace over a secure channel to {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace. In particular, we do not use \ensuremath{\styleEntity{FEs}}\xspace for biometric confidentiality (since they are not necessary to achieve that purpose). They are used so that {\ensuremath{\sf{\mathcal Vrf}}}\xspace can always embed a fresh challenge {\ensuremath{\sf{\mathcal Chal}}}\xspace into the ``biometric-based'' challenge, preventing replays of previous {\ensuremath{\sf{RTI}}}\xspace executions with the same biometric on another {\ensuremath{\sf{\mathcal RoT}}}\xspace, e.g., {\ensuremath{\sf{\mathcal \rot^*}}}\xspace. \subsection{Usability} As mentioned earlier, usability is a problem with sight-based presence attestation, along with its reliance on precise timing. Recall that location- and scene-based presence attestation schemes incur lower user burden. However, they also offer much lower security. Meanwhile, the user burden in our protocol amounts to performing two biometric samplings: one with {\ensuremath{\sf{\mathcal Vrf}}}\xspace and one with {$\dev\text{-}A$}\xspace.
(Moreover, the user can pre-enroll his fingerprints with {\ensuremath{\sf{\mathcal Vrf}}}\xspace well ahead of time.) This type of user interaction is common for authentication purposes and is typically considered more convenient than other authentication means, such as entering a PIN or password. Therefore, we consider the usability of the biometric-based {\ensuremath{\sf{RTI}}}\xspace protocol to be quite reasonable. \subsection{Accuracy}\label{sec:accuracy} Accuracy of the underlying biometric matching is not affected by our use-case. Improving its accuracy is an orthogonal effort. Nonetheless, for completeness, we report on the accuracy of the implementation used in our prototype. Similar accuracy analyses for biometric matching using fuzzy vaults (also considering other biometric modalities) can be found in~\cite{nandakumar2007fingerprint,snuse_journal,iris_fv}. We report on our prototype's accuracy considering the following metrics:\\ -- \textbf{Genuine Acceptance Rate (GAR):} Percentage of biometric samples correctly matched to other samples acquired from the same biometric.\\ -- \textbf{False Acceptance Rate (FAR):} Percentage of biometric samples incorrectly matched to any sample not acquired from the same biometric.\\ We conducted accuracy experiments using the publicly available FVC2000 fingerprint database (database and further information available at: \url{http://bias.csr.unibo.it/fvc2000/}). FVC2000 includes multiple fingerprint images (10 different noisy images of each fingerprint) acquired using 4 types of low-cost biometric sensors. As discussed in Section~\ref{sec:BG_FV}, the \ensuremath{\styleEntity{FV}}\xspace polynomial degree determines the number of matching data points in two biometric samples necessary to consider that the samples belong to the same user. Therefore, accuracy results are presented as a function of the \ensuremath{\styleEntity{FV}}\xspace polynomial degree in Figure~\ref{fig:accuracy}. \begin{figure}[!htbp] \centering \includegraphics[height = 2.0in, width=0.8\columnwidth]{./accuracy.pdf} \centering \caption{Accuracy of biometric matching in our prototype.}\label{fig:accuracy} \end{figure} According to the results in Figure~\ref{fig:accuracy}, for a security-critical task such as {\ensuremath{\sf{RTI}}}\xspace, an ideal choice would be degree $9$, with nearly zero false acceptances. The same degree results in a GAR of 80\%, meaning that 1 out of 5 times a genuine {\ensuremath{\sf{RTI}}}\xspace execution would fail and the user would need to try one more time.
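As a minimal illustration, the two rates can be computed from raw match outcomes as sketched below; the numbers are hypothetical, chosen only to mirror the degree-9 operating point discussed above:
\begin{verbatim}
def gar_far(genuine_trials, impostor_trials):
    """GAR: fraction of same-finger comparisons that matched.
       FAR: fraction of different-finger comparisons that matched."""
    gar = sum(genuine_trials) / len(genuine_trials)
    far = sum(impostor_trials) / len(impostor_trials)
    return gar, far

# Hypothetical outcomes: 80/100 genuine matches, 0/1000 false matches.
gar, far = gar_far([True] * 80 + [False] * 20, [False] * 1000)
print(f"GAR = {gar:.0%}, FAR = {far:.0%}")   # GAR = 80%, FAR = 0%
\end{verbatim}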
\input{relate} \balance \bibliographystyle{ieeetr} \section{Constructing an {\ensuremath{\sf{RTI}}}\xspace Protocol} \label{sec:protocol} \begin{figure*}[!ht] \begin{center} \fbox{ \scalebox{0.9}{ \procedure{}{% \textbf{{\ensuremath{\sf{\mathcal Vrf}}}\xspace} \> \> \textbf{{${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace}\\ \pcln \ensuremath{\styleEntity{BT}}\xspace_U \gets \mathsf{\ensuremath{\styleEntity{BT}}\xspace.Sample}(U,{\ensuremath{\sf{\mathcal Vrf}}}\xspace) \> \> \text{$pk_i$}\xspace, \text{$sk_i$}\xspace \gets Gen(\text{{${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace}) \\ \pcln {\ensuremath{\sf{\mathcal Chal}}}\xspace \sample \{0,1\}^l\> \> \\ \pcln \ensuremath{\styleEntity{HD}}\xspace \gets \mathsf{\ensuremath{\styleEntity{FV}}\xspace_{GEN}}(\ensuremath{\styleEntity{BT}}\xspace_U,{\ensuremath{\sf{\mathcal Chal}}}\xspace) \> \sendmessagerightx[3cm]{1}{\mathsf{\ensuremath{\styleEntity{HD}}\xspace}} \> \\ \pcln \> \> \ensuremath{\styleEntity{BT}}\xspace_U' \gets \mathsf{\ensuremath{\styleEntity{BT}}\xspace.Sample}_{}(U,\text{{$\dev\text{-}A$}\xspace})\\ \pcln \> \> {\ensuremath{\sf{\mathcal Chal}}}\xspace' \gets \mathsf{\ensuremath{\styleEntity{FV}}\xspace_{OPEN}}(\ensuremath{\styleEntity{HD}}\xspace,\ensuremath{\styleEntity{BT}}\xspace_U')\\ \pcln \> \sendmessageleftx[3cm]{1}{\mathsf{\sigma, \text{$pk_i$}\xspace}} \> \sigma \gets \mathsf{sign}_{\text{$sk_i$}\xspace}({\ensuremath{\sf{\mathcal Chal}}}\xspace')\\ \pcln \mathsf{verify}_{\text{$pk_i$}\xspace}(\sigma) \equiv {\ensuremath{\sf{\mathcal Chal}}}\xspace \> \> \\ } } } \caption{\ensuremath{\styleEntity{FV}}\xspace-based {\ensuremath{\sf{RTI}}}\xspace protocol: {\ensuremath{\sf{\mathcal Vrf}}}\xspace decides whether {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace resides in {$\dev\text{-}A$}\xspace and, if so, learns its session public-key \text{$pk_i$}\xspace.} \label{fig:FV_ROT_ID} \end{center} \end{figure*} \begin{figure*}[ht] \begin{center} \fbox{ \scalebox{0.9}{ \procedure{}{% \textbf{{\ensuremath{\sf{\mathcal Vrf}}}\xspace} \> \> \text{Identified {$\rot_A$}\xspace} \> \> \textbf{{${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace}\\ \pcln \> \> \ensuremath{\styleEntity{BT}}\xspace_U \gets \mathsf{\ensuremath{\styleEntity{BT}}\xspace.Sample}_{}(U,\text{{$\dev\text{-}A$}\xspace}) \> \> \text{$pk_i$}\xspace, \text{$sk_i$}\xspace \gets Gen(\text{{${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace})\\ \pcln \> \> \sigma_{\ensuremath{\styleEntity{BT}}\xspace_U} \gets \mathsf{sign}_{\text{\rotA-$sk(i)$}\xspace}(\ensuremath{\styleEntity{BT}}\xspace_U)\> \> \\ \pcln \> \sendmessageleftx[2cm]{1}{\mathsf{\ensuremath{\styleEntity{BT}}\xspace_U,\sigma_{\ensuremath{\styleEntity{BT}}\xspace_U}}} \> \> \>\\ \pcln \mathsf{verify}_{\text{\rotA-$pk(i)$}\xspace}(\sigma_{\ensuremath{\styleEntity{BT}}\xspace_U}) \equiv \ensuremath{\styleEntity{BT}}\xspace_U \> \> \\ \pcln {\ensuremath{\sf{\mathcal Chal}}}\xspace \sample \{0,1\}^l\> \> \\ \pcln \ensuremath{\styleEntity{HD}}\xspace \gets \mathsf{\ensuremath{\styleEntity{FV}}\xspace_{GEN}}(\ensuremath{\styleEntity{BT}}\xspace_U,{\ensuremath{\sf{\mathcal Chal}}}\xspace) \> \sendmessagerightx[7cm]{3}{\mathsf{\ensuremath{\styleEntity{HD}}\xspace}} \> \\ \pcln \> \> \> \> \ensuremath{\styleEntity{BT}}\xspace_U' \gets \mathsf{\ensuremath{\styleEntity{BT}}\xspace.Sample}_{}(U,\text{{$\dev\text{-}B$}\xspace})\\ \pcln \> \> \> \> {\ensuremath{\sf{\mathcal Chal}}}\xspace' \gets
\mathsf{\ensuremath{\styleEntity{FV}}\xspace_{OPEN}}(\ensuremath{\styleEntity{HD}}\xspace,\ensuremath{\styleEntity{BT}}\xspace_U')\\ \pcln \> \sendmessageleftx[7cm]{3}{\mathsf{\sigma, \text{$pk_i$}\xspace}} \> \sigma \gets \mathsf{sign}_{\text{$sk_i$}\xspace}({\ensuremath{\sf{\mathcal Chal}}}\xspace')\\ \pcln \mathsf{verify}_{\text{$pk_i$}\xspace}(\sigma) \equiv {\ensuremath{\sf{\mathcal Chal}}}\xspace \> \> \\ } } } \caption{Proxy {\ensuremath{\sf{RTI}}}\xspace protocol: {\ensuremath{\sf{\mathcal Vrf}}}\xspace is assisted by a previously identified {$\rot_A$}\xspace (residing on {$\dev\text{-}A$}\xspace) to decide whether {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace resides on physical device {$\dev\text{-}B$}\xspace. {$\dev\text{-}A$}\xspace, {$\dev\text{-}B$}\xspace, and user $U$ must be physically co-located. {\ensuremath{\sf{\mathcal Vrf}}}\xspace can be remote.} \label{fig:proxy_rot_id} \end{center} \end{figure*} We now construct a biometric-based {\ensuremath{\sf{RTI}}}\xspace protocol using \ensuremath{\styleEntity{FVs}}\xspace and analyze its security. We also present a Proxy {\ensuremath{\sf{RTI}}}\xspace protocol that can be used to address {\ensuremath{\sf{RTI}}}\xspace when {\ensuremath{\sf{\mathcal Vrf}}}\xspace is remote. \noindent {\bf System Assumption:} We assume an authentic (not confidential!) channel between the biometric sensor and the RoT. In some types of devices (e.g., branded smartphones) similar channels are implemented in hardware in order to protect the user's biometric data. Those channels are often claimed by vendors to be both confidential and authentic. Unfortunately, it has been recently shown that biometric data can still be leaked in clever ways\footnote{See: \url{https://www.blackhat.com/docs/us-15/materials/us-15-Zhang-Fingerprints-On-Mobile-Devices-Abusing-And-Leaking-wp.pdf}}, which means cuckoo attacks remain possible. In contrast, we believe that it is much harder to compromise authenticity of the channel, since the biometric sensor is hardwired to the RoT. Doing so would imply wholesale RoT compromise. Our scheme is dependent on channel authenticity and unclonability of fingerprints. For devices that do not have this kind of channel, we emulate it in software, by using a micro-hypervisor. As discussed in Section~\ref{sec:RTI}, in a cuckoo attack on the challenge-response {\ensuremath{\sf{RTI}}}\xspace protocol, the adversary relays the challenge from {\ensuremath{\sf{\mathcal Vrf}}}\xspace. In a conventional challenge-response protocol, a correct response is formed based on two factors: the challenge and the prover's secret. Hence, to counter the challenge relay attack, we include the user in the loop as the third factor needed to produce a correct response. In particular, {\ensuremath{\sf{\mathcal Vrf}}}\xspace blinds the cryptographic challenge with the user's biometric by using an \ensuremath{\styleEntity{FV}}\xspace scheme. {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace uses its biometric sensor to sample (presumably the same) biometric. The user only provides her biometric to {$\dev\text{-}A$}\xspace's sensor, which can only be read by {$\rot_A$}\xspace. Therefore, the only {\ensuremath{\sf{\mathcal RoT}}}\xspace that can unblind the challenge is on {$\dev\text{-}A$}\xspace, which means {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace is {$\rot_A$}\xspace. 
Since the biometric given to both {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace is the same, if {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace is not {$\rot_A$}\xspace, {\ensuremath{\sf{RTI}}}\xspace for {$\dev\text{-}A$}\xspace fails. We now discuss the protocol in more detail. \textit{\textbf{Remark:} We assume that protocol messages are exchanged over an encrypted and authenticated channel {\ensuremath{\sf{\mathcal Vrf}}}\xspace$\leftrightarrow${${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace. Note that this channel is established between {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace, i.e., {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {\bf some} {\ensuremath{\sf{\mathcal RoT}}}\xspace. Even though {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace has not been identified at this point, it is always possible to check whether \text{$pk_i$}\xspace was issued by some {\ensuremath{\sf{\mathcal RoT}}}\xspace. This is necessary to preserve confidentiality of \ensuremath{\styleEntity{HD}}\xspace if a non-reusable \ensuremath{\styleEntity{FE}}\xspace is used to implement the {\ensuremath{\sf{RTI}}}\xspace protocol. (See Section~\ref{sec:limitations} for further discussion on \ensuremath{\styleEntity{FE}}\xspace reusability.) A secure channel to some (trusted) {\ensuremath{\sf{\mathcal RoT}}}\xspace suffices to preserve confidentiality.} \subsection{FV-based {\ensuremath{\sf{RTI}}}\xspace}\label{sec:FV_protocol} Figure~\ref{fig:FV_ROT_ID} presents the {\ensuremath{\sf{RTI}}}\xspace protocol based on the \ensuremath{\styleEntity{FV}}\xspace scheme described in Section~\ref{sec:BG_FV}. It assumes that {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace are physically accessible to $U$. $U$ participates in the protocol by providing the same biometric to the sensors of {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace. The protocol starts with {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace issuing an asymmetric key-pair and with {\ensuremath{\sf{\mathcal Vrf}}}\xspace sampling $U$'s biometric, thus resulting in the template $\ensuremath{\styleEntity{BT}}\xspace_U$ (line 1). {\ensuremath{\sf{\mathcal Vrf}}}\xspace then generates a random $l$-bit challenge {\ensuremath{\sf{\mathcal Chal}}}\xspace, where $l$ is the security parameter (line 2). Next, {\ensuremath{\sf{\mathcal Vrf}}}\xspace uses the \ensuremath{\styleEntity{FV}}\xspace generation algorithm to obtain \ensuremath{\styleEntity{HD}}\xspace where $\ensuremath{\styleEntity{BT}}\xspace_U$ is the biometric and {\ensuremath{\sf{\mathcal Chal}}}\xspace is the secret. {\ensuremath{\sf{\mathcal Vrf}}}\xspace sends \ensuremath{\styleEntity{HD}}\xspace to {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace (line 3). $U$ also provides the same biometric to {$\dev\text{-}A$}\xspace. As a result, {$\rot_A$}\xspace obtains $\ensuremath{\styleEntity{BT}}\xspace_U'$ -- a new sample of the same biometric (line 4). Note that the step in line 4 is crucial. Under the assumption of a secure channel between the fingerprint sensor in {$\dev\text{-}A$}\xspace and {$\rot_A$}\xspace, $\ensuremath{\styleEntity{BT}}\xspace_U'$ can only be obtained by the {\ensuremath{\sf{\mathcal RoT}}}\xspace residing in that device, i.e., {$\rot_A$}\xspace. 
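Before turning to the adversarial case, the message flow of Figure~\ref{fig:FV_ROT_ID} can be summarized in the following schematic simulation. The \ensuremath{\styleEntity{FV}}\xspace is replaced by a zero-fuzziness stand-in (XOR with a hash of the template) and $\mathsf{sign}/\mathsf{verify}$ by an HMAC under a demo key; both are illustrative assumptions, not the primitives used in the actual scheme:
\begin{verbatim}
import hashlib
import hmac
import secrets

def fv_gen_stub(bt, chal):
    """'Blind' Chal with BT (exact-match stand-in for FV_GEN)."""
    pad = hashlib.sha256(bt).digest()
    return bytes(c ^ p for c, p in zip(chal, pad))

def fv_open_stub(hd, bt_prime):
    """Unblind; yields Chal only if BT' equals BT exactly."""
    pad = hashlib.sha256(bt_prime).digest()
    return bytes(h ^ p for h, p in zip(hd, pad))

def rti_round(sample_on_vrf, sample_on_dev):
    sk = secrets.token_bytes(32)        # stand-in for (pk_i, sk_i)
    bt_u = sample_on_vrf()              # line 1: Vrf samples BT_U
    chal = secrets.token_bytes(16)      # line 2: Chal <-$- {0,1}^l
    hd = fv_gen_stub(bt_u, chal)        # line 3: HD = FV_GEN(BT_U, Chal)
    bt_u_prime = sample_on_dev()        # line 4: RoT_P samples BT_U'
    chal_prime = fv_open_stub(hd, bt_u_prime)               # line 5
    sigma = hmac.new(sk, chal_prime, hashlib.sha256).digest()  # line 6
    expected = hmac.new(sk, chal, hashlib.sha256).digest()     # line 7
    return hmac.compare_digest(sigma, expected)

print(rti_round(lambda: b"alice-finger", lambda: b"alice-finger"))  # True
print(rti_round(lambda: b"alice-finger", lambda: b"other-finger"))  # False
\end{verbatim}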
If {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace does not reside in {$\dev\text{-}A$}\xspace, {\ensuremath{\sf{\mathcal Adv}}}\xspace has to provide another biometric to {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace, e.g., from an accomplice. In such a case, due to \ensuremath{\styleEntity{FV}}\xspace security, the reconstruction would result in an incorrect ${\ensuremath{\sf{\mathcal Chal}}}\xspace' \neq {\ensuremath{\sf{\mathcal Chal}}}\xspace$ with overwhelming probability $1-negl(l)$, for an appropriate choice of \ensuremath{\styleEntity{FV}}\xspace parameters as a function of $l$. Hence, it would not pass {\ensuremath{\sf{\mathcal Vrf}}}\xspace's signature verification (line 7). If verification succeeds, {\ensuremath{\sf{\mathcal Vrf}}}\xspace becomes convinced that \text{$pk_i$}\xspace is indeed issued by {$\rot_A$}\xspace and {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace $\equiv$ {$\rot_A$}\xspace. Unlike \ensuremath{\small{PA}}\xspace schemes, security of our biometric-based {\ensuremath{\sf{RTI}}}\xspace scheme is based on {\ensuremath{\sf{\mathcal Adv}}}\xspace's inability to forge $\ensuremath{\styleEntity{BT}}\xspace_U$ and mount a successful cuckoo attack. Although {\ensuremath{\sf{\mathcal Adv}}}\xspace controls the entire software state of {$\dev\text{-}A$}\xspace (except for {$\rot_A$}\xspace itself) and can access any memory outside of that reserved by {$\rot_A$}\xspace, it cannot obtain $\ensuremath{\styleEntity{BT}}\xspace_U'$ due to the secure channel between the fingerprint sensor and {$\rot_A$}\xspace. \noindent\textit{\textbf{Fingerprint Forgery:}} Fingerprints have been used as a biometric for a very long time and remain the most common means of biometric authentication. There have been numerous successful attacks that surreptitiously obtain a user's fingerprints and then use various contraptions to fool fingerprint sensors. Clearly, the proposed protocol and its variations will fail if the biometric template used in an {\ensuremath{\sf{RTI}}}\xspace protocol execution is stolen and reproduced beforehand. However, the protocol does not require a pre-determined fingerprint or user. Hence, a fingerprint forgery attack may not always succeed. \begin{figure}[ht] \begin{mdframed} \small \begin{Theorem}\label{th:fv_roti_completeness} The \ensuremath{\styleEntity{FV}}\xspace-based {\ensuremath{\sf{RTI}}}\xspace protocol (Figure~\ref{fig:FV_ROT_ID}) is complete according to Definition~\ref{def:RTI_comp} as long as \ensuremath{\styleEntity{FV}}\xspace is complete according to Definition~\ref{def:completeness}. \vspace{1mm} \begin{Proof} In an honest execution of the protocol, {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace resides in {$\dev\text{-}A$}\xspace, i.e.: $\text{\text{$pk_i$}\xspace} \gets Gen(\text{{$\rot_A$}\xspace})$.\\ Since {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace resides in {$\dev\text{-}A$}\xspace, {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace (i.e., {$\rot_A$}\xspace) receive $\ensuremath{\styleEntity{BT}}\xspace_U$ and $\ensuremath{\styleEntity{BT}}\xspace_U'$ such that $dist(\ensuremath{\styleEntity{BT}}\xspace_U, \ensuremath{\styleEntity{BT}}\xspace_U') \leq w$.
It follows from Definition~\ref{def:completeness} that: \begin{equation} \begin{split} & \ensuremath{\styleEntity{HD}}\xspace \gets \ensuremath{\styleEntity{FV}}\xspace_{GEN}(\ensuremath{\styleEntity{BT}}\xspace_U, {\ensuremath{\sf{\mathcal Chal}}}\xspace) \rightarrow \\ & Pr[\ensuremath{\styleEntity{FV}}\xspace_{OPEN}(\ensuremath{\styleEntity{HD}}\xspace,\ensuremath{\styleEntity{BT}}\xspace_U') = {\ensuremath{\sf{\mathcal Chal}}}\xspace] > 1-negl(l) \rightarrow \\ & Pr[\sigma \equiv sign_{\text{$sk_i$}\xspace}({\ensuremath{\sf{\mathcal Chal}}}\xspace)] > 1-negl(l) \rightarrow \\ & Pr[verify_{\text{$pk_i$}\xspace}(\sigma) \equiv {\ensuremath{\sf{\mathcal Chal}}}\xspace = 1] > 1-negl(l) \end{split} \end{equation} \end{Proof} \end{Theorem} \vspace{1mm} \hrule \vspace{1mm} \small \begin{Theorem}\label{th:fv_roti_security} The \ensuremath{\styleEntity{FV}}\xspace-based {\ensuremath{\sf{RTI}}}\xspace protocol (Figure~\ref{fig:FV_ROT_ID}) is secure according to Definition~\ref{def:RTI_sec}, as long as \ensuremath{\styleEntity{FV}}\xspace is $p$-information theoretically secure as in Definition~\ref{def:security} and \ensuremath{\styleEntity{FV}}\xspace parameters are chosen such that $p = negl(l)$. \vspace{0.3mm} \begin{Proof} In this case, {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace does not reside in {$\dev\text{-}A$}\xspace, i.e.: $\neg(\text{\text{$pk_i$}\xspace} \gets Gen(\text{{$\rot_A$}\xspace}))$.\\ Therefore, it must be the case that {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace receive $\ensuremath{\styleEntity{BT}}\xspace_U$ and $\ensuremath{\styleEntity{BT}}\xspace_U'$ such that $dist(\ensuremath{\styleEntity{BT}}\xspace_U, \ensuremath{\styleEntity{BT}}\xspace_U') > w$. Assuming that {\ensuremath{\sf{\mathcal Adv}}}\xspace is unable to forge $sign_{\text{$sk_i$}\xspace}(.)$ with more than $negl(l)$ advantage, it follows from Definition~\ref{def:security} that: \begin{equation} \begin{split} & \ensuremath{\styleEntity{HD}}\xspace \gets \ensuremath{\styleEntity{FV}}\xspace_{GEN}(\ensuremath{\styleEntity{BT}}\xspace_U, {\ensuremath{\sf{\mathcal Chal}}}\xspace) \rightarrow \\ & Pr[\ensuremath{\styleEntity{FV}}\xspace_{OPEN}(\ensuremath{\styleEntity{HD}}\xspace,\ensuremath{\styleEntity{BT}}\xspace_U') = {\ensuremath{\sf{\mathcal Chal}}}\xspace] = p = negl(l) \rightarrow \\ & Pr[\sigma \equiv sign_{\text{$sk_i$}\xspace}({\ensuremath{\sf{\mathcal Chal}}}\xspace)] = negl(l) \rightarrow \\ & Pr[verify_{\text{$pk_i$}\xspace}(\sigma) \equiv {\ensuremath{\sf{\mathcal Chal}}}\xspace = 1] = negl(l) \end{split} \end{equation} \end{Proof} \end{Theorem} \end{mdframed} \end{figure} As mentioned above, security of the protocol in Figure~\ref{fig:FV_ROT_ID} depends on that of the \ensuremath{\styleEntity{FV}}\xspace scheme. Completeness and security of this protocol are stated in Theorems~\ref{th:fv_roti_completeness} and~\ref{th:fv_roti_security}, respectively. In both the completeness and security arguments, we assume that whenever two samples are taken from the same biometric they are within a certain distance threshold. Conversely, we assume that two samples of different biometrics are beyond that threshold. In other words, $dist(\ensuremath{\styleEntity{BT}}\xspace_U, \ensuremath{\styleEntity{BT}}\xspace_U') \leq w\; \iff \; \ensuremath{\styleEntity{BT}}\xspace_U$ and $\ensuremath{\styleEntity{BT}}\xspace_U'$ are samples of the same biometric.
In practice, validity of this assumption depends on the accuracy of the biometric matching procedure, including the distance function $dist$, the distance threshold $w$, and the degree of the \ensuremath{\styleEntity{FV}}\xspace polynomial. Our choice of parameters is based on previous work on these issues and is discussed in Section~\ref{sec:implementation}. Accuracy results obtained with such parameters are discussed in Section~\ref{sec:limitations}. \subsection{Proxy {\ensuremath{\sf{RTI}}}\xspace Protocol} The protocol in Section~\ref{sec:FV_protocol} requires {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace to be physically accessible to $U$, since $U$ must provide her biometric sample to both {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace. To cope with scenarios where {\ensuremath{\sf{\mathcal Vrf}}}\xspace (e.g., a server) is not easily approachable, we suggest using a proxy {$\dev\text{-}A$}\xspace, with its {$\rot_A$}\xspace previously identified, in order to assist {\ensuremath{\sf{\mathcal Vrf}}}\xspace in identifying {$\rot_B$}\xspace. Suppose that $U$ now carries {$\dev\text{-}A$}\xspace to the location of {$\dev\text{-}B$}\xspace. Figure~\ref{fig:proxy_rot_id} shows a protocol for using {$\dev\text{-}A$}\xspace to assist {\ensuremath{\sf{\mathcal Vrf}}}\xspace in remotely identifying {$\rot_B$}\xspace of {$\dev\text{-}B$}\xspace. The main idea is for {$\rot_A$}\xspace to act as an interface of {\ensuremath{\sf{\mathcal Vrf}}}\xspace. It captures $U$'s biometric and forwards it to {\ensuremath{\sf{\mathcal Vrf}}}\xspace via an authenticated and confidential network channel. The same biometric is also used as a challenge to {$\dev\text{-}B$}\xspace, which runs the rest of the protocol with {\ensuremath{\sf{\mathcal Vrf}}}\xspace. The security of the \ensuremath{\styleEntity{FV}}\xspace-based {\ensuremath{\sf{RTI}}}\xspace protocol in Section~\ref{sec:FV_protocol} implies the security of the proxy {\ensuremath{\sf{RTI}}}\xspace protocol. We note that {$\dev\text{-}A$}\xspace is \emph{not} a trusted device as used in \cite{DPKC18}. Its software, including the OS, could be compromised, while its {$\rot_A$}\xspace is trusted, which is consistent with the basic protocol. Hence, both protocols provide the same level of security. As discussed earlier, lack of {\ensuremath{\sf{RTI}}}\xspace violates the assumption that {\ensuremath{\sf{\mathcal RoT}}}\xspace resides on the physical device of interest, thus undermining security of any application dependent on that assumption. The Proxy {\ensuremath{\sf{RTI}}}\xspace is itself a good example of such an application. It relies on the assumption that biometric sampling is performed on {$\dev\text{-}A$}\xspace -- the device in possession of authorized user $U$. Therefore, identification of {$\rot_A$}\xspace is crucial to the overall security of this application. \section{Related Work}\label{sec:rw} In this section we summarize topics related to {\ensuremath{\sf{RTI}}}\xspace, except \ensuremath{\small{PA}}\xspace~\cite{presence_att}, which was already discussed in Section~\ref{sec:pres_att}. \textbf{\textit{Cuckoo Attacks}} were thoroughly introduced and formally modeled in~\cite{parno2011bootstrapping}. Several potential solutions were analyzed under that model and, among them, secure hardware channels between {$\dev\text{-}A$}\xspace's I/O interfaces and {$\rot_A$}\xspace were considered the preferred method.
As discussed in Section~\ref{sec:RTI}, even direct channels can be circumvented by a Cuckoo {\ensuremath{\sf{\mathcal Adv}}}\xspace that deploys its own \emph{accomplice challenger} to replay {\ensuremath{\sf{\mathcal Vrf}}}\xspace messages through the appropriate channel. To tackle this problem, our biometric-based approach exploits the uniqueness of biometrics as a physical unclonable challenge, in addition to the existing secure channel between the biometric sensor and the {\ensuremath{\sf{\mathcal RoT}}}\xspace. \textbf{\textit{Distance Bounding (DB)}} is a promising approach for addressing the {\ensuremath{\sf{RTI}}}\xspace problem. With recent advances~\cite{leu2019message,singhuwb,235453}, DB could allow {\ensuremath{\sf{\mathcal Vrf}}}\xspace to precisely establish a maximum distance (bound) to the untrusted {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace. Basically, if each device is equipped with DB facilities (a special radio and a high-precision clock) and {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace has a secure hardware channel to DB in its housing device, then the user can simply make sure that no other device is within the reported bound, e.g., 20-30 cm. However, several obstacles (discussed in Section~\ref{sec:RTI}) must be overcome before DB can be used for {\ensuremath{\sf{RTI}}}\xspace. \textbf{\textit{User Trust Bootstrapping}} allows the user to establish trust in her device. TrustICE~\cite{sun2015trustice} takes a hardware approach, using an LED under exclusive control of {\ensuremath{\sf{\mathcal RoT}}}\xspace. The light signal emitted by this LED is used to convince the user that the device has an active {\ensuremath{\sf{\mathcal RoT}}}\xspace. Other approaches~\cite{danisevskis2015graphical,lange2013crossover} reserve a fraction of the {\ensuremath{\sf{\mathcal Dev}}}\xspace screen to communicate the state of the trusted component to the user. While these approaches succeed in communicating the state of {\ensuremath{\sf{\mathcal RoT}}}\xspace in a given device, they do not provide identification of corresponding public keys. \textbf{\textit{Device Pairing}} is the problem of initializing a secure (usually wireless) channel between two previously unfamiliar devices, without any trusted third party. Many device pairing protocols have been proposed, relying on various physical properties~\cite{saxena2006secure,soriente2008hapadep,jung2011device}. The main difference between {\ensuremath{\sf{RTI}}}\xspace and device pairing is that, in the former, one of the devices ({$\dev\text{-}A$}\xspace) is potentially compromised and is therefore subject to cuckoo attacks. In contrast, device pairing mainly considers evil-twin attacks. \textbf{\textit{Remote Attestation}} is an {\ensuremath{\sf{\mathcal RoT}}}\xspace-enabled security service that allows {\ensuremath{\sf{\mathcal Vrf}}}\xspace to measure the software state of applications running on {\ensuremath{\sf{\mathcal Dev}}}\xspace. In recent years, several remote attestation techniques and architectures~\cite{haldar2004semantic,barbosa2016foundations,smart,hydra,vrased,simple} were proposed, targeting different platforms and offering different types of guarantees. While remote attestation enables malware detection on a remote {\ensuremath{\sf{\mathcal Dev}}}\xspace, it cannot be used as a means to solve {\ensuremath{\sf{RTI}}}\xspace by ensuring that {\ensuremath{\sf{\mathcal Dev}}}\xspace is in a malware-free state.
This is because remote attestation itself requires mitigating the {\ensuremath{\sf{RTI}}}\xspace problem, i.e., making sure that a remote attestation protocol indeed executes on {\ensuremath{\sf{\mathcal Dev}}}\xspace before it can be used to ensure that {\ensuremath{\sf{\mathcal Dev}}}\xspace is malware-free. \textbf{\textit{Biometrics}} are widely used in user authentication~\cite{snelick2005large,FIDO,burger2001biometric,bhagavatula2015biometric} and identification~\cite{haghighat2015cloudid,yuan2013efficient} systems. Fuzzy extractors are typically deployed to preserve biometric template confidentiality in the back-end of these systems~\cite{snuse}. To the best of our knowledge, this paper is the first proposal to use biometrics and fuzzy extractors to convey an unclonable challenge and assist in the identification of an {\ensuremath{\sf{\mathcal RoT}}}\xspace. \vspace{-0.8mm} \section{Conclusion}\label{sec:conclusion} This paper introduced and analyzed the {\ensuremath{\sf{RTI}}}\xspace problem, which occurs whenever an {\ensuremath{\sf{\mathcal RoT}}}\xspace is used to implement a security service that depends on physical I/O devices (sensors and actuators) and relies on the assumption of {\ensuremath{\sf{\mathcal RoT}}}\xspace residing in a specific physical device. To address this problem we proposed an {\ensuremath{\sf{RTI}}}\xspace protocol based on the difficulty of cloning biometrics in real time. It uses the biometric as a challenge in the {\ensuremath{\sf{RTI}}}\xspace protocol and relies on the existence of a hardware channel between biometric sensors and TEEs -- a feature already available on some current devices. We also demonstrated a prototype implementation of our approach. \section{{\ensuremath{\sf{RTI}}}\xspace Protocols}\label{sec:RTI} In this section we define {\ensuremath{\sf{RTI}}}\xspace protocols and the adversarial model. Our notation is summarized in Table~\ref{table:notation}. As noted in Section~\ref{sec:intro}, some types of TEEs use a device-specific persistent public key while others use a one-time public key with group-signature-based certification. Without loss of generality, our treatment in this section focuses on the latter type since it subsumes the former. \subsection{Definitions} Suppose {$\dev\text{-}A$}\xspace is a physical device (e.g., smartphone, personal computer, server) equipped with an RoT denoted by {$\rot_A$}\xspace. Let: \begin{equation} \text{$pk_i$}\xspace, \text{$sk_i$}\xspace,\sigma_i \gets Gen(\text{{$\rot_A$}\xspace}) \end{equation} denote the process whereby {$\rot_A$}\xspace generates the $i$-th asymmetric key pair $(\text{$pk_i$}\xspace,\text{$sk_i$}\xspace)$ and a group signature $\sigma_i$ upon $\text{$pk_i$}\xspace$. Although $\sigma_i$ can be verified cryptographically, it does not prove that $\text{$pk_i$}\xspace$ is for {$\dev\text{-}A$}\xspace, because the signature does not encode any physically identifiable property of {$\dev\text{-}A$}\xspace. An {\ensuremath{\sf{RTI}}}\xspace protocol consists of the interactions between a verifier ({\ensuremath{\sf{\mathcal Vrf}}}\xspace) and a prover RoT ({${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace) which issues \text{$pk_i$}\xspace and is alleged to reside on {$\dev\text{-}A$}\xspace. Both parties are trusted and cooperate such that {\ensuremath{\sf{\mathcal Vrf}}}\xspace can decide if {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace resides in {$\dev\text{-}A$}\xspace, i.e., whether {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace $\equiv$ {$\rot_A$}\xspace.
Interestingly, not even {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace itself knows its own residency. This goal is deceptively simple and, as we discuss in the remainder of this paper, is hard to achieve even though both parties involved in the protocol are trusted. At the end of the {\ensuremath{\sf{RTI}}}\xspace protocol, {\ensuremath{\sf{\mathcal Vrf}}}\xspace learns \text{$pk_i$}\xspace, which is a public key used by {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace. {\ensuremath{\sf{\mathcal Vrf}}}\xspace's assertion on {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace $\equiv$ {$\rot_A$}\xspace also implies that \text{$pk_i$}\xspace is indeed issued by {$\rot_A$}\xspace. Completeness and security of {\ensuremath{\sf{RTI}}}\xspace are defined in terms of {\ensuremath{\sf{\mathcal Vrf}}}\xspace's ability to make a positive conclusion if and only if $\text{$pk_i$}\xspace \gets Gen(\text{{$\rot_A$}\xspace})$, with overwhelming probability. We specify a generic ${\ensuremath{\sf{RTI}}}\xspace$ protocol in Definition~\ref{def:RTI}. Completeness and security of {\ensuremath{\sf{RTI}}}\xspace protocols are stated in Definitions~\ref{def:RTI_comp} and~\ref{def:RTI_sec}, respectively. Definition~\ref{def:RTI_comp} states that a complete {\ensuremath{\sf{RTI}}}\xspace protocol against {$\rot_A$}\xspace always outputs `1' if the public key \text{$pk_i$}\xspace given as input to the protocol is indeed generated by {$\rot_A$}\xspace. Definition~\ref{def:RTI_sec} states that a secure {\ensuremath{\sf{RTI}}}\xspace protocol against {$\rot_A$}\xspace always outputs `0' if the \text{$pk_i$}\xspace given as input to the protocol is not issued by {$\rot_A$}\xspace. Note that by Definition~\ref{def:RTI}, {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace is defined as the {\ensuremath{\sf{\mathcal RoT}}}\xspace that issues \text{$pk_i$}\xspace, thus the following equivalence: \begin{equation} [\text{\text{$pk_i$}\xspace} \gets Gen(\text{{$\rot_A$}\xspace})] \leftrightarrow [\text{{${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace} \equiv \text{{$\rot_A$}\xspace}]. \end{equation} We now present several possible attacks on {\ensuremath{\sf{RTI}}}\xspace protocols to illustrate some subtleties in addressing the {\ensuremath{\sf{RTI}}}\xspace problem. \subsection{Attack Vectors}\label{sec:vectors} \begin{figure*}[!htbp] \centering \subfigure[Expected setting in benign {\ensuremath{\sf{RTI}}}\xspace protocol execution.] {\includegraphics[width=3.6cm,height=1.0in]{fig1b.png}\label{fig:1a}} \hspace{1cm} \subfigure[An evil-twin {\ensuremath{\sf{\mathcal Adv}}}\xspace uses {\ensuremath{\sf{\mathcal \dev^*}}}\xspace to hijack the communication and play the role of {$\dev\text{-}A$}\xspace.] {\includegraphics[width=5.8cm,height=1.2in]{fig1c.png}\label{fig:1b}} \hspace{1cm} \subfigure[A cuckoo {\ensuremath{\sf{\mathcal Adv}}}\xspace uses malware on {$\dev\text{-}A$}\xspace to relay {\ensuremath{\sf{\mathcal Vrf}}}\xspace messages to/from accomplice {\ensuremath{\sf{\mathcal \dev^*}}}\xspace.] {\includegraphics[width=5.8cm,height=1.0in]{fig1d.png}\label{fig:1c}} \caption{Possible scenarios during {\ensuremath{\sf{RTI}}}\xspace protocol execution}\label{fig:settings} \end{figure*} In this section, we discuss several attack scenarios and argue that addressing {\ensuremath{\sf{RTI}}}\xspace is challenging. We start by describing a na\"ive approach to solving {\ensuremath{\sf{RTI}}}\xspace and show how it can be attacked trivially. We then gradually increase adversarial capabilities.
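Before walking through the attacks, it helps to see concretely why a valid group signature by itself cannot anchor \text{$pk_i$}\xspace to a physical device. The toy Python sketch below is purely illustrative (all cryptographic primitives are stubs; none of the names correspond to a real TEE API): two {\ensuremath{\sf{\mathcal RoT}}}\xspace{}s of the same class produce certificates that are indistinguishable from {\ensuremath{\sf{\mathcal Vrf}}}\xspace's point of view.
\begin{verbatim}
# Toy model of Gen(): every RoT of the same class certifies its fresh
# public key under the same group credential (all primitives stubbed).
import secrets

GROUP_PUB = "vendor-group-key"          # shared by the whole device class

class RoT:
    def gen(self):
        sk = secrets.token_hex(16)      # placeholder key generation
        pk = "pk-of-" + sk[:8]
        sigma = (GROUP_PUB, pk)         # stub group signature on pk
        return pk, sk, sigma

def group_verify(sigma, pk):
    return sigma == (GROUP_PUB, pk)     # checks issuance, not residency

rot_a, rot_star = RoT(), RoT()          # Dev-A's RoT and an accomplice's
pk1, _, s1 = rot_a.gen()
pk2, _, s2 = rot_star.gen()
# Both certificates verify under the same group key; nothing in them
# distinguishes a key issued by rot_a from one issued by rot_star.
assert group_verify(s1, pk1) and group_verify(s2, pk2)
\end{verbatim}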
\subsubsection{Na\"ive {\ensuremath{\sf{RTI}}}\xspace Protocol} As shown in \cite{ditio}, a natural way to solve {\ensuremath{\sf{RTI}}}\xspace is to challenge whether {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace knows \text{$sk_i$}\xspace, assuming that {\ensuremath{\sf{\mathcal Vrf}}}\xspace has the prior knowledge of {$\rot_A$}\xspace's ownership of $\text{$sk_i$}\xspace$. The protocol supposes the scenario in Figure~\ref{fig:1a} and proceeds as follows (communication is assumed to take place over a wireless medium): \begin{compactenum} \item {\ensuremath{\sf{\mathcal Vrf}}}\xspace requests {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace public key; \item {\ensuremath{\sf{\mathcal Vrf}}}\xspace receives \text{$pk_i$}\xspace and checks that it was issued by some legitimate {\ensuremath{\sf{\mathcal RoT}}}\xspace by verifying the group signature on \text{$pk_i$}\xspace; \item {\ensuremath{\sf{\mathcal Vrf}}}\xspace issues a random challenge $c$, encrypts $c$ under \text{$pk_i$}\xspace, and sends it to {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace; \item {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace issues signs $c$ using its private key; \item {\ensuremath{\sf{\mathcal Vrf}}}\xspace verifies the signature from {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace using \text{$pk_i$}\xspace. If valid, it concludes that {${\ensuremath{\sf{\mathcal RoT}}}\xspace_P$}\xspace is {$\rot_A$}\xspace and \text{$pk_i$}\xspace is indeed issued by {$\rot_A$}\xspace; % \end{compactenum} The problem is that the assumption in the na\"ive protocol barely holds in reality because it is infeasible for {\ensuremath{\sf{\mathcal Vrf}}}\xspace to have the prior knowledge of ownership of the key. Hence, {\ensuremath{\sf{\mathcal Vrf}}}\xspace cannot distinguish between an interaction with {$\dev\text{-}A$}\xspace and some other {\ensuremath{\sf{\mathcal \dev^*}}}\xspace of the same class and equipped with the same {\ensuremath{\sf{\mathcal RoT}}}\xspace type. In particular, an evil-twin adversary {\ensuremath{\sf{\mathcal Adv}}}\xspace can easily convince {\ensuremath{\sf{\mathcal Vrf}}}\xspace that \text{$pk_i$}\xspace was issued by {$\rot_A$}\xspace while in fact \text{$pk_i$}\xspace is issued by {\ensuremath{\sf{\mathcal \rot^*}}}\xspace. As illustrated in Figure~\ref{fig:1b}, {\ensuremath{\sf{\mathcal Adv}}}\xspace performs as follows: \begin{compactenum} \item {\ensuremath{\sf{\mathcal Adv}}}\xspace intercepts {\ensuremath{\sf{\mathcal Vrf}}}\xspace request and forwards it to {\ensuremath{\sf{\mathcal \dev^*}}}\xspace; \item {\ensuremath{\sf{\mathcal Adv}}}\xspace replies to {\ensuremath{\sf{\mathcal Vrf}}}\xspace with $\text{\text{$pk_i$}\xspace} \gets Gen(\text{{\ensuremath{\sf{\mathcal \rot^*}}}\xspace})$, issued by {\ensuremath{\sf{\mathcal \rot^*}}}\xspace; \item {\ensuremath{\sf{\mathcal Vrf}}}\xspace believes that \text{$pk_i$}\xspace was generated by {$\rot_A$}\xspace and completes the rest of the protocol with {\ensuremath{\sf{\mathcal \rot^*}}}\xspace; \item {\ensuremath{\sf{\mathcal Vrf}}}\xspace incorrectly concludes that \text{$pk_i$}\xspace was issued by {$\rot_A$}\xspace. \end{compactenum} \noindent\textit{\textbf{Remark}: Although {\ensuremath{\sf{\mathcal \rot^*}}}\xspace is honest (i.e., not subverted by {\ensuremath{\sf{\mathcal Adv}}}\xspace), it cannot tell that it is being (ab)used by {\ensuremath{\sf{\mathcal Adv}}}\xspace to fool {\ensuremath{\sf{\mathcal Vrf}}}\xspace. 
From {\ensuremath{\sf{\mathcal \rot^*}}}\xspace perspective, this interaction is indistinguishable from a legitimate execution of an {\ensuremath{\sf{RTI}}}\xspace protocol between {\ensuremath{\sf{\mathcal Vrf}}}\xspace and itself.} \subsubsection{Coping with Evil-twin Adversaries} One way to cope with evil-twin adversaries is for {\ensuremath{\sf{\mathcal Vrf}}}\xspace to require a physical channel that cannot be tampered with, or accessed, by nearby devices. For example, intercepting {\ensuremath{\sf{\mathcal Vrf}}}\xspace messages and replying in place of {$\dev\text{-}A$}\xspace is significantly harder when {\ensuremath{\sf{\mathcal Vrf}}}\xspace uses a wired channel (e.g., a USB cable) to communicate with {$\dev\text{-}A$}\xspace. This would prevent {\ensuremath{\sf{\mathcal Adv}}}\xspace from using {\ensuremath{\sf{\mathcal \dev^*}}}\xspace to interact with {\ensuremath{\sf{\mathcal Vrf}}}\xspace directly, since only {$\dev\text{-}A$}\xspace is directly connected to {\ensuremath{\sf{\mathcal Vrf}}}\xspace. In this case, an honest execution of the {\ensuremath{\sf{RTI}}}\xspace protocol would proceed as above, except for the use of the wired channel. However, even a wired channel is insufficient if we consider a \emph{cuckoo} adversary~\cite{parno2011bootstrapping}. Such an adversary first installs malware on {$\dev\text{-}A$}\xspace. This malware intercepts incoming messages destined for {$\rot_A$}\xspace and forwards them to {\ensuremath{\sf{\mathcal \dev^*}}}\xspace. As illustrated by Figure~\ref{fig:1c}, the attack proceeds as follows: \begin{compactenum} \item Malware on {$\dev\text{-}A$}\xspace forwards {\ensuremath{\sf{\mathcal Vrf}}}\xspace's request (received on the direct channel) to {\ensuremath{\sf{\mathcal \dev^*}}}\xspace, which feeds it to {\ensuremath{\sf{\mathcal \rot^*}}}\xspace; \item {\ensuremath{\sf{\mathcal \rot^*}}}\xspace replies to {\ensuremath{\sf{\mathcal Vrf}}}\xspace's request. It issues a \text{$pk_i$}\xspace and plays its part in the challenge-response protocol with {\ensuremath{\sf{\mathcal Vrf}}}\xspace (inadvertently assuming the role of {$\rot_A$}\xspace); \item The response message from {\ensuremath{\sf{\mathcal \rot^*}}}\xspace is relayed to {\ensuremath{\sf{\mathcal Vrf}}}\xspace by malware on {$\dev\text{-}A$}\xspace, via the direct channel. \item {\ensuremath{\sf{\mathcal Vrf}}}\xspace incorrectly concludes that \text{$pk_i$}\xspace is issued by {$\rot_A$}\xspace. \end{compactenum} As in the evil-twin attack case, {\ensuremath{\sf{\mathcal \rot^*}}}\xspace is an honest {\ensuremath{\sf{\mathcal RoT}}}\xspace. However, it cannot tell that it is used by {\ensuremath{\sf{\mathcal Adv}}}\xspace to fool {\ensuremath{\sf{\mathcal Vrf}}}\xspace (a minimal sketch of this relay pattern is given below). \subsubsection{Cuckoo Adversaries} Cuckoo attacks show that defending against evil-twin adversaries is not enough when malware is in full control of {$\dev\text{-}A$}\xspace. Indeed, the threat of malware is the main reason for {$\dev\text{-}A$}\xspace to be equipped with an {\ensuremath{\sf{\mathcal RoT}}}\xspace. On the other hand, because network I/O interfaces typically go through untrusted components (i.e., drivers and OS), malware presence makes a secure physical connection between {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace insufficient for mitigating the {\ensuremath{\sf{RTI}}}\xspace problem. Capabilities of a cuckoo attacker are not restricted to the wired interface (e.g., USB).
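The relay at the core of the cuckoo attack can be captured in a few lines. The sketch below is purely illustrative (all objects are stubs); the point it makes is that every byte {\ensuremath{\sf{\mathcal Vrf}}}\xspace receives is the honest output of a genuine {\ensuremath{\sf{\mathcal RoT}}}\xspace, so no purely cryptographic check on the transcript can expose the indirection.
\begin{verbatim}
# Illustrative cuckoo relay (all crypto stubbed): malware on Dev-A pipes
# Vrf's challenge to an accomplice device hosting an honest RoT.
class HonestRoT:
    def respond(self, challenge):
        return ("valid-signature-over", challenge)   # honest, verifiable

class AccompliceDev:
    def __init__(self, rot):
        self.rot = rot                               # rot_star

def malware_relay(vrf_request, accomplice):
    # Compromised OS on Dev-A intercepts traffic destined for rot_A ...
    return accomplice.rot.respond(vrf_request)       # ... rot_star answers

vrf_challenge = "nonce-42"
reply = malware_relay(vrf_challenge, AccompliceDev(HonestRoT()))
assert reply == ("valid-signature-over", "nonce-42")
# Vrf sees a perfectly valid transcript; the indirection is invisible.
\end{verbatim}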
Any I/O device that does not communicate directly to the {\ensuremath{\sf{\mathcal RoT}}}\xspace must pass through an untrusted component and can be used for cuckoo attacks. As a matter of fact, an {\ensuremath{\sf{\mathcal RoT}}}\xspace could even be used to verify the existence of a direct secure software path (e.g., implemented by a hypervisor) between itself and the I/O interface inside a given device. Then, as a part of the {\ensuremath{\sf{RTI}}}\xspace protocol, {\ensuremath{\sf{\mathcal RoT}}}\xspace would only reply to a challenge received on that particular verified interface. In the cuckoo attack, {\ensuremath{\sf{\mathcal \rot^*}}}\xspace (which is an honest {\ensuremath{\sf{\mathcal RoT}}}\xspace) would refuse to reply to the challenge relayed by {\ensuremath{\sf{\mathcal Adv}}}\xspace, because it is not received from the expected and verified wired I/O interface, since {$\dev\text{-}A$}\xspace and {\ensuremath{\sf{\mathcal \dev^*}}}\xspace are not directly connected. Unfortunately, even this setting can be circumvented by a more potent cuckoo {\ensuremath{\sf{\mathcal Adv}}}\xspace which uses an \emph{accomplice challenger} that connects to {\ensuremath{\sf{\mathcal \dev^*}}}\xspace via a channel expected by the {\ensuremath{\sf{\mathcal RoT}}}\xspace. Malicious software on {$\dev\text{-}A$}\xspace can forward {\ensuremath{\sf{\mathcal Vrf}}}\xspace messages to the \emph{accomplice challenger}. The \emph{accomplice challenger} then forwards to {\ensuremath{\sf{\mathcal \dev^*}}}\xspace the same messages sent by {\ensuremath{\sf{\mathcal Vrf}}}\xspace to {$\dev\text{-}A$}\xspace, over the expected I/O interface. Since the view of {\ensuremath{\sf{\mathcal \rot^*}}}\xspace is indistinguishable from that of an honest execution of {\ensuremath{\sf{RTI}}}\xspace, it produces a legitimate response that passes the verification. Although the channel expected by the {\ensuremath{\sf{\mathcal RoT}}}\xspace in our example is a wire/cable, this attack applies to any I/O interface. Assuming that the \emph{accomplice challenger} has I/O capabilities equivalent to those of {\ensuremath{\sf{\mathcal Vrf}}}\xspace, a challenge from {\ensuremath{\sf{\mathcal Vrf}}}\xspace can be replayed by the \emph{accomplice challenger} using the same type of channel. Thus, we observe that \textbf{\emph{whenever the challenge is conveyed using a machine I/O interface, it can be replayed by another machine with the same I/O capabilities}}. This motivates our choice for a biometric-based {\ensuremath{\sf{RTI}}}\xspace scheme. The key rationale is that, if a human user becomes a part of the I/O operation, this I/O operation cannot be easily replayed since it requires physical participation by the same person. \subsection{{\ensuremath{\sf{RTI}}}\xspace Adversarial Model} Considering the attack scenarios of Section~\ref{sec:vectors}, we define a strong adversary {\ensuremath{\sf{\mathcal Adv}}}\xspace that can compromise the entire software stack of {$\dev\text{-}A$}\xspace, \emph{excluding the software component of {$\rot_A$}\xspace, e.g., a trusted hypervisor loaded and verified by the hardware component of {$\rot_A$}\xspace}. As such, {\ensuremath{\sf{\mathcal Adv}}}\xspace can compromise applications and the operating system. It can intercept, eavesdrop, discard or inject messages on the internal path between {$\dev\text{-}A$}\xspace's I/O interfaces and {$\rot_A$}\xspace.
We assume that {\ensuremath{\sf{\mathcal Adv}}}\xspace has the same capabilities (intercept, eavesdrop, discard or inject messages) on the network. {\ensuremath{\sf{\mathcal Adv}}}\xspace can sense the physical surroundings of {$\dev\text{-}A$}\xspace and {\ensuremath{\sf{\mathcal Vrf}}}\xspace and record, retransmit, and replay any message, signal or action performed by {\ensuremath{\sf{\mathcal Vrf}}}\xspace or {$\dev\text{-}A$}\xspace actuators. In particular, {\ensuremath{\sf{\mathcal Adv}}}\xspace can deploy its own sensors and actuators with I/O capabilities equivalent to those of {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace, in the environment surrounding them. This model accounts for \emph{evil-twin adversaries} as per Section~\ref{sec:vectors}. {\ensuremath{\sf{\mathcal Adv}}}\xspace can deploy an accomplice device {\ensuremath{\sf{\mathcal \dev^*}}}\xspace equipped with {\ensuremath{\sf{\mathcal \rot^*}}}\xspace. The entire software state of {\ensuremath{\sf{\mathcal \dev^*}}}\xspace is also under {\ensuremath{\sf{\mathcal Adv}}}\xspace control. These devices might be located in a remote environment where {\ensuremath{\sf{\mathcal Adv}}}\xspace deploys its own sensors and actuators with I/O capabilities equivalent to those of {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace. Malware on {$\dev\text{-}A$}\xspace (controlled by {\ensuremath{\sf{\mathcal Adv}}}\xspace) might, for instance, intercept messages sent from {\ensuremath{\sf{\mathcal Vrf}}}\xspace to {$\rot_A$}\xspace, relaying them to {\ensuremath{\sf{\mathcal \dev^*}}}\xspace. {\ensuremath{\sf{\mathcal \rot^*}}}\xspace might inadvertently reply to malware on {$\dev\text{-}A$}\xspace, which then forwards replies to {\ensuremath{\sf{\mathcal Vrf}}}\xspace on behalf of {$\rot_A$}\xspace. This model accounts for \emph{cuckoo adversaries}, as discussed in Section~\ref{sec:vectors}. We consider hardware attacks to be out of scope of this paper. Specifically, {\ensuremath{\sf{\mathcal Adv}}}\xspace\ cannot make hardware changes on {$\dev\text{-}A$}\xspace, any hardware-based {\ensuremath{\sf{\mathcal RoT}}}\xspace, or the physically built-in circuit linking a trusted I/O device and an {\ensuremath{\sf{\mathcal RoT}}}\xspace. Protection against physical attacks is considered orthogonal and can be supported by tamper-resistant hardware techniques~\cite{ravi2004tamper}. \subsection{Mitigating {\ensuremath{\sf{RTI}}}\xspace via Presence Attestation}\label{sec:pres_att} The {\ensuremath{\sf{RTI}}}\xspace problem is quite similar to that of convincing a \emph{human user} that her own device has an {\em active} {\ensuremath{\sf{\mathcal RoT}}}\xspace. The latter is referred to as \emph{Presence Attestation} (\ensuremath{\small{PA}}\xspace) in~\cite{presence_att}, which proposes several concrete schemes. In addition to convincing the human user that she is interacting with the {\ensuremath{\sf{\mathcal RoT}}}\xspace on her device, \ensuremath{\small{PA}}\xspace schemes can be extended so that {\ensuremath{\sf{\mathcal Vrf}}}\xspace learns the {\ensuremath{\sf{\mathcal RoT}}}\xspace's public key. Therefore, they are one way to address {\ensuremath{\sf{RTI}}}\xspace. Unsurprisingly, \ensuremath{\small{PA}}\xspace schemes also cope with evil-twin and cuckoo attacks. We now overview three \ensuremath{\small{PA}}\xspace schemes from~\cite{presence_att} and discuss their security from the {\ensuremath{\sf{RTI}}}\xspace perspective.
\subsubsection{Location-based \ensuremath{\small{PA}}\xspace} The security premise of the location-based \ensuremath{\small{PA}}\xspace scheme is twofold: (i) {\ensuremath{\sf{\mathcal RoT}}}\xspace securely obtains the genuine location of its hosting device, as reported by GPS; and (ii) given sufficient knowledge about {$\dev\text{-}A$}\xspace's location, the user can manually verify the location reported by {\ensuremath{\sf{\mathcal RoT}}}\xspace, perhaps aided by visualization on a map. The essence of this approach is to use the geographic location as the challenge to {\ensuremath{\sf{\mathcal RoT}}}\xspace. However, besides well-known attacks on GPS signaling~\cite{GPS1,GPS2}, its main shortcoming is that it cannot differentiate {$\rot_A$}\xspace from {\ensuremath{\sf{\mathcal \rot^*}}}\xspace, which is sufficiently close to {$\dev\text{-}A$}\xspace so that they report the same readings. Moreover, manual verification of a geographic location does not have high enough accuracy. \subsubsection{Scene-based \ensuremath{\small{PA}}\xspace} This scheme uses a (photo of a) scene randomly chosen by the human user as the challenge and requires {\ensuremath{\sf{\mathcal RoT}}}\xspace to report the challenge received over a secure camera interface. As in the location-based scheme, the human user verifies correctness of the {\ensuremath{\sf{\mathcal RoT}}}\xspace response. This scheme is vulnerable to the evil-twin attack where the adversary takes a picture of the same scene and asks {\ensuremath{\sf{\mathcal \rot^*}}}\xspace to sign it. Its security is therefore dependent on the human user's ability to differentiate among photos taken by two different devices, which is obviously not reliable. This scheme is also vulnerable to analog cuckoo attacks, whereby {\ensuremath{\sf{\mathcal Adv}}}\xspace re-renders the scene on an accomplice display such that {\ensuremath{\sf{\mathcal \dev^*}}}\xspace can take a genuine photo of it. Given today's hardware technology, it is infeasible for a normal user to distinguish between a photo of a physical scene and a reproduction thereof. In both the location- and scene-based schemes, the human user decides on the correctness of {$\rot_A$}\xspace's response. From the perspective of {\ensuremath{\sf{RTI}}}\xspace, it takes an extra step for the user to notify {\ensuremath{\sf{\mathcal Vrf}}}\xspace about her conclusion. \subsubsection{Sight-based \ensuremath{\small{PA}}\xspace} The sight-based \ensuremath{\small{PA}}\xspace scheme does not require any human input. Its security is based on the observation that any message replay in the line-of-sight channel incurs a measurable time delay, because the attack includes analog operations which are comparatively time-consuming. In this scheme, {\ensuremath{\sf{\mathcal Vrf}}}\xspace and {$\dev\text{-}A$}\xspace run the standard challenge-response protocol using the line-of-sight channel whereby a display ``sends'' messages to a camera. Using cryptographic means, {\ensuremath{\sf{\mathcal Vrf}}}\xspace checks integrity of the response. In addition, by measuring the time to complete the session, it verifies whether {$\rot_A$}\xspace is at the other end of the line-of-sight channel. Note that this scheme requires {$\rot_A$}\xspace to securely obtain the challenge from the camera and securely deliver the response to the display.
Although it offers stronger security than location- and scene-based schemes, sight-based \ensuremath{\small{PA}}\xspace is dependent on the current frames-per-second (fps) rate of commodity cameras on modern smartphones. Moreover, sight-based \ensuremath{\small{PA}}\xspace requires the two participating devices to be physically well positioned through multiple rounds in order to form a high-quality line-of-sight channel. \noindent\textbf{Summary.} Zhang et al.~\cite{presence_att} have shed light on challenges related to {\ensuremath{\sf{\mathcal RoT}}}\xspace and cuckoo attacks, and made attempts to tackle them. We believe that {\ensuremath{\sf{RTI}}}\xspace is both harder and more general than the \ensuremath{\small{PA}}\xspace problem, since {\ensuremath{\sf{RTI}}}\xspace does not assume that the average human user possesses sufficient knowledge and expertise to discern ambient properties. Our biometric-based approach relies on the unclonability of human biometrics with high entropy, the same assumption propping up security in biometric authentication schemes. \subsection{Mitigating RTI via Distance Bounding} Distance bounding protocols~\cite{distbound1,distbound2,distbound3,distbound5} allow a verifier to determine whether its communication peer is within a certain distance (e.g., 30 cm). They are fundamentally different from an RTI protocol because establishing an acceptable distance does not always \emph{identify} the device. Using distance to solve RTI assumes that there is only a single device in range, which does not hold when the distance is large. There are also implementation issues in using a distance-bounding protocol for RTI. Parno et al.~\cite{parno2011bootstrapping} have remarked that it is not suited to deal with the cuckoo attack against TPM-based attestation, given the slow speed of TPM. Although today's {\ensuremath{\sf{\mathcal RoT}}}\xspace has better performance, the time variance of signature generation remains too large for distance-bounding protocols, which only tolerate time errors of a few nanoseconds. Moreover, distance-bounding protocols would require all devices of {$\dev\text{-}A$}\xspace's class to be equipped with distance bounding hardware (ultra wide-band radios with high-precision clocks needed for accurate timing measurements) securely wired to the {\ensuremath{\sf{\mathcal RoT}}}\xspace~\cite{distbound4}. This is currently not available in commodity devices. Recently, Dhar et al.~\cite{DPKC18} proposed to use a trusted device (e.g., a smart USB device) as a proxy attached to the proving device so that a remote verifier can detect the cuckoo attack during SGX attestation. Besides the hassle of using a trusted device, this approach relies on a strong assumption that the trusted device attached to an untrusted environment remains intact.
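As a back-of-the-envelope illustration of why such tight timing matters (our own numbers, not taken from the cited works): at the speed of light, a distance bound of tens of centimeters leaves a round-trip budget of only a few nanoseconds, so microsecond-level jitter in signature generation translates into hundreds of meters of distance uncertainty.
\begin{verbatim}
# Round-trip timing error vs. distance-bound error at the speed of light.
C = 3.0e8                          # m/s
for dt_ns in (1, 10, 1000):        # timing error in nanoseconds
    err_m = C * dt_ns * 1e-9 / 2   # halve: challenge + response legs
    print(f"{dt_ns:>5} ns  ->  {err_m:8.2f} m of distance uncertainty")
# 1 ns -> 0.15 m;  10 ns -> 1.50 m;  1000 ns (1 us) -> 150 m
\end{verbatim}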
1,108,101,563,453
arxiv
\section{Introduction} The understanding of non-equilibrium systems has been a main goal of statistical physics over the last decades, where remarkable progress has been made. Key to this advance has been the study of the statistics of several observables, ranging from the entropy production and the work to the density and the current (of particle, charge, energy, momentum...). As the probability distributions of these observables follow a large deviation principle, the study of their corresponding large deviation functions (LDFs) has become a subject of primary interest, because LDFs can be considered as the {\em free energy} analog in non-equilibrium systems \cite{derrida2007,Touchette}. For instance, the Macroscopic Fluctuation Theory \cite{bertini2014} provides a \col{variational} formula from which one can compute the LDF of joint space-time densities and currents for driven diffusive systems, just by knowing two transport coefficients. Particularly important in the understanding of non-equilibrium large deviations are the so-called {\em fluctuation relations} \cite{EvansFT,GCFT,JarzEq,KurchanFT,LSFT,MaesFT, MaesRedig,Crooks1999,seifert2005,harris2007,chetrite2008,garcia2012}, valid arbitrarily far from equilibrium. Those relations basically exploit the {\em time reversal symmetry} of the microscopic dynamics, which gives rise to a relation between a positive fluctuation of a certain observable and the corresponding negative fluctuation. However, for systems endowed with a spatial structure one may wonder if {\em spatial symmetry} yields relations between the probabilities of vectorial observables in different directions. This was pointed out in \cite{IFR}, where an Isometric Fluctuation Relation (IFR) was obtained within the Macroscopic Fluctuation Theory framework under some assumptions and verified using numerical simulations. The IFR relation allows us to relate in a very simple manner any pair of isometric current fluctuations and it has been recently extended to the context of anisotropic systems \cite{AFR}. In the IFR derivation the two-dimensional system at hand was driven out of equilibrium by boundary reservoirs at different values in the presence of a field. In the present paper we consider systems driven far from equilibrium in the presence of a bulk field, i.e.\ bulk-driven systems, and derive fluctuation theorems as a consequence of general spatial transformations, thereby extending the IFR case, which was restricted to rotations. Fluctuation relations are very relevant because from them one can derive in the linear response regime the Onsager reciprocity relations and the Green-Kubo formulas and, even more importantly, other reciprocity relations beyond these two can be obtained by considering higher-order response terms \cite{AndGasp}. In a similar way the IFR relation implies a set of new hierarchies between current cumulants and non-linear response coefficients \cite{IFR}. The focus of this paper is on deriving {\em spatial fluctuation theorems} starting from the underlying stochastic microscopic dynamics. Whereas a clear understanding has been reached for the microscopic origin of the standard fluctuation theorem \cite{GCFT,KurchanFT,LSFT,MaesFT}, it is an open problem to relate spatial fluctuation theorems to invariance properties of the microscopic dynamics. To achieve the result we will employ the Gibbs formalism in space-time as introduced in \cite{MaesFT}. The fluctuation symmetries which we derive here are a consequence of this space-time Gibbsian structure.
They apply both in equilibrium context, when the space-time Hamiltonian is time-reversal invariant (see \cite{lacoste} for the analysis {\em at equilibrium}) as well as in systems driven away from equilibrium, for which time reversal symmetry is broken. \medskip \section{Discrete systems} We first consider particle systems embedded in $\mathbb{Z}^d$ following a Markov dynamics. For the sake of simplicity we restrict to discrete time; however, similar results hold true for continuous-time dynamics. The Markov process is denoted by $\{X(n) = (X_{k}(n))_{k\in \mathbb{N}} :\; n \in \mathbb{N}\}$, where $X_{k}(n)$ denotes the random position of the $k^{th}$ particle at time $n$. System configurations are denoted by the vector $x=(x_1,x_2,\ldots)$ with the first particle located at site $x_1\in \Lambda_L = \{1,\ldots, L\}^d \subseteq \mathbb{Z}^d$ and similarly for all the other particles. \col{During the evolution particles jump to their nearest neighbor sites with anisotropic probabilities}. The anisotropy is tuned by the $d$-dimensional vector $a=(a_1,\ldots,a_d)$ and in each direction we assume a weak asymmetry which is produced by a constant external field $E= (E_1,\ldots,E_d)$. The dynamics is defined by the probability transition matrix with elements \begin{equation} \label{transition} p(x,y) = \left\{ \begin{array}{ll} \frac{b(x,y)}{C} a_s e^{\pm \frac{E_s}{L}} & \textrm{if } y = x^{k,\pm e_s}\\ 0 & \textrm{otherwise,} \end{array} \right. \end{equation} for $k\in\mathbb N$ and $s =1,\ldots, d$. Here $b(x,y)$ is a generic transition kernel from configuration $x$ to configuration $y$, $C$ is the normalization constant, $e_s$ is the unit vector in the $s$-direction and $x^{k,\pm e_s}$ is the configuration that is obtained from the configuration $x$ by moving the $k^{th}$ particle to $x_k \pm e_s$. We consider an initial particle number (\col{a conserved quantity of the dynamics}) proportional to the volume, i.e. $N= \rho L^d$, corresponding to a constant density $0 < \rho <\infty$. {We will be interested in the large deviations of the current vector $Q_{M,L} = \frac{1}{L}(N^{+,1}-N^{-,1},\dots,N^{+,d}-N^{-,d})$, where $N^{\pm,s}$ are the number of particle jumps in the $\pm s$ direction up to time $M$.} This current is an additive functional of the Markov process $(X(n))_{n\in\mathbb{N}}$ and it satisfies a large deviation principle that can be informally stated as $ \mathbb{P}(Q_{M,L} \sim M q) \approx e^{-M I_L(q)} $ as $M\to\infty$. By the G\"artner-Ellis theorem \cite{Den-Hollander} the large deviation function $I_L$ can be obtained as $ I_L(q) = \sup_{\lambda} ( \lambda \cdot q - \mu_{L}(\lambda)) $ where we define the scaled cumulant generating function of the current as \begin{equation} \mu_{L}(\lambda) := \lim_{M\to\infty} \frac{1}{M} \ln Z_{M,L}(\lambda), ~~\text{with}~~Z_{M,L}(\lambda)=\mathbb{E}(e^{\lambda \cdot Q_{M,L}}), \nonumber \end{equation} where $\mathbb{E} (\cdot)$ denotes the average in path space. The application of the Perron-Frobenius theorem allows us to express $\mu_L(\lambda)$ as {the logarithm of the largest eigenvalue of the tilted matrix} with elements $p_\lambda(x,y)=b(x,y) a_s e^{\pm \left( \frac{E_s}{L}+\frac{\lambda_s}{L} \right)}$ if $y = x^{k, \pm e_s}$ and $p_\lambda(x,y)=0$ otherwise. To study the system in the thermodynamic limit one needs to rescale by defining \begin{equation} \label{defi} \mu(\lambda) := \lim_{L\to\infty} \frac{\mu_{L}(\lambda)}{L^{d-2}} = \lim_{L\to\infty} \lim_{M\to\infty} \frac{\ln Z_{M,L}(\lambda)}{L^{d-2}M}.
\end{equation} Our first result provides the derivation of an {\em anisotropic fluctuation relation} in $d$ dimensions from the assumption of a system invariance property. Consider $N$ interacting particles whose dynamics is defined by the transition matrix \eqref{transition}. We denote by $P$ the probability measure on the path space $\Omega$ (i.e. the space of all trajectories) in the presence of an external field $E=(E_1,\dots,E_d)$, and $P_0$ the corresponding measure with $E=0$. \begin{theorem*}[Spatial fluctuation theorem] Let $U:\mathbb{Z}^d \to \mathbb{Z}^d$ be a transformation on the physical space. For a trajectory in path space $\vec{x} = (x(0),x(1),\ldots x(M))$, consider the bijective mapping ${\cal R}: \Omega \to \Omega$ induced by $U$: $ ({\cal R}{x}(n))_{k} = U{x}_{k}(n)\;. $ Assume that the current satisfies \begin{equation} \label{assume2} Q_{M,L}({\cal R}\vec{x}) = U \, Q_{M,L}(\vec{x}) \end{equation} and assume that $P_0$ has the invariance property \begin{equation} \label{invariance-prop} {P}_{0}(\vec{x})={P}_{0}({\cal R} \vec{x}) \qquad \forall \; \vec{x} \in \Omega\;. \end{equation} Then the following fluctuation relations hold: $\forall \lambda$ \begin{equation} \label{afr-finite-vol} Z_{M,L}(\lambda) = Z_{M,L}\Big((U^{-1})^t (\lambda + E) - E\Big), \end{equation} \begin{equation} \label{I2-finite-vol} I_{L}(q) -I_L(Uq)= (U^t E-E) \cdot q \end{equation} where $(U^{-1})^t$ denotes the transpose of the inverse. \end{theorem*} \col{Notice that statement \eqref{afr-finite-vol} is obtained at finite time and finite volume, whereas eq. \eqref{I2-finite-vol} holds for any finite volume in the limit of large times. Furthermore, when the limit in \eqref{defi} is finite, one has $\mathbb{P}(Q_{M,L} \sim M L^{d-2}q) \approx e^{-M L^{d-2}I(q)}$ with $I(q)=\lim_{L\to \infty} I_L(q)/L^{d-2}$ satisfying the analog of eq. \eqref{I2-finite-vol}.} We remark as well that to satisfy relation \eqref{invariance-prop} the map $U$ will also depend on the volume $L$ and the anisotropy $a$. However, to alleviate notation we do not write this dependence explicitly. In the proof we abbreviate $Z_{M,L}(\lambda)$ as $Z(\lambda)$ and $Q_{M,L}(\vec x)$ as $Q(\vec x)$. \smallskip \begin{proof} We start by observing that, from the previous definition of the current, it holds that ${P}(\vec{x})=\exp{[E \cdot Q(\vec{x})]}{P}_0(\vec{x})/{\cal N}$, with ${\cal N}=\sum_{\vec{x}\in\Omega}\exp{[E \cdot Q(\vec{x})]}{P}_0(\vec{x})$. Applying this relation we have \begin{eqnarray} Z(\lambda) = \sum_{\vec{x}\in\Omega}{P}(\vec{x}) e^{\lambda \cdot Q (\vec{x})} = \sum_{\vec{x}\in\Omega}\frac{{P_0}(\vec{x})}{{\cal N}} e^{(E+\lambda) \cdot Q (\vec{x})}. \nonumber \end{eqnarray} Thus, by using the invariance property \eqref{invariance-prop} we get \begin{eqnarray} Z(\lambda) =\sum_{\vec{x}\in\Omega}\frac{{P_0}({\cal R} \vec{x})}{{\cal N}} e^{(E+\lambda) \cdot Q (\vec{x})}= \sum_{\vec{x}\in\Omega}{P}({\cal R} \vec{x}) e^{-E\cdot Q ({\cal R}\vec{x})+ (E+\lambda) \cdot Q(\vec{x})} \nonumber \end{eqnarray} Applying the change of variables $\vec{y}= {\cal R}\vec{x}$ and since ${\cal R}$ is a bijective map we find \begin{eqnarray} Z(\lambda) = \sum_{\vec{y}\in \Omega}{P}(\vec{y}) e^{-E\cdot Q (\vec{y})+ (E +\lambda) \cdot Q({\cal R}^{-1}\vec{y})} . \nonumber \end{eqnarray} Hence, using the assumption \eqref{assume2}, it is easy to check that \eqref{afr-finite-vol} follows. By taking the limit $M\to \infty$ the same relation holds for $\mu_L$ and therefore \eqref{I2-finite-vol} follows by Legendre transform. \qed
\end{proof} \section{Comments and examples} If the transformation $U$ is chosen as spatial inversion, i.e. $U i = - i$ for $i\in\mathbb{Z}^d$, then one recovers the standard Gallavotti-Cohen fluctuation relation, i.e. $ \mu_L(\lambda) = \mu_L(-\lambda-2E). $ Notice that usually this relation is associated to time reversal invariance of the measure of the symmetric system. It was remarked in \cite{LSFT} that any transformation on path space such that ${\cal R} \circ {\cal R} = 1$ would lead to the Gallavotti-Cohen fluctuation relation. The transformation on path space induced by spatial inversion in physical space indeed has this property. More generally, the theorem above allows us to deduce generalised fluctuation relations as a consequence of {\em spatial symmetries}, i.e. whenever a transformation $U$ on the physical space satisfies \eqref{assume2} and \eqref{invariance-prop} then \eqref{afr-finite-vol} and \eqref{I2-finite-vol} follow. To further illustrate this point we shall discuss examples of systems of non-interacting particles. In this case the dynamics can be studied in terms of a single particle and the scaled cumulant generating function $\mu_L$ can be computed explicitly, so that one can check by inspection which spatial symmetries hold. For instance, for a system of independent random walkers (RW) where each particle at site $i$ jumps to site $i\pm e_s$ with probability $a_s e^{\pm\frac{E_s}{L}}$ with periodic boundary conditions, an elementary application of the Perron-Frobenius theorem gives \begin{equation} \mu^{\textrm{RW}}_L(\lambda)= \rho L^d \ln\left[ \sum_{s=1}^d \left(a_s e^{(E_s+\lambda_s)/L} + a_s e^{-(E_s+\lambda_s)/L}\right) \right]. \nonumber \end{equation} For $d=2$ and by a change of variables to polar coordinates ($z$, $\theta$) such that $\lambda_1=z \cos\theta -E_1$ and $\lambda_2=z \sin \theta \sqrt{\alpha} -E_2$, with $\alpha=a_1/a_2$ being the anisotropy ratio, the above expression reads \begin{eqnarray} \mu^{\textrm{RW}}_L(z,\theta)&=& \rho L^2 \ln\left[ a_1 \left( e^{\frac{z\cos\theta}{L}}+e^{\frac{-z\cos\theta}{L}} \right)\right.\nonumber\\ &&\left. +a_2 \left( e^{\frac{z\sqrt{\alpha}\sin\theta }{L}}+e^{\frac{-z\sqrt{\alpha}\sin\theta }{L}} \right) \right]. \label{thetaIRW2} \end{eqnarray} \col{In the isotropic case ($\alpha=1$) one recognizes by inspection the {\em discrete symmetries} of the system leading to a fluctuation theorem.} Namely, $\mu^{\textrm{RW}}_L(z,\theta)=\mu^{\textrm{RW}}_L(z,\theta')$, for $\theta'=m \pi/2\pm \theta$, $\forall m\in \mathbb Z$ (see Fig. \ref{anisofig}). \begin{figure}[h] \centering \includegraphics[scale=0.7]{isotropic_p2b.pdf}\vspace*{-6mm}\\ \includegraphics[scale=0.7]{anisotropic_p2.pdf} \caption{(Color online) Plot of $\frac{\mu_L^{RW}(z,\theta)}{L^{d-2}}$ in $d=2$ as a function of $\theta = \arctan ( \frac{1}{\sqrt{\alpha}} \frac{\lambda_2 + E_2}{\lambda_1+E_1})$. In both figures $\rho=0.5$, $E = (5,0)$, $z = 4$ and the anisotropy ratio is $\alpha=1$ in the top and $\alpha=2$ in the bottom. The inset of the top panel shows the 8 discrete symmetries of the system in $\lambda$-space for $L=10$, starting from an initial angle e.g. $\theta=\pi/6$, corresponding to the transformations $U$ given by the diagonal and the anti-diagonal matrices with elements $\pm 1$.
As $L$ increases the discrete symmetry approaches the continuous symmetry associated to the transformation $ U=\left( {\begin{array}{cc} \cos\theta & -\sqrt{\alpha} \sin\theta \\ \frac{1}{\sqrt{\alpha}} \sin\theta & \cos\theta \\ \end{array} } \right) $ and \eqref{afr-infinite-vol} is then satisfied. } \label{anisofig} \end{figure} It is then natural to expect that for diffusions one is led to consider {\em continuous spatial symmetries}. This can be seen by considering the diffusive scaling limit of the previous example. If we denote by \col{$\{X^{(L)}(n) = (X^{(L)}_{k}(n))_{k\in \mathbb{N}} :\; n \in \mathbb{N}\}$} the positions \col{at time $n$} of the $N$ independent random walkers (labeled by $k$) in a volume of linear size $L$, then the process $\{R_t = (R_{k,t})_{k\in \mathbb{N}} :\; t \in \mathbb{R}^+\}$ defined by $ R_t := \lim_{L\to \infty} \frac{X^{(L)}(\lfloor L^2 t \rfloor)}{L} $ will be a family of $N$ independent anisotropic Brownian motions satisfying the stochastic differential equation $ dR_{k,t} = A E dt + \sqrt{A} dB_{k,t} $ with $A$ the $d\times d$ diagonal diffusion matrix with elements $A_{s,s}=2 a_s$ and $B_{k,t} \in \mathbb R^d$ denoting independent standard Brownian motions. An immediate computation gives \begin{equation} \label{mubm} \lim_{L\to\infty} \frac{\mu^{RW}_{L}(\lambda)}{L^{d-2}} = \mu^{BM}(\lambda) = \rho\sum_{s=1}^d a_s \lambda_s(\lambda_s + 2E_s) \end{equation} where $ \mu^{\textrm{BM}}(\lambda) := \lim_{T\to \infty} \frac{1}{T L^d} \ln \Big[\mathbb{E}(e^{\lambda \cdot Q_T})\Big] $ and $Q_T=\sum_{k=1}^N R_{k,T}$ is the current up to time $T$. From the explicit expression \eqref{mubm} we see that a spatial fluctuation relation holds, i.e. \begin{equation} \label{afr-infinite-vol} \mu^{BM}(\lambda) = \mu^{BM}\Big((U^{-1})^t (\lambda + E) - E\Big) \end{equation} with $U$ such that $ \label{cond-U} U A U^t = A $. For $d=2$, $U$ takes the form given at the end of the caption of Fig. \ref{anisofig}. As we shall see below, such a relation can be traced back to the invariance of the path measure of the anisotropic Brownian motion under a spatial transformation $U$ such that $U A U^t = A$, in agreement with the findings of \cite{IFR,AFR} in the context of the Macroscopic Fluctuation Theory \cite{bertini2014}. Notice indeed that the MFT predicts for a two-dimensional anisotropic periodic system in the presence of a field $E$ that the probability of having a time-integrated current fluctuation $q_0=\frac{1}{T}\int_0^T dt \int dr q(r,t)$ up to a long time $T$ is $P(q_0)\sim e^{-T L^d I(q_0)}$ with the LDF \cite{AFR} (see also \cite{PRECarlos} for the one dimensional case) \begin{equation} \label{LDFMFT} I(q_0)=\lim_{T\to \infty}\inf_{(\rho,q)\in \Omega}\frac{1}{T}{\cal I}(\rho,q), \end{equation} where \begin{equation} \nonumber {\cal I}(\rho,q)= \frac{1}{2}\int_0^T dt\int dr (q+D(\rho)\nabla \rho -E \sigma(\rho))^T\sigma(\rho)^{-1} (q+D(\rho)\nabla \rho -E \sigma(\rho)). \end{equation} Here the diffusivity and the mobility are given by $D(\rho)$ and $\sigma(\rho)$ respectively. In \eqref{LDFMFT}, $\Omega$ is the set of paths $(\rho,q)$ satisfying the continuity equation and whose time-integrated current is $q_0$, \begin{equation} \nonumber \Omega=\left\{ (\rho,q):\frac{1}{T}\int_0^Tdt\int dr q(r,t)=q_0, \partial_t \rho=-\nabla\cdot q \right\} \end{equation} The cumulant generating function is then given by the Legendre transform of \eqref{LDFMFT} \begin{equation} \mu(\lambda)=\sup_{q_0}[\lambda q_0-I(q_0)].
\end{equation} In particular, for $N$ anisotropic independent random walkers in a volume $L^d$ such that $\rho=N/L^d$, $D(\rho)$ and $\sigma(\rho)$ are given by the diagonal matrices with elements $a_s$ and $2\rho a_s$ respectively. In this case, it is easy to check that by solving the variational problem given by \eqref{LDFMFT} we get \eqref{mubm} as expected. \medskip \section{Diffusions} To state a (spatial) fluctuation relation for systems following a generic diffusion process we consider an abstract path space $\Omega$, and a bijective measurable transformation ${\cal R}:\Omega\to\Omega$ with inverse ${\cal R}^{-1}$. Elements in the path space are denoted by $\omega$. For diffusions we will consider $\Omega= {\mathscr C}([0,T],\mathbb R^d)$, i.e. the set of all continuous paths up to time $T$ taking value in $\mathbb R^d$, and $ ({\cal R}(\omega))_t= U \omega_t $, with $U:\mathbb R^d\to\mathbb R^d$ an invertible spatial transformation. \begin{proposition*} Consider a probability measure on path space $\Omega$ of the form $ P(d\omega)= e^{H(\omega)} P_0(d\omega) $ where $P_0$ is ${\cal R}$-invariant. For all $\phi:\Omega \to \mathbb{R}$ we have the identity \begin{equation}\label{fluctgen} \mathbb E(e^{\phi\circ {\cal R}}) = \mathbb E(e^{\phi} e^{H\circ {\cal R}^{-1}- H}) \end{equation} \end{proposition*} \begin{proof} \begin{eqnarray*} \int e^{\phi\circ {\cal R}} dP &=& \int e^{\phi\circ {\cal R}}e^H dP_0 = \int e^{\phi}e^{H\circ {\cal R}^{-1}} dP_0 \\ &=&\int e^{\phi}e^{H\circ {\cal R}^{-1}- H} dP.\qed \end{eqnarray*} \end{proof} \noindent In the particular case that $\phi= -\gamma(H\circ {\cal R}^{-1}-H)$ \col{with $\gamma\in\mathbb{R}$}, the relation \eqref{fluctgen} gives $ \mathbb E(e^{\gamma (H\circ {\cal R}-H)})= \mathbb E (e^{(1-\gamma)(H\circ {\cal R}^{-1}-H)}). $ Moreover, specializing to a transformation such that ${\cal R}={\cal R}^{-1}$, this identity is exactly a symmetry of the form of the standard fluctuation theorem for the quantity $H\circ {\cal R}-H$. \\ Denoting by $P\circ {\cal R}$ the image measure of $P$ under ${\cal R}$, i.e., $ \int f d(P\circ {\cal R}) = \int (f\circ {\cal R} ) dP, $ it can be readily verified that $dP=e^{H} dP_0$ implies that $d(P\circ {\cal R})= e^{H\circ {\cal R}^{-1}}dP_0$. Therefore, as we shall use below, we can write \begin{equation}\label{alt} e^{H\circ {\cal R}^{-1}- H}= \frac{d(P\circ {\cal R})}{dP}= \frac{d(P\circ {\cal R})}{dP_0}\Big/ \frac{dP}{dP_0}. \end{equation} The abstract setting of the proposition above can be used to derive the spatial fluctuation theorem for finite time $T$ in the context of interacting diffusions. We shall illustrate this by considering the overdamped Langevin dynamics $\{X_t = (X_{k,t})_{k\in \mathbb{N}} :\; t \in \mathbb{R}^+\}$ describing $N$ particles (labeled by $k$) subject to a drift vector which can arise from a force $F$ applied to each particle and/or a conservative potential $V$, with a positive definite constant diffusion matrix $A$. Then the stochastic differential equation for the $k^{th}$ particle reads (It\^o convention) \begin{equation} \label{origpr} dX_{k,t}= F(X_{k,t}) dt + \nabla_k V(X_t) dt + \sqrt{A} dB_{k,t} \end{equation} where $B_{k,t}$ is again a standard Brownian motion. Notice that $V$ can model a self-potential as well as an interaction potential.
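Before turning to the transformed process, the following Euler--Maruyama sketch (Python/NumPy, with purely illustrative parameters) simulates \eqref{origpr} for a single particle in $d=2$ with constant force $F$, $V=0$ and diagonal $A$; in this case $Q_T$ reduces to the endpoint $X_T$, whose sample mean must approach $FT$.
\begin{verbatim}
# Euler-Maruyama sketch of dX = F dt + sqrt(A) dB (one particle, V = 0).
# Parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
d = 2
n_steps, dt = 1000, 1e-3
T = n_steps * dt
A = np.diag([2.0, 1.0])            # anisotropic diffusion matrix
sqrtA = np.sqrt(A)                 # entrywise = matrix root for diagonal A
F = np.array([0.5, -0.2])          # constant force

def endpoint():
    x = np.zeros(d)
    for _ in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt), size=d)
        x = x + F * dt + sqrtA @ dB
    return x                       # = Q_T for one particle with V = 0

samples = np.array([endpoint() for _ in range(400)])
print(samples.mean(axis=0), "vs E[Q_T] =", F * T)  # agree up to noise
\end{verbatim}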
To obtain the new process applying a transformation ${\cal R}$ of the type $({\cal R}(\omega))_t= U \omega_t$, we recall that if $X$ is multivariate normally distributed with mean zero and covariance matrix $A$, then for any $d\times d$ matrix $U$, $UX$ is multivariate normally distributed with mean zero and covariance $UAU^t$. As a consequence, if $Y_{k,t}= UX_{k,t}$ then \begin{equation} dY_{k,t}= U F(U^{-1} Y_{k,t}) dt + U \nabla_k V(U^{-1} Y_t) dt + \sqrt{UAU^t} dB_{k,t} \nonumber \end{equation} \col{where} we denote by $U^{-1}Y_t$ the collection $U^{-1}Y_{k,t}$ $\forall k$. For \col{the process $Y_t$} to be absolutely continuous w.r.t. the \col{$X_{t}$ process}, we need that $ UAU^t=A. $ Moreover, assuming that the potential is invariant under the transformation $U$, i.e. $V(Ux)= V(x)$, the process $Y_{k,t}$ satisfies \begin{equation}\label{transpr} dY_{k,t}= U F(U^{-1} Y_{k,t}) dt + \nabla_k V( Y_t) dt + \sqrt{A} dB_{k,t}. \end{equation} Thus, the process \col{whose paths' distribution} is invariant under the transformation ${\cal R}$ is \begin{equation}\label{invpr} dZ_{k,t}= \nabla_k V(Z_t) dt + \sqrt{A} dB_{k,t} \end{equation} \col{By using }the Girsanov formula \cite{SV} \col{one can compute $dP_X=e^HdP_Z$, i.e. the relative density between} the path space measure $P_X$ of the process \col{\eqref{origpr}} and the path space measure $P_Z$ of the process \eqref{invpr}. Analogously the measure $P_Y$ of the process \eqref{transpr} and the measure $P_Z$ are related by $d(P_X\circ {\cal R})=dP_Y=e^{H\circ {\cal R}^{-1}}dP_Z$. Hence, if we denote by $\tilde U$ and $\tilde A$ the $N\times(d\times d)$ block diagonal matrices with elements $U$ and $A$ respectively on the diagonal and by $\tilde F$ the $N\times d$ column vector obtained by stacking $N$ copies of the vector $F$, then by applying \eqref{alt} we get \begin{eqnarray} (H\circ {\cal R}^{-1}&-& H )(\omega)\label{HRHgen} = \int_0^T \tilde A^{-1}(\tilde U\tilde F(\tilde U^{-1}\omega_t)- \tilde F(\omega_t)) \cdot d\omega_t\nonumber \\ &-&\int_0^T \tilde A^{-1}(\tilde U\tilde F(\tilde U^{-1}\omega_t)- \tilde F(\omega_t)) \cdot \nabla V(\omega_t)dt\nonumber \\ &-& \frac12\int_0^T \tilde U\tilde F(\tilde U^{-1}\omega_t) \cdot \tilde A^{-1}\tilde U\tilde F(\tilde U^{-1}\omega_t) dt \nonumber \\ &+& \frac12\int_0^T \tilde F(\omega_t) \cdot \tilde A^{-1} \tilde F(\omega_t) dt. \end{eqnarray} In addition, if the force $F$ is constant we find (we put $\omega_0=0$) \begin{eqnarray}\label{HRH} (H\circ {\cal R}^{-1}-H) (\omega) = A^{-1}( U F- F) \cdot Q_T(\omega) \end{eqnarray} with $ Q_T(\omega)= \sum_{k=1}^N \Big( \omega_{k,T}- \int_0^T\nabla_k V(\omega_t) dt \Big). $ \col{Furthermore, by choosing $\phi (\omega)=\lambda \cdot Q_T(\omega)$, with $\lambda\in\mathbb R^d$}, we have that $(\phi\circ {\cal R})(\omega)= \lambda \cdot UQ_T(\omega)= U^t\lambda \cdot Q_T(\omega)$ and from \eqref{fluctgen} and \eqref{HRH} we get $ \mathbb E(e^{U^t\lambda \cdot Q_T})=\mathbb E( e^{(\lambda + A^{-1}(UF-F)) \cdot Q_T}). $ By defining $Z_T(\lambda):=\mathbb E(e^{\lambda \cdot Q_T})$ it readily follows that \begin{equation}\label{isoho} Z_T (\lambda)= Z_T\Big((U^{-1})^t(\lambda + A^{-1}F)-A^{-1}F\Big) \end{equation} which is the analog of the relation \eqref{afr-finite-vol} previously found in the discrete setting.
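Relation \eqref{isoho} can be verified numerically in closed form when $V=0$ and $F$ is constant: $Q_T$ is then Gaussian with mean $NTF$ and covariance $NTA$, so that $\ln Z_T(\lambda)=NT\big(\lambda\cdot F+\tfrac{1}{2}\lambda\cdot A\lambda\big)$. The short script below (illustrative parameters) checks the invariance for a $U$ obeying $UAU^t=A$; for diagonal $A$ this $U$ is precisely the continuous transformation quoted in the caption of Fig.~\ref{anisofig} with $\alpha=A_{1,1}/A_{2,2}$.
\begin{verbatim}
# Closed-form check of eq. (isoho) for V = 0 and constant F:
# log Z_T(lam) = N T (lam . F + lam . A lam / 2).
import numpy as np

a1, a2 = 2.0, 1.0
A = np.diag([a1, a2])
F = np.array([0.4, -0.3])
N, T = 3, 1.0

def log_Z(lam):
    return N * T * (lam @ F + 0.5 * lam @ A @ lam)

theta, r = 0.7, np.sqrt(a1 / a2)        # U below satisfies U A U^T = A
U = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta) / r, np.cos(theta)]])
assert np.allclose(U @ A @ U.T, A)

Ainv = np.linalg.inv(A)
lam = np.array([0.2, 0.5])
lam2 = np.linalg.inv(U).T @ (lam + Ainv @ F) - Ainv @ F
assert np.isclose(log_Z(lam), log_Z(lam2))   # eq. (isoho) holds
\end{verbatim}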
Notice that if $A$ is the diagonal matrix with elements $A_{s,s}=2a_s$, $F=A E$, $V=0$ and \col{$\mu^{BM}(\lambda):=\lim_{T\to \infty} (TL^d)^{-1}\ln Z_T(\lambda)$} we recover from the above equation the previous result \eqref{afr-infinite-vol} for the anisotropic Brownian motion. \medskip \section{Conclusions} In this work we have derived a {\em spatial fluctuation theorem} (SFT) for interacting particle systems and interacting diffusions driven out of equilibrium by an external field. It has been proved in both cases that the SFT can be traced back to an invariance property of the microscopic path space \col{measure} under spatial transformations. Remarkably, this result holds for finite time in the case of interacting diffusions \eqref{isoho}, with $UAU^t=A$ and $V(Ux)=V(x)$, as well as for finite volume for interacting particle systems \eqref{afr-finite-vol}. In the latter, the spatial transformations yielding the SFT correspond to discrete symmetries associated with the underlying lattice geometry of the system at hand. As the system linear size increases these transformations become continuous. The SFT gives a new perspective on how the microscopic symmetries of the system are reflected at the fluctuating level. Whereas the standard fluctuation theorem is based on time reversal-symmetry of the microscopic dynamics, the SFT proves that the spatial symmetries have also a word to say. \col{It is} worth emphasizing that from the SFT new hierarchies for the currents cumulants and for the non-linear response can be obtained \col{\cite{IFR,AFR}}. For more details see section 6.2 of the J. Stat. Phys. paper of \cite{IFR}, where these hierarchies are derived from the SFT, $\mu(\lambda) = \mu\Big((U^{-1})^t (\lambda + E) - E\Big) $, when the transformation $U$ is a rotation. To obtain the hierarchical cumulants relations that has to consider the limit of infinitesimal rotations, namely $U={\mathbb I} + \Delta\theta {\cal L}$, with ${\mathbb I}$ the identity matrix and ${\cal L}$ any generator of $d$-dimensional rotations. For instance, {to the lowest order these hierarchies imply Onsager's reciprocity symmetries and Green-Kubo relations for the linear response coefficients, with the additional prediction that, the linear response matrix is in fact proportional to the identity.} In addition, the bare extension to more than one dimension of the standard fluctuation theorem only allows to compare one spatial direction and its reverse. It is within the SFT that one can relate probabilities of observing two current values in arbitrary different spatial directions (see \cite{expIFR} for the experimental measurements of the current fluctuations of a self-propelled rod immersed in a sea of spherical beads). From the experimental point of view, as the standard fluctuation relation leads to the the study of free-energy differences in terms of work distributions (see e.g. \cite{hummer2001,andrieux2007}), one can expect that SFT contains further information on the distribution of the mechanical work. The study of the full Langevin dynamics (including inertia) still remains as an open problem, as well as systems subject to noise that is non-Gaussian. \section*{Acknowledgements} The authors acknowledge financial support from the Italian Research Funding Agency (MIUR) through FIRB project, grant n. RBFR10N90W. \section*{References}
\section{Introduction} Given a domain $G\subset\R^{n}$ and a sequence of independent Wiener processes $\BM^{k}$, let us consider the following stochastic partial differential equation (SPDE) \begin{equation} \md u=(a^{ij}u_{x^{i}x^{j}}+b^{i}u_{x^{i}}+cu+f)\,\md t+(\sigma^{ik}u_{x^{i}}+\nu^{k}u+g^{k})\,\md\BM_{t}^{k}\label{eq:first} \end{equation} with $(t,x,\omega)\in(0,T]\times G\times\PS$, where the leading coefficients $a^{ij}(t,x,\omega)$ and $\sigma^{ik}(t,x,\omega)$ satisfy the \emph{strong parabolicity condition}: there are positive numbers $\kappa$ and $K$ such that \begin{equation} \kappa|\xi|^{2}+\sigma^{ik}\sigma^{jk}\xi^{i}\xi^{j}\le2a^{ij}\xi^{i}\xi^{j}\le K|\xi|^{2}\quad\text{for all }\xi\in\R^{n}\text{ and }(t,x,\omega).\label{eq:parabolic} \end{equation} Einstein summation convention is used in this paper with $i,j=1,\dots,n$ and $k=1,2,\dots$. Such equations arise in many applications, such as nonlinear filtering and statistical physics (see \citet{da2014stochastic} and references therein). The countable sum of stochastic integrals in (\ref{eq:first}) allows it to cover equations driven by cylindrical white noise (cf. \citet{walsh1986introduction,krylov1999analytic}). The main goal of this paper is to obtain the solvability of parabolic SPDEs in the space $L^{p}(\Omega\times(0,T),W^{2,p}(G))$ under natural structural conditions, where $W^{2,p}(G)$ is a standard Sobolev space with $p\ge2$. To explain our interest in this problem, let us recall some well-known results from SPDE theory in this respect. Within the framework of the Hilbert spaces $H^m(G)=W^{m,2}(G)$, \citet{krylov1981stochastic} proved the existence and uniqueness of weak solutions for a large class of parabolic SPDEs, and then proved the smoothness of solutions when $G=\R^{n}$. So far, the theory for the Cauchy problem is rather complete and satisfactory: a comprehensive $L^{p}$-theory of parabolic SPDEs in the whole space was developed by \citet{krylov1996l_p} in Bessel potential spaces $H^{s,p}(\R^{n})$ (equivalent to $W^{s,p}(\R^{n})$ when $s$ is a natural number), and a solvability theory in H\"older classes was constructed by \citet{mikulevicius2000cauchy,du2015cauchy}. As far as general domains $G$ are concerned, one of the greatest difficulties is how to handle the ``bad'' behaviour of derivatives of solutions near the boundary. Indeed, unless certain compatibility conditions are fulfilled, the derivatives of the solutions may blow up near the boundary even in the one-dimensional case. As an example, let us take a look at the following finding from \citet[Theorem~5.3]{Krylov2003Brownian}. \begin{lem}[\citet{Krylov2003Brownian}] There exists a $\lambda_{0}>0$ such that if $\lambda\in(0,\lambda_{0})$ and the function $u$ with $u(t,0)=0$ for all $t$ and $u(0,\cdot)\in C_{0}^{\infty}(0,\infty)$ satisfies the equation \begin{align*} \md u & =u_{xx}\,\md t+\sqrt{2-\lambda}\,u_{x}\,\md W_{t}\quad\text{on }(0,\infty)^{2}, \end{align*} then there exists a dense subset $S\subset(0,\infty)$ such that for all $s\in S$ and $\alpha>\me^{-\frac{1}{2\lambda}}$, it holds almost surely (a.s.) that $\lim_{x\downarrow0}x^{-\alpha}u(s,x)=\infty$; consequently, $\limsup_{x\downarrow0}|u_{x}(s,x)|=\infty$. \end{lem} \citet{flandoli1990dirichlet} proved the existence and uniqueness of solutions of parabolic SPDEs in the Hilbert space $H^{2m+1}(G)$ under a long series of compatibility conditions (see Theorem 4.1 there).
\citet{Brzeniak1995Stochastic} solved the equations in the Besov space $B_{p,2}^{1}(G)$ (whose elements have first-order weak derivatives), requiring $\sigma$ to be sufficiently small. Both of them used semigroup methods, and the leading coefficients of their equations were deterministic. Applying PDE techniques, \citet{krylov1994aw} developed a $W^{m,2}$-theory of linear SPDEs in general smooth domains, where the equations could have random coefficients; instead of the compatibility conditions, he introduced Sobolev spaces with weights to control the blow-up of derivatives of solutions near the boundary. This idea was adopted to develop a weighted $L^{p}$-theory for parabolic SPDEs in general domains, see \citet{krylov1994aw,krylov1999sobolev,kim2004p,kim2004stochastic} among others. For more aspects of regularity theory for quasilinear SPDEs in domains, we refer to \citet{denis2005p,zhang2006lp,van2012stochastic,van2012maximal,debussche2015re,Gerencs2017Boundary} and the references therein. By relaxing the requirement on derivatives of solutions, the weighted $L^{p}$-theory of SPDEs is successful in dealing with equations under very general assumptions on the coefficients. Nevertheless, it is still interesting to ask under which circumstances the solutions of SPDEs lie in the usual Sobolev spaces, especially the space $W^{2,p}$, in which the solutions found are called \emph{strong solutions} in classical PDE theory (cf. \citet{lieberman1996second}). This question does not seem to be answered by the weighted $L^{p}$-theory of SPDEs. To find natural conditions, let us start with the following two examples, which show that, without restrictions on the boundary values of the coefficient $\sigma$, the second-order derivatives need not be square integrable. \begin{example} Let $u(t,x)$ with $t\in(0,\infty)$ and $x\in G=(0,1)$ be a solution of the equation: \begin{align*} \md u(t,x) & =(u_{xx}(t,x)+f(t))\,\md t+\sigma(x)u_{x}(t,x)\,\md W_{t},\\ u(0,x) & =u(t,0)=u(t,1)=0, \end{align*} where $W$ is a one-dimensional Wiener process, $\sigma\in C^{2}(\bar{G})$ with $\sup_{G}|\sigma|<2$, and $f\in L^{2}(0,\infty)$ are not identically zero. From the $L^{2}$-theory of SPDEs (cf. \citet{krylov1981stochastic}), the equation has a unique (nonzero) solution $u\in L^{2}(\Omega\times(0,\infty),H_{0}^{1}(G))$. However, if $\sigma(0)\sigma(1)\neq0$, we can see that $u_{xx}\notin L^{2}(\Omega\times(0,T)\times G)$ for any $T>0$. Indeed, if $u_{xx}\in L^{2}(\Omega\times(0,T)\times G)$ for some $T>0$, then by embedding $u_{x}$ is continuous on $[0,T]\times\bar{G}$, and since $u(t,0)=u(t,1)=0$ for all $t$, the martingale part of $u$ must vanish at the endpoints, that is, $\sigma(0)u_{x}(t,0)=\sigma(1)u_{x}(t,1)=0$, hence $u_{x}(t,0)=u_{x}(t,1)=0$. Therefore, $v:=u_{x}\in H_{0}^{1}(G)$ satisfies (in the sense of distributions) \[ \md v=v_{xx}\md t+(\sigma v_{x}+\sigma_{x}v)\,\md W_{t},\quad v(0,x)=v(t,0)=v(t,1)=0, \] which implies $u_{x}=v=0$, and furthermore, $u=0$ by the boundary condition, yielding a contradiction. Thus, $u\notin L^{2}(\Omega\times(0,T),H^{2}(G))$ as long as $\sigma(0)\sigma(1)\neq0$. \end{example} \begin{example} With $\sigma\in(-2,2)\backslash\{0\}$, $T>0$ and $G=(0,1)$, the following equation \[ \md u=(u_{xx}+x/\sigma)\,\md t+(\sigma u_{x}-t)\,\md W_{t},\quad u(0,x)=u(t,0)=u(t,1)=0 \] has a unique solution $u\in L^{2}(\Omega\times(0,T),H_{0}^{1}(G))$ by the $L^{2}$-theory of SPDEs. Suppose $u_{xx}\in L^{2}(\Omega\times(0,T),L^{2}(G))$.
Then by an argument similar to that in the previous example, we have that $v=u_{x}\in H^{1}(G)$ satisfies \[ \md v=(v_{xx}+1/\sigma)\,\md t+\sigma v_{x}\,\md W_{t},\quad v(0,x)=0,\quad v(t,0)=v(t,1)=t/\sigma. \] Solving this equation we have $u_{x}(t,x)=v(t,x)=t/\sigma$, which is impossible given $u(t,0)=u(t,1)=0$. Therefore, $u\notin L^{2}(\Omega\times(0,T),H^{2}(G))$. \end{example} From the above examples, a certain compatibility condition on the coefficient $\sigma$ must be imposed to ensure that the second-order derivatives of solutions lie in $L^{p}(\Omega\times(0,T),W^{2,p}(G))$. This issue was first addressed by \citet{flandoli1990dirichlet}, where $p=2$ and the coefficients of the equations depended only on $x$. For $p>2$ there seems to be no result in the literature. In this note we propose the following condition. \begin{assumption} \label{assu:compatib}The vectors $\sigma^{\cdot k}=(\sigma^{1k},\dots,\sigma^{nk})$ restricted to $\pdG$ are tangent to $\pdG$, namely, \begin{equation} \bm{n}(x)\cdot\sigma^{\cdot k}(t,x,\omega)=0,\quad k=1,2,\dots\label{eq:compatib} \end{equation} for all $x\in\partial G$ and all $(t,\omega)$, where $\bm{n}(x)$ is a unit normal vector of $\pdG$ at $x$. \end{assumption} When zero boundary conditions are considered, the above examples show that this assumption is necessary for our goal; technically, it is the minimal condition on $\sigma$ ensuring that $\sigma^{ik}u_{x^{i}}$ vanishes on the boundary for all $u\in W^{2,p}(G)\cap W_{0}^{1,p}(G)$ and all $k$. Consequently, the free term $g$ must also vanish on the boundary; otherwise, the second-order derivatives of solutions of Eq. (\ref{eq:first}) may still blow up near the boundary, as illustrated in \citet[Example~1.2]{krylov1994aw}. Assumption \ref{assu:compatib} and the boundary value restriction on $g$ are all that we need in addition to achieve our goal. Indeed, the main result of this paper, Theorem \ref{thm:main} below, yields that, under Assumption \ref{assu:compatib} along with other standard conditions on the coefficients and on the domain, SPDE (\ref{eq:first}) with zero initial-boundary condition has a unique solution $u$ in the space $L^{p}(\Omega\times(0,T),\Pred,W^{2,p}(G))$ for any given $f\in L^{p}(\Omega\times(0,T),\Pred,L^{p}(G))$ and $g\in L^{p}(\Omega\times(0,T),\Pred,W_{0}^{1,p}(G;\ell^{2}))$, where $\Pred$ is the predictable $\sigma$-field. The requirement on the boundary value of $g$ is encoded in the choice of the space $W_{0}^{1,p}(G;\ell^{2})$. By embedding, the solution and its first derivatives are globally H\"older continuous as long as $p>n+2$. It is worth noting that Assumption \ref{assu:compatib} has a local impact on the regularity of solutions; in other words, if (\ref{eq:compatib}) is satisfied only on a portion of $\partial G$, then the solutions possess $W^{2,p}$-regularity and continuity near this portion. This property is elaborated in Theorem \ref{thm:local} in the next section. This paper is organized as follows. In the next section the main results are stated after introducing some notation and assumptions.
Section 3 is devoted to the proof of Theorem \ref{thm:main}, consisting of four subsections: in Subsection 3.1 we obtain the existence, uniqueness and estimates of the solution of a model equation in the half space; in Subsection 3.2 we derive a priori estimates for general equations in $\mathcal{C}^{2}$ domains; the existence and uniqueness of solutions in the general case is proved in Subsection 3.3 with the help of the method of continuity and the Banach fixed-point theorem; and in Subsection 3.4 we prove the continuity of solutions and their derivatives. Theorem \ref{thm:local} is proved in the final section. \section{Notation and main results\label{sec:Main-results}} Let $(\PS,\Filt,\Filt_{t},\Prob)$ be a complete filtered probability space carrying a sequence of independent Wiener processes $\BM^{k}$, and $\Pred$ the predictable $\sigma$-field generated by $\Filt_{t}$. Let $\R^{n}$ be an $n$-dimensional Euclidean space of points $x=(x^{1},\dots,x^{n})$, and \[ \R_{+}^{n}=\{x=(x^{1},x'):x^{1}>0,\,x'=(x^{2},\dots,x^{n})\in\R^{n-1}\}. \] Denote $B_{\rho}(x)=\{y\in\R^{n}:|x-y|<\rho\}$ and $B_{\rho}=B_{\rho}(0)$. Let $G$ be a domain in $\R^{n}$. The following definition is taken from \citet[Page~165]{Krylov2008Lectures}. \begin{defn} \label{def:domain} We write $G\in\mathcal{C}^{2}$ if there are positive constants $K_{0}$ and $\rho_{0}$ such that for each $z\in\pdG$ there exists a one-to-one map $\psi$ from $B_{\rho_{0}}(z)$ to a domain $U^{z}\subset\R^{n}$ such that \begin{enumerate} \item $\psi(z)=0$ and $U_{+}^{z}:=\psi(B_{\rho_{0}}(z)\cap G)\subset\R_{+}^{n}$, \item $\psi(B_{\rho_{0}}(z)\cap\pdG)=U^{z}\cap\{y\in\R^{n}:y^{1}=0\}$, \item $\psi\in C^{2}(\bar{B}_{\rho_{0}}(z))$, $\psi^{-1}\in C^{2}(\bar{U}^{z})$, and $\|\psi\|_{C^{2}}+\|\psi^{-1}\|_{C^{2}}\le K_{0}$. \end{enumerate} We say that the diffeomorphism $\psi$ flattens the boundary near $z$. \end{defn} Fix real numbers $T>0$ and $p\ge2$ in this paper. For $m\ge0$ we let $W^{m,p}(G)$ and $W_{0}^{1,p}(G)$ be the usual Sobolev spaces (cf. \citet{adams2003sobolev}), and $W^{m,p}(G;\ell^{2})$ and $W_{0}^{1,p}(G;\ell^{2})$ the corresponding spaces of $\ell^{2}$-valued functions. Denote \[ W_{\circ}^{m,p}(G)=W^{m,p}(G)\cap W_{0}^{1,p}(G),\quad m\ge1, \] and $W_{\circ}^{0,p}(\cdot)=W^{0,p}(\cdot)=L^{p}(\cdot)$. Denote by $W_{{\rm loc}}^{m,p}(G)$ the space of all functions $u$ such that $u\in W^{m,p}(G')$ for any $G'\subset G$ with $\mathrm{dist}(G',\partial G)>0$. For random functions, we define \begin{align*} \mathbb{W}^{m,p}(G,\tau) & =L^{p}(\PS\times(0,\tau),\Pred,W^{m,p}(G)),\quad\mathbb{W}^{m,p}(G)=\mathbb{W}^{m,p}(G,T),\\ \mathbb{W}_{\circ}^{m,p}(G,\tau) & =L^{p}(\PS\times(0,\tau),\Pred,W_{\circ}^{m,p}(G)),\quad\mathbb{W}_{\circ}^{m,p}(G)=\mathbb{W}_{\circ}^{m,p}(G,T), \end{align*} and analogously, $\mathbb{W}^{m,p}(G,\tau;\ell^{2})$, $\mathbb{W}^{m,p}(G;\ell^{2})$, $\mathbb{W}_{\circ}^{m,p}(G;\ell^{2}),\mathbb{W}_{{\rm loc}}^{m,p}(G)$, etc. Denote $\mathbb{L}^{p}(\cdot)=\mathbb{W}^{0,p}(\cdot)=\mathbb{W}_{\circ}^{0,p}(\cdot)$. Our notion of solution of SPDEs is made precise by the following definition of the solution space (cf. \citet{krylov1999analytic}).
\begin{defn} \label{def:W2p}For a positive integer $m$, by $\mathcal{W}_{\circ}^{m,p}(G)$ we denote the space of all functions $u\in\mathbb{W}_{\circ}^{m,p}(G)$ such that \[ u(0,\cdot)\in L^{p}(\PS,\Filt_{0},W_{\circ}^{m-2/p,p}(G)) \] and for some $u_{{\rm D}}\in\mathbb{W}^{m-2,p}(G)$ and $u_{{\rm S}}\in\mathbb{W}_{\circ}^{m-1,p}(G;\ell^{2})$, the equation $\md u=u_{{\rm D}}\,\md t+u_{{\rm S}}^{k}\,\md\BM_{t}^{k}$ holds in the sense of distributions, namely, for all $\phi\in C_{0}^{\infty}(G)$, \[ (u(t,\cdot),\phi)=(u(0,\cdot),\phi)+\int_{0}^{t}(u_{{\rm D}}(s,\cdot),\phi)\,\md s+\int_{0}^{t}(u_{{\rm S}}^{k}(s,\cdot),\phi)\,\md\BM_{s}^{k} \] for all $t\le T$ with probability $1$. \end{defn} Now we consider the following semilinear equation \begin{equation} \md u=(a^{ij}u_{x^{i}x^{j}}+f(t,x,u))\,\md t+(\sigma^{ik}u_{x^{i}}+g^{k}(t,x,u))\,\md\BM_{t}^{k}\label{eq:main} \end{equation} with the initial-boundary condition \begin{equation} \Big\{\!\begin{array}{ll} u(t,x)=0, & x\in\pdG,\ t\ge0;\\ u(0,x)=u_{0}(x), & x\in G. \end{array}\label{eq:bdcondition} \end{equation} The following conditions on the given data are quite standard (cf. \citet{krylov1999analytic}). \begin{assumption} \label{assu:continuity}The functions $a^{ij}=a^{ji}$ and $\sigma^{ik}$ are real valued and $\Pred\times\mathcal{B}(G)$-measurable and satisfy the strong parabolicity condition (\ref{eq:parabolic}), and there are a number $L>0$ and a continuous increasing function $\cm(\cdot)$ with $\cm(0)=0$ such that \[ |a^{ij}(t,x)-a^{ij}(t,y)|\le\cm(|x-y|),\quad\|\sigma^{i\cdot}(t,x)-\sigma^{i\cdot}(t,y)\|_{\ell^{2}}\le L|x-y| \] for all $(t,\omega)$, all $x,y\in\bar{G}$, and all $i,j=1,\dots,n$. \end{assumption} \begin{assumption} \label{assu:fg}{(a)} For any $u\in W_{\circ}^{2,p}(G)$, the functions $f(\cdot,\cdot,u)$ and $g(\cdot,\cdot,u)$ are predictable as functions taking values in $L^{p}(G)$ and $W_{\circ}^{1,p}(G;\ell^{2})$, respectively. {(b)} $f(\cdot,\cdot,0)\in\mathbb{L}^{p}(G)$ and $g(\cdot,\cdot,0)\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$. {(c)} For any $\eps>0$, there is a $K_{\eps}\ge0$ such that for any $u,v\in W_{\circ}^{2,p}(G)$, $t$, $\omega$, we have \[ \begin{aligned}\|f(t,\cdot,u)-f(t,\cdot,v)\|_{L^{p}(G)} & +\|g(t,\cdot,u)-g(t,\cdot,v)\|_{W^{1,p}(G;\ell^{2})}\\ & \le\eps\|u-v\|_{W^{2,p}(G)}+K_{\eps}\|u-v\|_{L^{p}(G)}. \end{aligned} \] \end{assumption} The main result of this paper is the following theorem. \begin{thm} \label{thm:main}Let $G\in\mathcal{C}^{2}$ and Assumptions \ref{assu:compatib}, \ref{assu:continuity} and \ref{assu:fg} be satisfied. Then we have that \emph{(i)} for any $u_{0}(\cdot)\in L^{p}(\PS,\Filt_{0},W_{\circ}^{2-2/p,p}(G))$ the Dirichlet problem (\ref{eq:main})--(\ref{eq:bdcondition}) admits a unique solution $u\in\mathcal{W}_{\circ}^{2,p}(G)$; \emph{(ii)} the solution satisfies the estimate \begin{equation} \|u\|_{\mathbb{W}^{2,p}(G)}^{p}\le C\bigl(\|f(\cdot,\cdot,0)\|_{\mathbb{L}^{p}(G)}^{p}+\|g(\cdot,\cdot,0)\|_{\mathbb{W}^{1,p}(G;\ell^{2})}^{p}+\E\|u_{0}\|_{W^{2-2/p,p}(G)}^{p}\bigr),\label{eq:main-est} \end{equation} where the constant $C$ depends only on $\kappa,K,n,p,T,K_{0},\rho_{0},L,$ and the functions $\cm(\cdot)$ and $K_{\eps}$; \emph{(iii)} when $p>\max\{2,(n+2)/2\}$, $u\in L^{p}(\Omega,C^{\alpha/2,\alpha}([0,T]\times\bar{G}))$ for any $\alpha\in(0,\frac{2p-n-2}{2p})$, and when $p>n+2$, $u_{x}\in L^{p}(\Omega,C^{\beta/2,\beta}([0,T]\times\bar{G}))$ for any $\beta\in(0,\frac{p-n-2}{2p})$.
\end{thm} The H\"older space $C^{\alpha/2,\alpha}([0,T]\times\bar{G})$ is defined in the standard way (cf. \citet{krylov1996lectures}), which contains all continuous functions $u:[0,T]\times\bar{G}\to\R$ such that \[ \|u\|_{C^{\alpha/2,\alpha}([0,T]\times\bar{G})}=\sup_{[0,T]\times\bar{G}}|u|+\sup_{(t,x)\neq(s,y)}\frac{|u(t,x)-u(s,y)|}{|t-s|^{\alpha/2}+|x-y|^{\alpha}}<\infty. \] In the literature the domain $G$ was usually assumed to be bounded (unless it is the whole space or a half space), but here it can be unbounded, and a detailed argument will be presented in our proof to address unbounded domains (see Subsection 3.2 below). Moreover, the above theorem still holds true if the terminal time $T$ is replaced by any stopping time $\tau\le T$ as in \citet{krylov1999analytic,kim2004p}. A simple way to do this is to zero extend the functions $f$ and $g$ after time $\tau$ until $T$ and solve the problem in the time period $[0,T]$. By interpolation it is easily checked that the linear equation (\ref{eq:first}) fits the assumptions of Theorem (\ref{thm:main}) provided that $f\in\mathbb{L}^{p}(G)$ and $g\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$ along with the following condition. \begin{assumption} \label{assu:bcnu}The functions $b^{i}$, $c$, $\nu^{k}$ are real valued and $\Pred\times\mathcal{B}(G)$-measurable, and $|b^{i}|$, $|c|$, $\|\nu\|_{\ell^{2}}$, $\|\nu_{x}\|_{\ell^{2}}$ are uniformly bounded on $[0,T]\times\bar{G}\times\Omega$. \end{assumption} Even if the compatibility condition (\ref{eq:compatib}) is satisfied only on a portion of $\partial G$, it is still possible to obtain local regularity of solutions near this portion. The main issue here is that the solution may not lie in $\mathcal{W}_{\circ}^{2,p}(G)$ or even not exist. Fortunately, when $G$ is bounded (or a half space) and the equation is linear, the Dirichlet problem can be solved in the weighted Sobolev space $\mathfrak{H}_{p,\theta}^{2}(G)$ by means of the main results in \citet{kim2004stochastic}; the space $\mathfrak{H}_{p,\theta}^{2}(G)$ that $\mathcal{W}_{\circ}^{2,p}(G)$ can be embedded to was introduced by \citet{krylov1999sobolev,Lototsky2000Sobolev}, based on delicately selected weights. With this observation, we formulate the local regularity result into the following theorem by assuming the existence of solutions without thorough verification of conditions, and for simplicity but without loss of essence, we consider the linear equation (\ref{eq:first}). \begin{thm} \label{thm:local}Let $\Gamma$ be an open subset of $\partial G$ and (\ref{eq:compatib}) satisfied at each point $x\in\Gamma$. Let Assumptions \ref{assu:continuity} and \ref{assu:bcnu} be satisfied and $|a_{x}^{ij}|$ dominated by the constant $L$. Suppose that $u\in\mathbb{L}^{p}(G)\cap\mathbb{W}_{{\rm loc}}^{2,p}(G)$ with $u(0,\cdot)\in L^{p}(\PS,\Filt_{0},W_{\circ}^{2-2/p,p}(G))$ satisfies Eq. (\ref{eq:first}) with $u|_{\partial G}=0$ for given $f\in\mathbb{L}^{p}(G)$ and $g\in\mathbb{W}^{1,p}(G)$ with $g|_{\Gamma}=0$. Then for any bounded domain $G'\subset G$ with $\mathrm{dist}(G',\partial G\backslash\Gamma)>0$, we have $u\in\mathbb{W}^{2,p}(G')$. Moreover, $u\in L^{p}(\Omega,C^{\alpha/2,\alpha}([0,T]\times\bar{G}'))$ for any $\alpha\in(0,\frac{2p-n-2}{2p})$ when $p>\max\{2,(n+2)/2\}$, and $u_{x}\in L^{p}(\Omega,C^{\beta/2,\beta}([0,T]\times\bar{G'}))$ for any $\beta\in(0,\frac{p-n-2}{2p})$ when $p>n+2$. 
\end{thm} In the above theorem the assumption that the solution lies in $\mathbb{L}^{p}(G)\cap\mathbb{W}_{{\rm loc}}^{2,p}(G)$ is not restrictive: on the one hand, a function in the space $\mathfrak{H}_{p,\theta}^{2}(G)$ naturally belongs to $\mathbb{W}_{{\rm loc}}^{2,p}(G)$; on the other hand, the property $u\in\mathbb{L}^{p}(G)$ can be derived from the other assumptions of the theorem with the help of It\^o's formula, at least when $G$ is bounded. Requiring $a^{ij}(t,\cdot)\in C^{0,1}(G)$ allows us to write the equation in divergence form, which helps us prove $u\in \mathbb{W}^{1,p}(G')$ as an important intermediate step. We remark that $u\in\mathfrak{H}_{p,\theta}^{2}(G)$ does not always imply $u\in\mathbb{W}^{1,p}(G)$ (cf. \citet{kim2004p}). \section{Proof of Theorem \ref{thm:main}} \subsection{Model equations in a half space} Let $G=\R_{+}^{n}$ in this subsection. In the first step we consider the equations on $[0,T]\times\R_{+}^{n}$ with coefficients independent of $x$. \begin{prop} \label{prop:half-constant}Let $a^{ij}=a^{ij}(t)$ and $\sigma^{ik}=\sigma^{ik}(t)$ be predictable processes and satisfy (\ref{eq:parabolic}). Assume that \[ \sigma^{1\cdot}=(\sigma^{11},\sigma^{12},\dots)\equiv0,\quad\forall(t,\omega)\in[0,T]\times\PS. \] Consider the Dirichlet problem \begin{equation} \bigg\{\begin{aligned} & \md u=(a^{ij}u_{x^{i}x^{j}}+f)\,\md t+(\sigma^{ik}u_{x^{i}}+g^{k})\,\md\BM_{t}^{k},\\ & u|_{t=0}=0,\quad u|_{x^{1}=0}=0. \end{aligned} \label{eq:half-space-1} \end{equation} Then, \emph{(i)} for $f\in\mathbb{L}^{p}(\R_{+}^{n})$ and $g\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$, (\ref{eq:half-space-1}) has a unique solution $u\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$, and \begin{equation} \|u\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}\le C\bigl(\|f\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|g\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})}\bigr);\label{eq:est-half} \end{equation} \emph{(ii)} if $g\in\mathbb{L}^{p}(\R_{+}^{n};\ell^{2})$ and $f=f^{0}+c^{i}F_{x^{i}}$ with $f^{0},F\in\mathbb{L}^{p}(\R_{+}^{n})$ and $c^{i}\in\mathbb{L}^{\infty}(\R_{+}^{n})$, then (\ref{eq:half-space-1}) has a unique solution $u\in\mathcal{W}_{\circ}^{1,p}(\R_{+}^{n})$, and \begin{equation} \|u\|_{\mathbb{W}^{1,p}(\R_{+}^{n})}\le C\bigl(\|(f^{0},F)\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|g\|_{\mathbb{L}^{p}(\R_{+}^{n};\ell^{2})}\bigr),\label{eq:est-half-3} \end{equation} where the constant $C$ depends only on $\kappa,K,n,p,$ $T$, and additionally on $\|c^{i}\|_{\mathbb{L}^{\infty}}$ for \emph{(ii)}. \end{prop} \begin{proof} The proofs of (i) and (ii) are quite similar, so we only present the proof of (i) in detail. Consider the following equation \begin{equation} \md\hat{u}=K\Delta\hat{u}\,\md t+(\sigma^{ik}\hat{u}_{x^{i}}+g^{k})\,\md\BM_{t}^{k}\quad\text{on }(0,T]\times\R_{+}^{n}\label{eq:lap-halp} \end{equation} with zero initial-boundary condition. Obviously, this equation is also strongly parabolic. Define the odd continuation of $g$, i.e., \begin{equation} g(x^{1},x'):=-g(-x^{1},x'),\quad\forall x^{1}<0,\,x'\in\R^{n-1}.\label{eq:odd-ext} \end{equation} As $g\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$, the extended function $g$ belongs to $\mathbb{W}^{1,p}(\R^{n};\ell^{2})$. By Theorem 5.1 in \citet{krylov1999analytic}, there exists a unique solution $\hat{u}\in\mathcal{W}_{\circ}^{2,p}(\R^{n})$ of (\ref{eq:lap-halp}) considered in the whole $\R^{n}$ with zero initial condition.
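Let us spell out the reflection argument behind the oddness claim used in the next step; it is here that the assumption $\sigma^{1\cdot}\equiv0$ is essential. Set $w(t,x):=-\hat{u}(t,-x^{1},x')$. Then $w_{x^{i}}(t,x)=-\hat{u}_{x^{i}}(t,-x^{1},x')$ for $i\ge2$, while $w_{x^{1}}(t,x)=\hat{u}_{x^{1}}(t,-x^{1},x')$; the term with $i=1$, which alone changes sign the wrong way, is absent from the equation since $\sigma^{1k}\equiv0$. Together with the oddness (\ref{eq:odd-ext}) of $g$ and the reflection invariance of $\Delta$, this gives \[ \md w=K\Delta w\,\md t+(\sigma^{ik}w_{x^{i}}+g^{k})\,\md\BM_{t}^{k},\quad w(0,\cdot)=0, \] that is, $w$ solves the same Cauchy problem as $\hat{u}$.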
From the uniqueness, $\hat{u}(t,x)=\hat{u}(t,x^{1},x')$ is odd with respect to $x^{1}$, so $\hat{u}(t,x)=0$ for $x^{1}=0$, which means that $\hat{u}$ restricted to $\R_{+}^{n}$ satisfies (\ref{eq:lap-halp}) with zero initial-boundary condition, and $\hat{u}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$. Also from Theorem 5.1 in \citet{krylov1999analytic}, we have the following estimate \begin{equation} \|\hat{u}\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}\le C\|g\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})},\label{eq:hat-u} \end{equation} where the constant $C$ depends only on $\kappa,K,n,p$ and $T$. Define a stochastic process $\xi_{t}=(0,\xi_{t}^{2},\dots,\xi_{t}^{n})$ with \[ \xi_{t}^{i}=\int_{0}^{t}\sigma^{ik}(s)\,\md\BM_{s}^{k},\quad i=2,\dots,n. \] It is easily seen that for each $x\in\R_{+}^{n}$ the process $x\pm\xi_{t}$ always stays in $\R_{+}^{n}$. Moreover, for any given $\tilde{f}\in\mathbb{L}^{p}(\R_{+}^{n})$, the random translation $\tilde{f}(t,x-\xi_{t})$ as a function of $(t,x,\omega)$ also lies in $\mathbb{L}^{p}(\R_{+}^{n})$, and \[ \|\tilde{f}(\omega)\|_{L^{p}((0,T)\times\R_{+}^{n})}=\|\tilde{f}(\cdot,\cdot-\xi_{\cdot}(\omega),\omega)\|_{L^{p}((0,T)\times\R_{+}^{n})}. \] Consider the following random partial differential equation (PDE) \begin{equation} \begin{aligned}\partial_{t}v & =\Big(a^{ij}-\frac{1}{2}\sigma^{ik}\sigma^{jk}\Big)v_{x^{i}x^{j}}+\tilde{f}(t,x-\xi_{t}),\quad\text{on }(0,T]\times\R_{+}^{n},\\ v|_{t=0} & =0,\quad v|_{x^{1}=0}=0. \end{aligned} \label{eq:randomPDE} \end{equation} Due to (\ref{eq:parabolic}), this PDE is strongly parabolic. Moreover, $\tilde{f}(t,x-\xi_{t}(\omega),\omega)$ as a function of $(t,x)$ belongs to $L^{p}((0,T)\times\R_{+}^{n})$ for almost every $\omega$. So by classical PDE theory (cf. \citet[Theorem~7.32]{lieberman1996second}), problem (\ref{eq:randomPDE}) has a unique strong solution \[ v(\cdot,\cdot,\omega)\in L^{p}((0,T),W_{\circ}^{2,p}(\R_{+}^{n}))\cap C([0,T],L^{p}(\R_{+}^{n})) \] for almost every $\omega$, and $v(\cdot,\cdot,\omega)$ satisfies the estimate \[ \|v(\cdot,\cdot,\omega)\|_{L^{p}((0,T),W^{2,p}(\R_{+}^{n}))}^{p}\le C\|\tilde{f}(\cdot,\cdot,\omega)\|_{L^{p}((0,T)\times\R_{+}^{n})}^{p}, \] where the constant $C$ depends only on $\kappa,K,n,p$ and $T$, but not on $\omega$. Thus, one has $v\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$, and taking expectations in the above estimate yields \begin{equation} \|v\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}^{p}\le C\|\tilde{f}\|_{\mathbb{L}^{p}(\R_{+}^{n})}^{p}.\label{eq:v01} \end{equation} Now applying the It\^o-Wentzell formula obtained in \citet{Krylov2011On} to $\tilde{u}(t,x):=v(t,x+\xi_{t})$, one can check that $\tilde{u}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$, and it solves the problem \begin{equation} \md\tilde{u}=(a^{ij}\tilde{u}_{x^{i}x^{j}}+\tilde{f})\,\md t+\sigma^{ik}\tilde{u}_{x^{i}}\,\md\BM_{t}^{k},\quad\tilde{u}|_{t=0}=0,\quad\tilde{u}|_{x^{1}=0}=0,\label{eq:proof003} \end{equation} and satisfies the estimate \begin{equation} \|\tilde{u}\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}=\|v\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}\le C\|\tilde{f}\|_{\mathbb{L}^{p}(\R_{+}^{n})}.\label{eq:tilde-u} \end{equation} On the other hand, we remark that $\tilde{u}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$ is a solution to (\ref{eq:proof003}) if and only if $v(t,x)=\tilde{u}(t,x-\xi_{t})$ is the solution to (\ref{eq:randomPDE}); as the latter has a unique solution, the solution of (\ref{eq:proof003}) is also unique. Define $u=\hat{u}+\tilde{u}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$.
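For the reader's convenience, here is the formal computation behind (\ref{eq:proof003}). Since $\md\xi_{t}^{i}=\sigma^{ik}(t)\,\md\BM_{t}^{k}$ and $\md\langle\xi^{i},\xi^{j}\rangle_{t}=\sigma^{ik}\sigma^{jk}\,\md t$, the It\^o-Wentzell formula applied to $\tilde{u}(t,x)=v(t,x+\xi_{t})$ gives \begin{align*} \md\tilde{u}(t,x) & =(\partial_{t}v)(t,x+\xi_{t})\,\md t+v_{x^{i}}(t,x+\xi_{t})\,\md\xi_{t}^{i}+\frac{1}{2}v_{x^{i}x^{j}}(t,x+\xi_{t})\,\md\langle\xi^{i},\xi^{j}\rangle_{t}\\ & =\Big[\Big(a^{ij}-\frac{1}{2}\sigma^{ik}\sigma^{jk}\Big)\tilde{u}_{x^{i}x^{j}}+\tilde{f}(t,x)+\frac{1}{2}\sigma^{ik}\sigma^{jk}\tilde{u}_{x^{i}x^{j}}\Big]\,\md t+\sigma^{ik}\tilde{u}_{x^{i}}\,\md\BM_{t}^{k}, \end{align*} where we used equation (\ref{eq:randomPDE}) at the point $x+\xi_{t}$ (the argument of $\tilde{f}$ there becomes $(x+\xi_{t})-\xi_{t}=x$) together with $\tilde{u}_{x^{i}}(t,x)=v_{x^{i}}(t,x+\xi_{t})$; the second-order terms recombine into $a^{ij}\tilde{u}_{x^{i}x^{j}}$, which is exactly (\ref{eq:proof003}).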
It follows from equations (\ref{eq:lap-halp}) and (\ref{eq:proof003}) that \[ \md u=[a^{ij}u_{x^{i}x^{j}}+(K\delta^{ij}-a^{ij})\hat{u}_{x^{i}x^{j}}+\tilde{f}]\,\md t+(\sigma^{ik}u_{x^{i}}+g^{k})\,\md\BM_{t}^{k}, \] where $\delta^{ij}$ is the Kronecker delta. With \begin{equation} \tilde{f}=f-(K\delta^{ij}-a^{ij})\hat{u}_{x^{i}x^{j}},\label{eq:pf005} \end{equation} it is seen that $u$ is a solution to problem (\ref{eq:half-space-1}), so the existence part is proved. Moreover, from estimates (\ref{eq:hat-u}) and (\ref{eq:tilde-u}), the obtained $u$ satisfies \begin{equation} \begin{aligned}\|u\|_{\mathbb{W}^{2,p}(\R_{+}^{n})} & \le\|\hat{u}\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}+\|\tilde{u}\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}\\ & \le C\bigl(\|g\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})}+\|f-(K\delta^{ij}-a^{ij})\hat{u}_{x^{i}x^{j}}\|_{\mathbb{L}^{p}(\R_{+}^{n})}\bigr)\\ & \le C\bigl(\|f\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|g\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})}+\|\hat{u}_{xx}\|_{\mathbb{L}^{p}(\R_{+}^{n})}\bigr)\\ & \le C\bigl(\|f\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|g\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})}\bigr), \end{aligned} \label{eq:pf004} \end{equation} where $C=C(\kappa,K,n,p,T)$. To prove the uniqueness, we let $u^{*}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$ be any solution of (\ref{eq:half-space-1}), and let $\hat{u}$ be the solution of (\ref{eq:lap-halp}) determined by $g$. Then $u^{*}-\hat{u}$ satisfies (\ref{eq:proof003}) with $\tilde{f}$ given by (\ref{eq:pf005}), so by uniqueness (for problem (\ref{eq:proof003})) we have $u^{*}-\hat{u}=\tilde{u}$, which means $u=u^{*}$. The uniqueness part is also proved, and the estimate (\ref{eq:est-half}) follows from (\ref{eq:pf004}) immediately. The proof is complete. \end{proof} \subsection{A priori estimates} In the following result we obtain a priori estimates for linear equations in general domains. We adapt the technique of straightening (the boundary) and partitioning (the domain) from PDE theory (cf. \citet{gilbarg2001elliptic,Krylov2008Lectures}). The new difficulties here are due to the compatibility conditions and to possibly unbounded domains. Recall that $u\in\mathcal{W}_{\circ}^{2,p}(G)$ implies $u(t,\cdot)=0$ on the boundary $\pdG.$ \begin{prop} \label{prop:apriori}Let $G\in\mathcal{C}^{2}$ and Assumptions \ref{assu:compatib} and \ref{assu:continuity} be satisfied. Suppose that $u\in\mathcal{W}_{\circ}^{2,p}(G)$ with $u(0,\cdot)=0$ satisfies the equation \begin{equation} \md u=(a^{ij}u_{x^{i}x^{j}}+f)\,\md t+(\sigma^{ik}u_{x^{i}}+g^{k})\,\md\BM_{t}^{k}\label{eq:general-dom} \end{equation} for some $f\in\mathbb{L}^{p}(G)$ and $g\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$. Then we have \begin{equation} \|u\|_{\mathbb{W}^{2,p}(G)}\le C\bigl(\|f\|_{\mathbb{L}^{p}(G)}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}\bigr),\label{eq:apriori} \end{equation} where the constant $C$ depends only on $\kappa,K,n,p,T,K_{0},\rho_{0},L,$ and the function $\cm(\cdot)$. \end{prop} \begin{proof} Fix a $z\in\pdG$ and take the objects associated with $z$ from Definition \ref{def:domain}. For a function $h$ defined in $B_{\rho_{0}}(z)\cap G$, we introduce \[ \tilde{h}(y)=h\circ\psi^{-1}(y)=h(\psi^{-1}(y))\quad\forall\,y\in U_{+}^{z}. \] Obviously, $h(x)=\tilde{h}\circ\psi(x)$. In what follows, we keep the relation \[ y=\psi(x)\quad\text{for }x\in B_{\rho_{0}}(z)\cap G, \] which implies that $h(x)=\tilde{h}(y)$. For the sake of convenience, in this subsection we write $h_{i}=\partial_{i}h$ for the partial derivative of a function $h$ with respect to the $i$-th spatial variable.
Then for $h\in W^{2,p}(B_{\rho_{0}}(z)\cap G)$ we have \[ \begin{aligned}h_{i}(x) & =\psi_{i}^{r}(x)\tilde{h}_{r}(y),\\ h_{ij}(x) & =\psi_{i}^{r}(x)\psi_{j}^{s}(x)\tilde{h}_{rs}(y)+\psi_{ij}^{r}(x)\tilde{h}_{r}(y). \end{aligned} \] The following result is taken from Lemma 8.3.4 in \citet{Krylov2008Lectures}. \begin{lem} \label{lem:equivalent}$h\in W^{k,p}(B_{\rho_{0}}(z)\cap G)$ if and only if $\tilde{h}\in W^{k,p}(U_{+}^{z})$ for $k=0,1,2$. Moreover, \[ C^{-1}\|h\|_{W^{k,p}(B_{\rho_{0}}(z)\cap G)}\le\|\tilde{h}\|_{W^{k,p}(U_{+}^{z})}\le C\|h\|_{W^{k,p}(B_{\rho_{0}}(z)\cap G)} \] with $C=C(n,p,K_{0})$. \end{lem} Let $\eta\in C_{0}^{\infty}(\R^{n})$ be such that $\eta(x)=1$ for $|x|\le\rho_{0}/2$ and $\eta(x)=0$ for $|x|\ge3\rho_{0}/4$. With $y=\psi(x)$ (only for $x\in B_{\rho_{0}}(z)\cap G$) we define \[ \begin{aligned}\eta^{z}(x) & =\eta(x-z),\quad\tilde{\eta}^{z}(y)=\eta^{z}(x),\\ \tilde{a}^{rs}(t,y) & =a^{ij}(t,x)\psi_{i}^{r}(x)\psi_{j}^{s}(x)\tilde{\eta}^{z}(y)+K\delta^{rs}[1-\tilde{\eta}^{z}(y)],\\ \tilde{\sigma}^{rk}(t,y) & =\sigma^{ik}(t,x)\psi_{i}^{r}(x)\tilde{\eta}^{z}(y). \end{aligned} \] Formally speaking, $\tilde{a}^{rs}(t,y)$ and $\tilde{\sigma}^{rk}(t,y)$ are not defined for $y\notin\bar{U}^{z}$, but we may set $\tilde{\eta}^{z}(y)=0$ for those $y$ and the corresponding terms to zero; then $\tilde{a}^{rs}(t,y)$ and $\tilde{\sigma}^{rk}(t,y)$ are well defined for all $y\in\R_{+}^{n}$. From Lemma 8.3.6 in \citet{Krylov2008Lectures}, we have \begin{lem} \emph{\label{lem:tilde-a-sigma}(i)} For any $y,y_{1},y_{2}\in\R_{+}^{n}$ and $(t,\omega)\in[0,T]\times\PS$, \[ \begin{aligned}|\tilde{a}^{rs}(t,y)|+\|\tilde{\sigma}^{r\cdot}(t,y)\|_{\ell^{2}} & \le\tilde{K}(n,K,K_{0}),\\ |\tilde{a}^{rs}(t,y_{1})-\tilde{a}^{rs}(t,y_{2})| & \le\tilde{\cm}(|y_{1}-y_{2}|),\\ \|\tilde{\sigma}^{r\cdot}(t,y_{1})-\tilde{\sigma}^{r\cdot}(t,y_{2})\|_{\ell^{2}} & \le\tilde{L}\,|y_{1}-y_{2}|,\quad\tilde{L}=\tilde{L}(n,K,K_{0},L), \end{aligned} \] where $\tilde{\cm}(\cdot)$ is a modulus of continuity determined only by $\cm(\cdot),n,K$ and $K_{0}$. \emph{(ii)} There is a constant $\tilde{\kappa}=\tilde{\kappa}(n,\kappa,K_{0})>0$ such that \[ (2\tilde{a}^{rs}(t,y)-\tilde{\sigma}^{rk}(t,y)\tilde{\sigma}^{sk}(t,y))\xi^{r}\xi^{s}\ge\tilde{\kappa}|\xi|^{2} \] for all $\xi=(\xi^{1},\dots,\xi^{n})\in\R^{n}$ and all $(t,y)\in[0,T]\times\R_{+}^{n}$. \end{lem} Let $\rho\in(0,\rho_{0}\wedge1]$ be a constant to be specified later, and take a nonnegative function $\zeta\in C_{0}^{\infty}(\R^{n})$ such that $\zeta(x)=1$ for $|x|\le\rho/4$ and $\zeta(x)=0$ for $|x|\ge\rho/2$. Set \begin{equation} \zeta^{z}(x)=\zeta(x-z)\quad\text{and}\quad\tilde{\zeta}^{z}(y)=\zeta^{z}(x)=\zeta^{z}(\psi^{-1}(y)).\label{eq:pf020} \end{equation} It is easily checked that $\rho|\zeta_{x}|+\rho^{2}|\zeta_{xx}|\le C(n)$. Let $u\in\mathcal{W}_{\circ}^{2,p}(G)$ be a solution to Eq. (\ref{eq:general-dom}) with $u(0,\cdot)=0$. Define \begin{equation} \tilde{u}^{z}(t,y)=\Big\{\begin{array}{ll} \tilde{\zeta}^{z}(y)u(t,x) & \text{for }y\in\bar{U}_{+}^{z},\\ 0 & \text{elsewhere}.
\end{array}\label{eq:pr018} \end{equation} A direct computation gives that the function $\tilde{u}^{z}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$, whose support lies in $\psi(B_{\rho/2}(z))\cap\bar{U}_{+}^{z}$, satisfies the following equation \begin{equation} \begin{aligned}\md\tilde{u}^{z}(t,y) & =(\tilde{a}^{rs}(t,0)\tilde{u}_{rs}^{z}(t,y)+\hat{f}^{z}(t,y))\,\md t+(\tilde{\sigma}^{rk}(t,0)\tilde{u}_{r}^{z}(t,y)+\hat{g}^{z,k}(t,y))\,\md\BM_{t}^{k},\\ \tilde{u}^{z}|_{t=0} & =0,\quad\tilde{u}^{z}|_{y^{1}=0}=0, \end{aligned} \label{eq:tilde-uz} \end{equation} where \begin{align} \hat{f}^{z}(t,y) & =[\tilde{a}^{rs}(t,y)-\tilde{a}^{rs}(t,0)]\tilde{u}_{rs}^{z}(t,y)+\tilde{\zeta}^{z}(y)f(t,x)-\tilde{a}^{rs}(t,y)\tilde{\zeta}_{rs}^{z}(y)\tilde{u}(t,y)\nonumber \\ & \quad-2\tilde{a}^{rs}(t,y)\tilde{\zeta}_{s}^{z}(y)\tilde{u}_{r}(t,y)+a^{ij}(t,x)\psi_{ij}^{r}(x)\tilde{\zeta}^{z}(y)\tilde{u}_{r}(t,y),\nonumber \\ \hat{g}^{z,k}(t,y) & =[\tilde{\sigma}^{rk}(t,y)-\tilde{\sigma}^{rk}(t,0)]\tilde{u}_{r}^{z}(t,y)+\tilde{\zeta}^{z}(y)g^{k}(t,x)-\tilde{\sigma}^{rk}(t,y)\tilde{\zeta}_{r}^{z}(y)\tilde{u}(t,y).\label{eq:pr010} \end{align} To apply Proposition \ref{prop:half-constant} to Eq. (\ref{eq:tilde-uz}), we need to verify the following conditions: \begin{align} & \tilde{\sigma}^{1\cdot}(t,0,y')=0\ \ \forall\,y'\in\R^{n-1},\label{eq:pf-sigma}\\ & \hat{f}^{z}\in\mathbb{L}^{p}(\R_{+}^{n})\ \text{ and }\ \hat{g}^{z}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2}).\label{eq:pf-fg} \end{align} To check (\ref{eq:pf-sigma}), we notice that, from Definition \ref{def:domain}, the equation of the surface $B_{\rho_{0}}(z)\cap\pdG$ is $\psi^{1}(x)=0$, so $\partial\psi^{1}(x)$ is a normal vector of $\partial G$ at $x\in B_{\rho_{0}}(z)\cap\pdG$. Thanks to Assumption \ref{assu:compatib}, one has that for $x\in B_{\rho_{0}}(z)\cap\pdG$, \begin{equation} 0=\partial\psi^{1}(x)\cdot\sigma^{\cdot k}(t,x)=\sigma^{rk}(t,x)\partial_{r}\psi^{1}(x)=\tilde{\sigma}^{1k}(t,\psi(x)).\label{eq:pf011} \end{equation} Also notice that $\tilde{\sigma}^{1k}(t,\cdot)=0$ outside $\bar{U}_{+}^{z}$. So (\ref{eq:pf-sigma}) is valid.
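In particular, if the boundary is already flat near $z$, i.e. $\psi$ is (a translation of) the identity map and $B_{\rho_{0}}(z)\cap G\subset\R_{+}^{n}$, then $\partial\psi^{1}=(1,0,\dots,0)$ is a unit normal of $\pdG$ and (\ref{eq:pf011}) reduces to \[ \sigma^{1k}(t,x)=0\quad\text{for }x\in B_{\rho_{0}}(z)\cap\pdG, \] which, after freezing the coefficients at $y=0$, is precisely the hypothesis $\sigma^{1\cdot}\equiv0$ of Proposition \ref{prop:half-constant}.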
To check (\ref{eq:pf-fg}), one can use Lemmas \ref{lem:equivalent} and \ref{lem:tilde-a-sigma} to obtain that $\hat{f}^{z}\in\mathbb{L}^{p}(\R_{+}^{n})$, $\hat{g}^{z}\in\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})$, and \[ \begin{aligned}\|\hat{f}^{z}\|_{\mathbb{L}^{p}(\R_{+}^{n})} & \le C\tilde{\cm}(\rho)\|\tilde{u}_{yy}^{z}\|_{\mathbb{L}^{p}(\R_{+}^{n})}+C\big\{\|\tilde{\zeta}^{z}\tilde{f}\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|(\tilde{\zeta}_{yy}^{z}\tilde{u},\tilde{\zeta}_{y}^{z}\tilde{u}_{y},\tilde{\zeta}^{z}\tilde{u}_{y})\|_{\mathbb{L}^{p}(\R_{+}^{n})}\big\}\\ & \le C\tilde{\cm}(\rho)\|(\zeta^{z}u)_{xx}\|_{\mathbb{L}^{p}(G)}+C\big\{\|\zeta^{z}f\|_{\mathbb{L}^{p}(G)}+\|(\zeta_{xx}^{z}u,\zeta_{x}^{z}u,\zeta^{z}u,\zeta_{x}^{z}u_{x},\zeta^{z}u_{x})\|_{\mathbb{L}^{p}(G)}\big\}\\ & \le C\tilde{\cm}(\rho)\|u_{xx}\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+C\big\{\|f\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+\rho^{-2}\|u\|_{\mathbb{W}^{1,p}(B_{\rho/2}(z)\cap G)}\big\},\\ \|\hat{g}^{z}\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})} & \le C\tilde{L}\rho\|\tilde{u}_{yy}^{z}\|_{\mathbb{L}^{p}(\R_{+}^{n})}+C\big\{\|\tilde{\zeta}^{z}\tilde{g}\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})}+\|\tilde{\zeta}_{y}^{z}\tilde{u}\|_{\mathbb{W}^{1,p}(\R_{+}^{n})}\big\}\\ & \le C\tilde{L}\rho\|u_{xx}\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+C\big\{\|g\|_{\mathbb{W}^{1,p}(B_{\rho/2}(z)\cap G;\ell^{2})}+\rho^{-2}\|u\|_{\mathbb{W}^{1,p}(B_{\rho/2}(z)\cap G)}\big\} \end{aligned} \] with $C=C(K,n,p,K_{0},\rho_{0},L)$ independent of $\rho$, where $\tilde{\cm}(\cdot)$ and $\tilde{L}$ are taken from Lemma \ref{lem:tilde-a-sigma}. It remains to check $\hat{g}^{z}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$. This immediately follows from some basic facts of real analysis: \begin{lem} Let $h$ and $\varphi$ be functions defined on $\R_{+}^{n}$. Then we have \begin{enumerate} \item[\emph{(a)}] if $h\in W_{\circ}^{1,p}(\R_{+}^{n})$ and $\varphi\in C^{0,1}(\bar{\R}_{+}^{n})$, then $\varphi h\in W_{\circ}^{1,p}(\R_{+}^{n})$; \item[\emph{(b)}] if $h\in W^{1,p}(\R_{+}^{n})$ and $\varphi\in C_{0}^{0,1}(\bar{\R}_{+}^{n})$, then $\varphi h\in W_{\circ}^{1,p}(\R_{+}^{n})$; \item[\emph{(c)}] if $h\in W_{\circ}^{2,p}(\R_{+}^{n})$, then $h_{x^{i}}\in W_{\circ}^{1,p}(\R_{+}^{n})$ for $i=2,\dots,n$, \end{enumerate} where $C^{0,1}(\bar{\R}_{+}^{n})$ is the space of all uniformly Lipschitz continuous functions defined on $\bar{\R}_{+}^{n}$, and its subset $C_{0}^{0,1}(\bar{\R}_{+}^{n})$ collects those functions that vanish on the boundary $\{x^{1}=0\}$. \end{lem} Now we use this lemma to verify $\hat{g}^{z}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$. By the assertion (a), it is easily seen that the last two terms in the expression (\ref{eq:pr010}) of $\hat{g}^{z}$ belong to $\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$. Assumption \ref{assu:continuity} and the condition $\tilde{\sigma}^{1\cdot}(t,0,y')=0$ checked above imply that $\tilde{\sigma}^{1\cdot}(t,\cdot)\in C_{0}^{0,1}(\bar{\R}_{+}^{n};\ell^{2})$ uniformly with respect to $(t,\omega)$, which along with $\tilde{u}_{y^{1}}^{z}\in\mathbb{W}^{1,p}(\R_{+}^{n})$ yields $[\tilde{\sigma}^{1\cdot}-\tilde{\sigma}^{1\cdot}(\cdot,0)]\tilde{u}_{1}^{z}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$ by means of the assertion (b).
Moreover, because $\tilde{u}^{z}\in\mathbb{W}_{\circ}^{2,p}(\R_{+}^{n})$ and $\tilde{\sigma}^{i\cdot}(t,\cdot)\in C^{0,1}(\bar{\R}_{+}^{n};\ell^{2})$, it follows from the assertion (c) that $[\tilde{\sigma}^{i\cdot}-\tilde{\sigma}^{i\cdot}(\cdot,0)]\tilde{u}_{i}^{z}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$ for $i=2,\dots,n$. Therefore, we have $\hat{g}^{z}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$. The facts (\ref{eq:pf-sigma}) and (\ref{eq:pf-fg}) along with Lemma \ref{lem:tilde-a-sigma} allow us to apply Proposition \ref{prop:half-constant} to Eq. (\ref{eq:tilde-uz}) to get the estimate \[ \begin{aligned}\|u\|_{\mathbb{W}^{2,p}(B_{\rho/4}(z)\cap G)} & \le\|\zeta^{z}u\|_{\mathbb{W}^{2,p}(B_{\rho/2}(z)\cap G)}\le C\|\tilde{u}^{z}\|_{\mathbb{W}^{2,p}(\R_{+}^{n})}\\ & \le C\bigl(\|\hat{f}^{z}\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|\hat{g}^{z}\|_{\mathbb{W}^{1,p}(\R_{+}^{n};\ell^{2})}\bigr)\\ & \le C(\tilde{\cm}(\rho)+\tilde{L}\rho)\|u_{xx}\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+C\rho^{-2}\|u\|_{\mathbb{W}^{1,p}(B_{\rho/2}(z)\cap G)}\\ & \quad+C\big\{\|f\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+\|g\|_{\mathbb{W}^{1,p}(B_{\rho/2}(z)\cap G;\ell^{2})}\big\}, \end{aligned} \] where $C=C(\kappa,K,n,p,T,K_{0},\rho_{0},L)$. By interpolation, we have \begin{align*} \|u_{x}\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)} & \le C(n)\|u_{xx}\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}^{1/2}\|u\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}^{1/2}\\ & \le\rho^{3}\|u_{xx}\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+C(n)\rho^{-3}\|u\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}. \end{align*} Combining the last two inequalities, we obtain \begin{equation} \begin{aligned}\|u\|_{\mathbb{W}^{2,p}(B_{\rho/4}(z)\cap G)} & \le C(\tilde{\cm}(\rho)+\tilde{L}\rho+\rho)\|u_{xx}\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+C\rho^{-5}\|u\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}\\ & \quad+C\big\{\|f\|_{\mathbb{L}^{p}(B_{\rho/2}(z)\cap G)}+\|g\|_{\mathbb{W}^{1,p}(B_{\rho/2}(z)\cap G;\ell^{2})}\big\}. \end{aligned} \label{eq:pf006} \end{equation} Now we define the narrow region near the boundary $\partial G$: \[ G_{r}=\{x\in G:\text{there is an }\bar{x}\in\pdG\text{ such that }|x-\bar{x}|<r\}. \] \begin{lem} \label{lem:cover}There exist countably many points $z_{1},z_{2},\dots\in\pdG$ satisfying the following properties: \begin{enumerate} \item $|z_{i}-z_{j}|\ge\rho/8$ for $i\neq j$, and the whole $\pdG$ is covered by $\cup_{i}B_{\rho/8}(z_{i})$; \item any $x\in G_{\rho/8}$ lies in at least one $B_{\rho/4}(z_{i})$; \item any $x\in G_{\rho/2}$ is covered by at most $N(n)$ balls from $\{B_{\rho/2}(z_{i})\}$, where $N(n)$ is the maximal number of points in $B_{1}$ any two of which are more than $1/4$ apart. \end{enumerate} \end{lem} We postpone the proof of this lemma to the end of this subsection and move on with the proof of Proposition \ref{prop:apriori}. From the second property in this lemma it follows that \begin{align*} \|u\|_{\mathbb{W}^{2,p}(G_{\rho/8})}^{p} & \le\sum_{i}\|u\|_{\mathbb{W}^{2,p}(B_{\rho/4}(z_{i})\cap G)}^{p}, \end{align*} while the third property gives (noting $B_{\rho/2}(z_{i})\cap G\subset G_{\rho/2}$), for any $h\in\mathbb{L}^{p}(G)$, \begin{align*} \sum_{i}\|h\|_{\mathbb{L}^{p}(B_{\rho/2}(z_{i})\cap G)}^{p} & \le N(n)\|h\|_{\mathbb{L}^{p}(G_{\rho/2})}^{p}\le N(n)\|h\|_{\mathbb{L}^{p}(G)}^{p}. \end{align*} Raising (\ref{eq:pf006}) to the $p$-th power and summing over $i$, these two inequalities yield \begin{equation} \|u\|_{\mathbb{W}^{2,p}(G_{\rho/8})}\le C(\tilde{\cm}(\rho)+\tilde{L}\rho+\rho)\|u_{xx}\|_{\mathbb{L}^{p}(G)}+C\big\{\rho^{-5}\|u\|_{\mathbb{L}^{p}(G)}+\|f\|_{\mathbb{L}^{p}(G)}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}\big\}\label{eq:pf008} \end{equation} with $C=C(\kappa,K,n,p,T,K_{0},\rho_{0},L)$.
To obtain the estimate in $G^{\rho/8}:=G\backslash G_{\rho/8}$, we take $\bar{\zeta}(x)=c\,\zeta(8x)$ with the constant $c>0$ chosen so that $\int\bar{\zeta}=1$, and define the cut-off function $\zeta_{0}=\bar{\zeta}*\bm{1}_{G^{\rho/16}}$; then $\zeta_{0}=1$ on $G^{\rho/8}$, $\mathrm{supp}\,\zeta_{0}\subset\bar{G}$, and $\rho|(\zeta_{0})_{x}|+\rho^{2}|(\zeta_{0})_{xx}|\le C(n)$. For a solution $u\in\mathcal{W}_{\circ}^{2,p}(G)$ of Eq. (\ref{eq:general-dom}), the function $u^{0}=\zeta_{0}u\in\mathcal{W}_{\circ}^{2,p}(\R^{n})$, whose support lies in $\bar{G}$, satisfies the following equation \[ \md u^{0}=(a^{ij}u_{x^{i}x^{j}}^{0}+f^{0})\,\md t+(\sigma^{ik}u_{x^{i}}^{0}+g^{0,k})\,\md\BM_{t}^{k},\quad u^{0}(0,\cdot)=0, \] on $(0,T]\times\R^{n}$, where \[ f^{0}=\zeta_{0}f-a^{ij}(\zeta_{0})_{x^{i}x^{j}}u-2a^{ij}(\zeta_{0})_{x^{i}}u_{x^{j}},\quad g^{0,k}=\zeta_{0}g^{k}-\sigma^{ik}(\zeta_{0})_{x^{i}}u. \] Thanks to the $L^{p}$-theory of SPDEs in the whole space (cf. Theorem 5.1 in \citet{krylov1999analytic}), we have the estimate \begin{equation} \begin{aligned}\|u\|_{\mathbb{W}^{2,p}(G^{\rho/8})} & \le\|u^{0}\|_{\mathbb{W}^{2,p}(\R^{n})}\le C\big(\|f^{0}\|_{\mathbb{L}^{p}(\R^{n})}+\|g^{0}\|_{\mathbb{W}^{1,p}(\R^{n};\ell^{2})}\big)\\ & \le C\big(\rho^{-2}\|u\|_{\mathbb{W}^{1,p}(G)}+\|f\|_{\mathbb{L}^{p}(G)}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}\big)\\ & \le C\rho\|u_{xx}\|_{\mathbb{L}^{p}(G)}+C\big(\rho^{-5}\|u\|_{\mathbb{L}^{p}(G)}+\|f\|_{\mathbb{L}^{p}(G)}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}\big), \end{aligned} \label{eq:pf007} \end{equation} where $C=C(\kappa,K,n,p,T,L,\cm)$. Combining the estimates (\ref{eq:pf008}) and (\ref{eq:pf007}), we can choose a small number $\rho=\rho(\kappa,K,n,p,T,K_{0},\rho_{0},L,\cm)\in(0,\rho_{0}\wedge1]$ such that \[ \|u\|_{\mathbb{W}^{2,p}(G)}\le\frac{1}{2}\|u_{xx}\|_{\mathbb{L}^{p}(G)}+C\big(\|u\|_{\mathbb{L}^{p}(G)}+\|f\|_{\mathbb{L}^{p}(G)}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}\big), \] which yields \begin{equation} \|u\|_{\mathbb{W}^{2,p}(G)}\le C\big(\|u\|_{\mathbb{L}^{p}(G)}+\|f\|_{\mathbb{L}^{p}(G)}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}\big).\label{eq:pf009} \end{equation} It remains to estimate $\|u\|_{\mathbb{L}^{p}(G)}$. Applying It\^o's formula to $\me^{-\lambda t}|u(t,x)|^{p}$ and integrating over $G\times[0,T]\times\PS$, we have \begin{equation} \begin{aligned} & \me^{-\lambda T}\E\|u(T,\cdot)\|_{L^{p}(G)}^{p}+\lambda\E\int_{0}^{T}\me^{-\lambda t}\|u(t,\cdot)\|_{L^{p}(G)}^{p}\,\md t\\ & =p\,\E\int_{0}^{T}\!\!\!\int_{G}\me^{-\lambda t}|u(t,x)|^{p-2}u(t,x)[a^{ij}(t,x)u_{x^{i}x^{j}}(t,x)+f(t,x)]\,\md x\md t\\ & \quad+\frac{1}{2}p(p-1)\E\int_{0}^{T}\!\!\!\int_{G}\me^{-\lambda t}|u(t,x)|^{p-2}\|\sigma^{i\cdot}(t,x)u_{x^{i}}(t,x)+g(t,x)\|_{\ell^{2}}^{2}\,\md x\md t\\ & \le\eps\,\E\int_{0}^{T}\|u_{xx}(t,\cdot)\|_{L^{p}(G)}^{p}\,\md t+C(\eps,p,K,T)\E\int_{0}^{T}\me^{-\lambda t}\|u(t,\cdot)\|_{L^{p}(G)}^{p}\,\md t\\ & \quad+C(p,T)\big(\|f\|_{\mathbb{L}^{p}(G)}^{p}+\|g\|_{\mathbb{L}^{p}(G;\ell^{2})}^{p}\big). \end{aligned} \label{eq:pf016} \end{equation} Letting $\lambda=1+C(\eps,p,K,T)$, one can get that \begin{equation} \|u\|_{\mathbb{L}^{p}(G)}^{p}\le\eps C(p,K,T)\|u_{xx}\|_{\mathbb{L}^{p}(G)}^{p}+C(\eps,p,K,T)\big(\|f\|_{\mathbb{L}^{p}(G)}^{p}+\|g\|_{\mathbb{L}^{p}(G;\ell^{2})}^{p}\big).\label{eq:pf017} \end{equation} Selecting $\eps>0$ sufficiently small, the above estimate along with (\ref{eq:pf009}) yields the desired estimate (\ref{eq:apriori}), so the proof of Proposition \ref{prop:apriori} is complete. \end{proof} With a non-homogeneous initial condition, we have the following result. \begin{cor} \label{cor:linear}Let $G\in\mathcal{C}^{2}$ and Assumptions \ref{assu:compatib} and \ref{assu:continuity} be satisfied.
Suppose that for any $f\in\mathbb{L}^{p}(G)$ and $g\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$ there exists a unique solution in $\mathcal{W}_{\circ}^{2,p}(G)$ to Eq. (\ref{eq:general-dom}) with zero initial-boundary condition. Then for any given $f\in\mathbb{L}^{p}(G)$, $g\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$, and \[ \begin{gathered}u_{0}(\cdot)\in L^{p}(\PS,\Filt_{0},W_{\circ}^{2-2/p,p}(G)),\end{gathered} \] Eq. (\ref{eq:general-dom}) with the initial-boundary condition (\ref{eq:bdcondition}) also admits a unique solution $u\in\mathcal{W}_{\circ}^{2,p}(G)$, and this solution satisfies \begin{equation} \|u\|_{\mathbb{W}^{2,p}(G)}^{p}\le C\bigl(\|f\|_{\mathbb{L}^{p}(G)}^{p}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}^{p}+\E\|u(0,\cdot)\|_{W^{2-2/p,p}(G)}^{p}\bigr),\label{eq:apriori-2} \end{equation} where the constant $C$ depends only on $\kappa,K,n,p,T,K_{0},\rho_{0},L,$ and the function $\cm(\cdot)$. \end{cor} \begin{proof} From Theorem IV.9.1 in \citet{ladyzhenskaya1988linear}, the heat equation \[ \partial_{t}V=\Delta V\ \text{ on }(0,T]\times G;\quad V(t,\cdot)|_{\partial G}=0;\quad V(0,\cdot)=u(0,\cdot)\ \text{ on }G \] has a unique strong solution $V(\cdot,\cdot,\omega)\in L^{p}((0,T),W_{\circ}^{2,p}(G))$ for each $\omega$, and \begin{equation} \|V\|_{\mathbb{W}^{2,p}(G)}^{p}\le C(n,p,K_{0},\rho_{0},T)\,\E\|u(0,\cdot)\|_{W^{2-2/p,p}(G)}^{p}.\label{eq:pf012} \end{equation} On the other hand, from the assumptions the following equation \begin{equation} \begin{aligned}\md U & =[a^{ij}U_{x^{i}x^{j}}+f+(a^{ij}-\delta^{ij})V_{x^{i}x^{j}}]\,\md t+(\sigma^{ik}U_{x^{i}}+g^{k}+\sigma^{ik}V_{x^{i}})\,\md\BM_{t}^{k},\\ U|_{\partial G} & =U(0,\cdot)=0 \end{aligned} \label{eq:pf013} \end{equation} has a unique solution $U\in\mathcal{W}_{\circ}^{2,p}(G)$, and from Proposition \ref{prop:apriori} we have \begin{align*} \|U\|_{\mathbb{W}^{2,p}(G)}^{p} & \le C(\|f+(a^{ij}-\delta^{ij})V_{x^{i}x^{j}}\|_{\mathbb{L}^{p}(G)}^{p}+\|g+\sigma^{i\cdot}V_{x^{i}}\|_{\mathbb{W}^{1,p}(G;\ell^{2})}^{p})\\ & \le C\bigl(\|f\|_{\mathbb{L}^{p}(G)}^{p}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}^{p}+\|V\|_{\mathbb{W}^{2,p}(G)}^{p}\bigr)\\ & \le C\bigl(\|f\|_{\mathbb{L}^{p}(G)}^{p}+\|g\|_{\mathbb{W}^{1,p}(G;\ell^{2})}^{p}+\E\|u(0,\cdot)\|_{W^{2-2/p,p}(G)}^{p}\bigr). \end{align*} Obviously, the function $u=U+V\in\mathcal{W}_{\circ}^{2,p}(G)$ solves Eq. (\ref{eq:general-dom}) with condition (\ref{eq:bdcondition}), and (\ref{eq:apriori-2}) immediately follows from the above estimates for $U$ and $V$. The uniqueness also holds true; otherwise, we could construct different solutions of (\ref{eq:pf013}) from different solutions of Eq. (\ref{eq:general-dom}) with (\ref{eq:bdcondition}) (with the help of $V$), which contradicts the assumptions. The proof is complete. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:cover}] For convenience, we say $\{z_{1},z_{2},\dots\}$ is a proper $\rho/8$-covering set of $E\subset\R^{n}$ if $z_{i}\in E$, $|z_{i}-z_{j}|\ge\rho/8$ for $i\neq j$, and $E$ is covered by $\cup_{i}B_{\rho/8}(z_{i})$. If $G$ is bounded, then $\partial G$ is a compact subset of $\R^{n}$, so there are finitely many points $\{z_{1},\dots,z_{N}\}\subset\partial G$ such that $\partial G\subset\cup_{i}B_{\rho/8}(z_{i})$, but it is not necessarily true that $|z_{i}-z_{j}|\ge\rho/8$ for all $i\neq j$.
Now we adjust the choice of the points $z_{i}$ as follows: in the $i$-th step, we check whether $B_{\rho/8}(z_{i})\subset\cup_{j\neq i}B_{\rho/8}(z_{j})$: if yes, then we remove this $z_{i}$ from the set; if no, then $E_{i}:=\partial G\,\backslash\!\cup_{j\neq i}B_{\rho/8}(z_{j})$ is nonempty and covered by $B_{\rho/8}(z_{i})$, so we can pick one point $z'_{i}\in E_{i}$, or two points $z_{i}',z_{i}''\in E_{i}$ with $|z_{i}'-z''_{i}|\ge\rho/8$, such that $E_{i}$ is covered by $B_{\rho/8}(z_{i}')$ or by $B_{\rho/8}(z_{i}')\cup B_{\rho/8}(z_{i}'')$, and we replace $z_{i}$ by $z_{i}'$ or by the pair $(z_{i}',z_{i}'')$. After $N$ steps one obtains a finite proper $\rho/8$-covering set of $\partial G$. If $G$ is unbounded, we fix a large number $R>0$ and denote $\varGamma_{k}=\partial G\cap B_{kR}(0)$. Repeating the argument as above, one can inductively find a sequence of finite sets $A_{1},A_{2},\dots$ such that $A_{1}$ is a finite proper $\rho/8$-covering set of $\varGamma_{1}$, and $A_{k}$ with $k\ge2$ is a finite proper $\rho/8$-covering set of $\varGamma_{k}\backslash D_{k-1}$, where $D_{k-1}=\cup\{B_{\rho/8}(z):z\in\cup_{i=1}^{k-1}A_{i}\}$. It is easily seen that $A:=\cup_{i=1}^{\infty}A_{i}$ is a proper $\rho/8$-covering set of $\partial G$. Next we prove that the set $A$ has the second property. For $x\in G_{\rho/8}$ there is an $\bar{x}\in\partial G$ such that $|x-\bar{x}|<\rho/8$. Meanwhile, there is a point $z\in A$ such that $\bar{x}\in B_{\rho/8}(z)$. Hence, $|x-z|\le|x-\bar{x}|+|\bar{x}-z|<\rho/4$, which means $x\in B_{\rho/4}(z)$. Finally, for $x\in G_{\rho/2}$ the ball $B_{\rho/2}(x)$ contains at most $N(n)$ points from the set $A$ according to the definition of $N(n)$, which implies the last property. The proof is complete. \end{proof} \subsection{Existence and uniqueness} We start from the solvability of stochastic heat equations. In view of Corollary \ref{cor:linear}, we can just focus on the homogeneous Dirichlet boundary value problem. \begin{lem} Let $G\in\mathcal{C}^{2}$. Then for given $f\in\mathbb{L}^{p}(G)$ and $g\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$, the equation \begin{equation} \md u=(\Delta u+f)\,\md t+g^{k}\,\md\BM_{t}^{k}\quad\text{on }(0,T]\times G\label{eq:lap-dom} \end{equation} with zero initial-boundary condition has a unique solution $u\in\mathcal{W}_{\circ}^{2,p}(G)$. \end{lem} \begin{proof} The uniqueness follows from the estimate (\ref{eq:apriori}). For the existence we adopt an approximation strategy from the proof of Theorem 2.9 in \citet{kim2004stochastic}. It is well known that $C_{0}^{\infty}(G)$ is a dense subset of $W_{\circ}^{1,p}(G)$. We can approximate $g=(g^{1},g^{2},\dots)\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$ with functions having only finitely many nonzero entries, bounded on $[0,T]\times G\times\PS$ along with each of their spatial derivatives, and vanishing near $\pd G$ and at infinity (cf. Theorem 3.17 in \citet{adams2003sobolev}). In this case it is known that \[ V(t,x)=\int_{0}^{t}g^{k}(s,x)\,\md\BM_{s}^{k} \] is infinitely differentiable in $x$ and vanishes near $\pd G$ and at infinity. So we conclude that $V\in\mathcal{W}_{\circ}^{2,p}(G)$. Again, from PDE theory, the equation \[ \partial_{t}U=\Delta U+f+\Delta V,\quad U|_{\pd G}=0,\quad U(0,\cdot)=0 \] has a solution $U$ in $\mathcal{W}_{\circ}^{2,p}(G)$. The solution of (\ref{eq:lap-dom}) is then given by $u=U+V\in\mathcal{W}_{\circ}^{2,p}(G)$. The case of general $g$ is obtained by approximation with the help of the estimate (\ref{eq:apriori}). The proof is complete.
\end{proof} With the solvability of the stochastic heat equation (\ref{eq:lap-dom}) and the a priori estimate (\ref{eq:apriori}) in hand, the existence and uniqueness of solutions to the general linear equation (\ref{eq:general-dom}) immediately follows from the standard method of continuity (cf. \citet[Theorem 5.2]{gilbarg2001elliptic}). Bearing in mind Corollary \ref{cor:linear}, we have the following result. \begin{cor} \label{cor:nohomog}Let $G\in\mathcal{C}^{2}$ and Assumptions \ref{assu:compatib} and \ref{assu:continuity} be satisfied. Then for any given $f\in\mathbb{L}^{p}(G)$, $g\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2})$ and $u_{0}(\cdot)\in L^{p}(\PS,\Filt_{0},W_{\circ}^{2-2/p,p}(G))$, Eq. (\ref{eq:general-dom}) with the initial-boundary condition (\ref{eq:bdcondition}) has a unique solution $u\in\mathcal{W}_{\circ}^{2,p}(G)$. \end{cor} \begin{proof}[Proof of Theorem \ref{thm:main} (i) and (ii)] The argument is similar to the proof of Theorem 6.4 in \citet{krylov1999analytic}. From Assumption \ref{assu:fg}, we know that for any $v\in\mathbb{W}_{\circ}^{2,p}(G)$, \[ f(\cdot,\cdot,v)\in\mathbb{L}^{p}(G),\quad g(\cdot,\cdot,v)\in\mathbb{W}_{\circ}^{1,p}(G;\ell^{2}). \] So by Corollary \ref{cor:nohomog}, the equation \[ \md u=(a^{ij}u_{x^{i}x^{j}}+f(t,x,v))\,\md t+(\sigma^{ik}u_{x^{i}}+g^{k}(t,x,v))\,\md\BM_{t}^{k} \] with condition (\ref{eq:bdcondition}) has a unique solution $u\in\mathcal{W}_{\circ}^{2,p}(G)$. Define a mapping $\mathcal{T}v=u$. Replacing the terminal time $T$ by any $\tau\le T$, it follows from the estimate (\ref{eq:apriori}) and Assumption \ref{assu:fg} that for $v^{1},v^{2}\in\mathbb{W}_{\circ}^{2,p}(G)$, \begin{align*} \|\mathcal{T}v^{1}-\mathcal{T}v^{2}\|_{\mathbb{W}^{2,p}(G,\tau)}^{p} & \le C(\|f(\cdot,\cdot,v^{1})-f(\cdot,\cdot,v^{2})\|_{\mathbb{L}^{p}(G,\tau)}^{p}+\|g(\cdot,\cdot,v^{1})-g(\cdot,\cdot,v^{2})\|_{\mathbb{W}^{1,p}(G,\tau;\ell^{2})}^{p})\\ & \le C\eps^{p}\|v^{1}-v^{2}\|_{\mathbb{W}^{2,p}(G,\tau)}^{p}+CK_{\eps}^{p}\int_{0}^{\tau}\E\|v^{1}(s,\cdot)-v^{2}(s,\cdot)\|_{L^{p}(G)}^{p}\,\md s. \end{align*} From the computation (\ref{eq:pf016}) (with $s$ instead of $T$) and Assumption \ref{assu:fg}, we can see that \[ \E\|\mathcal{T}v^{1}(s)-\mathcal{T}v^{2}(s)\|_{L^{p}(G)}^{p}\le C\|v^{1}-v^{2}\|_{\mathbb{W}^{2,p}(G,s)}^{p} \] with $C$ independent of $s$. Combining the last two inequalities and choosing $\eps$ so that $C\eps^{p}=1/4$, we have \[ \|\mathcal{T}v^{1}-\mathcal{T}v^{2}\|_{\mathbb{W}^{2,p}(G,\tau)}^{p}\le\frac{1}{4}\|v^{1}-v^{2}\|_{\mathbb{W}^{2,p}(G,\tau)}^{p}+C\int_{0}^{\tau}\|v^{1}-v^{2}\|_{\mathbb{W}^{2,p}(G,s)}^{p}\,\md s. \] Then by induction we can compute that for any positive integer $m$, \begin{align*} \|\mathcal{T}^{m}v^{1}-\mathcal{T}^{m}v^{2}\|_{\mathbb{W}^{2,p}(G)}^{p} & \le\|v^{1}-v^{2}\|_{\mathbb{W}^{2,p}(G)}^{p}\sum_{k=0}^{m}\binom{m}{k}4^{k-m}\frac{(CT)^{k}}{k!}\\ & \le2^{-m}\me^{4CT}\|v^{1}-v^{2}\|_{\mathbb{W}^{2,p}(G)}^{p}. \end{align*} Choose $m$ sufficiently large so that $\mathcal{T}^{m}$ is a contraction in $\mathbb{W}_{\circ}^{2,p}(G)$. Then there is a unique $u\in\mathbb{W}_{\circ}^{2,p}(G)$ such that $\mathcal{T}^{m}u=u$; since $\mathcal{T}u$ is then also a fixed point of $\mathcal{T}^{m}$, the uniqueness gives $\mathcal{T}u=u$, and from Corollary \ref{cor:nohomog} we have $u\in\mathcal{W}_{\circ}^{2,p}(G)$. Now we derive the estimate (\ref{eq:main-est}).
From Corollary \ref{cor:nohomog} and Assumption \ref{assu:fg} (with a proper choice of $\eps$), we can obtain that \[ \|u\|_{\mathbb{W}^{2,p}(G)}^{p}\le C\bigl(\|u\|_{\mathbb{L}^{p}(G)}^{p}+\|f(\cdot,\cdot,0)\|_{\mathbb{L}^{p}(G)}^{p}+\|g(\cdot,\cdot,0)\|_{\mathbb{W}^{1,p}(G;\ell^{2})}^{p}+\E\|u_{0}\|_{W^{2-2/p,p}(G)}^{p}\bigr). \] The term $\|u\|_{\mathbb{L}^{p}(G)}^{p}$ can be eliminated just as the analogous term was after (\ref{eq:pf009}). The assertions (i) and (ii) of Theorem \ref{thm:main} are proved. \end{proof} In the proof of Theorem \ref{thm:local} we will need the following result concerning the existence and uniqueness of $W^{1,p}$-solutions of SPDEs in divergence form. We keep the formulation in the most compact form that can be applied comfortably, and leave more general extensions to the reader. \begin{prop} \label{prop:W1p}Let Assumptions \ref{assu:compatib} and \ref{assu:continuity} be satisfied with $G=\R_{+}^{n}$, and let $c^{i}\in\mathbb{L}^{\infty}(\R_{+}^{n})$. Then for any $f^{0},F\in\mathbb{L}^{p}(\R_{+}^{n})$ and $g\in\mathbb{L}^{p}(\R_{+}^{n};\ell^{2})$, the equation \begin{align*} \md u & =[(a^{ij}u_{x^{i}})_{x^{j}}+f^{0}+c^{i}F_{x^{i}}]\,\md t+(\sigma^{ik}u_{x^{i}}+g^{k})\,\md\BM_{t}^{k},\\ u|_{x^{1}=0} & =0,\quad u|_{t=0}=0 \end{align*} has a unique solution $u\in\mathcal{W}_{\circ}^{1,p}(\R_{+}^{n})$, and \begin{equation} \|u\|_{\mathbb{W}^{1,p}(\R_{+}^{n})}\le C(\|(f^{0},F)\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|g\|_{\mathbb{L}^{p}(\R_{+}^{n};\ell^{2})}),\label{eq:divergence} \end{equation} where the constant $C$ depends only on $\kappa,K,n,p,T,L,\cm(\cdot)$ and $\|c^{i}\|_{\mathbb{L}^{\infty}}$. \end{prop} \begin{proof} As above, the existence and uniqueness of solutions follows from the (a priori) estimate (\ref{eq:divergence}) by using the method of continuity and the Banach fixed-point theorem. The proof of (\ref{eq:divergence}) is similar to, but much easier than, the derivation of the estimate (\ref{eq:apriori}), because one need not straighten the boundary but can carry out the computation on the original equation, while the auxiliary estimate for model equations is provided by Proposition \ref{prop:half-constant} (ii). We suppress the details here to avoid unnecessary repetition. \end{proof} \subsection{Embedding for $\mathcal{W}_{\circ}^{2,p}(G)$} Let us define the following norm for the space $\mathcal{W}_{\circ}^{2,p}(G)$ (recall Definition \ref{def:W2p}): \[ \|u\|_{\mathcal{W}_{\circ}^{2,p}(G)}=\|u_{xx}\|_{\mathbb{L}^{p}(G)}+\|u_{{\rm D}}\|_{\mathbb{L}^{p}(G)}+\|u_{{\rm S}}\|_{\mathbb{W}^{1,p}(G;\ell^{2})}+(\E\|u(0,\cdot)\|_{W^{2-2/p,p}(G)}^{p})^{1/p}. \] Following the proof of Theorem 3.7 in \citet{krylov1999analytic}, one can prove that $\mathcal{W}_{\circ}^{2,p}(G)$ is a Banach space with the above norm. The assertion (iii) of Theorem \ref{thm:main} is a direct consequence of the following lemma. \begin{lem} \label{lem:embedding}Let $G\in\mathcal{C}^{2}$ and $p>2$. Then for $u\in\mathcal{W}_{\circ}^{2,p}(G)$ we have (a) if $\alpha_{0}:=\frac{2p-n-2}{2p}>0$, then for any $\alpha\in(0,\alpha_{0})$, \[ \E\|u\|_{C^{\alpha/2,\alpha}([0,T]\times\bar{G})}^{p}\le C(n,p,\alpha,K_{0},\rho_{0},T)\|u\|_{\mathcal{W}_{\circ}^{2,p}(G)}^{p}; \] (b) if $\beta_{0}:=\frac{p-n-2}{2p}>0$, then for any $\beta\in(0,\beta_{0})$, \[ \E\|u_{x}\|_{C^{\beta/2,\beta}([0,T]\times\bar{G})}^{p}\le C(n,p,\beta,K_{0},\rho_{0},T)\|u\|_{\mathcal{W}_{\circ}^{2,p}(G)}^{p}.
\] \end{lem} \begin{proof} When $G=\R^{n}$ this lemma is a simple consequence of Theorem 7.2 in \citet{krylov1999analytic} by means of Sobolev embedding. For $G=\R_{+}^{n}$ it suffices to show that the odd extension of $u$ (see (\ref{eq:odd-ext})) lies in $\mathcal{W}_{\circ}^{2,p}(\R^{n})$. Indeed, set $f=u_{{\rm D}}-\Delta u\in\mathbb{L}^{p}(\R_{+}^{n})$ and $g=u_{{\rm S}}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$; then \begin{equation} \md u=(\Delta u+f)\,\md t+g^{k}\,\md\BM_{t}^{k}.\label{eq:pf019} \end{equation} We extend $u_{0}$, $f$ and $g$ as odd functions of $x^{1}$, and solve the above equation with initial data $u_{0}$ in the whole space $\R^{n}$. By our solvability results, the solution of the extended equation is the odd extension of $u$ and belongs to $\mathcal{W}_{\circ}^{2,p}(\R^{n})$. Finally, we consider the case of general $G\in\mathcal{C}^{2}$. For $u\in\mathcal{W}_{\circ}^{2,p}(G)$, define $\tilde{u}^{z}$, $\tilde{u}_{{\rm D}}^{z}$ and $\tilde{u}_{{\rm S}}^{z}$ in the spirit of (\ref{eq:pr018}) for any $z\in\partial G$. Evidently, $\tilde{u}^{z}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$ with $\md\tilde{u}^{z}=\tilde{u}_{{\rm D}}^{z}\,\md t+\tilde{u}_{{\rm S}}^{z,k}\,\md\BM_{t}^{k}$. Bearing in mind the assertion for $\R_{+}^{n}$, a direct computation shows that \begin{align*} & \E\|u\|_{C^{\alpha/2,\alpha}([0,T]\times(\bar{B}_{\rho_{0}/4}(z)\cap\bar{G}))}^{p}\le\E\|\zeta^{z}u\|_{C^{\alpha/2,\alpha}([0,T]\times(\bar{B}_{\rho_{0}/2}(z)\cap\bar{G}))}^{p}\\ & \le C(n,p,\alpha,K_{0},\rho_{0})\E\|\tilde{u}^{z}\|_{C^{\alpha/2,\alpha}([0,T]\times\bar{\R}_{+}^{n})}^{p}\le C(n,p,\alpha,K_{0},\rho_{0},T)\|\tilde{u}^{z}\|_{\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})}^{p}\\ & \le C(n,p,\alpha,K_{0},\rho_{0},T)\|u\|_{\mathcal{W}_{\circ}^{2,p}(G)}^{p},\\ & \E\|u_{x}\|_{C^{\beta/2,\beta}([0,T]\times(\bar{B}_{\rho_{0}/4}(z)\cap\bar{G}))}^{p}\le\E\|(\zeta^{z}u)_{x}\|_{C^{\beta/2,\beta}([0,T]\times(\bar{B}_{\rho_{0}/2}(z)\cap\bar{G}))}^{p}\\ & \le C(n,p,\beta,K_{0},\rho_{0})\E\|\partial\tilde{u}^{z}\|_{C^{\beta/2,\beta}([0,T]\times\bar{\R}_{+}^{n})}^{p}\le C(n,p,\beta,K_{0},\rho_{0})\|\tilde{u}^{z}\|_{\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})}^{p}\\ & \le C(n,p,\beta,K_{0},\rho_{0})\|u\|_{\mathcal{W}_{\circ}^{2,p}(G)}^{p}. \end{align*} For $z\in G^{\rho_{0}/4}=G\backslash G_{\rho_{0}/4}$ the estimate is much simpler: \begin{align*} \E\|u\|_{C^{\alpha/2,\alpha}([0,T]\times\bar{B}_{\rho_{0}/8}(z))}^{p} & \le\E\|\eta^{z}u\|_{C^{\alpha/2,\alpha}([0,T]\times\R^{n})}^{p}\le C\|\eta^{z}u\|_{\mathcal{W}_{\circ}^{2,p}(\R^{n})}^{p}\le C\|u\|_{\mathcal{W}_{\circ}^{2,p}(G)}^{p},\\ \E\|u_{x}\|_{C^{\beta/2,\beta}([0,T]\times\bar{B}_{\rho_{0}/8}(z))}^{p} & \le\E\|(\eta^{z}u)_{x}\|_{C^{\beta/2,\beta}([0,T]\times\R^{n})}^{p}\le C\|\eta^{z}u\|_{\mathcal{W}_{\circ}^{2,p}(\R^{n})}^{p}\le C\|u\|_{\mathcal{W}_{\circ}^{2,p}(G)}^{p}, \end{align*} where $\eta^{z}\in C_{0}^{\infty}(\R^{n})$ is such that $\eta^{z}(x)=1$ for $|x-z|\le\rho_{0}/8$ and $\eta^{z}(x)=0$ for $|x-z|\ge\rho_{0}/4$. Therefore, we have bounded the H\"older norms in any $\bar{B}_{\rho_{0}/8}(z)\cap\bar{G}$ with $z\in\bar{G}$. The desired global estimate follows from the localization property of H\"older norms (cf. Theorem 4.1.1 in \citet{krylov1996lectures}). The lemma is proved. \end{proof} \section{Proof of Theorem \ref{thm:local}} The interior regularity of the solution is implied by the assumption $u\in\mathbb{W}_{{\rm loc}}^{2,p}(G)$.
To prove the regularity near $\Gamma':=\Gamma\cap\partial G'$, it suffices to do this in a neighbourhood of any point $z\in\Gamma'$ because $G'$ is bounded (and so is $\Gamma'$). In other words, we need to prove that $u\in\mathbb{W}^{2,p}(B_{\eps}(z)\cap G)$, where $\eps>0$ is a number much smaller than $\mathrm{dist}(G',\partial G\backslash\Gamma)$ and $\rho_{0}$ (recall Definition \ref{def:domain}). In the spirit of the method of straightening the boundary used in the proof of Proposition \ref{prop:apriori}, the desired result can be converted equivalently to the following lemma. \begin{lem} The conclusion of Theorem \ref{thm:local} holds true for $G=\R_{+}^{n}$, $\Gamma=\partial\R_{+}^{n}\cap B_{2\eps}(0)$ and $G'=B_{\eps}(0)$. \end{lem} \begin{proof} In view of Corollary \ref{cor:linear} one can assume that $u(0,\cdot)=0$. Take a function $\zeta\in C_{0}^{\infty}(\R^{n})$ such that $\zeta(x)=1$ for $|x|\le3\eps/2$ and $\zeta(x)=0$ for $|x|\ge2\eps$. Then $v=\zeta u$ satisfies the following equation \[ \md v=[(a^{ij}v_{x^{i}})_{x^{j}}+f^{0}+c^{i}F_{x^{i}}]\,\md t+(\sigma^{ik}v_{x^{i}}+\tilde{g}^{k})\,\md\BM_{t}^{k}, \] where \begin{align*} c^{i} & =b^{i}\zeta-2a^{ij}\zeta_{x^{j}}-a_{x^{j}}^{ij}\zeta,\quad F=u,\\ f^{0} & =\zeta f+(c\zeta-a^{ij}\zeta_{x^{i}x^{j}}-a_{x^{j}}^{ij}\zeta_{x^{i}})u,\\ \tilde{g}^{k} & =\zeta g^{k}+(\nu^{k}\zeta-\sigma^{ik}\zeta_{x^{i}})u. \end{align*} From Proposition \ref{prop:W1p} one has $\zeta u=v\in\mathcal{W}_{\circ}^{1,p}(\R_{+}^{n})$ and \[ \|u\|_{\mathbb{W}^{1,p}(B_{3\eps/2}\cap\R_{+}^{n})}\le C(\|u\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|f\|_{\mathbb{L}^{p}(\R_{+}^{n})}+\|g\|_{\mathbb{L}^{p}(\R_{+}^{n};\ell^{2})}). \] Now let $\tilde{\zeta}\in C_{0}^{\infty}(\R^{n})$ be such that $\tilde{\zeta}(x)=1$ for $|x|\le\eps$ and $\tilde{\zeta}(x)=0$ for $|x|\ge3\eps/2$. Then $\tilde{v}=\tilde{\zeta}u$ satisfies \[ \md\tilde{v}=(a^{ij}\tilde{v}_{x^{i}x^{j}}+\tilde{f})\,\md t+(\sigma^{ik}\tilde{v}_{x^{i}}+\tilde{g}^{k})\,\md\BM_{t}^{k}, \] where $\tilde{g}$ is as defined above with $\zeta$ replaced by $\tilde{\zeta}$, and \[ \tilde{f}=\tilde{\zeta}f+(b^{i}\tilde{\zeta}-2a^{ij}\tilde{\zeta}_{x^{j}})u_{x^{i}}+(c\tilde{\zeta}-a^{ij}\tilde{\zeta}_{x^{i}x^{j}})u. \] Since $u\in\mathbb{W}^{1,p}(B_{3\eps/2}\cap\R_{+}^{n})$ and $u=0$ on $B_{3\eps/2}\cap\partial\R_{+}^{n}$, one has $\tilde{f}\in\mathbb{L}^{p}(\R_{+}^{n})$ and $\tilde{g}\in\mathbb{W}_{\circ}^{1,p}(\R_{+}^{n};\ell^{2})$. Then from Theorem \ref{thm:main} one obtains $\tilde{\zeta}u=\tilde{v}\in\mathcal{W}_{\circ}^{2,p}(\R_{+}^{n})$. The continuity property of $\tilde{\zeta}u$ and its derivatives follows from Lemma \ref{lem:embedding}. The proof is complete. \end{proof} \section*{References} \bibliographystyle{authordate3}
\section{Introduction} As multi-party data-sharing becomes inevitable in the \emph{big data} era, privacy and security issues of data sharing also create major challenges for data curators and owners. More precisely, when releasing the useful data $X$, we also need to restrict the leakage of the sensitive/private information $S$, e.g., an individual's medical records, due to the inherent correlation between $S$ and $X$. Existing privacy-preserving techniques in computer science, such as the celebrated differential privacy (DP) \cite{DPbook2014}, add noise to the useful data $X$ to ensure the indifference of the noisy released dataset to the presence or absence of an individual's records. Of course, if we distort $X$ to protect against malicious inference of $S$, we also lose fidelity/utility. However, the DP framework does not allow \emph{joint} optimization of privacy and utility. On the other hand, a general framework of statistical inference is outlined in \cite{PvsInfer2012}, which allows characterization of the \emph{privacy-utility tradeoff (PUT)}. The average privacy leakage is measured by the adversary's posterior knowledge gain $H(S) - H(S | \hat{X}) = I(S ; \hat{X})$ about $S$, where $\hat{X}$ is the released \emph{sanitized} data based on the original useful data $X$. See Fig.~\ref{fig:sys}. The problem is to determine the transition probability $p(\hat{x} | x)$ that minimizes $I(S ; \hat{X})$ while ensuring the distortion in $\hat{X}$ remains below a certain level. It is shown in \cite[Theorem~1]{PvsInfer2012} that the solution $p(\hat{x} | x )$ can be determined by solving a multivariate convex minimization problem, to which alternating minimization algorithms, e.g., the Blahut–Arimoto algorithm \cite{Blahut1972}, also apply.\footnote{The convexity holds if the distortion is measured by the expected distance between $X$ and $\hat{X}$ \cite[Theorem~1]{PvsInfer2012}.} However, this approach requires the alphabet $\hat{\mathcal{X}}$ of the sanitized data to be given. \begin{figure}[tbp] \centering \scalebox{0.7}{\input{figures/oneoff.tex}} \caption{Statistical inference framework \cite{PvsInfer2012}: a data curator wants to publish $X$ that is correlated with the private data $S$. The privacy funnel problem \cite{PF2014} is to generate the sanitized data $\hat{X}$ via transition probability $p(\hat{x} | x)$ so as to minimize the privacy leakage $I(S ; \hat{X})$ and maintain a utility threshold on $I(X ; \hat{X})$.} \label{fig:sys} \end{figure} In \cite{PF2014}, the authors used $I(X ; \hat{X})$ to measure the utility and considered the problem $\min_{p(\hat{x} | x)} I(S ; \hat{X})$ s.t. $I(X ; \hat{X}) \geq \theta_{\text{U}}$ for a guaranteed utility level $\theta_{\text{U}}$. It is called the \emph{privacy funnel (PF)} problem in that $p(\hat{x} | x)$ behaves like a funnel that passes $X$ but blocks $S$. This problem is dual to the information bottleneck (IB) \cite{IB2000} in machine learning,\footnote{IB tries to design the bottleneck $p(\hat{x} | x)$ that passes relevant information about $S$, which is nested in $X$, with the minimum coding rate $I(X ; \hat{X})$ \cite{IB2000}. } so the idea of agglomerative clustering \cite{IBAgglom1999} was borrowed in \cite[Algorithm~1]{PF2014} to produce a codebook $\hat{\mathcal{X}}$ from $\mathcal{X}$: iteratively merge elements of $\mathcal{X}$ such that the resulting $I(X ; \hat{X}) \geq \theta_{\text{U}}$, choosing the merge that minimizes $I(S ; \hat{X})$.
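To make these quantities concrete, the following minimal NumPy sketch (function and variable names are ours, for illustration only) evaluates the leakage $I(S;\hat{X})$ and the resulting PF objective induced by a hard clustering of $\mathcal{X}$:

```python
import numpy as np

def mutual_information(p_joint):
    """I(A;B) in nats for a joint distribution given as a 2D array."""
    p_a = p_joint.sum(axis=1, keepdims=True)  # marginal of the row variable
    p_b = p_joint.sum(axis=0, keepdims=True)  # marginal of the column variable
    prod = p_a @ p_b                          # product of the marginals
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / prod[mask])).sum())

def funnel_objective(p_sx, cluster_of, n_clusters, lam):
    """I(S; X_hat) - lam * I(X; X_hat) for a hard clustering of the alphabet
    of X, where cluster_of[x] is the cluster index assigned to symbol x."""
    n_s, n_x = p_sx.shape
    p_sxhat = np.zeros((n_s, n_clusters))     # p(s, W_hat) = sum_{x in W} p(s, x)
    for x in range(n_x):
        p_sxhat[:, cluster_of[x]] += p_sx[:, x]
    p_x = p_sx.sum(axis=0)
    p_xxhat = np.zeros((n_x, n_clusters))     # deterministic transition p(x, x_hat)
    p_xxhat[np.arange(n_x), cluster_of] = p_x
    return mutual_information(p_sxhat) - lam * mutual_information(p_xxhat)
```

For instance, with $\mathcal{X}=\Set{1,\dotsc,4}$, calling \verb"funnel_objective(p_sx, [0, 1, 1, 0], 2, 0.5)" evaluates the partition $\Set{\Set{1,4},\Set{2,3}}$ discussed below.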
Since the determination of the optimal merge incurs a combinatorial search, the authors in \cite{PF2014,IBAgglom1999} resorted to a brute-force pairwise merge approach so that the complexity is controlled at $O(|\mathcal{X}|^2)$ in each iteration. However, it is well-known that some combinatorial optimization problems exhibiting strong structure can be solved efficiently in polynomial time \cite{Fujishige2005}. Thus, it is worth understanding whether these techniques also apply to the PF problem. In this paper, we propose a submodularity-based iterative agglomerative clustering approach, the IAC-MDSF algorithm, that starts with $\hat{\mathcal{X}}^{(0)} \coloneqq \mathcal{X}$ and iteratively searches for an optimal merge over all subsets $W \subseteq \hat{\mathcal{X}}^{(k)}$, resulting in an r.v. $\hat{X}^{(k)}_W$ that minimizes the Lagrangian function $I (S ; \hat{X}^{(k)}_W) - \lambda I (X ; \hat{X}^{(k)}_W)$. We prove that this minimization problem reduces to the minimization of the difference of two submodular functions (MDSF), for which existing MDSF algorithms, e.g., \cite{SubMCover2013,SubMDiff2012,SubSuper2005}, can be applied to ensure local convergence in polynomial time. We show that this MDSF-based approach also applies to the IB problem. We run experiments on the Hungarian heart disease data set in the UCI machine learning repository \cite{UCI2007}, where we vary the value of the Lagrange multiplier $\lambda$ to outline the PUT as the Pareto frontier: the privacy leakage $I(S ; \hat{X})$ vs. the utility loss $ - I(X ; \hat{X})$. We show that, for both the PF and IB problems, our IAC-MDSF algorithm in general Pareto dominates or outperforms the pairwise merge approaches in \cite{PF2014} and is computationally much less complex. \section{Privacy against Statistical Inference Attack} Consider the situation where a data curator wants to publish data $X$ to the public. At the same time, he/she holds some sensitive/private data $S$ that is, in general, correlated with $X$. In the public domain, there may exist adversaries or legitimate but curious users who can infer $S$ by observing $X$. Thus, instead of the original $X$, the data curator publishes a distorted version $\hat{X}$. The purpose is to keep the fidelity/utility of $X$ in $\hat{X}$ while minimizing the leakage of $S$ via $\hat{X}$. We regard $S$ and $X$ as r.v.s with alphabets $\mathcal{S}$ and $\mathcal{X}$, respectively. The correlation between $S$ and $X$ is captured by the joint probability mass function $p(s,x)$. The design of the data release scheme $\hat{X}$ is to determine the mapping from $\mathcal{X}$ to $\hat{\mathcal{X}}$, or the transition probability $p(\hat{x} | x)$ for all $(x,\hat{x}) \in \mathcal{X} \times \hat{\mathcal{X}}$. Thus, there naturally arises a Markov chain $S - X - \hat{X}$. \textbf{Privacy measure} \cite{PvsInfer2012}: Let $\triangle_{S}$ be the probability simplex over $\mathcal{S}$. For any user in the public domain, denote by $q \in \triangle_{S}$ his/her belief about $S$ and by $C(S,q)$ the cost of inferring $S$ based on the distribution $q$. It is assumed that any user in the public domain is able to adjust $q$ to minimize the expected prior inference cost $c^*_0 = \min_{q \in \triangle_{S}} \mathbb{E}_{S}[C(S,q)]$ and posterior inference cost $c^*(\hat{x}) = \min_{q \in \triangle_{S}} \mathbb{E}_{S}[C(S,q) | \hat{X} = \hat{x}]$ (e.g., a maximum a posteriori estimate).
Thus, in order to preserve the privacy of $S$, the goal is to maintain the difficulty of inferring $S$, or to minimize the average cost reduction $\delta C = c^*_0 - \mathbb{E}_{\hat{X}} [c^*(\hat{x})]$, in the public domain. It is shown in \cite[Section~IV]{PvsInfer2012} that $\delta C = I(S ; \hat{X})$ if the log loss $C(S,q) = - \log q(s)$ is adopted as the inference cost. Here, the mutual information $I(S ; \hat{X}) = H(S) - H(S | \hat{X})$ is interpreted as the leakage of $S$ to the public via $\hat{X}$. \subsection{Privacy Funnel} \textbf{Utility measure}: The utility refers to how much useful information in $X$ is revealed to the public via $\hat{X}$. It can be measured as the expected distortion $\mathbb{E}_{X,\hat{X}}[d(x,\hat{x})]$ for some pairwise distance measure $d \colon \mathcal{X} \times \hat{\mathcal{X}} \mapsto \mathbb{R}_+$. Instead, the authors in \cite{PF2014} again considered the log-loss widely used in machine learning and information theory, so that the utility is measured by $I(X ; \hat{X})$. Thus, the optimization of the PUT can be formulated as a constrained minimization problem, called the \emph{privacy funnel (PF)} \cite{PF2014}: for a given utility threshold $\theta_\text{U}$, \begin{equation} \label{eq:PF} \begin{aligned} & \min_{p(\hat{x} | x)} I (S ; \hat{X}) \\ & \text{s.t.} \quad I (X ; \hat{X}) \geq \theta_\text{U}. \end{aligned} \end{equation} This problem formulation also establishes the duality between PF and the information bottleneck (IB) problem \cite{IB2000} in machine learning \cite[Section~II]{PF2014}. See also Section~\ref{sec:IB}. Although problem~\eqref{eq:PF} is not convex, it allows agglomerative clustering algorithms \cite{IBAgglom1999} not only to search for a deterministic solution $p(\hat{x} | x)$, but also to determine the alphabet $\hat{\mathcal{X}}$ from $\mathcal{X}$. \section{Agglomerative Clustering Algorithm} Consider the PF problem \eqref{eq:PF}. We have the Lagrangian function \begin{equation} \label{eq:Lagrangian} L_{\text{PF}}(p(\hat{x} | x),\lambda) = I (S ; \hat{X}) - \lambda I (X ; \hat{X}). \end{equation} The solution of \eqref{eq:PF} for all $\theta_{\text{U}}$ can be determined if we solve $\min_{p(\hat{x} | x)} L_{\text{PF}}(p(\hat{x} | x),\lambda)$ for all $\lambda \geq 0$.\footnote{When $\lambda = 0$, \eqref{eq:PF} reduces to $\min_{p(\hat{x} | x)} I (S ; \hat{X})$, where we only want to minimize the privacy leakage. } Also, due to the PUT, $L_{\text{PF}}(p(\hat{x} | x),\lambda)$ can be interpreted as a weighted sum of two conflicting objectives, for which each $\lambda$ produces an achievable pair of mutual information values $\Set{I(S ; \hat{X}), -I(X ; \hat{X})}$, and all pairs form the \emph{Pareto frontier} indicating how well we can minimize the privacy and utility losses at the same time. Thus, instead of \eqref{eq:PF}, it suffices to address how to solve the problem $\min_{p(\hat{x} | x)} L_{\text{PF}}(p(\hat{x} | x),\lambda)$ for any given $\lambda$. \subsection{Agglomerative Clustering Algorithm} Let the alphabet $\hat{\mathcal{X}}$ be generated by a deterministic transition $p(\hat{x} | x)$ or hard clustering of the elements in $\mathcal{X}$, i.e., the resulting $\hat{\mathcal{X}} = \Set{\hat{W} \colon W \in \mathcal{P}}$ is built from a partition $\mathcal{P} = \Set{W \colon W \subseteq \mathcal{X} }$ of $\mathcal{X}$, where, for each $W \in \mathcal{P}$, all elements $x \in W$ are merged into the same element $\hat{W} \in \hat{\mathcal{X}}$.
For example, for $\mathcal{X} = \Set{1,\dotsc,4}$, the partition $\Set{\Set{1,4},\Set{2,3}}$ yields $\hat{\mathcal{X}} = \Set{14,23}$, where $1$ and $4$ are merged to $14$, and $2$ and $3$ are merged to $23$.\footnote{In $\hat{\mathcal{X}}$, alphabet elements $\hat{W}$, e.g., `$14$' and `$23$', denote the labels/indices of the merged elements; one can choose other labels based on the actual application.} The transition probability $p(\hat{x} | x)$ is $$ p(\hat{W}|x) = \begin{cases} 1 & x \in W \\ 0 & x \notin W \end{cases}, \qquad \forall W \in \mathcal{P},$$ the resulting joint distribution $p(s,\hat{x})$ is $$ p(s, \hat{W}) = \sum_{x \in W} p(s,x), \qquad \forall s \in \mathcal{S}, W \in \mathcal{P}, $$ and the marginal distribution $p(\hat{x})$ is $p(\hat{W}) = \sum_{x \in W} p(x), \forall W \in \mathcal{P}$. For example, if $p(X = 1) = 0.2$, $p(X = 2) = 0.3$, $p(X = 3) = 0.1$ and $p(X = 4) = 0.4$, we have $p(\hat{X} = 14) = 0.6$ and $p(\hat{X} = 23) = 0.4$. Instead of obtaining the partition $\mathcal{P}$ in a one-off manner, consider the iterative agglomerative clustering approach in Algorithm~\ref{algo:CombAggloCluster}: initiate $\hat{\mathcal{X}}^{(0)} \coloneqq \mathcal{X}$ and, in each iteration $k$, obtain $W^*$ as the minimizer of \begin{equation} \label{eq:LagrangianComb} \min \Set{ I (S ; \hat{X}^{(k)}_W) - \lambda I (\hat{X}^{(k)} ; \hat{X}^{(k)}_W) \colon W \subseteq \hat{\mathcal{X}}^{(k)} } \end{equation} and merge all $\hat{x}^{(k)} \in W^*$ into $\hat{W}^*$. Let $\hat{\mathcal{X}}^{(k)}_W = (\hat{\mathcal{X}}^{(k)} \setminus W) \cup \Set{\hat{W}}$ be the alphabet obtained by merging all $\hat{x}^{(k)} \in W$ into $\hat{W}$, and let $\hat{X}^{(k)}_W$ in \eqref{eq:LagrangianComb} denote the resulting r.v. For example, for $\hat{\mathcal{X}}^{(k)} = \Set{1,\dotsc,4}$ and $W = \Set{1,4}$, $\hat{\mathcal{X}}^{(k)}_W = \Set{14,2,3}$ and the resulting $\hat{X}^{(k)}_W$ has probabilities $p(\hat{X}^{(k)}_W = 14) = p(\hat{X}^{(k)} = 1) + p(\hat{X}^{(k)} = 4)$, $p(\hat{X}^{(k)}_W = 2) = p(\hat{X}^{(k)} = 2)$ and $p(\hat{X}^{(k)}_W = 3) = p(\hat{X}^{(k)} = 3)$. The iteration in Algorithm~\ref{algo:CombAggloCluster} terminates when there is no merge that reduces the objective function \eqref{eq:LagrangianComb}, i.e., when $W^* = \emptyset$ or $|W^*| = 1$. Note that the basic idea of Algorithm~\ref{algo:CombAggloCluster} was proposed in \cite{IBAgglom1999,PF2014}. The difference is that the algorithms in \cite{IBAgglom1999,PF2014} are iterative pairwise merge approaches, in which $W^*$ is found by a brute-force search over $\Set{W \subseteq \hat{\mathcal{X}}^{(k)} \colon |W| = 2}$ each time, while Algorithm~\ref{algo:CombAggloCluster} searches for $W^*$ over $2^{\hat{\mathcal{X}}^{(k)}}$ by solving \eqref{eq:LagrangianComb}, a minimization of a set function converted from the Lagrangian function \eqref{eq:Lagrangian}. It is obvious that the restriction to pairwise combinations of $\hat{\mathcal{X}}^{(k)}$ in \cite{IBAgglom1999,PF2014} is made to avoid dealing with a set-function optimization problem. However, in the next subsection, we show that problem \eqref{eq:LagrangianComb} can be converted to an MDSF problem, a local optimum of which can be found efficiently.
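Reusing the helpers from the earlier sketch, the merge operation and the search step can be illustrated as follows. The pair loop below mimics the brute-force pairwise baseline of \cite{PF2014}; step~\ref{step:MDSF} of Algorithm~\ref{algo:CombAggloCluster} instead replaces this loop with an MDSF solver over all subsets. Again, the names are ours and this is an illustration, not the authors' code:

```python
from itertools import combinations

def merge_joint(p_sxhat, W):
    """Joint p(s, x_hat) after merging the alphabet indices in W into one symbol."""
    keep = [j for j in range(p_sxhat.shape[1]) if j not in W]
    merged = p_sxhat[:, list(W)].sum(axis=1, keepdims=True)
    return np.concatenate([p_sxhat[:, keep], merged], axis=1)

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def lagrangian(p_sxhat, lam):
    """I(S; X_hat_W) - lam * I(X_hat; X_hat_W) for a merged joint; for a
    deterministic merge, I(X_hat; X_hat_W) equals the entropy H(X_hat_W)."""
    return mutual_information(p_sxhat) - lam * entropy(p_sxhat.sum(axis=0))

def pairwise_merge_step(p_sxhat, lam):
    """One iteration of the brute-force pairwise search in the style of [PF2014]."""
    best_W, best_val = None, lagrangian(p_sxhat, lam)  # no-merge baseline
    for W in combinations(range(p_sxhat.shape[1]), 2):
        val = lagrangian(merge_joint(p_sxhat, W), lam)
        if val < best_val:
            best_W, best_val = W, val
    if best_W is None:                                 # termination condition
        return p_sxhat, None
    return merge_joint(p_sxhat, best_W), best_W
```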
\subsection{Minimizing Difference of Submodular Functions (MDSF)} For a given alphabet, e.g., $\hat{\mathcal{X}}^{(k)}$ at any iteration $k$ of Algorithm~\ref{algo:CombAggloCluster}, define two set functions $f$ and $g$ as \begin{equation} \begin{aligned} f(W) &= \sum_{\hat{x}^{(k)} \in W} p(\hat{x}^{(k)}) \log \frac{p(\hat{x}^{(k)})}{p(\hat{W})}, \\ g(W) &= \sum_{s \in \mathcal{S}} \sum_{\hat{x}^{(k)} \in W} p(s,\hat{x}^{(k)}) \log \frac{p(s,\hat{x}^{(k)})}{p(s,\hat{W})}, \end{aligned} \nonumber \end{equation} for all $W \subseteq \hat{\mathcal{X}}^{(k)}$. The following result shows that the objective function in \eqref{eq:LagrangianComb} decomposes into the difference of two submodular set functions. The proof of Theorem~\ref{theo:main} is in Appendix~\ref{app:theo:main}. \begin{algorithm} [t] \label{algo:CombAggloCluster} \small \SetAlgoLined \SetKwInOut{Input}{input}\SetKwInOut{Output}{output} \SetKwFor{For}{for}{do}{endfor} \SetKwRepeat{Repeat}{repeat}{until} \SetKwIF{If}{ElseIf}{Else}{if}{then}{else if}{else}{endif} \BlankLine \Input{$\lambda \in [0,1]$, $\mathcal{S}$, $\mathcal{X}$ and $p(s,x),\forall (s,x) \in \mathcal{S} \times \mathcal{X}$.} \Output{alphabet $\hat{\mathcal{X}}$ and $p(s,\hat{x}), \forall (s,\hat{x}) \in \mathcal{S} \times \hat{\mathcal{X}}$.} \BlankLine initiate $\hat{\mathcal{X}}^{(0)} \coloneqq \mathcal{X}$, $p(s,\hat{x}^{(0)}) \coloneqq p(s,x)$ and $k \coloneqq 0$\; \Repeat{ $|W^*| \leq 1$ or $|\hat{\mathcal{X}}^{(k)}| = 1$}{ apply an MDSF algorithm to obtain the minimizer $W^*$ of \label{step:MDSF} $$ \min \Set{ I (S ; \hat{X}^{(k)}_W) - \lambda I (\hat{X}^{(k)} ; \hat{X}^{(k)}_W) \colon W \subseteq \hat{\mathcal{X}}^{(k)} }; $$ \nl $\hat{\mathcal{X}}^{(k+1)} \coloneqq (\hat{\mathcal{X}}^{(k)} \setminus W^*) \cup \Set{\hat{W}^*} $\; \textbf{forall} $s \in \mathcal{S}$ \textbf{do} obtain \begin{multline} p(s, \hat{x}^{(k+1)} ) \coloneqq \\ \begin{cases} p(s,\hat{W}^*) & \hat{x}^{(k+1)} = \hat{W}^* \\ p(s,\hat{x}^{(k)}) & \hat{x}^{(k+1)} \neq \hat{W}^* \text{ and } \hat{x}^{(k+1)} = \hat{x}^{(k)} \end{cases}; \nonumber \end{multline} \nl $k \coloneqq k + 1$\; } \Return $\hat{\mathcal{X}}^{(k)}$ and $p(s, \hat{x}^{(k)})$\; \caption{Iterative agglomerative clustering algorithm based on the minimization of the difference of submodular functions (IAC-MDSF)} \end{algorithm} \begin{theorem} \label{theo:main} In each iteration $k$ of Algorithm~\ref{algo:CombAggloCluster}, \begin{multline} \label{eq:Eq} \arg\!\min \Set{ I (S ; \hat{X}^{(k)}_W) - \lambda I (\hat{X}^{(k)} ; \hat{X}^{(k)}_W) \colon W \subseteq \hat{\mathcal{X}}^{(k)} } \\ = \arg\!\min \Set{ (1-\lambda) f (W) - g (W) \colon W \subseteq \hat{\mathcal{X}}^{(k)} }, \end{multline} where $f$ and $g$ are submodular\footnote{A set function $f \colon 2^V \mapsto \mathbb{R}$ is submodular if $f(X) + f(Y) \geq f(X \cup Y) + f(X \cap Y), \forall X, Y \subseteq V$; $-f$ is supermodular if $f$ is submodular \cite{Fujishige2005}. } and nonincreasing: $f(W) \geq f(Y)$ and $g(W) \geq g(Y)$ for all $W \subseteq Y$. \hfill \IEEEQED \end{theorem} Then, in order to determine $W^*$ in step~\ref{step:MDSF} of the IAC-MDSF algorithm in Algorithm~\ref{algo:CombAggloCluster}, we just need to solve the problem \begin{equation} \label{eq:MDSF_PF} \min \Set{ (1-\lambda) f (W) - g (W) \colon W \subseteq \hat{\mathcal{X}}^{(k)} }.
\end{equation} Since $f$ and $g$ are nonincreasing, for all $\lambda \geq 1$ the minimizer of \eqref{eq:MDSF_PF} is the empty set $\emptyset$, i.e., Algorithm~\ref{algo:CombAggloCluster} just returns $\hat{\mathcal{X}} = \mathcal{X}$ and $p(s,\hat{x}) = p(s,x)$ for all $s$ and $\hat{x} = x$. Then, to determine the Pareto frontier, we only need to solve problem~\eqref{eq:MDSF_PF} for all $\lambda \in [0,1]$. In this case, $(1-\lambda) f$ and $g$ are both submodular and the problem \eqref{eq:MDSF_PF} is an MDSF. The MDSF problem arises in many machine learning applications, e.g., feature selection and discriminative structured graph learning \cite{SubMCover2013}, for which many polynomial-time algorithms have been proposed in the literature, e.g., \cite{SubMCover2013,SubMDiff2012}, that ensure convergence to a local optimum. \begin{figure*}[t] \centerline{ \subfigure[$S = \Set{\text{`age', } \text{`sex'}}$, $X = \Set{\text{`sex', } \text{`cholesterol'} }$ and $\lambda = 0.8$.]{\scalebox{0.6}{\input{figures/ObjConverge_HealthDoubleSex.tex}}} \qquad\qquad \subfigure[$S = \Set{\text{`age', } \text{`sex'}}$, $X = \Set{\text{`age', } \text{`cholesterol'} }$ and $\lambda = 0.7$.]{\scalebox{0.6}{\input{figures/ObjConverge_HealthDoubleAge.tex}}} } \caption{ The convergence of the Lagrangian function $I(S;\hat{X}^{(k)}) - \lambda I(X;\hat{X}^{(k)})$ when the IAC-MDSF algorithm in Algorithm~\ref{algo:CombAggloCluster} is applied to the PF and IB problems on the Hungarian heart disease data set in \cite{UCI2007}. } \label{fig:ObjConverge_Health} \end{figure*} \begin{figure*}[htbp] \centerline{\subfigure[$S = \Set{\text{`age', } \text{`sex'}}$ and $X = \Set{\text{`sex', } \text{`cholesterol'} }$.]{\scalebox{0.6}{\input{figures/PvsU_HealthDoubleSex.tex}}} \qquad \subfigure[$S = \Set{\text{`age', } \text{`sex'}}$ and $X = \Set{\text{`age', } \text{`cholesterol'} }$.]{\scalebox{0.6}{\input{figures/PvsU_HealthDoubleAge.tex}}}} \caption{The Pareto frontiers of the PF and IB problems obtained by applying the IAC-MDSF algorithm in Algorithm~\ref{algo:CombAggloCluster} for multiple values of $\lambda \in [0,1]$ to the Hungarian heart disease data set in \cite{UCI2007}. Note, for IB, the Pareto frontier is interpreted as the extracted useful information on $S$ vs. the reduction in coding rate. The results are compared with the iterative pairwise merge algorithms \cite[Algorithms~1]{PF2014} for PF and \cite[Algorithms~2]{PF2014} for IB. } \label{fig:PvsU_Health} \end{figure*} \subsection{Information Bottleneck Problem} \label{sec:IB} The duality relationship between PF and the information bottleneck (IB) has been pointed out in \cite{PF2014,GIB2018}. In IB \cite{IB2000}, $S$ refers to the useful/relevant signal that is hidden in the observations $X$. The problem is to encode $X$ into $\hat{X}$ with the minimum rate $I(X ; \hat{X})$ while extracting the most information about $S$, e.g., modeling speech phonemes from audio waves. The optimization is exactly the opposite of PF: given a coding rate threshold $\theta_\text{R}$, \begin{equation} \label{eq:IB} \begin{aligned} & \max_{p(\hat{x} | x)} I (S ; \hat{X}) \\ & \text{s.t.} \quad I (X ; \hat{X}) \leq \theta_\text{R}. \end{aligned} \end{equation} The Lagrangian function is\footnote{The original IB problem in \cite{IB2000} is formulated as minimizing the coding rate $I(X ; \hat{X})$ subject to the relevance $I(S ; \hat{X})$ being no less than some threshold.
This problem and \eqref{eq:IB} share the same Lagrangian function $L_{\text{IB}}(p(\hat{x} | x),\lambda)$. } $$ L_{\text{IB}}(p(\hat{x} | x),\lambda) = -I (S ; \hat{X}) + \lambda I (X ; \hat{X}) = - L_{\text{PF}}(p(\hat{x} | x),\lambda)$$ and the Pareto frontier for IB can be outlined by maximizing $L_{\text{PF}}(p(\hat{x} | x),\lambda)$ for all $\lambda \geq 0$. It is also clear that, if we determine $W^*$ as the minimizer of \begin{equation} \label{eq:MDSF_IB} \min \Set{ g (W) - (1-\lambda) f (W) \colon W \subseteq \hat{\mathcal{X}}^{(k)} } \end{equation} in step~\ref{step:MDSF},\footnote{This is equivalent to replacing the minimization problem in step~\ref{step:MDSF} of Algorithm~\ref{algo:CombAggloCluster} by $ \max \Set{ I (S ; \hat{X}^{(k)}_W) - \lambda I (\hat{X}^{(k)} ; \hat{X}^{(k)}_W) \colon W \subseteq \hat{\mathcal{X}}^{(k)} } $. } then Algorithm~\ref{algo:CombAggloCluster} returns a hard clustering solution and a corresponding codebook $\hat{\mathcal{X}}$ for the IB problem. For \eqref{eq:MDSF_IB}, we just need to consider $\lambda \in [0, 1]$ since the minimizer is $\hat{\mathcal{X}}^{(k)}$ for all $\lambda \geq 1$. Again, \eqref{eq:MDSF_IB} for all $\lambda \in [0,1]$ is an MDSF problem. This means that, for the same $\lambda$, the IAC-MDSF algorithm in Algorithm~\ref{algo:CombAggloCluster} can provide solutions for both the PF and IB problems. \section{Experimental Results} The UCI machine learning repository \cite{UCI2007} contains 463 data sets. From this repository, we use the heart disease data set created by the Hungarian Institute of Cardiology, which contains patient data with 76 attributes for identifying the presence of heart disease. We extract three of them, `age', `sex' and `serum cholesterol (mg/dl)', and run the following experiments in two settings: in the first, $S = \Set{\text{`age', } \text{`sex'}}$ and $X = \Set{\text{`sex', } \text{`cholesterol'} }$; in the second, $S = \Set{\text{`age', } \text{`sex'}}$ and $X = \Set{\text{`age', } \text{`cholesterol'} }$. For solving problems~\eqref{eq:MDSF_PF} and \eqref{eq:MDSF_IB} in step~\ref{step:MDSF} of Algorithm~\ref{algo:CombAggloCluster} for PF and IB, respectively, we run the function \verb"sfo_ssp" in the SFO toolbox \cite{SFOtools2010}. This function implements the submodular-supermodular (SubM-SuperM) algorithm proposed in \cite[Algorithm~1]{SubSuper2005} for solving MDSF problems. \subsection{Convergence} We first show the convergence performance of the IAC-MDSF algorithm for both the PF and IB problems in Fig.~\ref{fig:ObjConverge_Health}; consistent with \cite[Lemma~3.3]{SubMDiff2012}, the SubM-SuperM algorithm ensures a reduction (for PF) or an increase (for IB) of the Lagrangian function in each iteration. \subsection{Pareto Frontier} In Fig.~\ref{fig:PvsU_Health}, we apply the IAC-MDSF algorithm to obtain the Pareto frontiers for the PF and IB problems by varying $\lambda$ from $0$ to $1$. The Pareto frontier is presented in terms of two normalized mutual informations: $I(S;\hat{X})/H(S)$ and $-I(X;\hat{X})/H(X)$. For PF, these are respectively interpreted as the loss in privacy (the leakage of $S$) vs. the loss in utility; for IB, they are respectively interpreted as the extracted useful information vs. the reduction in coding rate. We also plot the Pareto frontiers obtained by the pairwise merge algorithms proposed in \cite{PF2014}.
For PF, \cite[Algorithm~1]{PF2014} iteratively searches for two elements $W^* = \Set{i,j} \subseteq \hat{\mathcal{X}}^{(k)}$ with $I(X ; \hat{X}^{(k)}_{W^*}) \geq \theta_{\text{U}}$ that minimize $I(S ; \hat{X}^{(k)}_{W^*})$ and merges them to form the new alphabet $\hat{\mathcal{X}}^{(k+1)}$; for IB, \cite[Algorithm~2]{PF2014} iteratively merges $W^* = \Set{i,j} \subseteq \hat{\mathcal{X}}^{(k)}$ with $I(S ; \hat{X}^{(k)}_{W^*}) \geq \theta_{\text{U}}$ that minimizes $I(X ; \hat{X}^{(k)}_{W^*})$. It can be seen that the IAC-MDSF algorithm in general outperforms \cite[Algorithms~1 and 2]{PF2014}. \subsection{Complexity} \label{sec:Complexity} The IAC-MDSF algorithm in Algorithm~\ref{algo:CombAggloCluster} and \cite[Algorithms~1 and 2]{PF2014} all ensure local convergence. However, \cite[Algorithms~1 and 2]{PF2014} may become very cumbersome for large $\mathcal{X}$. Fig.~\ref{fig:ConvergeCompare} shows an example of the convergence performance of \cite[Algorithms~1 and 2]{PF2014}, where both algorithms require more than $100$ iterations. In this case, \cite[Algorithm~1]{PF2014} eventually merges $\mathcal{X}$ with $|\mathcal{X}| = 197$ into $|\hat{\mathcal{X}}^{(108)}| = 90$ clusters. Since $|\hat{\mathcal{X}}^{(k)}|$ is reduced by $1$ each time by a brute-force search over all $O(|\hat{\mathcal{X}}^{(k)}|^2)$ pairs of elements in $\hat{\mathcal{X}}^{(k)}$, the overall computation is around $107 \times 197^2$ operations (the exact count is $\sum_{i =90}^{197} \frac{i(i-1)}{2}$). On the other hand, Algorithm~\ref{algo:CombAggloCluster} searches for the optimal merge in the power set $2^{\hat{\mathcal{X}}^{(k)}}$ and allows $|\hat{\mathcal{X}}^{(k)}|$ to decrease by more than $1$ per iteration, so that it is able to converge in only a few iterations; see, e.g., Fig.~\ref{fig:ObjConverge_Health}. The SubM-SuperM algorithm \cite{SubSuper2005} implemented in this paper for solving the MDSF problem is a greedy iterative method, where each iteration calls the min-norm algorithm \cite{MinNorm}, a submodular function minimization (SFM) algorithm that is fast in practice although its asymptotic complexity is unknown. Alternatively, one can implement \cite[Algorithm~3]{SubMDiff2012}, which calls a modular function minimization algorithm with complexity $O(|\hat{\mathcal{X}}^{(k)}|)$ in each iteration. In addition, MDSF is still an active research topic in combinatorial optimization; future developments in this area may be applied to Algorithm~\ref{algo:CombAggloCluster} to improve its performance (e.g., faster convergence to a better local optimum). \section{Conclusion} We considered the problem of how to determine a deterministic solution $p(\hat{x} | x) \in \Set{0,1}$ for the PF problem $\min_{p(\hat{x} | x)} I(S ; \hat{X})$ s.t. $I(X ; \hat{X}) \geq \theta_{\text{U}}$. We proposed an IAC-MDSF algorithm that generates a deterministic transition $p(\hat{x} | x)$ and an alphabet $\hat{\mathcal{X}}$ by iteratively merging elements in $\mathcal{X}$. Our IAC-MDSF algorithm differs from the existing algorithms in \cite{PF2014} in that it searches for the optimal merge over all subsets, instead of all pairwise combinations, of the current alphabet, and this problem is proved to be an MDSF, a local optimum of which can be obtained in polynomial time. Experimental results showed that our IAC-MDSF algorithm generally outperforms the pairwise merge algorithms in \cite{PF2014} in far fewer iterations.
While the IAC-MDSF algorithm only searches for a deterministic solution to the PF problem, it is worth understanding in the future how to search for an optimal soft transition $p(\hat{x} | x) \in [0,1]$ over the probability simplex, e.g., by the deterministic annealing method \cite{DA1998}, and whether such a soft solution can improve the Pareto frontiers in Fig.~\ref{fig:PvsU_Health}. On the other hand, as explained in Section~\ref{sec:Complexity}, it would be of interest to see whether better MDSF algorithms can improve the performance and complexity of the IAC-MDSF algorithm. \appendices \section{Proof of Theorem~\ref{theo:main}} \label{app:theo:main} \begin{IEEEproof} Equality \eqref{eq:Eq} holds since \begin{equation} \begin{aligned} & I (S;\hat{X}^{(k)}) - I(S;\hat{X}^{(k)}_W) \\ & = \sum_{s \in \mathcal{S}} \sum_{\hat{x}^{(k)} \in W} p(s,\hat{x}^{(k)}) \Big( \log \frac{p(s,\hat{x}^{(k)})}{p(s) p(\hat{x}^{(k)})} - \log \frac{p(s,\hat{W})}{p(s) p(\hat{W})} \Big) \\ & = g(W) - f(W) \end{aligned} \nonumber \end{equation} so that $I(S;\hat{X}^{(k)}_W) = I (S;\hat{X}^{(k)}) - g(W) + f(W)$ and \begin{equation} \begin{aligned} & I (\hat{X}^{(k)};\hat{X}^{(k)}_W) \\ & \ = -\sum_{\hat{x}^{(k)} \notin W} p(\hat{x}^{(k)}) \log p(\hat{x}^{(k)}) - \sum_{\hat{x}^{(k)} \in W} p(\hat{x}^{(k)}) \log p(\hat{W}) \\ & \ = H(\hat{X}^{(k)}) + f(W). \\ \end{aligned} \nonumber \end{equation} For the function $l = u(t(W))$: if $t$ is a modular (both submodular and supermodular) set function such that $t(W) = \sum_{i \in W} t_i, \forall W \subseteq V$ for a vector $\mathbf{t} \in \mathbb{R}_+^{|V|}$ and $u \colon \mathbb{R} \mapsto \mathbb{R}$ is convex, then $l$ is supermodular \cite[Proposition~37]{Bach2010SFMtut}, and $-l$ is submodular. Rewrite $f(W) = \sum_{\hat{x}^{(k)} \in W} p(\hat{x}^{(k)}) \log p(\hat{x}^{(k)}) - p(\hat{W}) \log {p(\hat{W})}$. Here, $\sum_{\hat{x}^{(k)} \in W} p(\hat{x}^{(k)}) \log p(\hat{x}^{(k)})$ is modular. Since $p(\hat{W})$ is nonnegative and modular and $y \log y$ is convex in $y$, $p(\hat{W}) \log {p(\hat{W})}$ is supermodular, and hence $- p(\hat{W}) \log {p(\hat{W})}$ is submodular. Therefore, $f$ is submodular. Also, for all $W \subseteq Y$, \begin{multline} f(W) - f(Y) = \sum_{\hat{x}^{(k)} \in W} p(\hat{x}^{(k)}) \log \frac{p(\hat{Y})}{p(\hat{W})} \\ - \sum_{\hat{x}^{(k)} \in Y \setminus W} p(\hat{x}^{(k)}) \log \frac{p(\hat{x}^{(k)})}{p(\hat{Y})} \geq 0. \nonumber \end{multline} In the same way, we can prove that $g$ is submodular and nonincreasing. \end{IEEEproof} \begin{figure}[tbp] \centering \scalebox{0.6}{\input{figures/ConvergePair.tex}} \caption{The convergence of $I(S;\hat{X}^{(k)})$ and $I(X;\hat{X}^{(k)})$ for \cite[Algorithm~1]{PF2014} and \cite[Algorithm~2]{PF2014}, respectively, on the Hungarian heart disease data set in \cite{UCI2007} with $S = \Set{\text{`age', } \text{`sex'}}$ and $X = \Set{\text{`sex', } \text{`cholesterol'} }$.} \label{fig:ConvergeCompare} \end{figure} \bibliographystyle{IEEEtran}
\section{Discussion} \label{sec:discussion} We have introduced a novel learning framework for supporting human decision-making. Rather than viewing algorithms as experts, asked to explain their conclusions to people, we position algorithms as advisors whose goal is to help humans make better decisions while retaining human agency. The {M\raisebox{0.6pt}{$\circ$}M}\ framework learns representations of inputs that offer advice and promote good decisions. We demonstrate success in learning to support human decision models. We hope that by tapping into innate cognitive human strengths, learned representations can improve human-machine collaboration by prioritizing information, highlighting alternatives, and correcting biases. \section{Experimental Results} \label{sec:experiments} In this section, we report the results of three distinct experiments. Our intent is to demonstrate the breadth of the framework's potential, and the experiments we present vary in the decision task, the form of representational advice, their complexity and scale, and the degree of human involvement (one experiment is entirely through simulations; another uses thousands of mTurk queries). We defer some of the experimental details to the Appendix. \subsection{Decision-compatible scatterplots} In the first experiment, we focus on learning useful, low-dimensional representations of high-dimensional data, in the form of scatterplots. The choice of how to project high-dimensional data into a lower-dimensional space is consequential to decision-making \cite{kiselev2019challenges}, and yet standard dimensionality-reduction methods optimize statistical criteria (e.g., maximizing directional variation in PCA) rather than optimizing for success in user interpretation. The {M\raisebox{0.6pt}{$\circ$}M}\ framework learns projections that, once visualized, directly support good decisions. We consider a setting where the goal is to correctly classify objects in $p$-dimensional space, $p>2$. Each example $x$ is a $p$-dimensional point cloud consisting of $m=40$ points in ${\mathbb{R}}^p$ (so $x \in {\mathbb{R}}^{40p}$). Point clouds are constructed such that, when orthogonally projected onto a particular linear 2D subspace of ${\mathbb{R}}^p$, denoted $V$, they form the shape of either an `X' or an `O', thus determining their true label $y$. All directions orthogonal to $V$ contain similarly scaled random noise. Subjects are presented with a series of scatterplots, which visualize the point clouds for a given 2D projection, and are asked to determine for each point cloud its label (`X' or `O'). Whereas a projection onto $V$ produces a useful representation, most others do not, including those coming from PCA. Our goal is to show that {M\raisebox{0.6pt}{$\circ$}M}\ can use human feedback to learn a projection (${\phi}$) that produces visually meaningful scatterplots (${\rho}$), leading to good decisions. \subsubsection{Model.} Here, the representation ${\phi}$ plays the role of a dimensionality-reduction mapping. We use $d=3$ and set ${\phi}$ to be a 3x2 linear mapping. The parameters ${\theta}$ of the mapping ${\phi}$ are the entries of the 3x2 matrix, and ${\phi}$ is augmented with an orthogonality penalty on ${\phi}^{T}{\phi} - \mathbb{I}$ to encourage matrices that represent rotations. For the human proxy model, we want to be able to roughly model the visual perception of subjects.
For this, we use for $\hat{h}$ a small, single-layer 3x3 convolutional network that takes as input a soft (differentiable) 6x6 histogram over the 2D projections. \begin{figure}[t!] \includegraphics[width=0.47\columnwidth]{notprojected.png} \includegraphics[width=0.47\columnwidth]{projected.png} \includegraphics[trim=0 0 5cm 0, clip, width=1\columnwidth]{xorounds.png} \caption{Low dimensional representations of point clouds. \textbf{(A)} Points in their original 3D representation give little visual indication of class (X or O). \textbf{(B)} Shapes become easily distinguishable when projected onto an appropriate subspace (shown in bold). \textbf{(Bottom)} Learned 2D representations after each training round. The initial 2D projection (round 1), on which a machine classifier is fully accurate, is unintelligible to people. However, as training progresses, feedback improves the projection until the class becomes visually apparent (round 4), with very high human accuracy.} \label{fig:xo} \end{figure} \subsubsection{Results.} We recruited 12 computer science students to test the {M\raisebox{0.6pt}{$\circ$}M}\ framework. Participants watched an instructional video and then completed a training and a testing phase, each having five rounds (with intermittent model optimization) of 15 queries to label plots as either `X' or `O'. The results we provide refer to the testing phase. Round 1 includes representations based on a random initialization of model parameters and therefore serves as a baseline condition. The results show that participants achieve an average accuracy of 68\% in round 1, but improve to an average accuracy of 91\% in round 5, a significant improvement of 23 percentage points ($p<.01$, paired $t$-test), with 75\% of participants achieving 100\% accuracy by round 5. Subjects are never given machine-generated predictions or feedback, and progress is driven solely by the performance of subjects on the reframed problem instances. Figure \ref{fig:xo} demonstrates a typical example of a five-round sequential training progression. Initially, representations produced by {M\raisebox{0.6pt}{$\circ$}M}\ are difficult to classify when ${\theta}$ is initialized arbitrarily. (This is also true when ${\theta}$ is initialized with a fully accurate machine-only model.) As training progresses, feedback regarding subject perception gradually rotates the projection, revealing distinct class shapes. Training progress is made as long as subject responses carry some machine-discernible signal regarding the subject's propensity to label a plot as `X' or `O'. {M\raisebox{0.6pt}{$\circ$}M}\ utilizes these signals to update the representations and improve human performance. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{georges4} \caption{Different facial avatars, each representing an algorithmic assistant (not a loan applicant) and trained to provide useful advice through facial expressions. The leftmost avatar is set to a neutral expression ($z=0$). \label{fig:avatar}} \end{figure} \subsection{Decision-compatible algorithmic avatars} For this experiment we consider a real decision task with real data (approving loans), train with many human participants (mTurkers), and explore a novel form of representational advice (facial avatars).
Altogether we elicit around 6,000 human decisions for training and evaluation.\footnote{All experiments are conducted subject to ethical review by the university's IRB.} Specifically, we use the {\em Lending Club} dataset, focusing on the resolved loans, i.e., loans that were paid in full ($y=1$) or defaulted ($y=0$), and only using features that would have been available to lenders at loan inception.\footnote{ https://www.kaggle.com/wendykan/lending-club-loan-data} The decision task is to determine whether to approve a loan (${a}=1$) or not (${a}=0$), and the loss function we use is ${\ell}(y,a)=\one{y \neq a}$. \subsubsection{Goals, expectations, and limitations.} Whereas professional decision-makers are inclined to exercise their own judgment and deviate from machine advice \cite{stevenson2019algorithmic, de2020case}, mTurkers are non-experts and are likely to follow machine predictions \cite{lai2018human,yin2019understanding}.\footnote{We only know of Turk experiments in which good human performance from algorithmic advice can be attributed to humans accepting the advice of accurate machine predictions~\cite[e.g.]{lai2020harnessing}.} For this reason, the goal of the experiment is {\em not to demonstrate performance superiority over purely predictive advice}, nor to show that mTurkers can become expert loan officers. Rather, the goal is to show that abstract representations can convey predictive advice in a way that requires users to deliberate to make a decision, and to explore whether humans use learned representations differently than they use machine predictions in making decisions. In Appendix \ref{sec:opt} we further discuss the unique challenges encountered when training with mTurkers in the loop. \begin{figure}[t] \centering \includegraphics[trim=1cm 0.2cm 2.4cm 1cm,clip,width=1\columnwidth]{newresult.png} \caption{Human accuracy in the algorithmic advice condition (`avatar advice') consistently increases over rounds. Performance quickly surpasses the `no advice' (data only) condition, and steadily approaches the performance of users observing algorithmic predictions (`predictive advice'), which is itself lower than machine-only performance (`machine accuracy'). Human accuracy falls when faces are shuffled within predicted labels of $\hat{h}$, confirming that faces convey useful, multi-variate information. \label{fig:avatar_acc}} \end{figure} \subsubsection{Representations.} With the aim of exploring broader forms of representational advice, we make use of a \emph{facial avatar}, framed to users as an \emph{algorithmic assistant}---not the recipient of the loan---and communicating through its facial expressions information that is relevant to a loan decision. The avatar is based on a single, realistic-looking face capable of conveying versatile expressions (Figure~\ref{fig:avatar} includes some examples). Expressions vary along ten dimensions, including {\em basic emotions} \cite{du2014compound}, {\em social dimensions} (e.g., dominance and trustworthiness \cite{du2014compound,todorov2008understanding}), and subtle changes in {\em appearance} (e.g., eye gaze). Expressions are encoded by the representation vector $z$, with each entry corresponding to a different facial dimension. Thus, vectors $z$ can be thought of as points in a $k$-dimensional `face-space' in which expressions vary smoothly with $z$.
We are interested in facial avatars because they are abstract (i.e., not in the domain of the input objects) and because they have previously been validated as useful representations of information~\cite{chernoff1973use, lott1990use}. They are also high-dimensional representations, and non-linear in the input features; that is, faces are known to be processed holistically, with dependencies beyond the sum of their parts \cite{richler2009holistic}. Faces also leverage innate human cognition---immediate, effortless, and fairly consistent processing of facial signals \cite{izard1994innate,todorov2008understanding,freeman2016more}. Through {M\raisebox{0.6pt}{$\circ$}M}, we \emph{learn} a mapping from inputs to avatars that is useful for decision-making. Training is driven completely by human responses, and learned expressions reflect usage patterns that users found to be useful, as opposed to the hand-coded mappings used in {\em Chernoff faces}~\cite{chernoff1973use}. In a successful mapping, the avatars can summarize combinations of variables, but with the range and variation of expression calibrated such that \emph{decision-relevant variation} in the data is salient. In this way, learned avatars can relate data points to the full training set (e.g., encoding how extreme the values are). \subsubsection{Model and training.} We set ${\phi}$ to be a small, fully connected network with a single 25-unit hidden layer, mapping inputs to representation vectors $z \in {\mathbb{R}}^{9}$. The visualization component ${\rho}(z)$ creates avatars by morphing a set of base images, each corresponding to a facial dimension, with $z$ used to weight the importance of each base image.\footnote{ Morphed images were created using the {\em Webmorph} software package \cite{debruine2016webmorph}.}\footnote{All base images correspond to the same human actor, whose corresponding avatar was used throughout the experiment.} We use a regularization term that, at the cost of some reduction in accuracy, encourages points in face-space to preserve distances in instance-space. As we will show, this promotes representations that carry more information about inputs than that implied by simple predictions. For ${\hat{h}}$ we use a small, fully connected network with two layers of size 20 each, operating directly on representation vectors $z$; a sketch of the full ${\hat{h}} \circ {\phi}$ pipeline is given below. In collecting human decisions for training ${\hat{h}}$, mTurkers were queried for their decisions regarding the approval or denial of loan applications.\footnote{Recognizing that we use the same representation mapping for all users, we restrict to US-based participants to promote greater cross-user consistency through a common cultural understanding of face-space.} New users were recruited at each round to obtain reports that are as independent as possible and to control for any human learning. Each user was queried for a random subset of 40 training examples, with the number of users chosen to ensure that each example would receive multiple responses (w.h.p.). For predictive purposes, binary outputs were set to be the majority human response. Each loan application was presented using the most informative features as well as the avatar.
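For concreteness, here is a minimal PyTorch sketch of the ${\hat{h}} \circ {\phi}$ architecture just described. The layer sizes follow the text; the activation functions, the sigmoid output head, and the number of input features are our assumptions, and the snippet is an illustration rather than the authors' implementation:

```python
import torch.nn as nn

N_FEATURES = 6  # number of loan features shown to users (illustrative)

# Representation phi: loan features -> face-space vector z in R^9.
phi = nn.Sequential(
    nn.Linear(N_FEATURES, 25), nn.ReLU(),
    nn.Linear(25, 9),
)

# Human proxy h_hat: z -> estimated probability that a user approves the loan.
h_hat = nn.Sequential(
    nn.Linear(9, 20), nn.ReLU(),
    nn.Linear(20, 20), nn.ReLU(),
    nn.Linear(20, 1), nn.Sigmoid(),
)

# Per round (our paraphrase of the training loop): fit h_hat on pairs of
# (z, majority human response), then update phi through h_hat so that the
# induced avatars promote correct approval decisions.
```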
We did not prescribe to users any specific way in which they should incorporate the avatar advice, and {\em care was taken to ensure users understood that the avatar represents an algorithmic assistant and not a loan applicant}.\footnote{Respondents who did not understand this point in a comprehension quiz were not permitted to complete the task.} Appendix \ref{sec:exp3} provides further details regarding the experimental setup. \subsubsection{Results.} Our results show that {M\raisebox{0.6pt}{$\circ$}M}\ can learn representations that support good decisions through a complex, abstract representation, and that this representation carries multi-variate information, making it qualitatively different from prediction. As benchmarks, we consider the accuracy of a trained neural network model ${\mathcal{N}}(x)$ having architecture equal to ${\hat{h}} \circ {\phi}$ (but otherwise unrelated to our human-in-the-loop experiments), as well as human performance under predictive advice ${\gamma}(x) = {\ytilde} \in [0,1]$, where ${\ytilde}$ is the probability predicted by ${\mathcal{N}}(x)$. We also consider a condition with `shuffled' avatar advice, which we describe below. Figure~\ref{fig:avatar_acc} shows the training process and resulting test accuracy (the data is balanced, with chance accuracy $\approx 0.5$).\footnote{Results are statistically significant under a one-way ANOVA test, $\text{F}(3,196)=2.98, p<0.03$.} At first, the (randomly initialized) representation ${\phi}$ produces arbitrary avatars, and performance in the avatar condition is lower than in the no-advice condition. This indicates that users take into account the (initially uninformative) algorithmic advice. As learning progresses, user feedback accumulates and the accuracy from using the {M\raisebox{0.6pt}{$\circ$}M}\ framework steadily rises. After six rounds, avatar advice contributes a boost of 11.5 percentage points in accuracy (0.69, vs. 0.575 in the no-advice condition), reaching 99\% of the accuracy in the predictive advice condition (0.70). Performance in the predictive advice condition does not reach machine accuracy (0.73), showing that not all subjects follow predictive advice. \subsubsection{Analysis.} We additionally explore what the representations learn, and how humans incorporate them into predictions. One possible concern is that, despite regularization, learned avatars may simply convey binary predictions through fancy graphics (e.g., happy or sad faces). To explore this, we added a `shuffled' condition in which faces are shuffled within predicted labels of ${\hat{h}}$. As shown in Figure~\ref{fig:avatar_acc}, shuffling degrades performance and confirms that faces convey more complex information than the system's binary prediction. Moreover, the avatars do not provide a simple encoding of a univariate (but not binary) prediction, and humans do not use the information in the same way that they use numeric predictions: (i) no single feature of $z$ has a correlation with human responses ${\hat{h}}(z)$ of more than $R^2=0.7$; (ii) correlations of the average human response with the features of $z$ are low (at most $R^2=0.36$ across features), while responses in the predictive condition have $R^2=0.73$ with the predictions; and (iii) users in the avatar condition self-report using the data as much as or more than the advice 83\% of the time, compared to 47\% for the predictive advice condition. At the same time, $z$ preserves important information regarding $x$.
To show this, we train linear models to predict from $z$ each of the data features: interest rate (\feat{rate}), loan term (\feat{term}), debt-to-income ratio (\feat{dti}), negative public records (\feat{rec}), annual income (\feat{inc}), and employment length (\feat{emp}). Results show that $z$ is highly informative of \feat{rate} ($R^2=0.79$) and \feat{term} ($0.57$), mildly informative of \feat{rec} ($-0.21$), \feat{inc} ($0.23$), and \feat{emp} ($0.13$), and has virtually no predictive power for \feat{dti} ($-0.03$). Further inspecting the model coefficients reveals a complex pattern of how $z$ carries information regarding $x$ (see Appendix \ref{sec:avatarreps} for all coefficients). For example: trustworthiness plays an important part in predicting all features, whereas anger is virtually unused; happiness and sadness do not play opposite roles---happiness is significant for \feat{term}, while sadness is significant for \feat{rate}; and whereas \feat{emp} is linked almost exclusively to age variation, \feat{inc} is expressed by over half of the facial dimensions. \subsection{Incorporating side information} As an additional demonstration of the unique capabilities of {M\raisebox{0.6pt}{$\circ$}M}, we show that the framework can also learn representations that allow a decision maker to leverage side information that is unavailable to the machine. Referencing our earlier discussion of mTurk, we adopt simulation for this experiment because it is challenging for non-experts (like mTurkers) to outperform purely predictive advice, even with access to additional side information. Simulation also allows us to systematically vary the synthetic human model, and we consider four distinct models of decision making. The task we consider is a medical decision-making setting where doctors must evaluate the health risk of incoming ER patients and have access to a predictive model.\footnote{MDCalc.com is one example of a risk assessment calculator for use by medical professionals.} Here, we focus on compact, linear models, and view the coefficients of the model along with the input features as the representation affecting the decision process of doctors. Doctors also have access to additional side information. Our goal is to learn a compact, linear model that can account for how doctors choose to use this side information. \subsubsection{Setup.} There are four primary binary features $x \in \{0,1\}^4$: diabetes ($x_d$), cardiovascular disease ($x_c$), race ($x_r$), and income level ($x_i$). An additional integer `side-information' variable ${s} \in \{0,1,2,3\}$ encodes how long the patient's condition was allowed to progress before coming to the ER and is available only to the doctor. We assume the ground-truth risk $y$ is determined only by diabetes, cardiovascular disease, and time to ER, through $y = x_d + x_c + {s}$, where $x_d, x_c, {s}$ are sampled independently. We also assume that $x_r$ and $x_i$ jointly correlate with $y$, albeit not perfectly, so that they carry some but not all of the signal in ${s}$ (whereas $x_d$ and $x_c$ do not; see Appendix \ref{sec:side_info_data_gen} for full details).
In this way, race and income can be useful in prediction, as they offer predictive power beyond that implied by their correlations with known health conditions (e.g., diabetes, cardiovascular disease), but they interfere with how side information is used. We model a decision maker who generally follows the predictive advice ${\hat{y}}=f_w(x)= \inner{w,x}$, but with the capacity to adjust the machine-generated risk scores at her discretion and in a way that depends on the model through its coefficients $w$. We assume that doctors are broadly aware of the correlation structure of the problem, and are prone to incorporate the available side information ${s}$ into ${\hat{y}}$ if they believe this will give a better risk estimate. We model the decisions of a population of doctors as incorporating ${s}$ additively, with a probability that decreases with the magnitude of the coefficients $w_r$ and $w_i$. We refer to this as the \emph{or} model and set $h_{\text{or}}(x,{s},w)={\hat{y}}+I(w)\cdotp {s}$ with $I(w) \propto 1/(\max\{w_r,w_i\})$, so that more weight on $w_r$ and $w_i$ reduces the probability of incorporating $s$. We also consider simpler decision models: \emph{always} using side information ($h_{\text{always}}$), \emph{never} using side information ($h_{\text{never}}$), and a \emph{coarse} variant of $h_{\text{or}}$ using binarized side information, $h_{\text{coarse}} = {\hat{y}} + I(w)\cdotp 2\cdotp\1{{s}\geq2}$. \subsubsection{Model.} The representation ${\rho}(z)$ consists of $x$, the coefficients $w$ (these are learned within ${\phi}$), and ${\hat{y}} = \inner{w,x}$.\footnote{In an application, representations should also convey to users that the system is aware they may have additional side information.} The difficulty in optimizing ${\phi}$ is that ${s}$ is never observed, and our proposed solution is to use $y$ (which is known at train time) as a proxy for ${s}$ when fitting ${\hat{h}}$, which is then used to train ${\phi}$ (see Section~\ref{sec:method}). Since $x$ and $y$ jointly carry information regarding ${s}$, we define ${\hat{h}}(x,y;w) = \inner{w,x}+{\hat{{s}}}(x,y)$, where ${\hat{{s}}}(x,y)=v_0 y + \sum_{j=1}^{4}v_j x_j$, and $v$ are parameters. Note that it is enough that ${\hat{{s}}}$ models how the user {\em utilizes} side information, rather than the value of $s$ directly; $s$ is never observed, and there is no guarantee about the relation between ${\hat{{s}}}$ and ${s}$. \begin{table}[t] \centering \caption{Performance of {M\raisebox{0.6pt}{$\circ$}M}\ in the presence of side information with four different synthetic human models. The machine-only performance is 0.890. \label{tab:sideinfo}} \begin{tabular}{lcc} & \textbf{{M\raisebox{0.6pt}{$\circ$}M}} & $h(\text{\textbf{Machine}})$ \\ \hline Or & 1.0 & .894 \\ Coarse Or & .951 & .891 \\ Never & .891 & .891 \\ Always & 1.0 & .674 \end{tabular} \end{table} \subsubsection{Results.} We compare {M\raisebox{0.6pt}{$\circ$}M}\ to two baselines: a machine-only linear regression, and the human model $h$ applied to this machine-only model. We evaluate performance under the four synthetic human models ($h_{\text{or}}, h_{\text{coarse}}, h_{\text{never}}$, and $h_{\text{always}}$). Both {M\raisebox{0.6pt}{$\circ$}M}\ and the baselines use a linear model, but the model in {M\raisebox{0.6pt}{$\circ$}M}\ is trained to take into account how users incorporate side information. For evaluation, we consider binarized labels $y_{bin}=\1{y>3}$.
We report results averaged over ten random data samples of size 1,000 with an 80-20 train-test split. As Table \ref{tab:sideinfo} shows, due to its flexibility in finding a representation that allows the user to incorporate side information, {M\raisebox{0.6pt}{$\circ$}M}\ reaches 100\% accuracy for the {\em or} and {\em always} decision models. {M\raisebox{0.6pt}{$\circ$}M}\ maintains its advantage under the {\em coarse-or} decision model (i.e., when doctors use imperfect information), and remains effective in settings where side information is never used. The problem with the baseline model is that it includes non-zero coefficients for all four features. This promotes accuracy in a machine-only setting and in the absence of side information. Given this, the \emph{or} and \emph{coarse-or} decision models only very rarely introduce the side information---and this is indeed the best they can do given that the machine model uses all four variables. For the {\em always} decision model, on the other hand, the user always introduces side information, causing over-counting of the effect of time to ER on patient outcomes (because of correlations between ${s}$ and $x_r$ and $x_i$). In contrast, {M\raisebox{0.6pt}{$\circ$}M}\ learns a linear model that is optimally responsive to the human decision maker; for example, it includes non-zero coefficients for only $x_d$ and $x_c$ in the case of the {\em or} decision model.
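To make this mechanism concrete, the following is a minimal NumPy sketch (ours, not the experiment code; the classification threshold and the hand-constructed {M\raisebox{0.6pt}{$\circ$}M}-style coefficients are our simplifications) of the {\em or} dynamic, using the data-generating process of Appendix \ref{sec:side_info_data_gen}: \begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Data generation following the appendix on data generation.
l0 = rng.normal(0.3, 0.1, n)
l1 = rng.uniform(np.clip(l0 - .3, 0, 1), np.clip(l0 + .3, 0, 1))
l2 = rng.uniform(np.clip(l0 - .3, 0, 1), np.clip(l0 + .3, 0, 1))
x_i = rng.binomial(1, 1 - l1)
x_r = rng.binomial(1, 1 - l2)
l3 = rng.uniform(.5, .7, n)
x_c = rng.binomial(1, np.clip(l3 + x_i, 0, 1))
x_d = rng.binomial(1, np.clip(l3 + x_r, 0, 1))
s = np.clip(np.round(rng.normal(x_r + x_i, .5)), 0, 3)
y = x_d + x_c + s
X = np.stack([x_d, x_c, x_r, x_i], axis=1).astype(float)

def h_or(X, w, s):
    # Doctor adds s with a propensity that shrinks as w_r, w_i grow.
    switch = 1 / (1 + np.exp(-(1 / max(max(w[2], w[3]), 1e-4) - 2)))
    return X @ w + switch * s

w_machine = np.linalg.lstsq(X, y, rcond=None)[0]   # weight on all four
w_mom = np.zeros(4)                                # medical features only
w_mom[:2] = np.linalg.lstsq(X[:, :2], y - s, rcond=None)[0]
for name, w in [("machine", w_machine), ("mom-style", w_mom)]:
    acc = np.mean((h_or(X, w, s) > 3) == (y > 3))  # binarized evaluation
    print(name, round(float(acc), 3))
\end{verbatim} The machine-only regression spreads weight over all four features, suppressing the doctor's use of ${s}$; zeroing the coefficients on $x_r$ and $x_i$ (as in the representation {M\raisebox{0.6pt}{$\circ$}M}\ learns) drives the switch toward 1, so the doctor injects ${s}$ and recovers $y$ exactly.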
\section{Introduction} \label{sec:intro} \epigraph{\textit{``No one ever made a decision because of a number. \\ \hspace{0.1cm} They need a story.''}}{--- Daniel Kahneman} Advancements in machine learning algorithms, as well as increased data availability and computational power, have led to the rise of predictive machines that outperform human experts in controlled experiments \cite{esteva2017dermatologist, nickerson2014political}. And yet, there is broad recognition that human involvement remains important \cite{liu2019comparison}, especially in domains in which safety and equity are important considerations \cite{parikh2019regulation, barabas2017interventions}, and when users have external information or want to exercise agency and use their own judgment. At the same time, as humans we are limited in our capacity to make good decisions. We are bound by our cognitive capabilities and prone to psychological biases (related to, e.g., framing and heuristics), which make it difficult to identify and rightly act upon complex patterns emerging in data \cite{kahneman2011thinking,miller1956magical}. Given that pattern recognition is the hallmark of machine learning, there is a clear opportunity for data-driven, human-computer collaboration. Work in facilitating human-machine collaboration has focused on {\em interpretable machine learning}, which augments machine predictions with explanations about a prediction \cite{ribeiro2016should,lakkaraju19faithful,lundberg2017unified}. Important as it is, we see two main drawbacks to this approach. First, setting the role of machines to predict and then explain serves to reduce humans to auditors of the `expert' machines \cite{lai2018human}. With loss of agency, people are reluctant to adopt predictions and even inclined to go against them \cite{bandura1989human,bandura2010self, yeomans2017making,dietvorst2016overcoming, yin2019understanding, green2019principles}. This leads to a degradation in performance of the human-machine pipeline over time \cite{elmalech2015suboptimal, dietvorst2015algorithm, logg2017theory, stevenson2018algorithmic}. Second, these methods cannot adapt to the way in which predictions are used, are unable to adjust for systematic human errors or inconsistencies, and fail to make use of human capabilities. Here we introduce the `Man Composed with Machine' framework ({M\raisebox{0.6pt}{$\circ$}M}), aiming to bridge the paradigm of machine learning with that of human-centric design~\cite{sutton2020overview,venkatesh2003user}, and advocate for a broader perspective on using machine learning in support of decision-making. We make two key contributions. First, rather than machines that predict or decide, we train models that learn how to \emph{reframe problems} for a human decision-maker. We learn to map problem instances to representational objects such as plots, summaries, or avatars, aiming to capture problem structure and maintain user autonomy.
This approach of ``advising through reframing'' draws on a large body of work in the social sciences showing that the quality of human decisions depends on how problems are presented \cite{thompson1980margaret,cosmides1992cognitive,gigerenzer1995improve,kahneman2013prospect, brown2013framing}. Second, rather than optimizing for machine performance, we \emph{directly optimize for human performance}. We learn representations of inputs for which human decision-makers perform well rather than those under which machines achieve high accuracy. We use a human-in-the-loop procedure for training end-to-end both the machine model and a proxy model of human decision making, where human decisions given the current representation are used to update the representation and improve decisions. We demonstrate the framework on three distinct tasks, each highlighting a different aspect of our approach and exploring different forms of representations. We first use the controlled environment of {\em point clouds} to show how the framework can learn scatter-plot representations that allow for high human accuracy without explicitly presenting a recommended action. The second experiment is in the setting of loan approvals, and adopts {\em facial avatars} as representational advice---with the goal of exploring what representations are learned and how they are used to support decision-making. Our third experiment uses simulation and is designed to demonstrate the novel capacity of our framework to learn how to best enable the incorporation of \emph{side information}---even while this is known only to the user---and achieve better performance than either human or machine alone. Collectively, these experiments showcase this new approach to machine learning as a tool for human-intelligence augmentation~\cite{licklider1960man,engelbart1962augmenting}. {\bf On the use of facial avatars:} In our study on loan approval we convey advice through a facial avatar representing an algorithmic assistant. We experiment with facial avatars as representations because they are high-dimensional, abstract (relative to the domain studied), and accessible to people. Any use of facial representations in consequential decision settings must be done with care, and we are aware of the legitimate concerns regarding the use of faces in AI systems, especially in regard to discrimination~\cite{west2019}. We take care to minimize these concerns, ensuring that users understand that the avatar represents an algorithmic assistant and not a loan applicant, and restricting to carefully chosen variations on the image of a single actor. \section{Method} \label{sec:method} \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\textwidth]{illustration} \caption{ \textbf{Left}: The {M\raisebox{0.6pt}{$\circ$}M}\ framework. The neural network learns a mapping ${\phi}$ from inputs $x$ to representations $z$, such that when $z$ is visualized through ${\rho}$, representations elicit good human decisions. \textbf{Right}: Training alternates between (A) querying users for decisions on the current representations, (B) using these to train a proxy network ${\hat{h}}$, and (C) re-training representations. \label{fig:illustration} } \end{center} \end{figure*} In a typical setting, a decision-making user is given an {\em instance} $x \in {\cal{X}}$. For clarity, we will consider ${\cal{X}}={\mathbb{R}}^d$. Given $x$, the user must decide on an {\em action} ${a} \in {{\cal{A}}}$.
For example, if $x$ are details of a loan application, then users can choose ${a} \in \{\texttt{approve},\texttt{deny} \}$. Each instance is also associated with a ground-truth \emph{outcome} $y \in {\cal{Y}}$, so that $(x,y)$ is sampled from an unknown distribution $D$. We assume that users seek to choose actions that minimize an incurred {\em loss} ${\ell}(y,{a})$, with ${\ell}$ also known to the system designer; e.g., for loans, $y$ denotes whether or not a person will repay the loan. We consider the general class of \emph{prediction policy problems}~\cite{kleinberg2015prediction}, where the loss function is known for a given action ${a}$ and outcome $y$, and the difficulty in decision-making is governed by how well $y$ can be predicted. We denote by $h$ the {\em human mapping} from inputs to decisions or actions. For example, ${a}=h(x)$ denotes a decision based on raw instances $x$, but other sources of input such as {\em explanations} $e$ or representations can be considered; e.g., ${a}=h(x,e)$ denotes a decision based on $x$ and explanation $e$. The function $h$ may output either a deterministic action or a probability distribution over actions. We conceptualize $h$ as either representing a single human, or a stable distribution over a crowd of humans. We assume the mapping $h$ is fixed (if there is adaptation to a representation, $h$ can be thought of as the end-point of this adaptation). Crucially, we allow machines to present users with machine-generated \emph{advice} ${\gamma}(x)$, with human actions denoted as ${a}=h({\gamma}(x))$. Users may additionally have access to {\em side information} $s$ that is unavailable to the machine, in which case user actions are ${a}=h({\gamma}(x), s)$.\footnote{This notion of machine-generated advice generalizes both explanations (as ${\gamma} = (x,{\hat{y}},e)$, where $e$ is the explanation) and deferrals (as ${\gamma} = (x,{\bar{y}})$, where ${\bar{y}} \in \{0,1,\text{defer}\}$, with a human model that always accepts $\{0,1\}$)~\cite{madras2018predict}.} Advice ${\gamma}(x)$ allows for a {\em human-centric representation} of the input, and we seek to \emph{learn} a mapping ${\gamma}$ from inputs to representations under which humans will make good decisions. The benchmark for evaluation will be the expected loss entailed by human actions given this advice: \begin{equation} \label{eq:overall_objective} \expect{D}{{\ell}(y,{a})}, \qquad \mbox{for}\quad {a} = h({\gamma}(x)). \end{equation} \subsubsection{Predictive advice.} A standard approach provides human users with machine-generated predictions, ${\hat{y}} = f(x)$, where $f$ is optimized for predictive accuracy and the predictions themselves correspond to action recommendations (e.g., `will return loan' corresponds to `approve loan'). This is a special case of our framework where advice ${\gamma}=(x,{\hat{y}})$, and the user is modeled as $a={\hat{y}}=h(x,{\hat{y}})$. The predictive model is trained to minimize: \begin{equation} \label{eq:machine_objective} \min\nolimits_f \expect{D}{{\ell}(y,{\hat{y}})}, \qquad \mbox{for}\quad {\hat{y}} = f(x). \end{equation} This makes plain that predictions $f(x)$ are useful only to the extent that the human decision-maker follows them. Moreover, predictions provide only a scalar summary of the information in $x$, and limit the degree to which users can exercise their cognitive and decision-making capabilities; e.g., in the context of side information. 
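For concreteness, a minimal sketch (ours, not the paper's code) of objective \eqref{eq:machine_objective} for a binary outcome with logistic loss: \begin{verbatim}
import numpy as np

def train_predictive_advice(X, y, lr=0.1, epochs=500):
    # Fit f(x) = sigmoid(<w, x>) by gradient descent on the logistic
    # loss, i.e., objective (2). The human mapping h appears nowhere:
    # f is optimized for machine accuracy alone, and its predictions
    # help only insofar as users actually follow them.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        y_hat = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (y_hat - y)) / len(y)
    return w
\end{verbatim} The contrast with the representational objective below is that there, gradients must additionally pass through a model of the human response $h$.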
\subsubsection{Representational advice.} In {M\raisebox{0.6pt}{$\circ$}M}, we allow advice ${\gamma}$ to map inputs into representations that are designed to be suitably high-dimensional while also utilizing human strengths (e.g., a scatterplot, a compact linear model, or an avatar). Given a {\em representation class}, ${\Gamma}$, we seek a mapping ${\gamma}\in{\Gamma}$ that minimizes expected loss $\min_{{\gamma} \in {\Gamma}} \expect{D}{{\ell}(y,h({\gamma}(x)))}$. With a {\em training set} ${\mathcal{S}}=\{(x_i,y_i)\}_{i=1}^m$ sampled from $D$, and with knowledge of the human mapping $h$, we would find a mapping ${\gamma}$ minimizing the \emph{empirical loss}: \begin{equation} \label{eq:objective} \min_{{\gamma} \in {\Gamma}} \sum_{i=1}^m {\ell}(y_i, {a}_i), \qquad \mbox{for}\quad {a}_i = h({\gamma}(x_i)), \end{equation} possibly under some form of regularization. Here, ${\Gamma}$ needs to be rich enough to contain flexible mappings from inputs to representations, as well as to generate objects that are accessible to humans. To achieve this, we decompose this algorithmic advice ${\gamma}(x) = {\rho}({\phi}_{\theta}(x))$ into two components: $\bullet$ ${\phi}_{\theta}:{\mathbb{R}}^d \rightarrow {\mathbb{R}}^k$ is a parametrized \emph{embedding model} with learnable parameters ${\theta} \in {\Theta}$, that maps inputs into vector representations $z = {\phi}_{\theta}(x) \in {\mathbb{R}}^k$ for some $k>1$. $\bullet$ ${\rho} : {\mathbb{R}}^k \rightarrow {\mathcal{V}}$ is a \emph{visualization component} that maps each $z$ into a visual object ${v} = {\rho}(z) \in {\mathcal{V}}$ (e.g., a scatterplot, a facial avatar). \smallskip We consider different, fixed visualization components ${\rho}$, and focus on learning the embedding model ${\phi}_{\theta}$; learning this embedding is what constitutes learning representations in {M\raisebox{0.6pt}{$\circ$}M}. Henceforth, it is convenient to fold the visualization component ${\rho}$ into the human mapping $h$, and write $h(z)$ to mean $h({\rho}(z))$, for embedding $z={\phi}_{\theta}(x)$. The training problem~\eqref{eq:objective} becomes: \begin{equation} \label{eq:objective_v2} \min_{{\theta} \in {\Theta}} \sum_{i=1}^m {\ell}(y_i, {a}_i), \quad \mbox{for}\ {a}_i = h({\phi}_{\theta}(x_i)), \end{equation} again, perhaps with some regularization. By solving \eqref{eq:objective_v2}, we learn representations that promote good decisions by the human user. See Figure~\ref{fig:illustration} (left). \subsubsection{Training procedure, and human proxy.} We adopt a neural network to model the parametrized embedding ${\phi}_{\theta}(x)$, and thus advice ${\gamma}$. The main difficulty in optimizing \eqref{eq:objective_v2} is that human actions $\{{a}_i \}_{i=1}^m$ depend on ${\phi}_{\theta}(x)$ via an unknown human mapping $h$. Hence, gradients of ${\theta}$ must pass through $h$, but this function represents an actual human decision process. To handle this, we make use of a {\em differentiable proxy of the human mapping}, ${\hat{h}}_{\eta} : {\mathbb{R}}^k \rightarrow {{\cal{A}}}$, parameterized by ${\eta} \in {H}$, which we learn. We refer to this as ``h-hat.'' We learn using a {\em human-in-the-loop training procedure} that alternates between two steps: \begin{enumerate} \item Using the current ${\theta}$ to gather samples of real human decisions ${a} = h(z)$ on inputs $z={\phi}_{\theta}(x)$ and fitting ${\hat{h}}_{\eta}$.
\item Finding ${\theta}$ to optimize the performance of ${\hat{h}}_{\eta} \circ {\phi}_{\theta}$ for the current ${\eta}$, as in \eqref{eq:objective_v2}. \end{enumerate} \smallskip Figure~\ref{fig:illustration} (right) illustrates this process (for pseudocode see Appendix~\ref{sec:algo}). Since ${\hat{h}}$ is trained to be locally rather than globally accurate, ${\hat{h}}$ need not exactly match $h$ for learning to improve. Rather, it suffices for ${\hat{h}}$ to induce gradients of the loss that improve performance (see Figure \ref{fig:hhat} in the Appendix). H-hat must be periodically retrained because as parameters ${\theta}$ change, so does the induced distribution of representations $z$, and ${\hat{h}}_{\eta}$ may become less accurate. \subsubsection{Extensions.} One way in which humans can potentially surpass machines is when they have access to side information ${s}$ that is informative of $y$ yet unknown to the machine. The {M\raisebox{0.6pt}{$\circ$}M}\ framework can be extended to learn a representation ${\gamma}(x)$ that is optimal \emph{conditional on the human using ${s}$}, and despite lacking access to ${s}$. At test time, the human has access to ${s}$, and so takes action ${a} = h({\phi}(x),s)$. At train time, $y$ can be used as a proxy for $s$: if ${s}$ is informative of $y$, then $(x,y)$ are jointly informative of ${s}$. Although ${s}$ cannot generally be reconstructed without supervision, in some cases $(x,y)$ can be used to make inferences on ${s}$. As a simple example, consider the case where $y=g(x,{s})$ for some $g$ and fix $x$. If $g$ is invertible w.r.t. $y$ for all $x$, then ${s}$ can be fully reconstructed using $g^{-1}(x,y)$. In general, $g$ may not be invertible, but a `lossy' approximate inverse mapping $\hat{g}^{-1}(x,y)$ can still be modeled into ${\hat{h}}$, which now takes as input $(z,x,y)$. \section{Optimization Algorithm} \label{sec:algo} \input{pseudocode} \section{General Optimization Issues} \label{sec:opt} \subsection{Initialization} Because acquiring human labels is expensive, it is important to initialize ${\phi}$ to map to a region of the representation space in which there is variation and consistency in human reports, such that gradients lead to progress in subsequent rounds. In some representation spaces, such as our 2D projections of noisy 3D rotated images, this is likely to be the case (almost any 3D slice will retain some signal from the original 2D image). However, in 4+ dimensions, as well as with the subset selection and avatar tasks, there are no such guarantees. To minimize non-informative queries, we adopt two initialization strategies: \begin{enumerate} \item \textbf{Initialization with a computer-only model:} In scenarios in which the representation space is a (possibly discrete) subset of input space, such as in subset selection, the initialization problem is to isolate the region of the input space that is important for decision-making. In this situation, it can be useful to initialize with a computer-only classifier. This classifier should share a representation-learning architecture with ${\phi}$ but can have any other classifying architecture appended (although simpler is likely better for this purpose). This should result in some ${\phi}$ which at least focuses on the features relevant for classification, if not necessarily in a human-interpretable format.
\end{enumerate} \subsection{Convergence} As is true in general of gradient descent algorithms, the {M\raisebox{0.6pt}{$\circ$}M}\ framework is not guaranteed to find a global optimum but rather is likely to end up at a local optimum dependent on the initialization of both ${\phi}$ and $\hat{h}$. In our case, however, the path of gradient descent is also dependent on the inherently stochastic selection and behavior of human users. If users are inconsistent or user groups at different iterations are not drawn from the same behavior distribution, it is possible that learning at one step of the algorithm could result in convergence to a suboptimal distribution for future users. It remains for future work to test how robust machine learning methods might be adapted to this situation to mitigate this issue. \subsection{Regularization/Early Stopping} As mentioned in Section \ref{sec:method}, training ${\phi}$ will in general shift the distribution of the representation space away from the region on which we have collected labels for $\hat{h}$ in the previous iterations, resulting in increasing uncertainty in the predicted outcomes. We test a variety of methods to account for this, but developing a consistent scheme for choosing how best to maximize the information in human labels remains future work. \begin{itemize} \item \textbf{Regularization of $\hat{h}$:} We test regularization of $\hat{h}$ with both Dropout and L2 regularization, both of which help in preventing overfitting, especially in early stages of training, when the representation distribution is not yet refined. As training progresses and the distribution ${\phi}_{\theta}(x)$ becomes more tightly defined, decreasing these regularization parameters increases performance. \item \textbf{Training $\hat{h}$ with samples from previous iterations}: We also found it helpful in early training iterations to reuse samples from the previous human labeling round in training $\hat{h}$, as inspired by [Bobu et al. 2018].\footnote{Bobu, Andreea, et al. ``Adapting to continuously shifting domains.'' (2018).} We weight these samples equally and use only the previous round, but it may be reasonable in other applications to alter the weighting scheme and number of rounds used. \item \textbf{Early stopping based on Bayesian Linear Regression:} In an attempt to quantify how the prediction uncertainty changes as ${\theta}$ changes, we also implement Bayesian Linear Regression, found in [Riquelme et al., 2018]\footnote{Riquelme, Carlos, George Tucker, and Jasper Snoek. ``Deep Bayesian bandits showdown.'' \emph{International Conference on Learning Representations}. 2018.} to be a simple but effective measure of uncertainty, over the last layer of $\hat{h}({\phi}_{\theta})$ as we vary ${\theta}$ through training. We find that in early iterations of training, this can be an effective stopping criterion for training of ${\phi}$. Again, as training progresses, we find that this mostly indicates only small changes in model uncertainty. \end{itemize} \subsection{Human Input} Testing on mTurk presents various challenges for testing the {M\raisebox{0.6pt}{$\circ$}M}\ framework: \begin{itemize} \item In some applications, such as loan approval, mTurk users are not experts. This makes it difficult to convince them that anything is at stake (we found that bonuses did not meaningfully affect performance). It is also difficult to directly measure effort, agency, trust, or autonomy, all of which result in higher variance in responses.
\item In many other applications, the ground truth is generated by humans to begin with (for example, sentiment analysis). Since we require ground truth for training, in these tasks humans cannot be expected to outperform machines. \item As the researchers found in \cite{lage2018human}, there can be a large variance in the time users take to complete a given task. Researchers have found that around 25\% of mTurk users complete several tasks at once or take breaks during HITs [Moss and Litman, 2019],\footnote{A. J. Moss and L. Litman. How do most mturk workers work?, Mar 2019.} making it difficult to determine how closely Turkers are paying attention to a given task. We use requirements of HIT approval rate greater than 98\%, US only, and at least 5,000 HITs approved, as well as a simple comprehension check. \item Turker populations can vary over time and within time periods, again leading to highly variable responses, which can considerably affect the performance of learning. \item Recently, there have been concerns regarding the usage of automated bots within the mTurk community. Towards this end, we incorporated in the experimental survey a required reading comprehension task as well as a CAPTCHA task, and filtered out users that did not succeed in these. \end{itemize} \section{Experimental Details} \begin{figure}[t] \centering \subfloat[Initial]{\includegraphics[trim=1cm 2.75cm 1cm 2cm, clip,width=0.4\linewidth]{step1.png}}\\ \subfloat[Step 3 `x']{\includegraphics[trim=1cm 2.75cm 1cm 2cm, clip,width=0.4\linewidth]{step3x.png}}% \quad \subfloat[Step 3 `o']{\includegraphics[trim=1cm 2.75cm 1cm 2cm, clip,width=0.4\linewidth]{step3o.png}}% \quad \subfloat[Step 4 `x']{\includegraphics[trim=1cm 2.75cm 1cm 2cm, clip,width=0.4\linewidth]{step4x.png}}% \quad \subfloat[Step 4 `o']{\includegraphics[trim=1cm 2.75cm 1cm 2cm, clip,width=0.4\linewidth]{step4o.png}}% \caption{Images of x-o interface \label{fig:xos} } \end{figure} \subsection{Decision-compatible 2D projections} \label{sec:exp1} In the experiment, we generate 1,000 examples of these point clouds in 3D. The class of ${\phi}$ is a 3x3 linear layer with no bias, where we add a penalization term on ${\phi}^{T}{\phi} - \mathbb{I}$ during training to constrain the matrix to be orthogonal. Humans are shown the result of passing the points through this layer and projecting onto the first two dimensions. The class of $\hat{h}$ is a small network with one 3x3 convolutional layer creating 3 channels, 2x2 max pooling, and a sigmoid over a final linear layer. The input to this network is a soft (differentiable) 6x6 histogram over the 2D projection shown to the human user. We tested an interactive command line query and response game on 12 computer science students recruited via Slack and email. Users filled out a consent form online, watched an instructional video, and then completed a training and a testing phase, each with up to 5 rounds of 15 responses. Due to the nature of the training process, achieving 100\% accuracy results in ${\phi}$ not updating in the following round. With this in mind, if a user reached 100\% accuracy in training, they immediately progressed to testing. If a user reached 100\% accuracy in testing, the program exited. ${\phi}$ was able to find a representation that allowed for 100\% accuracy 75\% of the time, with an average 5-round improvement of 23\% across all participants. Many times the resulting projection appeared to be an `x' and `o', as in Figure \ref{fig:xos}, but occasionally it was user-specific.
For example, a user who associates straight lines with the `x' may train the network to learn any projection for `x' that includes many points along a straight line. The architectures of ${\phi}$ and $\hat{h}$ are described in Section \ref{sec:experiments}. For training, we use a fixed number of epochs (500 for $\hat{h}$ and 300 for ${\phi}$) with base learning rates of .07 and .03, respectively, that increase with lower accuracy scores and decrease with each iteration. We have found these parameters to work well in practice, but observed that results were not sensitive to their selection. The interface allows the number of rounds and examples to be determined by the user, but often 100\% accuracy can be achieved after about 5 rounds of 15 examples each. \subsection{Decision-compatible algorithmic avatars} \label{sec:exp3} \subsubsection{Data Preprocessing.} We use the {\em Lending Club} dataset, which we filter to include only loans for which we know the resolution (either default or paid in full, not loans currently in progress) and to remove all features that would not have been available at funding time. We additionally drop loans that were paid off in a single lump sum payment of at least 5 times the normal installment. This results in a dataset that is 49\% defaulted and 51\% repaid loans. Categorical features are transformed to one-hot variables. There are roughly 95,000 examples remaining in this dataset, of which we split 20\% into the test set. \begin{figure}[] \centering \includegraphics[width=0.8\linewidth]{recons.png} \caption{Visualization of reconstruction component} \label{fig:recon} \end{figure} \subsubsection{Learning architecture and pipeline.} \label{sec:exp3details} The network ${\phi}$ takes as input the standardized loan data. Although the output space has 9 dimensions, ${\phi}$ outputs vectors in ${\mathbb{R}}^{11}$. This is because some facial expressions do not naturally coexist as compound emotions, e.g., happiness and sadness [Du et al., 2014].\footnote{Shichuan Du, Yong Tao, and Aleix M Martinez. Compound facial expressions of emotion. \emph{Proceedings of the National Academy of Sciences}, 111(15):E1454--E1462, 2014.} Hence, we must add some additional constraints to the output space, encoded in the extra dimensions. For example, happiness and sadness are split into two separate parameters (rather than using one dimension with positive values for happiness and negative values for sadness). The same is true of ``happy surprise'', which is only allowed to coincide with happiness, as opposed to ``sad surprise''. For parameters which have positive and negative versions, we use a tanh function as the final nonlinearity, and for parameters which are positive only, we use a sigmoid function as the final nonlinearity. These parameters are programmatically mapped to a series of Webmorph \cite{debruine2016webmorph} transformation text files, which are manually loaded into the batch transform/batch edit functions of Webmorph. We use base emotion images from the CFEE database [Du et al., 2014] and trait identities from [Oosterhof and Todorov, 2008].\footnote{Nikolaas N Oosterhof and Alexander Todorov. The functional basis of face evaluation. \emph{Proceedings of the National Academy of Sciences}, 105(32):11087--11092, 2008.} This forms ${\rho}$ for this experiment. The network ${\phi}$ is initialized with a WGAN to match a distribution of parameters chosen to output a fairly uniform distribution of feasible faces.
To achieve this, each parameter was chosen to be distributed according to one of the following: a clipped $\mathcal{N}(0,4)$, $\mathcal{U}[0,1]$, or Beta(1,2). The choice of distribution was based on inspection as to what would give reasonable coverage over the set of emotional representations we were interested in testing. In this initial version of ${\phi}$, $x$ values end up mapped randomly to representations, as the WGAN has no objective other than distribution matching. \begin{figure}[t] \centering \subfloat[Loss in training $\hat{h}$ over 3 rounds]{\includegraphics[width=0.85\linewidth]{h_loss.png}}\\ \subfloat[Validation accuracy in training ${\phi}$ over 3 rounds]{\includegraphics[width=0.85\linewidth]{val_acc.png}}% \caption{$\hat{h}$ does not necessarily have to match $h$ well to lead to an increase in accuracy} \label{fig:hhat} \end{figure} The hidden layer sizes of ${\phi}$ and ${\hat{h}}$ were chosen via cross validation. For ${\phi}$, we use the smallest architecture out of those tested capable of recreating a wide distribution of representations $z$ as the generator of the WGAN. For ${\hat{h}}$, we use the smallest architecture out of those tested that achieves low error both in the computer-only simulation and with the first round of human responses. In the first experiment, we collect approximately 5 labels each (with minor variation due to a few mTurk users dropping out mid-experiment) for the LASSO feature subset of 400 training set $x$ points and their ${\phi}_0$ mappings (see Figure \ref{fig:qualtr_ques}). $a$ is taken to be the percentage of users responding ``approve'' for each point. To train $\hat{h}$, we generate 15 different training-test splits of the collected $\{z,a\}$ pairs and compare the performance of variations of $\hat{h}$ in which it is either initialized randomly or with the $\hat{h}$ from the previous iteration, trained with or without adding the samples from the previous iteration, and ranging over different regularization parameters. We choose the training parameters and number of training epochs which result in the lowest average error across the 15 random splits. In the case of random initialization, we choose the best out of 30 random seeds over the 15 splits. To train ${\phi}$, we fix $\hat{h}$ and use batches of 30,000 samples per epoch from the training set, which has 75,933 examples in total. To prevent mode collapse, wherein faces ``binarize'' to two prototypical exemplars, we add a reconstruction regularization term ${R}(x) = \|x-{\psi}({\phi}(x))\|_2^2$ to the binary cross entropy accuracy loss, where ${\psi}$ is a decoder implemented by an additional neural network (see Figure \ref{fig:recon}). ${\phi}$ here also features a constraint penalty that prevents co-occurrence of incompatible emotions. We train ${\phi}$ for 2,000 epochs with the Adam optimizer for a variety of values of $\alpha$, where we use $\alpha$ to balance reconstruction and accuracy loss in the form $\mathcal{L}_{total} = \alpha\mathcal{L}_{acc} + (1-\alpha)\mathcal{L}_{rec}$. We choose the value of $\alpha$ per round that optimally retains $x$ information while promoting accuracy by inspecting the accuracy vs. reconstruction MSE curve. We then perform Bayesian Linear Regression over the final layer of the current $\hat{h}$ for every 50th epoch of ${\phi}$ training and select the number of epochs to use by the minimum of either 2,000 epochs or the epoch at which accuracy uncertainty has doubled. In all but the first step, this resulted in using 2,000 epochs.
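As a minimal PyTorch sketch (ours; the layer sizes are placeholders, and the constraint penalty and per-dimension output nonlinearities are omitted) of how the accuracy and reconstruction terms combine: \begin{verbatim}
import torch
import torch.nn as nn

d_in = 20  # placeholder: number of standardized loan features
phi = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, 11))
psi = nn.Sequential(nn.Linear(11, 64), nn.ReLU(), nn.Linear(64, d_in))
# Only phi and psi update; the human proxy h_hat stays frozen here.
opt = torch.optim.Adam(list(phi.parameters()) + list(psi.parameters()))
bce = nn.BCELoss()

def total_loss(x, y, h_hat, alpha):
    # h_hat maps avatar parameters z to a predicted approval
    # probability in (0, 1); y holds the binary repayment labels.
    z = phi(x)
    acc = bce(h_hat(z), y)               # accuracy term
    rec = ((x - psi(z)) ** 2).mean()     # reconstruction term R(x)
    return alpha * acc + (1 - alpha) * rec
\end{verbatim} A training step then calls \verb|total_loss(x, y, h_hat, alpha).backward()| followed by \verb|opt.step()|, with ${\hat{h}}$'s parameters excluded from the optimizer so that only ${\phi}$ and ${\psi}$ update.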
In each of rounds 2 through 5, we choose only 200 training points to query. In the sixth round, we use 200 points from the test set. \subsubsection{Self-reported user type.} At the end of the survey, we ask users to report their decision method from among the following choices: \begin{itemize} \item I primarily relied on the data available \item I used the available data unless I had a strong feeling about the advice of the computer system \item I used both the available data and the advice of the computer system equally \item I used the advice of the computer system unless I had a strong feeling about the available data \item I primarily relied on the advice of the computer system \item Other \end{itemize} The percentage of users in each of these groups varied widely from round to round. We consider the first two conditions to be the `Data' group, the third to be the `Equal' group, and the next two to be the `Computer Advice' group. Although the trend is not statistically significant (at $p=0.05$), likely due to the small number of subjects per type per round, we find it interesting that the performance improved on average over training rounds for all three types, of which the equal-consideration type performed best. For the data-inclined users, whose performance improved to surpass that of the no-advice condition in as early as round two, this implies at least one of the following: users misreport their decision method; users believe they are not influenced by the advice but are in fact influenced; or, as the algorithmic evidence becomes apparently better, only the population of users who are comparatively skilled at using the data continue to do so. \begin{figure}[t] \centering \subfloat[Training rounds (`Overall' here is the average \emph{per user} score, rather than the score of the average response per question)]{\includegraphics[width=0.8\linewidth]{accbytype.png}}% \qquad \subfloat[Test round]{\includegraphics[width=0.8\linewidth]{accbytype_test.png}}% \caption{Results by Reported User Type} \label{fig:qualtr} \end{figure} \subsubsection{Diversity in avatar representation.} \label{sec:avatarreps} We believe the additional dimensionality of the avatar representation relative to a numerical or binary prediction of default is useful for two reasons. Most importantly, high dimensionality allows users to retain an ability to reason about their decisions. In particular, avatars are useful because people likely have shared mental reference points for faces. Moreover, users with a more sophisticated mental reference space may be able to teach the advising system over time to match specific reasoning patterns to specific characteristics. Additionally, when the advising system does not have a strong conviction about a prediction, presenting neutral advice should encourage the user to revisit the data, whereas percentages above or below the base rate of default (or 50\%) may suffer from the anchoring effect. \subsubsection{Further Details on Information Learned by $z$.} Using cross-validated ridge regression to predict individual $x$ variables from individual $z$ variables results in the coefficients of determination $R^2$ (to 2 significant figures) shown in Table \ref{tab:xonzr2}. \begin{table}[t] \centering \caption{Coefficients of Determination $R^2$, predicting each $x$ variable from each final $z$ variable.
\label{tab:xonzr2}} \fontsize{8.5pt}{9.5pt}\selectfont \begin{tabular}{lcccccc} & \feat{rate} & \feat{term} & \feat{dt} & \feat{rec} & \feat{inc} & \feat{emp} \bigstrut[b]\\ \cline{2-7}happiness & 0.00 & \cellcolor[rgb]{ 1, .949, .8}-0.15 & \cellcolor[rgb]{ 1, .949, .8}-0.14 & 0.00 & -0.01 & 0.00 \bigstrut[t]\\ sadness & -0.01 & -0.06 & -0.10 & 0.00 & -0.04 & -0.07 \\ trustworthiness & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{0.57} & \cellcolor[rgb]{ 1, .949, .8}0.17 & 0.01 & 0.00 & -0.01 & -0.01 \\ dominance & 0.00 & -0.01 & 0.03 & -0.01 & 0.01 & -0.01 \\ hue & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{0.48} & \cellcolor[rgb]{ .957, .69, .518}0.29 & -0.02 & 0.00 & -0.04 & -0.02 \\ eye gaze & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{0.42} & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{0.46} & -0.04 & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{-0.40} & -0.04 & \cellcolor[rgb]{ 1, .949, .8}-0.17 \\ age & \cellcolor[rgb]{ .957, .69, .518}0.23 & \cellcolor[rgb]{ .957, .69, .518}0.22 & \cellcolor[rgb]{ 1, .949, .8}-0.12 & \cellcolor[rgb]{ .957, .69, .518}-0.21 & \cellcolor[rgb]{ 1, .949, .8}0.17 & 0.04 \\ anger & -0.01 & -0.02 & -0.05 & -0.02 & -0.01 & 0.00 \\ fear & 0.04 & 0.00 & -0.03 & 0.00 & -0.01 & -0.01 \\ surprise & \cellcolor[rgb]{ 1, .949, .8}-0.18 & 0.04 & -0.01 & -0.02 & 0.00 & -0.04 \\ \end{tabular}% \end{table} Using cross-validated ridge regression to predict individual $x$ variables from all $z$ variables (both standardized to mean 0, std 1) results in the \emph{variable coefficients} (to 2 significant figures) shown in Table~\ref{tab:xonzcoef}. \begin{table} \centering \caption{Coefficients of Ridge Regression, predicting each $x$ variable from all final $z$ variables. \label{tab:xonzcoef}} \fontsize{8.5pt}{9.5pt}\selectfont \begin{tabular}{l|cccccc} & \feat{rate} & \feat{term} & \feat{dt} & \feat{rec} & \feat{inc} & \feat{emp} \bigstrut[b]\\ \hline happiness & -0.07 & \cellcolor[rgb]{ .957, .69, .518}-0.29 & -0.10 & -0.06 & \cellcolor[rgb]{ .957, .69, .518}0.21 & -0.07 \bigstrut[t]\\ sadness & \cellcolor[rgb]{ 1, .949, .8}0.16 & 0.07 & 0.07 & -0.01 & \cellcolor[rgb]{ 1, .949, .8}0.13 & 0.07 \\ trustworthiness & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{-0.62} & \cellcolor[rgb]{ .957, .69, .518}-0.28 & -0.05 & \cellcolor[rgb]{ .957, .69, .518}-0.23 & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{0.31} & \cellcolor[rgb]{ 1, .949, .8}0.16 \\ dominance & 0.05 & \cellcolor[rgb]{ 1, .949, .8}0.16 & \cellcolor[rgb]{ 1, .949, .8}0.12 & \cellcolor[rgb]{ 1, .949, .8}-0.13 & -0.02 & 0.04 \\ hue & \cellcolor[rgb]{ .957, .69, .518}0.27 & \cellcolor[rgb]{ 1, .949, .8}0.20 & \cellcolor[rgb]{ 1, .949, .8}0.19 & 0.03 & 0.01 & -0.08 \\ eye gaze & \cellcolor[rgb]{ 1, .949, .8}0.13 & \cellcolor[rgb]{ .957, .69, .518}0.28 & -0.10 & \cellcolor[rgb]{ 1, .949, .8}0.13 & \cellcolor[rgb]{ .957, .69, .518}-0.29 & -0.04 \\ age & -0.09 & \cellcolor[rgb]{ 1, .949, .8}0.14 & \cellcolor[rgb]{ 1, .949, .8}0.12 & -0.09 & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{0.67} & \cellcolor[rgb]{ .514, .235, .047}\textcolor[rgb]{ 1, 1, 1}{0.40} \\ anger & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ fear & \cellcolor[rgb]{ 1, .949, .8}0.19 & \cellcolor[rgb]{ 1, .949, .8}0.12 & 0.08 & -0.07 & 0.04 & 0.00 \\ surprise & 0.07 & \cellcolor[rgb]{ 1, .949, .8}0.12 & 0.03 & -0.07 & -0.06 & \cellcolor[rgb]{ 1, .949, .8}0.13 \\ \end{tabular}% \end{table} \begin{figure}[t!] 
\centering \subfloat[]{\includegraphics[width=0.85\linewidth]{qualtrics.png}}\\ \subfloat[]{\includegraphics[width=0.85\linewidth]{qualtrics_pct.png}}% \caption{Images from mTurk questionnaire} \label{fig:qualtr_ques} \end{figure} \subsection{Incorporating Side Information} \label{sec:exp2} \begin{figure}[!h] \centering \begin{tikzpicture}[ roundnode/.style={circle, draw=black!60, very thick, minimum size=7mm}, ] \node[roundnode] (x_i) {$x_i$}; \node[roundnode] (x_r) [below=of x_i] {$x_r$}; \node[roundnode] (x_c) [right=of x_i] {$x_c$}; \node[roundnode] (x_d) [right=of x_r] {$x_d$}; \node[roundnode] (ER) [below=of x_d] {$ER$}; \node[roundnode] (y) [right=of x_d] {$y$}; \draw[->] (x_i.east) -- (x_c.west); \draw[->] (x_r.east) -- (x_d.west); \draw[->] (x_i.south) -- (ER.north); \draw[->] (x_r.south) .. controls +(down:7mm) and +(left:7mm) .. (ER.west); \draw[->] (x_c.east) -- (y.north); \draw[->] (x_d.east) -- (y.west); \draw[->] (ER.east) -- (y.south); \end{tikzpicture} \caption{Relationship of variable correlations in the side information experiment} \label{fig:dag} \end{figure} \subsubsection{Data Generation.} \label{sec:side_info_data_gen} A directed graph showing the variable correlations is shown in Figure \ref{fig:dag}. The data in the side-information experiment is generated as follows: A latent variable $l_0 \sim \mathcal{N}(.3, .1)$ introduces a low correlation between $x_i$ and $x_r$ by setting a common mean for their Bernoulli probabilities $l_1, l_2$: \begin{itemize} \item $l_1, l_2 \sim \textrm{Unif}(\textrm{max}(l_0 - .3, 0), \textrm{min}(l_0 + .3, 1))$ \item $x_i \sim \textrm{Bernoulli}(1-l_1)$ \item $x_r \sim \textrm{Bernoulli}(1-l_2)$ \end{itemize} An additional latent variable $l_3$ provides a similar correlation between $x_c$ and $x_d$, which also correlate, respectively, with $x_i$ and $x_r$: \begin{itemize} \item $l_3 \sim \textrm{Unif}(.5, .7)$ \item $x_c \sim \textrm{Bernoulli}(l_3 + x_i)$ \item $x_d \sim \textrm{Bernoulli}(l_3 + x_r)$ \end{itemize} Side information $s$ is highly correlated with $x_r$ and $x_i$ but noisy: $s$ is drawn from a normal distribution centered at $x_r + x_i$ before rounding to an integer value between 0 and 3. \begin{itemize} \item $s_{cont} \sim \mathcal{N}(x_r + x_i, .5)$ \item $s = \textrm{max}(0,\textrm{min}(3, \textrm{round}(s_{cont})))$ \end{itemize} The integer outcome variable is $y = x_c + x_d + s$, and the binary outcome variable is $y_{bin} = \1{y>3}$. \subsubsection{Learning Architecture.} The network ${\phi}$ contains a single linear layer with no bias which takes a constant (1) as an input and outputs a number $z_i$ for each data dimension $i$. The network ${\hat{h}}$ takes as input $(x,w,y)$. Its first linear layer (no bias) takes as input $[x,y]$ and outputs a single number $\hat{s}$. The second linear layer (with bias) takes as input $w$ and outputs the sigmoid activation of a single number, $switch$, representing the propensity to incorporate $s$ at $w$. The network then outputs $w^{\intercal}x + switch \cdot \hat{s}$. \subsubsection{Baselines.} \begin{itemize} \item \textbf{Machine Only}: The best possible linear model (with bias) trained to predict $y$ from $x_1 \hdots x_{4}$. \item \textbf{$h($Machine)}: The human model $h$ applied to the best possible linear model (with bias) trained to predict $y$ from $x_1 \hdots x_{4}$.
\[ h(\text{Machine}) = \beta_0 + h(x, \beta_1, \dots, \beta_4, s) \] where $\beta$ are the coefficients selected by the machine-only regression. \end{itemize} \subsubsection{Human Models} \begin{itemize} \item \textbf{Always}: The human always fully incorporates the side information, $$h(x,w,s) = w^{\intercal}x + s$$ \item \textbf{Never}: The human never incorporates the side information, $$h(x,w,s) = w^{\intercal}x$$ \item \textbf{Or}: The human becomes less likely to incorporate side information as weight is put on $w_i, w_r$, $$h(x,w,s) = w^{\intercal}x + \sigma(1 / \textrm{max}(\textrm{max}(w_i,w_r),.0001) - 2) \cdot s$$ Note that the $.0001$ floor is required to prevent numerical overflow, and the $-2$ recenters the sigmoid to allow for values $<.5$. \item \textbf{Coarse}: The human incorporates $s$ as in Or, but uses a coarse, noisy version of $s$, $s' = 2 \cdot \1{s \geq 2}$ $$h(x,w,s) = w^{\intercal}x + \sigma(1 / \textrm{max}(\textrm{max}(w_i,w_r),.0001) - 2) \cdot s'$$ \end{itemize} \section{Select Turker quotes} \begin{itemize} \item ``I wasn't always looking at just happiness or sadness. Sometimes the expressions seemed disingenuously happy, and that also threw me off. I don't know if that was intentional but it definitely effected my gut feeling and how I chose.'' \item ``In my opinion, the level of happiness or sadness, the degree of a smile or a frown, was used to represent applications who were likely to be payed back. The more happy one looks, the better the chances of the client paying the loan off (or at least what the survey information lead me to believe).'' \item ``I was more comfortable with facial expressions than numbers. I felt like a computer and I didn't feel human anymore. Didn't like it at all.'' \end{itemize} \clearpage \includepdf{avatars_nips19.pdf} \end{appendices} \section*{Ethics Statement} By incorporating the use of human judgment rather than encouraging human automation bias or simply automation, the kinds of methods suggested here have the potential to fail more gracefully than traditional decision support systems. Still, the idea of seeking to optimize for human decisions should not be taken lightly. It is our belief that a responsible and transparent deployment of models with ``h-hat-like'' components should encourage environments in which humans are aware of what information they provide about their thought processes. Unfortunately, this may not always be the case, and ethical, legal, and societal aspects of systems that are optimized to promote particular human decisions must be subject to scrutiny by both researchers and practitioners. If designed to correct for inadvertent user biases, for example, the system will first have to learn these biases, and this can be sensitive information and damaging to users if not properly managed. These kinds of issues are not specific to our framework and have been a concern of the HCI community as early as 1998 \cite{fogg1998persuasive}. The opportunities and dangers of our framework generally reflect those of the broader field of persuasive technology \cite{berdichevsky1999toward}, where system goals may be poor proxies for user goals~\cite{ribeiro2020auditing}, or even at odds with user goals. Moreover, the method does not in itself prevent biases from being passed through the data without appropriate care in the design of loss functions.
\subsection{Additional Related Work} \label{sec:related} \subsubsection{Modeling human factors and machine arbiters.} Recent studies have shown that the connections between trust, accuracy, and explainability can be complex and nuanced~\cite{green2019disparate, lai2018human}. For example, users fail to consistently increase trust in a model even when model accuracy is superior to human accuracy and models are modified to make them more interpretable \cite{yin2019understanding, poursabzi2018manipulating}. At the same time, there is a tension, in that whether or not users retain agency has been shown to affect acceptance of model predictions \cite{dietvorst2016overcoming}, providing support for the approach we take with {M\raisebox{0.6pt}{$\circ$}M}. Other work considers learning when to predict, in effect bypassing human arbitration, and when to defer to the user~\cite{madras2018predict}. Recent work acknowledges that human decision processes must be considered when developing decision support technology~\cite{lai2020harnessing, bansal2019beyond}, and work in cognitive science has shown settings in which accurate models of human decision-making can be developed~\cite{bourgin2019cognitive}. \subsubsection{Model interpretability for human arbiters.} Much of the interpretability literature also follows our paradigm of humans as the final decision-makers, with interpretable models enabling users to bring to bear additional criteria (e.g., fairness) that can be difficult to automate~\cite{doshi2017towards}; e.g., by simplifying inputs~\cite{angelino2017learning, lakkaraju2016interpretable} or augmenting inputs~\cite{ribeiro2016should, smilkov2017smoothgrad, lei2016rationalizing} in order to help users understand the way data is being used. This work generally optimizes interpretability proxies \cite{lage2019evaluation} in combination with machine accuracy, using humans only to validate at test time. A few exceptions allow human feedback to guide the selection of machine-trained models \cite{ross2017right}, including \cite{lage2018human}, which we discuss further in relation to human-in-the-loop training. We are not aware of any work in this space which models humans in the machine training procedure. \subsubsection{Humans in the loop.} Despite much recent interest in training with humans in the loop, experimentation in this setting remains an exceptionally challenging task. As mentioned above, \citet{lage2018human} is the only interpretability method we know of to directly incorporate human responses into training. There, the feedback mechanism is simple, and the authors explicitly abandon attempts to train with mTurkers, working instead with a group of machine learning graduate students and postdocs. Other work on human-machine cooperation proposes methods for training with humans, but demonstrates results through fully synthetic experiments using simulated human responses \cite{madras2018predict}. Our framework, in contrast, directly optimizes for human performance by querying humans during training for their decisions on arbitrary points in representation space.
\section{Introduction} \paragraph{}Smart contracts~\cite{wood2014ethereum} are sequential, executable programs that run on blockchains~\cite{nakamoto2008bitcoin}. They permit trusted transactions and agreements to be carried out among parties without the need for a central authority, while keeping transactions traceable, transparent, and irreversible. These contracts are increasingly confronted with attacks exploiting their execution vulnerabilities. Such attacks lead to significant malicious scenarios, such as the infamous \textit{The DAO} attack~\cite{atzei2017survey}, which resulted in a loss of $\sim$\$60M. In this paper, we use formal methods on smart contracts from an existing blockchain application. Our motivation is to ensure safe and correct contracts, avoiding the presence of bugs, by using a deductive verification language able to write, verify and compile such programs. The chosen language is the automated tool \textit{Why3}~\cite{filliatre2013why3}, a complete tool to perform deductive program verification based on Hoare logic. A first approach using \textit{Why3} on \textit{Solidity} contracts (Solidity being the Ethereum smart contract language) has already been undertaken~\cite{SolandWhy}. The author uses \textit{Why3} to formally verify \textit{Solidity} contracts based on code annotation. Unfortunately, that work remained at the prototype level. We describe our research approach through a use case that has already been the subject of previous work, namely the Blockchain Energy Market Place (BEMP) application~\cite{nehai}. In summary, the contributions of this paper are as follows: \begin{enumerate} \item Showing the adaptability of \textit{Why3} as a formal language for writing, checking and compiling smart contracts. \item Comparing existing smart contracts, written in \textit{Solidity}~\cite{buterin2014next}, with the same contracts written in \textit{Why3}. \item Detailing a formal and verified \textit{Trading} contract, an example of a more complicated contract than the majority of existing \textit{Solidity} contracts. \item Providing a way to prove the quantity of \textit{gas} (the fraction of an Ethereum token needed for each transaction) used by a smart contract. \end{enumerate} The paper is organized as follows. Section 2 describes the approach from a theoretical and formal point of view by explaining the choices made in the study, and Section 3 presents the proof-of-concept compilation of \textit{Why3} contracts. A state-of-the-art review of existing work on the formal verification of smart contracts is given in Section 4. Finally, Section 5 summarizes our conclusions. \section{A New Approach to Verifying Smart Contracts Using Why3} \subsection{Background of the study} \paragraph{Deductive approach \& Why3 tool.} A previous work aimed to verify smart contracts using an abstraction method, model checking~\cite{nehai}. Despite interesting results from this modelling method, the approach to property verification was not satisfactory. Indeed, it is well known that model checking suffers either from combinatorial state-space explosion or from limitations in invariant generation. Thus, proving properties involving a large number of states was impossible because of these limitations. This conclusion led us to consider another formal technique, deductive verification, which has the advantage of being less dependent on the size of the state space. In this approach, the user is asked to write the invariants.
We chose the automated \textit{Why3} tool~\cite{filliatre2013why3} as our platform for deductive verification. It provides a rich language for specification and programming, called \textit{WhyML}, and relies on well-known external theorem provers such as Alt-Ergo~\cite{altergo}, Z3~\cite{z3smt}, and CVC4~\cite{BCD+11}. \textit{Why3} comes with a standard library\footnote{http://why3.lri.fr/} of logical theories and programming data structures. The logic of \textit{Why3} is a first-order logic with polymorphic types and several extensions: recursive definitions, algebraic data types and inductive predicates. \paragraph{Case study: Blockchain Energy Market Place.} We have applied our approach to a case study provided by industry~\cite{nehai}. It is an Ethereum blockchain application (BEMP) based on the \textit{Solidity} smart contract language. Briefly, this blockchain application makes it possible to manage energy exchanges in a peer-to-peer way among the inhabitants of a district, as shown in Figure~\ref{fig:modelingBEMP}. The figure illustrates: (1) \& (1') energy production (Alice) and energy consumption (Bob); (2) \& (2') smart meters providing production/consumption data to the Ethereum blockchain; (3) Bob paying Alice in \textit{ether} (Ethereum's cryptocurrency) for his energy consumption. For more details about the application, please refer to~\cite{nehai}. \begin{wrapfigure}[13]{r}{0.50\textwidth} \centering \includegraphics[width=0.50\textwidth]{marketPlace.png} \caption{BEMP Process} \label{fig:modelingBEMP} \end{wrapfigure} \paragraph{} In our initial work, we applied our method to a simplified version of the application, that is, a one-to-one exchange (1 producer and 1 consumer) with a fixed price for each kilowatt-hour. This first test allowed us to identify and prove RTE (run-time error) properties. The simplicity of the unidirectional exchange model did not allow the definition of complex functional properties that would show the importance and utility of the \textit{Why3} tool. In a second step, we extended the application under study to an indefinite number of users, and then enriched our specifications. \textit{Why3} scales well to this problem size. In this second version, we have a set of consumers and producers willing to buy or to sell energy. Accordingly, we introduced a simple trading algorithm that matches producers with consumers. In addition to transferring \textit{ether}, users transfer crypto-Kilowatthours to reward consumers consuming locally produced energy. Hence, the system needs to formulate and prove predicates and properties of functions handling various data other than cryptocurrency. As a first trading approach, we adapted an order-book matching algorithm~\cite{domowitz1993taxonomy} to our case study. \subsection{Why3 features intended for Smart Contracts } \subsubsection{Library modelling.} \textit{Solidity} is an imperative object-oriented programming language, characterized by static typing\footnote{Ethereum foundation: Solidity, the contract-oriented programming language. https://github.com/ethereum/solidity}. It provides several elementary types that can be combined to form complex types, such as booleans; signed, unsigned, and fixed-width integers; and domain-specific types like addresses. Moreover, the address type has primitive functions able to transfer \textit{ether} (\verb|send()|, \verb|transfer()|) or manipulate cryptocurrency balances (\verb|.balance|). \textit{Solidity} contains elements that are not part of the \textit{Why3} language.
One could model these as additional types or primitive features. Examples of such types are \verb|uint256| and \verb|address|. For machine integers, we use the range feature of Why3: \lstinline[language = Why3, basicstyle=\fontsize{7}{9}\tt] ! type uint256 = <range 0 0x7FFFFFFFFFFFFFFFFFFFFFFFFFFFFF... >! because it exactly represents the set of values we want to represent. Moreover, \textit{Why3} checks that the constants of these types written by the user lie within the bounds, and in specifications it automatically converts range types to mathematical integers, i.e., the \verb|int| type. Indeed, it is much more natural and clearer to express specifications with mathematical integers: with wrap-around semantics, for example, \lstinline[language = Why3, basicstyle=\fontsize{7}{9}\tt] ! account = old account - transfer ! does not express that the account loses money (if the account was empty, it could now hold the maximum quantity of money). Based on the same reasoning, we have modelled the types \verb|Int160| and \verb|Uint160| (the latter characterizing the type \verb|uint| in \textit{Solidity}). We also model the \verb|address| type and its members. We chose to encode the private storage (\verb|balance|) as a hashtable whose keys are addresses and whose associated values are of type \verb|uint256|. The current balance of an address is then \verb|balance[address]|. In addition, the \verb|send| function is translated into a \verb|val| function, which performs operations on the \verb|balance| hashtable. Moreover, we model primitive features such as the \verb|modifier| function, whose role is to restrict access to a function; it can be used to model the states and guard against incorrect usage of the contract. In \textit{Why3} this feature becomes either an exception raised if the condition is not respected, or a precondition to satisfy; we explain this in more detail with an example later. Finally, we give a model of \textit{gas}, in order to specify the maximum amount of \textit{gas} needed in any case. We introduce a new type: \verb|type gas = int|. The quantity of \textit{gas} is modelled as a mathematical integer because it is never manipulated directly by the program. This part is detailed later. It is important to note that the purpose of our work is not to achieve a complete encoding of \textit{Solidity}. Rather, we rely on the case study in our possession (which happens to be written in \textit{Solidity}) and, from its contracts, build our own \textit{Why3} contracts. Therefore, throughout the article, we have chosen to encode only the \textit{Solidity} features encountered in our case study. Consequently, notions like \verb|revert| or \verb|delegatecall| are not treated. Conversely, we introduce additional types such as \verb|order| and \verb|order_trading|, which are specific to the BEMP application. The \verb|order| type is a record that contains \verb|orderAddress| (which can be a seller or a buyer), \verb|tokens| expressing the crypto-Kilowatthours (willing to buy or to sell), and \verb|price_order|. The \verb|order_trading| type is a record that contains the seller ID (\verb|seller_index|), the buyer ID (\verb|buyer_index|), the transferred amount (\verb|amount_t|), and the trading price (\verb|price_t|). \paragraph{Remark:} In our methodology, we choose to encode some primitives of \textit{Solidity} but not all. For example, the \verb|send()| function in \textit{Solidity} can fail (returning \verb|false|) due to running out of gas, e.g., an overrun of the 2300-unit \textit{gas} stipend.
The reason is that in certain cases the transfer of \textit{ether} to a contract involves the execution of the contract's fallback function, so the transfer might consume more \textit{gas} than expected. A fallback function is a function without a signature (no name, no parameters); it is executed if a contract is called and no other function matches the specified function identifier, or if no data is supplied. As we chose a \textit{private} blockchain, all users can be identified and we have control over who can write to or read from the blockchain. Thus, the \textit{Why3} \verb|send()| function does not need a fallback execution; it only transfers \textit{ether} from one address to another. The \textit{Why3} \verb|send()| function does not return a boolean, because we require that the transfer is possible (enough ether in the sending contract and not too much in the receiving one) and we want to avoid denial-of-service attacks~\cite{DOS}. Indeed, if we allowed errors to propagate and accepted sending to untrusted contracts, such a contract could always make our contract fail and revert, so we could not prove any progress property of our contract. In the \textit{Tezos} blockchain~\cite{goodman2014tezos}, calls to other contracts are postponed until after the execution of the current contract, so another contract should not be able to make the calling contract fail. \subsubsection{Encoding and verifying functions from the BEMP application.} \paragraph{Oracle notions.} Developing smart contracts often relies on the concept of \textit{oracles}~\cite{EthOracle}. An oracle can be seen as the link between the blockchain and the ``real world''. Some smart contract functions have arguments that are external to the blockchain, yet the blockchain has no access to information from an off-chain data source, which is untrusted. Accordingly, the oracle provides a service responsible for entering external data into the blockchain, playing the role of a trusted third party. However, questions arise about the reliability of such oracles and the accuracy of their information. Oracles can have unpredictable behaviour: a sensor that measures the temperature might be an oracle, but it might be faulty; thus one must account for invalid information from oracles. \begin{wrapfigure}[15]{r}{0.60\textwidth} \centering \includegraphics[width=0.60\textwidth]{diagramV3.png} \caption{Link between on-chain and off-chain} \label{fig:modeling} \end{wrapfigure} Figure~\ref{fig:modeling} illustrates the three communication stages between systems in the real world and the blockchain: \textit{(1)} the collection of off-chain raw data; \textit{(2)} the collection of this data by oracles; and finally, \textit{(3)} the provision of this information by the oracles to the blockchain (via smart contracts). \paragraph{}Based on this distinction, we defined two types of functions involved in contracts, namely \textit{private functions} and \textit{public functions}. We noted that some functions are called internally, by other smart contract functions, while others are called externally by oracles. Functions that interact with oracles are defined as \textit{public} functions. The proof approach differs between the two types. For \textit{private} functions, one defines preconditions and postconditions, and then proves that no error can occur and that the function behaves as it should. It is thus not necessary to define exceptions to be raised throughout the program; they are proved never to occur.
Conversely, the \textit{public} functions are called by oracles; the behaviour of the function must therefore take into account arbitrary input values, and it is not possible to require conditions upstream of the call. In contrast, exceptions are thus necessary; we use a so-called \textit{defensive proof} style in order to protect ourselves from the errors that oracles can generate. No constraints are applied on postconditions. Thus, valid data (which does not raise exceptions) received by a public function will satisfy the preconditions of the private functions that use it, because those preconditions are proved. \paragraph{Methodology of proving BEMP functions.} To illustrate our methodology, we take an example from BEMP. \begin{lstlisting}[language=Solidity, basicstyle=\fontsize{7}{9}\tt] function transferFromMarket(address _to, uint _value) onlyMarket returns (bool success) { if (exportBalanceOf[market] >= _value) {/* Transferring _value from market to _to */} else {success = false; Error("Tokens couldn't be transferred from market");}} \end{lstlisting} The function transfers \verb|_value| (expressed in crypto-Kilowatthours) from the \verb|market| to the \verb|_to| address. The mapping \verb|exportBalanceOf[]| stores the balances of addresses that export tokens. The function can be executed solely by the market (the \verb|onlyMarket| modifier). The program checks whether the market has enough tokens to send to \verb|_to|. If this condition holds, the transfer is done; otherwise the function returns \verb|false| and triggers an \verb|Error| event (a feature that allows writing logs in the blockchain)~\footnote{https://media.consensys.net/technical-introduction-to-events-and-logs-in-ethereum-a074d65dd61e}. This process is internal to the blockchain and involves no external exchange, hence the function is qualified as \textit{private}. Following our modelling approach, we define complete preconditions and postconditions to verify and prove the function. The corresponding \textit{Why3} function is: \begin{lstlisting}[language= Why3, basicstyle=\fontsize{7}{9}\tt] let transferFromMarket (_to : address) (_value : uint) : bool requires {!onlymarket /\ _value > 0 } requires {marketBalanceOf[market] >= _value } requires {importBalanceOf[_to] <= max_uint - _value} ensures {(old marketBalanceOf[market]) + (old importBalanceOf[_to]) = marketBalanceOf[market] + importBalanceOf[_to]} = (* The program *) \end{lstlisting} The precondition in line 2 expresses the \verb|onlyMarket| modifier. Note that \verb|marketBalanceOf| is the hashtable that records the crypto-Kilowatthour balances associated with market addresses, and \verb|importBalanceOf| is the hashtable that records the amount of crypto-Kilowatthours intended for the buyer addresses. From the specification, we understand the behaviour of the function without referring to the program. To be executed, \verb|transferFromMarket| must respect RTE and functional properties: \begin{itemize} \item RTE properties: \textit{(1) Positive values}; a valid amount of crypto-Kilowatthours to transfer is a positive amount (Line 2). \textit{(2) Integer overflow}; no overflow will occur when \verb|_to| receives \verb|_value| (Line 4). \item Functional properties: \textit{(1) Acceptable transfer}; the transfer can be done if the market has enough crypto-Kilowatthours to send (Line 3).
\textit{(2) Successful transfer}; the transaction is completed successfully if the sum of the sender and receiver balances does not change between before and after the execution (Line 5). \textit{(3)} \verb|modifier| \textit{function}; the function can be executed only by the market (Line 2). \end{itemize} The set of specifications is necessary and sufficient to prove the expected behaviour of the function. \paragraph{}The following function illustrates a \textit{Solidity} public function. \begin{lstlisting}[language=Solidity, basicstyle=\fontsize{7}{9}\tt] function registerSmartMeter(string _meterId, address _ownerAddress) onlyOwner { addressOf[_meterId] = _ownerAddress; MeterRegistered(_ownerAddress, _meterId);} \end{lstlisting} The function \verb|registerSmartMeter()| registers a meter identified by a name (\verb|meterID|) and an owner (\verb|ownerAddress|). Note that all meter owners are recorded in a hashtable \verb|addressOf| keyed by the \verb|meterID| value of \verb|string| type. The main potential bug in this function is registering a meter twice. When a meter is registered, the function broadcasts a \verb|MeterRegistered| event. Following the modelling rules, there are no preconditions; instead, we define exceptions. The corresponding \textit{Why3} function is: \begin{lstlisting}[language = Why3, basicstyle=\fontsize{7}{9}\tt] exception OnlyOwner, ExistingSmartMeter let registerSmartMeter (meterID : string) (ownerAddress : address) raises { OnlyOwner-> !onlyOwner = False } raises {ExistingSmartMeter -> mem addressOf meterID} ensures { (size addressOf) = (size (old addressOf) + 1 ) } ensures { mem addressOf meterID} = (*The program*) \end{lstlisting} The first exception (Line 3) models the \verb|modifier| function which restricts execution to the owner, the caller of the function. It is not possible to precondition the inputs of the function, so we handle exceptional conditions during the execution of the program. To be executed, \verb|registerSmartMeter| must respect RTE and functional properties: \begin{itemize} \item RTE properties: \textit{Duplicate record}; if a smart meter and its owner are recorded twice, an exception is raised (Line 4). \item Functional properties: \textit{(1)} \verb|modifier| \textit{function}; the function can be executed only by the owner, thus we raise \verb|OnlyOwner| when the caller of the function is not the owner (Line 3). \textit{(2) Successful record}; at the end of the function execution, we ensure (Line 5) that a record has been made. \textit{(3) Existing record}; the registered smart meter has been properly recorded in the hashtable \verb|addressOf| (Line 6). \end{itemize} The set of specifications is necessary and sufficient to prove the expected behaviour of the function. \paragraph{Trading contract.} The trading algorithm matches potential consumers with potential sellers, recorded in two arrays \verb|buy_order| and \verb|sell_order| taken as parameters of the algorithm. To obtain the expected result at the end of the algorithm, certain properties must be respected; we define specifications that enforce them throughout the trading process. The algorithm is a private function because it runs on-chain; thus no exceptions are defined, but preconditions are. The Trading contract has no \textit{Solidity} equivalent because it is a function added to the original BEMP project.
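Before giving the formal specification, the following is a small illustrative sketch, in Python, of the order-book matching idea (an idealized illustration of the algorithm class, not the verified WhyML program; in particular, settling each trade at the seller's price is an assumption made here for concreteness):

\begin{lstlisting}[language=Python, basicstyle=\fontsize{7}{9}\tt]
def trading(buy_order, sell_order):
    # Each order is [tokens, price]; both lists sorted by decreasing price.
    buys = [list(o) for o in buy_order]    # copies: inputs stay intact
    sells = [list(o) for o in sell_order]
    trades, b, s = [], 0, 0
    while b < len(buys) and s < len(sells):
        if buys[b][1] < sells[s][1]:
            s += 1   # seller too expensive for every remaining buyer
            continue
        amount = min(buys[b][0], sells[s][0])
        trades.append((s, b, amount, sells[s][1]))  # (seller, buyer, amount, price)
        buys[b][0] -= amount
        sells[s][0] -= amount
        if buys[b][0] == 0: b += 1
        if sells[s][0] == 0: s += 1
    return trades
\end{lstlisting}

Every trade this sketch emits satisfies the matching condition formalized below (seller price at most the buyer price), and no order contributes more tokens than it holds.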
Below is the set of properties of the function: \begin{lstlisting}[language = Why3, basicstyle=\fontsize{7}{9}\tt] let trading (buy_order : array order) (sell_order : array order) : list order_trading requires { length buy_order > 0 /\ length sell_order > 0} requires {sorted_order buy_order} requires {sorted_order sell_order} requires {forall j:int. 0 <= j < length buy_order -> 0 < buy_order[j].tokens } requires {forall j:int. 0 <= j < length sell_order -> 0 < sell_order[j].tokens } ensures { correct result (old buy_order) (old sell_order) } ensures { forall l. correct l (old buy_order) (old sell_order) -> nb_token l <= nb_token result } ensures {!gas <= old !gas + 374 + (length buy_order + length sell_order) * 363} ensures {!alloc <= old !alloc + 35 + (length buy_order + length sell_order) * 35} = (* The program *) \end{lstlisting} \begin{itemize} \item RTE properties: \textit{positive values}; the parameters of the function must not be empty arrays (Line 2), and a trade cannot be done with zero or negative tokens (Lines 5, 6).\\ \item Functional requirements: \textit{sorted orders}; the orders need to be sorted in decreasing order of price, so that the sellers and buyers asking for the most expensive energy are at the top of the list (Lines 3, 4). \item Functional properties: \textit{(1) correct trading} (Lines 7, 8); for a trading to be qualified as correct, it must satisfy two properties: \begin{itemize} \item the conservation of buyer and seller tokens, which states that no tokens are lost during the trading process: \lstinline[language = Why3, basicstyle=\fontsize{7}{9}\tt] ! forall i:uint. 0 <= i < length sell_order -> sum_seller (list_trading) i <= sell_order[i].tokens!. For the buyer the property is the analogue, replacing seller by buyer. \item a successful matching; a match between a seller and a buyer is qualified as correct if the price offered by the seller is less than or equal to that of the buyer, and if the seller and buyer indices are valid indices into the arrays. \end{itemize} \textit{(2) Best tokens exchange}; we qualify a trade as one of the best if it maximizes the total number of tokens exchanged. Line 8 ensures that no correct trading list can exchange more tokens than the one resulting from the function. The criterion could be refined by additionally maximizing or minimizing the total amount paid (best for the seller or for the buyer, respectively). \textit{(3) Gas consumption}; Lines 9 and 10 ensure that no over-consumption of gas will happen (see the following paragraph). \end{itemize} \paragraph{Gas consumption proof.} Overconsumption of \textit{gas} can be avoided with the \textit{gas} model. EVM instructions consume an amount of \textit{gas}, and they are categorized by level of difficulty; e.g., for the set $W_{verylow}=\{ ADD,\ SUB,\ ...\}$, the amount to pay is $G_{verylow} =\ 3\ units\ of\ gas$, and for a create operation the amount to pay is $G_{create} =\ 32000\ units\ of\ gas$~\cite{wood2014ethereum}. The price of an operation is proportional to its difficulty. Accordingly, we fix, for each \textit{Why3} function, the appropriate amount of \textit{gas} needed to execute it. Thus, at the end of the function's instructions, a variable \verb|gas| expresses the total quantity of \textit{gas} consumed during the process. We introduce a \verb|val ghost| function, \verb|add_gas|, that adds to the variable \verb|gas| the amount of \textit{gas} consumed by each function calling it (see Section 3 for more details on \textit{gas} allocation).
\begin{lstlisting}[language = Why3, basicstyle=\fontsize{7}{9}\tt] val ghost add_gas (used : gas) (allocation: int): unit requires { 0 <= used /\ 0 <= allocation } ensures { !gas = (old !gas) + used } ensures { !alloc = (old !alloc) + allocation } writes { gas, alloc} \end{lstlisting} \paragraph{}The specification of the function above requires \textit{positive values} (Line 2). Moreover, at the end of the function, we ensure that there is no extra \textit{gas} consumption (Lines 3, 4). Line 5 specifies the variables that may change. \section{Compiling Why3 Contracts and Proving Gas Consumption} The final step of the approach is the deployment of \textit{Why3} contracts. The EVM is designed to be the runtime environment for smart contracts on the Ethereum blockchain~\cite{wood2014ethereum}. It is a stack-based machine (with 256-bit words) that uses a set of instructions (called opcodes)\footnote{https://ethervm.io} to execute specific tasks. The EVM features two kinds of memory: a volatile memory that does not survive the current transaction, and a storage that does survive but is much more expensive to modify. The goal of this section is to describe the compilation of \textit{Why3} contracts into EVM code and the proof of the cost of functions. The compilation\footnote{The implementation can be found at \url{http://francois.bobot.eu/fm2019/}} is done in three phases: \textit{(1)} compiling to an EVM assembly that uses symbolic labels for jump destinations and macro instructions; \textit{(2)} computing the absolute addresses of the labels, which must be done inside a fixpoint computation because the size of a jump address has an impact on the size of the instruction; and finally \textit{(3)} translating the assembly code to pure EVM assembly, which is then printed. Most of \textit{Why3} can be translated: the proof-of-concept compiler supports algebraic datatypes (but not nested pattern matching), mutable records, recursive functions, while loops, and bounded integer arithmetic (32, 64, 128, and 256 bits). Global variables are restricted to mutable records with integer fields. The compiler could be extended to hashtables using the key-hashing technique used in \textit{Solidity}. In the absence of specific instructions (unlike for C), \textit{Why3} is extracted as for a garbage-collected language; here, all allocations are done in the volatile memory, so the memory is reclaimed only at the end of the transaction. \paragraph{} We have not yet formally proved the correctness of the compilation; we only tested the compiler using a reference interpreter \cite{} and by asserting some invariants during the transformation. However, we can list the following arguments for its correctness: \begin{itemize} \item the compilation of Why3 (an ML-like language) to a stack machine is straightforward; \item the preconditions on all arithmetic operations (which are always bounded) ensure that they can directly use 256-bit operations; \item raise is accepted only in public functions before any mutation, so the fact that raises are translated into revert does not change their semantics; \lstinline{try with} constructs are forbidden; \item only immutable datatypes can be stored in the permanent store. Currently, only integers can be stored; this could be extended to other immutable datatypes by copying the data to and from the store. \item The send function in Why3 only modifies the balance state of the contracts; it requires that the transfer is acceptable and never fails, as discussed previously.
It is thus compiled similarly to the \textit{Solidity} send function, with a gas limit small enough to disallow modification of the store; additionally, we discard the result. \end{itemize} The execution of each bytecode instruction has an associated cost. One must pay some \textit{gas} when sending a transaction; if there is not enough \textit{gas} to execute the transaction, the execution stops and the state is rolled back. So it is important to be sure that, at any later date, the execution of a smart contract will not require an unreasonable quantity of \textit{gas}. The computation of worst-case execution time (WCET) is facilitated in the EVM by the absence of caches. We can therefore use the techniques of~\cite{cerco}, which annotate the source code with the quantity of \textit{gas} used, here using a function \lstinline{add_gas used allocations}. The number of allocations is important because the real \textit{gas} consumption of the EVM incorporates the maximum quantity of volatile memory used. The compilation checks that all the paths of the function have a cost smaller than the sum of the \lstinline{add_gas g a} annotations along them. The paths of a function are defined on the EVM code by starting at the function entry or a loop head and going through the code following jumps, cutting at jumps that go back to a loop head (a sketch of this check is given below). \begin{center} \begin{minipage}{0.95\textwidth} \begin{lstlisting}[language = Why3, basicstyle=\fontsize{7}{9}\tt] let rec mk_list42 [@ evm:gas_checking] (i:int32) : list int32 requires { 0 <= i } ensures { i = length result } variant { i } ensures { !gas - old !gas <= i * 185 + 113 } ensures { !alloc - old !alloc <= i * 96 + 32 } = if i <= 0 then (add_gas 113 32; Nil) else (let l = mk_list42 (i-1) in add_gas 185 96; Cons (0x42:int32) l) \end{lstlisting} \end{minipage} \end{center} \paragraph{}Currently, the cost of modifying storage is over-approximated; using a specific contract for the functions that modify it, we could specify that it is less expensive to use a memory cell that has already been used.
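To make the path-based check concrete, the following is a minimal Python model of it (an illustration of the idea only, not the actual compiler pass; the CFG encoding and the field names are assumptions made for this sketch):

\begin{lstlisting}[language=Python, basicstyle=\fontsize{7}{9}\tt]
def check_paths(cfg, entries, loop_heads):
    # cfg: block -> {"cost": int, "credit": int, "succ": [blocks]}
    # Every path starts at a function entry or a loop head and is cut
    # when it exits or jumps back to a loop head; along each path the
    # gas spent must stay covered by the add_gas credits collected.
    ok = True
    for start in set(entries) | set(loop_heads):
        stack = [(start, 0, 0)]          # (block, spent, credited)
        while stack:
            block, spent, credited = stack.pop()
            spent += cfg[block]["cost"]
            credited += cfg[block].get("credit", 0)
            succs = cfg[block]["succ"]
            if not succs:                # path exits the function
                ok = ok and spent <= credited
            for nxt in succs:
                if nxt in loop_heads:    # back edge: cut the path here
                    ok = ok and spent <= credited
                else:
                    stack.append((nxt, spent, credited))
    return ok
\end{lstlisting}

For the \lstinline{mk_list42} example above, the recursive call plays the role of the loop, and the two \lstinline{add_gas} annotations provide the credits checked on the two paths through the body.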
\section{Related Work} Since the \textit{DAO} attack, the use of formal methods at the level of smart contracts has increased. Raziel is a framework to prove the validity of smart contracts to third parties before their execution, in a private way~\cite{sanchez2017raziel}. The authors also use a deductive proof approach, but their concept is based on a Proof-Carrying Code (PCC) infrastructure, which consists of annotating the source code so that proofs can be checked before contract execution to verify their validity. Our method does not consist in annotating the \textit{Solidity} source code, but in writing the contract program itself, thus obtaining a correct-by-construction program. Another widespread approach is static analysis tools. One of them, Oyente, was developed to analyze Ethereum smart contracts and detect bugs. In the corresponding paper~\cite{luu2016making}, the authors ran Oyente on 19,366 existing Ethereum contracts and, as a result, the tool flagged 8,833 of them as vulnerable. Although that work provides interesting conclusions, it uses symbolic execution over program paths, so it does not allow proving functional properties of the entire application. We can also mention the work undertaken by the \textit{F*} community~\cite{bhargavan2016short}, which uses the F* functional programming language to translate \textit{Solidity} contracts into shallow-embedded F* programs, and, similarly, \cite{ahrendtverification}, where the authors perform static analysis by translating \textit{Solidity} contracts into Java using \textit{KeY}~\cite{ahrendt2016deductive}. The initiative of the current paper is directly related to a previous work~\cite{nehai}, which dealt with formally verifying smart contract applications using model checking. That paper established a methodology to construct a three-fold model of an Ethereum application, with properties formalized in the temporal logic CTL. However, because of the limitations of the model checker used, ambitious verification could not be achieved (e.g., a model for $m$ consumers and $n$ producers). The present work aims to surpass the limits encountered with model checking by applying a deductive proof approach to an Ethereum application using the \textit{Why3} tool. \section{Conclusions} In this paper, we applied concepts of deductive verification to a computer protocol intended to enforce transaction rules within an Ethereum blockchain application. The aim is to avoid errors that could have serious consequences. Reproducing, with \textit{Why3}, the behaviour of \textit{Solidity} functions showed that \textit{Why3} is suitable for writing and verifying smart contract programs. The presented method was applied to a use case that describes an energy market place allowing local energy trading among inhabitants of a neighbourhood. The resulting model establishes a trading contract that matches consumers with producers willing to make a transaction. In addition, this last point demonstrates that with a deductive approach it is possible to model and prove the operation of the BEMP application at realistic scale (e.g., matching $m$ consumers with $n$ producers), contrary to model checking in~\cite{nehai}, thus allowing more realistic functional properties to be verified. \newpage \bibliographystyle{splncs04}
\section{Introduction} Scene parsing based on semantic segmentation is a dense classification task for visual content analysis in image processing. The goal is to assign a class label to every single pixel in given images, i.e., to parse a scene into different geometric regions associated with semantic categories such as \emph{sky, road} and \emph{bicycles}. This topic has drawn broad research interest for many applications such as surveillance for security \cite{STAP:STIS}, robot sensing \cite{ICRA14:ROBOT_SENSING} and auto-navigation \cite{CVPR17:NAV}. The difficulty of unconstrained semantic segmentation mainly lies in the high variety of scenes and their associated labels. Some categories are semantically confusing due to spatial-semantic inconsistencies. For example, regions of ``pedestrians'' and ``riders'' are often indistinguishable, and ``cars'' are usually affected by visual scale, occlusion and illumination. Therefore, the spatial and the semantic information must be kept consistent to address this challenge. Furthermore, accurate label prediction at the pixel level requires high resolution of visual feature representations. For example, in the challenging Cityscapes dataset \cite{CVPR16:CITYSCAPES}, it is comparatively easy to segment large objects such as ``road'' and ``building'', but very difficult to localize and sketch the contours of small objects such as ``poles'' and ``traffic signs''. Recently, the development of deep convolutional neural networks has led to remarkable progress in scene parsing due to their powerful ability to represent local visual properties. Deep parsing networks are often fine-tuned from pre-trained classification networks, e.g., deep residual networks \cite{CVPR16:RESNET}. These classification networks usually stack convolution and down-sampling layers to obtain visual feature maps with rich semantics. The deeper-layer features with rich semantics are crucial for accurate classification, but come with reduced resolution and, in turn, spatial information loss. Such information loss is detrimental for scene parsing because it decreases the localization accuracy. On the other hand, the spatially sensitive feature maps in the shallow layers are rarely optimized due to the vanishing gradient. Thus, how to keep the spatial and semantic information consistent remains an open problem in scene parsing. To address this issue, several modifications to Fully Convolutional Networks (FCNs) \cite{CVPR15:FCN} have been made. For example, in encoder-decoder structures such as UNet \cite{MICCAI15:UNET}, the encoder maps the original images into low-resolution feature representations, while the decoder mainly restores the spatial information with skip-connections. Unfortunately, the missing geometric information cannot be fully restored. Another popular method that has been widely used in segmentation is the dilated (atrous) convolution \cite{ARXIV:DILATED}, which can enlarge the receptive field of the feature maps without adding computational overhead, so that more visual details are preserved. Combining the encoder-decoder structure with dilated convolution can effectively boost the pixel-wise prediction accuracy \cite{TPAMI:DEEPLAB}, but is extremely computationally demanding.
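As a brief aside, the resolution- and parameter-preserving effect of dilation can be checked in a few lines of PyTorch (a minimal illustration, not code from this paper):

\begin{verbatim}
import torch
import torch.nn as nn

# Two 3x3 convolutions with identical parameter counts: dilation widens
# the receptive field (3 -> 5 here) without extra parameters, and the
# padding keeps the spatial resolution unchanged.
conv_plain = nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1)
conv_dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

x = torch.randn(1, 64, 57, 57)
assert conv_plain(x).shape == conv_dilated(x).shape
assert (sum(p.numel() for p in conv_plain.parameters())
        == sum(p.numel() for p in conv_dilated.parameters()))
\end{verbatim}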
\begin{figure*}[t] \centering \begin{minipage}{0.4\textwidth} \includegraphics[width=1\textwidth]{figs/na.png} \subcaption{Sequential (no aggregation)} \label{FIG:NA} \end{minipage} ~ \begin{minipage}{0.4\textwidth} \includegraphics[width=1\textwidth]{figs/al.png} \subcaption{Auxiliary loss} \label{FIG:AL} \end{minipage} ~ \begin{minipage}{0.4\textwidth} \includegraphics[width=1\textwidth]{figs/ca.png} \subcaption{Skip-connection} \label{FIG:CA} \end{minipage} ~ \begin{minipage}{0.4\textwidth} \includegraphics[width=1\textwidth]{figs/da.png} \subcaption{Deep layer aggregation} \label{FIG:DA} \end{minipage} \caption{Different approaches to utilize multiple-layer outputs. (a) Sequential learning blocks without aggregation as default for supervised learning tasks. (b) Sequential learning blocks with an auxiliary loss branch. (c) Skip-connections through multiple learning blocks \cite{CVPR17:DENSENET}. (d) Deep layer aggregation proposed in \cite{CVPR18:DLA}. The ``learning block'' refers to simple or composite convolutional structures (e.g., residual block \cite{CVPR16:RESNET}). The ``aggregation'' here means either the residual-add or simple concatenation on the channel axis.} \label{FIG:4AGGREGATIONS} \end{figure*} In deep parsing networks, the shallow convolution features, i.e., the early convolution outputs, encode low-level spatial visual information such as edges, corners and circles. The high-level features in the deep blocks, on the other hand, carry more semantic information, including instance- and category-level evidence, but lack geometric information. The outputs of the middle convolution blocks carry miscellaneous spatial and semantic properties of images, forming the mid-level feature maps. In the optimization procedure, the gradients can easily pass through the deeper convolution blocks, but the weights in the shallower convolution blocks are rarely updated, leading to very slow convergence. At the same time, the feature representations output by different layers are complementary to each other and should be reconsidered for any dense prediction task. However, the effective aggregation of multiple feature outputs is rarely recognized as a critical step in model optimization. In this paper, we propose a novel feature aggregation module applied to multiple layer outputs of deep parsing networks to effectively capture long-range contextual information. In the proposed method, the module can auto-select the most useful feature maps to form a discriminative visual feature representation for dense prediction. Thus, with the neatly designed feature aggregation module, the proposed parsing network, named {\bf S}patial-semantic {\bf A}ggregation {\bf Net}work (SANet), can take the long-range contextual information of deep parsing networks into consideration, which has the following two advantages: (1) it uses multiple feature maps to form a strong supervision, in which the spatial-semantic properties are automatically correlated; and (2) the gradients are directly passed into multiple layers, making the parsing network easy to train. In the experiments, the proposed SANet achieves very promising performance on four widely used benchmark datasets, i.e., NYU Depth v2 \cite{ECCV12:NYU}, SUN RGB-D \cite{NIPS14:SUN}, ADE20K \cite{CVPR17:ADE20K} and Cityscapes \cite{CVPR16:CITYSCAPES}. The spatial-semantic feature aggregation module with the long-range contextual information paradigm can be extended to other image processing models to benefit spatial-aware learning tasks.
The rest of the paper is organized as follows. Section \ref{SEC:RELATED_WORK} introduces related work. Section \ref{SEC:METHOD} elaborates the proposed feature aggregation module as well as the learning framework of SANet for scene parsing. Experimental results and analysis are presented in Section \ref{SEC:EXP}. Finally, Section \ref{SEC:CONCLUSION} concludes the paper. \section{Related work} \label{SEC:RELATED_WORK} \subsection{Deep learning based parsing networks} Deep parsing models based on fully convolutional networks \cite{CVPR15:FCN} have achieved significant results on large-scale benchmark datasets \cite{CVPR16:CITYSCAPES,CVPR17:ADE20K}. Following the first deep parsing model FCN \cite{CVPR15:FCN}, DeconvNet \cite{CVPR15:DECONV}, SegNet \cite{TPAMI:SEGNET}, UNet \cite{MICCAI15:UNET} and their variants \cite{ARXIV:CONVCRF,CVPR17:GFRN,ARXIV:DECODER} adopt the encoder-decoder framework, where skip-connections are frequently used to refine segmentation masks. The aim of using skip-connections is to recover the geometric information represented in the early convolution blocks, but such aggregations fail to effectively select the most appropriate feature components because these are mixed with higher semantic information. Moreover, it is not clear how much information helps to preserve the geometric information and aggregate the semantics in the 2D feature maps. For scene parsing tasks, exploring the contextual information is beneficial for semantic understanding, which requires large receptive fields with adjacent pattern information in deep models. Based on this consideration, some methods use dilated convolutions to replace traditional convolution layers to enlarge the receptive field \cite{ARXIV:DILATED}, while others adopt graphical models with effective inference to analyze the surrounding visual information \cite{TPAMI:DAG}. The multi-scale processing technique is commonly used in many learning tasks such as visual localization and detection. Specific to semantic segmentation, employing multi-scale inputs \cite{CVPR17:REFINENET,CVPR16:ATTENTION_SCALE} or applying multi-scale aggregation \cite{CVPR17:PSPNET} are effective ways to improve model accuracy. In \cite{CVPR19:AUTODEEPLAB}, the authors proposed to use neural architecture search to discover deep model structures for semantic segmentation, which no longer depends on pre-trained deep models and can obtain satisfactory results. However, its generalization capability remains an issue because the architecture is searched on a specific validation dataset rather than a universal one, so it is hard to adapt to new data environments. \subsection{Effective utilization of multiple layer outputs} Deep neural networks have been extensively used for various image processing tasks. Specifically, deep networks pre-trained on large-scale datasets serve as stem networks for new learning tasks. The stem networks usually contain multiple learning blocks, such as the residual blocks in deep residual networks \cite{CVPR16:RESNET}. The densely connected network \cite{CVPR17:DENSENET} is a canonical architecture for semantic fusion, designed to better propagate features and losses through skip-connections that concatenate all the feature maps in a learning block. For spatial-aware learning tasks such as object detection, feature fusion \cite{CVPR17:FPN} is designed to equalize and standardize semantics across the levels of a pyramid feature hierarchy through top-down and lateral connections.
With powerful GPUs, the number of layers in the stem networks can easily be extended to several hundred or even more than a thousand, but the resulting performance does not increase linearly. In a deep convolution network, the stacked learning blocks are divided into stages according to the resolution and representation capacity of the feature maps. The feature map outputs in deeper stages contain more semantic information but are spatially coarser than those of shallow layers. Hence, further exploration is needed on how to connect these layers or learning blocks. In Figure \ref{FIG:4AGGREGATIONS}, we illustrate four feature inference schemes. The basic architecture (a) without layer aggregation or branching structure is widely used in supervised learning tasks \cite{CVPR16:RESNET,CVPR18:SENET, CVPR15:FCN, ARXIV:DILATED,NIPS15:FASTER_RCNN,CVPR15:CAP}. To address the spatial-aware learning task, deep semantic segmentation models \cite{MICCAI15:UNET,TPAMI:DEEPLAB} adopt skip-connections (c) across multiple learning stages, fusing the feature maps to restore the spatial information. The sequential inference with an auxiliary loss (b) was first used in \cite{CVPR15:GOOGLENET} for very deep neural networks to enhance the supervision, and was also used in PSPNet \cite{CVPR17:PSPNET} for semantic segmentation. In \cite{CVPR18:DLA}, the authors designed a multi-stage aggregation that does not discard any intermediate feature representation to improve the model performance, as illustrated in (d). However, the design of the aggregation relies on experience, and the basic sequential structure of the stem network has to be modified, which means the model needs to be trained from scratch. In our proposed aggregation structure, we design a spatial-semantic aggregation module to effectively aggregate the feature maps without changing the stem network structure. \section{Method} \label{SEC:METHOD} In this section, we start with the observation and analysis of skip-connections and auxiliary losses when applying an FCN to scene parsing, which motivates the design of the multi-layer feature aggregation module; then we give the details of the whole learning framework. \subsection{Multi-layer feature aggregation} \label{SUBSEC:MLFA} To effectively correlate the spatial and semantic information, the feature aggregation should retain the discriminative feature components and discard the useless ones. Specific to an FCN, the early convolution outputs have dual functionalities: describing the geometric properties and acquiring new knowledge in deeper stages. The features of the middle and late layers are less spatially aware but semantically indicative. To effectively use multiple feature maps from different layers of an FCN, skip-connections and auxiliary losses are commonly used. Skip-connections are usually used in deep learning-based scene parsing models such as DeepLab \cite{TPAMI:DEEPLAB} to correlate the spatial information in the early convolution layers with the semantics in the late feature outputs. Auxiliary losses, on the other hand, can help improve the discriminative power by adding extra output branches in intermediate layers, providing stronger supervision. To observe the effectiveness of skip-connections and auxiliary losses in an FCN, we conducted a simple indoor scene parsing experiment on two small datasets, NYU Depth v2 \cite{ECCV12:NYU} (40 classes) and SUN RGB-D \cite{NIPS14:SUN} (37 classes).
We used the ResNeXt-50 model \cite{CVPR17:RESNEXT} as the stem network, replacing the global average pooling layer and classification layer with two convolution layers to form a pixel-wise classifier for dense prediction. We selected the following layer outputs to skip-connect to the last feature map before the final convolution (pixel classifier): (a) the ReLU layer after the first convolution ({\bf s0}); (b) the final ReLU layers at each of the four stages before the resolution changes ({\bf s1}, {\bf s2}, {\bf s3} and {\bf s4}). Similarly, we added dense layers on these layer outputs as auxiliary losses to observe their effectiveness. In the training procedure, we used the IoU (intersection over union) to evaluate the performance on the validation (test) set. The results of different skip-connections and auxiliary losses are summarised in Table \ref{TB:SKIP}. \begin{table}[h] \centering \caption{Comparison of IoU of FCNs with different skip-connections ({\bf SC}) and auxiliary losses ({\bf AL}) on NYU Depth v2 and SUN RGB-D validation sets.} \label{TB:SKIP} \begin{tabular}{|c||c|c|c|c|} \hline \multirow{2}{*}{{\bf Layer}} & \multicolumn{2}{c|}{NYU Depth v2} & \multicolumn{2}{c|}{SUN RGB-D} \\ \cline{2-5} & SC &AL & SC &AL \\ \hline {\bf None} & \multicolumn{2}{c|}{40.2} & \multicolumn{2}{c|}{40.6} \\ \cline{2-5} {\bf s0} & 39.2 & 40.4 & 39.8 & 40.1 \\ {\bf s1} & 40.0 & 40.4 & {\bf 40.4} & 40.5 \\ {\bf s2} & 39.6 & {\bf 40.6} & 40.3 & {\bf 40.7} \\ {\bf s3} & 39.4 & 40.4 & 40.2 & 40.6 \\ {\bf s4} & {\bf 40.4} & 40.2 & 40.3 & 40.6 \\ \hline \end{tabular} \end{table} Inspecting the validation results in the table above, we can observe that some skip-connections or auxiliary losses improve the classification accuracy, while others do not. Furthermore, applying the two approaches to the early convolution output ({\bf s0} in the experiment) essentially decreases the IoU. For a given stem network with a very deep structure and a large-scale dataset, it is hard to select the proper intermediate layers to form the most appropriate feature representation. Based on such observations, we design a multi-layer feature selection method to fully utilize multiple feature maps and thus improve the scene parsing performance. Instead of concatenating multiple layer outputs as is done in skip-connections, or applying an auxiliary loss to an intermediate layer output, we design a nonlinear feature aggregation method spanning the whole stem network, without wasting any layer information, to better fit the underlying data distribution. We consider the multi-layer outputs of an FCN as a sequence of feature maps. Given the labels as supervision information, the global feature representation should well leverage the spatial and semantic properties for spatial-aware prediction. Therefore, the layer dependency in the sequence of feature maps should be modelled to learn the spatial-aware feature representation. Here we use the long short-term memory (LSTM) network as a feature selection function applied to the multi-layer outputs. Since an LSTM can model the long-term dependencies in a sequence, it acts as a feature selection function that forms the appropriate feature representation. The core part of the LSTM is a memory cell $\mathbf{c}_t$ at time step $t$ that records the history of the input sequence observed up to that time step, and the behaviour of the cell is controlled by an input gate $\mathbf{i}_t$, a forget gate $\mathbf{f}_t$ and an output gate $\mathbf{o}_t$.
In a fully-connected LSTM (FC-LSTM), these three gates are computed by affine mappings with non-linear activations, which control whether to forget the current cell value, whether to read the input, and whether to output a new cell value. The major limitation of FC-LSTM in modelling sequential data is the use of full connections in the input-to-state and state-to-state transitions, in which no spatial information is encoded. Moreover, the affine mappings use full weight matrices, leading to high computational complexity and over-fitting. To address this issue, we use the 2D convolutional LSTM \cite{NIPS15:CONVLSTM} (ConvLSTM) instead of FC-LSTM to aggregate the multi-layer feature outputs. Suppose the 2D feature outputs of multiple learning blocks are $\mathbf{x}_1, \ldots, \mathbf{x}_m$, where $m$ is the number of feature map candidates. The $t$-th feature map ($1\le t \le m$) is a 3D tensor, i.e., $\mathbf{x}_t\in\mathbb{R}^{w_t\times h_t\times c_t}$, where $w_t$, $h_t$ and $c_t$ are the width, height and number of channels, respectively. To feed the 2D feature outputs into the ConvLSTM, all feature maps are first converted to identically shaped tensors $\mathbf{x}'_1, \ldots, \mathbf{x}'_m$ having the same resolution and dimension. Assuming the shape of the target feature map is $w'\times h'\times c'$, the change of resolution from $w_t\times h_t$ to $w'\times h'$ is implemented by down-sampling, and the channel conversion from $c_t$ to $c'$ is computed by a $1\times 1$ convolution with no bias. When $c_t>c'$, the $1\times 1$ convolution acts as a dimensionality reduction; otherwise it expands the original features into a higher-dimensional feature space to better fit the high-dimensional non-linearity. Formally, the mapping from $\mathbf{x}_t \in \mathbb{R}^{w_t\times h_t\times c_t}$ to $\mathbf{x}'_t \in \mathbb{R}^{w'\times h'\times c'}$ is as follows: \begin{equation} \mathbf{x}'_t = \mathbf{w}_t * \psi(\mathbf{x}_t) \end{equation} where $*$ denotes the 2D convolution, $\mathbf{w}_t$ is the convolution kernel, and $\psi(\cdot)$ is the down-sampling operator\footnote{We use the bilinear interpolation on the 2D grids to adjust the feature resolution.}, respectively. \begin{figure*}[t!] \centering \includegraphics[width=0.8\textwidth]{figs/sanet_frm.png} \caption{Overview of the deep parsing framework with a feature aggregation module. Given an input image (a), we use a pre-trained deep model (b) as a stem network for feature inference. At the same time, we keep the output feature maps of multiple learning blocks and send them into a feature aggregation module (c) for spatial-semantic consistencies. Finally, the aggregated feature representation goes through a pyramid pooling module to get the final per-pixel prediction (d).} \label{FIG:FRM} \end{figure*} The feature map $\mathbf{x}'_t$ can be considered as $w'\times h'$ feature vectors standing on a spatial grid. The ConvLSTM learns the sequential state of a certain cell in the grid from the inputs and past states of its local neighbours, which is implemented by using a convolution operator in the state-to-state and input-to-state transitions. Similar to FC-LSTM, the input gate $\mathbf{i}_t$ controls whether the ConvLSTM considers the current input $\mathbf{x}'_t$, the forget gate $\mathbf{f}_t$ controls whether the ConvLSTM forgets the previous memory $\mathbf{c}_{t-1}$, and the output gate $\mathbf{o}_t$ controls how much information is read from the memory $\mathbf{c}_t$ into the current hidden state $\mathbf{h}_t$.
The computation stream of the ConvLSTM is as follows: \begin{align} \mathbf{i}_t & = \sigma(\mathbf{w}_{ix}*\mathbf{x}'_t+ \mathbf{w}_{ih}*\mathbf{h}_{t-1}+\mathbf{b}_i), \\ \mathbf{f}_t & = \sigma(\mathbf{w}_{fx}*\mathbf{x}'_t+\mathbf{w}_{fh}*\mathbf{h}_{t-1}+\mathbf{b}_f), \\ \mathbf{o}_t & = \sigma(\mathbf{w}_{ox}*\mathbf{x}'_t+\mathbf{w}_{oh}*\mathbf{h}_{t-1}+\mathbf{b}_o), \\ \mathbf{g}_t & = \tanh (\mathbf{w}_{gx}*\mathbf{x}'_t+\mathbf{w}_{gh} *\mathbf{h}_{t-1}+\mathbf{b}_g), \\ \mathbf{c}_t & = \mathbf{f}_t \circ \mathbf{c}_{t-1} + \mathbf{i}_t\circ \mathbf{g}_t, \\ \mathbf{h}_t &= \mathbf{o}_t \circ \tanh (\mathbf{c}_t), \end{align} where $\sigma(\cdot)$ is the sigmoid function and $\circ$ is the element-wise multiplication. The convolution kernels $\mathbf{w}_{*x}$ and $\mathbf{w}_{*h}$ are the ConvLSTM state and recurrent transformations, and the $\mathbf{b}_*$ are bias matrices. Given a sequence of feature maps $\mathbf{x}'_1, \ldots, \mathbf{x}'_m$ from $m$ learning blocks, the output of the 2D ConvLSTM is a sequence of tensors $\mathbf{h}_1, \ldots, \mathbf{h}_m$. The final feature map that correlates the spatial and semantic information is calculated as: \begin{equation} \mathbf{y}=\frac{1}{m}\sum\limits_{t=1}^{m} \mathbf{h}_t, \qquad (\mathbf{h}_1, \ldots, \mathbf{h}_m) = \text{ConvLSTM}(\mathbf{x}'_1, \ldots, \mathbf{x}'_m) . \end{equation} Considering the specific case of pixel-level classification, which requires larger receptive fields, a higher dilation rate (e.g., 2) is used in the ConvLSTM. By adding the ConvLSTM as a feature aggregation function on the multiple-layer outputs, the deep parsing structure gains the following three advantages. First, the feature aggregation module is able to keep the spatial-semantic consistencies: it is not a linear combination of multiple feature maps, but an extra learning module that acquires new knowledge and thus enhances the global feature representation. Second, the feature aggregation module uses multiple feature maps at different stages throughout the stem network as input, forming a strong supervision and making the network easier to train, because the gradients from the late layer for pixel-wise classification can be directly passed into the shallower learning blocks. Thus, compared to deep parsing networks such as \cite{CVPR17:PSPNET,CVPR19:DANET}, convergence is faster when applying the feature aggregation module. Third, the feature aggregation module can be directly inserted into any existing FCN pipeline; it does not significantly increase the computational overhead, yet enhances the feature representation capability. Recurrent modules have been widely used to improve performance by considering visual contextual information in visual pattern recognition works \cite{CVPR18:RNN_SEG,CVPR19:TDBU,ECCV18:AAF,TPAMI:DAG}. For example, the ConvLSTM has been used to help model motion \cite{ISBI19:ML_CONVLSTM} and spatial-temporal dependencies in video frames \cite{BMVC18:FSS_CONVLSTM}. In our work, we use the recurrent module from another perspective, namely multi-layer feature aggregation for spatial-semantic consistencies, which differs from those contexts and is the key contribution for spatial-aware learning tasks. In \cite{CVPR18:CCL}, the authors proposed multi-level context contrasted local features to aggregate multi-layer outputs. The key difference to our model is that they adopt a gated sum to control the information flow, while we use a recurrent model to auto-aggregate the features. The gated sum is computed multiple times, once for every two-layer aggregation, so for the whole learning framework it is less computationally efficient than our method.
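To make the aggregation concrete, the following is a minimal PyTorch sketch of the module described above (our own illustration, not the authors' released code; fusing the four gate convolutions into one and the zero initial state are implementation assumptions):

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    # 2D ConvLSTM cell; gates are computed by convolutions (Eqs. above).
    def __init__(self, channels, kernel_size=3, dilation=2):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2
        # One convolution yields all four gate pre-activations (i, f, o, g).
        self.gates = nn.Conv2d(2 * channels, 4 * channels, kernel_size,
                               padding=pad, dilation=dilation)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], 1)), 4, 1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class FeatureAggregation(nn.Module):
    # Align m stage outputs to (c', h', w'), run the ConvLSTM over them,
    # and average the hidden states: y = (1/m) * sum_t h_t.
    def __init__(self, in_channels, out_channels, out_size):
        super().__init__()
        self.out_size = out_size
        self.out_channels = out_channels
        # 1x1 convolutions (no bias) convert each stage to c' channels.
        self.proj = nn.ModuleList(
            nn.Conv2d(c, out_channels, 1, bias=False) for c in in_channels)
        self.cell = ConvLSTMCell(out_channels)

    def forward(self, feats):
        b = feats[0].shape[0]
        h = feats[0].new_zeros(b, self.out_channels, *self.out_size)
        c = torch.zeros_like(h)
        outs = []
        for proj, x in zip(self.proj, feats):
            # Bilinear resampling to the target grid, then 1x1 conv.
            x = proj(F.interpolate(x, self.out_size, mode="bilinear",
                                   align_corners=False))
            h, c = self.cell(x, h, c)
            outs.append(h)
        return torch.stack(outs).mean(0)
\end{verbatim}

For instance, with five stage outputs of 64, 256, 512, 1024 and 2048 channels, \texttt{FeatureAggregation([64, 256, 512, 1024, 2048], 512, (60, 60))} produces a single $512\times 60\times 60$ map per image.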
The gated sum is computed multiple times for every two layer aggregation, so for the whole learning framework it is less computationally efficient compared to our method. \subsection{The overall SANet learning architecture} We now present the architecture of spatial-semantic aggregation network (SANet). The overall framework is illustrated in Figure \ref{FIG:FRM}. In the proposed framework, the global feature representation is composed of two parallel pipelines. The first pipeline (b) is essentially a dilated FCN that consists of multiple learning blocks, during which both the feature resolution and feature dimensionality are changed. The second pipeline (c) is in parallel with (b), in which multiple feature maps are aggregated by the spatial-semantic feature selection module. The stem network of SANet used in our work is a 50-layer ResNeXt \cite{CVPR17:RESNEXT}. Compared with ResNet proposed in \cite{CVPR16:RESNET}, ResNeXt adopts the aggregated transformation by increasing the cardinality, which can improve the classification accuracy without increasing the width of the bottleneck block or the depth of the whole network. The ResNeXt-50 has four learning stages (blocks) after the early convolution. At each stage the number of channels doubles while the resolution halves. We choose the final ReLU activation before resolution changes, so five feature map outputs are selected as the intermediate feature candidates to feed into the spatial-semantic feature aggregation module. Given an input image (a) in Figure \ref{FIG:FRM}, it has two paths to arrive at the final feature representation prior. The first path is the sequential learning blocks of ResNeXt-50 in (b), and the second path is the feature map conversion and a spatial-semantic feature aggregation module in (c), respectively. We then apply a pyramid feature module (PSP) \cite{CVPR17:PSPNET} as a global feature representation for the pixel-label prediction of the objectives. We do not apply auxiliary loss to any intermediate layer to enhance the discriminative power because the multi-layer feature aggregation module can already conduct the feature selection and pass the gradients into multiple layers in the back-forward optimization. \begin{table*}[t!] \centering \caption{Computaional complexity analysis.} \label{TB:FLOPS} \begin{tabular}{|c||c|c|c|c|c|} \hline {\bf Method} & {\bf Stem network} & {\bf Input size} & {\bf Output} & {\bf FLOPs} & {\bf Memory}\\ \hline FCN-101 & ResNet-101 & $512\times512$ & 1/8 & $1.04\times10^8$ & 3.25G \\ PSPNet-50 \cite{CVPR17:PSPNET} & ResNet-50 & $473\times473$ & 1/8 & $0.93\times10^8$ & 1.96G \\ PSPNet-101 \cite{CVPR17:PSPNET} & ResNet-101 & $473\times473$ & 1/8 & $1.31\times10^8$ & 3.38G \\ DeepLab v3+ \cite{TPAMI:DEEPLAB} & Xception & $512\times512$ & 1/8 & $0.82\times10^8$ & 5.14G \\ RefineNet \cite{CVPR17:REFINENET} & ResNet-101 & $512\times512$ & 1/4 & $2.61\times10^8$ & 1.92G \\ SANet (ours) & ResNeXt-50 & $473\times473$ & 1/8 &$1.65\times10^8$ & 2.68G \\ \hline \end{tabular} \end{table*} \subsection{Computational cost analysis} For deep learning-based models, the computational cost is mainly measured by FLOPs and memory usage of GPU. We compare our SANet based on ResNeXt-50 with some recent deep parsing networks and show the breakdown analysis in Table \ref{TB:FLOPS}. When the input image resolution is $473\times 473$, our model has a moderate computational cost. The parameter efficiency of SANet is similar to PSPNet-101. 
On the other hand, SANet needs less GPU memory than PSPNet with ResNet-101, at the cost of slightly more floating-point operations. In general, it is worth the extra FLOPs in a deep parsing network to improve the accuracy of dense prediction. \section{Experiments}\label{SEC:EXP} To evaluate the effectiveness of the proposed spatial-semantic aggregation model for scene parsing, we conducted comprehensive experiments on the NYU Depth v2 \cite{ECCV12:NYU}, SUN RGB-D \cite{NIPS14:SUN}, Cityscapes \cite{CVPR16:CITYSCAPES} and ADE20K \cite{CVPR17:ADE20K} datasets. In this section, we first briefly introduce the datasets and experimental settings, and then report the results on the four datasets. \subsection{Datasets} {\bf NYU Depth v2 \& SUN RGB-D}: These two datasets are both indoor scene understanding benchmarks. The pixel-wise classes come from a variety of indoor scenes recorded by RGB-D cameras with different sensors. We use the standard training/testing splits. The depth images are not used to enhance performance. Partial experimental results have been shown and analyzed in Section \ref{SUBSEC:MLFA}. {\bf ADE20K}: This is a challenging dataset with more than 20K scene images, which has 150 classes of dense labels. The testing set has not been released, so we use the 2,000 validation images for evaluation. {\bf Cityscapes}: This is a street-view dataset taken from 50 European cities, which provides fine-grained pixel-level annotations of 19 classes including buildings, pedestrians, bicycles, cars, etc. The training/validation/testing splits contain 2,975, 500 and 1,525 images, respectively. We do not use the 20,000 coarsely labelled images to pre-train the model. \subsection{Implementation details} Our implementation is based on PyTorch. Specifically, we applied the following settings in the experiments: \begin{itemize} \item To balance pixel redundancy against output resolution, we removed the down-sampling operations in the last two learning stages (blocks) of ResNeXt-50 to preserve more visual details without adding extra parameters. Thus, the size of the final feature map is 1/8 of the input image. \item We set the dilation rate to 2 for all the $3\times3$ convolutions with stride 1 to enlarge the receptive field in the third convolution block. Similarly, we set the dilation rate to 4 in the fourth convolution block. Such a setting improves the segmentation performance without adding computational complexity. \item The skip-connection is used multiple times in our SANet. We used the first convolution feature map before the residual blocks as an input of the feature aggregation module. Due to limited GPU resources, we could only use a batch size of at most 8 in the training procedure. \end{itemize} We added the pyramid pooling module proposed in \cite{CVPR17:PSPNET} at multi-scale spatial levels to augment the global feature representation prior. To better exploit the contextual information, we employed five bin sizes: $60\times60$, $30\times30$, $20\times20$, $15\times15$ and $10\times10$. Such a setting is beneficial for parsing very small objects or stuff by considering contextual information at multiple scales. After that, a $3\times3$ convolution and an up-sampling were applied to spatially adjust the feature size. \subsection{Evaluation metric and experimental settings} We used {\em intersection-over-union} (IoU) and {\em pixel accuracy} to measure the parsing quality of the models in the experiments.
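For reference, both metrics can be accumulated over a whole dataset with a single confusion matrix; the following is a small illustrative sketch (the helper names and the ignore label 255 are our assumptions, not specifics of the benchmarks):
\begin{verbatim}
import numpy as np

def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    # Accumulate a num_classes x num_classes confusion matrix over
    # label arrays; rows = ground truth, columns = prediction.
    mask = gt != ignore_index
    idx = num_classes * gt[mask].astype(np.int64) + pred[mask].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2) \
             .reshape(num_classes, num_classes)

def pixel_accuracy_and_miou(conf):
    # Pixel accuracy: correctly labelled pixels over all labelled pixels.
    pixel_acc = np.diag(conf).sum() / conf.sum()
    # Per-class IoU = TP / (TP + FP + FN), averaged over observed classes.
    union = conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf)
    iou = np.diag(conf)[union > 0] / union[union > 0]
    return pixel_acc, iou.mean()
\end{verbatim}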
In the training stage, we used horizontal flipping, random scaling and contrast normalization as image augmentation to further improve the generalization ability of the model. We used the AdamW \cite{ICLR19:ADAMW} optimizer with an initial learning rate of $10^{-5}$ and followed \cite{TPAMI:DEEPLAB} for the learning rate scheduling. In the training process, the best models were checkpointed by the minimum categorical cross-entropy loss on the NYU Depth v2 and SUN RGB-D datasets. On the ADE20K and Cityscapes datasets, we used the recent lov{\'a}sz-softmax loss \cite{CVPR18:LOVAZ_LOSS} to further optimize the IoU. We applied multi-scale prediction with a single model in the inference procedure on all datasets, but did not use CRF post-processing \cite{TPAMI:DEEPLAB} to fine-tune the pixel labels after the categorical probability estimation. \subsection{Results} \subsubsection{Results on NYU Depth v2 and SUN RGB-D datasets} One of the nice properties of the proposed multi-layer feature aggregation is that the gradient can be passed to multiple layers simultaneously, making the deep parsing network easy to train. When training the FCN on the NYU Depth v2 dataset, we recorded the mean accuracy values for the first 80 training epochs on the validation set, as shown in Fig. \ref{FIG:CONV}. Compared to FCN models with skip-connections and auxiliary losses, the proposed feature aggregation module can simultaneously pass the gradients into multiple learning blocks, including the shallower convolution blocks, which improves the optimization efficiency for dense predictions. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{figs/conv.png} \caption{The mean accuracy curves on the NYU Depth v2 validation set. The feature aggregation is based on a dilated FCN. Due to the multiple skip-connections, the proposed feature aggregation can effectively accelerate the training of the deep scene parsing model.} \label{FIG:CONV} \end{figure} We first conducted a component analysis of SANet on the NYU Depth v2 dataset. The proposed SANet is essentially a composition of a stem network (ResNeXt-50), a multi-layer feature aggregation module and a pyramid pooling module. In the training process, the best models were checkpointed by the minimum categorical cross-entropy, and the final evaluation on the testing set is summarized in Table \ref{TB:COMP}. Applying a pyramid pooling module on an FCN slightly improves the parsing performance. If the stem network is further augmented by the feature aggregation module, the accuracy is boosted further. In particular, the much lower loss value (categorical cross-entropy in our case) indicates that the model produces more reliable parsing results. \begin{table}[h] \centering \caption{Component analysis of SANet on NYU Depth v2 testing set.} \label{TB:COMP} \begin{tabular}{|c|c|c|c|} \hline {\bf Method} &PSP & Feature aggregation &Pixel accuracy \\ \hline ResNet-50 & & &71.8 \\ ResNeXt-50 & & &72.5 \\ ResNeXt-50 &\checkmark & &74.3 \\ ResNeXt-50 &\checkmark &\checkmark &{\bf 75.9} \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Scene parsing results on NYU Depth v2 and SUN RGB-D testing sets.} \label{TB:NYU-SUN} \begin{tabular}{|c|cc|cc|} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{NYU Depth v2} & \multicolumn{2}{c|}{SUN RGB-D} \\ \cline{2-5} &Pixel Acc. &IoU &Pixel Acc. &IoU \\ \hline SegNet \cite{TPAMI:SEGNET} &- &- &72.6 &31.8 \\ SEGCloud \cite{IC3DV17:SEFCLOUD} &- &43.5 &- &- \\ Lin et al.
\cite{CVPR16:PIECEWISE} &70.0 &40.6 &78.4 &42.3 \\ RefineNet\cite{CVPR17:REFINENET} &74.4 &47.6 &81.1 &47.0 \\ MSCI \cite{ECCV18:MCCI} &- &49.0 &- &50.4 \\ Pad-Net \cite{CVPR18:PAD_NET} &75.2 &50.2 &- &- \\ CCL \cite{CVPR18:CCL} &- &- &81.4 &47.1 \\ \hline SANet (ours) &{\bf 75.9} &{\bf 50.7} &{\bf 82.3} &{\bf 51.5} \\ \hline \end{tabular} \end{table} We show the quantitative results on the two indoor scene parsing datasets in Table \ref{TB:NYU-SUN}. Even though we did not incorporate the depth images in the training process, the proposed SANet reaches the best performance in most cases. Since our pyramid pooling module is the same as that of PSPNet, the gains can be attributed to the proposed spatial-semantic feature aggregation module, which improves the pixel accuracies over the previous best results by 0.7\% and 0.9\%, and boosts the IoU values by 0.5\% and 1.1\% on the two datasets, respectively. \subsubsection{Results on ADE20K dataset} \begin{table}[t] \centering \caption{Results on ADE20K validation set.} \label{TB:ADE20K} \begin{tabular}{|c|cc|} \hline {\bf Method} &Pixel Acc. &IoU \\ \hline PSPNet\cite{CVPR17:PSPNET} &81.4 &43.3 \\ SAC\cite{ICCV17:SAC} &81.9 &44.3 \\ RefineNet\cite{CVPR17:REFINENET} &- &42.4 \\ PSANet\cite{ECCV18:PSANET} &81.5 &43.8 \\ EncNet\cite{CVPR18:ENCNET} & 81.7 & 44.7 \\ \hline SANet (ours) &{\bf 82.1} &{\bf 44.8} \\ \hline \end{tabular} \end{table} \begin{figure*}[t]\centering \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000552.jpg} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000552_gt.png} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000552_pspnet.jpg} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000552_sanet.png} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000931.jpg} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000931_gt.png} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000931_pspnet.jpg} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000931_sanet.png} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000746.jpg} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000746_gt.png} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000746_pspnet.jpg} \end{minipage} \begin{minipage}{0.2\textwidth} \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000746_sanet.png} \end{minipage} \begin{minipage}{0.2\textwidth}\centering \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000777.jpg} (a) Image \end{minipage} \begin{minipage}{0.2\textwidth}\centering \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000777_gt.png} (b) GT \end{minipage} \begin{minipage}{0.2\textwidth}\centering \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000777_pspnet.jpg} (c) PSPNet \end{minipage} \begin{minipage}{0.2\textwidth}\centering \includegraphics[width=1\textwidth]{figs/ade20k/ADE_val_00000777_sanet.png} (d) SANet \end{minipage} \caption{Scene parsing examples on ADE20K validation set.} \label{FIG:ADE20K} \end{figure*} We experimented on the large-scale ADE20K dataset to verify the
effectiveness of the proposed SANet. Comparisons with some recently proposed methods on the validation set are reported in Table \ref{TB:ADE20K}. Our model achieves 82.1\% in pixel accuracy and 44.8\% in IoU, the best performance among these methods. Some example scene parsing results in both indoor and outdoor environments are illustrated in Fig. \ref{FIG:ADE20K}. \subsubsection{Results on Cityscapes dataset} For this dataset, the ground truth of the test images is withheld by the organizers, so all methods can only be evaluated by submitting results to the evaluation server. The overall comparisons of our model with some recently proposed methods are summarized in Table \ref{TB:CITYSCAPES}. Among the test results on the evaluation server, our proposed SANet outperforms all previous methods in terms of class IoU. The per-class scene parsing results are reported in Table \ref{TB:PER-CITYSCAPES}. From the table, we can see that even without pre-training on the 20,000 coarsely labelled images, our method still achieves very promising results. We also illustrate some example results for scene parsing visualization in Figure \ref{FIG:CITYSCAPES}. Since our model automatically leverages the spatial and semantic information from multi-layer feature maps, some small objects (such as traffic signs) are accurately segmented by SANet, which demonstrates that the proposed feature aggregation module handles spatial-aware learning tasks well. \begin{table*}[t] \centering \small \caption{Per-class results on Cityscapes testing set.} \label{TB:PER-CITYSCAPES} \setlength{\tabcolsep}{4pt} \renewcommand{\arraystretch}{0.5} \begin{tabular}{|c|ccccccccccccccccccc|} \hline {\bf Method} &\rot{road} &\rot{sidewalk} &\rot{building} &\rot{wall} &\rot{fence} &\rot{pole} &\rot{traffic light} &\rot{traffic sign} &\rot{vegetation} &\rot{terrain} &\rot{sky} &\rot{person} &\rot{rider} &\rot{car} &\rot{truck} &\rot{bus} &\rot{train} &\rot{motorcycle} &\rot{bicycle}\\ \hline SegNet\cite{TPAMI:SEGNET} &96.4 &73.2 &84.0 &28.5 &29.0 &35.7 &39.8 &45.2 &87.0 &63.8 &91.8 &62.8 &42.8 &89.3 &38.1 &43.1 &44.1 &35.8 &51.9 \\ LDN-121\cite{ICCV17:LADDER} &97.4 &80.2 &92.0 &47.6 &53.9 &64.6 &72.8 &76.3 &92.8 &66.4 &95.5 &83.8 &66.1 &94.3 &55.6 &70.3 &67.0 &62.1 &73.0\\ ResNet-38\cite{PR:WD_RESNET} &98.5 &85.7 &93.1 &55.5 &59.1 & 67.1 &74.8 &78.7 &93.7 &72.6 &95.5 &86.6 &69.2 &95.7 &64.5 &78.8 &74.1 &69.0 &76.7 \\ SAC\cite{ICCV17:SAC} &98.7 &86.5 &93.1 &56.3 &59.5 &65.1 &73.0 &78.2 &93.5 &72.6 &95.6 &85.9 &70.8 &95.9 &71.2 &78.6 &66.2 &67.7 &76.0 \\ Alex et al.\cite{CVPR18:MT} &98.4 &85.2 &92.8 &54.1 &60.8 &62.4 &73.4 &77.5 &93.3 &71.5 &95.1 &84.9 &69.5 &95.3 &68.5 &86.2 &80.0 &67.8 &75.6 \\ RefineNet\cite{CVPR17:REFINENET} &98.2 &83.3 &91.3 &47.8 &50.4 &56.1 &66.9 &71.3 &92.3 &70.3 &94.8 &80.9 &63.3 &94.5 &64.6 &76.1 &64.3 &62.2 &70.0 \\ PSPNet\cite{CVPR17:PSPNET} &98.6 &86.6 &93.2 &58.1 &63.0 &64.5 &75.2 &79.2 &93.4 &72.1 &95.1 &86.3 &71.4 &96.0 &73.5 &90.4 &80.3 &69.9 &76.9 \\ AAF \cite{ECCV18:AAF} &98.5 &85.6 &93.0 &53.8 &59.0 &65.9 &75.0 &78.4 &93.7 &72.4 &95.6 &86.4 &70.5 &95.9 &73.9 &82.7 &76.9 &68.7 &76.4 \\ DANet \cite{CVPR19:DANET} &98.6 &86.1 &93.5 &56.1 &{\bf 63.3} &{\bf 69.7} &{\bf 77.3} &{\bf 81.3} &{\bf 93.9} &72.9 &95.7 &87.3 &{\bf 72.9} &96.2 &{\bf 76.8} &{\bf 89.4} &{\bf 86.5} &{\bf 72.2} &{\bf 78.2} \\ \hline SANet (ours) &{\bf 98.7} &{\bf 87.1} &{\bf 93.6} &{\bf 61.6} &62.4 &68.1 &75.9 &79.5 &93.8 &{\bf 73.1} &{\bf 95.8} &{\bf 87.3} &71.5 &{\bf 96.2} &71.9
&88.1 &86.1 &69.4 &77.2 \\ \hline \end{tabular} \end{table*} \begin{table}[t] \centering \caption{Overall results on Cityscapes testing set.} \label{TB:CITYSCAPES} \begin{tabular}{|c|cccc|} \hline {\bf Method} &IoU cla. &iIoU cla. &IoU cat. &iIoU cat. \\ \hline LDN-121\cite{ICCV17:LADDER} &74.3 &51.6 &89.7 &79.5 \\ SAC\cite{ICCV17:SAC} &78.1 &55.2 &90.6 &78.3 \\ RefineNet\cite{CVPR17:REFINENET} &73.6 &47.2 &87.9 &70.6 \\ Yu et al. \cite{CVPR18:DLA} &75.9 &- &- &- \\ Alex et al.\cite{CVPR18:MT} &78.5 &57.4 &89.9 &77.7 \\ AAF \cite{ECCV18:AAF} &79.1 &56.1 &90.8 &78.5 \\ PSPNet\cite{CVPR17:PSPNET} &78.4 &56.7 &90.6 &78.6 \\ PSANet\cite{ECCV18:PSANET} &78.6 &- &- &- \\ Pad-Net\cite{CVPR18:PAD_NET} &80.3 &58.8 &90.8 &78.5 \\ \hline SANet (ours) &{\bf 80.9} &{\bf 59.6} &{\bf 91.4} &{\bf 80.2} \\ \hline \end{tabular} \end{table} \begin{figure*}[t]\centering \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_original/frankfurt_000001_013016_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_gt/frankfurt_000001_013016_gtFine_color.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_pspnet/frankfurt_000001_013016_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_seg/frankfurt_000001_013016_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_original/frankfurt_000000_009688_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_gt/frankfurt_000000_009688_gtFine_color.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_pspnet/frankfurt_000000_009688_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_seg/frankfurt_000000_009688_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_original/lindau_000014_000019_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_gt/lindau_000014_000019_gtFine_color.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_pspnet/lindau_000014_000019_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{figs/cityscapes_seg/lindau_000014_000019_leftImg8bit.png} \end{minipage} \begin{minipage}{0.23\textwidth}\centering \includegraphics[width=1\textwidth]{figs/cityscapes_original/munster_000061_000019_leftImg8bit.png} (a) Image \end{minipage} \begin{minipage}{0.23\textwidth}\centering \includegraphics[width=1\textwidth]{figs/cityscapes_gt/munster_000061_000019_gtFine_color.png} (b) GT \end{minipage} \begin{minipage}{0.23\textwidth}\centering \includegraphics[width=1\textwidth]{figs/cityscapes_pspnet/munster_000061_000019_leftImg8bit.png} (c) PSPNet \end{minipage} \begin{minipage}{0.23\textwidth}\centering \includegraphics[width=1\textwidth]{figs/cityscapes_seg/munster_000061_000019_leftImg8bit.png} (d) SANet \end{minipage} \caption{Scene parsing examples on Cityscapes validation set.} \label{FIG:CITYSCAPES} \end{figure*} \subsection{Discussions} The goal of the proposed spatial-semantic feature aggregation module is to learn better feature maps for dense classification. 
Furthermore, due to the proposed feature aggregation module, the deep parsing network is much easier to train, because the gradients can be passed to multiple learning blocks, leading to a faster training process. The proposed feature aggregation module is therefore an attractive alternative to simple skip-connections or auxiliary losses in deep scene parsing models. The experimental results have demonstrated the effectiveness of our module. In some other spatial-aware learning tasks such as object detection \cite{NIPS15:FASTER_RCNN,CVPR18:RN} and tracking \cite{CSR:TRACK}, a global feature representation prior that leverages both spatial and semantic information usually yields superior performance, so it is worth applying the memory cell to multiple feature outputs of deep neural networks to improve accuracy. \section{Conclusion}\label{SEC:CONCLUSION} We have presented a novel spatial-semantic feature selection module for supervised scene parsing. The extra module can select the useful components of multiple feature outputs and aggregate them to form a discriminative global feature representation for accurate pixel-label prediction. Integrating the feature selection module into deep parsing networks also forms a strong supervision that effectively suppresses over-fitting, making the parsing network easy to train. Extensive experiments with very promising results on four public scene parsing datasets demonstrate the effectiveness of our SANet. We believe the proposed spatial-semantic feature aggregation module can also benefit related spatial-aware learning techniques in the community. \bibliographystyle{IEEEtran}
\section{Introduction} The aim of this paper is to provide some remarks on the uniformizations of $\delta$-Gromov hyperbolic spaces developed by Bonk, Heinonen and Koskela in \cite{BHK}. For a $\delta$-Gromov hyperbolic space $(X, d)$ with a distinguished point $p \in X$, they introduced the metric defined by \begin{equation}\label{uniformization metric} d_{\epsilon}(x, y):= \inf\limits_{\gamma}\int_{\gamma}\rho_{\epsilon} \,ds, \end{equation} where $\rho_{\epsilon}(\cdot)=e^{-\epsilon d(p, \ \cdot \ )}$ and the infimum is taken over all rectifiable curves $\gamma$ from $x$ to $y$. The metric space $(X, d_{\epsilon})$ is called \emph{a uniformized space of $X$} and is denoted simply by $X^{\epsilon}$. This uniformization technique has led to numerous applications, see \cite{BB}, \cite{BHK} and \cite{BBS} and the references therein. Another uniformization procedure, using Busemann functions, has also been established, see \cite{B} and \cite{Z}. It was shown in \cite[Proposition 4.5]{BHK} that if a $\delta$-Gromov hyperbolic space $(X, d)$ is uniformized with a sufficiently small parameter $\epsilon>0$, then $X^{\epsilon}$ is a uniform space. The key to proving this result was the following Gehring-Hayman theorem. \begin{theorem}(\cite[Section 5]{BHK})\label{GH theorem} Let $(X, d)$ be a $\delta$-Gromov hyperbolic space. Then there exists $\epsilon(\delta)>0$ such that for every $0<\epsilon \leq \epsilon(\delta)$, there exists $A>0$ such that for each pair of points $x, y \in X^{\epsilon}$, \begin{equation}\label{GH inequality} l_{d_{\epsilon}}([x, y]) \leq A d_{\epsilon}(x, y), \end{equation} where $[x, y]$ is a geodesic curve with respect to $d$ from $x$ to $y$ and $l_{d_{\epsilon}}([x, y])$ denotes the length of $[x, y]$ with respect to $d_{\epsilon}$. \end{theorem} In this paper we say that \emph{the Gehring-Hayman theorem holds for $X^{\epsilon}$} if \eqref{GH inequality} holds for all $x, y \in X^{\epsilon}$. We prove that $X^{\epsilon}$ being uniform implies the Gehring-Hayman theorem for $X^{\epsilon}$ through quasihyperbolization, see Subsection \ref{From uniformity to GH}. We also study the following two localized Gehring-Hayman properties. \begin{definition}[Localized Gehring-Hayman property]\label{GH for Gromov sequences} We say that the \emph{Gehring-Hayman property for Gromov sequences} holds if for each Gromov sequence $(x_n)_n$, there exists $C \geq 1$ such that for all $n, m \in \mathbb{N}$ \begin{equation}\label{GH for sequences} l_{d_{\epsilon}}([x_n, x_m]) \leq C d_{\epsilon}(x_n, x_m). \end{equation} Note that the constant $C$ may depend on $(x_n)_n$. See Remark \ref{equivalence relation for Gromov sequences} for the equivalence relation among Gromov sequences. We also say that the \emph{Gehring-Hayman property for metric boundary points} holds if for all sequences $(x_n)_n$ and points $x \in \partial_{d_{\epsilon}} X^{\epsilon}$ such that $d_{\epsilon}(x_n, x) \to 0$ as $n \to \infty$, there exists $C \geq 1$ such that \eqref{GH for sequences} holds for all $n, m \in \mathbb{N}$. \end{definition} It is clear that the original Gehring-Hayman theorem implies the localized Gehring-Hayman properties and the bijectivity of the canonical boundary map $\Phi : \partial_{G} X \to \partial_{d_{\epsilon}} X^{\epsilon}$, see \cite[Proposition 4.13]{BHK} and also Definition \ref{Def of boundary map} for the construction of the map $\Phi$. We obtain further characterizations of the Gehring-Hayman theorem from these properties.
The first two conditions in Theorem \ref{GH decomposition 1} can be seen as a boundary-pointwise decomposition of the Gehring-Hayman theorem. The following is the list of conditions equivalent to the Gehring-Hayman theorem. \begin{theorem}\label{GH decomposition 1} Let $(X, d)$ be an $M$-roughly starlike $\delta$-Gromov hyperbolic space and $X^{\epsilon}$ be the uniformized space with $\epsilon>0$. Then the following are equivalent. \begin{enumerate} \item The Gehring-Hayman property for metric boundary points holds. \item The canonical boundary map $\Phi : \partial_{G} X \to \partial_{d_{\epsilon}}X^{\epsilon}$ is bijective and the Gehring-Hayman property for Gromov sequences holds. \item The Gehring-Hayman theorem holds. \item $X^{\epsilon}$ is a uniform space. \end{enumerate} \end{theorem} We remark that the equivalence among the first three conditions in Theorem \ref{GH decomposition 1} holds without the roughly starlike property. An immediate consequence of Theorem \ref{GH decomposition 1} together with \cite[Proposition 4.12]{B} is the following. \begin{corollary} Let $(X, d)$ be an $M$-roughly starlike $\delta$-Gromov hyperbolic space and $X^{\epsilon}$ be the uniformized space with $\epsilon>0$. Suppose that $X^{\epsilon}$ is a uniform space. Then $X^{\epsilon'}$ is a uniform space for all $0<\epsilon'\leq \epsilon$. \end{corollary} Due to Theorem \ref{GH decomposition 1}, looking at the boundary behavior makes it easy to check that $X^{\epsilon}$ is not a uniform space. Using the results in \cite{B} and \cite{BBS}, we determine the sharp uniformization parameter for the uniformized space to be a uniform space in the case of the hyperbolic spaces, the model spaces $\mathbb{M}^{\kappa}_n$ of sectional curvature $\kappa<0$ with dimension $n \geq 2$, and hyperbolic fillings, see Section~\ref{examples}. We note that in the case of the hyperbolic spaces, Butler has already shown the same result by a different argument, see \cite[Remark~1.11]{B}. He employed results on the asymptotic curvature upper bound \cite[Definition~1.1 and Theorem~1.5]{BF} and the fact that the Gehring-Hayman theorem induces a visual metric on the Gromov boundary $\partial_{G}X$. Our argument is more elementary in that we only use the polar coordinate expression of the hyperbolic metric. \subsection*{Acknowledgment} The authors thank Prof. Nageswari Shanmugalingam for introducing the topic and for discussions on it. Qingshan Zhou was partly supported by NNSF of China (Nos. 11901090 and 12071121), by the Department of Education of Guangdong Province, China (No. 2021KTSCX116), and by the Guangdong Basic and Applied Basic Research Foundation (No. 2021A1515012289). \section{Preliminaries} We fix some notation. Let $(X, d)$ be a metric space. We say that $(X, d)$ is proper if all bounded closed sets are compact. For $x \in X$ and $r>0$, we set $B_d(x,r):=~\{y\in X\, |\, d(x,y)<r\}$. We also set $\text{dist}_{d}(x, A):=~\inf_{y \in A}d(x, y)$ for any $A \subseteq X$ and $x \in X$. The minimum of $n$ real numbers $(a_i)_{i=1}^{n}$ is denoted by $a_1\wedge \cdots \wedge a_n$. The \emph{length} of a curve $\gamma$ is denoted by $l_d(\gamma)$. A curve $\gamma$ is said to be \emph{rectifiable} if $l_d(\gamma) <\infty$. For $a, b\in(-\infty, \infty)$ with $a<b$, we say that a curve $\gamma : [a, b] \to X$ is \emph{geodesic} if $d(\gamma(a), \gamma(b))=l_d(\gamma)$. A geodesic curve from $x$ to $y$ is often denoted by $[x, y]$.
A curve $\gamma : [0, \infty) \to X$ is called a \emph{geodesic ray} if $\gamma|_{[0, t]}$ is geodesic for every $t >0$. We first introduce the $M$-roughly starlike property. \begin{definition}[$M$-roughly starlike property] Let $M\geq 0$. We say that a metric space $(X, d)$ is \emph{$M$-roughly starlike} if there exists a base point $p \in X$ such that for every $x \in X$, there exists a geodesic ray $\gamma : [0, \infty) \to X$ with $\gamma(0)=p$ satisfying $\text{dist}_{d}(x, \gamma) \leq M$. \end{definition} We next define $\delta$-Gromov hyperbolic spaces and the Gromov boundary. \begin{definition}[Gromov product] Let $(X, d)$ be a metric space. The \emph{Gromov product} of two points $x, y \in X$ with respect to $p \in X$ is defined by \[ (x|y)_{p}:= \frac{1}{2}(d(p, x)+d(p, y)-d(x, y)). \] \end{definition} \begin{definition}[Gromov hyperbolic space] Let $(X, d)$ be a metric space and $\delta \geq 0$. We say that $X$ is a \emph{$\delta$-Gromov hyperbolic space} if $X$ is an unbounded, proper, geodesic metric space such that for all $x, y, z, p \in X$, \[ (x|z)_{p}\geq (x|y)_{p}\wedge(y|z)_{p}-\delta. \] \end{definition} \begin{definition}[Gromov boundary] We say that two geodesic rays $\gamma$ and $\tilde{\gamma}$ in $X$ with $\gamma(0)=\tilde{\gamma}(0)=p$ are equivalent if $\sup_{t \geq 0}d(\gamma(t), \tilde{\gamma}(t))$ is finite. The quotient of the set of all geodesic rays emanating from $p \in X$ by this equivalence relation is denoted by $\partial_{G}X$. \end{definition} \begin{remark}\label{equivalence relation for Gromov sequences} Let $(X, d)$ be a $\delta$-Gromov hyperbolic space and $p \in X$ a fixed point. There is another construction of the Gromov boundary, through Gromov sequences. Recall that $ (x_n)_n \subseteq X$ is called a Gromov sequence if $(x_n| x_m)_{p} \to \infty$ as $n, m \to \infty$. We consider the quotient of the set of all Gromov sequences, where the equivalence relation $\sim$ between two Gromov sequences $(x_n)_n$ and $(y_n)_n$ is given by $(x_n | y_n)_{p} \to \infty$ as $n \to \infty$. There is a canonical bijective map between the above two Gromov boundaries, see \cite[Lemma~3.13]{BH}. \end{remark} We next review a Harnack type inequality for the uniformization developed by Bonk-Heinonen-Koskela \cite{BHK}. Let $(X, d) $ be a metric space and $p \in X$. For any $\epsilon>0$, the \emph{Harnack type inequality} \begin{equation}\label{Harnack} e^{-\epsilon d(x, y)} \leq \frac{e^{-\epsilon d(p,x)}}{e^{-\epsilon d(p,y)}} \leq e^{\epsilon d(x, y)} \end{equation} holds for every $x, y \in X$, see \cite[Chapter 5]{BHK}. We next recall the construction of the canonical boundary map $\Phi$. \begin{definition}[Canonical boundary map $\Phi$]\label{Def of boundary map} Let $(X, d)$ be a $\delta$-Gromov hyperbolic space and $p \in X$ a fixed base point for the uniformization with parameter $\epsilon>0$. Let $\partial_{d_{\epsilon}} X^{\epsilon}:=\overline{X^{\epsilon}}\setminus X^{\epsilon}$ be the metric boundary of $X^{\epsilon}$. It is easy to see that for each geodesic ray $\gamma : [0, \infty) \to X$, the sequence $(\gamma(k))_k$ converges to some point $x \in \partial_{d_{\epsilon}}X^{\epsilon}$ with respect to $d_{\epsilon}$. The limit point $x$ is independent of the choice of geodesic ray within the equivalence class. Hence we define the map $\Phi : \partial_{G}X \to \partial_{d_{\epsilon}}X^{\epsilon}$ by $[\gamma] \mapsto \lim\limits_{k \to \infty}\gamma(k)$, where $[\gamma] \in \partial_{G}X$ is the equivalence class of a geodesic ray $\gamma$.
\end{definition} We next review uniform spaces. \begin{definition}[$A$-uniform curve]\label{uniform curve} Let $A>0$ and let a metric space $(X, d)$ be given. Let $\partial_d X :=\overline{X}\setminus X$ be the metric boundary of $X$. For $a, b \in \mathbb{R}$ with $a<b$, we say that a curve $\gamma : [a, b] \to X$ is an \emph{$A$-uniform curve} if $\gamma$ satisfies \begin{enumerate} \item $l_{d}(\gamma) \leq A d(\gamma(a), \gamma(b))$, \item $l_{d}(\gamma|_{[a, t]})\wedge l_{d}(\gamma|_{[t, b]}) \leq A\, \text{dist}_{d}(\gamma(t), \partial_d X)$ \ \ \ for every $t \in [a, b]$. \end{enumerate} \end{definition} \begin{definition}[$A$-uniform space]\label{uniform space} A noncomplete, locally compact metric space $(X, d)$ is called an \emph{$A$-uniform space} if every pair of points in $X$ can be connected by an $A$-uniform curve. \end{definition} \section{Proof of Theorem \ref{GH decomposition 1}} In this section we prove Theorem \ref{GH decomposition 1}. Recall that the Gehring-Hayman theorem implies the other conditions in Theorem \ref{GH decomposition 1}. The remaining implications will be proved one by one. \subsection{(1) $\rightarrow$ (2)}\label{equivalence of localized GH} In this subsection we prove that the Gehring-Hayman property for metric boundary points implies the Gehring-Hayman property for Gromov sequences and the bijectivity of the canonical boundary map $\Phi$. \begin{lemma} Let $(X, d)$ be a $\delta$-Gromov hyperbolic space and $X^{\epsilon}$ the uniformized space. If the Gehring-Hayman property for metric boundary points holds, then the Gehring-Hayman property for Gromov sequences holds. \end{lemma} \begin{proof} Let $(x_n)_n$ be an arbitrary Gromov sequence. Then by the first inequality in \cite[Proof of Lemma~4.10]{BHK}, there exists $x \in \partial_{d_{\epsilon}}X^{\epsilon}$ such that $x_n \to x$ as $n \to \infty$ with respect to $d_{\epsilon}$. The conclusion follows from the Gehring-Hayman property for metric boundary points. \end{proof} \begin{lemma} Let $(X, d)$ be a $\delta$-Gromov hyperbolic space and $X^{\epsilon}$ the uniformized space with distinguished point $p\in X$. Suppose that the Gehring-Hayman property for metric boundary points holds. Then the canonical boundary map $\Phi : \partial_{G}X \to \partial_{d_{\epsilon}}X^{\epsilon} $ is bijective. \end{lemma} \begin{proof} Take $x \in \partial_{d_{\epsilon}} X^{\epsilon}$ and geodesic rays $\gamma$ and $\tilde{\gamma}$ such that \[ \gamma(0)=\tilde{\gamma}(0)=p \ \text{and} \ \lim\limits_{n \to \infty}\gamma(n)=\lim\limits_{n \to \infty}\tilde{\gamma}(n)=x \] with respect to $d_{\epsilon}$. The sequence $(z_n)_n:=(\gamma(1), \tilde{\gamma}(1), \gamma(2), \tilde{\gamma}(2), \cdots)$ converges to $x \in \partial_{d_{\epsilon}}X^{\epsilon}$. Thus, by the Gehring-Hayman property for metric boundary points and the second inequality in \cite[Proof of Lemma~4.10]{BHK}, we have $(z_n|z_m)_p \to \infty$ as $n, m \to \infty$, which implies $(\gamma(n))_n \sim (\tilde{\gamma}(n))_n$ as Gromov sequences. Hence the geodesic rays $\gamma$ and $\tilde{\gamma}$ are in the same equivalence class, and therefore $\Phi$ is injective. The surjectivity of $\Phi$ follows directly from the Gehring-Hayman property for metric boundary points and \cite[Lemma~4.10]{BHK}. \end{proof} \subsection{(2) $\rightarrow$ (3)}\label{localized GH to GH} In this subsection we prove that the second condition in Theorem \ref{GH decomposition 1} implies the Gehring-Hayman theorem. We first give a remark on Gromov sequences.
\begin{remark}\label{Gromov sequence remark} Let $(X, d)$ be a $\delta$-Gromov hyperbolic space. For given $(x_n)_n$, $(y_n)_n \subseteq X$, suppose that $(y_n)_n$ is a Gromov sequence and $(x_n | y_n)_p \to \infty$ as $n \to \infty$. Then $(x_n)_n$ is a Gromov sequence equivalent to $ (y_n)_n$. Indeed, since $X$ is a $\delta$-Gromov hyperbolic space, \begin{align} (x_n|x_m)_p &\geq (x_n | y_n)_p \wedge (y_n|x_m)_p-\delta \notag \\ &\geq (x_n | y_n)_p \wedge \Big((y_n|y_m)_p\wedge (y_m|x_m)_p-\delta\Big)-\delta \notag \\ & \to \infty \ \ \ (n, m \to \infty), \end{align} which tells us that $(x_n)_n$ is a Gromov sequence. \end{remark} Next we derive some properties from the bijectivity of the map $\Phi : \partial_{G}X \to \partial_{d_{\epsilon}} X^{\epsilon}$; note that bijectivity is the only property of $\Phi$ we use. For each $x \in \partial_{d_{\epsilon}} X^{\epsilon}$, take $(x_n)_n$ and $(y_n)_n$ such that $x_n \to x$ and $y_n \to x$ as $n \to \infty$ with respect to $d_{\epsilon}$. We then show that \begin{itemize} \item we can extract Gromov subsequences from $(x_n)_n$ and $(y_n)_n$; \item the extracted Gromov subsequences are equivalent to each other. \end{itemize} The second property, derived from the bijectivity of $\Phi$, plays an important role in the proof of Proposition~\ref{local to global}. \begin{lemma}\label{Gromov subsequence} Let $x \in \partial_{d_{\epsilon}}X^{\epsilon}$. For each sequence $(x_n)_n \subseteq X$ converging to $x$ with respect to $d_{\epsilon}$, there always exists a Gromov subsequence $(x_{n_k})_k$. \end{lemma} \begin{proof} By taking a subsequence if needed, we may assume that $d(p, x_n)\geq n$ for each $n \in \mathbb{N}$. Take a geodesic curve $\gamma_n$ from $p$ to $x_n$. Applying the Arzel\`a-Ascoli theorem and a diagonal argument, there exist a subsequence $(\gamma_{n_k})_k$ and a geodesic ray $\gamma$ such that \begin{equation}\label{estimate of Gromov subsequence} \lim\limits_{k \to \infty}\sup_{t \in [0, m]}d(\gamma_{n_k}(t), \gamma(t))=0 \ \ \text{and} \ \ \sup_{t \in [0, m]}d(\gamma_{n_m}(t), \gamma(t))\leq 1 \end{equation} for each $m \in \mathbb{N}$. Set $y_m=\gamma_{n_m}(m)$. Note that \begin{align}\label{Gromov seq 1} (y_m|x_{n_m})_p &= \frac{1}{2}\Big(d(p, y_m)+d(p, x_{n_m})-d(y_m, x_{n_m})\Big)\notag \\ &=\frac{1}{2}\Big(m+d(p, x_{n_m})-(d(p, x_{n_m})-m)\Big)\notag \\ &\geq m \to \infty \ \ (m \to \infty). \end{align} Also, we have \begin{align}\label{Gromov seq 2} (y_m|\gamma(m))_p &=\frac{1}{2}\Big(d(p, y_m)+d(p, \gamma(m))-d(y_m, \gamma(m))\Big) \notag \\ &\geq \frac{1}{2}(m+m-1) \to \infty \ \ (m \to \infty), \end{align} where we used \eqref{estimate of Gromov subsequence} to obtain the last inequality. Therefore, combining Remark \ref{Gromov sequence remark}, \eqref{Gromov seq 1} and \eqref{Gromov seq 2}, we conclude that $(x_{n_m})_m$ is a Gromov sequence. This completes the proof. \end{proof} \begin{lemma}\label{inverse is well defined} Let $x \in \partial_{d_{\epsilon}} X^{\epsilon}$. Suppose that $\Phi : \partial_{G} X \to \partial_{d_{\epsilon}}X^{\epsilon}$ is bijective. If $(x_n)_n$ and $(y_n)_n$ are sequences converging to $x$ with respect to $d_{\epsilon}$, then any Gromov subsequences $ (x_{n_k})_k$ and $(y_{n_k})_k$ are equivalent to each other. \end{lemma} \begin{proof} Fix $x \in \partial_{d_{\epsilon}}X^{\epsilon}$. Let $(x_n)_n$ and $(y_n)_n$ be sequences converging to $x \in \partial_{d_{\epsilon}}X^{\epsilon}$.
We first prove that there exist Gromov subsequences $(x_{n_k})_k$ and $(y_{n_k})_k$ that are equivalent to each other as Gromov sequences. Applying the proof of Lemma \ref{Gromov subsequence} to $(x_n)_n$ and $(y_n)_n$, there exist Gromov subsequences $(x_{n_k})_k$ and $(y_{n_k})_k$ and geodesic rays $\gamma$ and $\tilde{\gamma}$ such that \begin{equation} (x_{n_k})_k \sim (\gamma(k))_k, \ \ (y_{n_k})_k \sim (\tilde{\gamma}(k))_k \end{equation} as Gromov sequences, and \begin{equation} \lim\limits_{k \to \infty}\gamma(k)=\lim\limits_{k \to \infty}\tilde{\gamma}(k)=x \end{equation} with respect to $d_{\epsilon}$. Since $\Phi : \partial_{G} X \to \partial_{d_{\epsilon}}X^{\epsilon}$ is injective, $\gamma$ and $\tilde{\gamma}$ are equivalent to each other, i.e., $(\gamma(k))_k \sim (\tilde{\gamma}(k))_k$ as Gromov sequences. Hence we conclude that $(x_{n_k})_k \sim (y_{n_k})_k$. By the above argument, and noting that every Gromov sequence $(x_n)_n$ is equivalent to each of its subsequences $(x_{n_k})_k$ (\cite[Lemma~5.3]{Jussi}), we conclude that any Gromov subsequences $ (x_{n_k})_k$ and $(y_{n_k})_k$ are equivalent to each other. \end{proof} The following proposition tells us that the Gehring-Hayman property always holds, for any $\epsilon>0$, between points whose distance is bounded above by a uniform constant. This will be used to prove Proposition \ref{local to global}. \begin{proposition}\label{local is fine.} Let $(X, d)$ be a geodesic metric space. Let $\epsilon>0$ and $M >0$. Then there exists $C:=C(\epsilon, M) \geq 1$ such that for every pair of points $x, y \in X$ with $d(x, y) \leq M$, \[ l_{d_{\epsilon}}([x, y]) \leq C d_{\epsilon}(x, y) \] holds. \end{proposition} \begin{proof} Let $z\in [x,y]$; then $d(x,p)\le d(x,z)+d(z,p)\le M +d(z,p)$, i.e., $d(p, z) \geq d(p, x)-M$. Thus \begin{align}\label{estimate 1} l_{d_{\epsilon}}([x, y]) = \int_{[x, y]}\rho_{\epsilon}(s)\, ds \leq e^{-\epsilon (d(p, x)-M)}d(x, y). \end{align} Let $\lambda$ be a rectifiable curve from $x$ to $y$. We examine two cases.\\ \textbf{Case 1} : $\lambda \subseteq B_d(x, M)$.\\ For $z \in \lambda$, we have $d(p, z) \leq d(p, x)+d(x, z) \leq d(p, x)+M$. Hence \begin{align}\label{estimate 2} l_{d_{\epsilon}}(\lambda) \geq e^{-\epsilon(d(p, x)+M)}l_d(\lambda) \geq e^{-\epsilon(d(p, x)+M)}d(x, y). \end{align} \textbf{Case 2} : $\lambda \not\subseteq B_d(x, M)$.\\ In this case $l_d(\lambda \cap B_d(x, M)) \geq M \geq d(x, y)$, so we have \begin{align}\label{estimate 3} l_{d_{\epsilon}}(\lambda) &\geq \int_{\lambda \cap B_d(x, M)} \rho_{\epsilon}\, ds \geq e^{-\epsilon (d(p, x)+M)} d(x, y). \end{align} Taking the infimum over $\lambda$ in \eqref{estimate 2} and \eqref{estimate 3} gives $d_{\epsilon}(x, y) \geq e^{-\epsilon(d(p, x)+M)}d(x, y)$, which combined with \eqref{estimate 1} yields \[ l_{d_{\epsilon}}([x, y]) \leq e^{2 \epsilon M}d_{\epsilon}(x, y). \] \end{proof} Lastly, we prove the following. \begin{proposition}\label{local to global} Let $(X, d)$ be a geodesic metric space and $X^{\epsilon}$ the uniformized space. Suppose that the Gehring-Hayman property for Gromov sequences holds and that $\Phi : \partial_{G}X \to \partial_{d_{\epsilon}}X^{\epsilon}$ is bijective. Then the Gehring-Hayman theorem holds. \end{proposition} \begin{proof} Suppose that the Gehring-Hayman theorem does not hold. Then there exist $(x_n)_n$, $(y_n)_n \subseteq X^{\epsilon}$ such that \begin{equation}\label{GH proof estimate 1} l_{d_{\epsilon}}([x_n, y_n]) \geq n d_{\epsilon}(x_n, y_n). \end{equation} Note that by the proof of \cite[Lemma~4.10]{BHK}, the left-hand side of \eqref{GH proof estimate 1} is bounded above by a uniform constant $C>0$ depending only on $\epsilon$ and $\delta$.
Dividing both sides of \eqref{GH proof estimate 1} by $n$ implies that \begin{equation}\label{GH proof estimate 2} d_{\epsilon}(x_n, y_n) \leq C/n\to 0 \end{equation} as $n \to \infty$. We first claim that both $(x_n)_n$ and $(y_n)_n$ are unbounded. To this end, it suffices to rule out the other two possibilities. Indeed, if one of the two sequences is unbounded and the other is bounded, then, by taking a subsequence if needed, $d_{\epsilon}(x_n, y_n)$ is uniformly bounded from below by a positive constant, which contradicts \eqref{GH proof estimate 2}. If both $(x_n)_n$ and $(y_n)_n$ are bounded, then Proposition \ref{local is fine.} gives a contradiction for large enough $n \in \mathbb{N}$. Hence the claim follows. By the Arzel\`a-Ascoli theorem, there exist a subsequence $(x_{n_k})_k$ and a geodesic ray $\gamma$ such that $d_{\epsilon}(x_{n_k}, \gamma(\infty)) \to 0$ as $k \to \infty$, where $\gamma(\infty) \in \partial_{d_{\epsilon}}X^{\epsilon}$ is the limit of the sequence $(\gamma(k))_k$ with respect to $d_{\epsilon}$. We note that $d_{\epsilon}(\gamma(\infty), y_{n_k}) \to 0$ as $k \to \infty$, since $d_{\epsilon}(x_{n_k}, y_{n_k}) \to 0$ as $k \to \infty$. By Lemma \ref{Gromov subsequence}, we can extract further subsequences $(x_{n_k})_k$ and $(y_{n_k})_k$ that are Gromov sequences. By Lemma \ref{inverse is well defined}, $(x_{n_k})_k$ and $(y_{n_k})_k$ are equivalent to each other as Gromov sequences. By \cite[Lemma 5.3 (3)]{Jussi}, the sequence $(z_n)_n:=(x_{n_1}, y_{n_1}, x_{n_2}, y_{n_2}, \cdots)$ is a Gromov sequence. Since the Gehring-Hayman property for Gromov sequences holds, there exists a constant $C\geq 1$ such that for all $k, l \in \mathbb{N}$, \[ l_{d_{\epsilon}}([x_{n_k}, y_{n_l}]) \leq C d_{\epsilon} (x_{n_k}, y_{n_l}), \] which contradicts \eqref{GH proof estimate 1} for large enough indices. This completes the proof. \end{proof} \subsection{(4) $\rightarrow$ (3)}\label{From uniformity to GH} In this subsection we prove that $X^{\epsilon}$ being uniform implies the Gehring-Hayman theorem. To do this, we first recall the quasihyperbolization of a uniform space. \begin{definition}[Quasihyperbolization] Let $(\Omega, d)$ be an $A$-uniform space. The \emph{quasihyperbolic metric $k$} is defined by \[ k(x, y):= \inf\limits_{\gamma}\int^{l_d(\gamma)}_{0}\frac{1}{d(\gamma(t))}\,dt, \] where the infimum is taken over all rectifiable curves $\gamma$ from $x$ to $y$, parametrized by arc length, and $d(\cdot):= \text{dist}_d( \cdot , \partial \Omega)$. \end{definition} By \cite[Theorem 3.6]{BHK}, if $(\Omega, d)$ is an $A$-uniform space, then $(\Omega, k)$ is a proper geodesic $\delta$-Gromov hyperbolic space for some $\delta=\delta(A)$. Moreover, if $(\Omega, d)$ is bounded, then $(\Omega, k)$ is $M$-roughly starlike for some $M=M(A)$. \begin{definition} Let $(X, d)$ be a metric space and $C>0$. We say that a curve $\gamma : [a, b] \to X$ is a \emph{$C$-quasigeodesic} if \begin{equation} \frac{1}{C}|t-t'|\leq d(\gamma(t), \gamma(t')) \leq C|t-t'| \end{equation} holds for all $t, t' \in [a, b]$. \end{definition} The following is essentially \cite[Proposition~4.37]{BHK}. We remark that no restriction on the uniformization parameter $\epsilon$ is needed to prove it. \begin{proposition} Let $(X,d)$ be an $M$-roughly starlike $\delta$-Gromov hyperbolic space.
Then for every $\epsilon>0$ there exists $C=C(\epsilon, M)$ such that for all $x,y\in X$ \begin{equation}\label{key inequality0} \frac{1}{C}d(x,y)\le k_\epsilon(x,y)\le C d(x,y), \end{equation} where $k_\epsilon$ is the quasihyperbolic metric of $X^{\epsilon}$. \end{proposition} \begin{proof} Let $\gamma$ be a rectifiable curve in $(X,d)$, parametrized by arc length with respect to $d$. By Lemmas A.5 and A.7 in the Appendix of \cite{BHK}, $\gamma$ is also rectifiable in $(X,d_\epsilon)$ and, moreover, there exists a reparametrization $\gamma^o:[0,l_{d_\epsilon}(\gamma)]\to X$ of $\gamma$ with respect to $d_{\epsilon}$ such that $\gamma=\gamma^o\circ s_\epsilon$, where $s_\epsilon(t):=l_{d_\epsilon}(\gamma|_{[0,t]})=\int_0^t\rho_\epsilon(\gamma(u))\,du$. Thus, setting $d_{\epsilon}(x):=\text{dist}_{d_{\epsilon}} (x, \partial_{d_{\epsilon}} X^{\epsilon})$, we have \[ l_{k_\epsilon}(\gamma)=\int_0^{l_{d_\epsilon}(\gamma)}\frac{1}{d_\epsilon(\gamma^o(t))}\,dt=\int_0^{l_d(\gamma)}\frac{s_\epsilon'(t)}{d_\epsilon(\gamma^o\circ s_\epsilon(t))}\,dt= \int_0^{l_d(\gamma)}\frac{\rho_\epsilon(\gamma(t))}{d_\epsilon(\gamma(t))}\,dt, \] where the first equality comes from \cite[Lemma~A.7]{BHK} and we used the change of variables in the second equality. By \cite[Lemma 4.16]{BHK} there exists $C=C(\epsilon, M)\ge 1$ such that for every $x\in X$ we have \begin{equation}\label{key inequality1} \frac{1}{C}\le \frac{\rho_\epsilon(x)}{d_\epsilon(x)}\le C. \end{equation} Combining the above, we get \begin{equation}\label{key inequality2} \frac{1}{C}l_d(\gamma)\le l_{k_\epsilon}(\gamma)\le C l_d(\gamma). \end{equation} Let $x,y\in X$ and let $\gamma$ be a rectifiable curve joining them. Taking the infimum over all such curves in \eqref{key inequality2} gives the first inequality of \eqref{key inequality0}. The second inequality of \eqref{key inequality0} is obtained by choosing $\gamma$ to be the geodesic between $x$ and $y$ with respect to $d$. \end{proof} The following lemma is originally stated for geodesics with respect to the quasihyperbolic metric in \cite[Theorem~2.10]{BHK}. We note that the conclusion of \cite[Theorem~2.10]{BHK} still holds for quasigeodesics by a straightforward modification of their proof, see also \cite[Fact~2.10]{H}. \begin{lemma}(\cite[Theorem~2.10]{BHK})\label{quasihyperbolization key} Let $(\Omega, d)$ be an $A$-uniform space and $k$ the quasihyperbolic metric with respect to $d$. Then every $C$-quasigeodesic with respect to $k$ is a $B$-uniform curve with respect to $d$ for some $B:=B(A, C)>0$. \end{lemma} We are now in a position to prove that $X^{\epsilon}$ being uniform implies the Gehring-Hayman theorem. \begin{proposition}\label{uniform to GH} Let $(X, d)$ be an $M$-roughly starlike $\delta$-Gromov hyperbolic space. If $X^{\epsilon}$ is an $A$-uniform space, then the Gehring-Hayman theorem holds for $X^{\epsilon}$. \end{proposition} \begin{proof} By \eqref{key inequality0}, there exists $C>0$ such that for every geodesic $\gamma : [0, l_d(\gamma)] \to X$ with respect to $d$, we have \[ \frac{1}{C}|t-t'|\leq k_{\epsilon}(\gamma(t), \gamma(t')) \leq C|t-t'| \] for all $t, t' \in [0, l_d(\gamma)]$. This means that geodesics with respect to $d$ are $C$-quasigeodesics with respect to $k_{\epsilon}$. By Lemma~\ref{quasihyperbolization key}, applied to the $A$-uniform space $X^{\epsilon}$, every geodesic $[x, y]$ with respect to $d$ is a $B$-uniform curve with respect to $d_{\epsilon}$; in particular $l_{d_{\epsilon}}([x, y]) \leq B d_{\epsilon}(x, y)$, which is the Gehring-Hayman theorem. \end{proof} \begin{remark} By a careful inspection of the proofs in this subsection, one notices that $\delta$-Gromov hyperbolicity does not play any role in proving Proposition \ref{uniform to GH}.
This tells us that Proposition \ref{uniform to GH} holds as long as $(X, d)$ is $M$-roughly starlike and $X^{\epsilon}$ is uniform, although we assume $\delta$-Gromov hyperbolicity, following convention. \end{remark} \section{Critical exponents for some examples}\label{examples} In this section we examine the critical exponents for the uniformized space to be a uniform space in the case of the hyperbolic spaces, the model spaces $\mathbb{M}^{\kappa}_n$ of sectional curvature $\kappa<0$ with dimension $n \geq 2$, and hyperbolic fillings. \subsection{The hyperbolic spaces and the model spaces} In this subsection we first show that the critical exponent for the uniformized space of the hyperbolic spaces to be a uniform space is $\epsilon~=~1$. Recall that for $n \geq 2$, the Poincar\'e ball model of the hyperbolic space is the unit open ball $\mathbb{B}^n \subseteq \mathbb{R}^n$ with the metric \[ g_{\mathbb{B}^n}=\frac{4}{(1-(\sum_{i=1}^{n}x_i^2))^2}\sum_{i=1}^{n}dx_i^2, \] where $(x_1, \cdots, x_n) \in \mathbb{B}^n$. In the case of the $2$-dimensional hyperbolic space, in polar coordinates with respect to the hyperbolic metric we have \begin{equation}\label{polar coordinate} g_{\mathbb{B}^2}=dr^2+\text{sinh}(r)^2d\theta^2. \end{equation} Let $d_{\mathbb{B}^n}$ be the Riemannian distance induced from the metric $g_{\mathbb{B}^n}$ on $\mathbb{B}^n$. We first uniformize the metric space $(X, d):=(\mathbb{B}^2, d_{\mathbb{B}^2})$ with the base point $p=~(0,0)$ and $\epsilon>1$. Let $\gamma$ and $\tilde{\gamma}$ be two geodesic rays with $\gamma(0)=~\tilde{\gamma}(0)=~(0, 0)$. From \eqref{polar coordinate} and $\epsilon > 1$, we have \begin{align} d_{\epsilon}(\gamma(k), \tilde{\gamma}(k)) &\leq 2\pi e^{-\epsilon k}\text{sinh}(k) \to 0 \end{align} as $k \to \infty$. This implies that $\partial_{d_{\epsilon}}X^{\epsilon}$ is a single point, while $\partial_{G}X$ is the unit circle. Therefore, by Theorem~\ref{GH decomposition 1}, $X^{\epsilon}$ is not a uniform space. Moreover, since we can isometrically embed $(\mathbb{B}^2, d_{\mathbb{B}^2})$ into $(\mathbb{B}^n, d_{\mathbb{B}^n})$ for any $n \geq 3$, two geodesic rays $\gamma$ and $\tilde{\gamma}$ that are not equivalent in $(\mathbb{B}^2, d_{\mathbb{B}^2})$ can be regarded as rays in $(\mathbb{B}^n, d_{\mathbb{B}^n})$. The claim for the higher-dimensional case follows by looking at these two geodesic rays $\gamma$ and $\tilde{\gamma}$ and the fact that $d_{\epsilon}(\gamma(k), \tilde{\gamma}(k)) \to 0$ as $k \to \infty$. On the other hand, Butler showed in \cite[Proposition~4.11 and Proposition~4.12]{B} that the Gehring-Hayman theorem holds for any CAT($-1$) space if $0<\epsilon \leq 1$. Although his uniformization procedure is different from the one in \cite{BHK}, all the arguments proving \cite[Proposition~4.11 and Proposition~4.12]{B} remain valid for the uniformization developed by Bonk, Heinonen and Koskela. Hence we obtain the following corollary. \begin{corollary} The critical exponent for the uniformized space of the hyperbolic space $(\mathbb{B}^n, d_{\mathbb{B}^n})$ to be a uniform space is $\epsilon=1$. \end{corollary} Let $\mathbb{M}^{\kappa}_n$ be the model space of constant sectional curvature $\kappa<0$ with dimension $n \geq 2$. The space $\mathbb{M}^{\kappa}_n$ is defined as $\mathbb{B}^n$ with the metric $g_{\mathbb{M}^{\kappa}_n}:=~(-1/\kappa) g_{\mathbb{B}^n}$.
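For the reader's convenience we record the elementary circle estimate behind the display above. The density $\rho_{\epsilon}$ is constantly equal to $e^{-\epsilon k}$ on the circle of radius $k$ centered at $p$, and by \eqref{polar coordinate} this circle has length $2\pi\,\text{sinh}(k)$, so joining $\gamma(k)$ to $\tilde{\gamma}(k)$ along the circle gives
\[
d_{\epsilon}(\gamma(k), \tilde{\gamma}(k)) \leq 2\pi e^{-\epsilon k}\,\text{sinh}(k) = \pi\big(e^{(1-\epsilon)k}-e^{-(1+\epsilon)k}\big) \leq \pi e^{(1-\epsilon)k},
\]
which tends to $0$ as $k \to \infty$ precisely because $\epsilon>1$. Replacing $\text{sinh}(k)$ by $\frac{1}{\sqrt{-\kappa}}\,\text{sinh}(\sqrt{-\kappa}\,k)$, in accordance with the polar coordinate expression \eqref{polar coordinate2} below, yields the corresponding bound for $\mathbb{M}^{\kappa}_2$ when $\epsilon>\sqrt{-\kappa}$.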
We now determine the critical exponent for the uniformized space of the model space $\mathbb{M}^{\kappa}_n$ to be a uniform space. Note that $d_{\mathbb{M}^{\kappa}_n}=(1/\sqrt{-\kappa})d_{\mathbb{B}^n}$. We first uniformize $(X, d):=(\mathbb{M}^{\kappa}_2, d_{\mathbb{M}^{\kappa}_2})$ with $p=(0, 0)$ and $\epsilon > \sqrt{-\kappa}$. In the case of $\mathbb{M}^{\kappa}_2$, we have \begin{equation}\label{polar coordinate2} g_{\mathbb{M}^{\kappa}_2}=dr^2-\frac{1}{\kappa}\text{sinh}(\sqrt{-\kappa}r)^2d\theta^2 \end{equation} with respect to the polar coordinates. Hence, by Theorem~\ref{GH decomposition 1} and the circle estimate above, $X^{\epsilon}$ is not a uniform space. The proof for the higher-dimensional case follows as in the case of the hyperbolic spaces. We next examine the case $0<\epsilon \leq \sqrt{-\kappa}$. To this end, we first prove the following simple lemma on how the uniformization parameter in the Gehring-Hayman theorem behaves under scaling of the metric. \begin{lemma}\label{scaling metric} Let $(X, d)$ be a $\delta$-Gromov hyperbolic space. Let $K>0$ and $ \epsilon>0$. Set $(\tilde{X}, \tilde{d}):=(X, Kd)$ and $\tilde{\epsilon}:=\epsilon/K$. Then the Gehring-Hayman theorem holds for $X^{\epsilon}$ if and only if the Gehring-Hayman theorem holds for $\tilde{X}^{\tilde{\epsilon}}$. \end{lemma} \begin{proof} It is enough to prove one implication. Assume that the Gehring-Hayman theorem holds for $(X, d_\epsilon)$. Let $\gamma$ be a rectifiable curve parametrized by arc length with respect to $d$, and let $\tilde\gamma$ be the same curve with the arc-length parametrization with respect to $\tilde{d}$. Notice that $l_{\tilde d}(\gamma)= Kl_d(\gamma)$. Thus \begin{align*} l_{d_{\epsilon}}(\gamma)&=\int_{0}^{l_d(\gamma)}e^{-\epsilon d( p, \gamma(t))}\, dt =\int_{0}^{l_d(\gamma)}e^{-\tilde{\epsilon} \tilde{d}( p, \gamma(t))}\, dt \\ &= \frac{1}{K}\int_0^{l_{\tilde d}(\gamma)}e^{-\tilde\epsilon\tilde d(p,\tilde\gamma(t))}\,dt=\frac{1}{K}l_{\tilde{d}_{\tilde{\epsilon}}}(\gamma), \end{align*} where the second to last equality comes from the change of variables formula. Therefore, for $x,y\in X$ we have \[ l_{\tilde{d}_{\tilde{\epsilon}}}([x, y]) = Kl_{d_{\epsilon}}([x, y])\leq KC d_{\epsilon}(x, y)= C\tilde{d}_{\tilde{\epsilon}}(x, y), \] where the Gehring-Hayman theorem for $X^{\epsilon}$ was applied in the middle inequality. This completes the proof. \end{proof} Applying Lemma \ref{scaling metric} to $(X, d):=(\mathbb{B}^n, d_{\mathbb{B}^n})$, $0<\epsilon\leq 1$ and $K:=1/\sqrt{-\kappa}$, we conclude from \cite[Proposition~4.11 and Proposition~4.12]{B} that the Gehring-Hayman theorem holds for the uniformized space of $(\mathbb{M}^{\kappa}_n, d_{\mathbb{M}^{\kappa}_n})$ whenever $0<\tilde{\epsilon}\leq \sqrt{-\kappa}$. Since the Gehring-Hayman theorem implies that the uniformized space is a uniform space, we have the following consequence. \begin{corollary} Let $n \geq 2$ and $\kappa<0$. The critical exponent for the uniformized space of the model space $\mathbb{M}^{\kappa}_n$ to be a uniform space is $\epsilon = \sqrt{-\kappa}$. \end{corollary} \subsection{Hyperbolic fillings} We briefly recall the construction and properties of hyperbolic fillings, following \cite{BBS}. We refer the reader to \cite[Section 3]{BBS} for more detailed explanations. Note that there are many slightly different definitions and variants of hyperbolic fillings, which have appeared in \cite{BSc}, \cite{BSa}, \cite{BP}, \cite{B} and \cite{K}. Before defining hyperbolic fillings, we recall some terms.
Given a metric space $(Z, d)$ and $r>0$, a set $E \subseteq Z$ is called an \emph{$r$-separated set} if $d(x, y)\geq r$ for every pair of distinct points $x, y \in E$. The existence of a maximal $r$-separated set is ensured by Zorn's lemma. We say that $(Z, d)$ is \emph{precompact} if the completion of $Z$ is compact. Let $(Z, d)$ be a precompact metric space with $\text{diam}(Z)<1$ and let $\alpha, \tau>1$ be given parameters. Take maximal $\alpha^{-n}$-separated sets $E_n$ with the property $E_n\subseteq E_{n+1}$. Define a vertex set by \[ V:= \bigcup V_n, \] where $V_n:=\{(x, n) \ | \ x \in E_n \}$. For $(x, n), (y, m) \in V$, there is an edge between them if and only if either of the following is satisfied: \begin{enumerate} \item $n=m$ and $B_d(x, \tau \alpha^{-n}) \cap B_d(y, \tau \alpha^{-m}) \neq \emptyset$. \item $n=m \pm 1$ and $B_d(x, \alpha^{-n}) \cap B_d(y, \alpha^{-m})\neq \emptyset$. \end{enumerate} The metric graph $V$ with edges defined in the above manner is called \emph{a hyperbolic filling of $Z$}, denoted by $X$. Note that a metric graph is a graph whose edges are identified with the unit interval $[0, 1]$. By \cite[Corollary~3.2, Theorem~3.4 and Proposition~4.6]{BBS}, $X$ is a $1/2$-roughly starlike $\delta$-Gromov hyperbolic space for some $\delta=\delta(\alpha, \tau)$. Moreover, it was shown in \cite[Theorem 5.1]{BBS} that the uniformized space $X^{\epsilon}$ of $X$ is a uniform space if $0<\epsilon \leq \log(\alpha)$. We show that $X^{\epsilon}$ is not a uniform space if $Z$ is sufficiently nice and $\epsilon > \log(\alpha)$. \begin{corollary}\label{hyperbolic filling} Let $\epsilon>\log(\alpha)$. Assume that a precompact metric space $Z$ has at least two points and that each pair of points in $Z$ is connected by a curve in $\overline{Z}$ whose length is bounded by some uniform constant. Then $X^{\epsilon}$ is not a uniform space. Therefore, the critical exponent for the uniformized space of the hyperbolic filling $X$ to be a uniform space is $\epsilon=\log(\alpha)$. \end{corollary} \begin{proof} Under our assumptions, the metric boundary of the uniformized space of $X$ with the parameter $\epsilon > \log(\alpha)$ consists of a single point by \cite[Proposition 4.1]{BBS}. Since $Z$ has at least two points, there is no bijection between $\partial_{G}X$ and $\partial_{d_{\epsilon}}X^{\epsilon}$. Theorem~\ref{GH decomposition 1} then tells us that the uniformized space $X^{\epsilon}$ of a hyperbolic filling $X$ is not a uniform space. \end{proof} \begin{remark} It is well known that metric trees are $0$-Gromov hyperbolic spaces. Although Corollary \ref{hyperbolic filling} tells us that the uniformized space of a hyperbolic filling is not a uniform space for $Z$ sufficiently nice and $\epsilon > \log(\alpha)$, the uniformized space of a metric tree is always a uniform space for every $\epsilon>0$. This is due to the fact that, for metric trees, the Gehring-Hayman theorem holds for every $\epsilon>0$: any curve joining two points of a tree contains the unique geodesic between them, so the uniformized length of the geodesic never exceeds that of the curve. \end{remark}
1,108,101,563,460
arxiv
\subsection{Mass dependence of $A_{FB}(M)$ as a function of $\sin^2\theta_W$ and PDFs at the Tevatron} The mass dependence of $A_{FB}(M)$ depends on both $\sin^2\theta_W$ and on the PDFs. In the region of the $Z$ pole, $A_{FB}(M)$ is sensitive to the vector couplings, which depend on $\sin^2\theta_W$. At higher and lower mass, $A_{FB}(M)$ is sensitive to the axial couplings and therefore insensitive to the value of $\sin^2\theta_W$. In contrast, the magnitude of the dilution of $A_{FB}(M)$ depends on the PDFs. The sensitivity to PDFs is largest in regions where $A_{FB}(M)$ is large (i.e. away from the $Z$ pole). \begin{figure}[ht] \includegraphics[width=7.5 cm, height=8.0 cm]{{CMS-mass-rapidity.pdf}} \includegraphics[width=7.5 cm, height=8.0 cm]{{plot-for-dis2015.pdf}} \caption{ LHC: Left-Top panel: $A_{FB}$ at the LHC at $\sqrt{s}$=8 TeV for six rapidity bins. The horizontal scale for each of the six plots is the $\mu^+\mu^-$ invariant mass. Left-Bottom panel: The green bands span the difference between $A_{FB}(M,y)$ calculated for the 100 NNPDF3.0 replicas and $A_{FB}(M,y)$ calculated for the central default NNPDF3.0 for the six $\mu^+\mu^-$ rapidity bins. The blue lines are the differences between $A_{FB}(M,y)$ calculated with different values of $\sin^2\theta_{eff}$ and the values calculated with the nominal $\sin^2\theta_{eff}$=0.23120. Right-Top panel: Analysis of one of the 64 LHC pseudo-experiments. The top two panels show the extracted $\sin^2\theta_{eff}$ and corresponding $\chi^{2}_{AFB}$ values from fits to $A_{FB}(M,y)$ versus replica number for the 100 NNPDF3.0 replicas. The Right-Bottom panel shows the same results in the form of a scatter plot of $\chi^{2}_{AFB}$ values versus $\sin^2\theta_{eff}$ for one pseudo-experiment. } \label{fig_2} \end{figure} The right panel of Fig.~\ref{fig_1} shows the sensitivity of $A_{FB}(M)$ at the Tevatron to PDFs. Also shown is the sensitivity of $A_{FB}(M)$ at the Tevatron to $\sin^2\theta_W$. There is a large difference in the $A_{FB}(M)$ predictions for PDF sets with different $\frac{d}{u}(x)$ and $\frac{\bar{u}}{u}(x)$ in regions where $A_{FB}(M)$ is large and positive (M$>$100 GeV). The changes in $A_{FB}(M)$ in regions where $A_{FB}(M)$ is large and negative (M$<$80 GeV) are in the opposite direction. In contrast, different values of $\sin^2\theta_W$ change $A_{FB}(M)$ primarily in the region near the $Z$ pole, and here the change is in the same direction above and below the $Z$ pole. Therefore, when we extract $\sin^2\theta_W$ from $A_{FB}(M)$ data using different PDFs, the PDFs which yield poor values of $\chi^2$ are less likely to be correct. The Left-Top panel of Fig.~\ref{fig_2} shows $A_{FB}(M,y)$ at the LHC at $\sqrt{s}$=8 TeV for six rapidity bins (i=1 to 6) with average $y$ values of 0.2, 0.6, 1.0, 1.4, 1.8 and 2.2. The horizontal scale for each of the six plots is the $\mu^+\mu^-$ invariant mass. The calculations are done with the POWHEG MC generator. The version of POWHEG that is used does not include electroweak radiative corrections; therefore, POWHEG requires an input value of $\sin^2\theta_{eff}$ for the calculation of $A_{FB}$. The green bands span the difference between $A_{FB}(M,y)$ calculated with the 100 NNPDF3.0 replicas and $A_{FB}(M,y)$ calculated with the default NNPDF3.0 PDF. The blue lines are the differences between $A_{FB}(M,y)$ calculated for several values of $\sin^2\theta_{eff}$ and $A_{FB}(M,y)$ for the nominal $\sin^2\theta_{eff}$=0.23120. For all of the blue lines, $A_{FB}(M,y)$ is calculated with the default NNPDF3.0 PDF.
At the LHC, as at the Tevatron, the dependence of $A_{FB}(M,y)$ on $\sin^2\theta_{eff}$ and on PDFs is different. In the region of the $Z$ pole, $A_{FB}(M,y)$ is sensitive to the vector couplings, which are functions of $\sin^2\theta_{eff}$. At higher and lower mass, $A_{FB}(M,y)$ is sensitive to the axial couplings and therefore insensitive to the value of $\sin^2\theta_{eff}$. As is the case for the Tevatron, the magnitude of the dilution of $A_{FB}(M)$ is larger in regions where the absolute value of $A_{FB}(M)$ is large (i.e. away from the $Z$ pole). At the LHC the dilution depends on both M and y. The combined mass and rapidity dependence of the dilution at the LHC provides more stringent constraints on PDFs than $A_{FB}(M)$ measurements at the Tevatron. The NNPDF3.0 PDF set is given in the form of N (e.g. 100 or 1000) replica PDFs. Each of the PDF replicas has equal probability of being correct. The central value of any observable is the average of the values extracted using each one of the N PDF replicas. The PDF error is the RMS of the values extracted using each of the N replicas. One advantage of the PDF replica method is that constraints from new data can easily be incorporated in any analysis by using different weights for each replica. Replicas for which the theory predictions are in agreement with the new data are given higher weights, and replicas for which the predictions are in poor agreement are given lower weights. The weights are derived from the $\chi^2$ values of the comparison between the new data and the theory prediction using each of the PDF replicas. The central value of any observable is then the {\it weighted} average of the values extracted using each one of the N PDF replicas. The PDF error is the {\it weighted} RMS of the values extracted using each of the N replicas. The procedure of constraining a PDF set with new data was initially proposed by Giele and Keller\cite{GK}. They proposed that each of the N PDF replicas be {\it weighted} by $w_i$; the weights reduce the effective number of replicas\cite{Ball} from N to $N_{eff}$. Here $$w_i=\frac{e^{-\frac{1}{2}\chi^2_i}}{\frac{1}{N} \sum_{j=1}^{N}e^{-\frac{1}{2}\chi^2_j}}; \qquad N_{eff}=\exp\left( \frac{1}{N} \sum_{i=1}^{N} w_i \ln(N/w_i)\right).$$ More recent discussions of the method can be found in references \cite{weights-MST,weights-web,GK1,GK2,Ball}. The mass and rapidity dependence of $A_{FB}$ can be used both to provide additional constraints on PDFs and to reduce the PDF error in measurements of $\sin^2 \theta_W$. \begin{table}[htb] \caption {Values of $\sin^2\theta_W$ with statistical errors and PDF errors expected for a 15 fb$^{-1}$ Drell-Yan $\mu^+\mu^-$ sample at the LHC (at 8 TeV). The pseudo data is generated with the POWHEG MC generator with the default NNPDF3.0 PDF and $\sin^2\theta_{eff}$=0.23120. The PDF error for a standard analysis is compared to the PDF error for an analysis with both $\chi^{2}_{AFB}$ {\it weighting} and $\chi^2_{AFB}+\chi^2_{Wasym}$ {\it weighting}. In addition, expected errors for larger statistical samples are shown.} \begin{center} \begin{tabular}{|c||c||c||c||} \hline input& LHC~CMS~like & LHC~CMS~like & LHC~CMS~like \\ POWHEG & Pseudo-Exp. & Pseudo-Exp.& Pseudo-Exp.
\\ Default& {{~15~fb$^{-1}$}} ~8~TeV & {{~19~fb$^{-1}$}} ~8~TeV& {{~200~fb$^{-1}$}} ~13~TeV \\ {NNPDF3.0} & $6.7M~(\mu^+\mu^-)$& $15M~(\mu^+\mu^-, ~e^+e^-)$& $120M~(\mu^+\mu^-)$ \\ (261000) &reconst.~events & reconst.~events & reconst.~events\\ \hline\hline $\sin^2\theta_{eff}$ statistical~error & $\pm$0.00050&$\pm$0.00034 &$\pm$0.00011 \\ $\sin^2\theta_{eff}$ CT10~PDF~error & $\pm$0.00080& & \\ \hline \hline NNPDF3.0 Average & $N_{eff}=100$&& \\ {PDF~error~RMS} & $\pm$0.00051&& \\ \hline $\chi^2_{AFB}$ {\it weighting} & $N_{eff}=46$& & \\ {{\it weighted}~PDF~error~RMS} & $\pm$0.00029 && \\ \hline {$\chi^2_{AFB}$+$\chi^2_{Wasym}$~{\it weighting} } & $N_{eff}=21$& & \\ {{\it weighted}~PDF~error~RMS} & $\pm$0.00026 &$\pm$0.00022& $\pm$0.00014 \\ \hline\hline $\Delta\sin^2\theta_{eff}$ Stat+PDF &$\pm$0.00056 & $\pm$0.00040& $\pm$0.00018\\ $\Delta M_W$ indirect Stat+PDF &$\pm$28~MeV &$\pm$20~MeV&$\pm$9~MeV\\ \hline\hline \end{tabular} \label{table_1} \end{center} \end{table} For studies of $A_{FB}(M,y)$ at the LHC we simulate Drell-Yan $\mu^+\mu^-$ data for 64 pseudo-experiments for a CMS-like detector at $\sqrt{s}$=8 TeV. The pseudo data is generated using the POWHEG NLO MC generator with the default NNPDF3.0 PDFs and $\sin^2\theta_{eff}$=0.23120. For each pseudo-experiment, we generate a sample of 15.6 million $\mu^+\mu^-$ events with $M_{\mu\mu} > 50$ GeV, which corresponds to an integrated luminosity of 15.0 fb$^{-1}$. This is similar to the $\approx$19 fb$^{-1}$ of integrated luminosity collected by CMS and ATLAS at 8 TeV. To this sample, we apply acceptance and transverse momentum cuts similar to those of a CMS-like detector. We also smear the events with a muon momentum resolution similar to that of a CMS-like detector. The final sample consists of 6.7M reconstructed $\mu^+\mu^-$ events. The 8 TeV W asymmetry data at the LHC have not yet been incorporated into the most recent PDF fits. Therefore, in addition to $A_{FB}(M,y)$, we also use the default NNPDF3.0 PDF to generate pseudo data for the W decay muon asymmetry as a function of muon rapidity (for muon transverse momentum $P_T>$25 GeV). This simulates the W asymmetry measurement at 8 TeV. In the analysis of each of the 64 pseudo-experiments generated with the default NNPDF3.0 PDF, the simulated values of $A_{FB}(M,y)$ for each experiment are compared to $A_{FB}(M,y)$ templates. The templates are generated with the POWHEG MC for a range of values of $\sin^2\theta_{eff}$ for each of the 100 NNPDF3.0 PDF replicas. For each replica we extract the best-fit value of $\sin^2\theta_{eff}$, the corresponding statistical error and the fit $\chi^{2}_{AFB}$. In addition, we calculate $\chi^2_{Wasym}$, which is the $\chi^2$ for the agreement between the predictions for the W decay lepton asymmetry and the W decay lepton asymmetry pseudo data at 8 TeV for each of the 100 PDF replicas. The right panels of Fig.~\ref{fig_2} show the results from one of the 64 pseudo-experiments at the LHC. The top two panels on the right show the extracted $\sin^2\theta_{eff}$ and corresponding $\chi^{2}_{AFB}$ values from fits to $A_{FB}(M,y)$ versus replica number for the 100 NNPDF3.0 replicas. The bottom panel on the right shows the same results in the form of a scatter plot of $\chi^{2}_{AFB}$ values versus $\sin^2\theta_{eff}$ for one pseudo-experiment. For each pseudo-experiment we find the mean value and PDF error of $\sin^2\theta_{eff}$ from the average and RMS of the $\sin^2\theta_{eff}$ values extracted using each of the 100 PDF replicas.
The average and RMS values are computed in three ways: (1) Using the standard average and RMS of the $\sin^2\theta_{eff}$ fit values. This analysis results in a standard PDF error of $\pm0.00051$ with 100 replicas. (2) Using the $\chi^{2}_{AFB}$ values of the fits to $A_{FB}(M,y)$ to form a {\it weighted} average and {\it weighted} RMS of the $\sin^2\theta_{eff}$ values. This analysis results in a PDF error of $\pm0.00029$ with 46 effective replicas. (3) Using the combined $\chi^{2}_{AFB}$+$\chi^2_{Wasym}$ for the fits to the Drell-Yan $A_{FB}(M,y)$ pseudo data and the fits to the W decay lepton asymmetry pseudo data to form the {\it weighted} average and {\it weighted} RMS of the $\sin^2\theta_{eff}$ values. This analysis results in a PDF error of $\pm0.00026$ with 21 effective replicas. Table~\ref{table_1} shows the values of $\sin^2\theta_W$ with statistical errors and PDF errors expected for a 15 fb$^{-1}$ Drell-Yan $\mu^+\mu^-$ sample at the LHC (at 8 TeV). The pseudo data is generated with the POWHEG MC generator with the default NNPDF3.0 PDF and $\sin^2\theta_{eff}$=0.23120. The PDF error for a standard analysis is compared to the PDF error for an analysis with both $\chi^{2}_{AFB}$ {\it weighting} and $\chi^2_{AFB}+\chi^2_{Wasym}$ {\it weighting}. As shown in Table~\ref{table_1}, the number of effective PDF replicas is reduced when we apply constraints from $\chi^{2}_{AFB}$ and $\chi^2_{Wasym}$. Therefore, the analysis will be more robust if we start with 1000 PDF replicas. Also shown are the expected errors for larger statistical samples. With larger statistical samples, the PDF constraints are more stringent, and the PDF errors are also reduced. The errors in this indirect measurement of the W mass are competitive with direct measurements. For example, with 200 fb$^{-1}$ at 13 TeV, the expected error in the indirect measurement of the W mass is $\pm$9 MeV. Additional details and studies for both the Tevatron and the LHC are given in ref.~\cite{bodek-afb}.
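To make the replica-weighting procedure concrete, the following minimal Python sketch (our illustration, not the analysis code of ref.~\cite{bodek-afb}; the input arrays are hypothetical placeholders) computes the Giele-Keller weights $w_i$, the effective number of replicas $N_{eff}$, and the {\it weighted} mean and RMS of the extracted $\sin^2\theta_{eff}$ values:
\begin{verbatim}
import numpy as np

def weighted_replica_summary(sin2_fits, chi2):
    """Giele-Keller weighting of N PDF replicas.

    sin2_fits : extracted sin^2(theta_eff) values, one per replica
    chi2      : chi^2 of the corresponding fit for each replica
    """
    n = len(chi2)
    # w_i = exp(-chi2_i/2), normalized so that the weights average to 1;
    # subtracting the minimum chi2 avoids numerical underflow
    w = np.exp(-0.5 * (chi2 - chi2.min()))
    w = w / w.mean()
    # N_eff = exp( (1/N) * sum_i w_i * ln(N / w_i) )
    n_eff = np.exp(np.mean(w * np.log(n / w)))
    mean = np.average(sin2_fits, weights=w)
    rms = np.sqrt(np.average((sin2_fits - mean) ** 2, weights=w))
    return mean, rms, n_eff

# Hypothetical example with 100 replicas
rng = np.random.default_rng(0)
sin2 = rng.normal(0.23120, 0.0005, 100)  # placeholder fit values
chi2 = rng.chisquare(60, 100)            # placeholder chi^2_AFB values
print(weighted_replica_summary(sin2, chi2))
\end{verbatim}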
1,108,101,563,461
arxiv
\subsubsection{Block-Design Visually-Evoked (BDVE) dataset} We herein provide an exhaustive description of the dataset in \cite{Spampinato2016deep} and include additional details, in order to make this paper self-contained. The recording protocol included 40 object classes with 50 images each, taken from the ImageNet dataset~\cite{imagenet_cvpr09}, resulting in a total of 2,000 images. Six participants viewed the visual stimuli in a block-based design. All 50 images corresponding to a single class were shown consecutively in a single sequence. Each image was shown for 0.5 seconds, for a total duration of 25 seconds per block/class. After each block/class, a 10-second blank screen (with a black background) was shown, and then the next class started. Because of the total time needed for the experiment, its execution was split into \textbf{four sessions} of 10 classes each (about 4 minutes per session). After each session the subjects had time to rest and continued the experiment whenever they felt ready. Short block durations are critical to keeping participants engaged~\cite{jerison1963decrement,nuechterlein1983visual,see1995meta} and to maximizing EEG recording quality \cite{luck2014introduction}. An excessively long block duration is expected to result in extremely low-quality data because subjects become inattentive and are likely to create more movement and ocular artifacts. In turn, this amplifies the contribution of the temporal correlation, as that becomes the main signal left in the data. \emph{It should be noted that \cite{siskind} failed to replicate the block durations from \cite{Spampinato2016deep}: experiments in~\cite{siskind} lasted over 23 minutes, while in~\cite{Spampinato2016deep} block durations were about 4 minutes.} The BDVE dataset contains a total of 11,964 segments (time intervals recording the response to each image); 36 of the expected 6$\times$2,000 = 12,000 segments were excluded due to low recording quality or to subjects not looking at the screen, as determined by the gaze movement data from a Tobii T60 eye-tracker. Each EEG segment contains 128 channels, recorded for 0.5 seconds at a 1 kHz sampling rate and represented as a 128$\times$L matrix, with $L\approx 500$ being the number of samples contained in each segment per channel. The exact duration of each signal may vary slightly, so we discarded the first 20 samples (20 ms) to reduce interference from the previous image and then cut the signal to a common length of 440 samples (to account for signals with $L < 500$). For a fair and consistent comparison, when using this dataset for EEG-based classification, we employ the same training, validation and test splits as in~\cite{Spampinato2016deep}, consisting of 1,600 (80\%), 200 (10\%) and 200 (10\%) images with associated EEG signals, respectively, ensuring that all signals related to a given image belong to the same split. \subsubsection{Block-Design Blanks (BDB) dataset} As mentioned above, our block-design protocol presented a 10-second blank screen between each class block. For each subject, we thus record 360 seconds of brain activity corresponding to the visualization of inter-class blank screens (9 black screens for each of the four sessions), for a total of 6$\times$360 = 2,160 seconds of EEG recording across all six subjects. To construct the BDB dataset we split these signals into 500-sample-long segments with an overlap of 100 samples between consecutive segments, as sketched below.
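A minimal sketch of this segmentation step (our illustration, with placeholder data, not the original acquisition code) is:
\begin{verbatim}
import numpy as np

def segment_blank_eeg(signal, win=500, overlap=100):
    """Split a (channels x samples) recording into overlapping windows."""
    step = win - overlap  # 400-sample stride
    starts = range(0, signal.shape[1] - win + 1, step)
    return np.stack([signal[:, s:s + win] for s in starts])

# One 10-second blank screen: 128 channels at 1 kHz -> 10,000 samples
blank = np.random.randn(128, 10_000)  # placeholder EEG data
segments = segment_blank_eeg(blank)
print(segments.shape)  # (24, 128, 500)
# 24 segments x 36 blank screens = 864 segments per subject
\end{verbatim}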
From this operation we obtain 864 blank-screen segments per subject, for a total of 5,184 segments. The data from these blank screens are particularly significant because, as claimed in~\cite{siskind}, any contribution of a temporal correlation to classification accuracy should persist throughout the blank-screen interval (i.e., the blank interval should be consistently classified above chance as either the class before or after the blank screen). \subsubsection{Rapid-Design Visually-Evoked (RDVE) dataset} We also replicate the experiments performed in \cite{siskind}, where the authors additionally collected data using an interleaved or ``\textit{rapid}'' design, in which images were presented in a random order rather than grouped by class. For the rapid-design data collection, we employ the same 40 classes as in the BDVE dataset. However, we only utilize 25 images per class, to reduce the experiment duration. Images were shown in blocks of 50 images, as in~\cite{siskind}, after which a blank screen was shown for 10 seconds. Replicating~\cite{siskind}, we did not split the recordings into sessions and performed all of them in a single session that lasted about 11.5 minutes. Note that this is still twice as long as \cite{Spampinato2016deep}'s block-design sessions and less than half as long as \cite{siskind}'s sessions (more than 23 minutes). The resulting dataset consists of 6,000 EEG signals (1,000 per subject, across six subjects), sharing the same characteristics (sampling rate, temporal length, common-length preprocessing) as the BDVE dataset. Given these similarities, we use the same dataset splits as above. Furthermore, to assess whether the results on the BDVE dataset (shown in the next section) are related to cognitive/visual information in the EEG signals rather than to other factors that could be used by a classifier (e.g., EEG drift, temporal correlation, current phase, etc.), we also use a variant of this dataset obtained by modifying the class labels as proposed by~\cite{siskind}. Rather than using the actual class label for each stimulus (i.e., image), labels were assigned to each stimulus based on the corresponding temporal order in the sequence (the block order) during the recording protocol. This procedure replicates \cite{siskind}'s effort to demonstrate temporal correlation bias in~\cite{Spampinato2016deep}, but our results show a very different pattern of classification accuracy. \subsubsection{Data preprocessing} In the following experiments, frequency filtering is performed by applying a second-order Butterworth band-pass IIR filter (we ran experiments using several cut-off frequencies) and a notch filter at 50 Hz. The filtered signal is then z-scored --- per channel --- to obtain zero-centered values with unit standard deviation. The maximum low-pass cut-off frequency is 95 Hz, since frequencies above 100 Hz rarely have the power to penetrate the skull. \subsection{Duration of the experiments}\label{sec:comment_duration} As mentioned in the previous sections, it is unclear why \cite{siskind} performed all experiments in a single session, rather than splitting it into shorter sub-sessions, as is unmistakably indicated in~\cite{Spampinato2016deep}. The only effect of increasing the session duration is to increase fatigue, which, in turn, decreases the subject's vigilance. This is a well-known issue in the cognitive neuroscience and human factors communities~\cite{jerison1963decrement,nuechterlein1983visual,see1995meta,luck2014introduction}.
To provide some evidence of this to readers from the machine learning community, we attempt to quantify the impact of experiment duration on temporal correlation in EEG data and on how this may inflate classification accuracy. Specifically, we measure the difference between the classification accuracy of the 1D-CNN~\cite{siskind} and chance accuracy on the \textit{BDB} (with the same procedure explained in Section~\ref{sec:classification_eeg}) and \textit{RDVE} (with block-level labels) datasets. Indeed, as mentioned earlier, these datasets contain neural responses which are not related to any visual class stimuli; any classification rate above chance on these data can be accounted for by temporal correlation. For this analysis, EEG data is filtered between 14--70 Hz and results are computed on a per-subject basis, consistently with~\cite{siskind}. We compare our results with those obtained for the same 1D-CNN method in~\cite{siskind}. Specifically, from~\cite{siskind}, we report\footnote{~\cite{siskind}'s accuracy values are taken from Table 9 of \cite{siskind}'s manuscript and from Tables 41--45 of \cite{siskind}'s appendix.} the performance (as average classification accuracy across all subjects) of the 1D-CNN method on the rapid design (with block labels, as this is supposed to expose temporal correlation) with both images and videos. The results, in Table~\ref{tab:experiment_duration}, clearly indicate that as the duration of the experiment increases, the magnitude of the temporal correlation, and its impact on classification performance, increases substantially, due to subjects becoming inattentive. Additionally, a longer experiment provides more opportunity for the signal to drift over time, which, when coupled with per-subject classification, exacerbates the issue. \textit{This is a clear indication that the experimental setting in \cite{siskind} maximizes the temporal correlation in their data, an effect that would likely be minimal in a properly-designed study.} \begin{table}[] \centering \begin{tabular}{ccc} \toprule \multicolumn{3}{c}{\textbf{1D-CNN on our data}}\\ \midrule Experiment & Duration (minutes) & Increase over chance\\ \midrule Blank-data (BDB) & $\sim$4 mins & 4.4 \\ Rapid-design (RDVE) & $\sim$11 mins & 17.1\\ \midrule \multicolumn{3}{c}{\textbf{1D-CNN performance, taken from~\cite{siskind}, on their data}}\\ \midrule Experiment & Duration (minutes) & Increase over chance \\ \midrule Image rapid-design & $\sim$23 mins & 50.3\\ Video rapid-design & $\sim$28 mins& 61.9\\ \midrule \end{tabular} \caption{\textbf{Temporal correlation w.r.t. experiment duration for the 1D-CNN method~\cite{siskind}}. The longer the experiment lasts, the larger the temporal correlation. The ``Increase over chance'' column reports the increase, in percent points, of the 1D-CNN method~\cite{siskind} over chance accuracy. Note that a similar (or lower) level of performance increase of the 1D-CNN method over the chance level is observed for all the other tested methods on our data; see Tables~\ref{tab:eeg_black} and~\ref{tab:eeg_rapid_block}.} \label{tab:experiment_duration} \end{table} \subsection{Rapid-design}\label{sec:comment_rapid} Besides the duration of the experiment and the lack of breaks, the rapid-design experiment, as proposed in \cite{siskind}, seems designed to suppress the very responses that we hope to classify with machine learning methods.
Also, the rapid-design experiment proposed in~\cite{siskind} creates a whole host of additional issues, since object recognition tends to last many hundreds of milliseconds (especially when the items change rapidly from trial to trial). This means that components such as the P300 and the N400 may still be processing the item from one class when an item from the next class is presented~\cite{luck2014introduction}. This signal overlap certainly results in the signal bleeding into the subsequent trial. This is somewhat less of a problem in a block-design study, as recognition is known (from decades of research) to be faster when there is a consistent context \cite{taylor1978identification}. Indeed, the neural processes involved in object categorization are thought to differ greatly between a blocked and an interleaved design; when items from a single category are presented in a block, people tend to focus on the similarities of consecutive items, whereas when items from different categories are interleaved (as in \cite{siskind}'s rapid design), people tend to focus on the differences between items \cite{carvalho2017sequence}. A fair test of the classification rates of randomly presented classes would utilize designs common in the event-related potentials literature. These designs generally include a jittered (randomly timed) blank screen of some minimum duration between stimuli. In fact, \cite{siskind} discusses the need to add jitter between items (page 1), but then fails to do so in the study they present. During this jittered blank interval, a long enough duration is typically included to prevent overlap of the neural responses \cite{luck2014introduction}, i.e., activity generally returns to some near-baseline level. Additionally, when event-related potentials are examined, the time right before the stimulus is presented is used to perform a baseline correction, which takes the mean activity of a given channel during a pre-stimulus interval and subtracts it from every sample in that channel \cite{luck2014introduction}. This is expected to eliminate any and all temporal correlation effects not due to subject fatigue or inattention (i.e., a drifting electrical signal over time). Importantly, \cite{siskind} failed to include any jitter in their rapid-design study. Essentially, all of the features that make a block design sub-optimal for classification were included to work against classification accuracy in \cite{siskind}'s rapid-design study. Additionally, when items are presented in a block, it is possible to make the class very salient (i.e., the participant will notice that they have viewed 50 dogs in a row), whereas the rapid design obscures the point of the study. In this case, if the subjects were even mildly inattentive, they would certainly fail to think about the class of items being presented to them, something that is far harder to miss in the block design. Importantly, obscuring the class in the way that \cite{siskind} did, without requiring an overt response from the subject, calls into question whether the subject was even paying attention to the stimuli, whereas an overt response forces the subject to attend to and more fully process the stimuli up to the class level \cite{luck2014introduction}. This flawed design is exacerbated by the effects of the aforementioned prolonged exposure to visual stimuli (much longer than in \cite{Spampinato2016deep}; see Table~\ref{tab:experiment_duration}).
If the subject is not actively and fully processing the stimulus (i.e., he/she is not paying attention and is, instead, daydreaming), machine learning methods have little hope of being able to classify the stimulus or class from any neuroimaging data. \subsection{Per-subject analysis} It is not clear to us why the authors of~\cite{siskind} presented their results separately for each subject, when in~\cite{Spampinato2016deep} the experiments are performed in a multi-subject pooled analysis. In Sect.~\ref{sec:cont} we already showed how the per-subject analysis may enhance the effect of temporal correlation on classification results. The per-subject analysis is critical mainly because EEG data are known to be highly replicable within a person \cite{luck2014introduction}, but also highly specific from person to person \cite{luck2014introduction,huang2012human}. In fact, the difference in EEG activity is so great from person to person that it has even been proposed as a method for hard-to-mimic and impossible-to-steal biometric identification, and it is able to identify individuals with $\sim$95 to 100 percent accuracy \cite{huang2012human}. This variability can be observed in the results reported in \cite{siskind}. For instance, in the block-design experiments (Table 4 in \cite{siskind}, and Tables 21--25 in \cite{siskind}'s appendix), the classification performance of the 1D-CNN method varies from 37.80\% to 70.50\%. Similarly, in the rapid-design experiment with block-level labels (Table 9 in \cite{siskind}, and Tables 41--45 in \cite{siskind}'s appendix), classification accuracy varies from 19.20\% to 84.10\%. Thus, classification performance was not consistent from subject to subject. To further show that the per-subject analysis is problematic, we compare the test performance of the 1D-CNN in two settings: 1) the model is trained and tested using single-subject data; 2) the model is trained with data pooled from all subjects and then tested on single-subject data. The results of this comparison are given in Table~\ref{tab:variance} and clearly show how pooling data accounts for inter-subject variability by making classifiers less influenced by subject-specific representations. More specifically, Table~\ref{tab:variance} shows that the variability (measured in terms of standard deviation) among per-subject results decreases significantly when a classifier is trained using all subjects' data (on the rapid design with block labels, the standard deviation of performance between subjects drops from 19.0\% in~\cite{siskind} to 3.6\% in our case). Furthermore, this allows the model to focus on inter-subject discriminative features, reducing the bias due to possible temporal correlation in single-subject neural responses. \begin{table*}[] \centering \begin{tabular}{cccccccccc} \toprule & & Subj. 1 & Subj. 2 & Subj. 3 & Subj. 4 & Subj. 5 & Subj.
6 & Average & Std\\ \midrule \multicolumn{9}{c}{\textbf{Results of 1D-CNN as reported in~\cite{siskind} on their data with per-subject analysis}}\\ Block-design&& 62.7\% & 50.7\% & 50.4\% & 48.1\% & 70.5\% & 37.8\% & 53.4\% & 10.5\%\\ Rapid-design with block labels& & 50.1\% & 52.2\% & 54.8\% & 19.2\% &84.1\% &59.6\% & 53.3\% & 19.0\%\\ \midrule \multicolumn{9}{c}{\textbf{Our results of 1D-CNN~\cite{siskind} on our data with per-subject analysis}}\\ Block-design&& 13.1\% & 36.2\% & 42.5\% & 44.4\% & 31.9\% & 33.1\% & 33.5\% & 10.2\%\\ Rapid-design with block labels& & 21.4\% & 10.7\% & 25.0\% & 30.4\% &20.5\% &9.8\% & 19.6\% & 7.3\%\\ \midrule \multicolumn{9}{c}{\textbf{Our results of 1D-CNN~\cite{siskind} on (our) pooled data from multiple subjects}}\\ Block-design&& 17.3\% & 25.8\% & 37.0\% & 37.0\% & 23.9\% & 20.4\% & 27.9\% & 7.3\% \\ Rapid-design with block labels&& 16.7\% & 10.8\% &16.7\% &20.0\% & 18.3\% &10.3\%& 15.5\%& 3.6\% \\ \bottomrule \end{tabular} \caption{Variability in classification performance with per-subject analysis and with pooled data.} \label{tab:variance} \end{table*} Thus, the large inter-subject differences must be overcome for any viable classification method. Importantly, averaged event-related data from a random sample of $\sim$10 subjects tend to look highly similar to those from another random sample of $\sim$10 subjects \cite{kappenman2011brainwave,luck2014introduction}. Failure to pool data across subjects would, again, only serve to increase the impact of temporal correlation. Indeed, it seems that \cite{siskind} did everything possible to maximize the impact of temporal correlation in their results. \subsection{Discussion} In conclusion, carrying out cognitive neuroscience experiments is an extremely delicate process, and small changes to the design may lead to completely different results and to different neural processes being activated. The block design in~\cite{siskind} differs from the one in~\cite{Spampinato2016deep} in a number of design choices and analysis procedures, including the duration of each group of trials and the number of breaks (20-minute blocks in~\cite{siskind}, with no mention of a break, compared to about 4 minutes in~\cite{Spampinato2016deep}). Given the complex, and mostly unknown, nature of the human brain, it is often not trivial to compare results, even with the exact same designs, as outcomes strongly depend on the subjects' conditions. This is especially true for small-scale studies, as both \cite{Spampinato2016deep} and~\cite{siskind} are. Attempting to generalize from findings with different designs, as~\cite{siskind} did, based on the results of completely different studies, is definitively wrong, an unconventional practice and a causal fallacy. Indeed, the fact that, in~\cite{siskind}, block-design results are similar to rapid-design (with block labels) results --- the similarity on which the whole refutation in~\cite{siskind} is based --- arises only because both of their experiments are flawed by the same methodological error (i.e., the extremely long duration of the experiments), and not because of the block design.
\subsection{Incorrect statements in~\cite{siskind} about availability and reproducibility of our code} In Section~5.5 of \cite{siskind}, the authors claim that they were \textit{``hindered''} in their attempt to reproduce the results of \cite{tirupattur_acmmm,gan_brain_iccv_2017,decoding_arxiv,brain2image} by the fact, quoting, \textit{``that the authors have declined to release their code to us, despite requests, and the fact that the published papers lack sufficient detail to replicate their models.''} As for the first two works, the authors of~\cite{siskind} state: \textit{``Palazzo et al. [24] and Tirupattur et al. [34] appear to employ related but somewhat different encoders. We do not comment on these here since we do not have access to this code''} (\textit{``[24]''} and \textit{``[34]''} are reference numbers within \cite{siskind} for \cite{gan_brain_iccv_2017} and \cite{tirupattur_acmmm}). Also, in reference to~\cite{tirupattur_acmmm}, Li \textit{et al}.~\cite{siskind} state that they \textit{``do not have access to these datasets''}. This is clearly incorrect. Indeed, one of the co-authors of \cite{siskind}, Hamad Ahmed, contacted the first author of \cite{tirupattur_acmmm} on December 19, 2018 and asked for the code. \cite{tirupattur_acmmm}'s first author promptly sent him the code and data and made everything available on GitHub\footnote{\url{https://github.com/ptirupat/ThoughtViz}}. After that, there was no additional communication. We believe Li \textit{et al}.~\cite{siskind} had enough time to run the provided code and analyze the results, instead of speculating on the results obtained in~\cite{tirupattur_acmmm}. Claiming that they were not given access to the code and the data is simply incorrect. The claims of insufficient detail to reproduce the code and of the unavailability of the code described in~\cite{gan_brain_iccv_2017} are, again, not true. Indeed, the architecture of the EEG encoder and the data are the same as in~\cite{Spampinato2016deep}, while the code for the conditional GAN (whose architecture is a standard DC-GAN~\cite{RadfordMC15}, as clearly stated in~\cite{gan_brain_iccv_2017}, and for which multiple implementations are available) is taken from~\cite{Reed2016generative}, which is publicly available\footnote{\url{https://github.com/reedscot/icml2016}} and, as such, could not be shared by us. The same discussion holds for~\cite{brain2image}, which used state-of-the-art deep models. This was also communicated to Jeffrey Mark Siskind on December 19, 2018. Finally, we did not release the code when the preprint was made available~\cite{decoding_arxiv}, since it was under review at PAMI (and recently accepted~\cite{model10}) and might have been subject to changes during the peer-review process. However, we believe that enough implementation details were available in the text to reproduce our work, since the dataset is the same as in~\cite{Spampinato2016deep}, and therefore readily available. The statements by the authors of~\cite{siskind} related to \cite{tirupattur_acmmm,gan_brain_iccv_2017,decoding_arxiv,brain2image} are simply not true and, at the very least, quite bizarre, given that a record of communications exists. Both code and data had been made available to them, and they did not even attempt to implement methods when the code could be easily accessed from online sources (which they were informed of) or reproduced through basic programming.
\subsection{Analysis of EEG encoding in~\cite{siskind}} The code initially released to replicate the results in~\cite{Spampinato2016deep} did not perfectly match the description of the architecture employed in the manuscript. This was due to rewriting the code in PyTorch, since the model's code had originally been written in Torch Lua. However, the published code was fixed in early 2019 to match the model described in the manuscript --- whose architecture was not debatable, despite \cite{siskind} implying (Section~4.6) that we: \begin{itemize} \item[] \textit{``describe this (40-way) classifier alternately as a softmax layer, a softmax classification layer, a softmax classifier, and softmax. Colloquial usage varies as to whether or not this implies use of a fully-connected layer prior to the softmax layer.''} \end{itemize} Any careful reader will notice that Section~3.2 of~\cite{Spampinato2016deep} specifies that \textit{``the encoder network is trained by adding, at its output, a classification module (in all our experiments, it will be a softmax layer)''}; secondly, Table~2 of~\cite{Spampinato2016deep} clearly shows that all output sizes for the encoder configurations are different from 40. Therefore, the ``softmax layer'' must necessarily include a layer projecting the dimensionality of the encoder output to the number of classes. Ignoring these considerations, in the same section, Li \textit{et al}.~\cite{siskind} claim: \begin{itemize} \item[]\textit{``we perform all analyses with both the original unmodified code and four variants that cover all possible reasonable interpretations of what was reported in the published papers.''} \end{itemize} Thus, in a fundamentally misplaced effort, Li \textit{et al}.~in~\cite{siskind}, in Sections 1.3 and 1.5, report the learned EEG encodings of five different configurations (which we list here to ease reading): \begin{itemize}[leftmargin=*] \item[-] \textbf{LSTM}: LSTM(128) $\rightarrow$ FC(128) $\rightarrow$ ReLU $\rightarrow$ $\|$ $\rightarrow$ cross-entropy. This is the configuration in the code we originally released and then corrected. \item[-] \textbf{LSTM1}: LSTM(128) $\rightarrow$ FC(128) $\rightarrow$ $\|$ $\rightarrow$ cross-entropy \item[-] \textbf{LSTM2}: LSTM(128) $\rightarrow$ FC(40) $\rightarrow$ $\|$ $\rightarrow$ cross-entropy \item[-] \textbf{LSTM3}: LSTM(128) $\rightarrow$ FC(40) $\rightarrow$ ReLU $\rightarrow$ $\|$ $\rightarrow$ cross-entropy \item[-] \textbf{LSTM4}: LSTM(128) $\rightarrow$ FC(128) $\rightarrow$ ReLU $\rightarrow$ $\|$ $\rightarrow$ FC(40) $\rightarrow$ cross-entropy. This is the correct configuration, as clearly reported in~\cite{Spampinato2016deep} (a minimal sketch is given below). Indeed, the other four configurations just encode class labels, while this one projects the input EEG data into a new manifold where data can be classified. \end{itemize} The $\|$ indicates the boundary between the encoder and the classifier. Reporting 66 tables for five different configurations, when only one was needed, as shown in~\cite{Spampinato2016deep}, appears useless and may only confuse and exhaust the reader without adding anything to the discussion. However, what is irremediably flawed is the line of reasoning in their conclusions, as we show in the remainder of this section. Li~\textit{et al}.~\cite{siskind}, indeed, conclude by stating that all the tested LSTM configurations \textit{``... exhibit the same broad pattern of results''}.
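For concreteness, a minimal PyTorch sketch of the LSTM4 configuration (our reconstruction from the description above; input sizes and training details are assumptions, not the released code) is:
\begin{verbatim}
import torch
import torch.nn as nn

class LSTM4(nn.Module):
    """LSTM(128) -> FC(128) -> ReLU  ||  FC(40) -> cross-entropy."""
    def __init__(self, in_channels=128, hidden=128, n_classes=40):
        super().__init__()
        self.lstm = nn.LSTM(in_channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 128)             # encoder output (128-d)
        self.classifier = nn.Linear(128, n_classes)  # projection to 40 classes

    def forward(self, x):  # x: (batch, time, channels)
        out, _ = self.lstm(x)
        z = torch.relu(self.fc(out[:, -1]))  # EEG embedding (before the boundary)
        return self.classifier(z), z         # logits for cross-entropy, embedding

model = LSTM4()
logits, z = model(torch.randn(8, 440, 128))  # 440 time samples, 128 channels
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 40, (8,)))
\end{verbatim}
Note that the cross-entropy loss in PyTorch applies the softmax internally, so the final linear layer together with the loss realizes the ``softmax layer'' described above.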
Figure~\ref{fig:encodings} shows the encodings as computed (and reported) in \cite{siskind}, clearly indicating that the encodings learned by LSTM4 (which is the exact configuration used in \cite{Spampinato2016deep}) do not show the \textit{``problematic''} one-hot pattern of the other four configurations. \begin{figure*}[h] \centering \begin{tabular}{cccc} \includegraphics[width=0.3\textwidth]{figures/lstm0.PNG} & \includegraphics[width=0.3\textwidth]{figures/lstm1.PNG} & \includegraphics[width=0.3\textwidth]{figures/lstm2.PNG} \\ \textbf{(a) LSTM} & \textbf{(b) LSTM1} & \textbf{(c) LSTM2} \\[6pt] \end{tabular} \begin{tabular}{cccc} \includegraphics[width=0.3\textwidth]{figures/lstm3.PNG} & \includegraphics[width=0.3\textwidth]{figures/lstm4.PNG} \\ \textbf{(d) LSTM3} & \textbf{(e) LSTM4} \\[6pt] \end{tabular} \caption{ The average encodings learned by the 5 LSTM variants described in \cite{siskind}. Figures taken from \cite{siskind}'s appendix.} \label{fig:encodings} \end{figure*} To justify the different pattern shown by LSTM4, in Section~1.5 of the appendix of \cite{siskind}, the authors discuss an imaginary issue related to the distribution of the embeddings computed right before the classification layer, complaining that these \textit{``exhibit the same approximate one-hot encoding''} and that this \textit{``means that the encoder output is just a linear mapping of these one-hot encodings''}, i.e., a linear mapping of output classes. We believe that the audience of this journal does not need an explanation of how and why this is wrong, but to prevent anyone from being misled into thinking that such a problem exists, we have to respond. Having a simple linear layer (followed by a softmax activation) after the last hidden layer (with nonlinear activation) of a fully-connected neural network is standard and common practice, e.g., in ResNet~\cite{resnet}, Inception/GoogLeNet~\cite{inception,googlenet} and DenseNet~\cite{densenet}. As already mentioned above, the output layer is only meant to estimate class scores from features that, at that point, are assumed to be linearly separable (and hence encode a linear mapping of output classes) if the classifier has learned to classify the input data. However, Li \textit{et al}.~\cite{siskind}, in Section~1.5 of the appendix, attempt to show that the encoder output \textit{``exhibits one-hot pattern''}. In particular, they empirically measure the \textit{one-hotness} of a feature representation as follows: \begin{equation}\label{oh} \text{OH} = |\det (A-I)|, \end{equation} with $A$ being the outer product of the normalized class-averaged encoding vectors and $I$ the identity matrix. Li \textit{et al}.~\cite{siskind} suggest that all methods for which the encoding (pre-classification) features satisfy $\text{OH} \ll 1$, as the method in~\cite{Spampinato2016deep} does, are flawed. It is unclear why, according to~\cite{siskind}, having features which can be linearly mapped to one-hot vectors at the model's encoding layer should be problematic. We argue that the one-hotness behavior described in~\cite{siskind}, where the outer product of class-averaged features is shown to be close to the identity matrix, is just the expected behavior of any well-trained neural network. To demonstrate this, we have prepared an example\footnote{Please find it at \url{https://colab.research.google.com/drive/1Y6HyToZv6HkRKK48D663Fthv8_-4n-nI}}, where a ResNet-18 model is trained on MNIST and the same analysis of pre-output features is carried out.
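A condensed sketch of this one-hotness computation (our illustration, with placeholder features standing in for the pre-output activations; we interpret the \textit{outer product} in \eqref{oh} as the Gram matrix of the normalized class-averaged encodings) is:
\begin{verbatim}
import numpy as np

def one_hotness(features, labels, n_classes):
    """OH = |det(A - I)|, with A the Gram matrix of the
    normalized class-averaged encoding vectors."""
    means = np.stack([features[labels == c].mean(axis=0)
                      for c in range(n_classes)])
    means /= np.linalg.norm(means, axis=1, keepdims=True)  # unit-norm rows
    A = means @ means.T  # pairwise inner products of class means
    return abs(np.linalg.det(A - np.eye(n_classes)))

# Placeholder features: well-separated class clusters give OH << 1
n, d, c = 1000, 64, 10
labels = np.random.randint(0, c, n)
features = 5 * np.eye(d)[labels] + np.random.randn(n, d)  # class-clustered
print(one_hotness(features, labels, c))                   # close to 0
\end{verbatim}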
Of course, using a ResNet on MNIST is overkill: the choice of the model is meant to show that the phenomenon described by~\cite{siskind} is not an anomaly and is neither model-related nor embedding-related, while the dataset was chosen to make the example quickly reproducible. Unsurprisingly, the behavior observed with ResNet-18 is the same as that observed by~\cite{siskind} for the EEG encoder of~\cite{Spampinato2016deep}. This is not an issue related to the embeddings but, again, simply the normal behavior that any machine learning researcher would reasonably expect. Besides ResNet-18, we tested several other state-of-the-art models, including Inception-v3, DenseNet-121 and GoogLeNet, on the MNIST dataset. We report the one-hotness measure claimed by~\cite{siskind} in Table~\ref{tab:encodings_ass}. The results show that when a classifier works properly (i.e., it is able to discriminate the input data), $|\det (A-I)| \ll 1$, i.e., the features (at the encoding layer) show a \textit{``one-hot representation''}. On the contrary, when classifiers have not learned any linear separation among classes (``Untrained models'' in Table~\ref{tab:encodings_ass}), the one-hotness measure is significantly larger than 0, as expected. \begin{table}[] \centering \begin{tabular}{ccc} \toprule Model & \multicolumn{1}{c}{Trained models} & \multicolumn{1}{c}{Untrained models} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} & $|\det (A-I)|$ & $|\det (A-I)|$ \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} ResNet-18 & 3.2 $\times$ $10^{-2}$ & 8.45\\ DenseNet-121 &2.7$\times$ $10^{-2}$ & 7.52\\ Inception-v3 & 2.9 $\times$ $10^{-2}$ & 7.53\\ GoogLeNet &7.8$\times$ $10^{-3}$ & 7.21 \\ \midrule LSTM variants in \cite{siskind}& $|\det (A-I)|$ \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} LSTM & 1.2 $\times$ $10^{-4}$ \\ LSTM1 & 1.9 $\times$ $10^{-1}$ \\ LSTM2 & 3.1 $\times$ $10^{-4}$ \\ LSTM3 & 3.7 $\times$ $10^{-3}$ \\ LSTM4 & 2.4 $\times$ $10^{-4}$ \\ \bottomrule \end{tabular} \caption{Comparison, in terms of the one-hotness metric defined in~\cite{siskind}, between state-of-the-art classifiers (values computed by us) and the LSTM variants presented in \cite{siskind} (values taken from \cite{siskind}'s appendix). The ``Untrained models'' column shows the one-hotness metric for the same classifiers when no training has been carried out, so that they are unable to discriminate input image data.} \label{tab:encodings_ass} \end{table} We carried out the same experiment by adapting the code\footnote{Please find it at \url{https://colab.research.google.com/drive/1aBrz3mbraekDqopFFXIWG1Wm4WiDe36R}. Note that running this code requires the availability of the ImageNet validation data.} to extract features with a pre-trained ResNet-152 model on the ImageNet validation data, achieving $|\det (A-I)| \approx 0$, much lower than the values in Table~\ref{tab:encodings_ass}. \textit{Unsurprisingly, state-of-the-art classifiers exhibit a behavior similar to that of the LSTM variants described in~\cite{siskind}. This, according to~\cite{siskind}'s reasoning, would suggest that state-of-the-art classifiers have not learned anything from image data, which is clearly nonsense.} \subsection{\cite{siskind}'s analysis of EEG embeddings} In Section~3.8 of~\cite{siskind}, the authors argue that since \cite{Spampinato2016deep} regresses image features (extracted from a pre-trained CNN) to EEG features (extracted by the encoder trained on EEG signals), this does not imply that the regressed EEG features are meaningful.
To show this, the authors of \cite{siskind} state that they: \begin{itemize} \item[] \textit{``replace the EEG encodings with a random codebook and achieve equivalent, if not better, results. This demonstrates that the regression and transfer-learning analyses performed by Spampinato et al. [31] are not benefiting from a brain-inspired or brain-derived representation in any way''}. \end{itemize} In particular, Li \textit{et al}.~in \cite{siskind} generate a synthetic dataset by uniformly sampling 40 codewords --- one for each class --- and generating class samples by adding Gaussian noise to each codeword. They then compute image-specific codewords by averaging across subjects. Note that the resulting dataset is \textit{linearly separable by construction}: the initial 40 codewords represent cluster centers, and the effect of the Gaussian noise is reduced by smoothing across subjects. At this point, they merely show that it is possible to regress, with sufficient accuracy\footnote{We trust the authors' assessment on this ``sufficiency'', since the reported MSE of 0.55, without any information on the size of the codewords and on how/whether the data were normalized, does not say anything about the achieved accuracy.}, a set of class-clustered, linearly-separable image features to a corresponding set of class-clustered, linearly-separable codewords, using a linear regressor. This should come as no surprise to anyone. It is even less surprising that replacing the EEG encoder, which has learned a sub-optimal feature representation w.r.t. the image encoder, with the codebook, which, instead, has been generated to have an optimal class representation, yields enhanced classification accuracy. The key observation is that performing this kind of regression is possible \textit{only if the target features encode information that allows them to be clustered by class}, which is obviously true in the codebook experiment described in~\cite{siskind}, since it is \textit{designed} that way, but which is not obvious when those target features are learned by another model (in our case, the EEG encoder). Since it is indeed possible to learn meaningful (and linearly separable) representations from neural signals, as shown in Table~\ref{tab:eeg_results_frequencies}, there can be no doubt that the regressed features are brain-inspired, simply because the data source they are extracted from consists of brain activity signals. The invalid reasoning in~\cite{siskind} can also be exposed with an example. Let us suppose that we replace the EEG encoder in the regression scheme of~\cite{Spampinato2016deep} with a ResNet. Then, we regress visual features extracted from GoogLeNet to those learned by a ResNet. Of course, within the limits of intra-class variability, such a regression succeeds\footnote{We implemented this experiment and it can be found here: \url{https://colab.research.google.com/drive/1et3Pnlv9Iivtlcku8ck9-KEe2cKHijmv}. To evaluate the regressor, we compared the classification accuracy on a random subset of the ImageNet validation set when using the original features and when using the regressed features. Those two values were respectively about 78\% and 65\%, showing a performance drop that can be expected, given the approximation introduced by regression and the original difference in classification performance between GoogLeNet and ResNet (about 9 percentage points).}, since a class mapping exists between the two groups of features.
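That regressing between two class-clustered feature sets is trivially successful can be demonstrated with a minimal synthetic sketch (our illustration, mirroring the codebook construction described above; all sizes and noise levels are placeholders):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_classes, dim, per_class = 40, 100, 50

# Two independent class-clustered, linearly separable feature sets
src_centers = rng.uniform(-1, 1, (n_classes, dim))  # e.g. image features
tgt_centers = rng.uniform(-1, 1, (n_classes, dim))  # e.g. random codewords

labels = np.repeat(np.arange(n_classes), per_class)
X = src_centers[labels] + 0.1 * rng.normal(size=(labels.size, dim))
Y = tgt_centers[labels] + 0.1 * rng.normal(size=(labels.size, dim))

# A plain linear regressor maps one clustered set onto the other
reg = LinearRegression().fit(X, Y)
pred = reg.predict(X)
# Nearest-codeword classification of the regressed features
dist = ((pred[:, None, :] - tgt_centers[None]) ** 2).sum(-1)
print((dist.argmin(-1) == labels).mean())  # high accuracy, by construction
\end{verbatim}
The regression succeeds here purely because both feature sets are clustered by class, exactly as argued above; nothing about the target features being ``meaningful'' is actually tested.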
Then, if we replace the ResNet with the codebook proposed in \cite{siskind}, the regression would still work, as implied by the fact that the authors can successfully carry out that regression from VGG-16, which has performance similar to GoogLeNet\footnote{See \url{https://pytorch.org/docs/stable/torchvision/models.html}.}. This, according to the reasoning in~\cite{siskind}, should imply that ResNet does not derive any image-related features, which is of course absurd. What the authors of~\cite{siskind} overlook is the fact that, if the ResNet had not learned image-related features in the first place, the resulting representations would not be class-separable and the regression would fail. Similarly, if the EEG encoder in~\cite{Spampinato2016deep} had not learned EEG-related features, the regression would also simply fail. Thus, the reasoning in~\cite{siskind} on the EEG embeddings in~\cite{Spampinato2016deep} looks very much like a straw-man argument: indeed, contrary to what is stated in \cite{siskind}, we did not say in~\cite{Spampinato2016deep} that the regression from pre-trained visual features to the EEG features should result in a benefit to classification accuracy. On the contrary, we clearly show in \cite{Spampinato2016deep} (in Table 5) that performance drops when moving from visual to brain-derived features. What is clear from \cite{Spampinato2016deep} is that the regression from visual features to brain-derived features is a simple but effective strategy to employ EEG features that are useful for visual categorization, and not a means to improve the performance of visual classifiers. This point cannot be refuted. \subsection{Comment on \cite{siskind}'s analysis of image generation} In Section~5.3 of~\cite{siskind}, the authors argue that the generative model in~\cite{tirupattur_acmmm} shows signs of mode collapse and that it is very unlikely that the images in the paper were generated by a GAN model. These are unverified claims that seem to have a malicious intent rather than offering scientific findings. Contrary to what is assumed by these unsubstantiated claims, in \cite{tirupattur_acmmm}: a) a widely-employed metric, the Inception score, is used to compare the trained model with existing GAN models, and b) the correct behavior of the model is verified by interpolating the input EEG encodings between two classes of images, which is a standard way to check for GAN mode collapse~\cite{goodfellow2014generative,radford2015unsupervised}. Additionally, the source code and evaluation routines have been publicly available since December 2018. Li \textit{et al}.~\cite{siskind} had been informed of the code availability but opted to make their claims without bringing any evidence to substantiate them. \subsection{\cite{siskind}'s interpretation of brain-visual joint embedding} In Section~5.4, the authors of~\cite{siskind} discuss the method and results presented in a preprint~\cite{decoding_arxiv}, now published as~\cite{model10}. We would like to point out some corrections to their arguments. The authors of \cite{siskind} claim the following: \begin{itemize} \item[] \textit{``The loss function employed in the joint training regimen simply constrains the two encoded representations to be similar. A perfectly trained image encoder, trained against class labels, would simply encode image class, no more and no less. A perfectly trained EEG encoder, trained against class labels, would simply encode stimulus class, no more and no less.
During joint training of the EEG encoder, the image encoder serves simply as a surrogate for class labels, no more and no less. Similarly during joint training of the image encoder, the EEG encoder serves simply as a surrogate for class labels, no more and no less. Thus joint training accomplishes nothing that could not be accomplished by training the components individually against class labels. The resulting encoded representations would contain no information beyond class labels''}. \end{itemize} However, in \cite{decoding_arxiv,model10} we clearly propose a model and a training strategy aimed at: a) preventing the learned features from being \textit{``strongly tied to the proxy task employed to compute the initial representation (e.g., classification)''}, and b) focusing more on learning \textit{``class-discriminative features than on finding relations between EEG and visual patterns''}. In particular, these papers describe a deep learning architecture that maximizes the similarity between brain and visual data without any class supervision, i.e., exactly the opposite of what is claimed in~\cite{siskind} (again, a straw-man argument). The implementation of the method trains the EEG encoder from scratch; hence its initial output will be essentially random and will definitely \textit{not} encode a class distribution. For practical training purposes, \cite{decoding_arxiv,model10} use a pre-trained image encoder, and the image classifier's complexity is generally higher than that of the EEG encoder. However, even if the image encoder is pre-trained for classification, the encoders resulting from joint training will not necessarily provide class embeddings. On the contrary, since the training loss is based on feature similarity (and not on image classes), and since the EEG encoder will initially provide random values, the image features shift from being class-discriminative to whatever embedding the EEG and image encoders converge to. \textit{Thus, the claim made by Li et al.~\cite{siskind} holds if, and only if, the two encoders are trained against class labels; but, as \cite{decoding_arxiv,model10} clearly state, the EEG encoder is trained from scratch and not against class labels, thus invalidating their assumption.} \subsection{Comment on \cite{siskind}'s analysis of the state of the art} In Section~6 of \cite{siskind}, the authors discuss the state of the art of \textit{``the field''} (it is not clear what they are referring to, but it sounds as if it applies to all EEG signal analysis in general). To summarize, they scrutinized 306 papers cited by Roy~\textit{et al}.~\cite{roy2019deep} or by \cite{brain2image,gan_brain_iccv_2017,Spampinato2016deep,tirupattur_acmmm}. Excluding self-citations and works that do not collect or use EEG data for classification, a total of 122 papers remained, for which the authors of \cite{siskind} state (quoting): \begin{itemize} \item[] \textit{``... attempted to assess whether samples in the test set were recorded in close proximity to samples in the training set or the classes in the test trials appeared in the same order as classes in the training trials''}. \end{itemize} They come to the following conclusions: \begin{itemize} \item[]\textit{``About a third of the papers scrutinized appeared to be clear of the concerns raised here. About another third appeared to suffer from the concerns raised here. It was not possible to assess the remaining third''}.
\end{itemize} We cannot refrain from pointing out the qualitative nature of this kind of analysis of the state of the art. No individual paper in this analysis is cited, either in the manuscript or in the appendix. No examples of papers belonging to the three ``about-a-third'' categories are cited, except the ones of which we are authors or coauthors. Attempting to assess the veracity of those claims requires one to search for the relevant papers and repeat every single study. In our experience, this is the first time we have seen a journal paper with a ``related work'' section that does not actually cite any papers and expects the readers to either trust the authors or research the subject by themselves. This is also the first time that we have seen a paper discussing a problem that apparently is general (\textbf{``about a third''} of 122 papers suffer from it) by focusing on just a few papers that share a subset of common authors. What appears even more puzzling is that while \cite{gan_brain_iccv_2017,Spampinato2016deep,brain2image,decoding_arxiv} share the dataset being criticized in~\cite{siskind}, the dataset employed in~\cite{tirupattur_acmmm} is from~\cite{Kumar}, which is a completely different dataset, and for which Li \textit{et al}.~\cite{siskind} clearly claim not to have information\footnote{They clearly state in relation to \cite{Kumar}'s dataset: \textit{"We have no way of knowing whether the test sets contained segments from the same blocks that had segments in the corresponding training sets."}}. The only thing these papers have in common is, again, a subset of authors. \subsection{Comment on code and data availability of \cite{siskind}} In our experiments, we tried to replicate the results presented in~\cite{siskind} by evaluating our implementation of their model on our data. Unfortunately, at the time of writing, the code of~\cite{siskind} had not been provided to us and no information about training procedures (hyperparameters, optimizer, etc.) is available in the manuscript. Given this, we implemented the model to the best of our knowledge based on what could be assessed from their paper and by making reasonable, unbiased assumptions about missing information. Analogously, Li \textit{et al}.~\cite{siskind} provided us only with raw EEG data, without any information identifying the experiment each recording comes from. Indeed, the raw EEG files carry only a suffix from 1 to 8 (and not even this holds for all subjects), which is presumably intended to identify the specific experiment. Moreover, even if we ran all the combinations in an attempt to understand their data and guessed the correspondence from the obtained results, the sequence of visual stimuli in the rapid experiments would remain unknown. The lack of code and data prevented us from running experiments and thoroughly checking the results reported in~\cite{siskind}. For the sake of fairness, we believe this is a point for the readers to be aware of. \section{Introduction}\label{sec:introduction} \input{introduction.tex} \section{Overview} Here we provide an overview of the data collection effort, the experiments carried out for our counter-analysis, and the main findings: \begin{itemize} \item Section~\ref{sec:experiments} describes the EEG data used in this work and the classification performance obtained by state-of-the-art deep models.
In particular: \begin{itemize} \item Section~\ref{sec:datasets} describes the data collection procedure and pre-processing details of the three datasets used in this work: (a) \textit{Block-Design Visually-Evoked (BDVE) dataset} --- the dataset based on a block design presented in \cite{Spampinato2016deep}; (b) \textit{Block-Design Blanks (BDB) dataset} --- an additional dataset containing EEG recordings of the same subjects who underwent the experiment in \cite{Spampinato2016deep}, while they were shown a blank screen for 10 seconds between each of the 40 image classes, as per the original acquisition protocol; (c) \textit{Rapid-Design Visually-Evoked (RDVE) dataset} --- EEG data collected during our replication of the rapid-design experiment presented in \cite{siskind}. \item Section~\ref{sec:classification} re-analyzes the results of the models proposed in \cite{Spampinato2016deep,decoding_arxiv,model10} as well as of state-of-the-art deep models on all three datasets above. The summary of the findings is as follows: \begin{itemize} \item Classification accuracy on the block-design \textit{BDVE} dataset, when the EEG data is correctly filtered, reaches approximately 50\%, which is far above the 2.5\% chance level for 40 classes. This is lower than the $\sim$83\% reported in~\cite{Spampinato2016deep}, but that analysis was biased by slow EEG activity. Thus, we agree that classifying raw EEG data, without any pre-processing, may inflate classifier performance, as stated in~\cite{siskind} and already discussed in~\cite{model10}. \item Classification accuracy on the \textit{BDB} dataset is only slightly above chance, suggesting \textit{a negligible temporal correlation in \cite{Spampinato2016deep}'s data.} \item Classification accuracy on the \textit{RDVE} dataset when using block-level labels, as done in \cite{siskind}, is also slightly above chance. This stands in stark contrast to the robust classification performance reported in \cite{siskind}. Analogously, performance is at chance when using the correct labels. We argue that the high accuracy obtained in \cite{siskind} on the rapid-design experiment with block-level labels is the direct result of a) an incorrect experimental and recording protocol, which increases the experiment duration, removes breaks, and undermines the subjects' attention level, and b) the use of per-subject classification. \end{itemize} \item In Section~\ref{sec:cont}, we attempt to understand why the pattern of classification accuracy that we found is so different from those in \cite{siskind}. In particular, we are only able to reproduce \cite{siskind}'s results with our data when we intentionally contaminate it with temporal correlation and magnify its impact on classification results through per-subject analysis as in~\cite{siskind}. This suggests that the data in~\cite{Spampinato2016deep} and our new data exhibit a significantly reduced temporal correlation compared to the data in~\cite{siskind}. Again, the large bias in the data of \cite{siskind} is due to an incorrect experimental design that does not follow EEG/ERP design guidelines and to problematic procedures such as per-subject analysis. \end{itemize} \item In Section~\ref{sec:disp}, we comment in detail on the experimental design utilized in~\cite{siskind} and its differences from the designs in~\cite{Spampinato2016deep}, together with the related consequences for the results. We also report what cognitive neuroscience suggests would be a more appropriate design.
\item In Section~\ref{sec:incon}, we carefully analyze~\cite{siskind} and expose the plethora of false claims, misinterpretations of our previous work and of basic machine learning concepts, and unverified opinions which appear to have the sole purpose of hurting the credibility of the authors of~\cite{Spampinato2016deep,brain2image,gan_brain_iccv_2017,tirupattur_acmmm,decoding_arxiv}. \end{itemize} \section{Experiments}\label{sec:experiments} \subsection{EEG data collection and pre-processing}\label{sec:datasets} \input{dataset.tex} \subsection{Influence of EEG temporal correlation on classification tasks}\label{sec:classification} This section investigates the extent to which temporal correlation in \cite{Spampinato2016deep}'s dataset may have influenced the performance of deep classifiers. We first compute EEG classification performance on the block-design BDVE dataset proposed in \cite{Spampinato2016deep} (described in the previous section) to establish the current state-of-the-art classification performance on this dataset. Then, in order to investigate whether the obtained results are due to a temporal correlation in the EEG data: \begin{itemize} \item We perform new experiments on the \emph{Block-Design Blanks -- BDB -- dataset}; \item We replicate the experiments on a rapid-design dataset with block-level labels as proposed by \cite{siskind} on \emph{our Rapid-Design Visually-Evoked -- RDVE -- dataset}. \end{itemize} We employ the methods presented in~\cite{Spampinato2016deep,model10} and other state-of-the-art EEG classification methods, including \cite{Lawhern_2018} and \cite{NIPS2017_7048}\footnote{Released source code ported by us to PyTorch.}, in order to properly link our previous work to the current state of the art. Given that Li \textit{et al}.~\cite{siskind}'s source code is not available, we are unable to replicate their exact analysis steps on our block-design and rapid-design data. As mentioned earlier, in our experiments on the block-design dataset we use the dataset splits of \cite{Spampinato2016deep}, with 80\% of the data used for training, 10\% for validation, and 10\% for testing. We employ a standard cross-entropy loss function, minimized through mini-batch gradient descent using the Adam optimizer with a learning rate of 0.001 and a mini-batch size of 16. Training was carried out for 200 epochs. Test accuracy is reported at the epoch of best validation performance. Given that EEG-related object categorization should pool data across subjects~\cite{Stewart2014single}, we compute performance by pooling data from all subjects, instead of per subject (as was done by Li \textit{et al}.~\cite{siskind}). Indeed, we will show in Sections~\ref{sec:cont} and \ref{sec:disp} how and why single-subject data tends to maximize the effect of temporal correlation in EEG classification tasks. \subsubsection{Block-design experiment} \label{sec:classification_eeg} \input{eeg_classification.tex} \subsubsection{Rapid-design experiment}\label{sec:saliency} \input{rapid.tex} \subsection{Comparison to~\cite{siskind}'s results}\label{sec:cont} \input{cont.tex} \subsection{Discussion}\label{sec:discussion} Here we summarize the findings presented in this section: \begin{itemize} \item As already demonstrated in recent work~\cite{model10}, and observed in~\cite{siskind}, the higher performance obtained in~\cite{Spampinato2016deep} was due to incorrectly using raw EEG data: the drift present in the DC component of the signal makes block-design classification easier.
\item However, with properly filtered EEG data, classification accuracy reaches about 50\% on 40 classes, which is still far above chance. More importantly, the obtained performance appears to be supported by cognitive neuroscience studies. \item Our counter-analysis, aimed at investigating the presence and influence of temporal correlation on classification performance, reveals that our classification results on the block-design data are unaffected (or only slightly affected). Thus, \emph{the data published in~\cite{Spampinato2016deep} is correct, and importantly, the effect of the temporal correlation claimed in~\cite{siskind} is marginal.} \item Our replication of the experiments in \cite{siskind} with rapid-design data and block-level labels shows a small contribution of temporal correlation to classification performance (less than 5\%, as shown in Table~\ref{tab:eeg_black}), which is an order of magnitude less than what \cite{siskind} reports (about 50\%). \item The results presented in \cite{siskind} appear to be a consequence of their failure to follow standard EEG/ERP study design procedures, particularly those related to the duration of experiments --- recommendations that \cite{Spampinato2016deep} did follow. In addition, they did not replicate the analysis in \cite{Spampinato2016deep}, which pooled data from all subjects. \item We verify that the classification results reported in \cite{siskind} are similar to what we obtain when we intentionally create temporal correlation in our EEG data. This seems to suggest that \cite{siskind}'s data suffers from a strong temporal correlation. \item Given that \cite{siskind}'s primary evidence for a temporal correlation in \cite{Spampinato2016deep} stems from the very similar performance of the block-design dataset and the rapid-design dataset using block-level labels, and that this similarity exists only for their data and not for our data, \cite{siskind}'s criticism of the block-design experiment in \cite{Spampinato2016deep} is invalid. \item In conclusion, our counter-analyses of the experimental settings and results in~\cite{siskind} suggest a strong temporal correlation in their data due to erroneous experimental procedures and unconventional analysis practices. \end{itemize} \section{Cognitive neuroscience experimental designs}\label{sec:disp} This section elaborates in more depth how and why the experimental design of the studies in~\cite{siskind} is at odds with cognitive neuroscience recommendations and compares it with the designs described in~\cite{Spampinato2016deep} and in this paper. \input{disp.tex} \section{Additional inconsistencies and fallacies in \cite{siskind}}\label{sec:incon} In this section, we comment on a number of inaccuracies and logical fallacies in~\cite{siskind} and rebut the remaining points raised in \cite{siskind} that were not covered in previous sections. \input{inc.tex} \section{Conclusion}\label{sec:conclusions} \input{conclusions.tex} \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi The authors would like to thank Dr. Martina Platania for supporting the data acquisition phase, Dr. Demian Faraci for the experimental results, and NVIDIA for the generous donation of two Titan X GPUs. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} Two-dimensional (2D) flows have proven to be very useful for studying various phenomena in fluid dynamics since, compared to three-dimensional flows, they are much more amenable to theoretical analysis and numerical investigation. In particular, much of our fundamental understanding of transport properties of fluid flows has been developed using 2D models. Effectively 2D fluid flows are responsible for transport and mixing in many geophysical processes such as atmospheric \cite{Haynes2005,Pierrehumbert1991} and oceanic \cite{Haller1997,Wiggins2005} flows as well as in convection processes within the Earth's mantle \cite{Hoffman1985,Allegre1986}. Two-dimensional laminar mixing is a key process in many types of microfluidic assays \cite{Lee2001,Niu2003}, such as ones used for gene expression profiling \cite{Stremler2004}, and in numerous technological applications, such as the production of polymer blends \cite{Ottino2004}. The reduction to two dimensions has also provided insights into many difficult 3D problems ranging from mixing in the radiation zones of rotating stars \cite{Mathis2004} to confinement of thermonuclear plasmas \cite{Robert1980}. Much of our understanding of transport properties of fluid flows comes from experimental observations or numerical simulations of the advection, or stirring, of passive tracers by the flow. The dynamics of passive tracers in 2D flows of incompressible fluids is formally described by a Hamiltonian system with one degree of freedom \begin{eqnarray} \label{tracers} &&\dot{x}=v_x=\partial_y\Psi,\nonumber\\ &&\dot{y}=v_y=-\partial_x\Psi, \end{eqnarray} where the stream function $\Psi(x,y,t)$ plays the role of a Hamiltonian and the coordinates $x$ and $y$ are the conjugate variables. Time-independent 2D flows are integrable (because $\Psi$ is conserved), and the trajectories of tracers coincide with the closed stream lines of the flow, resulting in poor mixing. The introduction of time-dependence effectively turns the tracer dynamics into a three-dimensional dynamical system, which is generally nonintegrable. Stream lines in such flows can be chaotic even if the underlying velocity field is regular (e.g., stable and time-periodic). If this is the case, the stream lines will diverge exponentially fast from one another, resulting in rapid stretching and folding of fluid elements. This process, known as chaotic advection or Lagrangian chaos, is the underlying mechanism responsible for dramatically improved mixing. One of the simplest 2D models in which chaotic mixing can occur is the `blinking-vortex' flow studied by Aref \cite{Aref1984}. Historically the first study of the mixing properties of a fluid system, this model was originally proposed as an idealization of a periodically stirred fluid and consists of a pair of spatially separated fixed point vortices which are alternately turned on for one half of the period $T$. Numerical simulations showed that when both vortices are running continuously (i.e., $T=0$), the flow is time-independent and, thus, integrable. For small nonzero $T$, it was found that the trajectories nearest the vortices become chaotic. The size of the mixed region increases monotonically with $T$ until, at some finite critical value of $T$, the entire domain becomes uniformly mixed. The first analytic investigation of mixing in a time-periodic 2D flow is due to Khakhar \textit{et al}. \cite{Khakhar1986}.
This study introduced an idealized model, known as the `tendril-whorl' flow, in which uniform shear is followed by differential rotation, and showed that mixing takes place in the vicinity of separatrices associated with saddle fixed points of the period-$T$ map of the flow, while the KAM tori surrounding the elliptic fixed points serve as transport barriers. The same structures were shown to also control mixing in the `blinking-vortex' flow. These studies demonstrated that laminar, time-periodic, area-preserving 2D flows can produce efficient mixing. The results of these idealized models raised questions as to whether or not real-world laminar fluid flows could give rise to chaotic stream lines. This prompted the analytical and numerical study of chaotic advection in a journal bearing Stokes flow with physical boundary conditions \cite{Aref1986,Chaiken1987}. The basic setup is that of a Couette flow between non-coaxial rotating cylinders, where time-periodicity is introduced by alternating the rotation between the inner and outer cylinders. By varying the distance between the axes of the cylinders as well as the time interval for which one of the cylinders rotates, one can obtain various flow patterns with both regular and chaotic trajectories. The experimental realization of this flow \cite{Chaiken1986} showed excellent agreement with numerical results. Subsequently, the experimental study of cavity flows by Chien {\em et al.} \cite{Chien1986} showed the existence of transverse intersections of homo/heteroclinic manifolds at small Reynolds numbers, providing more evidence for the mixing capabilities of 2D laminar flows. Rom-Kedar {\em et al.} \cite{Rom-Kedar1990} proved the existence of chaotic trajectories analytically for a model flow produced by a pair of time-independent point vortices perturbed by a time-periodic shear. The theory of lobe dynamics developed in this paper set up a framework for the quantitative description of transport across separatrices of the unperturbed flow, which evolve into a homoclinic tangle in the presence of perturbation. In conjunction with the analytic techniques introduced by Melnikov \cite{melnikov1963}, this framework enabled them to estimate the fluxes between different regions of the flow domain. Recently, most experimental and many theoretical studies of mixing in 2D flows have used thin layers of electrolyte placed over various arrangements of permanent magnets. The fluid flow is driven by the Lorentz force which arises when electric current flows through the electrolyte. Since this setup is closely related to our work, we describe here other studies which used it. Rothstein {\em et al.} \cite{Rothstein1999} discovered the existence of persistent spatial patterns, which they called strange eigenmodes, in a flow driven by a combination of time-periodic current and either a disordered or a square array of magnets. These patterns were shown to emerge as a result of a delicate balance between advective stretching and molecular diffusion. The process of mixing was observed to continue even after these structures reached an asymptotic shape. The same experimental setup was subsequently used to investigate the rate of mixing \cite{Voth2003}. By examining the spatial structure of persistent patterns, it was found that locally, mixing rates are controlled by stretching, but on large scales they are governed by diffusive transport.
Additionally, it was discovered that mixing rates could be dramatically enhanced by breaking certain spatial and temporal symmetries. Voth {\em et al.} \cite{Voth2002} used a disordered array of magnets and time-periodic current to drive the flow and were able to use flow measurements to construct forward and backward finite-time Lyapunov exponent (FTLE) fields which follow the time-evolution of the unstable and stable manifolds of saddle points of the flow, thus providing an empirical method for visualizing the geometrical structures underlying the mixing process. A follow-up experimental study carried out using magnets arranged in square, hexagonal, and disordered arrays \cite{Arratia2005} found that the probability distribution of FTLEs exhibited self-similar behavior regardless of the flow pattern or the degree of mixing in the system. Fluid mixing was also studied in time-dependent flows driven by steady current. Danilov {\em et al.} \cite{Danilov2000} performed a combined experimental and theoretical study of mixing by a time-periodic four-vortex flow. Numerical study of a truncated analytic model showed that separatrices partitioned the flow domain into regions with different mixing rates and that transport between these regions was a relatively slow process compared to the mixing within them. The theory of adiabatic chaos was used to explain the results and show that long-term transport could effectively be modeled as a random walk of an adiabatic invariant. This paper investigates mixing properties of a range of 2D flows arising as intermediate stages in the transition from the so-called Kolmogorov flow \cite{Arnold1960,Meshalkin1961} to turbulence. Unlike the majority of other studies of mixing in 2D flows, where the time-dependence is due to external monochromatic forcing, our focus is on time-dependence that arises naturally as a result of fluid-dynamic instabilities, producing flows ranging in their temporal complexity from time-periodic to quasi-periodic and chaotic. The paper is organized as follows: In Sect. \ref{s:model} we introduce the model of the fluid flow and characterize the flow states that emerge in the transition from the laminar to the turbulent regime. The mixing properties of these flows are described and analyzed in Sect. \ref{s:mixing}. Finally, summary and conclusions are presented in Sect. \ref{s:summary}. \section{Problem Description} \label{s:model} \subsection{Model of the Fluid Flow} We consider a model of an experimental flow described in Ref. \onlinecite{jfm_12} which employs bar magnets with alternating polarity to generate a Kolmogorov flow in a layer of electrolyte supported by a liquid dielectric. The flow in the conducting layer can be described by the following equation for the vorticity $\Omega=-\nabla^2\Psi$: \begin{equation} \partial_t\Omega+\beta{\bf v}\cdot\nabla\Omega=\nu\nabla^2\Omega-\alpha\Omega+A\sin ky \label{vorticity} \end{equation} where $k=\pi/w$ and $w$ is the width of individual magnets. The parameters $\beta=1$, $\nu=0.0115$ cm$^{2}$/s, and $\alpha=0.1141$ s$^{-1}$ were selected to be representative of a typical experimental setup. Furthermore, we chose the domain width $L_y=5$ cm, corresponding to four magnets of width $w=1.25$ cm, and the length $L_x=2L_y=10$ cm. For simplicity, unlike the experimental system, which is larger and features physical (no-slip) lateral boundary conditions, we assume periodic boundary conditions.
The effect of the bottom boundary, however, is included in our model via the Rayleigh friction term $-\alpha \Omega$. The importance of this term is described by the non-dimensional combination $F=\alpha/(\nu k^2)\approx 1.57$, which shows that it is comparable to the viscous term $\nu\nabla^2\Omega$. Finally, $A$ measures the strength of the driving force and is used as a control parameter analogous to the Reynolds number. The vorticity equation (\ref{vorticity}) was solved numerically using a spectral (Fourier) method with $64 \times 128$ modes. As a check, we recalculated the bifurcation sequence using $128 \times 256$ modes, which yielded less than a $1\%$ difference in both the leading stability eigenvalues and the location of the bifurcations. Temporal discretization used a second-order, implicit-explicit, operator-splitting scheme with an adaptive time step \cite{Ascher1995}. The Crank-Nicolson method was used for the linear and forcing terms in Fourier space, while the fourth-order Runge-Kutta method was used for the advection term in real space. The use of the so-called Strang-Marchuk splitting \cite{Marchuk1975} ensures that the resulting scheme is second order in time. \subsection{From Kolmogorov Flow to Turbulence} In this section we describe the transition to 2D turbulence in our model system as the value of the control parameter $A$ is increased. Unlike many shear flows in 3D, which transition directly from laminar flow to turbulence, here we find a rather complicated sequence of transitional flow states whose temporal complexity changes in a non-monotonic fashion before a turbulent flow is eventually established. \begin{figure} (a)\includegraphics[width=3in]{L_A_pt1.pdf}\\ (b)\includegraphics[width=3in]{M_A_pt25.pdf} \caption{Laminar flow $L$ at $A=0.1$ s$^{-2}$ (a) and spatially modulated flow $M$ at $A=0.250$ s$^{-2}$ (b). Velocity field (arrows) is overlaid on top of the vorticity field (grayscale).} \label{LM} \end{figure} \begin{figure} \includegraphics[width=3in]{bifplot_winset.pdf} \caption{Bifurcation diagram. The relative vorticity magnitude $\Omega_0\equiv \|\Omega-\Omega_L\|_2-cA$ is shown, where $c$ is a constant chosen to separate the various branches of the diagram for visualization purposes. Solid and dashed lines denote stable and unstable states, respectively. Periodic orbits are represented by their time-averaged values. Inset shows the region where the $P_3$ branch exists.} \label{Bifurcation} \end{figure} The Kolmogorov flow profile describes a laminar solution of the vorticity equation (\ref{vorticity}) with the symmetry of the driving: continuous translational symmetry in the $x$ direction and discrete translational symmetry in the $y$ direction. The problem also possesses two additional discrete symmetries (rotation by 180 degrees about a vertical axis and a flip about the $x$ (or $y$) axis combined with a change in the sign of the vorticity), but these will not play an important role in the subsequent discussion. The laminar flow (referred to simply as $L$ below) is described by the following analytical solution for the vorticity \begin{equation} \Omega_L=\frac{A}{\alpha + k^2 \nu}\sin ky \label{laminar} \end{equation} and features straight alternating shear bands which reflect the geometric arrangement of the magnets (see Fig. \ref{LM}(a)). For our choice of parameters, linear stability analysis predicts this flow profile to be stable for $A<0.1145$ s$^{-2}$. This is confirmed by the results of our numerical simulations summarized in Fig.
\ref{Bifurcation}, which shows all stable and unstable solutions that have been computed using a Jacobian-free Newton-Krylov solver \cite{Knoll2004} for $A\leq 1$ s$^{-2}$. At $A\approx 0.1145$ s$^{-2}$ the laminar flow $L$ loses stability through a supercritical pitchfork bifurcation and is replaced with its steady, spatially modulated version. As $A$ is increased, the distortion of the shear bands increases and they are gradually replaced with a periodic array of counter-rotating vortices. This spatially modulated shear flow (denoted $M$ and shown in Fig. \ref{LM}(b)) eventually undergoes a supercritical Hopf bifurcation and loses stability at $A \approx 0.3750$ s$^{-2}$. \begin{figure}[b] \includegraphics[width=\columnwidth]{P1_1.pdf} \caption{Time-periodic flow $P_1$ at $A=0.428$ s$^{-2}$ and (a) $t=0$, (b) $t=T/4$, (c) $t=T/2$, (d) $t=3T/4$ with $T=365.83$ s. The same color bar as in Fig. \ref{NS}(a) is used here.} \label{P1} \end{figure} At this point the first stable, time-periodic solution (denoted $P_1$) appears. Four snapshots of this state at different phases of the oscillation are shown in Fig. \ref{P1}. For time-periodic flows it is convenient to represent the stream function as a perturbation about a steady state \begin{equation} \Psi(x,y,t)=\Psi_0(x,y)+\epsilon\Psi_1(x,y,t),\label{perturbation} \end{equation} where the perturbation $\Psi_1$ has zero time average and $\langle\|\Psi_1\|_2\rangle_t=\|\Psi_0\|_2$ ($\|\ \|_2$ denotes the 2-norm and $\langle\ \rangle_t$ denotes the time average). The strength $\epsilon$ of the time-dependent perturbation as a function of $A$ is shown in Fig. \ref{P123} along with its frequency $\omega_1$ (a sketch of how $\epsilon$ can be evaluated from flow snapshots is given below). \begin{figure}[t] \centering (a) \includegraphics[height=1.15in]{avg_nrmratio_P1_1}\hspace{2mm} \includegraphics[height=1.15in]{OmvsA_P1}\vspace{2mm} (b) \includegraphics[height=1.15in]{avg_nrmratio_P2_1}\hspace{2mm} \includegraphics[height=1.15in]{OmvsA_P2}\vspace{2mm} \hspace{5mm} (c) \includegraphics[height=1.15in]{avg_nrmratio_P3_1}\hspace{2mm} \includegraphics[height=1.15in]{OmvsA_P3} \caption{\label{P123}The perturbation amplitude $\epsilon$ and frequency $\omega_1=2\pi/T$ of the time-periodic flows $P_1$ (a), $P_2$ (b), and $P_3$ (c). Only the ranges of $A$ are shown where these flows exist and are stable.} \end{figure} As expected for a state created via a Hopf bifurcation, the amplitude of oscillation for $P_1$ grows as the square root of the distance to the bifurcation point (see Fig. \ref{P123}(a)). The frequency of oscillations $\omega_1=2\pi/T$ decreases (and the period $T$ increases) monotonically with $A$ until the oscillatory state is destroyed as a result of an infinite-period bifurcation at $A \approx 0.4635$ s$^{-2}$. \begin{figure} (a)\includegraphics[width=3in]{N_A_pt75.pdf}\\ (b)\includegraphics[width=3in]{S_A_pt75.pdf} \caption{Stable steady flow $N$ (a) and unstable steady flow $S$ (b) at $A=0.750$ s$^{-2}$.} \label{NS} \end{figure} At this point two steady solutions are created, a stable node $N$ and a saddle $S$ (shown in Figs. \ref{NS}(a) and (b), respectively). The corresponding flows are quite similar (a disordered array of four clockwise and four counterclockwise vortices) and possess a relatively low symmetry: just like $P_1$, they are symmetric with respect to a shift $(x,y)\to(x+L_x/2,y+L_y/2)$.
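For concreteness, the amplitude $\epsilon$ defined via (\ref{perturbation}) can be extracted directly from a time series of stream-function snapshots. The following minimal sketch is our own illustration rather than the production code; it assumes the snapshots are stored in an array \texttt{psi} of shape \texttt{(nt, ny, nx)}, sampled uniformly over an integer number of periods, and uses RMS values in place of the 2-norm (the ratio is unaffected by this choice of normalization):
\begin{verbatim}
import numpy as np

def perturbation_amplitude(psi):
    """Return epsilon from snapshots psi(t, y, x), using the
    decomposition Psi = Psi_0 + eps * Psi_1 with <||Psi_1||>_t = ||Psi_0||."""
    psi0 = psi.mean(axis=0)              # steady part Psi_0 (time average)
    psi1 = psi - psi0                    # fluctuation with zero time average
    norm0 = np.sqrt(np.mean(psi0 ** 2))  # RMS norm of Psi_0
    norm1 = np.mean(np.sqrt(np.mean(psi1 ** 2, axis=(1, 2))))
    return norm1 / norm0                 # epsilon = <||Psi_1||>_t / ||Psi_0||
\end{verbatim}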
The numerical solution of (\ref{vorticity}) follows the stable branch $N$ as $A$ increases further until the corresponding steady flow again develops an oscillatory instability (also a supercritical Hopf) at $A \approx 0.8125$ s$^{-2}$, giving rise to another time-periodic flow $P_2$, shown in Fig. \ref{P2}. The amplitude and frequency of this flow are shown in Fig. \ref{P123}(b). This state is stable in a fairly narrow range of $A$ and, at $A \approx 0.8180$ s$^{-2}$, $P_2$ undergoes a secondary supercritical Hopf bifurcation giving rise to a quasi-periodic flow (denoted $QP$) which, after another Hopf bifurcation, transitions to aperiodic flow around $A \approx 0.865$ s$^{-2}$. \begin{figure}[b] \includegraphics[width=\columnwidth]{P2_1.pdf} \caption{Time-periodic flow $P_2$ at $A=0.817$ s$^{-2}$ and (a) $t=0$, (b) $t=T/4$, (c) $t=T/2$, (d) $t=3T/4$ with $T=131.76$ s. The same color bar as in Fig. \ref{NS}(a) is used here.} \label{P2} \end{figure} At $A \approx 0.8740$ s$^{-2}$, a third stable, time-periodic state $P_3$, shown in Fig. \ref{P3}, is created via a subcritical Hopf bifurcation. The corresponding flow does not respect any of the symmetries of the system and is only stable for a very narrow range of $A$ before it undergoes a subcritical pitchfork bifurcation at $A \approx 0.8768$ s$^{-2}$. Its amplitude and frequency are effectively constant throughout its range of stability, as Fig. \ref{P123}(c) illustrates. \begin{figure} \includegraphics[width=\columnwidth]{P3_1.pdf} \caption{Time-periodic flow $P_3$ at $A=0.875$ s$^{-2}$ and (a) $t=0$, (b) $t=T/4$, (c) $t=T/2$, (d) $t=3T/4$ with $T=94.39$ s. The same color bar as in Fig. \ref{NS}(b) is used here.} \label{P3} \end{figure} Increasing $A$ further, we find another narrow aperiodic window before the flow returns to quasi-periodic behavior at $A \approx 0.885$ s$^{-2}$. Finally, the flow once again becomes aperiodic at $A \approx 0.980$ s$^{-2}$. The temporally aperiodic (or chaotic) flows we find are weakly turbulent. We conclude this section with a discussion of the Reynolds number \begin{equation} Re=w \nu^{-1}\|\langle{\bf v}\rangle_t \|_2 \end{equation} characterizing the solutions described above. As Fig. \ref{RevsA} shows, $Re$ varies linearly with $A$ in different flow regimes. The slope is roughly the same for almost all flows, except the laminar flow $L$, for which it is much steeper. Indeed, a quick inspection of the vorticity fields shows that, beyond $L$, the flow is dominated by structures oriented at an angle $\theta\approx 45$ degrees to the $x$ direction, so that the slope can be estimated as $Re/A\sim (k/\sin\theta)^{-4} \nu^{-2}\approx 47.5$ s$^2$. For the laminar flow we find instead $Re/A\sim k^{-4} \nu^{-2}\approx 190$ s$^2$. Both estimates are in reasonable agreement with the numerical data presented in Fig. \ref{RevsA} (a short numerical check of these estimates is given below). \section{Mixing Properties} \label{s:mixing} \subsection{Numerical Results} In order to quantify the transport properties of the flow, it is convenient to use two different metrics: (i) the relative size (in this case area) of the mixed region and (ii) the rate of mixing. Both metrics are most easily evaluated by following the evolution of an initially well-localized array of passive tracers.
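As a quick consistency check of the non-dimensional combinations quoted in the previous section, the values of $F$ and of the two $Re/A$ slopes can be reproduced directly from the listed parameters. The following short sketch is added purely for the reader's convenience:
\begin{verbatim}
import numpy as np

# Parameter values from Sect. II A
alpha, nu, w = 0.1141, 0.0115, 1.25       # s^-1, cm^2/s, cm
k = np.pi / w                             # forcing wavenumber, cm^-1

print(alpha / (nu * k**2))                # F ~ 1.57
print(1.0 / (nu**2 * k**4))               # laminar slope Re/A ~ 190 s^2
print(np.sin(np.pi / 4)**4 / (nu**2 * k**4))  # slope beyond L ~ 47.5 s^2
\end{verbatim}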
Before continuing with the detailed discussion of mixing dynamics, we should point out that, while the laminar flow $L$ is expected to be the worst mixer and the aperiodic (turbulent) flow the best, the complicated sequence of transitional states observed as $A$ is increased implies that we should not expect a monotonic increase for either metric. While one would expect both metrics to mirror the spatial and temporal complexity of the flow, we find that this correlation is far from perfect. \begin{figure} \includegraphics[width=3in]{RevsA.pdf} \caption{Relationship between the Reynolds number and the forcing strength $A$. Again, solid lines denote stable states and dashed lines denote unstable ones.} \label{RevsA} \end{figure} As discussed in the introduction, the dynamics of passive tracers (\ref{tracers}) is formally Hamiltonian, with $\Psi$ being the Hamiltonian. Time-independent, one-degree-of-freedom Hamiltonian systems are always integrable and thus exhibit regular motion. The tracers follow closed stream lines on which $\Psi$ is exactly conserved; hence, the initial tracer distribution eventually stretches along the stream line passing through its center, but never broadens. However, the introduction of time-dependence is expected to split the flow domain into regions of chaotic and regular dynamics. The relation between mixing and chaotic stream lines establishes a direct analogy between transport in one-degree-of-freedom Hamiltonian systems and mixing in 2D area-preserving flows. In order to quantify the mixing process, for each value of $A$, a set of passive tracers was initially placed in a square region with a side of $0.1$ mm (which corresponds to initial area fraction $f(0)=2\times 10^{-6}$). Since the greatest degree of stretching usually occurs along homo- or heteroclinic trajectories, the initial sets were centered on top of one of the saddles of the instantaneous flow field. Each tracer was then advected by numerically integrating (\ref{tracers}) using a fourth-order, area-preserving, symplectic integrator based on the 2-stage Gauss-Legendre scheme \cite{Hairer2006}. Velocities for each tracer were computed at each time step using a cubic interpolation scheme on the $64\times 128$ grid in real space. The dispersion of tracers was then used to compute the mixing metrics. The mixed area fraction $f(t)$ was computed by partitioning the flow domain into a set of small boxes and computing the ratio of the number of boxes $m$ containing at least one tracer to the total number of boxes $k$. When the tracers uniformly cover the domain, the area fraction should be unity. However, if there are $k$ boxes with $n$ randomly distributed tracers, the fraction of boxes containing at least one tracer would on average be $p_{n,k}=1-\exp(-n/k)$, since each box remains empty with probability $(1-1/k)^n\approx e^{-n/k}$. Thus, the measured area fraction for each value of $A$ was normalized by $p_{n,k}$ \begin{equation} f(t)=\frac{m(n,t)}{kp_{n,k}}, \end{equation} so that a uniformly distributed set of tracers would give an area fraction of one (a sketch of this estimator is given below). Fig. \ref{fvsA} shows the area fraction occupied by the tracers after a rather long time interval of $5\times10^4$ s. In comparison, the period of $P_1$, $P_2$, and $P_3$ is of order $100$ s, while the characteristic time scale of the flow around vortices is below $10$ s. We find that the area fraction remains near zero for all of the time-independent flows ($L$, $M$, and $N$), as it should be for integrable flows.
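The normalized estimator just described is straightforward to implement. The following minimal sketch is our own illustration; the tracer coordinates \texttt{x}, \texttt{y} are assumed to be 1D arrays on the periodic domain, and the number of boxes is a free choice for illustration, not a value taken from our computations:
\begin{verbatim}
import numpy as np

def mixed_area_fraction(x, y, Lx=10.0, Ly=5.0, nbx=100, nby=50):
    """Fraction of boxes visited by tracers, normalized so that
    n uniformly random tracers give f = 1 on average."""
    n, k = len(x), nbx * nby
    H, _, _ = np.histogram2d(x % Lx, y % Ly, bins=[nbx, nby],
                             range=[[0.0, Lx], [0.0, Ly]])
    m = np.count_nonzero(H)          # boxes containing >= 1 tracer
    p = 1.0 - np.exp(-n / k)         # expected fraction for random tracers
    return m / (k * p)
\end{verbatim}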
For time-dependent flows, (\ref{tracers}) formally becomes a three-dimensional dynamical system (augmented by the equation $\dot{t}=1$) which, in general, possesses chaotic solutions (stream lines). Chaotic advection, in principle, should dramatically enhance mixing. However, as Fig. \ref{fvsA} shows, the mixed area fraction for $P_1$ and $P_2$ is only slightly higher than that for the time-independent flows. The time-periodic flow $P_3$, on the other hand, produces nearly perfect mixing, with a mixed area fraction comparable to that of aperiodic flows. \begin{figure} \includegraphics[width=3in]{AreafracvsA_paper.pdf} \caption{The fraction $f$ of the mixed area relative to the total area of the domain at $t=5\times 10^4$ s.} \label{fvsA} \end{figure} Examining the temporal evolution of the area fraction covered by the tracers shown in Fig. \ref{fracvst}, one can discern two distinct stages for the time-periodic flows. Initially there is a very fast increase. For $P_1$ and $P_2$ it corresponds to rapid stretching of the set of tracers along the homoclinic trajectories forming a thin closed band (see Figs. \ref{P12-mixing}(a) and (c)). This is followed by a much slower growth associated with the broadening of this band. However, even after a very long time, the band of tracers remains quite thin and aligned along the stream lines of the instantaneous flow (see Figs. \ref{P12-mixing}(b) and (d)). For $P_3$, on the other hand, the set of tracers undergoes a rapid initial phase of both stretching \textit{and} folding and quickly (within several periods of the flow) covers almost the entire domain (see Fig. \ref{P3-mixing}(a)). A closer look further shows that, for $P_2$ and $P_3$, the tracer distribution reaches an asymptotic state already around $10^3$ s, while for $P_1$ the area fraction is still growing at $t=5\times 10^4$ s. Finally, although the asymptotic distribution of the tracers for $P_3$ is essentially uniform, the tracers never penetrate four small regular islands centered around vortices with positive vorticity, as Fig. \ref{P3-mixing}(b) illustrates. We will return to this fact in Sect. \ref{s:resonance}. \begin{figure} \centering (a)\hspace{-8mm} \includegraphics[width=3.1in]{Areafracvst_428.pdf} \\ (b)\hspace{-8mm} \includegraphics[width=3.1in]{Areafracvst_817.pdf} \\ (c)\hspace{-8mm} \includegraphics[width=3.0in]{Areafracvst_875.pdf} \caption{\label{fracvst}Temporal dependence of the area fraction for the three time-periodic flows: (a) $P_1$ at $A=0.428$ s$^{-2}$, (b) $P_2$ at $A=0.817$ s$^{-2}$, and (c) $P_3$ at $A=0.875$ s$^{-2}$.} \end{figure} \begin{figure*} \centering{ (a) \includegraphics[width=3in]{tracers_levelsets_428_intermediate.pdf}\hspace{5mm} (b) \includegraphics[width=3in]{asymp_tr_P1.pdf} \\ (c) \includegraphics[width=3in]{tracers_levelsets_817_intermediate.pdf}\hspace{5mm} (d) \includegraphics[width=3in]{asymp_tr_P2.pdf}} \caption{\label{P12-mixing}Mixing by the time-periodic flows. The distribution of $6\times10^4$ tracers and the stream lines of the instantaneous flow for $P_1$ at $t=598$ s (a) and $t=5\times10^4$ s (b). The same for $P_2$ at $t=49$ s (c) and $t=5\times10^4$ s (d).} \end{figure*} \begin{figure*} \centering{ (a) \includegraphics[width=3in]{tracers_levelsets_875_75.pdf}\hspace{5mm} (b) \includegraphics[width=3in]{tracers_levelsets_875_asymp.pdf}} \caption{\label{P3-mixing}Mixing by the time-periodic flow $P_3$.
The distribution of $6\times10^4$ tracers and the stream lines of the instantaneous flow at $t=317$ s (a) and $t=3500$ s (b).} \end{figure*} \begin{figure*} \centering{ (a) \includegraphics[width=3in]{tracers_levelsets_820_75.pdf}\hspace{5mm} (b) \includegraphics[width=3in]{tracers_levelsets_820_asymp.pdf}\\ (c) \includegraphics[width=3in]{tracers_levelsets_846_75.pdf}\hspace{5mm} (d) \includegraphics[width=3in]{tracers_levelsets_846_asymp.pdf}} \caption{\label{QP-mixing} Mixing by the quasi-periodic flow $QP$. The distribution of $6\times10^4$ tracers and stream lines of the instantaneous flow for $A=0.820$ s$^{-2}$ at $t=1038$ s (a) and $t=3500$ s (b). The same for $A=0.846$ s$^{-2}$ at $t=645$ s (c) and $t=3500$ s (d).} \end{figure*} \begin{figure*}[t] \centering{ (a) \includegraphics[width=3in]{tracers_levelsets_872_75.pdf}\hspace{5mm} (b) \includegraphics[width=3in]{tracers_levelsets_872_asymp.pdf}\\ (c) \includegraphics[width=3in]{tracers_levelsets_878_75.pdf}\hspace{5mm} (d) \includegraphics[width=3in]{tracers_levelsets_878_asymp.pdf}} \caption{\label{tracerlevel} Mixing by aperiodic flow. Distribution of $6\times10^4$ tracers and the stream lines of the instantaneous flow for $A=0.872$ s$^{-2}$ at $t=369$ s (a) and $t=3500$ s (b). The same for $A=0.878$ s$^{-2}$ at $t=221$ s (c) and $t=3500$ s (d).} \end{figure*} Fig. \ref{QP-mixing} shows the tracer distribution for two values of $A$ above the onset of the secondary Hopf bifurcation which destroys $P_2$ and makes the flow quasi-periodic. We find the evolution of the tracers to follow the same scenario as in the case of the time-periodic flow $P_3$: after a short initial stage of stretching and folding, the set of tracers fills a significant fraction of the full domain. This stage is followed by a much slower homogenization process in which the distribution becomes spatially uniform. However, just like in the case of $P_3$, the tracers never penetrate four regular islands centered around vortices, now with negative vorticity. The fundamental difference between (quasi)periodic and aperiodic flows makes itself apparent if we compare mixing by the periodic flow $P_3$ with that by aperiodic flows just outside of the window of stability for $P_3$, at $A=0.872$ s$^{-2}$ and $A=0.878$ s$^{-2}$. Although the forcing is almost identical in these three cases and the short-term dynamics of the three flows are similar, Fig. \ref{tracerlevel} shows that the aperiodic flows achieve perfect mixing in the long term, covering the entire domain, including the four regular islands of $P_3$. Fig. \ref{fvst} summarizes the observed mixing rate as a function of the control parameter $A$. As Fig. \ref{fracvst} amply illustrates, the mixing process is characterized by a range of time scales. The fastest time scale describes stretching of the initial tracer distribution along the stream line passing through its center. The corresponding rate is defined as $r_{\max}=\max_t|df/dt|$ and is proportional to the average shear rate corresponding to that stream line. The slowest time scale describes broadening of the distribution due to transport of tracers through semi-penetrable transport barriers discussed in Sect. \ref{s:resonance}. To characterize this broadening, we computed the time $t_{90}$ it takes for the area fraction to reach 90\% of its asymptotic value, $f(t_{90})/f(t_{100})=0.9$, where we assumed the asymptotic distribution is achieved at $t_{100}=5\times 10^4$ s. The minimal mixing rate was then defined as $r_{\min}=1/t_{90}$.
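Both rates are simple to extract from a sampled curve $f(t)$. The following minimal sketch is our own illustration (\texttt{t} and \texttt{f} are assumed to be 1D arrays of equal length, with \texttt{f} already smoothed as described next):
\begin{verbatim}
import numpy as np

def mixing_rates(t, f):
    """Return (r_max, r_min) from the mixed-area-fraction curve f(t)."""
    r_max = np.max(np.abs(np.gradient(f, t)))  # fastest rate, max|df/dt|
    i90 = np.argmax(f >= 0.9 * f[-1])          # first sample past 90%
    return r_max, 1.0 / t[i90]                 # r_min = 1 / t_90
\end{verbatim}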
In both cases we averaged $f(t)$ over a small window to filter out small oscillations associated with the passage of tracers near saddles. \begin{figure}[b] \includegraphics[width=3in]{mixrate_vs_A.pdf} \caption{The rates of mixing for time-dependent flows as a function of $A$. The solid and dashed curves correspond, respectively, to the fastest time scale $r_{\max}$ and the slowest time scale $r_{\min}$.} \label{fvst} \end{figure} The fast time scale $r_{\max}$ is found to increase almost monotonically with $A$, reflecting the corresponding increase in the shear of the underlying flow. The slow time scale requires more care to interpret. In particular, for $P_1$ we find $r_{\min}$ to drop by almost an order of magnitude as $A$ increases. This decline is associated with the tracer distribution shown in Fig. \ref{P12-mixing}(b) slowly broadening in time, as illustrated by Fig. \ref{fracvst}(a). This broadening is due to a slow ``leak'' of tracers across a semi-penetrable transport barrier, creating a ``halo'' of tracers surrounding the main band. Another drop observed around the secondary Hopf bifurcation at $A\approx 0.818$ s$^{-2}$ is associated with a similar process for the quasi-periodic flow $QP$. As $A$ increases past this critical value, the transport barrier which exists for $P_2$ gets eroded, leading to a quick increase in $r_{\min}$. While many of our numerical results are quite intuitive, several findings raise questions. For instance, the flows $P_1$ and $P_3$ appear to be qualitatively quite similar. Both are stable and time-periodic and, with the choice of $A=0.428$ s$^{-2}$ for $P_1$, both have a time-dependent component of the same magnitude $\epsilon \approx 0.238$. Yet, despite these similarities, their mixing properties are radically different. $P_1$ is a very poor mixer, as Fig. \ref{P12-mixing}(b) illustrates. It is characterized by both a very low mixing rate and a very low mixed area fraction. In fact, $P_1$'s mixing properties are comparable to those of time-independent flows. $P_3$, on the other hand, is an extremely good mixer, almost as good as the aperiodic flows. The mixing rate for this flow is high and its mixed area fraction is close to unity. Another question concerns the islands surrounding positive or negative vortices that remain impenetrable for extremely long times for both the time-periodic flow $P_3$ (Fig. \ref{P3-mixing}) and the quasi-periodic flow $QP$ succeeding $P_2$ (Fig. \ref{QP-mixing}). In both cases there appear to be transport barriers surrounding vortices characterized by vorticity of one sign but not the other. This was also found to occur in the model flow of Danilov {\em et al.} \cite{Danilov2000} as well as in real oceanic flows \cite{haller2011}. \subsection{Lagrangian Coherent Structures} \label{s:LCS} \begin{figure} \centering (a) \includegraphics[width=3in]{FTLE_P1_plt.pdf} \\ (b) \includegraphics[width=3in]{FTLE_P2_plt.pdf} \\ (c) \includegraphics[width=3in]{FTLE_P3_plt.pdf} \caption{\label{ftlefig} Forward finite-time Lyapunov exponent field. (a) $P_1$ at $A=0.428$ s$^{-2}$ with $\tau=32$ s, (b) $P_2$ at $A=0.817$ s$^{-2}$ with $\tau=19$ s, and (c) $P_3$ at $A=0.875$ s$^{-2}$ with $\tau=22$ s.} \end{figure} Finite-time Lyapunov exponent (FTLE) fields associated with the time-dependent flows provide some intuition regarding their drastically different mixing properties.
The forward FTLE is a scalar quantity \begin{equation}\label{ftle} \sigma({\bf x}_0,\tau)=\frac{1}{\tau}\ln\left\|\frac{D{\bf x}(\tau)}{D{\bf x}_0}\right\|_2, \end{equation} which characterizes the amount of stretching along a trajectory ${\bf x}(t)$ passing through the point ${\bf x}_0$ at $t=0$ over a finite time interval $\tau$. In particular, the ridges of the forward FTLE field define Lagrangian Coherent Structures (LCS) \cite{shadden2005} which, for time-periodic flows, correspond to segments of unstable manifolds of saddle orbits with temporal period equal to that of the flow. As Fig. \ref{ftlefig} illustrates, for $P_1$ and $P_2$ the LCS show very little folding, effectively forming closed, compact curves. For $P_3$, on the other hand, the LCS display a lot of folding, which is a necessary ingredient for efficient mixing, and cover a substantial fraction of the total area. Indeed, we find the LCS of $P_1$ and $P_2$ are qualitatively similar to those of the steady flows from which they are born (i.e., $M$ and $N$), while the LCS of $P_3$ are qualitatively similar to those of aperiodic flows, which is consistent with the observed similarities in their mixing properties. LCS play an important role in organizing transport. For instance, placing the initial set of tracers on top of the saddle orbit, we should expect that set to be quickly stretched along the LCS, forming effectively one-dimensional structures for $P_1$ and $P_2$, while for $P_3$ the structure becomes effectively two-dimensional. Furthermore, the LCS form transport barriers which cannot be crossed by the tracers. For $P_1$ and $P_2$ (as well as for steady flows), these transport barriers are closed, effectively partitioning the domain and preventing mixing between regions separated by the LCS. For $P_3$ (as well as for aperiodic flows), the transport barriers are open, enabling transport and mixing across the whole domain. The LCS-based description of transport is consistent with our long-term numerical advection calculations and has the advantage that it requires time-integration over a considerably shorter time interval (a fraction of the temporal period $T$ of the flow, compared with hundreds to thousands of periods for numerical advection calculations). However, neither approach explains {\em why} the mixing properties of the time-periodic flows are so dramatically different. A more insightful approach is discussed next. \subsection{Resonance Phenomena} \label{s:resonance} As we mentioned previously, the area-preserving time-periodic flows $P_1$, $P_2$, and $P_3$ can be treated formally as a perturbed Hamiltonian system (\ref{tracers}), with the stream function (\ref{perturbation}) playing the role of the Hamiltonian. In particular, $\Psi_0$ plays the role of the unperturbed Hamiltonian and $\epsilon\Psi_1$ that of the time-periodic perturbation. Transport in near-integrable time-periodic Hamiltonian systems and area-preserving flows has been studied extensively. It is well understood that, for weak perturbations, chaotic trajectories emerge in the neighborhood of the homo- or heteroclinic manifolds of saddle orbits of the integrable unperturbed, or base, flow. Under the imposed perturbation, the stable and unstable manifolds intersect transversally, forming a homoclinic tangle whose lobe dynamics \cite{Rom-Kedar1990} provides an insightful, albeit computationally challenging, description of mixing in the separatrix chaotic layer (SCL).
Not only is the computation of the width of the SCL an intractable problem for any realistic flow, but the width of the SCL also significantly underestimates the size of the actual chaotic domain for finite values of $\epsilon$. According to the KAM theory \cite{kolmogorov1954,arnold1965,moser1962}, in the presence of a perturbation, resonant tori of the unperturbed flow (tori whose frequency $\omega_0(\Psi_0)$ is in a rational ratio with the frequency of the perturbation $\omega_1=2\pi/T$) break up, forming chains of elliptic and hyperbolic time-periodic orbits (or stream lines) with their own sets of self-intersecting stable and unstable manifolds generating resonant chaotic layers (RCL). These RCLs can overlap with the SCL, making the chaotic domain much broader. The dynamics away from the separatrix can be described by computing the change in the value of $\Psi_0$ over an interval of time $(0,t_f)$. By analogy with the derivation of Melnikov's function, we can use (\ref{perturbation}) and (\ref{tracers}) to show that \begin{equation}\label{dpsi} \Psi_0(t_f)-\Psi_0(0)=\epsilon\int_{0}^{t_f}v_0({\bf x}(t))v^\perp_1({\bf x}(t),t)dt, \end{equation} where ${\bf v}_i=(\partial_y\Psi_i,-\partial_x\Psi_i)$ and the superscript $\perp$ denotes the component normal to the stream line of the unperturbed flow. The velocity field describing a time-periodic perturbation can be written in the form of a Fourier series \begin{equation}\label{fourier1} v_1^\perp({\bf x},t)=\sum_kg({\bf x},\omega_k)e^{-i\omega_k t}, \end{equation} where we have defined $\omega_k\equiv k\omega_1$. Furthermore, the product $v_0({\bf x}(t))g({\bf x},\omega_k)$ is also a time-periodic function with period $T_0=2\pi/\omega_0$ and can be written as a Fourier series \begin{equation}\label{fourier2} v_0({\bf x}(t))g({\bf x},\omega_k)=\sum_mG_{k,m}e^{im\omega_0 t}. \end{equation} Substituting (\ref{fourier1}) and (\ref{fourier2}) into (\ref{dpsi}) we find that, over a time interval $t_f\gg\max(T,T_0)$, the rate of change \begin{equation}\label{rate} \frac{\Psi_0(t_f)-\Psi_0(0)}{t_f}=\sum_{k,m} G_{k,m}\frac{\epsilon}{t_f} \int_{0}^{t_f}e^{i(m\omega_0-\omega_k)t}dt \end{equation} vanishes unless $\omega_0/\omega_1=k/m$ for some integers $m$ and $k$. In what follows, we take $m$ and $k$ to be positive. \begin{figure}[t] \hspace{2.5mm}\includegraphics[width=3.1in]{v0vst_P1.pdf} \caption{Velocity magnitude for a typical stream line near a separatrix of the base flow of $P_1$.} \label{velocity} \end{figure} The Fourier coefficient $G_{k,m}$ also controls the width $W^l_{k,m}$ of the RCL which replaces the stream line of the unperturbed flow with frequency $\omega_0=\omega_k/m$. For stream lines close to the separatrix $\Psi_0(x,y)=\psi_l$, $v_0({\bf x}(t))$ is small everywhere except for short time intervals corresponding to fast motion away from the saddles (see Fig. \ref{velocity}), so we can estimate \begin{eqnarray}\label{fourier3} |G_{k,m}|&=&\left|\frac{1}{T_0}\int_0^{T_0} v_0({\bf x}(t))g({\bf x}(t),\omega_k) e^{-im\omega_0 t}dt\right|\nonumber\\ &\approx&\frac{s_l}{2\pi}\frac{\omega_k|g({\bf x}_l^*,\omega_k)|}{m}, \end{eqnarray} where $s_l$ is the length of the separatrix, \begin{equation} g({\bf x}_l^*,\omega_k)=\frac{1}{T}\int_0^{T} v_1^\perp({\bf x}_l^*,t)e^{i\omega_k t}\,dt \end{equation} is the (discrete) Fourier spectrum of $v_1^\perp({\bf x}_l^*,t)$, and ${\bf x}_l^*$ is the point on the separatrix which lies midway between the saddles (for which $v_0({\bf x}(t))$ is near its maximum).
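The selection rule expressed by (\ref{rate}) is easy to verify numerically: the time average of $e^{i(m\omega_0-\omega_k)t}$ equals unity on resonance and decays as $1/t_f$ off resonance. The following minimal sketch is our own illustration, with the torus frequency chosen arbitrarily as $\omega_0=\omega_1/3$:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 5.0e3, 200001)    # averaging window t_f >> T, T_0
w1 = 0.0172                            # perturbation frequency (P_1 value)
w0 = w1 / 3.0                          # torus frequency: ratio k/m = 1/3
for k, m in [(1, 3), (1, 2)]:          # resonant iff w0/w1 = k/m
    avg = np.exp(1j * (m * w0 - k * w1) * t).mean()
    print(k, m, abs(avg))
# (k,m)=(1,3) gives |avg| = 1; (1,2) gives |avg| ~ 0.07, -> 0 as t_f grows
\end{verbatim}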
If there is more than one saddle on the separatrix, the estimate (\ref{fourier3}) should be generalized to include the respective contributions from all segments, which can either enhance or suppress each other. \begin{figure*} \centering (a)\hspace{-5mm} \includegraphics[width=3in]{Omega_vs_psi_P1.pdf}\hspace{5mm} (b)\hspace{-5mm} \includegraphics[width=3in]{Omega_vs_psi_P2.pdf} \\ (c)\hspace{-5mm} \includegraphics[width=3in]{Omega_vs_psi_P3.pdf}\hspace{5mm} (d)\hspace{-5mm} \includegraphics[width=3in]{Omega_vs_psi_QP.pdf} \caption{\label{ompsi} Frequency of the motion along the stream lines of the base flow for (a) $P_1$ at $A=0.428$ s$^{-2}$, (b) $P_2$ at $A=0.817$ s$^{-2}$, (c) $P_3$ at $A=0.875$ s$^{-2}$ and (d) $QP$ at $A=0.820$ s$^{-2}$. The black portion corresponds to the chaotic domain around the separatrix $\Psi_0=0$ seeded with the tracers, while the gray portion corresponds to the regular (as well as chaotic) domains without tracers. Horizontal lines show the frequencies of the overlapping dominant RCLs.} \end{figure*} The period of the unperturbed motion along the stream line lying near the same separatrix is given by \begin{equation}\label{period} T_0(\Psi_0)=-\sum_i\lambda_{l,i}^{-1}\ln\frac{|\Psi_0-\psi_l|}{\xi_l}, \end{equation} where $\lambda_{l,i}$ are the positive eigenvalues of all the saddles on the separatrix and $\xi_l$ is a constant. Hence, the distance (in terms of $\Psi_0$) from the separatrix to the nearest $k$:$m$ resonant torus is exponentially small for low $k$: \begin{equation} |\Psi_0-\psi_l|=\xi_l\exp\left(-\frac{\chi_l}{\omega_0}\right)=\xi_l\exp\left(-\frac{m\chi_l}{k\omega_1}\right), \end{equation} where $\chi_l^{-1}\equiv\sum_i\lambda_{l,i}^{-1}/2\pi$. In a similar fashion we can compute the distance between various resonant tori. In particular, the distance between the tori with frequency ratios $k$:1 and $(k+1)$:1 is given by \begin{equation} S_l(\omega_k)\approx\left|\frac{d\Psi_0}{d\omega_0}\right|\omega_1\approx \frac{\xi_l\chi_l\omega_1}{\omega_k^2}\exp\left(-\frac{\chi_l}{\omega_k}\right). \end{equation} According to (\ref{fourier3}), for moderate $k$, $|G_{k,m}|$ takes the largest values for $m=1$; hence the width of the dominant ($k$:1) RCLs can be estimated from (\ref{rate}): \begin{equation} W_l(\omega_k)\approx\epsilon \frac{T}{2} |G_{k,1}|\approx\frac{\epsilon s_l}{2}\frac{\omega_k}{\omega_1}|g({\bf x}_l^*,\omega_k)|. \end{equation} Comparing the widths of the RCLs with the distances $S_l(\omega_k)$ between the neighboring resonant tori, we can determine which RCLs overlap and which do not for a particular strength of the perturbation. For moderate $\epsilon$, we can expect several RCLs with low $k$ to overlap with each other and with the SCL, since $S_l(\omega_k)$ is exponentially small, while $W_l(\omega_k)$ scales as a power of $\omega_k$ near a separatrix. More specifically, the region where $W_l(\omega_k)>S_l(\omega_k)$ is expected to be well mixed, while in the region where $W_l(\omega_k)<S_l(\omega_k)$ mixing is expected to be limited to narrow RCLs of width $W_l(\omega_k)$ (as well as some even narrower RCLs corresponding to $m>1$). The boundaries of the main chaotic region should be determined by the regular tori with frequency \begin{equation} \omega_0(\Psi_0(x,y))\approx \omega_{k_\pm}+\frac{W_l(\omega_{k_\pm})}{2}\left|\frac{d\omega_0}{d\Psi_0}\right|_{\omega_{k_\pm}}.
\end{equation} The value $k_\pm$ on each side is different and corresponds to the outermost overlapping RCL, i.e., it is the largest integer $k_\pm$ such that $S_l(\omega_k)<W_l(\omega_k)$ for all $k<k_\pm$. \begin{figure*} \centering (a)\hspace{-5mm} \includegraphics[width=3in]{delpsi_P1.pdf}\hspace{5mm} (b)\hspace{-5mm} \includegraphics[width=3in]{delpsi_P2.pdf} \\ (c)\hspace{-5mm} \includegraphics[width=3in]{delpsi_P3.pdf}\hspace{5mm} (d)\hspace{-5mm} \includegraphics[width=3in]{delpsi_QP.pdf} \caption{\label{pwrfreq} The widths $W_l(\omega)$ of resonant chaotic layers (solid line) and the spacings $S_l(\omega)$ between the corresponding resonant tori (dashed and dotted lines) on different sides of the separatrix $\Psi_0=0$. (a) $P_1$ at $A=0.428$ s$^{-2}$ with $\omega_1=0.0172$ s$^{-1}$, (b) $P_2$ at $A=0.817$ s$^{-2}$ with $\omega_1=0.0477$ s$^{-1}$, (c) $P_3$ at $A=0.875$ s$^{-2}$ with $\omega_1=0.0666$ s$^{-1}$, (d) $QP$ at $A=0.820$ s$^{-2}$ with $\omega_1\approx0.051$ s$^{-1}$.} \end{figure*} Consider, for example, the flow $P_2$. Fig. \ref{pwrfreq}(b) shows the width of resonant chaotic layers and the spacing between the resonant tori on both sides of the separatrix of the saddle seeded with tracers (which corresponds to $\Psi_0=0$). For low values of $k$ we do indeed find $W_l(\omega_k)>S_l(\omega_k)$, so the separatrix chaotic layer and the RCLs of a few nearby $k$:1 resonant tori overlap, forming a single chaotic domain. $W_l(\omega_k)$ decreases quickly (in fact, exponentially fast) with $\omega_k$, while $S_l(\omega_k)$ increases on both sides of the separatrix, so that the number of overlapping RCLs is rather low ($k_-=3$ for $\Psi_0<0$ and $k_+=4$ for $\Psi_0>0$), making the width of the chaotic domain (the black region in Fig. \ref{ompsi}(b)) extremely small, in good agreement with the numerical result shown in Fig. \ref{P12-mixing}(d). The situation is similar for $P_1$. Fig. \ref{pwrfreq}(a) shows that the number of overlapping RCLs is somewhat larger for $P_1$ ($k_-=k_+=11$), but still small enough for the chaotic domain formed by the overlapping resonant and separatrix chaotic layers to remain quite thin (see Fig. \ref{ompsi}(a)). This is again in agreement with the numerical result shown in Fig. \ref{P12-mixing}(b). A ``halo'' of tracers outside of the well mixed region suggests that the outermost RCL just touches its neighbor on the side of the separatrix, with the two separated by a semi-penetrable transport barrier formed by either a narrow high-order RCL with very small $G_{k,m}$ or by a cantorus \cite{mackay1984}, which allows a very slow ``leak'' of tracers into the outermost RCL. While $P_1$ and $P_2$ are almost monochromatic, the Fourier spectrum $g({\bf x}^*,\omega_k)$ of $P_3$ is exceptionally broad, with a large number of harmonics $\omega_k$ that have amplitudes comparable to that of the base frequency $\omega_1$. As a result, we find that RCLs remain fairly broad even for large values of $k$ (see Fig. \ref{pwrfreq}(c)). Consequently, the chaotic domain for $P_3$ is composed of a large number of overlapping RCLs which allow transport across most of the $\Psi_0$ range (that is, across most of the physical space). Since in this case the boundaries of the chaotic domain are very far from the separatrix of the saddle seeded with tracers, we computed the spacing $S_l(\omega_k)$ between the resonant tori corresponding to the two outermost branches of the $\omega_0(\Psi_0)$ curve which extend to the extremal values of $\Psi_0$ corresponding to clockwise and counterclockwise vortices.
As Fig. \ref{ompsi}(c) illustrates, $W_l(\omega_k)>S_l(\omega_k)$ for all $k$ for both the leftmost and the rightmost branch. Hence, we should expect all of the RCLs to overlap, allowing global transport. However, the last RCL surrounding the clockwise vortex is not wide enough to cover the whole range of $\Psi_0$, leaving a small regular island around each of the four vortices centered at $\Psi_0(x,y)=1.058$, which is also in agreement with the numerical result shown in Fig. \ref{P3-mixing}(b). The mixing properties of the quasi-periodic flow $QP$ can also be understood by analyzing the widths of RCLs. The time-averaged flow for $QP$ is essentially the same as that for $P_2$, hence we can use the same frequency curve $\omega_0(\Psi_0)$. The spectrum $g({\bf x}^*,\omega)$ of the quasi-periodic perturbation is discrete, just like the spectra of the periodic flows, with frequencies of the peaks that can be labeled $\omega_k$, with integer $k$. Although the spacing $\omega_{k+1}-\omega_k$ between the peaks is not exactly constant, it varies little about the average $\omega_1\approx 0.051$ s$^{-1}$, as Fig. \ref{pwrfreq}(d) illustrates. Comparison of $S_l(\omega_k)$ and $W_l(\omega_k)$ shows that RCLs with $k\le 9$ overlap. In addition, the RCLs of the tori 13:1, 14:1, 15:1, and 16:1 also overlap. Although the widths of the 10:1, 11:1, and 12:1 RCLs are smaller than the spacings between the corresponding tori, the 9:1 and 13:1 RCLs are more than twice as wide as the largest spacing between neighboring tori in this ``gap'', which means that the 9:1 and 13:1 RCLs overlap directly. This indicates that we should have global transport in the region of $\Psi_0$ where $\omega_0(\Psi_0)<\omega_{16}+W_l(\omega_{16})|d\omega_0/d\Psi_0|/2\approx 0.85$ s$^{-1}$. According to Fig. \ref{ompsi}(d), this corresponds to almost the entire physical domain, with the exception of regular islands around all the vortices. Although this prediction is {\em not} in perfect agreement with the numerical result, the description in terms of interacting resonances captures most of the features of the asymptotic tracer distribution shown in Fig. \ref{QP-mixing}(b). Both global transport and the regular islands around the clockwise vortices are predicted correctly. The size of the regular islands is predicted to be much larger than for $P_3$, which is also consistent with numerics. The resonant description also predicts the formation of regular islands around the counterclockwise vortices, which are instead filled with tracers in Fig. \ref{QP-mixing}; this shows a limitation of our analytical description. However, the discrepancies at high frequencies are expected, given the fact that the expression (\ref{fourier3}) for the Fourier coefficients $G_{k,m}$ (and hence the widths $W_l(\omega_k)$) was obtained in the limit of low frequencies $\omega_k$. For higher frequencies corresponding to the neighborhood of the vortices, (\ref{fourier3}) becomes inaccurate and $G_{k,m}$ has to be computed in a different way. \section{Summary and Conclusion} \label{s:summary} To summarize, we have described the transition from the laminar Kolmogorov flow to turbulence in a doubly-periodic domain of relatively small size. The sequence of bifurcations preceding turbulence is quite rich, with several different steady and time-periodic flows succeeding one another. This bifurcation sequence is quite sensitive to the choice of parameters ($\alpha$, $\beta$, $\nu$, the system size $L_x\times L_y$), although the actual transition to turbulence follows one of two standard routes.
In one case we find that turbulence emerges through a sequence of several Hopf bifurcations, commonly referred to as the Ruelle-Takens-Newhouse scenario \cite{ruelle1971,newhouse1978}. In the other, a sub-critical bifurcation leads to intermittency, as in the Pomeau-Manneville scenario \cite{pomeau1980}. The details of the bifurcation sequence, however, are quite important in describing the evolution of the transport properties of the flow. As a general trend, we find the mixing efficiency (defined either in terms of the mixed area fraction or in terms of the mixing rate) to improve as the forcing is increased, with steady flows being the worst mixers and turbulent flows the best. However, the complexity of the flow does not increase monotonically with the forcing, and neither does the mixing efficiency. Nor is the mixing efficiency directly related to the complexity of the flow, as the comparison of the three different time-periodic flows showed. Furthermore, time-periodic flows such as $P_3$ can rival the mixing efficiency of turbulent flows. The most unexpected result was that the mixed area fraction of a class of time-periodic and quasi-periodic flows can be described, rather accurately, by a perturbative approach. This is despite the fact that none of the flows considered here is actually weakly perturbed. The description is based on the idea of multiple overlapping resonances, which define well-mixed regions of the flow. In particular, our results confirm the idea of Soskin and Mannella \cite{soskin2009} that resonances play an important role in defining the width of (and dynamics inside) the separatrix chaotic layer. Although the flow domain is densely covered by an infinite number of resonant tori, only the dominant resonances (e.g., ones that correspond to the harmonics of the frequency of the perturbation) play an important role. As a general rule of thumb, we find the flows with the broadest Fourier spectrum to possess the best mixing properties, while (nearly) monochromatic flows have mixing properties comparable to those of steady flows. \section*{Acknowledgements} This material is based upon work supported in part by the National Science Foundation under Grants No. CBET-0900018 and CMMI-1234436.
\section{Introduction} \label{sec:model} Random matrix models \cite{Wigner, thooft-planar, Brezin:1990rb,Douglas:1989ve,Gross:1989vs} were first introduced by Wigner in the context of nuclear physics, and have since then proven to be an essential tool in modern physics and mathematics, with applications in quantum chromodynamics, disordered systems, 2D quantum gravity, quantum information, combinatorics of discrete surfaces, free probability, and so on \cite{2DgravityReview, oxford-rand-mat, book-rand-mat-2}. Our main interest in this paper is to study a new kind of random one-matrix model defined by the following partition function, \[ Z_{N,R}(\lambda, k) := \int_{\mathbb{R}^{NR}} d\phi \exp \left( -\lambda U(\phi)-k \mathrm{Tr}\phi\phi^t \right), \label{eq:integral} \] where the integration is done over matrices $\phi$ with real coefficients $\phi_a^i\in \mathbb{R}\ (a=1,2,\ldots,N,\ i=1,2,\ldots,R)$, $d \phi:=\prod_{i=1}^R \prod_{a=1}^N d\phi_a^i$, the Gaussian part is $\mathrm{Tr}\phi\phi^t = \sum_{i=1}^R \sum_{a=1}^{N}\phi^i_a\phi_a^i$, and the interaction term is \begin{equation} U(\phi) = \sum_{i,j=1}^R \left(\sum_{a=1}^{N}\phi^i_a\phi^j_a \right)^3= \sum_{a,b,c=1}^N \sum_{i,j=1}^R \phi_a^i \phi_b^i \phi_c^i \phi_a^j \phi_b^j \phi_c^j. \label{eq:interaction} \end{equation} The parameters $k$ and $\lambda$ can be real or complex, depending on the specific problem considered. Random matrix models are usually defined using trace invariants and matrix products, for which the indices of the matrices are contracted (summed) pairwise. The archetypal example of a one-matrix model is obtained for interactions of the form $\tilde U(\phi) = \mathrm{Tr}\bigl((\phi\phi^t)^p\bigr)$. Instead, while the lower indices are contracted pairwise in the interaction \eqref{eq:interaction} we consider, the upper indices, $i,j\in\{1,2,\ldots,R\}$, do not appear pairwise. In our study, we will consider both square matrices ($R=N$) and rectangular matrices. Rectangular random matrix models were considered and then systematically analyzed in \cite{Anderson:1990nw,Anderson:1991ku,Myers:1992dq}, extending the celebrated double scaling limits of matrix models \cite{Brezin:1990rb,Douglas:1989ve,Gross:1989vs}. See also \cite{Francesco-rect} and references therein. In the large matrix size limit, rectangular random matrix models interpolate between the behavior of branched polymers (involving Feynman graphs with a tree-like filamentary structure) and that of two-dimensional quantum gravity (involving planar Feynman ribbon graphs). An important step in solving these models was to diagonalize the rectangular matrix by using the Lie-group symmetries on the matrix indices. On the other hand, the present model \eq{eq:integral} respects only the discrete permutation symmetry\footnote{Namely, reorderings of the labels $i\in\{1,2,\ldots,R\}$.} on the upper index, while it respects the orthogonal symmetry on the lower indices. Moreover, the usual pairwise contraction pattern allows for the 't Hooft expansion \cite{thooft-planar} over ribbon graphs, discrete surfaces classified according to their genera, where the contribution in the matrix sizes of a graph is given in terms of closed loops called faces \cite{2DgravityReview, Francesco-rect}. With the non-pairwise contraction pattern in \eqref{eq:interaction} we lose this combinatorial structure and the expansion over random discretized surfaces.
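To make the contrast explicit, here is a one-line check (using the convention $(\phi\phi^t)_{ab}=\sum_{i=1}^R\phi_a^i\phi_b^i$, which follows from the definition of $\mathrm{Tr}\phi\phi^t$ above) that lowering the power in \eqref{eq:interaction} from 3 to 2 restores a pairwise contraction of the upper indices, and hence an ordinary trace invariant:
\[
\sum_{i,j=1}^R \Bigl(\sum_{a=1}^N\phi_a^i\phi_a^j\Bigr)^2=\sum_{a,b=1}^N \sum_{i,j=1}^R \phi_a^i \phi_b^i \phi_a^j \phi_b^j =\mathrm{Tr}\bigl((\phi\phi^t)^2\bigr).
\]
No such rewriting exists for the cubic power, where each upper index appears three times.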
Because of the differences in the symmetry and combinatorial structure, we expect the present model \eq{eq:integral} to behave differently from the usual square and rectangular random matrix models. It is also challenging to analyze the present model with this lack of symmetry and without the topological expansion over discrete surfaces. Due to this lack of symmetry, the present model, \eq{eq:integral} with \eq{eq:interaction}, can be seen as a random vector model with multiple vectors, $\phi^i\in \mathbb{R}^N\ (i=1,2,\ldots,R)$. In the usual solvable settings of the vector models \cite{Nishigaki:1990sk,DiVecchia:1991vu}, however, there are independent Lie-group symmetries for each vector, and the interactions can be chosen rather arbitrarily among the invariants made of these vectors. On the other hand, our present model has more restrictive characteristics: there is only a single common Lie-group symmetry\footnote{Such a model was written down as (2.14) in the paper \cite{Nishigaki:1990sk}. However, the model was not solved.}, the vectors are equivalent to each other under the permutation symmetry, and the interaction has the particular form with non-pairwise index contractions. Therefore, we would expect that our model defines a specific type of vector model with some interesting characteristic properties. In a sense, the present model is in-between the matrix and vector models, and in fact, by just changing the power of the interaction term in \eq{eq:interaction} from 3 to 2, we recover, as checked explicitly above, the usual $\mathrm{Tr}\bigl((\phi\phi^t)^2\bigr)$ rectangular random matrix model. As a matter of fact, an expression very similar to \eq{eq:integral} has already been discussed in the context of spin glasses in physics, for the $p$-spin spherical model \cite{pspin, pedestrians}. The model has spherical coordinates as degrees of freedom, and considers random couplings among them to model the spin glass. An expression of the form \eq{eq:integral} appears after integrating out the random couplings under the replica trick. However, there are some differences from our case: there exists a constraint $\sum_{a=1}^N \phi_a^i \phi_a^i= \mathrm{const}$, corresponding to spherical coordinates; $\lambda$ is negative, while it should be positive for the convergence of \eq{eq:integral} (or should have a positive real part); the limit $R\rightarrow 0$ is taken in applying the replica trick. Because of these rather non-trivial differences, we would expect new outcomes with respect to the previous studies. Note that the case where $R$ is kept finite while $N$ is taken to be large in \eq{eq:integral} could have an application for systems with a finite number of ``real'' replicas \cite{MezPar, pedestrians}. One of our motivations to initiate the study of the model \eq{eq:integral} is to investigate the properties of the wave function \cite{Narain:2014cya,Obster:2017dhx} of a tensor model \cite{Ambjorn:1990ge,Sasakura:1990fs,Godfrey:1990dt} in the Hamiltonian formalism \cite{Sasakura:2011sq,Sasakura:2012fb}, which is studied in a quantum gravity context. The expression \eq{eq:integral} can be obtained after integrating over the tensor argument of the wave function of the toy model introduced in \cite{Obster:2017pdq}, which is closely related to this tensor model. The details will be explained in Section~\ref{sec:limit}. As another potential application, we can consider randomly connected tensor networks \cite{Sasakura:2014zwa,Sasakura:2014yoa} with random tensors.
It would also be possible to obtain \eq{eq:integral} by considering a random coupling vector model, or a bosonic, timeless analogue of the SYK model \cite{SYKSY,SYKK}. Indeed, introducing $R$ replicas in such a model, we obtain \[ Z_{N,R}(\lambda, k) = \int dP e^{-\frac 1 2 \sum_{abc=1}^N P_{abc}^2}\Bigl(\int_{{\mathbb{R}}^N}d\phi e^{ - k \sum_{a=1}^N \phi_a^2 - I\sqrt{2\lambda } \sum_{a,b,c=1}^N P_{abc} \phi_a \phi_b \phi_c }\Bigr)^R, \] (where here the $\phi$ are vectors and $d\phi = \prod_{a=1}^N d\phi_a$), from which we recover \eq{eq:integral} by performing the Gaussian integral over the random tensor $P$: completing the square produces, up to an irrelevant normalization, the factor $\exp\bigl(-\lambda \sum_{a,b,c=1}^N \bigl(\sum_{i=1}^R \phi^i_a\phi^i_b\phi^i_c\bigr)^2\bigr)=\exp\bigl(-\lambda U(\phi)\bigr)$. In fact, as detailed in this paper, the Feynman diagrammatic expansions of vector models with random couplings such as the SYK model with a finite number of replicas are still dominated by the celebrated melonic diagrams \cite{melons1, melons2, melons3} when the size of the system is large. We will show that this dominance still holds when the number of replicas is large, as long as the latter does not exceed the size of the system.\footnote{The results we obtain concerning dominant Feynman graphs should still apply to models with a time dependence.} This paper is organized as follows. In Section~\ref{sec:graph}, we describe the Feynman graph expansion of the partition function \eqref{eq:integral}. We identify the graphs for which the dependence in $N$ and $R$ is the strongest when the number of interactions is fixed in the following different regimes: $N$ large and $R$ finite, $R$ large and $N$ finite, and $R\sim N^\alpha$ with $\alpha \in (0, +\infty)$. In Section~\ref{sec:convergent}, we develop a method to treat the model in terms of a convergent series by separating the integration variables of \eq{eq:integral} into the angular and radial parts. We apply the method to study the properties of the wave function of the toy model introduced in \cite{Obster:2017pdq}, which is closely related to the tensor model mentioned above. The last section is devoted to a summary and future prospects. \section{Graphical expansion and dominant graphs for the different regimes } \label{sec:graph} We consider the normalized partition function \begin{equation} \label{eq:Part-Funct} {\mathcal Z}_{N,R}(\lambda, k) =\Bigl(\frac {k}{\pi}\Bigr)^{\frac{NR}2} \int_{{\mathbb{R}}^{NR}} d\phi e^{-\lambda U(\phi) - k \mathrm{Tr} (\phi \phi^t)}, \end{equation} where $\mathrm{Tr}(\phi \phi^t) = \sum_{a=1}^N \sum_{i=1}^R \phi_a^i\phi_a^i$, and where the interaction $U(\phi)$ is not a usual trace invariant, but instead has non-pairwise contracted indices, \begin{equation} \label{eq:Interaction} U(\phi) = \sum_{i,j=1}^R \Bigl(\sum_{a=1}^N \phi_a^i \phi_a^j\Bigr)^3 = \sum_{a,b,c=1}^N \sum_{i,j=1}^R \phi_a^i \phi_b^i \phi_c^i \phi_a^j \phi_b^j \phi_c^j. \end{equation} This partition function is indeed normalized, as $\int_{{\mathbb{R}}^{NR}} d\phi e^{ - k \mathrm{Tr} (\phi \phi^t)} = (\frac {\pi}{k})^{\frac{NR}2}$. We represent graphically the contraction pattern of the interaction \eqref{eq:Interaction} in Fig.~\ref{fig:intvertex}. Each matrix $\phi$ is associated with a vertex, with two half-edges\footnote{An edge between two vertices is divided in two parts, which correspond to the neighborhoods of the two vertices. We call these parts half-edges.} attached: a dotted half-edge representing the lower index (summed from 1 to $N$), and a solid half-edge representing the upper index (summed from 1 to $R$).
The dotted half-edges are associated pairwise, representing the summation of the indices $a,b,c$ in \eqref{eq:Interaction}, while the solid half-edges are attached to trivalent nodes, representing the summation of the indices $i,j$ in \eqref{eq:Interaction}. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.7]{vertex2.pdf} \caption{Graphical representation of an interaction $\sum_{i,j=1}^R (\sum_{a=1}^N \phi^i_a\phi^j_a)^3$. The variables $\phi_a^i$ are located at each connection point between the solid and the dotted lines. The solid lines represent the contractions of the $i$ and $j$ indices, while the dotted lines represent the contractions of the lower indices. } \label{fig:intvertex} \end{center} \end{figure} We consider the {\it formal} expansion of the partition function in powers of the coupling constant $\lambda$. It is formally obtained by expanding the exponential of the interaction \eqref{eq:Interaction} in \eqref{eq:Part-Funct}, by exchanging the sum and the integral, and by applying Wick's theorem to compute Gaussian expectation values of products of \eqref{eq:Interaction} of the form \begin{equation} \langle U(\phi) ^n \rangle_0 = \Bigl(\frac {k}{\pi}\Bigr)^{\frac{NR}2} \int d\phi\, U(\phi) ^n e^{-k \mathrm{Tr} (\phi \phi^t)}. \end{equation} This way, the partition function is formally expressed as \begin{equation} \label{eq:Part-Func-Exp-1} {\mathcal Z}_{N,R}(\lambda, k) = \sum_{n \ge 0} z_n(N,R,k)(- \lambda)^n, \qquad z_n(N,R,k) = \frac 1 {n!} \langle U(\phi) ^n \rangle_0. \end{equation} By changing variables $\phi' = \sqrt{2k} \phi$, we see that $z_n(N,R,k) = z'_n(N,R)/{(8k^3)^n}$. In the present section, we identify the dominant term in $z_n(N,R,k)$ for different regimes of large $N$ and $R$. \subsection{Feynman graphs} \label{sub:Feyn} Applying Wick's theorem, $\langle U(\phi) ^n \rangle_0$ is computed in the standard way by summing over all possible ways to pair the $6n$ matrices involved, and by replacing the paired matrices with the Gaussian covariance \begin{equation} \label{eq:eachwick} \langle \phi^i_a \phi^j_b\rangle_0 = \frac 1 {2k} \delta_{ij}\delta_{ab}. \end{equation} This can be expressed graphically using sums over graphs as follows: the $n$ interactions $U(\phi)$ are each represented as in Fig.~\ref{fig:intvertex}, and contribute with a factor $(-\lambda)$, while the Wick pairings (the propagators) are represented by new thin edges between pairs of matrices, which identify the indices corresponding to the dotted and the solid edges, and contribute with a factor $1/2k$. We therefore have graphs with three kinds of edges (dotted, solid, and thin), such that we recover $n$ copies of the graph in Fig.~\ref{fig:intvertex} when the thin edges are deleted. We denote by ${\mathbb{G}}(n)$ the set of such graphs, and by ${\mathbb{G}}$ the set of graphs with any positive number of interactions. Similarly, we denote by ${\mathbb{G}}_c(n)$ and ${\mathbb{G}}_c$ the subsets of connected graphs in ${\mathbb{G}}(n)$ and ${\mathbb{G}}$. An example of a graph in ${\mathbb{G}}_c(3)$ is represented in Fig.~\ref{fig:contraction}. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.5]{Ex-Fey-GR.pdf} \caption{An example of a connected Feynman graph with three interactions. Wick contractions are represented by the thin lines.} \label{fig:contraction} \end{center} \end{figure} As for usual matrix models, the sums of Kronecker deltas corresponding to the indices contracted pairwise in \eqref{eq:Interaction} yield a factor of $N$ for each free sum on the lower index of the matrices.
In our representation, these free sums correspond to the connected subgraphs obtained when only the dotted and thin edges are kept, while the solid edges are deleted. These subgraphs are loops called \textit{dotted faces}\footnote{It is a common denomination in random matrix and tensor models to call such loops faces.}. In Fig.~\ref{fig:contraction} for instance, there are four dotted faces, represented on the left of Fig.~\ref{fig:faces}, thus a contribution of $N^4$ for this graph. In the present case, however, the contraction patterns of the upper indices corresponding to the solid edges are more complicated: we still get a factor of $R$ for every connected subgraph with only solid and thin edges, but now such subgraphs are no longer loops as they have nodes of valency three, as shown on the right of Fig.~\ref{fig:faces}. Even though these subgraphs are not loops, we call them {\it solid faces}. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.5]{Ex-Fey-GR-dotted.pdf}\hspace{2cm}\includegraphics[scale=0.5]{Ex-Fey-GR-solid.pdf} \caption{Faces for the graph of Fig.~\ref{fig:contraction}. On the left there are four dotted faces, representing the four free sums of the lower indices generated by the Wick contractions, each of which contributes with a factor of $N$. There is a single solid face, represented on the right, thus one free sum for the upper indices, which generates a factor $R$. Hence, the total weight in $N,R$ of this Feynman graph is $N^4R$.} \label{fig:faces} \end{center} \end{figure} We denote by $\Fd$ (resp.~$\Fs$) the number of dotted (resp.~solid) faces. For the graph of Fig.~\ref{fig:contraction}, we thus have $\Fd=4$ and $\Fs=1$. Then, using Wick's theorem, the expectation values $ \langle U(\phi) ^n \rangle_0$ are expressed as \begin{equation} \langle U(\phi) ^n \rangle_0 =\Bigl(\frac{1 }{8k^3}\Bigr)^{n} \sum_{G\in {\mathbb{G}}(n)} m(G) N^{\Fd(G)} R^{\Fs(G)}, \end{equation} where we have used the fact that the number of thin edges is $3n$, and where $m(G)$ is the multiplicity of the graph $G$, defined as the number of occurrences of $G$ when adding the thin edges in all possible ways for the $n(G)$ interactions. Inserting this in \eqref{eq:Part-Func-Exp-1}, the partition function is formally expressed as an expansion indexed by Feynman graphs. \ \begin{figure}[h!] \begin{center} \includegraphics[width=12cm]{One-Int-Diag.pdf} \caption{The list of Feynman graphs in ${\mathbb{G}}(1)$. For later convenience, they are gathered in two groups, separated by a comma.} \label{fig:neq1} \end{center} \end{figure} In Figure~\ref{fig:neq1}, all the possible one-interaction graphs are drawn. From the graphs, one can easily compute the weights coming from the free sums over the indices. One also has to take into account the multiplicities $m(G)$ of the graphs. For instance, the contribution of the first graph of Figure~\ref{fig:neq1} to the partition function can be computed as $N^3 R(-\lambda/(2k)^3)$, i.e., it contributes $N^3R(2k)^{-3}$ to $z_1$. Computing similarly for the other graphs, the contributions of these graphs lead to \[ z_1(N,R,k)=\left(N^3R+3 N^2 R+2 NR+3N^2R+6NR\right)(2k)^{-3}. \] The terms are ordered in the same way as the graphs appear in Figure~\ref{fig:neq1}. \ In practice, ${\mathcal Z}_{N,R}$ is instead computed by exponentiating the \textit{free-energy}\footnote{In this paper, we call $\log {\cal Z}_{N,R}$ the free energy, rather than the real free energy $-\log {\cal Z}_{N,R}$ in physics to avoid the frequent appearance of extra minus signs.
This ``convention'' is often used in combinatorics papers.} whose expansion involves only \textit{connected} Feynman graphs: \begin{equation} {\mathcal F}_{N,R}(\lambda, k) = \log {\mathcal Z}_{N,R}(\lambda, k) =\sum_{G\in {\mathbb{G}}_c} \frac {m(G)} {n(G)!} \Bigl(\frac{-\lambda}{8k^3}\Bigr)^{n(G)} N^{\Fd(G)} R^{\Fs(G)}, \label{eq:free-nrj} \end{equation} where we have denoted by $n(G)$ the number of interactions $U(\phi)$ in the graph $G\in{\mathbb{G}}_c$. \ \textit{ Our aim in the present section is to identify the graphs in ${\mathbb{G}}_c(n)$ which dominate when $N$ or $R$ is large, or both, in various specific regimes.} \ More precisely, we will consider the following cases: $N$ large and finite $R$ in Sec.~\ref{sub:LargeNfiniteR}, $R$ large and finite $N$ in Sec.~\ref{sub:LargeRfiniteN}, and both $N$ and $R$ large with $R\sim N^\alpha$, where $\alpha>1$ in Sec.~\ref{sub:AlphaLarg1} and where $\alpha\le1$ in Sec.~\ref{sub:AlphaSmall1}. For each one of these regimes, and for a fixed value of $n\ge 1$, the connected graphs in ${\mathbb{G}}_c(n)$ can be classified according to their dependence in $N$ and $R$. The graphs in ${\mathbb{G}}_c(n)$ for which this dependence is the strongest are called \emph{dominant graphs}. We will compute the dominant free-energy, i.e.~the free energy restricted to dominant graphs. \ Note that because all the one-interaction graphs have the same contribution in $R$, in all the regimes where $N$ is large, the only dominant one-interaction graph is given by the leftmost graph in Fig.~\ref{fig:neq1}, so that in any regime we may consider where $N$ is large, \begin{equation} \label{eq:leading-order-1} z^{\text{dom}}_1(N\gg1,R,k) = \frac {N^3R}{8k^3}. \end{equation} For the rectangular matrix model defined with an interaction of the form $\tilde U(\phi) = \mathrm{Tr}\bigl((\phi\phi^t)^p\bigr) $ with $p\ge 2$, the Feynman graphs are ribbon graphs whose vertices have $2p$ incident edges and whose faces are colored in black for the lower index ranging from 1 to $N$ and white for the upper index ranging from 1 to $R$, so that two neighboring faces have different colors \cite{Anderson:1991ku, Francesco-rect}. If the black and white faces are respectively counted by $F_b$ and $F_w$, and assuming that $R\sim N^\alpha$ with $\alpha\ge 1$ (for $0<\alpha<1$ the roles of $N$ and $R$ are just exchanged), the dependence in $\lambda$, $N$, $R$ of a graph behaves as $$\lambda^n N^{F_b + \alpha F_w} = \lambda^n N^{\alpha(F_b + F_w) + (1-\alpha) F_b} = \lambda^nN^{\alpha(2+ n(p-1) -2g ) + (1-\alpha ) F_b},$$ where $g$ is the genus of the ribbon graph, so that we obtain the two following cases: \begin{enumerate}[label=--] \item If $\alpha = 1$, the dominant graphs are all the planar $2p$-regular ribbon graphs, and we recover the 2D quantum gravity phase \cite{2DgravityReview}, \item If $\alpha \neq 1$, the dominant graphs are all the planar $2p$-regular ribbon graphs which in addition have a single black face (or a single white face if $\alpha < 1$). Such graphs are easily shown to have the same structure as the dominant graphs for a $(\phi\cdot\phi)^p$ vector model, which we describe in Appendix~\ref{app:proof} for the $(\phi\cdot\phi)^3$ model. These graphs have a tree-like structure characteristic of the branched polymer phase. \end{enumerate} Note that by scaling the coupling constant as $\lambda= \lambda' N^{\alpha(1-p)}$, the contributions of the graphs are bounded by $N^{1+\alpha}$.
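This bound follows from a one-line computation with the quantities above: inserting the scaling $\lambda= \lambda' N^{\alpha(1-p)}$ into the previous estimate gives
$$\lambda^n N^{\alpha(2+ n(p-1) -2g ) + (1-\alpha ) F_b}=\lambda'^n N^{2\alpha-2\alpha g+(1-\alpha)F_b},$$
which, for $\alpha\ge 1$, is maximized by $g=0$ and $F_b=1$, yielding $N^{2\alpha+1-\alpha}=N^{1+\alpha}$ (for $0<\alpha<1$, the same argument applies with the roles of $F_b$ and $F_w$ exchanged).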
The scenario for dominant graphs for the rectangular one-matrix model with the assumption $R\sim N^\alpha$ with $\alpha> 0$ is summarized in Fig.~\ref{fig:RecDomMat}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{RectangDom.pdf} \caption{Dominant graphs at large $N$ for the random $N\times N^\alpha$ matrix models with $\alpha> 0$.} \label{fig:RecDomMat} \end{center} \end{figure} In this section, we will show that the scenario for the dominant graphs of our model is as shown in Fig.~\ref{fig:RecDom}. The families of graphs referred to as tree-like and star-like will be described more precisely in the rest of the section. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{NonPairDom.pdf} \caption{Dominant graphs at large $N$ for our random $N\times N^\alpha$ matrix model with non-pairwise index contractions, for $\alpha> 0$.} \label{fig:RecDom} \end{center} \end{figure} We will see that while the dominant graphs for finite $R$ and large $N$ are the same as those for $R\sim N^\alpha$ with $0<\alpha\le 1$, the dominant graphs for $R\sim N^\alpha$ with $\alpha> 2$ are a strict subset of those for $R$ large and $N$ finite. In the intermediate regime where $R\sim N^\alpha$ with $1<\alpha\le 2$, there is a competition between the two families of dominant graphs, so that the dominant graphs are neither included in those for $R$ large and $N$ finite, nor in those for finite $R$ and large $N$. \subsection{The large $N$ and finite $R$ regime} \label{sub:LargeNfiniteR} \noindent{\bf The $R=1$ vector model at large $N$.} Let us start with this well-known particular regime of the model: for $R=1$, we recover the $(\phi\cdot \phi)^3$ vector model, for which the graphs that maximize $\Fd$ in ${\mathbb{G}}_c(n)$ are well known and satisfy $\Fd=1+2n$ (an example is shown in Fig.~\ref{fig:Tree}). Such graphs are said to have a tree-like structure (see Appendix~\ref{app:proof}). \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{tree2.pdf} \caption{An example of a dominant graph in the large $N$ and finite $R$ regime.} \label{fig:Tree} \end{center} \end{figure} To compute the free-energy restricted to the dominant graphs in this regime, it is easier to first compute the 2-point function. More precisely, by differentiating the free-energy with respect to $k$, we obtain the generating series $ {\mathcal G}_{N,R}(\lambda, k) $ of graphs with one oriented thin edge, which corresponds to the \textit{normalized two-point function} \begin{equation} \label{eq:TwoPoint-diff} {\mathcal G}_{N,R}(\lambda, k)= \frac{2k}{NR}\langle \mathrm{Tr} \phi \phi^t \rangle = 1- \frac{2k}{NR} \frac{\partial}{\partial k} {\mathcal F}_{N,R}(\lambda, k) , \end{equation} where \begin{equation} \langle \mathrm{Tr} \phi \phi^t \rangle= \frac1{ {\mathcal Z}_{N,R}(\lambda, k) } \int_{{\mathbb{R}}^{NR}}\, d\phi \mathrm{Tr} \phi \phi^t e^{-\lambda U(\phi) - k \mathrm{Tr} (\phi \phi^t)}. \end{equation} \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.55]{two-point-trees.pdf} \caption{Graphical representation of the self-consistency equation for the normalized two-point function at leading order in the large $N$ and $R=1$ regime.} \label{fig:Two-point-trees} \end{center} \end{figure} The dominant graphs in this regime have the recursive structure shown in Fig.~\ref{fig:Two-point-trees}, which translates into the following self-consistency equation for the 2-point function ${\mathcal G}^{\text{dom}}_{N,1}$ restricted to dominant graphs, \begin{equation} \label{eq:Ternary-Tree} {\mathcal G}^{\text{dom}}_{N,1} = 1 + z \times ({\mathcal G}^{\text{dom}}_{N,1})^3, \qquad z= -\frac {3\lambda N^2}{4k^3}, \end{equation} where the factor $N^2$ comes from the normalization $1/NR$ in \eqref{eq:TwoPoint-diff} (see Appendix~\ref{app:proof}). This is the usual self-consistency equation for the generating function of rooted regular ternary trees. Note that by choosing the dependence in $N$ as $\lambda=N\lambda'$ and $k=N k'$, where $\lambda', k'$ do not depend on $N$, as usually done for vector models, we see that $ {\mathcal G}^{\text{dom}}_{N,1}$ no longer depends on $N$, so that we get a well-defined limit when $N$ goes to infinity. The coefficients of $ {\mathcal G}^{\text{dom}}_{N,1} (\lambda, k)$ are obtained using Lagrange inversion, and are known to be Fuss-Catalan numbers \begin{equation} {\mathcal G}^{\text{dom}}_{N,1} (\lambda, k) = \sum_{n\ge 0} \frac 1 {3n + 1}\binom{3n+1}{n}\Bigl(-\frac {3\lambda N^2}{4k^3}\Bigr)^n. \end{equation} By integrating over $k$, we find the coefficients of the leading-order free energy, \begin{equation} \label{eq:series-lo-FE} {\mathcal F}^{\text{dom}}_{N,1}(\lambda, k) = \frac{N}6\sum_{n\ge 1} \frac 1 {n }\frac 1 {3n + 1}\binom{3n+1}{n}\Bigl(-\frac {3\lambda N^2}{4k^3}\Bigr)^n, \end{equation} where the integration constant has been determined from the fact that $ {\mathcal F}^{\text{dom}}_{N,1}(0, k)=0 $. The equation \eqref{eq:Ternary-Tree} can also be solved explicitly, and the solution to this equation for $z<0$ which leads to the right series expansion is \begin{equation} \label{eq:solTernary} {\mathcal G}^{\text{dom}}_{N,1} (\lambda, k) = -\frac{2\times 3^{1/3} z + 2^{1/3} (9 z^2 + \sqrt{3} \sqrt{z^3 (-4 + 27 z)})^{2/3}}{ 6^{2/3} z (9 z^2 + \sqrt{3} \sqrt{z^3 (-4 + 27 z)})^{1/3} }. \end{equation} To recover an exact expression for $ {\mathcal F}^{\text{dom}}_{N,1}$, one can use \eqref{eq:TwoPoint-diff} and integrate \eqref{eq:solTernary} over $k$; however, this becomes quite cumbersome. Rather, it is easy to express $ {\mathcal F}^{\text{dom}}_{N,1}$ as a function of $ {\mathcal G}^{\text{dom}}_{N,1}$ by letting the latter implicitly represent the $k$-dependence. We obtain \begin{equation} {\mathcal F}^{\text{dom}}_{N,1} =- \frac N 2\Bigl( {\mathcal G}^{\text{dom}}_{N,1} + \frac{\lambda N^2}{4k^3} ({\mathcal G}^{\text{dom}}_{N,1})^3 - \log( {\mathcal G}^{\text{dom}}_{N,1}) - 1\Bigr), \end{equation} where the integration constant is found knowing that ${\mathcal G}^{\text{dom}}_{N,1}(0,k)=1$ and $ {\mathcal F}^{\text{dom}}_{N,1}(0, k)=0 $. One can easily show that $ {\mathcal F}^{\text{dom}}_{N,1}$ satisfies \eqref{eq:TwoPoint-diff} due to \eqref{eq:Ternary-Tree}. \ \noindent{\bf The finite $R$ case at large $N$.} In this case, the dominant graphs are the same as in the $R=1$ case, the only difference being that we need to take into account the factor $R^{\Fs}$ in \eqref{eq:free-nrj}.
Because of the tree-like structure (Figs.~\ref{fig:Tree} and \ref{fig:Two-point-trees}), it is easily seen that the graphs that maximize $\Fd$ in ${\mathbb{G}}_c(n)$ have $\Fs = 1$, so that for $R$ finite, at large $N$, ${\mathcal G}^{\text{dom}}_{N,R}(\lambda, k) = {\mathcal G}^{\text{dom}}_{N,1}(\lambda, k)$ and ${\mathcal F}^{\text{dom}}_{N,R}(\lambda, k) = R {\mathcal F}^{\text{dom}}_{N,1}(\lambda, k)$. \ A consequence is that if one considers random coupling vector models with a finite number $R$ of real replicas of the form \[ \label{eq:random-coup-2} Z_{N,R}(\lambda, k) = \int dP e^{-\frac 1 2 \sum_{abc=1}^N P_{abc}^2}\Bigl(\int_{{\mathbb{R}}^N} \prod_a d\phi_a e^{ - k \sum_{a=1}^N \phi_a^2 - I\sqrt{2\lambda } \sum_{a,b,c=1}^N P_{abc} \phi_a \phi_b \phi_c }\Bigr)^R, \] introduces replicas of the fields $\phi^i$ ($i=1,\ldots,R$), and then expands over Feynman graphs, the graphs that dominate at large $N$ are the celebrated melonic graphs \cite{melons1, melons2, melons3}. This is explained in more detail in Appendix \ref{app:melo}. \subsection{The large $R$ and finite $N$ regime} \label{sub:LargeRfiniteN} \subsubsection{Results} In the case where $R$ is large and $N$ is kept finite, we want to identify the graphs which maximize $\Fs$ in ${\mathbb{G}}_c(n)$. We will show that in this regime, the sum of the contributions of the dominant connected Feynman graphs in ${\mathbb{G}}_c(n)$ for any $n\ge 1$ is given by \begin{equation} {\mathcal F}^{\text{dom}}_{N,R}(\lambda, k) = \sum_{n\ge 1}\biggl[\frac{N}{2n}\Bigl(-\frac { 6(N+4) \lambda R}{8k^3}\Bigr)^n+\frac{N^3+3 N^2-4N}{12 n}\Bigl(-\frac {12 \lambda R}{8k^3}\Bigr)^n \biggr]. \label{eq:dnr} \end{equation} Summing this series, we find the dominant free-energy to be, in this regime, \[ {\mathcal F}^{\text{dom}}_{N,R}(\lambda, k) = - \frac N 2 \log\Bigl(1+\frac{3(N+4)R\lambda }{4k^3}\Bigr) - \frac{N(N+4)(N-1)}{12} \log\Bigl(1+\frac{3R\lambda }{2k^3}\Bigr). \label{eq:freeenergylargeR} \] Note that we can choose the dependence in $R$ of $\lambda$ and $k$ in order to cancel the dependence in $R$ and have a well-defined limit for $R\rightarrow \infty$ and $N$ finite, e.g. by choosing $\lambda=\lambda'/R$ with $\lambda', k$ independent of $R$, and $\lvert \lambda'/k^3\rvert < 4/(3(N+4))$. By exponentiation, we find the dominant partition function in this regime to be \[ {\mathcal Z}^{\text{dom}}_{N,R}(\lambda, k)= \left( 1+ \frac{3(N+4)R\lambda}{4 k^3}\right)^{-\frac{N}{2}} \left( 1+ \frac{3R\lambda}{2k^3} \right)^{-\frac{N(N+4)(N-1)}{12}}. \label{eq:resultsum} \] {\noindent \it Dominant graphs. }In the finite $R$ case at large $N$, it was possible to have dotted faces with a single dotted edge. This gives rise to the tree-like structure of the dominant graphs. In the present case, however, a solid face necessarily has an {\it even} number of trivalent solid nodes. Furthermore, $\Fs$ is bounded from above by the number of interactions \begin{equation} \text{if }G\in{\mathbb{G}}_c(n), \qquad \Fs(G) \le n(G), \end{equation} with equality if and only if every solid face has exactly two trivalent solid nodes. The dominant graphs in the large $R$ and finite $N$ regime are thus the graphs which satisfy this last condition, and they are easily shown to be necklace-like graphs obtained by forming one loop with the building blocks listed in Figure~\ref{fig:components} (see the examples in Figure~\ref{fig:necklace}).
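As a consistency check of \eq{eq:dnr}, consider its $n=1$ term: since all five one-interaction graphs of Figure~\ref{fig:neq1} have $\Fs=1=n$, the order-$\lambda$ term of the free energy must reproduce the full count $-\lambda z_1(N,R,k)$ computed in Section~\ref{sub:Feyn}, and indeed
\begin{equation}
-\frac{\lambda R}{8k^3}\Bigl[3N(N+4)+\bigl(N^3+3N^2-4N\bigr)\Bigr]=-\frac{\lambda R\left(N^3+6N^2+8N\right)}{8k^3}=-\lambda z_1(N,R,k).
\end{equation}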
\begin{figure}[!h] \begin{center} \includegraphics[width=3cm]{neck1.pdf} \hfil \includegraphics[width=3cm]{neck2.pdf} \caption{The building blocks of the dominant graphs at large $R$ and finite $N$. These building blocks can be obtained by cutting the dotted edges in half in the graphs of Figure~\ref{fig:neq1}. In a graph made of these building blocks, the three dotted half-edges on each side must be paired with those on another building block, or on the same block.} \label{fig:components} \end{center} \end{figure} \subsubsection{Proof} More precisely, given a graph $G\in{\mathbb{G}}_c(n)$, let us consider the following abstract graph $\Gamma(G)$: for each solid face $f$ we draw a vertex $v(f)$, and for each interaction in $G$, if its two 3-valent solid nodes belong to some (non-necessarily distinct) faces $f_1$ and $f_2$, we draw an edge between the corresponding vertices $v(f_1)$ and $v(f_2)$. Then the number of independent loops in the graph $\Gamma$ is $L(\Gamma) = n(G) - F_s(G) +1$, since $\Gamma$ has $n(G)$ edges, $F_s(G) $ vertices, and is connected. Therefore, \begin{equation} F_s(G) = n (G) +1 - L(\Gamma). \end{equation} Furthermore, graphs with no loops are trees, which necessarily have vertices of valency one. But as the solid faces in $G$ contain an even number of 3-valent solid nodes, the vertices in $\Gamma(G)$ have valency at least two. Therefore, $L(\Gamma)\ge 1$, and we recover that $1\le \Fs(G) \le n(G)$, but in addition we now know that the graphs in ${\mathbb{G}}_c(n)$ whose contribution in $R$ is $R^{n+1-l}$ are obtained by considering all the abstract graphs $\Gamma(G)$ with $l$ loops. In theory, we can thus identify the graphs contributing at any order in $R$. In particular, as written above, the dominant contribution in $R^n$ is given by the graphs for which $\Gamma$ has a single loop; since all the vertices of $\Gamma$ have valency at least two, $\Gamma$ is then a cycle, which corresponds to the necklaces of building blocks listed in Fig.~\ref{fig:components}. Two examples of necklace graphs for $n=3$ are shown in Fig.~\ref{fig:necklace}. \vspace{-0.5cm} \begin{figure}[!h] \begin{center} \includegraphics[width=5cm]{necklace1.pdf} \hfil \includegraphics[width=5cm]{necklace2.pdf} \caption{Examples of necklace graphs for $n=3$. The weights in $N,R$ are $N^2 R^3$ and $NR^3$, respectively. } \label{fig:necklace} \end{center} \end{figure} \vspace{-0.5cm} As was just proven, in a dominant graph the solid faces, each having exactly two trivalent solid nodes, are connected by dotted edges to form a loop. Such solid faces are the building blocks of the dominant graphs, and there exist only two kinds, shown in Fig.~\ref{fig:components}. These building blocks can be obtained by cutting the dotted edges in half in the graphs of Fig.~\ref{fig:neq1}. Two building blocks are connected by the dotted half-edges on one of their sides (or both, if the whole graph is composed of a single building block). The summation over Wick pairings in such a building block can be accounted for by permuting the dotted half-edges. To count the number of ways of connecting the building blocks in a loop, it is convenient to use a matrix representation. In this way, the free energy coming from the dominant graphs with $n$ interactions is given by \[ f^{\text{dom}}_n=\frac{2^n(-\lambda)^n R^n}{2 n(2k)^{3n}} {\rm Tr}\left( A^n\right), \label{eq:fn} \] where $A$ is a matrix representing the connection of the dotted edges of a building block.
More precisely, the matrix $A$ is a sum of two matrices, $A=B+C$, respectively corresponding to the two kinds of building blocks in Figure~\ref{fig:components}: \[ \begin{split} &B:=\frac{1}{6}P\tilde B P,\ \ \tilde B_{abc,def}:=\delta_{ad}\delta_{be}\delta_{cf},\\ &C:=\frac{1}{4}P\tilde CP, \ \ \tilde C_{abc,def}:=\delta_{ad} \delta_{bc}\delta_{ef}, \end{split} \label{eq:ABC} \] where a product of two matrices, say $X$ and $Y$, is defined by $$(XY)_{abc,ghi}:=\sum_{d,e,f=1}^N X_{abc,def}Y_{def,ghi},$$ and \begin{equation} P_{abc,def}:=\delta_{ad}\delta_{be}\delta_{cf} +(\hbox{permutations of }d,e,f). \end{equation} The matrix $P$ represents all permutations of the three dotted edges on each side of a building block. The numerical factors in \eq{eq:ABC} cancel the graph degeneracies\footnote{Among the 36 terms generated, some correspond to the same graphs. There are respectively 6 and 9 non-equivalent terms for $B$ and $C$, with degeneracies 6 and 4.}. As for the other factors in \eqref{eq:fn}, the factor $2^n$ comes from the choice of the sides of the interactions to form the solid faces, the factor $R^n$ accounts for the contribution of the $n$ solid faces, and there is a symmetry factor $2n$ in the denominator, where 2 comes from the overall reflection and $n$ from the choice of starting points in a loop. To compute \eq{eq:fn}, we use the properties of $B,C$ and $P$. By using $P^2=6 P,\ P\tilde B=\tilde BP=P$ (so that $B=P$), and $\tilde CP\tilde C=2(N+2)\tilde C$, one obtains \begin{equation} B^2=6B, \quad BC=CB=6C,\quad C^2=3(N+2)C. \label{eq:bcprod} \end{equation} One can also show \begin{equation} {\rm Tr}(B)=N^3+3N^2+2N,\quad {\rm Tr}(C)=3N(N+2). \label{eq:bctr} \end{equation} Though $B$ and $C$ are the natural choices for representing the connections of the dotted edges, they are not convenient for the computation of \eq{eq:fn} because of the mixed structure of their products. A better choice is given by \[ K=\frac{1}{6}B-\frac{1}{3(N+2)}C,\ \ H=\frac{1}{3(N+2)}C. \] Indeed, from \eq{eq:bcprod} and \eq{eq:bctr}, these quantities satisfy \[ \begin{split} & K^2=K,\qquad HK=KH=0,\qquad H^2=H, \\ & \hbox{Tr}(K)=\frac{1}{6}(N^3+3N^2-4N),\quad \hbox{Tr}(H)=N. \end{split} \] Since $A=6 K +3 (N+4) H$, we obtain \[ {\rm Tr}(A^n)=6^{n-1}(N^3+3N^2-4N)+3^n(N+4)^n N. \] By putting this into \eq{eq:fn}, we obtain the aforementioned result \eq{eq:dnr}. \ Note that the graphs which maximize $\Fd$ at fixed $n$ while having $\Fs=n$ satisfy $\Fd=n+1$. If there are only building blocks as on the left of Fig.~\ref{fig:components}, the number of dotted faces is bounded by 3. To obtain $\Fd=n+1$ for $n>2$, we therefore need to have building blocks as on the right of Fig.~\ref{fig:components}, which means that there is a single dotted face going around the loop. The number of the remaining dotted faces is bounded by $n$; this bound is attained when all the building blocks are as on the right of Fig.~\ref{fig:components} and the dotted edges produce one dotted face between every two consecutive building blocks. This forces the graph to be as in Fig.~\ref{fig:LO-largNR}. We will call such graphs {\it star-like graphs}. \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.4]{LO-largeNR.pdf} \caption{ The star-like graphs which dominate for sufficiently large $n$ in the large $R\sim N^\alpha$ regime with $\alpha>1$.} \label{fig:LO-largNR} \end{center} \end{figure} \subsection{The large $R\sim N^\alpha$ regime with $\alpha>1$} \label{sub:AlphaLarg1} \subsubsection{Results} In this section, we will identify the dominant graphs in the regime where $R\sim N^\alpha$ with $\alpha>1$ and $N\rightarrow +\infty$. They correspond to the graphs in ${\mathbb{G}}_c(n)$ which maximize \begin{equation} F_\alpha = \Fd + \alpha \Fs, \end{equation} with $\alpha>1$. We have seen two families of graphs which maximize $\Fd$ at fixed $\Fs$: \begin{itemize} \item The \emph{tree-like graphs}, which have a maximal $\Fd$ among all graphs at fixed $n$, and for which $\Fs=1$. Tree-like graphs in ${\mathbb{G}}_c(n)$ thus have \begin{equation} F_{\alpha,n}^\text{tree} = 1+\alpha + 2n. \end{equation} \item The \emph{star-like graphs} shown in Fig.~\ref{fig:LO-largNR}, which have a maximal $\Fd$ at fixed $n$ ($\Fd=n+1$), among graphs with maximal $\Fs$. Star-like graphs in ${\mathbb{G}}_c(n)$ thus have \begin{equation} F_{\alpha,n}^\text{star} = 1+(\alpha + 1)n. \end{equation} \end{itemize} There is a competition between the two families of graphs. Indeed, we see that \begin{equation} \label{eq:Comp-star-tree} F_{\alpha,n}^\text{tree} \le F_{\alpha,n}^\text{star} \qquad \Leftrightarrow \qquad n\ge \frac \alpha{\alpha - 1}. \end{equation} In this section, we will show that all other graphs are dominated either by the tree-like graphs or by the star-like graphs, in the sense that they have a lower $F_\alpha$ at fixed $n$. An exception occurs for $n=2$, for which another one of the necklace graphs has the same contribution in $N$ and $R$ as the star-like graph (it belongs to the family of necklaces whose contribution is in $N^3R^n$ in \eqref{eq:dnr}). Therefore, \eqref{eq:Comp-star-tree} describes the unusual scenario for dominant graphs, which is summarized in Figs.~\ref{fig:RecDom} and \ref{fig:fig-Falpha}: \begin{enumerate}[label={\small $\blacktriangleright$}] \item \underline{For $\alpha>2$}, we have $1<\alpha/(\alpha - 1) <2$, so that tree-like graphs only dominate at $n=1$, while star-like graphs dominate for $n> 2$. Both the star-like graphs and the necklaces whose contribution is in $N^3R^n$ in \eqref{eq:dnr} co-dominate at $n=2$. As a consequence, in this regime, the dominant free-energy is given by \begin{equation} {\mathcal F}^{\text{dom}}_{N,R}(\lambda, k) = - \frac {N^2R\lambda}{ 8k^3}(N-3) + \frac{3 N^3 R^2\lambda^2}{32 k^6}- \frac N 2 \log\Bigl(1+\frac{3NR\lambda }{4k^3}\Bigr). \label{eq:allarger2} \end{equation} \item \underline{For $\alpha=2$}, we have $\alpha/(\alpha - 1) =2$, so that tree-like graphs dominate at $n=1$, tree-like graphs, star-like graphs, and the necklaces whose contribution is in $N^3R^n$ in \eqref{eq:dnr} co-dominate at $n=2$, while star-like graphs dominate for $n>2$. As a consequence, in this regime, the dominant free-energy is given by \begin{equation} {\mathcal F}^{\text{dom}}_{N,R}(\lambda, k) = - \frac {N^2R\lambda}{ 8k^3}(N-3) + \frac{3 N^3 R\lambda^2}{32 k^6}\bigl(\frac {3} 2 N^2 + R\bigr) - \frac N 2 \log\Bigl(1+\frac{3NR\lambda }{4k^3}\Bigr). \label{eq:aleq2} \end{equation} \item \underline{For $1<\alpha<2$}, we have $\alpha/(\alpha - 1) >2$. For $n\le \frac \alpha{\alpha - 1}$, tree-like graphs dominate, while for $n\ge \frac \alpha{\alpha - 1}$, star-like graphs dominate.
If $\alpha=\frac {n_0}{n_0-1}$ for some positive integer $n_0$, then both tree-like graphs and star-like graphs co-dominate for $n=n_0$. As a consequence, in this regime, the dominant free-energy is given by \begin{equation} {\mathcal F}^{\text{dom}}_{N,R}(\lambda, k) =\frac {NR} 6 \sum_{n=1}^{\lfloor \frac \alpha{\alpha - 1}\rfloor} \frac 1 {n }\frac 1 {3n + 1}\binom{3n+1}{n}\Bigl(-\frac {3\lambda N^2}{4k^3}\Bigr)^n + \frac N 2 \sum_{n\ge\lceil \frac \alpha{\alpha - 1}\rceil} \frac 1 n \Bigl(-\frac {3\lambda NR}{4k^3}\Bigr)^n. \label{eq:a-bet-12} \end{equation} \end{enumerate} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.8]{alpha2-3.pdf} \caption{$F_\alpha$ as a function of $n$ for $\alpha>1$. The region of reachable $F_\alpha$ is shaded. It is delimited by $ F_{\alpha,n}^\text{tree}$ for small $n$ and by $F_{\alpha,n}^\text{star}$ for larger $n$.} \label{fig:fig-Falpha} \end{center} \end{figure} \subsubsection{Discussion} \label{sec:discussionlimit} The series we obtain for the star-like graphs is the remainder of a logarithm, which is convergent if $\lvert \frac \lambda {k^3}\rvert < \frac 4 3 \frac 1 {NR}$ and divergent otherwise. By choosing the dependence in $N, R$ of the coupling constants to compensate the factors $(NR)^n$ in the sum corresponding to the logarithm, i.e. \begin{equation} \frac\lambda{k^3} = \frac{\lambda'}{k'^3} \frac 1 {NR}\quad\text{ with }\quad\Bigl\lvert\frac{\lambda'}{k'^3}\Bigr\rvert < \frac 4 3, \label{eq:scaling-lamda-k} \end{equation} the sum on the right of \eqref{eq:a-bet-12} can be replaced by the remainder of the logarithm, which scales in $N$: \begin{equation} \label{eq:log-free-NRJ-1} {\mathcal F}^{\text{dom}}_{N,R}\Bigl(\frac{\lambda'}{NR}, k' \Bigr) = \frac {NR} 6 \sum_{n=1}^{\lfloor \frac \alpha{\alpha - 1}\rfloor}u_n\Bigl(-\frac {3\lambda' N}{4k'^3R}\Bigr)^n - \frac N 2 \sum_{n=1}^{\lceil \frac \alpha{\alpha - 1}\rceil - 1} \frac 1 n \Bigl(-\frac {3\lambda' }{4k'^3}\Bigr)^n - \frac N 2 \log\Bigl(1+\frac{3\lambda' }{4k'^3}\Bigr), \end{equation} where $u_n= \frac 1 {n }\frac 1 {3n + 1}\binom{3n+1}{n}$. The terms in the partial sum of the tree-like free energy behave as $N^{1+ \alpha -n(\alpha - 1) }$, so that these terms all have a stronger scaling in $N$ than the logarithm\footnote{Apart from the term for $n=\alpha/(\alpha - 1)$ if it is an integer.}. Note that the dominant free-energy as we defined it only retains the dominant graphs at fixed $n$, so that for $n\le \lfloor \frac \alpha{\alpha - 1}\rfloor$ there might be other graphs whose dependence in $N$ is stronger than $N$ with the choice \eqref{eq:scaling-lamda-k}.
However, there are only finitely many of them, so that there exists a polynomial \begin{equation} \nonumber P_\alpha(\lambda'/k'^3, N, R) = \sum_{n=1}^{\lfloor \frac \alpha{\alpha - 1}\rfloor} c_n(N,R)\Bigl(-\frac {\lambda' }{8k'^3}\Bigr)^n, \end{equation} where $c_n(N,R)$ gathers the contributions of all the graphs with $n$ interactions whose dependence in $N$ when $R=N^\alpha$ is stronger than or equal to $N$, aside from the star-like graphs, so that $$ c_n(N,R) = u_n\frac {NR} 6 \Bigl(-\frac {3\lambda' N}{4k'^3R}\Bigr)^n + o \bigl(N^{2 -(n-1)(\alpha - 1)}\bigr), $$ and in particular, \begin{equation} P_\alpha(\lambda'/k'^3, N, R) = -\frac {\lambda' N^2}{8k'^3}+ o(N^2), \end{equation} and such that \[ \lim_{ \substack{{N\rightarrow +\infty,}\\{R\sim N^\alpha,\ \alpha >1}} } \frac 1 {N}\, \Biggl[\log Z_{N,R} \Bigl(\frac{\lambda'}{NR}, k' \Bigr) - P_\alpha(\lambda'/k'^3, N, R) \Biggr] = - \frac 1 2 \log\Bigl(1+\frac{3\lambda' }{4k'^3}\Bigr). \label{eq:limitsub} \] We have for instance for any $\alpha>2$,\footnote{Note that because of the convention that ${\mathcal F}^{\text{dom}}_{N,R}$ only retains the dominant graphs order per order, we had to add the term of order 1 of the logarithm in \eq{eq:allarger2}, thus the $-\frac {3\lambda' N^2}{8k'^3}$. This is not necessary here, as we deal with the full free-energy.} $$ P_{\alpha>2}(\lambda'/k'^3, N, R) = -\frac {\lambda' N^2}{8k'^3} + \frac {3\lambda'^2N}{32k'^6},$$ and for $\alpha=2$, $$ P_{2}(\lambda'/k'^3, N, R) = -\frac {\lambda' N^2}{8k'^3} + \frac{3 N \lambda'^2}{32 k'^6}\Bigl(\frac {3} 2 \frac {N^2} R + 1\Bigr).$$ In other words, after subtracting the contribution of a finite number of graphs, the large $N$ free energy is essentially a logarithm. In matrix models in the context of 2D quantum gravity, one is naturally interested in the behavior of large graphs, as they carry the properties of the continuum limit. Here, for large graphs, the free-energy is dominated by a logarithm, so that in a sense the large dominant graphs are more ordered than the smaller ones. This is an interesting phenomenon, which, as far as we know, had not been exhibited by any previously studied random vector, matrix or tensor model. \ Note that because the series of star-like graphs is highly divergent outside of its domain of convergence, the conclusions of the graphical study performed in this section (the identification of the various series of dominant graphs and the comparison between them) cannot be extrapolated outside the domain of convergence. This applies for instance to the regime where $\lambda/k^3 = t/N^2$ for $t$ of the order of 1. \subsubsection{Proof} We split the proof into several parts. \medskip {\noindent \bf (a) \ A bound on the number of small faces. }We will need the following result for $n\ge 1$, which proves that {\it a dominant graph necessarily contains small faces}: \begin{equation} \label{eq:Bound-small-faces} G \in {\mathbb{G}}_c(n)\text{ is dominant}\quad \Rightarrow \quad\Fd[(1)](G) + \alpha \Fs[(2)](G) \ge 2 + n(\alpha - 1) \end{equation} where $\Fd[(l)]$ (resp.~$\Fs[(l)]$) is the number of dotted (resp.~solid) faces incident to $l$ interactions counted with multiplicity: for dotted faces, it means having $l$ dotted edges, and for solid faces, it means having $l$ trivalent solid nodes. The proof of this preliminary result goes as follows. In the following, we consider a graph $G\in{\mathbb{G}}_c(n)$.
By summing $l \Fd[(l)]$ (resp.~$l \Fs[(l)]$) over $l$, we just count the total number of dotted edges (resp.~trivalent solid nodes) in the graph: \begin{equation} \label{eq:sum_lengths_faces} \sum_{l\ge 1}l \Fd[(l)](G) = 3n, \qquad\text{and}\qquad \sum_{l\ge 1}2l \Fs[(2l)](G) = 2n, \end{equation} where the second sum has been restricted to even integers, due to the fact that solid faces visit an even number of interactions. By considering $3n - \Fd(G)$, we find $$ 3n - \Fd= \sum_{l\ge 2} (l-1) \Fd[(l)] \ge \Fd - \Fd[(1)], $$ so that \begin{equation} \label{eq:bound-small-dotted} \Fd[(1)] \ge 2\Fd - 3n, \end{equation} with equality if and only if $\Fd[(l)]$ vanishes for $l>2$. Similarly, by considering $n - \Fs(G)$, we find $$ n - \Fs= \sum_{l\ge 2} (l-1) \Fs[(2l)] \ge \Fs - \Fs[(2)], $$ so that \begin{equation} \label{eq:bound-small-solid} \Fs[(2)] \ge 2\Fs - n. \end{equation} From \eqref{eq:bound-small-dotted} and \eqref{eq:bound-small-solid}, we see that any graph $G\in{\mathbb{G}}_c(n)$ satisfies \begin{equation} \label{eq:small-bound-Falpha} \Fd[(1)](G) + \alpha \Fs[(2)](G) \ge 2F_\alpha(G) - n(3 + \alpha). \end{equation} Since the graph in Fig.~\ref{fig:LO-largNR} has $F_\alpha = 1+(\alpha+1)n$, we know that $F_\alpha^{\text{dom}}(n) \ge 1+(\alpha+1)n$, so that a dominant graph $G$ satisfies $$ \Fd[(1)](G) + \alpha \Fs[(2)](G) \ge 2(1+(\alpha+1)n) - n(3 + \alpha), $$ which simplifies to \eqref{eq:Bound-small-faces}. \ We now prove by induction on the number of interactions $n$ that if $n<\frac \alpha {\alpha - 1}$, the dominant graphs in ${\mathbb{G}}_c(n)$ are the tree-like graphs of Sec.~\ref{sub:LargeNfiniteR}, and if $n>\frac \alpha {\alpha - 1}$, the dominant graphs in ${\mathbb{G}}_c(n)$ are the star-like graphs of Fig.~\ref{fig:LO-largNR}. The method is to assume that $G$ is dominant, and to characterize it using some graphical moves. In the following, we assume that $n\ge4$, as we will initiate the induction at $n=3$. The cases $n=1,2,3$ will be treated below in the paragraph {\bf (d)}. Note that to show that a graph $G$ is a tree-like graph, it is sufficient to show that $\Fd(G)=2n(G)+1$, while to show that it is a star-like graph when $n\ge 3$, it is sufficient to show that $\Fs(G)=n(G)$ and to assume that it is dominant. \ {\noindent \bf (b) \ On the existence of dotted faces with a single edge in a dominant graph. } In this paragraph, we show that {\it if a graph $G$ contains a dotted face with a single dotted edge, then either $G$ is a tree-like graph, or $G$ is not dominant (i.e.~we can find another connected graph with a larger $F_\alpha$)}. Since tree-like graphs are not dominant for $n>\frac \alpha{\alpha - 1}$, this implies that: \begin{enumerate}[label=(\roman*)] \item\label{lemma11} {\it a dominant graph for $n>\frac \alpha{\alpha - 1}$ must satisfy $\Fd[(1)]=0$}. \item\label{lemma12}{\it a graph with $n\le \frac \alpha{\alpha - 1}$ for which $\Fd[(1)]>0$ is a tree-like graph or has a smaller $F_\alpha$ than tree-like graphs.} \end{enumerate} Suppose that there exists a dotted face in $G$ with a single dotted edge. There are four possibilities locally, shown below. \begin{figure}[h!]
\center \includegraphics[scale=0.45]{One-dotted-face-lead.pdf} \hspace{1cm} \includegraphics[scale=0.45]{One-dotted-face-sub.pdf} \hspace{1cm} \includegraphics[scale=0.45]{One-dotted-face-sub2.pdf} \hspace{1cm} \includegraphics[scale=0.45]{One-dotted-move_1.pdf} \caption{Local possibilities for an interaction with a dotted face with a single dotted edge.} \label{fig:One-dotted-cases} \end{figure} We immediately see that the case on the left has more dotted faces than the two cases in the middle, while they have the same number of solid faces. Therefore, a graph containing the subgraphs in the middle cannot be dominant. Let us first suppose that there exists a subgraph as on the left of Fig.~\ref{fig:One-dotted-cases}. Performing the operation below \begin{equation} \label{fig:One-dotted-last-move} \begin{array}{c}\includegraphics[scale=0.45]{One-dotted-face-lead0.pdf}\end{array} \hspace{1cm}\rightarrow\hspace{1cm} \begin{array}{c}\includegraphics[scale=0.45]{One-dotted-face-lead-move.pdf}\end{array} \end{equation} we obtain a graph $G'$ with one less interaction. We have that $\Fd(G) = \Fd(G') + 2$ and $\Fs(G) = \Fs(G')$. If $n-1< \alpha /({\alpha - 1})$, either $G'$ is a tree-like graph, in which case $G$ is a tree-like graph (and therefore $G$ is not dominant if $n>\frac \alpha {\alpha - 1}$), or using the recursion hypothesis, $F_\alpha(G')<1+\alpha + 2(n-1)$, so that $F_\alpha(G) < 1+\alpha + 2n$, in which case $G$ is not dominant (tree-like graphs always have a larger $F_\alpha$). If $n-1\ge \frac \alpha {\alpha - 1}$, from the recursion hypothesis, $F_\alpha(G')\le1+(\alpha + 1)(n-1)$, so that $F_\alpha(G) \le 1+ (\alpha + 1)n + 1 - \alpha < 1+ (\alpha + 1)n$, which implies that $G$ is not dominant (star-like graphs always have a larger $F_\alpha$). Now focusing on the case on the right of Fig.~\ref{fig:One-dotted-cases}, we exchange the thin lines as illustrated below, for one of the two other dotted edges of the interaction: \begin{equation} \label{fig:One-dotted-move} \begin{array}{c}\includegraphics[scale=0.45]{One-dotted-move_10.pdf}\end{array} \hspace{1cm}\rightarrow\hspace{1cm} \begin{array}{c}\includegraphics[scale=0.45]{One-dotted-move_20.pdf}\end{array}. \end{equation} There are two cases. If this disconnects the graph into two graphs $G_1$ and $G_2$, one dotted face and one solid face are created, so that $F_\alpha(G) = F_\alpha(G_1) + F_\alpha(G_2) - (1+\alpha)$. We bound $F_\alpha(G_1)$ and $ F_\alpha(G_2)$ by their maximal possible values, using the recursion hypothesis. Depending on whether $n_1=n(G_1)$ and $n_2=n(G_2)$ are smaller than $\frac \alpha {\alpha - 1}$ or not, we may have the following situations. \begin{itemize} \item If both $n_1$ and $n_2$ are smaller than $\frac \alpha {\alpha - 1}$, either both $G_1$ and $G_2$ are tree-like graphs so that $G$ is also a tree-like graph (and so that $G$ is not dominant if $n>\frac \alpha {\alpha - 1}$), or from the induction hypothesis, one of the $G_i$ satisfies $F_\alpha(G_i) < 1+\alpha + 2n_i $, so that $$F_\alpha(G) < 2+2\alpha + 2(n_1 + n_2) - (1+\alpha) = 1+\alpha + 2(n_1 + n_2),$$ which implies that $G$ is not dominant (tree-like graphs always have a larger $F_\alpha$). \item If $n_1<\frac \alpha {\alpha - 1}$ and $n_2\ge \frac \alpha {\alpha - 1}$ (or conversely), we have $$F_\alpha(G) \le 1+\alpha + 2n_1 + 1 + (\alpha + 1)n_2 - (1+\alpha) = 1+(\alpha + 1)(n_1 + n_2) + (1-\alpha) n _1,$$ so that $G$ is not dominant (star-like graphs always have a larger $F_\alpha$).
\item If both $n_1$ and $n_2$ are larger or equal to $\frac \alpha {\alpha - 1}$, we have $$F_\alpha(G) \le 2+(\alpha+1)(n_1 + n_2) - (1+\alpha) < 1+(\alpha + 1)(n_1 + n_2) ,$$ so that $G$ is not dominant (star-like graphs always have a larger $F_\alpha$). \end{itemize} The only remaining case in this paragraph is the one in which the graph stays connected when exchanging the thin lines as in \eqref{fig:One-dotted-move}. In this case, we obtain a graph $G''$ with $\Fd(G) = \Fd(G'') - 1$ and either $\Fs(G) = \Fs(G'')$ or $\Fs(G) = \Fs(G'') - 1$, depending on whether the solid face splits or not. In any case, we see that $F_\alpha(G)<F_\alpha(G'')$, so that $G$ is not dominant. This concludes the paragraph. \ {\noindent \bf (c) \ On the existence of solid faces with two trivalent solid nodes in a dominant graph. } In this paragraph, we show that {\it if a graph $G$ contains a solid face with two trivalent solid nodes, then either $G$ is a star, or $G$ is not dominant (i.e.~we can find another connected graph with a larger $F_\alpha$)}. Since star-like graphs are not dominant for $n<\frac \alpha{\alpha - 1}$, this implies that: \begin{enumerate}[label=(\roman*), start=3] \item \label{lemma21} {\it a dominant graph for $n<\frac \alpha{\alpha - 1}$ must satisfy $\Fs[(2)]=0$}. \item \label{lemma22}{\it a graph with $n\ge \frac \alpha{\alpha - 1}$ for which $\Fs[(2)]>0$ is a star-like graph or has a smaller $F_\alpha$ than star-like graphs.} \end{enumerate} Suppose that there exists a solid face in $G$ with two trivalent solid nodes. There are two possibilities locally, shown below (and all possible ways of crossing the three thin edges in the center for the graph on the left). \begin{figure}[h!] \center \includegraphics[scale=0.45]{Two-solid-1.pdf} \hspace{1.8cm} \includegraphics[scale=0.45]{Two-solid-2.pdf} \caption{Local possibilities for interactions around a solid face with two trivalent solid nodes} \label{fig:Two-solid-cases} \end{figure} Let us first consider the case on the left of Fig.~\ref{fig:Two-solid-cases} (and possible crossings of the central thin edges), and perform the following move \begin{equation} \label{fig:Two-solid-first-move} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-1.pdf}\end{array} \hspace{1cm}\rightarrow\hspace{1cm} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-move.pdf}\end{array} \end{equation} in a way which respects the dotted faces. We obtain a graph $G'$ with the same number of dotted faces, and with one less solid face, so that $F_\alpha(G)=F_\alpha(G')+\alpha$. As usual, if $n-1<\alpha/(\alpha - 1)$, $F_\alpha(G')\le 1+\alpha+2(n-1)$, so that $$F_\alpha(G)\le 1+2\alpha+2n - 2 = 1+ (\alpha + 1)n + (n-2)(1-\alpha) < 1+ (\alpha + 1)n$$ as long as $n>2$. On the other hand, if $n-1\ge \alpha/(\alpha - 1)$, $F_\alpha(G')\le 1+(\alpha+1)(n-1)$, so that $$F_\alpha(G)\le 1+(\alpha+1)n - 1 < 1+ (\alpha + 1)n,$$ so that the case on the left of Fig.~\ref{fig:Two-solid-cases} always leads to a non-dominant graph as long as $n>2$. Let us now consider the case on the right of Fig.~\ref{fig:Two-solid-cases}. It is slightly more involved than the previous cases. First, let us specify that the results obtained for the move \eqref{fig:One-dotted-move} in the case where it disconnects the graph are slightly more general: consider a graph $G$ and two thin edges which belong to the same solid face, and such that exchanging them disconnects the graph. Suppose in addition that one of these connected components is not a tree.
Then the graph is not dominant, in the sense that we can always find graphs with larger $F_\alpha$. Indeed, the computations are precisely the same as what we have done before, with the difference that the case in which both $G_1$ and $G_2$ are tree-like graphs is excluded. Let us consider the following move, which we can perform only if the two thin edges we exchange are indeed distinct. \begin{equation} \label{fig:Two-solid-last-move-0} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-20.pdf}\end{array} \hspace{1cm}\rightarrow\hspace{1cm} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-3.pdf}\end{array}. \end{equation} From what we just said, if it disconnects the graph, then $G$ is not dominant. If not, we obtain a connected graph $G''$, with one more dotted face, and either one or zero additional solid face. Thus, $F_\alpha(G) < F_\alpha(G'')$. Therefore, a dominant graph $G$ with a solid face with precisely two trivalent solid nodes must be as follows, \begin{equation} \label{fig:Two-solid-AB} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-AB.pdf}\end{array}. \end{equation} Let us now focus on the interaction $C$ attached to the other extremity of the thin edge $e_B$ on the left of \eqref{fig:Two-solid-last-last-move} below (in the figure, we do not represent the interaction $A$ anymore). We perform the following move (if the two exchanged edges are distinct): \begin{equation} \label{fig:Two-solid-last-last-move} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-4.pdf}\end{array} \hspace{1cm}\rightarrow\hspace{1cm} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-5.pdf}\end{array}. \end{equation} Applying the same argument again, we see that if this disconnects the graph $G$, then $G$ is not dominant, and if it does not disconnect the graph, it creates a solid face, while the number of dotted faces is modified by $-1$, $+1$, or 0. In any case, we obtain a graph $G'''$ with $F_\alpha(G) \le F_\alpha(G''') + 1-\alpha < F_\alpha(G''')$. This means that for the graph to be dominant, the two edges we exchange in \eqref{fig:Two-solid-last-last-move} must in fact be the same edge, so that we are again in the situation on the left of \eqref{fig:Two-solid-last-move-0}, but for the interactions $C$ and $B$ instead of $B$ and $A$. We then exchange the two thin edges on the upper left of $C$, concluding that if the graph is dominant, they must be the same edge, and we then focus on the interaction $D$ at the other extremity of $e_C$ and exchange the two thin edges on the upper right of the interaction $D$, concluding that if the graph is dominant, they must be the same edge, and so on. We can repeatedly apply the moves \eqref{fig:Two-solid-last-move-0} and \eqref{fig:Two-solid-last-last-move} to show that either the graph is not dominant, or it contains a larger and larger portion of a star-like graph (a chain), which we uncover from right to left at each step until, eventually, all the $n$ interactions are included in the chain. This forces the chain to be cyclic: when we uncover the $(n+1)$th interaction, the leftmost interaction must in fact be the rightmost interaction $A$. This proves that a graph with a solid face with exactly two trivalent solid nodes is a star-like graph, or has a smaller $F_\alpha$ than some other graph. \ {\noindent \bf (d) \ Small dominant graphs. } Since this is a proof by induction, we must study the cases for small $n$.
We already know that \underline{for $n=1$}, the only dominant graph is the tree-like graph. \medskip \underline{For $n=2$}, we have $\Fs\in\{1,2\}$. If $\Fs=2$, we know that $\Fd\le 3$ with equality for the star-like graph (as is the case for larger $n$) or the other necklace graphs with the $R^2N^3$ behavior in \eqref{eq:dnr} (this is specific to $n=2$), in which case $F_\alpha= 3+2\alpha$. If $\Fs=1$, we know that $\Fd\le 5$ with equality for the tree-like graph, in which case $F_\alpha= 5+\alpha$. If $\alpha<2$, the only dominant graph is the tree-like graph, while for $\alpha>2$, the only dominant graphs are the star-like graph and the other necklace graph. If $\alpha=2$, all of these graphs are dominant. However, this is the only value of $n$ for which a graph which is not a tree-like graph or a star-like graph is dominant. \medskip \underline{For $n=3$}, there are three possibilities for the number of solid faces: $\Fs\in\{1,2,3\}$. Again, if $\Fs=3$ the graph is at best a star-like graph with $\Fd=4$ (now the only possible case), while if $\Fs=1$, the graph is at best a tree-like graph with $\Fd=7$. If $\Fs=2$, we know that $\Fd\le 6$ since the graph is not a tree-like graph. Let us suppose that $\Fd=6$ and $\Fs=2$ for a graph $G$. In that case, we use the lower bound on the number of dotted faces with a single dotted edge, \eqref{eq:bound-small-dotted}, which implies that $\Fd[(1)]\ge 3$. Suppose first that $\Fd[(1)]= 3$. Then $\Fd[(l)]$ vanishes for $l>2$, so that from \eqref{eq:sum_lengths_faces}, $ \Fd[(1)] + 2\Fd[(2)]=9$, which implies that $\Fd[(2)]=3$. One can easily see that the graphs with $n=3$ and $\Fd[(1)]=\Fd[(2)]=3$ have $\Fs=1$. Therefore if $\Fs=2$ and $\Fd=6$, we must have $\Fd[(1)]\ge 4$. Since there are three interactions, this implies that one of the interactions has two dotted faces with a single dotted edge each, i.e.~one of the interactions is as on the left of Fig.~\ref{fig:One-dotted-cases}. Applying the move \eqref{fig:One-dotted-last-move}, we have a graph $G'$ with two interactions and with two solid faces, so that at best there are three dotted faces. Thus at best, $\Fd(G) = \Fd(G')+2 = 5$, which contradicts the initial hypothesis that $\Fd(G)=6$. Thus, at best, if $\Fs=2$ then $\Fd=5$, so that $F_\alpha = 5 + 2\alpha$. We see that if $\alpha <3/2$, the tree-like graphs are dominant, while for $\alpha > 3/2$, only the star-like graph is dominant. For $\alpha=3/2$, both the star-like graph and the tree-like graphs are dominant. This is the scenario we prove recursively, so that the case $n=3$ is enough to initiate the induction. \medskip Using similar arguments, it is possible to prove that \underline{for $n=4$}, the maximal behaviors at fixed number of solid faces are $R^4N^5$, $R^3N^6$, $R^2N^7$, and $RN^9$, but we do not detail the computations here.
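As a cross-check of these small cases against the general statement, equating the tree-like and star-like values of $F_\alpha$ found above reproduces the threshold $n=\frac\alpha{\alpha-1}$, i.e.~$\alpha=\frac{n}{n-1}$: \[ n=2:\quad 5+\alpha = 3+2\alpha \;\Leftrightarrow\; \alpha = 2, \qquad\qquad n=3:\quad 7+\alpha = 4+3\alpha \;\Leftrightarrow\; \alpha = \tfrac 3 2\,, \] in agreement with the values of $\alpha$ at which the dominant family was found to change for $n=2$ and $n=3$.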
\ {\noindent \bf (e) \ Proof of the results. }The proof is an induction on $n$. The results hold for $n=3$. Using the results above, under the induction hypothesis, we know from \ref{lemma21} that if $n<\frac \alpha{\alpha - 1}$, a dominant graph $G$ in ${\mathbb{G}}_c(n)$ has $\Fs[(2)](G)=0$, so that from \eqref{eq:Bound-small-faces}, it must have $\Fd[(1)](G)>0$. We thus see from \ref{lemma12} that either $G$ is a tree, or it has a smaller $F_\alpha$ than tree-like graphs. A dominant graph with $n<\frac \alpha{\alpha - 1}$ must therefore be a tree. Similarly, from \ref{lemma11}, if $n>\frac \alpha{\alpha - 1}$, a dominant graph $G$ in ${\mathbb{G}}_c(n)$ has $\Fd[(1)](G)=0$, so that from \eqref{eq:Bound-small-faces}, it must have $\Fs[(2)](G)>0$. We therefore see from \ref{lemma22} that either $G$ is a star, or it has a smaller $F_\alpha$ than star-like graphs. A dominant graph with $n>\frac \alpha{\alpha - 1}$ must therefore be a star. If $n=\frac \alpha{\alpha - 1}$, both cases are possible: a graph $G$ in ${\mathbb{G}}_c(n)$ cannot have both $\Fd[(1)](G)=0$ and $\Fs[(2)](G)=0$, and we have shown that either it is a tree, or it is a star, or it is not dominant. \subsection{The large $R\sim N^\alpha$ regime with $\alpha\le 1$} \label{sub:AlphaSmall1} \subsubsection{Results} In the previous section, we have shown that for $1<\alpha<2$, the dominant graphs were tree-like graphs for $n\le \frac\alpha{\alpha - 1}$, and star-like for $n\ge \frac\alpha{\alpha - 1}$. The dominant free-energy \eqref{eq:a-bet-12} consists of a polynomial of order $ \frac\alpha{\alpha - 1}$ corresponding to the tree-like graphs, and a series remainder corresponding to the star-like graphs. When $\alpha$ approaches 1, the degree of the polynomial part grows, and we expect it to eventually take over the full series as $\alpha\rightarrow 1^+$. In this section, we show that this is indeed the case, and that tree-like graphs are actually dominant in the full domain $0<\alpha\le 1$. \bigskip To summarize, {\it in the large $R\sim N^\alpha$ regime with $\alpha\le 1$, dominant graphs are the tree-like graphs}, and the partition function, free energy, and two-point functions are given by those for finite $R$ and large $N$. \bigskip Note that although we do not know of any statistical physics interpretation of such a regime, from the point of view of the Feynman graph expansion of random coupling vector models \eqref{eq:random-coup-2} of large size $N$, this shows the robustness of the dominance of melonic graphs \cite{melons1, melons2, melons3} (Appendix \ref{app:melo}), since it remains valid when the number of replicas $R$ is large, but not larger than the size $N$ of the system. \subsubsection{Proof} We can adapt the proof of the previous section to this case, with a few modifications. It is again a recursion on $n$, initiated at $n=1$, for which we already know that the property holds. \bigskip {\noindent \bf Another bound on the number of small faces. }Again, we will need a lower bound similar to \eqref{eq:Bound-small-faces} for the case where $\alpha \le 1$. We show that \begin{equation} \label{eq:Bound-small-faces-2} G \in {\mathbb{G}}_c(n)\text{ is dominant}\quad \Rightarrow \quad\Fd[(1)](G) + \alpha \Fs[(2)](G) \ge 2(\alpha + 1) + n(1 - \alpha ), \end{equation} where we recall that $\Fd[(l)]$ (resp.~$\Fs[(l)]$) is the number of dotted (resp.~solid) faces incident to $l$ interactions (counted with multiplicity). To prove this bound, we just use the lower bound \eqref{eq:small-bound-Falpha} on $F_\alpha$. Since we know that $F_\alpha^{{\text{dom}}} \ge 1+\alpha + 2n$, a dominant graph $G$ must satisfy $$ \Fd[(1)](G) + \alpha \Fs[(2)](G) \ge 2(1+\alpha+2n) - n(3 + \alpha), $$ which simplifies to \eqref{eq:Bound-small-faces-2}. \ {\noindent \bf Dominant graphs have no solid faces with two trivalent solid nodes. }Let us consider a graph $G$ in ${\mathbb{G}}_c(n)$ and suppose that $\Fs[(2)]>0$. Then $G$ must contain a subgraph as in Fig.~\ref{fig:Two-solid-cases}.
For the case on the left of Fig.~\ref{fig:Two-solid-cases} (and possible crossings of the central thin edges), we perform the move \eqref{fig:Two-solid-first-move} in a way which respects the dotted faces. We obtain a graph $G'$ with the same number of dotted faces, and with one less solid face, so that $F_\alpha(G)=F_\alpha(G')+\alpha$. Using the induction hypothesis, $F_\alpha(G')\le 1+\alpha+2(n-1)$, so that $$F_\alpha(G)\le 1+\alpha+2n + ( \alpha - 2) < 1+\alpha+2n,$$ so that $G$ is not dominant. The case on the right of Fig.~\ref{fig:Two-solid-cases} is slightly more involved, since the dotted faces are not conserved when we perform the following move: \begin{equation} \label{fig:Two-solid-last-move} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-2.pdf}\end{array} \hspace{1cm}\rightarrow\hspace{1cm} \begin{array}{c}\includegraphics[scale=0.45]{Two-solid-move.pdf}\end{array}. \end{equation} Upon performing this move, we suppress a dotted face if the two vertical thin edges belong to different dotted faces, and if they belong to the same dotted face, we either create a dotted face or their number remains the same. In addition, a solid face is always suppressed. If $G''$ is the graph obtained after performing the move, we thus have $\Fd(G)=\Fd(G'') + \eta$, where $\eta\in\{-1, 0, 1\}$, and $\Fs(G)=\Fs(G'') +1$. Importantly, if $G''$ is a tree, all three dotted edges on the right of \eqref{fig:Two-solid-last-move} lie in different dotted faces, so that if $G''$ is a tree-like graph we must have $\Fd(G)=\Fd(G'') -1$. Using the recursion hypothesis, $F_\alpha(G'') \le 1+\alpha + 2(n-1)$, with equality iff $G''$ is a tree, in which case $F_\alpha(G) = 1+\alpha + 2(n-1) + \alpha - 1$. Otherwise, if $G''$ is not a tree, $F_\alpha(G'') < 1+\alpha + 2(n-1) $ and $F_\alpha(G) < 1+\alpha + 2n-2 + \alpha +1 $. In both cases, $$F_\alpha(G)< 1+\alpha + 2n +\alpha -1 < 1+\alpha + 2n,$$ so that $G$ is not dominant. In conclusion, under the induction hypothesis, if $\Fs[(2)](G)>0$ and $\alpha\le1$, $G$ is not dominant. Using the bound \eqref{eq:Bound-small-faces-2} on small faces, this implies that a dominant graph $G$ must satisfy $\Fd[(1)]>0$. \ {\noindent \bf Dominant graphs are tree-like graphs. } Since a dominant graph must contain a dotted face with a single dotted edge, it must include a subgraph as in Fig.~\ref{fig:One-dotted-cases}. We review the various cases; everything works as before, with fewer cases. Again the cases in the middle of Fig.~\ref{fig:One-dotted-cases} are excluded in a dominant graph. If we have a subgraph as on the left of Fig.~\ref{fig:One-dotted-cases}, performing the move \eqref{fig:One-dotted-last-move} yields a graph $G'$ with one less interaction and such that $\Fd(G) = \Fd(G') + 2$ and $\Fs(G) = \Fs(G')$. Either $G'$ is a tree-like graph, in which case $G$ is a tree-like graph, or using the recursion hypothesis, $F_\alpha(G')<1+\alpha + 2(n-1)$, so that $F_\alpha(G) < 1+\alpha + 2n$, in which case $G$ is not dominant. If we have a subgraph as on the right of Fig.~\ref{fig:One-dotted-cases}, we perform the move \eqref{fig:One-dotted-move}. If the move disconnects the graph into two graphs $G_1$ and $G_2$, we have $F_\alpha(G) = F_\alpha(G_1) + F_\alpha(G_2) - (1+\alpha)$.
Using the recursion hypothesis, either both $G_1$ and $G_2$ are tree-like graphs so that $G$ is also a tree-like graph, or one of the $G_i$ satisfies $F_\alpha(G_i) < 1+\alpha + 2n_i $, so that $$F_\alpha(G) < 2+2\alpha + 2(n_1 + n_2) - (1+\alpha) = 1+\alpha + 2(n_1 + n_2),$$ so that $G$ is not dominant. If on the other hand the graph stays connected when exchanging the thin lines as in \eqref{fig:One-dotted-move}, as before we obtain a graph $G''$ with $\Fd(G) = \Fd(G'') - 1$ and either $\Fs(G) = \Fs(G'')$ or $\Fs(G) = \Fs(G'') - 1$, depending on whether the solid face splits or not. In any case, $F_\alpha(G)<F_\alpha(G'')$, so that $G$ is not dominant. This concludes the proof. \section{Describing the model in a convergent series} \label{sec:convergent} The expansion in $\lambda$ of the partition function $Z_{N,R}(\lambda,k)$ defined in \eq{eq:integral} does not give a convergent series in general, because there exists an essential singularity at $\lambda=0$. This is obvious from the form of \eq{eq:integral}, because the integral diverges for $\lambda<0$. In fact, the situation can explicitly be checked in the exactly solvable case $R=1$, as we will see in Section~\ref{sec:Req1}. The reason why we obtained convergent results in Section~\ref{sec:graph} is that we summed up only the dominant graphs\footnote{This is usually the case for vector, matrix and tensor models.}. In this section, to have more control over the situation, we will divide the integral over $\phi_a^i$ into its angular and radial parts. We will see that the angular part admits a convergent series expansion in $\lambda$, whose coefficients are expressed in terms of the coefficients $z_n$ of the Feynman diagrammatic expansions of Section~\ref{sec:graph}. On the other hand, the radial part will be treated differently, by an explicit integration. We will finally apply our results to discuss the integrability of the wave function of a toy model \cite{Obster:2017pdq} closely related to the tensor model in the Hamilton formalism introduced in \cite{Sasakura:2011sq,Sasakura:2012fb}. \subsection{Dividing the integration into the angular and radial parts} \label{sec:probability} Let us break $\phi_{a}^i$ into the radial part $r$, defined by $r^2:=\mathrm{Tr}\phi \phi^t$, and the angular part $\tilde \phi_a^i:=\phi_a^i/r$, which represents coordinates on a unit sphere, $S^{NR-1}$. Then, one can rewrite $Z_{N,R}$ in \eq{eq:integral} as \[ Z_{N,R}(\lambda,k) =\hbox{vol}\left(S^{NR-1}\right) \int_0^\infty dr\, r^{NR-1} f_{N,R}(\lambda r^6) e^{-k r^2}, \label{eq:zwithf} \] where \[ f_{N,R}(t):=\frac{1}{\hbox{vol}\left(S^{NR-1}\right)}\int_{S^{NR-1}}d\tilde \phi\ \exp\left(-t\, U(\tilde \phi) \right) \label{eq:defoff} \] with $U$ defined in \eq{eq:Interaction} and $\hbox{vol}\left(S^{NR-1}\right)$ denoting the volume of the unit sphere, $\int_{S^{NR-1}}d\tilde \phi$. For finite $N,R$ and for complex $t$, $f_{N,R}(t)$ is an entire function\footnote{i.e.~it is holomorphic at every finite point of $\mathbb{C}$.} because of the form of \eq{eq:defoff}, which is an integration of an exponential function of $t$ over a compact space.
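Although an exact evaluation of \eq{eq:defoff} is out of reach in general, the definition lends itself to a direct Monte Carlo estimate. The following minimal Python sketch (ours, for illustration only) samples uniform points on $S^{NR-1}$ by normalizing Gaussian vectors; for $R=1$ it can be compared against the exact result $f_{N,1}(t)=e^{-t}$ obtained in Section~\ref{sec:Req1}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def f_mc(N, R, t, samples=20000):
    # Monte Carlo estimate of f_{N,R}(t): average of exp(-t U) over
    # the unit sphere S^{NR-1}, U = sum_{ij} (sum_a phi_a^i phi_a^j)^3.
    total = 0.0
    for _ in range(samples):
        phi = rng.standard_normal((R, N))
        phi /= np.linalg.norm(phi)     # uniform point on S^{NR-1}
        M = phi @ phi.T                # M_ij = sum_a phi_a^i phi_a^j
        total += np.exp(-t * np.sum(M ** 3))
    return total / samples

# For R = 1, U = 1 identically, so f_{N,1}(t) = exp(-t) exactly.
print(f_mc(4, 1, 1.5), np.exp(-1.5))
\end{verbatim}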
As an entire function, $f_{N,R}(t)$ is differentiable over ${\mathbb{R}}$, and since \begin{equation} \label{eq:non-negative-interaction} U(\tilde \phi)= \sum_{i,j=1}^R\left(\sum_{a=1}^N \tilde \phi^i_a \tilde \phi^j_a\right)^3 =\sum_{a,b,c=1}^N \left(\sum_{i=1}^R \tilde \phi_a^i \tilde \phi_b^i\tilde \phi_c^i\right)\left(\sum_{j=1}^R \tilde \phi_a^j\tilde \phi_b^j\tilde \phi_c^j\right)\ge 0, \end{equation} we see that $f_{N,R}(t)$ is a monotonically decreasing positive function for finite $N,R$ and real $t$ with $f_{N,R}(0)=1$. Furthermore, as an entire function, the series expansion of $f_{N,R}$ in $t$ is convergent, with an infinite radius of convergence. In Section~\ref{sec:pertf}, we will express the coefficients of the series expansion of $f_{N,R}$ in terms of the coefficients $z_n$, which are computed using Feynman graphs as detailed in Section~\ref{sub:Feyn}. The angular part of the integration is contained in the expression \eq{eq:defoff} of $f_{N,R}$. This integration is over a compact space and independent of the variables $\lambda,k$, but it is still highly non-trivial. It is closely related to what appears in the $p$-spin spherical model \cite{pspin,pedestrians} for the spin glass, as the integration variables are constrained to be on a unit sphere. Therefore, the various techniques which have been developed for the understanding of this spin glass model could be applied to our model. Although we rather use diagrammatic expansions in the present paper, such applications would be of potential interest. To interpret $f_{N,R}$, we express it as the moment-generating function (or Laplace transform) \begin{equation} f_{N,R}(t)=\int_0^1 d\sigma\, \rho_{N,R}(\sigma) \exp(-t\, \sigma), \label{eq:relfrho} \end{equation} of the following probability density, \[ \rho_{N,R}(\sigma):=\frac{1}{\hbox{vol}\left(S^{NR-1}\right)}\int_{S^{NR-1}}d\tilde \phi\ \delta \left( \sigma- U(\tilde \phi) \right). \label{eq:defofrho} \] This quantity obviously satisfies $\rho_{N,R}(\sigma)\geq 0$ and $\int d\sigma\, \rho_{N,R}(\sigma)=1$, which justifies that it can indeed be regarded as a probability density over $\sigma$. Furthermore, we see from \eqref{eq:non-negative-interaction} and \[ U(\tilde \phi)= \sum_{i,j=1}^R\left(\sum_{a=1}^N\tilde \phi^i_a \tilde \phi_a^j\right)^3 \leq \sum_{i,j=1}^R \left(\sum_{a,b=1}^N \tilde \phi^i_a\tilde \phi_a^i \tilde \phi^j_b\tilde \phi^j_b \right)^\frac{3}{2}\leq \sum_{i,j=1}^R \sum_{a,b=1}^N\tilde \phi^i_a\tilde \phi^i_a \tilde \phi^j_b \tilde \phi^j_b=1, \label{eq:ineqforint} \] that the support of $\rho_{N,R}(\sigma)$ is included in $0\leq\sigma\leq1$. In \eqref{eq:ineqforint}, we have used the Cauchy-Schwarz inequality, $\sum_{a=1}^N \tilde \phi^i_a \tilde \phi^j_a\leq \left(\sum_{a,b=1}^N\tilde \phi^i_a \tilde \phi^i_a \,\tilde \phi^j_b\tilde \phi^j_b\right)^\frac{1}{2}$, and $0 \leq \sum_{a=1}^N \tilde\phi^i_a \tilde \phi^i_a\leq 1$. In terms of $\rho_{N,R}$, the partition function is expressed as \[ Z_{N,R}(\lambda,k)= \hbox{vol}\left(S^{NR-1}\right) \int_0^1 d\sigma \, \rho_{N,R}(\sigma) \int_0^\infty dr \, r^{NR-1} e^{- \lambda \sigma r^6-k r^2}. \label{eq:zwithrho} \] In this expression, the angular part $\rho_{N,R}$ and the radial part can be treated independently, and they are combined by the last integration over $\sigma$. As in the case of $f_{N,R}$, the angular integration for $\rho_{N,R}$ is highly non-trivial.
On the other hand, the integration over $r$ is rather straightforward, and one can obtain an explicit expression in terms of the generalized hypergeometric function $_1F_2$, as shown in Appendix~\ref{app:explicit}. An important property is that it has an essential singularity at $\lambda=0$ (as the partition function $Z_{N,R}(\lambda,k)$), which is consistent with the fact that the series expansion of $Z_{N,R}(\lambda,k)$ in $\lambda$ is not convergent. The probability density $\rho_{N,R}$ also has an interesting meaning in the context of tensor-rank decomposition (or CP-decomposition) in computer science \cite{SAPM:SAPM192761164,comon:hal-00923279,Landsberg2012}, which is an important technique for analyzing tensors representing data. This technique decomposes a tensor, say a symmetric tensor $Q_{abc}\ (a,b,c=1,2,\ldots,N)$, into a sum of rank-one tensors as \begin{equation} Q_{abc}=\sum_{i=1}^R \phi_a^i \phi_b^i \phi_c^i , \label{eq:tensordec} \end{equation} where $R$ is called the rank of $Q_{abc}$ (more precisely, for a given tensor, its rank is the smallest $R$ which realizes such a decomposition). It is not well understood how such rank-$R$ tensors are distributed in the space of all tensors, especially for real tensors. Since $\sum_{a,b,c=1}^N Q_{abc}Q_{abc}=U(\phi)$, the probability density $\rho_{N,R}$ in \eq{eq:defofrho} gives a part of such knowledge, namely, the size distributions of tensors with a certain rank under the normalization $\mathrm{Tr}\phi \phi^t=1$. \subsection{The series expansion of the angular part} \label{sec:pertf} In this subsection, we will compute the series expansion of $f_{N,R}(t)$ in $t$, which is guaranteed to have an infinite radius of convergence, as discussed in Section~\ref{sec:probability}. This could be applied to other models as well. By performing the Taylor expansion of $f_{N,R}(t)$ in \eq{eq:defoff} in $t$, one obtains \[ f_{N,R}(t)&= \sum_{n=0}^\infty (-t)^n C_{N,R}(n), \label{eq:expandf} \] where \[ C_{N,R}(n)&=\frac{(-1)^n}{n!} \left. \frac{d^n}{dt^n}f_{N,R}(t) \right|_{t=0} \nonumber \\ &=\frac{1}{n!} \frac{1}{\hbox{vol}\left(S^{NR-1}\right)} \int_{S^{NR-1}}d\tilde \phi\ \left( U(\tilde \phi) \right)^n. \] Here, exchanging the order of the derivative and the integration is allowed, since the integration is well behaved. For an arbitrary positive constant $\beta$, we have \[ C_{N,R}(n)&=\frac{1}{n!}\frac{ \int_{\mathbb{R}^{NR}} d\phi \left(\frac{U(\phi)}{\left(\mathrm{Tr}\phi \phi^t\right)^3 }\right)^n \exp \left( -\beta\mathrm{Tr}\phi \phi^t\right)}{\int_{\mathbb{R}^{NR}} d\phi \exp \left( -\beta \mathrm{Tr}\phi \phi^t\right)}. \label{eq:defofC} \] Indeed, introducing a radial direction by $\phi_a^i=\tilde \phi_a^i r$, we see that the integrations over $r$ cancel between the numerator and the denominator, so that $\beta$ is a dummy variable, which does not appear in the final expression of $C_{N,R}$. In particular, $\beta$ has nothing to do with the parameter $k$ in \eqref{eq:zwithf}.
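The scale invariance just described is easy to verify numerically: since $U(\phi)/(\mathrm{Tr}\phi\phi^t)^3$ is homogeneous of degree zero, a Monte Carlo estimate of \eq{eq:defofC} gives the same value for any $\beta$. A minimal Python sketch (ours, for illustration only):
\begin{verbatim}
import math
import numpy as np

rng = np.random.default_rng(1)

def C_mc(N, R, n, beta, samples=20000):
    # Monte Carlo estimate of eq. (defofC); the Gaussian weight
    # exp(-beta Tr phi phi^t) has component variance 1/(2 beta).
    total = 0.0
    for _ in range(samples):
        phi = rng.standard_normal((R, N)) / np.sqrt(2 * beta)
        M = phi @ phi.T
        total += (np.sum(M ** 3) / np.trace(M) ** 3) ** n
    return total / (samples * math.factorial(n))

# Same value (up to statistical error) for different dummy beta;
# for R = 1 the ratio is identically 1, so C_{N,1}(n) = 1/n!,
# consistently with the exact R = 1 result given below.
print(C_mc(3, 2, 2, beta=1.0), C_mc(3, 2, 2, beta=5.0))
\end{verbatim}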
The numerator in the last line of \eq{eq:defofC} has the following obvious properties: on one hand, by performing the rescaling $\phi\rightarrow \phi/\sqrt{\beta}$ we see that \[ \int_{\mathbb{R}^{NR}} d\phi & \left(\frac{U(\phi)}{\left(\mathrm{Tr}\phi \phi^t\right)^3 }\right)^n e^{ -\beta\mathrm{Tr}\phi \phi^t} = \beta^{-\frac{NR}2} A, \label{eq:A} \] where $A$ does not depend on $\beta$, while on the other hand, \[ (-1)^{3n}\frac{d^{3n}}{d\beta^{3n}} \int_{\mathbb{R}^{NR}} d\phi & \left(\frac{U(\phi)}{\left(\mathrm{Tr}\phi \phi^t\right)^3 }\right)^n e^{ -\beta\mathrm{Tr}\phi \phi^t} =\int_{\mathbb{R}^{NR}} d\phi \left(U(\phi)\right)^n e^{ -\beta\mathrm{Tr}\phi \phi^t}, \label{eq:derbeta} \] which is equal to $(\frac {\pi}{\beta})^{\frac{NR}2} n ! z_n(N,R,\beta)$, where $z_n({N,R,\beta})$ are the expansion coefficients of the partition function defined in \eq{eq:Part-Func-Exp-1}. Differentiating \eqref{eq:A}, we determine $A$ and obtain the following relation, \[ C_{N,R}(n)=\frac{\Gamma\left(\frac{NR}{2}\right)\beta^{3n}}{\Gamma\left(\frac{NR}{2}+3n\right)} \, z_n({N,R,\beta}). \label{eq:relofCandd} \] Since $z_n({N,R,\beta})=z'_n(N,R)/{(8\beta^3)^n}$ (see below \eq{eq:Part-Func-Exp-1}), the dummy parameter $\beta$ cancels out from the expression. The relation \eq{eq:relofCandd} provides a method for determining the series expansion of the angular part from the standard series expansion with the Feynman graphs. As for the $n$-dependence, \eq{eq:relofCandd} shows that $C_{N,R}(n)$ decays much faster than $z_n(N,R,\beta)$ in $n$. Therefore, in general, the series of $f_{N,R}$ converges much faster than that of the partition function. This is consistent with the argument made in Section~\ref{sec:probability} that $f_{N,R}$ is an entire function, while the partition function is not. \subsection{The $R=1$ example} \label{sec:Req1} The behaviors of $f_{N,R}$ and the partition function $Z_{N,R}$ mentioned in Section~\ref{sec:pertf} can explicitly be checked in the trivial solvable case with $R=1$.\footnote{This case corresponds to a one-vector model \cite{Nishigaki:1990sk,DiVecchia:1991vu} with a sixth-order interaction term.} In this case, $U(\tilde \phi)=(\mathrm{Tr}\tilde\phi \tilde\phi^t)^3=1$ identically from the normalization of $\tilde \phi$, and hence from \eq{eq:defofC}, we obtain \[ C_{N,1}(n)=\frac{1}{n!}. \label{eq:CforReq1} \] Then the series \eq{eq:expandf} can be summed up to \[ f_{N,1}(t)=e^{-t}. \] As mentioned in Section~\ref{sec:probability}, this is in fact an entire function of $t$. By putting it into \eq{eq:zwithf}, one obtains \[ Z_{N,1}(\lambda,k)=\hbox{vol}\left(S^{N-1}\right) \int_0^\infty dr\, r^{N-1} e^{-\lambda r^6-k r^2}. \label{eq:zreq1} \] This is of course equivalent to what one obtains by directly parametrizing $\phi_a^{i=1}$ with the radial and angular coordinates in the original expression \eq{eq:integral}, and integrating out the trivial angular part. The remaining integration over $r$ can be expressed using the hypergeometric function ${}_1F_2$ as derived in Appendix~\ref{app:explicit}. The result has an essential singularity at $\lambda=0$, as expected. This last statement can also be checked from the explicit form of $z_n$. We obtain \[ z_n(N,1,k)=\frac{\Gamma\left(\frac{N}{2}+3n\right)}{n!\ \Gamma\left(\frac{N}{2}\right)} k^{-3n}. \] This can be obtained from the relation \eq{eq:relofCandd} with $\beta=k$ and \eq{eq:CforReq1}, or even by directly computing the Gaussian integration in \eq{eq:Part-Func-Exp-1}.
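This closed form is also easy to check numerically: for $R=1$ one has $U(\phi)=(\mathrm{Tr}\phi\phi^t)^3$, so that $n!\,z_n$ reduces to the $6n$-th radial moment of the Gaussian weight. A minimal Python sketch (ours, for illustration only; parameter values are arbitrary):
\begin{verbatim}
import math
import numpy as np
from scipy.special import gamma

rng = np.random.default_rng(2)
N, n, k = 4, 2, 1.0

# Sampling with weight exp(-k |phi|^2): component variance 1/(2k).
phi = rng.standard_normal((200000, N)) / np.sqrt(2 * k)
mc = np.mean(np.sum(phi ** 2, axis=1) ** (3 * n)) / math.factorial(n)
exact = (gamma(N / 2 + 3 * n)
         / (math.factorial(n) * gamma(N / 2) * k ** (3 * n)))
print(mc, exact)   # agree up to a few percent of statistical error
\end{verbatim}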
The resulting series in $\lambda$ is divergent and its interpretation is not straightforward, as widely discussed in the literature on random vector models. As shown in this trivial $R=1$ example, it seems useful to divide the integration into the angular and radial directions for explicit evaluations of the partition function $Z_{N,R}(\lambda,k)$, rather than directly treating a highly divergent series in $\lambda$ with $z_{n}(N,R,k)$. \subsection{Application to a tensor model} \label{sec:limit} In this subsection, we will apply the results of the previous subsections to study the integrability of the wavefunction of the model introduced in \cite{Obster:2017pdq}. It is a toy model closely related to a tensor model in the Hamilton formalism, called the canonical tensor model \cite{Sasakura:2011sq,Sasakura:2012fb}, which is studied in a quantum gravity context. Let us consider the following wave function depending on a symmetric tensor $P_{abc}$ $(a,b,c=1,2,\ldots,N)$: \[ \psi(P):=\int_{\mathbb{R}^N} d\varphi \exp\left(I \sum_{a,b,c=1}^N P_{abc}\varphi_a\varphi_b\varphi_c +\left(I-\epsilon\right) \sum_{a=1}^N\varphi_a \varphi_a\right), \label{eq:psip} \] where $I$ denotes the imaginary unit $I^2=-1$, and $d\varphi:=\prod_{a=1}^N d\varphi_a$. For general real $P_{abc}$, the integral \eq{eq:psip} is oscillatory and is regularized by a small positive regularization parameter $\epsilon$ of the so-called Feynman prescription, in which the limit $\epsilon\rightarrow +0$ is taken at the end. In \cite{Obster:2017pdq}, it was argued and explicitly shown for some simple cases that the wave function \eq{eq:psip} has coherent peaks for some specific loci of $P_{abc}$ where $P_{abc}$ is invariant under Lie-group transformations (namely, $P_{abc}=h_a^{a'}h_b^{b'}h_c^{c'}P_{a'b'c'}$ for all $h\in H$ with a Lie-group representation $H$). In fact, a tensor model \cite{Ambjorn:1990ge,Sasakura:1990fs,Godfrey:1990dt} in the Hamilton formalism \cite{Sasakura:2011sq,Sasakura:2012fb} has a similar wave function $\tilde \psi(P)^R$ with a power $R$, where $\tilde \psi(P)$ is very similar to $\psi(P)$ \cite{Narain:2014cya}, and it was shown in \cite{Obster:2017dhx} that the wave function of this tensor model has similar coherent peaks. To consistently interpret this phenomenon as the preference for Lie-group symmetric configurations in the tensor model, we first have to show that we can apply the quantum mechanical probabilistic interpretation to the wave function, namely, the wave function must be absolute square integrable. This is a difficult question even for the toy wave function \eq{eq:psip}, since it has a complicated dependence on $P_{abc}$, mainly due to its oscillatory character. As a first step towards answering this question, in this paper we will study the behavior of the following quantity in $\kappa$: \[ g(N,R,\kappa):=\int_{\mathbb{R}^{\#P}} dP\ \exp\left( -\kappa \sum_{a,b,c=1}^N P_{abc}P_{abc} \right) \psi(P)^R, \label{eq:defofg} \] where $\#P:=N(N+1)(N+2)/6$ is the number of independent components of the symmetric tensor $P_{abc}$, and $dP:=\prod_{\genfrac{}{}{0pt}{3}{a,b,c=1}{a\leq b\leq c}}^N \sqrt{d_{abc}}\,dP_{abc}$ with a degeneracy factor, $d_{aaa}=1,\ d_{aab}=3$, and $d_{abc}=6$ for $a<b<c$. In the $\kappa\rightarrow +0$ limit, this quantity coincides with the integration of the wave function $\psi(P)^R$ over the whole space of $P_{abc}$. If this is finite in the limit, the wave function is integrable.
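The integration over $P_{abc}$ performed in the next step is Gaussian, and rests on the elementary one-dimensional identity (applied, after the rescaling $x=\sqrt{d_{abc}}\,P_{abc}$, to each of the $\#P$ independent components): \[ \int_{-\infty}^{\infty} dx\, e^{-\kappa x^{2} + I J x} = \sqrt{\frac{\pi}{\kappa}}\; e^{-\frac{J^{2}}{4\kappa}}, \] which produces the prefactor $(\pi/\kappa)^{\#P/2}$ and, since $\sum_{a,b,c}\bigl(\sum_i\phi_a^i\phi_b^i\phi_c^i\bigr)^2=U(\phi)$ by \eqref{eq:non-negative-interaction}, shifts the sextic coupling to $\lambda=1/(4\kappa)$.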
We may regard this as a toy case study towards proving the square-integrability of the wave function \cite{Narain:2014cya, Obster:2017dhx} in the canonical tensor model of \cite{Sasakura:2011sq,Sasakura:2012fb}. By putting \eq{eq:psip} into \eq{eq:defofg} and integrating over $P_{abc}$, we obtain \[ g(N,R,\kappa)&=\int_{\mathbb{R}^{\#P} }dP\int_{\mathbb{R}^{NR}} d\phi \exp \left( -\kappa \sum_{a,b,c=1}^N P_{abc}P_{abc}+I \sum_{i=1}^R \sum_{a,b,c=1}^N P_{abc}\phi_a^i\phi_b^i\phi_c^i \right. \nonumber \\ &\hspace{5cm}\left.+(I-\epsilon) \sum_{i=1}^R \sum_{a=1}^N\phi^i_a\phi^i_a\right) \nonumber \\ &=\left(\frac{\pi}{\kappa}\right)^{\frac{\#P}{2}} Z_{N,R}\left( \frac{1}{4 \kappa},-I+\epsilon\right), \label{eq:gtoz} \] where $Z_{N,R}$ is the partition function of our matrix model \eq{eq:integral}. By using \eq{eq:gtoz} and \eq{eq:zwithrho}, $g(N,R,\kappa)$ can also be expressed as \[ g(N,R,\kappa)=\hbox{vol}\left(S^{NR-1}\right) \left(\frac{\pi}{\kappa}\right)^{\frac{\#P}{2}} \int_0^1 d\sigma \, \rho_{N,R}(\sigma) \int_0^\infty dr \, r^{NR-1} e^{- \sigma r^6/(4\kappa)+(I-\epsilon) r^2}. \] From this expression, one can see that the most delicate region of the integration over $\sigma$ is located near the origin, since the integration over $r$ may need careful treatment in the large $r$ region for $\sigma\sim +0$. In addition, the $\sigma\sim +0$ region becomes more important as $\kappa$ is taken smaller, which is our case of interest in the $\kappa\rightarrow +0$ limit. Therefore, it is essential to determine the behavior of $\rho_{N,R}(\sigma)$ near the origin. In turn, from the relation \eq{eq:relfrho}, this is equivalent to determining the $t\rightarrow +\infty$ behavior of $f_{N,R}(t)$. This can also be seen directly from \[ g(N,R,\kappa) =\hbox{vol}\left(S^{NR-1}\right) \left(\frac{\pi}{\kappa}\right)^{\frac{\#P}{2}} \int_0^\infty dr\, r^{NR-1} f_{N,R}\left(\frac{r^6}{4 \kappa }\right) e^{(I-\epsilon) r^2}, \label{eq:gwithf} \] which can be obtained by expressing $g(N,R,\kappa)$ with $f_{N,R}$ using \eq{eq:zwithf}. While the toy model of \cite{Obster:2017pdq} allows any value of $R$, the tensor model \cite{Sasakura:2011sq,Sasakura:2012fb} uniquely requires $R=(N+2)(N+3)/2$ \cite{Obster:2017dhx,Sasakura:2013wza} for the hermiticity of the Hamiltonian.\footnote{The wave function of the tensor model is given by $\psi(P)^{\lambda_H/2}$ with $\lambda_H=(N+2)(N+3)/2$ \cite{Obster:2017dhx,Sasakura:2013wza}. Our interest in this paper is its square integrability, which is a toy case study for the absolute square integrability. Therefore, $R= \lambda_H$. } Because of the very similar form of the wave functions of the models, our interest is therefore especially in the regime $R \sim N^2$. The dominant graphs at large $N$ at each order in $\lambda$ for $R\sim N^\alpha$ have systematically been analyzed in Section~\ref{sub:AlphaLarg1}, and it has been found that there exists a transition region $1\leq \alpha \leq 2$, where the dominant graphs gradually change. This would imply that the dynamics of the model are largely different between the two regions $R\gtrsim N^2$ and $R \lesssim N$. Motivated by this fact, we compute $f_{N,R}$ through the relation \eq{eq:relofCandd} first by using the result in Section~\ref{sub:LargeRfiniteN}, which incorporates all the necklace graphs and is a valid approximation for large $R$ and finite $N$.
We will also comment on how our result will change if we take \eq{eq:allarger2} and \eq{eq:aleq2}, which come from the dominant graphs in large $N$ for $\alpha>2$ and $\alpha=2$, respectively. In the leading order of large $NR$, which includes the regime of large $R$ and finite $N$, and also all the other regimes with $R\sim N^\alpha$ discussed in Section~\ref{sec:graph}, the relation \eq{eq:relofCandd} becomes \begin{equation} C_{N,R}(n)_{leading}=\left( \frac{NR}{2}\right)^{-3n} \beta^{3n} \, z_n(N,R,\beta)_{leading}, \label{eq:cnrlead} \end{equation} where we have formally employed the following expansion in $1/NR$, \[ \frac{\Gamma\left(\frac{NR}{2}\right)}{\Gamma\left(\frac{NR}{2}+3n\right)} = \prod_{i=0}^{3n-1} \left( \frac{NR}{2}+i\right)^{-1}=\left( \frac{NR}{2}\right)^{-3n}\left(1+O\left((NR)^{-1}\right) \right), \label{eq:formal} \] and $z_n(N,R,\beta)_{leading}$ denotes the leading order of the coefficient $z_n(N,R,\beta)$ in any of the regimes discussed in Section~\ref{sec:graph}. Note that here we have assumed the existence of a $1/R$ expansion or other expansions with $R\sim N^\alpha$ for $C_{N,R}(n)$, or $f_{N,R}$, to employ the formal expansion in $1/NR$ irrespective of the value of $n$ in \eq{eq:formal}. In the regime of large $R$, by using the result from Section~\ref{sub:LargeRfiniteN} one obtains \[ f_{N,R}(t)_{leading}&=\sum_{n=0}^\infty (-t)^n C_{N,R}(n)_{leading} \nonumber \\ &=\sum_{n=0}^\infty \left(-\frac{8\beta^3 t}{N^3R^3}\right)^n z_n(N,R,\beta)_{leading} \nonumber \\ &=\left( 1+ \frac{6(N+4)t}{N^3 R^2}\right)^{-\frac{N}{2}} \left( 1+ \frac{12t}{N^3 R^2} \right)^{-\frac{N(N+4)(N-1)}{12}}, \label{eq:fleading} \] where we have put \eq{eq:cnrlead} into \eq{eq:expandf} and have used \eq{eq:resultsum}. As can be seen in \eq{eq:fleading}, $f_{N,R}(t)_{leading}$ has an interesting scaling property at large $t$, which will become important in the analysis below. Let us check \eq{eq:fleading} from the point of view of the expected properties of $f_{N,R}(t)$. As explained in Section~\ref{sec:probability}, $f_{N,R}(t)$ should be a monotonically decreasing function for real $t$ with $f_{N,R}(0)=1$. This is satisfied by \eq{eq:fleading} in the region $t\geq 0$, which is the integration region for the computation of $g(N,R,\kappa)$ as in \eq{eq:gwithf}. On the other hand, $f_{N,R}(t)$ should be an entire function of $t$ as explained in Section~\ref{sec:probability}. This is not satisfied by \eq{eq:fleading}, as there exist singular points at $t\sim -N^2R^2,\ -N^3 R^2$. Considering the fact that we are discussing the large-$R$ regime, the singular points can be regarded as being far away from the integration region $t\geq 0$. However, for $f_{N,R}(t)$ to be an entire function, these singularities in \eq{eq:fleading} must be canceled by the sub-leading corrections of $f_{N,R}(t)$ to \eq{eq:fleading}. This means that the sub-leading corrections may also be a series whose radius of convergence is of the order of $N^2 R^2$. Therefore, $f_{N,R}(t)$ may get some important corrections at $t\gtrsim N^2 R^2$ from such sub-leading contributions (see also Section~\ref{sec:discussionlimit}). Although this calls for caution in using \eq{eq:fleading}, we will use it beyond this limit in the computation below, as the leading-order expression of the entire function $f_{N,R}(t)$.
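As a rough numerical illustration, \eq{eq:fleading} can be compared with the Monte Carlo sketch given in Section~\ref{sec:probability} (reusing the function \texttt{f\_mc} defined there). The agreement can only be approximate at moderate sizes, since \eq{eq:fleading} is a leading-order expression:
\begin{verbatim}
# Reuses f_mc from the earlier sketch; only approximate agreement is
# expected, since eq. (fleading) is a large-R leading-order formula.
N, R, t = 3, 40, 50.0
lead = ((1 + 6 * (N + 4) * t / (N**3 * R**2)) ** (-N / 2)
        * (1 + 12 * t / (N**3 * R**2)) ** (-N * (N + 4) * (N - 1) / 12))
print(f_mc(N, R, t), lead)
\end{verbatim}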
By putting \eq{eq:fleading} into \eq{eq:gwithf}, we obtain \[ \begin{split} &g(N,R,\kappa)_{leading}\\ &\ \ =\hbox{vol}\left(S^{NR-1}\right) \left(\frac{\pi}{\kappa}\right)^{\frac{\#P}{2}} \\ &\ \ \ \ \ \times \int_0^\infty dr\, r^{NR-1} \left( 1+ \frac{3(N+4)r^6}{2\kappa N^3 R^2}\right)^{-\frac{N}{2}} \left( 1+ \frac{3 r^6}{\kappa N^3 R^2} \right)^{-\frac{N(N+4)(N-1)}{12}}e^{(I-\epsilon) r^2}. \end{split} \] From now on, let us concentrate only on the behavior in $\kappa$. By the change of variable, $r\rightarrow \kappa^{1/6}r$, we obtain \[ &g(N,R,\kappa)_{leading} \nonumber \\ &\ \ \propto \kappa^{-\frac{\#P}{2}+\frac{NR}{6}} \int_0^\infty dr\, r^{NR-1} \left( 1+ \frac{3(N+4)r^6}{2N^3 R^2}\right)^{-\frac{N}{2}} \left( 1+ \frac{3 r^6}{ N^3 R^2} \right)^{-\frac{N(N+4)(N-1)}{12}}e^{(I \kappa^{1/3}-\epsilon)r^2}. \label{eq:gwithfresult} \] We now divide further discussions into the following three cases. \noindent (i) $R < (N+1)(N+2)/2$\\ \noindent In this case, the integration in \eq{eq:gwithfresult} converges to a finite non-zero value in the $\kappa\rightarrow +0$ limit, because the modulus of the integrand decays fast enough in $r$. Therefore, the behavior of $g(N,R,\kappa)_{leading}$ is determined by the factor in front. By putting $\#P=N(N+1)(N+2)/6$ (see below \eq{eq:defofg}), we obtain \[ g(N,R,\kappa)_{leading}\sim \kappa^{\frac{N}{6}\left(R-\frac{(N+1)(N+2)}{2}\right)}. \label{eq:gcasei} \] Therefore it diverges in the limit $\kappa\rightarrow +0$. \noindent (ii) $R > (N+1)(N+2)/2$\\ \noindent In this case, since the integrand in \eq{eq:gwithfresult} is oscillatory and has a modulus diverging as $r\rightarrow +\infty$, the $\kappa\rightarrow +0$ limit has to be taken in a careful manner. For $\kappa\sim+0$, the integral is dominated by the large $r$ region. Therefore, the behavior of $g(N,R,\kappa)_{leading}$ in $\kappa\sim+0$ is given by \[ g(N,R,\kappa)_{leading}&\sim \kappa^{-\frac{\#P}{2}+\frac{NR}{6}} \int^\infty dr\, r^{\gamma-1} e^{I \kappa^{1/3}r^2-\epsilon r^2}\nonumber \\ &\sim \kappa^{-\frac{\#P}{2}+\frac{NR}{6}} \left(\epsilon-I \kappa^{\frac{1}{3}}\right)^{ -\frac{\gamma}{2}}, \] where $\gamma=NR-N(N+1)(N+2)/2$. By taking the $\epsilon\rightarrow +0$ limit, we find that \[ g(N,R,\kappa)_{leading}\sim\kappa^{0}. \label{eq:caseii} \] Therefore $g(N,R,\kappa)$ converges to a finite value in the $\kappa\rightarrow +0$ limit. \noindent (iii) $R = (N+1)(N+2)/2$\\ With a slight modification of the discussion in case (ii), we obtain \[ g(N,R,\kappa)_{leading}\sim \log(\kappa). \label{eq:caseiii} \] Therefore it diverges logarithmically. Combining the three cases above, we see that the behavior of $g(N,R,\kappa)$ in the $\kappa\rightarrow +0$ limit has a transition at $R=R_c:=(N+1)(N+2)/2$, where it is finite for $R>R_c$ and diverging for $R\leq R_c$. However, this result may be modified by corrections. As analyzed in Section~\ref{sub:AlphaLarg1}, there exists a transition region $R\sim N^{\alpha}\, (1\leq \alpha \leq 2)$, and $R\sim R_c\sim N^2$ is at the edge of the transition region. Therefore, $f_{N,R}(t)$ may have some important corrections at $R\sim R_c$ in addition to $f_{N,R}(t)_{leading}$. We also commented above that $f_{N,R}(t)$ may have some important corrections at $t\gtrsim N^2 R^2$. In fact, case (ii) receives its main contributions from the large-$t$ region if $\kappa$ is taken small. Therefore, unless $\kappa$ is kept finite for $R\gg R_c$ in case (ii), the results above must be taken with caution.
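In all three cases, the power of $\kappa$ is controlled by the prefactor exponent in \eq{eq:gwithfresult}, which can be rewritten as \[ -\frac{\#P}{2}+\frac{NR}{6} \;=\; -\frac{N(N+1)(N+2)}{12}+\frac{NR}{6} \;=\; \frac{N}{6}\left(R-\frac{(N+1)(N+2)}{2}\right), \] vanishing exactly at $R=R_c$; this makes the location of the transition manifest.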
Let us see the subtleties more concretely by using our results, \eq{eq:allarger2} and \eq{eq:aleq2}, which come from the analysis of the dominant graphs for $R\sim N^\alpha$ in large $N$ with $\alpha>2$ and $\alpha=2$, respectively. By applying the relation \eq{eq:relofCandd} as before, we obtain $f_{N,R}(t)_{leading}=\exp({\cal F}_{N,R}^{\rm dom}(\lambda,k))$ with the replacement $\lambda/k^3\rightarrow 8 t/N^3R^3$. In either of the cases \eq{eq:allarger2} and \eq{eq:aleq2}, $f_{N,R}(t)_{leading}$ at large $t$ has a divergence coming from the second term, which violates the monotonically decreasing property of $f_{N,R}(t)$ discussed in Section~\ref{sec:probability}. Therefore we encounter a maximum value of $t$, beyond which the expression of $f_{N,R}(t)_{leading}$ cannot be correct. Then, the integration over $r$ cannot be extended to infinity, which is problematic when taking the $\kappa\rightarrow +0$ limit in cases (ii) and (iii), since the main contribution to the integration comes from the large-$r$ region. From these discussions, to obtain a more concrete statement in the $\kappa\rightarrow +0$ limit, we would need to check the sub-leading corrections to $f_{N,R}(t)$. On the other hand, we would be able to say that the present result is in favor of, or does not contradict, the consistent quantum probabilistic interpretation of the wave function of the tensor model \cite{Obster:2017dhx}, because the tensor model requires $R=(N+2)(N+3)/2>R_c$, and $g(N,R,\kappa)_{leading}$ is finite in the $\kappa\rightarrow +0$ limit at least in the computation above. To improve the statement, in addition to computing the sub-leading corrections, one should consider the absolute value $|\psi(P)|^R$ as the integrand rather than the present $\psi(P)^R$ in \eq{eq:defofg}, since the former is positive semi-definite, as required for a probability distribution, while the latter is not. Integrating over $P_{abc}$ in the former case leads to a more involved form than \eq{eq:integral}, and analyzing it is left for future study. \section{Summary and future prospects} In this paper, we considered a random matrix model with pairwise and non-pairwise contracted indices. We analyzed the model in various regimes of the relative size of $N$ and $R$, the dimensions of the pairwise and non-pairwise contracted indices, respectively. We used Feynman diagrammatic expansions for the analysis, and have shown a transition of dominant graphs between tree-like ones at large $N$ and loop-like ones at large $R$. As a specific application, we used our results to study the integrability of the wave function of the model introduced in \cite{Obster:2017pdq} as a toy model for a tensor model \cite{Ambjorn:1990ge,Sasakura:1990fs,Godfrey:1990dt} in the Hamilton formalism \cite{Sasakura:2011sq,Sasakura:2012fb}. The result seems to be in favor of the consistency of the quantum probabilistic interpretation of the tensor model. More precisely, in the regimes where $N$ is large but $R$ is finite, or where $R\sim N^\alpha$ with $\alpha\le 1$, which includes the case of square matrices $\alpha=1$, we have shown the dominance of the tree-like graphs, which are the dominant graphs for the $\phi^6$ vector model. As explained in Appendix~\ref{app:melo}, this tree-like family corresponds to the family of melonic graphs for the equivalent replicated vector model with random tensor couplings (or analogs with a time-dependence).
This shows the robustness of the dominance of the tree-like graphs for matrix models with non-pairwise contracted indices (such tree-like behaviors are also found for non-square one-matrix models) or of the dominance of melonic graphs for replicated random coupling vector models, when the number of replicas does not exceed the size of the system. In the regime where $R\sim N^\alpha$ with $\alpha>1$, we have shown that the dominant graphs exhibit a very interesting behavior: tree-like graphs dominate for graphs with $\alpha/(\alpha - 1)$ or fewer interactions, while a family of very ordered star-like graphs dominates for graphs with more interactions. In a sense, the small dominant graphs exhibit ``more disorder'' than the larger ones. The contribution of the tree-like graphs is a truncation of the usual vector-model free-energy, and the contribution of the star-like graphs is roughly a logarithm. The value of $\alpha$ thus provides a parameter to tune the value at which the vector-model free-energy is truncated, with the remainder replaced by the expansion of a logarithm. While such an interesting behavior is not known to the authors to exist in other vector, matrix, or tensor models, it is not possible to rescale the coupling constants to obtain a finite limit for the free-energy that involves both families, precisely because of the different behaviors in $N$ and $R$ of the tree-like and star-like graphs. It would be interesting to see if a similar situation occurs for dominant graphs for other models with non-pairwise contracted indices. Our first guess is that we could have such a competition between necklace-like graphs and tree-like graphs whenever an odd number of indices are contracted together, while trees could dominate for any $\alpha$ for an even number of contracted indices. However, the precise situation when $\alpha$ takes values in ${\mathbb{R}}^+$ should be investigated. It should be stressed that we have not concluded in the present paper that the wave function of the model introduced in \cite{Obster:2017pdq} is integrable in the region $R\sim N^2$, which is the region of interest for the tensor model of \cite{Sasakura:2011sq,Sasakura:2012fb}. This is due to some limitations of our approximation in the regime $R\sim N^2$, which comes from the condition $R=(N+2)(N+3)/2$ required for the consistency \cite{Sasakura:2013wza} of the tensor model of \cite{Sasakura:2011sq,Sasakura:2012fb}. Therefore an obviously important question about our results is how they would change by improving the approximations. This could be done by including higher-order Feynman graphs along the present lines, or by employing the various techniques developed for the analysis of spin glass models, given the similarities between our model and the $p$-spin spherical model \cite{pspin,pedestrians}. In particular, it is important to obtain more accurately the behavior of the moment-generating function $f_{N,R}(t)$, especially in the $t\rightarrow \infty$ limit, because it essentially determines whether the wave function of the toy model \cite{Obster:2017pdq} related to the tensor model is integrable or not. It is also important to deal with the actual wave function of the tensor model, which has the same interaction term, but for which the Gaussian part is replaced by a product of Airy functions \cite{Narain:2014cya,Obster:2017dhx}. An interesting future question is to find a way to take a large $N$ limit which would keep the non-trivial characteristics of this model.
We have observed that there is a crossover between the tree-like and the star-like graphs as the parameter $\alpha$ of $R\sim N^\alpha$ is varied. However, as pointed out in the text, it is not possible at the level of the Feynman graph series expansion to take a large $N$ limit which keeps both the tree and star graphs in an interesting manner. On the other hand, a close look at \eq{eq:freeenergylargeR} suggests an interesting scenario. To see this, let us consider ${\cal F}_{N,R}(\lambda,k)/RN$, which is the free energy per degree of freedom, and take a large $N$ limit with $\lambda/k^3 \sim N^{-2}$ and $R\sim N^2$. The former scaling of the coupling constants ensures that the tree-like graphs give a non-trivial contribution to ${\cal F}_{N,R}/RN$ at large $N$. Assuming that the contribution to ${\cal F}_{N,R}/RN$ of the star graphs actually vanishes in this limit\footnote{As mentioned in the text, we cannot rely on the logarithm expression for the first term in \eq{eq:freeenergylargeR} to draw conclusions in this regime on the relative contributions of the star-like graphs with respect to tree-like graphs or other necklace graphs, as the series for star-like graphs is highly divergent with these choices for the coupling constants, while the series for trees and necklaces are convergent. }, the second term, which contains the other {\rm necklace} graphs, remains finite and non-trivial. Note that this scenario does not contradict our analysis in the text, since an order-by-order analysis of the dominant graphs is not necessarily related to the dynamics of the model in general. At the present stage, this scenario is no more than a speculation, since we presently do not understand well the dynamics at $R\sim N^2$. \vspace{0.8cm} \section*{Acknowledgements} The work of N.S. is supported in part by JSPS KAKENHI Grant No.15K05050. L.L. is a JSPS international research fellow. N.S. would like to thank S.~Nishigaki for a brief communication, and G.~Narain for some inspiring comments. L.L. would like to thank V.~Bonzom for useful discussions, as well as L.~Cugliandolo for useful references.
\section{Introduction} \label{sec:Introduction} Physical Layer Security (PLS) schemes have emerged as important tools to enhance confidentiality in wireless systems, adding an extra security layer on top of traditional cryptography by exploiting the fluctuations of the wireless channel~\cite{Bloch}. Among other strategies, multiple antenna (MIMO) schemes have received great interest in the PLS scenario~\cite{Alves.2012, Yang.2013, yang.13b, yang.15}. For instance, the authors in~\cite{Alves.2012} show that Transmit Antenna Selection (TAS) at the legitimate transmitter (Alice) can enhance the secrecy outage probability (SOP) performance with lower cost, complexity and power consumption in comparison with other MIMO strategies, while different combinations of TAS with Maximal Ratio Combining (MRC) or Selection Combining (SC) at the legitimate receiver (Bob) and the eavesdropper (Eve) are investigated in~\cite{Yang.2013, yang.13b, yang.15}. Since only the index of the antenna with the best channel condition is fed back to Alice using TAS, the diversity increases with respect to Bob, but not with respect to Eve, improving security~\cite{Alves.2012}. At Bob, MRC is the optimal strategy in terms of secrecy~\cite{Yang.2013}. Cooperative protocols have also been intensively investigated in the PLS scenario. Initial studies of the secrecy capacity of cooperative communications have been presented in~\cite{Lai}. Typical cooperative methods employed in the context of PLS are the Decode-and-Forward (DF) and the Artificial-Noise (AN) schemes. The DF protocol combined with TAS at Alice has been analyzed, \emph{e.g.}, in~\cite{Lin.2016} under an outdated channel state information (CSI) feedback assumption, showing that the increase of the number of antennas at the legitimate nodes significantly reduces the SOP, even with outdated CSI. Recently, the SOP of the AN scheme has been investigated in~\cite{Benevides.2016}, while a secure energy efficiency (SEE) metric is employed in~\cite{Hu.2016, Jamil.2016}, in which AN is shown to considerably improve the SEE and the SOP when compared to a non-cooperative scheme~\cite{Hu.2016}. Additionally, secure relay and jammer selection using AN against multiple eavesdroppers is studied in~\cite{Hui.2015}. Moreover, the DF, Amplify-and-Forward (AF) and AN schemes were comparatively investigated, considering single-antenna devices and a partial security regime, in~\cite{Jamil.2016}, where a trade-off between SOP and SEE is established, so that the SEE can be considerably increased by relaxing the security requirement. In this paper we consider a cooperative MIMO scenario in which Alice employs TAS, due to its good performance, low cost and reduced feedback requirement, while Bob and Eve employ MRC due to its optimality and manageable cost. Moreover, we assume that the relay either employs jamming, designing a beamforming vector so that the AN affects Eve without interfering at Bob, or cooperates using a CSI-aided DF (CSI-DF) protocol, in which the legitimate nodes use the available CSI to choose between direct and cooperative paths. In addition, we also employ power and rate allocation aiming at maximizing the SEE, subject to a maximum SOP constraint. Our results show that CSI-DF outperforms AN in most scenarios, except when Eve is closer to the relay or when the number of antennas at Eve is sufficiently large. Finally, we show that there is an optimum number of antennas at the relay and Bob to maximize the SEE due to the trade-off between secure throughput and power consumption.
\section{System Model} \label{sec:Preliminaries} We consider Alice ($\text{A}$), Bob ($\text{B}$) and a relay node ($\text{R}$) communicating in the presence of Eve ($\text{E}$), each one equipped with $n_{\text{A}}$, $n_{\text{B}}$, $n_{\text{R}}$ and $n_{\text{E}}$ antennas, respectively. Assuming TAS at the transmitters $i \in \{\text{A}, \text{R}\}$, the frame received by the $k$-th antenna at any node $j \in \{\text{R}, \text{B}, \text{E}\}$, $i \neq j$, is given by \begin{equation} \mathbf{y}_{ij} = \sqrt{\kappa_{ij} P_i} \, h_{ij} \, \mathbf{x} + \mathbf{w}_{j}, \label{ReceivedFrame} \end{equation} where $P_i$ is the transmit power of node $i$, $\mathbf{x}$ is the transmitted symbol vector with unit average energy, $\mathbf{w}_{j}$ is the zero-mean complex Gaussian noise vector with variance $N_{0}/2$ per dimension and $h_{ij}$ is the quasi-static channel realization, with zero-mean and unit-variance complex Gaussian distributed elements. Moreover, $\kappa_{ij}=\frac{G}{(4 \pi f_\text{c}/c)^2\, d_{ij}^{\upsilon}\, M_\text{l}\, N_\text{f}}$ is the path loss, where $G$ is the total antenna gain, $f_\text{c}$ is the carrier frequency, $c$ is the speed of light, $d_{ij}$ is the distance between $i$ and $j$, $\upsilon$ is the path-loss exponent, $M_\text{l}$ is the link margin and $N_\text{f}$ is the noise figure at the receiver~\cite{Goldsmith}. In the employed TAS scheme, Bob informs the index of the best transmit antenna through an open feedback channel. The selected antenna at Alice is a random event from the point of view of the relay and Eve, so that TAS gives them no diversity gain. Thus, following~\cite{Yang.2013}, the signal-to-noise ratio (SNR) at the receiving nodes follows one of two probability density functions (PDFs): \begin{equation} f_{i\text{B}}^{\text{TAS/MRC}}(\gamma) = \frac{n_{i}\,\gamma^{n_{\text{B}}-1}\,e^{-\frac{\gamma}{\bar\gamma}}}{\Gamma(n_{\text{B}})\,{\bar\gamma}^{n_{\text{B}}}}\, \left(1-e^{-\frac{\gamma}{\bar\gamma}}\,\sum_{k=0}^{n_{\text{B}}-1} \frac{\gamma^{k}}{k!\bar\gamma^{k}}\right)^{n_{i}-1} \label{PDF_TAS_MRC} \end{equation}at Bob, due to the combination of TAS and MRC, and \begin{equation} f_{ij}^{\text{MRC}}(\gamma) = \frac{\gamma^{n_{j}-1}\,e^{-\frac{\gamma}{\bar\gamma}}}{\Gamma(n_{j})\,{\bar\gamma}^{n_{j}}} \label{PDF_MRC} \end{equation}at the relay and Eve due to MRC only, where $\Gamma(.)$ is the complete gamma function~\cite[\textsection 8.339.1]{gradshteyn.2007}, $i \in \{\text{A}, \text{R}\}$, $j \in \{\text{R}, \text{E}\}$, with $i \neq j$. For the latter, we also write its cumulative distribution function (CDF) as~\cite{Lin.2016} \begin{equation} F_{ij}^{\text{MRC}}(\gamma) = 1 - e^{-\frac{\gamma}{\bar\gamma}}\,\sum_{w=0}^{n_{j}-1} \left(\frac{1}{w!}\right)\left(\frac{\gamma}{\bar\gamma}\right)^w. \label{CDF_MRC} \end{equation} Moreover, $\bar{\gamma}_{ij} = \frac{\kappa_{ij} P_i}{N_0 B}$ is the average SNR of link $ij$ (written simply as $\bar\gamma$ in the expressions above for compactness), with $B$ being the system bandwidth, while $\gamma_{ij}$ represents the instantaneous SNR taking the fading realization into account. \section{Cooperative Schemes} \label{subsection:CooperativeSchemes} In this section we derive the SOP, defined by~\cite{Bloch} as $p_{\text{out}}^\text{(sch)} = \Pr\left\{C_{\text{B}}-C_{\text{E}}< \mathcal{R}\right\}$, where $\text{sch} \in \{\text{CSI-DF}, \text{AN}\}$, $C_{\text{B}}$ and $C_{\text{E}}$ represent the instantaneous capacities of the legitimate and Eve's channels, respectively, and $\mathcal{R}$ is the secrecy rate.
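As a quick sanity check of the densities above, the following minimal Python sketch (our addition; the common average SNR $\bar\gamma=5$, $n_\text{A}=2$ and $n_\text{B}=3$ are purely illustrative) draws Rayleigh-faded branch SNRs, applies MRC per candidate transmit antenna and TAS over the MRC outputs, and compares the empirical histogram against~\eqref{PDF_TAS_MRC}.
\begin{verbatim}
# Monte Carlo check of the TAS/MRC density (illustrative parameters).
import numpy as np
from math import factorial

n_A, n_B, gbar, trials = 2, 3, 5.0, 200000
rng = np.random.default_rng(0)

# Branch SNRs are exponential with mean gbar; MRC sums n_B branches per
# transmit antenna, and TAS keeps the best transmit antenna.
snr_tas = (gbar * rng.gamma(n_B, 1.0, (trials, n_A))).max(axis=1)

def pdf_tas_mrc(g):
    cdf = 1 - np.exp(-g / gbar) * sum((g / gbar) ** k / factorial(k)
                                      for k in range(n_B))
    return (n_A * g ** (n_B - 1) * np.exp(-g / gbar)
            / (factorial(n_B - 1) * gbar ** n_B) * cdf ** (n_A - 1))

hist, edges = np.histogram(snr_tas, bins=60, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - pdf_tas_mrc(mid))))  # small residual expected
\end{verbatim}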
\subsection{CSI-Aided Decode-and-Forward (CSI-DF)} Usually, a given channel realization is estimated using a sequence of training symbols sent by the transmitter. Then, the receiver decomposes the obtained channel information $\mathbf{h}$ into the channel direction information (CDI), denoted by $\frac{\mathbf{h}}{||\mathbf{h}||}$, and the channel gain information (CGI), denoted by $||\mathbf{h}||$~\cite{zhang.14, zhang.15}. The CGI is real and positive, so that it can be efficiently quantized with a small number of bits~\cite{yoo.07}, while the CDI is a complex vector with the same number of dimensions as the number of receive antennas~\cite{zhang.14, zhang.15}, and is thus costly to quantize. Since we employ TAS at the transmitters, only the CGI is needed, which allows a reduced usage of the feedback channel, leading to a more practical implementation. Then, with the available CGI, the proposed CSI-DF scheme is able to choose the most advantageous path: direct or cooperative. If cooperation is chosen, the relay employs the DF scheme to forward the information to Bob in the second time slot. Otherwise, if the direct path is more advantageous, we consider that Alice transmits twice to make the comparison fair. Thus, the capacity of the legitimate channel is given by \begin{equation} \begin{split} C_\text{B}^\text{(CSI-DF)} = &\frac{1}{2} \max\{\log_{2}\left(1+\gamma_\text{dir}\right), \min\left\{\log_{2}\left(1 + \gamma_\text{AR}\right), \log_{2}\left(1 + \gamma_\text{coop}\right)\right\}\}, \end{split} \label{Capacidade_CSI_RC} \end{equation} where $\gamma_\text{coop} = \gamma_\text{AB} + \gamma_\text{RB}$ is the equivalent SNR at Bob when cooperation occurs and $\gamma_\text{dir} = \gamma_\text{AB,1} + \gamma_\text{AB,2}$ is the equivalent SNR when Alice transmits in two consecutive time slots. At Eve, we adopt an optimistic assumption with respect to her channel capacity (thus, pessimistic in terms of secrecy capacity), so that \begin{equation} \begin{split} C_\text{E}^{(\text{CSI-DF})} = \frac{1}{2} \log_{2}(1+\gamma_\text{AE}+\gamma_\text{RE}), \end{split} \label{Capacidade_CSI_EVE_RC} \end{equation}which assumes that the relay always participates in the legitimate transmission. Let us remark that this assumption is made to allow the computation of a closed-form expression for the SOP by using the distribution of the maximum of i.i.d. random variables. Then, considering two i.i.d.
random variables, $X_{1}$ and $X_{2}$, the CDFs of their maximum and minimum follow from~\cite{papoulis.2002} as \begin{equation} \label{MaxProbability} \Pr\left\{\max\left(X_{1}, X_{2}\right) \leq x\right\} = \Pr\left\{X_{1} \leq x\right\}\,\Pr\left\{X_{2} \leq x\right\} \end{equation} \begin{equation} \begin{split} \label{MinProbability} \Pr\left\{\min\left(X_{1}, X_{2}\right) \leq x\right\} &= \Pr\left\{X_{1} \leq x\right\} + \Pr\left\{X_{1} > x\right\}\,\Pr\left\{X_{2} \leq x\right\} \end{split} \end{equation} so that, with an approach similar to~\cite{laneman.04}, taking~\eqref{MaxProbability} and~\eqref{MinProbability} into account, and considering that the correlation between $\gamma_\text{dir}$ and $\gamma_\text{coop}$ is treated as in~\cite{laneman.04}, the SOP yields \begin{equation} \label{p_out_CSI_RC} p_\text{out}^\text{(CSI-DF)} = \mathcal{O}_{\text{ABE}} \left[\mathcal{O}_{\text{ARE}} + \left(1-\mathcal{O}_{\text{ARE}}\right) \mathcal{O}_{\text{BE}}\right], \end{equation} where $\mathcal{O}_{\text{ABE}}=\Pr\left\{\log_{2}(1+\gamma_\text{dir})-\log_{2}(1+\gamma_\text{E})<2\mathcal{R}\right\}$, $\mathcal{O}_{\text{ARE}}=\Pr\left\{\log_{2}(1+\gamma_\text{AR})-\log_{2}(1+\gamma_\text{E})<2\mathcal{R}\right\}$ and $\mathcal{O}_{\text{BE}}=\Pr\left\{\log_{2}(1+\gamma_\text{coop})-\log_{2}(1+\gamma_\text{E})<2\mathcal{R}\right\}$. Next, isolating $\gamma_\text{E}$ in each term of~\eqref{p_out_CSI_RC} we obtain \begin{equation} \begin{split} p_\text{out}^\text{(CSI-DF)} =& \Pr\{\gamma_\text{E} > 2^{-2\mathcal{R}}\left(1+\gamma_\text{dir}\right)-1\} \Bigg[\Pr\{\gamma_\text{E} > 2^{-2\mathcal{R}}\left(1+\gamma_\text{AR}\right)-1\} \\ &+ \Pr\{\gamma_\text{E} \leq 2^{-2\mathcal{R}}\left(1+\gamma_\text{AR}\right)-1\} \Pr\{\gamma_\text{E} > 2^{-2\mathcal{R}}\left(1+\gamma_\text{coop}\right)-1\} \Bigg], \end{split} \label{Integrais_CSI_RC2} \end{equation} which requires the PDF of $\gamma_\text{coop}$ and the CDF of $\gamma_\text{E}$ in order to be solved in closed form. Considering the convolutions defined by~\cite{Lin.2016}, we obtain the PDF of $\gamma_\text{coop}$ and the CDF of $\gamma_\text{E}$ as \begin{align} \label{PDF_coop} f_\text{coop}(x) &= \int\limits_{0}^{x} f_{\text{RB}}^{\text{TAS/MRC}}\left(x-y\right) f_\text{AB}^{\text{TAS/MRC}}(y)\,\mathrm{d}y, \\ \label{CDF_eve} F_\text{E}(x) &= \int\limits_{0}^{x} F_{\text{E,1}}^{\text{MRC}}\left(x-y\right) f_\text{E,2}^{\text{MRC}}(y)\,\mathrm{d}y, \end{align} which can be solved with the aid of~\cite[eq.
(8)]{Choi.2005},~\cite[\textsection 1.111]{gradshteyn.2007},~\cite[\textsection 3.351.1]{gradshteyn.2007} and~\cite[\textsection 3.381.1]{gradshteyn.2007}, resulting in \begin{equation} \begin{split} f_\text{coop}(x) &=\frac{n_{\text{R}}n_{\text{A}}\Gamma(n_{\text{B}})^{-2}}{\left(\bar\gamma_{\text{RB}}\bar\gamma_{\text{AB}}\right)^{n_{\text{B}}}} \sum_{k=0}^{n_{\text{A}}-1} \sum_{m=0}^{n_{\text{R}}-1} \binom{n_{\text{A}}-1}{k} \binom{n_{\text{R}}-1}{m} (-1)^{m+k} \prod_{\bar\gamma_{\text{AB}}} \prod_{\bar\gamma_{\text{AR}}}\,e^{-x \left(\frac{k+1}{\bar\gamma_{\text{AB}}}\right)} \\ &\times \sum_{l=0}^{n_{\text{B}}+\beta_{1}-1}\binom{n_{\text{B}}+\beta_{1}-1}{l}(-1)^{l}\,x^{n_{\text{B}}+\beta_{1}-1-l}\,\left[\left(\frac{m+1}{\bar\gamma_{\text{RB}}}-\frac{k+1}{\bar\gamma_{\text{AB}}}\right)^{-\left(n_{\text{B}}+\beta_{2}+l\right)}\right]\, \\ &\times \gamma\left(n_{\text{B}}+\beta_{2}+l; \left[\frac{m+1}{\bar\gamma_{\text{RB}}}-\frac{k+1}{\bar\gamma_{\text{AB}}}\right]x\right), \\ \end{split} \end{equation} where $\gamma(\cdot\,;\cdot)$ is the lower incomplete gamma function, and \begin{equation} \begin{split} F_\text{E}(x) &= \frac{1}{{\bar\gamma_{\text{AE}}}^{n_{\text{E}}}\Gamma(n_{\text{E}})} \left[\frac{(n_{\text{E}}-1)!}{\bar\gamma_{\text{AE}}^{-n_{\text{E}}}} - e^{-\frac{x}{\bar\gamma_{\text{AE}}}} \sum_{m=0}^{n_{\text{E}}-1} \frac{(n_{\text{E}}-1)!}{m!} \frac{x^m}{\bar\gamma_{\text{AE}}^{m-n_{\text{E}}}} - \sum_{w=0}^{n_{\text{E}}-1} \left(\frac{1}{w!}\right) \left({\frac{1}{\bar\gamma_{\text{RE}}}}\right)^w e^{-\frac{x}{\bar\gamma_{\text{RE}}}} \right. \\ &\left. \sum_{k=0}^{w} \binom{w}{k}\,\left(-1\right)^k\,x^{w-k} \left\{\frac{(k+n_{\text{E}}-1)!}{\left[\frac{1}{\bar\gamma_{\text{AE}}}-\frac{1}{\bar\gamma_{\text{RE}}}\right]^{k+n_{\text{E}}}}-e^{-x \left(\frac{1}{\bar\gamma_{\text{AE}}}-\frac{1}{\bar\gamma_{\text{RE}}}\right)}\,\sum_{m=0}^{k+n_{\text{E}}-1}\frac{\left(k+n_{\text{E}}-1\right)!x^m}{m!\left(\frac{1}{\bar\gamma_{\text{AE}}}-\frac{1}{\bar\gamma_{\text{RE}}}\right)^{k+n_{\text{E}}-m}}\right\}\right]. \end{split} \end{equation} With that in hand we can solve~\eqref{p_out_CSI_RC}. Starting with~$\mathcal{O}_{\text{ABE}}$, employing $f_\text{AB}^{\text{TAS/MRC}}$ and $F_{\text{E}}$, we have \begin{align} \mathcal{O}_{\text{ABE}}=\int\limits_{0}^{\infty} F_\text{E}\left[2^{-2\mathcal{R}}\left(1+\gamma_\text{dir}\right)-1\right]f_{\text{AB}}^{\text{TAS/MRC}}\left(\gamma_\text{dir}\right) \,\mathrm{d}{\gamma_\text{dir}} \end{align} which, after using the binomial expansion and applying~\cite[\textsection 3.35.3]{gradshteyn.2007}, \cite[eq. (8)]{Choi.2005} and some algebraic manipulations, results in \begin{align} \mathcal{O}_{\text{ABE}}&=\frac{n_{\text{A}}}{\Gamma(n_{\text{B}})\:\Gamma(n_{\text{E}})\ {\bar\gamma_{\text{AB}}}^{n_{\text{B}}}\ \bar\gamma_{\text{AE}}^{n_{\text{E}}}}\ \sum_{k=0}^{n_{\text{A}}-1} \binom{n_{\text{A}}-1}{k}\ (-1)^{k}\; \prod_{\bar\gamma_{\text{AB}}} \nonumber \times \left\{\frac{(n_{\text{E}}-1)!}{\bar\gamma_{\text{AE}}^{-n_{\text{E}}}} \mathcal{X}(0, 0, \frac{k+1}{\bar\gamma_{\text{AB}}}) \right. \\ &- \left. e^{-\frac{2^{-2\mathcal{R}}-1}{\bar\gamma_{\text{AE}}}} \sum_{m=0}^{n_{\text{E}}-1} \frac{m!\Gamma(n_{\text{E}})}{\bar\gamma_{\text{AE}}^{m-n_{\text{E}}}} \times \sum_{v=0}^{m} \binom{m}{v} \mathcal{X}\left(m, v, \phi(\bar\gamma_{\text{AE}})\right) - \sum_{w=0}^{n_{\text{E}}-1} \sum_{v=0}^{w} \frac{(-1)^{v}}{w!} \left(\frac{1}{\bar\gamma_{\text{RE}}}\right)^w \right. \nonumber\\ &\left. 
\times \left[\:\:e^{-\frac{2^{-2\mathcal{R}}-1}{\bar\gamma_{\text{RE}}}}\:\:\:\sum_{z=0}^{w-v}\:\:\: \binom{w}{v}\ \mathcal{Z}(0)\:\:\mathcal{X}(w-v, z, \phi(\bar\gamma_{\text{RE}}))\:\:\: - \:\: e^{-\frac{2^{-2\mathcal{R}}-1}{\bar\gamma_{\text{AE}}}}\:\:\sum_{z=0}^{w-v} \sum_{m=0}^{v+n_{\text{E}}-1} \:\: \right.\right. \nonumber \\ &\left.\left. \:\:\sum_{y=0}^{m}\:\:\binom{w}{v}\:\:\binom{m}{y}\:\:\mathcal{Z}(m) \mathcal{X}(w+m-v, z+y, \phi(\bar\gamma_{\text{AE}})) \right. \bigg] \right\}, \label{Probability_CSI_RC_AB} \end{align} where $ \phi(a)=\frac{k+1}{\bar\gamma_{\text{AB}}}+\frac{2^{-2\mathcal{R}+1}}{a}$, $ \mathcal{Z}(x)=\frac{(v+n_{\text{E}}-1)!}{x!\left(\frac{\bar\gamma_{\text{RE}}-\bar\gamma_{\text{AE}}}{\bar\gamma_{\text{RE}}\bar\gamma_{\text{AE}}}\right)^{v+n_{\text{E}}-x}}$, $\mathcal{X}(a, b, c)=\frac{(2^{-2\mathcal{R}}-1)^{a-b}(b+\beta_{1}+n_{\text{B}}-1)!}{2^{b(-2\mathcal{R}+1)}c^{(b+\beta_{1}+n_{\text{B}})}}$ and \begin{equation} \label{ProductGammaAB} \prod_{\bar\gamma_{\text{AB}}}=\prod_{i=1}^{n_{\text{B}}-1} \left[\sum_{u_{i}=0}^{u_{i-1}} \binom{u_{i-1}}{u_{i}}\left({\frac{1}{i!}}\right)^{u_{i}-u_{i+1}}\left(\frac{1}{{\bar\gamma_{\text{AB}}}}\right)^{u_{i}}\right] \end{equation} with $\beta_{1}=\sum_{i=1}^{n_{\text{B}}-1} u_{i}$, $u_{0}=k$ and $u_{n_{\text{B}}}=0$. Next, solving $\Pr\{\gamma_\text{E} > 2^{-2\mathcal{R}}\left(1+\gamma_\text{AR}\right)-1\}$ yields an integral form as \begin{align} \mathcal{O}_{\text{ARE}}=\int\limits_{0}^{\infty} F_\text{E}\left[2^{-2\mathcal{R}}\left(1+\gamma_\text{AR}\right)-1\right]\,f_{\text{AR}}^{\text{MRC}}\left(\gamma_{\text{AR}}\right) \,\mathrm{d}\gamma_{\text{AR}}, \label{Probability_CSI_RC_AR2} \end{align} which can be solved in closed-form by substituting $f_{\text{AR}}^{\text{MRC}}$ and $F_\text{E}$, while using an algebraic approach similar to that in~\eqref{Probability_CSI_RC_AB}. After these steps we arrive at \begin{align} \mathcal{O}_{\text{ARE}}&=\frac{\Gamma(n_{\text{E}})^{-1}\Gamma(n_{\text{R}})^{-1}}{\bar\gamma_{\text{AE}}^{n_{\text{E}}}\bar\gamma_{\text{AR}}^{n_{\text{R}}}} \left[\sum_{m=0}^{n_{\text{E}}-1}\frac{\left(n_{\text{E}}-1\right)!}{m! e^{\frac{2^{-2\mathcal{R}}-1}{\bar\gamma_{\text{AE}}}}} \left(\frac{1}{\bar\gamma_{\text{AE}}}\right)^{m-n_{\text{E}}}\: \mathcal{J}(m, \bar\gamma_{\text{AE}})+ \sum_{w=0}^{n_{\text{E}}-1} \left(\frac{1}{w!}\right) \left(\frac{1}{\bar\gamma_{\text{RE}}}\right)^w \right. \nonumber \\ &\left. \sum_{k=0}^{w} \binom{w}{k} \left(-1\right)^{k}\,\Big\{\mathcal{T}(\bar\gamma_{\text{RE}}, 0)\:\: \mathcal{J}(w-k, \bar\gamma_{\text{RE}}) - \sum_{o=0}^{k+n_{\text{E}}-1} \mathcal{T}(\bar\gamma_{\text{AE}}, o)\:\: \times \mathcal{J}(w-k+o, \bar\gamma_{\text{AE}}) \Big\}\right]. \label{Probability_CSI_RC_AR} \end{align} where $\mathcal{T}(a, b)=\frac{b!\left(k+n_{\text{E}}-1\right)! e^{-\frac{2^{-2\mathcal{R}}-1}{a}}}{\left[\frac{1}{\bar\gamma_{\text{AE}}}-\frac{1}{\bar\gamma_\text{RE}}\right]^{k+n_{\text{E}}-b}}$ and $ \mathcal{J}(a, b)=\sum_{p=0}^{a}\binom{a}{p} \frac{\left(2^{-2\mathcal{R}}-1\right)^{a-p}\left(n_{\text{R}}+p-1\right)!}{2^{2\mathcal{R}p}}\left(\frac{2^{-2\mathcal{R}}\bar\gamma_{\text{AR}}+b}{\bar\gamma_{\text{AR}}b}\right)^{-\left(n_{\text{R}}+p\right)}$. 
Finally, $\mathcal{O}_{\text{BE}}$ can be solved with the aid of $ F_\text{E}$ and $f_{\text{coop}}$, so that \begin{align} \mathcal{O}_{\text{BE}}=&\int\limits_{0}^{\infty} F_\text{E}\left[2^{-2\mathcal{R}}\left(1+\gamma_{\text{coop}}\right)-1\right]\,f_{\text{coop}}\left(\gamma_{\text{coop}}\right) \,\mathrm{d}\gamma_{\text{coop}}, \end{align} whose closed-form expression is obtained using~\cite[\textsection 6.455.2]{gradshteyn.2007} and some algebraic manipulations, resulting in \begin{align} \mathcal{O}_{\text{BE}}&=\frac{n_{\text{R}}n_{\text{A}}\left(\bar\gamma_{\text{AB}}\bar\gamma_{\text{RB}}\right)^{-n_{\text{B}}}}{\bar\gamma_{\text{AE}}^{n_{\text{E}}}\Gamma(n_{\text{E}})\Gamma(n_{\text{B}})^{2}} \sum_{k=0}^{n_{\text{A}}-1} \sum_{m=0}^{n_{\text{R}}-1} \binom{n_{\text{A}}-1}{k} \binom{n_{\text{R}}-1}{m} \prod_{\bar\gamma_{\text{AB}}} \prod_{\bar\gamma_{\text{AR}}} \sum_{l=0}^{n_{\text{B}}+\beta_{1}-1} \frac{\binom{n_{\text{B}}+\beta_{1}-1}{l}\left(-1\right)^{k+m+l}}{\left(\frac{m+1}{\bar\gamma_{\text{RB}}}-\frac{k+1}{\bar\gamma_{\text{AB}}}\right)^{\left(n_{\text{B}}+\beta_{2}+l\right)}} \nonumber\\ & \times \left[\frac{\Gamma(n_{\text{E}})}{\bar\gamma_\text{AE}^{-n_{\text{E}}}} \mathcal{B}\left(0, \mu(0), \psi(0)\right) -\:\:\sum_{p=0}^{n_{\text{E}}-1} \frac{\Gamma(n_{\text{E}})\:\:e^{-\frac{2^{-2\mathcal{R}}-1}{\bar\gamma_\text{AE}}}\:\:\bar\gamma_\text{AE}^{n_{\text{E}}-p}}{p!}\:\mathcal{B}\left(p, \mu(s), \psi\left(\frac{2^{-2\mathcal{R}}}{\bar\gamma_{\text{AE}}}\right)\right) \right. \nonumber\\ &\left. - \sum_{w=0}^{n_{\text{E}}-1} \left(\frac{1}{w!}\right) \left(\frac{1}{\bar\gamma_{\text{RE}}}\right)^w \sum_{o=0}^{w} \binom{w}{o} \left(-1\right)^o \times \bigg\{ \mathcal{C}(\bar\gamma_{\text{RE}}, 0) \mathcal{B}\left(w-o, \mu(s), \psi\left(\frac{2^{-2\mathcal{R}}}{\bar\gamma_{\text{RE}}}\right)\right) \right. \nonumber\\ &\left. + \mathcal{C}(\bar\gamma_{\text{AE}}, z) \mathcal{B}\left(w+z-o, \mu(s), \psi\left(\frac{2^{-2\mathcal{R}}}{\bar\gamma_{\text{AE}}}\right)\right) \bigg\}\right], \label{Probability_CSI_RC_B} \end{align} where $\mathcal{B}(a, b, c)= \sum_{s=0}^{a} \binom{a}{s} \frac{\left(2^{-2\mathcal{R}}-1\right)^{a-s}}{2^{2\mathcal{R}s}} \frac{\alpha^v \Gamma(b+v)}{v\left(\alpha+c\right)} {}_2F_1\left(1, b+v; v+1; \frac{\alpha}{\alpha+c}\right)$, with $_{2}F_{1}\left(\alpha,\beta;\gamma;z\right)$ being the Gauss hypergeometric function~\cite[\textsection 9.111]{gradshteyn.2007}, and $\alpha=\frac{\bar\gamma_{\text{AB}}(m-1)-\bar\gamma_{\text{RB}}(k+1)}{\bar\gamma_{\text{AB}}\bar\gamma_{\text{RB}}}$, $v=n_{\text{B}}+\beta_{2}+l$, $\mu(a)=a+n_{\text{B}}+\beta_{1}-l$, $\psi(a)=\frac{k+1}{\bar\gamma_{\text{AB}}}+a$, $\mathcal{C}(a, b)=\sum_{i=0}^{b} \frac{\left(o+n_{\text{E}}-1\right)!e^{-\frac{2^{-2\mathcal{R}}-1}{a}}}{b!\left[\frac{\bar\gamma_\text{RE}-\bar\gamma_\text{AE}}{\bar\gamma_\text{AE}\bar\gamma_\text{RE}}\right]^{o+n_{\text{E}}-b}}$, \begin{align} \prod_{\bar\gamma_{\text{AR}}}=\prod_{i=1}^{n_{\text{R}}-1} \left[\sum_{v_{i}=0}^{v_{i-1}} \binom{v_{i-1}}{v_{i}}\left({\frac{1}{i!}}\right)^{v_{i}-v_{i+1}}\left(\frac{1}{{\bar\gamma_{\text{AR}}}}\right)^{v_{i}}\right], \label{ProductGammaAR} \end{align} with $\displaystyle \beta_{2}=\sum_{i=1}^{n_{\text{R}}-1} v_{i}$, $v_{0}=m$ and $v_{n_{\text{R}}}=0$. Then, the closed-form expression for the SOP of CSI-DF is obtained after applying~\eqref{Probability_CSI_RC_AB},~\eqref{Probability_CSI_RC_AR} and~\eqref{Probability_CSI_RC_B} into~\eqref{p_out_CSI_RC}. 
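Given the length of the expressions above, a Monte Carlo cross-check is helpful. The sketch below (our addition; the average SNRs and $\mathcal{R}=1$~bps/Hz are illustrative) draws the link SNRs from the distributions of Section~\ref{sec:Preliminaries}, mirrors the independence structure used in the derivation (including the worst-case Eve SNR of~\eqref{Capacidade_CSI_EVE_RC}), and evaluates~\eqref{p_out_CSI_RC}.
\begin{verbatim}
# Monte Carlo cross-check of p_out^(CSI-DF) (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
n_A, n_B, n_R, n_E = 2, 2, 2, 2
g_AB, g_AR, g_RB, g_AE, g_RE = 10.0, 12.0, 12.0, 3.0, 3.0
R, N = 1.0, 500000
thr = 2.0 ** (2 * R)

def tas_mrc(n_tx, n_rx, gbar, size):
    # best of n_tx i.i.d. Gamma(n_rx, gbar) MRC outputs
    return gbar * rng.gamma(n_rx, 1.0, (size, n_tx)).max(axis=1)

def mrc(n_rx, gbar, size):
    return gbar * rng.gamma(n_rx, 1.0, size)

g_dir  = tas_mrc(n_A, n_B, g_AB, N) + tas_mrc(n_A, n_B, g_AB, N)
g_coop = tas_mrc(n_A, n_B, g_AB, N) + tas_mrc(n_R, n_B, g_RB, N)
g_ar   = mrc(n_R, g_AR, N)
g_E    = mrc(n_E, g_AE, N) + mrc(n_E, g_RE, N)   # worst-case Eve

O_ABE = np.mean((1 + g_dir)  < thr * (1 + g_E))
O_ARE = np.mean((1 + g_ar)   < thr * (1 + g_E))
O_BE  = np.mean((1 + g_coop) < thr * (1 + g_E))
print(O_ABE * (O_ARE + (1 - O_ARE) * O_BE))      # compare to closed form
\end{verbatim}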
\subsection{Artificial-Noise (AN)} Several works in the literature have shown that multiple antennas can enhance PLS. A notable strategy is to use jamming in order to confuse Eve, as \emph{e.g.} in~\cite{Benevides.2016}, in which Alice employs TAS and Bob employs MRC while the communication to Eve is degraded by multiple interfering signals. However, unlike in~\cite{Benevides.2016}, we consider that these interfering signals are generated by the multiple antennas of the relay, which creates a beamforming vector so that the noise is null in the direction of Bob. Thus, the jamming affects only Eve, without interfering at Bob. An important assumption with respect to the creation of the beamforming vector is that the number of antennas at the relay must be larger than the number of antennas at Bob~\cite{Benevides.2016, Zhu.2013}; thus, $n_\text{B} \leq n_\text{R}-1$ is always considered for the AN scheme. Then, the capacity of the legitimate channel is given by \begin{equation} C_\text{B}^\text{(AN)} = \frac{1}{2} \log_{2}\left(1+\gamma_\text{AB}\right), \label{Capacidade_AN} \end{equation} while at Eve the capacity is limited by the jamming (referred to in~\cite{Benevides.2016} as interference) generated by the relay node. Thus, we represent the signal-to-interference ratio (SIR) at Eve following the same notation of~\cite{Benevides.2016}, so that $\Upsilon_{\text{I}}=\frac{\gamma_\text{AE}}{\gamma_\text{I}}$, where $\displaystyle \gamma_\text{I} = \sum_{k=1}^{n_{\text{R}}} \bar\gamma_{\text{RE}, k}\,|h_{\text{RE}, k}|^2$ is the jamming interference, written as the sum of the jamming signals sent by each $k$-th antenna of the relay. As a result, the capacity of Eve's channel can be written as \begin{equation} \label{Capacidade_Eve_AN} C_\text{E}^{(\text{AN})} = \frac{1}{2} \log_{2}\left(1+\Upsilon_{\text{I}}\right). \end{equation} Therefore, the secrecy outage probability for the AN scheme can be written as \begin{equation} \label{p_out_AN} \begin{split} p_\text{out}^\text{(AN)} &= \Pr\left\{\frac{1+\gamma_\text{AB}}{1+\Upsilon_{\text{I}}} < 2^{2\mathcal{R}}\right\}= \int\limits_{0}^{\infty} F_\text{AB}^{\text{TAS/MRC}}\left[2^{2\mathcal{R}}\left(1+x\right)-1\right]f_{\frac{\gamma_\text{AE}}{\gamma_\text{I}}}\left(x\right) \,\mathrm{d}{x}, \end{split} \end{equation} where $\displaystyle F_\text{AB}^{\text{TAS/MRC}}(z)=\left[1 - e^{-\frac{z}{\bar\gamma_{\text{AB}}}}\,\sum_{w=0}^{n_{\text{B}}-1} \left(\frac{1}{w!}\right)\left(\frac{z}{\bar\gamma_{\text{AB}}}\right)^w\right]^{n_{\text{A}}}$ is given by~\cite[eq. (13)]{Benevides.2016} and \begin{equation} \label{PDF_GammaI} \begin{split} f_{\frac{\gamma_\text{AE}}{\gamma_\text{I}}}\left(x\right)={\frac{\partial}{\partial x}}\left[\int\limits_{0}^{\infty} F_{\text{AE}}^{\text{MRC}}\left(xz\right) f_{\gamma_\text{I}}\left(z\right)\,\mathrm{d}{z} \right], \end{split} \end{equation} with $\displaystyle F_{\text{AE}}^{\text{MRC}}(\gamma_\text{AE}) = 1 - e^{-\frac{\gamma_\text{AE}}{\bar\gamma_\text{AE}}}\,\sum_{w=0}^{n_{\text{E}}-1} \left(\frac{1}{w!}\right)\left(\frac{\gamma_\text{AE}}{\bar\gamma_\text{AE}}\right)^w$ and $\displaystyle f_{\gamma_\text{I}}\left(z\right)=\sum_{i=1}^{n_{\text{R}}}\,\frac{e^{-\frac{z}{\bar\gamma_{\text{I}}}}}{\bar\gamma_{\text{I}}}$, following~\cite[eq. (19)]{Benevides.2016}. In addition, let us remark that we consider that the power is equally distributed among the jamming signals to simplify the analysis. 
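Before presenting the closed form, note that~\eqref{p_out_AN} is also easy to estimate directly from the physical model, which provides a useful cross-check. The sketch below (our addition; parameters are illustrative) assumes the equal jamming power split discussed above.
\begin{verbatim}
# Monte Carlo estimate of p_out^(AN) from the physical model.
import numpy as np

rng = np.random.default_rng(2)
n_A, n_B, n_E, n_R = 2, 1, 2, 2        # AN requires n_B <= n_R - 1
g_AB, g_AE, g_RE = 10.0, 3.0, 3.0
R, N = 1.0, 500000

g_ab = g_AB * rng.gamma(n_B, 1.0, (N, n_A)).max(axis=1)  # TAS/MRC at Bob
g_ae = g_AE * rng.gamma(n_E, 1.0, N)                      # MRC at Eve
g_I  = (g_RE / n_R) * rng.gamma(n_R, 1.0, N)              # jamming sum
sir  = g_ae / g_I

print(np.mean((1 + g_ab) < 2.0 ** (2 * R) * (1 + sir)))
\end{verbatim}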
Then, the SOP for the AN scheme is given by~\cite{Benevides.2016} as \begin{align} p_\text{out}^\text{(AN)}=&1- \sum_{k=1}^{n_{\text{A}}} \binom{n_{\text{A}}}{k} \prod_{\bar\gamma_{\text{AB}}} \sum_{u=0}^{n_{\text{E}}-1} \frac{(-1)^{k+1}}{u!} \frac{\Gamma(u+n_{\text{R}})}{\Gamma(n_{\text{R}})} \sum_{p=0}^{\beta_{1}} \binom{\beta_{1}}{p} \left(\frac{\bar\gamma_{\text{AE}}}{\bar\gamma_{\text{RE}}}\right)^{p} \left(2^{2\,\mathcal{R}}-1\right)^{\beta_{1}-p} 2^{2\,\mathcal{R}p} e^{-\frac{k(2^{2\mathcal{R}}-1)}{\bar\gamma_{\text{AB}}}} \left[ n_{\text{R}} \times \right. \nonumber \\ &\left. \times \Gamma(p+u+1) \Psi\left(p+u+1, p-n_{\text{R}}+1, \frac{k\bar\gamma_\text{AE}2^{2\,\mathcal{R}}}{\bar\gamma_\text{RE}\bar\gamma_\text{AB}}\right)-\mathcal{L}\right], \label{SOPAN} \end{align} where \begin{align} \mathcal{L}=\begin{cases} u\Gamma(p+u)\Psi\left(p+u, p-n_{\text{R}}, \frac{k\bar\gamma_{\text{AE}}2^{2\,\mathcal{R}}}{\bar\gamma_\text{RE}\bar\gamma_\text{AB}}\right)\, & \mbox{if } u \neq 0 \\ 0, & \mbox{if } u=0 \end{cases} \nonumber \end{align} with $\Psi(.,.,.)$ denoting Tricomi's confluent hypergeometric function~\cite[\textsection 9.211.4]{gradshteyn.2007}. Let us remark that~\eqref{SOPAN} depends on $n_{\text{B}}$ through the term $\prod_{\bar\gamma_{\text{AB}}}$ defined in~\eqref{ProductGammaAB}. \subsection{Secure Energy Efficiency and Optimization} \label{sec:MetricasSeguranca} In order to capture both security and energy efficiency issues, let us define the SEE metric~as \begin{equation} \eta_\text{s} = \frac{\mathcal R\, \left(1-p_{\text{out}}^\text{(sch)}\right)}{P_{\text{total}}^\text{(sch)}}, \label{EnergyEfficiency} \end{equation} where $P_\text{total}^\text{(sch)}$ is the total power consumed by each cooperative scheme. In the case of CSI-DF we have \begin{align} &P_{\text{total}}^\text{(\text{CSI-DF})} = 2 \left[(1+\delta)P_{\text{A}} + P_{\text{TX}} + n_{\text{B}}\,P_{\text{RX}}\right]\, p_\text{dir} \\ &+\left[(1+\delta)(P_{\text{A}}+P_{\text{R}}) + 2\,P_{\text{TX}} + \left(2\,n_{\text{B}}+n_{\text{R}}\right) P_{\text{RX}} \right] p_\text{coop}, \nonumber \label{TotalPowerCSI_RC} \end{align} where $P_{\text{TX}}$ and $P_\text{RX}$ denote the power consumed by transmission and reception circuitry, respectively, while the efficiency loss of the power amplifier is $\delta$. Moreover, $p_\text{coop}$ ($p_\text{dir}$) is the probability that the transmission is cooperative (direct), given by $p_\text{coop} = 1 - p_\text{dir} \approx \frac{\bar\gamma_\text{AR}}{\bar\gamma_\text{AR} + \bar\gamma_\text{AB}}$~\cite{Jamil.2016}. On the other hand, in the case of AN we have \begin{equation} P_{\text{total}}^\text{(AN)} = 2\left[(1+\delta)\,(P_{\text{A}} + P_{\text{R}}) + \,\left(n_{\text{R}}+1\right)\,P_{\text{TX}} + n_\text{B}\,P_{\text{RX}}\right]. \label{TotalPowerAN} \end{equation} Additionally, we are interested in maximizing the SEE of each scheme by allocating $P_\text{A}$ and $P_\text{R}$, as well as the secrecy rate. 
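For concreteness, the consumed-power and SEE definitions above translate directly into code (our addition; all inputs are placeholders, and $p_\text{out}$ must come from the SOP expressions of each scheme).
\begin{verbatim}
# SEE evaluation per the definitions above (hypothetical inputs).
def P_total_csidf(P_A, P_R, n_B, n_R, P_TX, P_RX, delta, p_coop):
    p_dir = 1.0 - p_coop
    return (2 * ((1 + delta) * P_A + P_TX + n_B * P_RX) * p_dir
            + ((1 + delta) * (P_A + P_R) + 2 * P_TX
               + (2 * n_B + n_R) * P_RX) * p_coop)

def SEE(R, p_out, P_total):
    # secure throughput per unit of consumed power
    return R * (1.0 - p_out) / P_total
\end{verbatim}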
The general optimization problem can be formalized as \begin{equation} \begin{split} \maxi_{(P_\text{A}, P_\text{R}, \mathcal{R})} \qquad & \eta_\text{s}^{(\text{sch})} = \frac{\mathcal R\, \left(1-p_{\text{out}}^\text{(sch)}\right)}{P_{\text{total}}^{(\text{sch})}} \\ \text{s.t.} \qquad & 0 < P_{i} \leq P_{\text{max}},\; \text{with} \; i \in \{\text{A}, \text{R}\}, \\ & 0 \leq \mathcal{R} \leq \mathcal{R}_{\text{max}}, \\ & p_{\text{out}}^{(\text{sch})} \leq \varphi, \label{OptimizationEE} \end{split} \end{equation} where $P_{\text{max}}$ is a maximum transmit power constraint, $\varphi$ is the maximum acceptable SOP of the system and $\mathcal{R}_{\text{max}} = C_{\text{B}}-\mathcal{R}_{\text{E}}$ is the maximum secrecy rate, assuming that $ C_{\text{B}}$ is known and that the target equivocation rate is given by $\mathcal{R}_{\text{E}}$. One alternative to solve~\eqref{OptimizationEE} is to use low-complexity iterative algorithms such as the Dinkelbach algorithm~\cite{Zappone.2015, Jamil.2016} for the power allocation, while the optimal secrecy rate can be obtained, \emph{e.g.}, by the golden section search algorithm. Let us remark that a similar approach has been used by the authors in the past~\cite{Jamil.2016}, so that we briefly explain the proposed algorithms in the following. The Dinkelbach algorithm~\cite{dinkelbach.67, Zappone.2015} allows one to optimize the ratio between two functions of the same variable (fractional programming). Therefore, the algorithm is especially useful for optimizing the powers allocated at Alice and at the relay. A fractional program is represented in general form by~\cite{dinkelbach.67} \begin{equation} \maxi_{\mathbf{x \in S}} \; q(x)=\frac{f_{1}(x)}{f_{2}(x)}, \label{NonLinearFractional} \end{equation} where $\mathbf{S}\subseteq \mathbb{R}^{n}$ is a convex set, $f_{1}$, $f_{2}:\mathbf{S} \to \mathbb{R}$, with $f_{1}(x)$ concave and $f_{2}(x)>0$ convex. Using a parametric convex program, it is possible to rewrite~\eqref{NonLinearFractional} as~\cite{Zappone.2015} \begin{equation} F(\lambda)=\maxi_{\mathbf{x \in S}} \; f_{1}(x)-\lambda f_{2}(x), \label{FractionalProgramI} \end{equation} in which $f_{1}(x)$ is maximized while $f_{2}(x)$ is minimized, with the parameter $\lambda$ being the weight associated with the denominator. Moreover, the optimum is attained when \begin{equation} F(\lambda)=0 \iff \lambda = q^\star, \label{RelacaoOtima} \end{equation} where $q^\star$ is the optimum value of~\eqref{NonLinearFractional}. Therefore, solving~\eqref{NonLinearFractional} is equivalent to finding the root of \begin{equation} F(\lambda^\star)=\maxi_{\mathbf{x \in S}} f_{1}(x)-\lambda f_{2}(x) = 0. \label{OptimalCondition} \end{equation} The Dinkelbach algorithm is an efficient approach to solve~\eqref{OptimalCondition}; it is based on Newton's method, updating $\lambda$ at the $(n+1)$-th iteration as \begin{equation} \lambda_{n+1} = \lambda_{n}-\frac{F(\lambda_{n})}{F'(\lambda_{n})} = \frac{f_{1}(x_{n}^\star)}{f_{2}(x_{n}^\star)}. \label{MetodoNewton} \end{equation} Since power is allocated both at Alice and at the relay, the power allocation can be split into two steps, starting with Alice and then allocating power to the relay, as illustrated by the generic sketch below. 
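A minimal generic sketch of the iteration in~\eqref{MetodoNewton} follows (our addition); the inner maximization and the toy objective are stand-ins for the actual SOP-based functions, not the expressions of this paper.
\begin{verbatim}
# Generic Dinkelbach iteration for max f1(x)/f2(x).
import numpy as np

def dinkelbach(f1, f2, inner_max, lam=0.0, tol=1e-9, it_max=50):
    for _ in range(it_max):
        x = inner_max(lam)                 # solve F(lam) = max f1 - lam*f2
        if abs(f1(x) - lam * f2(x)) < tol:  # F(lam*) = 0  <=>  lam* = q*
            break
        lam = f1(x) / f2(x)                # Newton-type update
    return x, lam

# Toy usage: maximize log2(1 + p) / (1 + p) over 0 < p <= 10.
f1 = lambda p: np.log2(1.0 + p)            # stand-in "secure throughput"
f2 = lambda p: 1.0 + p                     # stand-in "total power"
grid = np.linspace(1e-6, 10.0, 100001)
inner = lambda lam: grid[np.argmax(f1(grid) - lam * f2(grid))]
print(dinkelbach(f1, f2, inner))           # converges to (e-1, log2(e)/e)
\end{verbatim}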
Therefore, with respect to Alice, \eqref{OptimalCondition} can be rewritten as \begin{equation} F_{1}(\lambda)=\maxi_{P_{\text{A}} \geq 0} f_{1}(P_{\text{A}})-\lambda f_{2}(P_{\text{A}}) = 0, \label{F1_Dink} \end{equation} where $f_{1}(P_{\text{A}})=\mathcal R\, \left(1-p_{\text{out}}^\text{(sch)}\right)$ and $f_{2}(P_{\text{A}})=P_{\text{total}}^\text{(sch)}$. Moreover, the stationary condition is given by \begin{equation} {\frac{\partial f_{1}(P_{\text{A}})}{\partial P_{\text{A}}}}\at[\Bigg]{P_{\text{A}}=P_{\text{A}}^\star} - \lambda \,\, {\frac{\partial f_{2}(P_{\text{A}})}{\partial P_{\text{A}}}}\at[\Bigg]{P_{\text{A}}=P_{\text{A}}^\star} = 0, \label{DerivativeF1} \end{equation} with $P_{\text{A}}^\star$ obtained by the Dinkelbach method~\footnote{As we can observe, the complexity of the method follows from the complexity of the SOP expressions for each cooperative scheme, due to the derivatives specified in~\eqref{DerivativeF1} and~\eqref{DerivativeF2}. However, although the SOP expressions are involved for an arbitrary number of antennas at each node, they can be considerably simplified once $n_{\text{A}}$, $n_{\text{B}}$, $n_{\text{R}}$ and $n_{\text{E}}$ are fixed.}. In the sequel, considering the power allocation at the relay, we have \begin{equation} F_{2}(\lambda)=\maxi_{P_{\text{R}} \geq 0} f_{1}(P_{\text{R}})-\lambda f_{2}(P_{\text{R}}) = 0, \label{F2_Dink} \end{equation} where $f_{1}(P_{\text{R}})=\mathcal R\, \left(1-p_{\text{out}}^\text{(sch)}\right)$ and $f_{2}(P_{\text{R}})=P_{\text{total}}^\text{(sch)}$, with the following stationary condition \begin{equation} {\frac{\partial f_{1}(P_{\text{R}})}{\partial P_{\text{R}}}}\at[\Bigg]{P_{\text{R}}=P_{\text{R}}^\star} - \lambda \,\, {\frac{\partial f_{2}(P_{\text{R}})}{\partial P_{\text{R}}}}\at[\Bigg]{P_{\text{R}}=P_{\text{R}}^\star} = 0, \label{DerivativeF2} \end{equation} with $P_{\text{R}}^\star$ also obtained by the Dinkelbach method. Finally, with respect to the optimization of $\mathcal{R}$, we can employ a golden section search algorithm with parabolic interpolation as in~\cite{Jamil.2016}. Such an algorithm finds the maximum of a unimodal function by narrowing the range of values inside a predefined interval~\cite{Press2007_NumericalRecipes}. \section{Numerical Results} \label{sec:NumericalResults} We consider that $\mathcal{R}=3$~bps/Hz, $d_\mathrm{AB} = 100$~m, $\upsilon=3$ and $\varphi=10^{-1}$. Moreover, as in~\cite{Jamil.2016}, $P_{\mathrm{TX}} = 112.2$~mW, $P_{\mathrm{RX}} = 97.9$~mW, $B=10$~kHz, $N_{0} = -174$~dBm/Hz, $M_\text{l} = 40$~dB, $G = 5$~dBi, $N_\text{f} = 10$~dB and $f_\text{c}=2.5$~GHz. First, Fig.~\ref{Outage} plots the secrecy outage probability of CSI-DF and AN, comparing both the exact expressions and Monte Carlo simulations. Although the figure only considers the case when the relay is placed at an intermediate position between Alice and Bob ($d_\text{AR}=0.5\,d_\text{AB}$), the same agreement between theoretical and simulated results is observed for different positions of the relay. 
\begin{figure}[!t] \centering \includegraphics[width=10cm]{fig1.pdf} \caption{Secrecy outage probability of CSI-DF and AN schemes, comparing exact expressions and Monte Carlo simulations for $d_\mathrm{AR} = 0.5\,d_\mathrm{AB}$.} \label{Outage} \end{figure} Fig.~\ref{EnergyEfficiency_SecrecyRate_dRE} plots the SEE as a function of $\mathcal{R}$ and $d_{\text{RE}}$, where we observe that, when Eve is closer to the relay, AN performs better since the relay interferes more strongly at Eve, increasing the SEE. On the other hand, CSI-DF allows important improvements when $\mathcal{R}$ increases, also leading to the highest SEE point. \begin{figure}[!t] \centering \includegraphics[width=10cm]{fig2.png} \caption{SEE of CSI-DF and AN as a function of $\mathcal{R}$ and $d_\text{RE}$, with $d_\text{AR}=0.5\,d_\text{AB}$ and $n_\text{A} = n_\text{R} = n_\text{E} = 2$ with $n_\text{B}=1$ for AN scheme.} \label{EnergyEfficiency_SecrecyRate_dRE} \end{figure} Next, Fig.~\ref{EnergyEfficiency_dRE} compares the SEE with fixed power and rate, fixed rate and power allocation, and the rate and power allocation defined by~\eqref{OptimizationEE}. Additionally, we compare the SEE expressions with Monte Carlo simulations, showing good agreement. As we observe, a significant performance improvement is obtained when power and rate are allocated. In particular, it is worth noting that power allocation plays a major role in maximizing the SEE. \begin{figure}[!t] \centering \includegraphics[width=10cm]{fig3.pdf} \caption{SEE of CSI-DF and AN as a function of $d_\text{RE}$ for different allocation strategies, with $d_\text{AR}=0.5\,d_\text{AB}$ and $n_\text{A} = n_\text{R} = n_\text{E} = 2$ with $n_\text{B}=1$.} \label{EnergyEfficiency_dRE} \end{figure} Fig.~\ref{EnergyEfficiency_OutageAlvo} plots the SEE, represented by solid lines, and the SOP, represented by dashed lines, as a function of $\varphi$, the maximum acceptable SOP of the system, which shows that a higher $\varphi$ increases the SEE, despite the penalty in the number of securely transmitted bits. However, let us remark that even though the SEE increases with $\varphi$, the SOP that maximizes the SEE is not close to one. This occurs because an SOP close to one implies an SEE that tends to zero in~\eqref{EnergyEfficiency}. \begin{figure}[!t] \centering \includegraphics[width=10cm]{fig4.pdf} \caption{SEE, represented by solid lines, and SOP, represented by dashed lines, of CSI-DF and AN as a function of $\varphi$, with $d_\text{RE}=1.5\,d_\text{AB}$, $d_\text{AR}=0.5\,d_\text{AB}$.} \label{EnergyEfficiency_OutageAlvo} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=10cm]{fig5.pdf} \caption{SEE of CSI-DF and AN as a function of the number of antennas at the relay and at Eve for $d_\text{AR}=0.5\,d_\text{AB}$, $d_\text{RE}=1.25\,d_\text{AB}$.} \label{EnergyEfficiency_Antennas} \end{figure} Finally, Fig.~\ref{EnergyEfficiency_Antennas} illustrates the SEE as a function of the number of antennas, with $n_\text{A} = n_{\text{B}}=n_{\text{E}}=2$ while varying $n_\text{R}$ in Fig.~\ref{EnergyEfficiency_Antennas}(a), and while varying $n_\text{E}$ with fixed $n_{\text{B}}=n_{\text{A}}=2$ and $n_{\text{R}}=3$ in Fig.~\ref{EnergyEfficiency_Antennas}(b). As we observe, increasing $n_\text{R}$ is more advantageous to CSI-DF than to AN, since the increase of $n_\text{R}$ yields a diversity gain in the case of CSI-DF when cooperation occurs. 
On the other hand, increasing $n_\text{R}$ in the AN scheme only yields a larger power consumption. In addition, in Fig.~\ref{EnergyEfficiency_Antennas}(b) we can observe that AN becomes more advantageous with the increase of $n_\text{E}$, \emph{i.e.}, when the number of antennas at Eve is much larger than that of the legitimate nodes, it is better for the relay to interfere at Eve by injecting Gaussian noise than to cooperate with Bob. \section{Conclusions} \label{sec:Conclusions} We investigate the SEE in a cooperative MIMO scenario with different setups with respect to the number of antennas and the maximum acceptable SOP of the system, also considering power and secrecy rate allocation. We compare a CSI-DF scheme, which exploits the available CSI to choose between direct and cooperative transmission, with an AN scheme, in which the relay uses a beamforming vector to interfere only at Eve. Results show that CSI-DF outperforms AN in most scenarios, except when Eve is close to the relay or when the number of antennas at Eve increases, in which case AN becomes more advantageous. \bibliographystyle{IEEEtran}
\section{Introduction} The concordant, currently dark energy-dominated, spatially flat cold dark matter (CDM) cosmology is so successful that we are now said to be in an era of ``precision cosmology" \citep{peebles02,ostriker04}. In this concordant cosmology, dark energy (DE), mainly introduced to close the universe and to explain its accelerating expansion, attracts much attention in the realms of astrophysics and theoretical physics. The simplest candidate for DE is the cosmological constant, and its more general form, known as quintessence, is a cosmic scalar field minimally coupled with the usual matter \citep{caldwell,ma,peebles03}. The generalized Chaplygin gas, as a unification of dark matter and dark energy, was recently proposed and can be constrained by observations \citep{pad,zhu04}. In other cosmological models, DE is replaced by certain possible mechanisms, such as brane world cosmologies \citep{rsa,rsb} and the Cardassian expansion model \citep{freese02,zhu02,zhu03,zhu04a}. Despite its success, the standard CDM theory of cosmic structure formation has several problems, most of which arise in the small-scale regime. For example, the observed rotation curves of dark-matter-dominated dwarf and low surface brightness (LSB) disk galaxies tend to favor mass profiles with a flat density core \citep[e.g.,][]{salucci00,gentile04}, unlike the singular profiles of the CDM $N$-body simulations (e.g., Navarro, Frenk \& White 1997, NFW profiles; Moore et al. 1999; Jing 2000; Jing \& Suto 2002) and that favored by baryon cooling models (i.e., the singular isothermal sphere, SIS, profile). While there are debates on whether the observed data were resolved well enough to indicate a soft core \citep{van,march}, recent $N$-body simulations of CDM with ever higher force and mass resolution still favor cuspy halo profiles \citep{diemand,fukushige,navarro04,tasitsiomi,wambs04}. Recently, an analytical model was presented for the post-collapse equilibrium structure of virialized objects that condense out of a low-density cosmological background universe, with or without a cosmological constant \citep{shapiro99,Iliev01}. The model is based on the assumption that cosmological halos form from the collapse and virialization of `top-hat' density perturbations, and are spherical, isotropic and isothermal. According to the authors, this predicts a unique, non-singular, truncated isothermal sphere (NTIS) and provides a simple physical explanation for the existence of soft cores in halos of cosmological origin. This NTIS model is claimed to be in good agreement with observations of the internal structure of dark matter dominated halos on scales ranging from dwarf galaxies to X-ray clusters. In particular, it matches quite well the mass profiles of dark matter dominated dwarf galaxies deduced from their observed rotation curves \citep{shapiro04}. Quite recently, the NTIS model was revisited by using the self-interacting dark matter (SIDM) hypothesis \citep{kyungjin}. There have been other efforts at analytical derivations of the density profile; e.g., \citet{hansen04} analytically derived the bound of $-1$ on the central density slope (as found numerically by NFW). This is done by a simple solution to the Jeans equation, which is valid under the assumption that both the central density profile and the phase-space-like density are exact power laws. However, this work did not explicitly yield a density core. 
To investigate whether there is a soft core at the center of each CDM halo, we use another independent and robust tool, gravitational lensing, which has been widely used to probe the mass distributions in the Universe \citep{turner,schne,wambs95,wu,barte96,wu00,barte01,chen01,keeton2001a, keeton2001b,xue01,keeton2002,keeton2003,pen03,schne03, keeton2004,wu04,zhang04b} and the dark energy density and its equation of state \citep{fukugita90,fukugita91,turner90,krauss92,maoz93, kochanek95,kochanek96,falco98,cooray99,waga99,sarbu01,dev04}. Recently, motivated by the current largest homogeneous sample of lensed quasars, coming from the radio Cosmic Lens All-Sky Survey \citep[CLASS;][]{myers,browne03}, an extension of the earlier Jodrell Bank/Very Large Array Astrometric Survey \citep[JVAS;][]{patnaik92,king99}, much work has been devoted to statistics of strong lensing to constrain the cosmological parameters and mass distribution \citep{chae02,li02,oguri02,chae03,li03,oguri03a,oguri03b,oguri04, wj04,mitchell05,sereno05,zhang05}. Statistics of strong gravitational lensing by halos with a soft core was studied quite early \citep{hinshaw,chiba,cheng}, but no observational sample suitable for analysis could be used in their work; in particular, they did not use their cored density profile to fit the observed rotation curves of the dwarf and LSB disk galaxies. In strong gravitational lensing of quasars, the separation between multiple images is the most important observable. In some instances, asymmetry and shear of lens halos affect the properties of images considerably, but statistically, image separations are mainly determined by the potential wells of the host halos, characterized by the slopes of their density profiles. Thus, the probabilities for lensing by NTIS halos with image separations greater than $\Delta\theta$ provide us with an independent and powerful probe of the existence of soft cores of CDM halos. Our calculations are performed in a concordant, spatially flat $\Lambda$CDM cosmology favored by the \textit{Wilkinson Microwave Anisotropy Probe} \citep[\textit{WMAP};][]{bennett03} plus large-scale structure (LSS) data from the Sloan Digital Sky Survey \citep[SDSS;][]{tegmark04a,tegmark04b}, with matter density parameter $\Omega_\mathrm{m}=0.3$, galaxy fluctuation amplitude $\sigma_8=0.9$, and Hubble parameter $h=0.72$. For comparison, we also give the results when NTIS is replaced by SIS+NFW and NFW profiles. \section{Lensing Probabilities} In what follows, we use a reduced mass of a halo defined as $M_{15}=M/(10^{15}\mathrm{h}^{-1}M_{\sun})$. The NTIS model is based on the assumption that cosmological halos form from the collapse and virialization of top-hat density perturbations and are spherical, isotropic, and isothermal \citep{shapiro99,Iliev01}. The fitted density profile is given by \citep{shapiro99,Iliev01,shapiro04} \begin{equation} \rho(r)=\rho_0\left(\frac{A}{a^2+r^2/r_0^2} -\frac{B}{b^2+r^2/r_0^2}\right), \end{equation} where $A=21.38$, $B=19.81$, $a=3.01$, and $b=3.28$. The central value of the density profile is $\rho_0=1.8\times 10^4\rho_\mathrm{c}(z_\mathrm{coll})$, where $\rho_\mathrm{c}(z_\mathrm{coll})$ is the critical density of the Universe at the epoch of halo collapse with redshift $z_\mathrm{coll}$. 
The small core radius depends on mass $M$ and $z_\mathrm{coll}$ and is given by \begin{equation} r_0=0.115(M/\rho_0)^{1/3}=6.73\times 10^{-2}M_{15}^{1/3}[\Omega_\mathrm{m}(1+z_\mathrm{coll})^3 +\Omega_{\Lambda}]^{-1/3}h^{-1}\mathrm{Mpc}, \label{r0} \end{equation} where we have used $\rho_\mathrm{c}(z)=\rho_\mathrm{c}(0)[\Omega_\mathrm{m} (1+z)^3 +\Omega_{\Lambda}]$ (with $\Omega_\mathrm{m}=0.3$ and $\Omega_\Lambda=0.7$ in later calculations) and $\rho_\mathrm{c}(0)=1.88\times 10^{-29}h^2\,\mathrm{g\,cm}^{-3}=2.777\times 10^{11}h^2M_\sun\,\mathrm{Mpc}^{-3}$. It should be pointed out that the original formula for $r_0$, equation (88) in \citet{shapiro04}, is $r_0=1.51\times 10^{-3}(M/\rho_0)^{1/3}$ (we write $M_{200}$ as $M$, which is discussed in section 3). This is a clerical error, but we have checked that their subsequent results are not affected by this wrong formula. The gravitational lens equation is $\eta=D_\mathrm{S}\xi/D_\mathrm{L}-D_\mathrm{LS}\hat{\alpha}$, where $\eta$ and $\xi$ are the physical positions of a source in the source plane and an image in the image plane, respectively, $\hat{\alpha}$ is the deflection angle, and $D_\mathrm{L}$, $D_\mathrm{S}$, and $D_\mathrm{LS}$ are the angular diameter distances from observer to lens, observer to source, and lens to source, respectively. By defining dimensionless positions $y=D_\mathrm{L}\eta/D_\mathrm{S}r_0$ and $x=\xi/r_0$, and dimensionless angle $\alpha=D_\mathrm{L}D_\mathrm{LS}\hat{\alpha}/D_\mathrm{S}r_0$, the lens equation is then \citep{shapiro04} \begin{equation} y=x-\frac{2ab\kappa_c}{(Ab-Ba)\,x}\left(A\sqrt{a^2+x^2} -B\sqrt{b^2+x^2}-Aa+Bb\right), \label{lenseq1} \end{equation} where $\kappa_c$ is the central convergence: \begin{equation} \kappa_c=\frac{\Sigma(\xi=0)}{\Sigma_\mathrm{crit}}=\frac{\pi\rho_0 r_0}{\Sigma_\mathrm{crit}}\left(\frac{A}{a}-\frac{B}{b}\right), \label{kc1} \end{equation} where $\Sigma_\mathrm{crit}=c^2D_\mathrm{S}/4\pi GD_\mathrm{L}D_\mathrm{LS}$ is the critical surface density. Note that the overall factor $1/x$ in the deflection term of equation (\ref{lenseq1}) is required by spherical symmetry, for which the scaled deflection is the dimensionless projected mass within $x$ divided by $x$; near the center this gives $y\approx x(1-\kappa_c)$, consistent with the multiple-image criterion below. It is well known that generally, for any spherically symmetric density profile of lensing halos, multiple images can be produced only if the central convergence is greater than unity \citep{schne}. When $\kappa_c\leq 1$, only one image is produced. Note that even if $\kappa_c>1$ is satisfied, multiple images can occur only when the source is located within $y_\mathrm{cr}=y(x_\mathrm{cr})$, where $x_\mathrm{cr}$ is determined from the lensing equation (eq. [\ref{lenseq1}]) with $dy/dx=0$ for $x<0$ (this is similar to lensing by NFW halos). For a singular density profile such as the SIS and NFW profiles, the central value is divergent, so $\kappa_c>1$ is always satisfied, and multiple images can be produced for any given mass. For density profiles with a finite soft core such as the NTIS profile, however, the condition $\kappa_c>1$ requires that only halos with mass greater than a certain value (determined by $\kappa_c=1$) can produce multiple images. This is clearly shown in Figure 1, where three curves for $\kappa_c=1.1, 1.05$, and $1.0$ are plotted, and when $\kappa_\mathrm{c}=1.0$, only one image is produced. In lensing statistics, this requirement will limit the populations of lensing halos to quite a small fraction. Such a conclusion is valid for any lensing halos with a finite soft core, which is discussed in detail later. 
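These features are easy to reproduce numerically. The sketch below (our addition; $\kappa_c=1.05$ is illustrative) implements equation (\ref{lenseq1}), locates the caustic $y_\mathrm{cr}$ on the $x<0$ branch, and finds the image positions of a source inside the caustic by locating sign changes of $y(x)-y_\mathrm{s}$.
\begin{verbatim}
# Images from the NTIS lens equation (illustrative kappa_c).
import numpy as np

A, B, a, b = 21.38, 19.81, 3.01, 3.28
C = 2 * a * b / (A * b - B * a)

def y_of_x(x, kc):
    m = (A * np.sqrt(a**2 + x**2) - B * np.sqrt(b**2 + x**2)
         - A * a + B * b)
    return x - C * kc * m / x        # deflection = m(x)/x

kc = 1.05
xm = np.linspace(-30.0, -1e-4, 400000)   # x < 0 branch with the caustic
y_cr = y_of_x(xm, kc).max()              # dy/dx = 0 on this branch
print("y_cr =", y_cr)

# Images of a source at y_s < y_cr: sign changes of y(x) - y_s.
y_s = 0.5 * y_cr
xg = np.concatenate([xm, -xm[::-1]])
f = y_of_x(xg, kc) - y_s
print("images at x ~", xg[:-1][np.sign(f[:-1]) != np.sign(f[1:])])
\end{verbatim}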
When quasars at redshift $z_{\mathrm{s}}$ are lensed by foreground CDM halos of galaxies and clusters of galaxies, the lensing probability for image separations larger than $\Delta\theta$ is \citep{turner,schne} \begin{equation} P(>\Delta\theta)= \int\mathcal{P}(z_\mathrm{s})dz_\mathrm{s}\int^{z_{\mathrm{s}}}_0 \frac{dD_{\mathrm{L}}^\mathrm{p}(z)} {dz}dz\int^{\infty}_0\bar{n}(M,z)\sigma(M,z)B(M,z)dM, \label{prob1} \end{equation} where $\mathcal{P}(z_\mathrm{s})$ is the redshift distribution of quasars, approximated by a Gaussian model with a mean of 1.27 and a dispersion of 0.95 \citep{helbig1999,marlow2000,chae02,myers}, $D_{\mathrm{L}}^\mathrm{p}(z)$ is the proper distance from the observer to the lens located at redshift $z$, $\bar{n}(M,z)$ is the physical number density of virialized dark halos of masses between $M$ and $M+dM$, and $B(M,z)$ is the magnification bias. The physical number density $\bar{n}(M,z)$ is related to the comoving number density $n(M,z)$ by $\bar{n}(M,z)=n(M,z)(1+z)^3$; the latter is originally given by \citet{press74}, and the extended version is $n(M,z)dM=(\bar{\rho}_0/M)f(M,z)dM$, where $\bar{\rho}_0$ is the current mean mass density of the universe and \begin{equation} f(M,z)=(0.315/M)(d\ln\Delta_{\mathrm{z}}/d\ln M)\exp(-|\ln(\Delta_{\mathrm{z}}/1.68)+0.61|^{3.8}) \end{equation} is the mass function, for which we use the expression given by \citet{jenki}. In this expression, $\Delta_{\mathrm{z}}=\delta_c(z)/\Delta(M)$, in which $\delta_c(z)$ is the overdensity threshold for spherical collapse at redshift $z$ and $\Delta(M)$ is the present rms of the linear mass fluctuations in a sphere containing a mean mass $M$. The overdensity threshold is given by $\delta_c(z)=1.68/D(z)$ for the $\Lambda$CDM cosmology \citep{nfw97}, where $D(z)=g[\Omega(z)]/[g(\Omega_{\mathrm{m}})(1+z)]$ is the linear growth function of the density perturbation \citep{carroll}, in which $g(x)=0.5x(1/70+209x/140-x^2/140+x^{4/7})^{-1}$ and $\Omega(z)=\Omega_{\mathrm{m}}(1+z)^3 /[1-\Omega_{\mathrm{m}}+\Omega_{\mathrm{m}}(1+z)^3]$. When we calculate the variance of the fluctuations $\Delta^2(M)$, we use the fitting formulae for the CDM power spectrum $P(k)=AkT^2(k)$ given by \citet{eisen}, where $A$ is the amplitude normalized to $\sigma_8=\Delta(r_{\mathrm{M}}=8h^{-1}\mathrm{Mpc})$ given by observations. The cross section for lensing is $\sigma(M,z)=\pi y_\mathrm{cr}^2r_0^2\vartheta(M-M_{\mathrm{min}})$, where $\vartheta(x)$ is the step function and $M_{\mathrm{min}}$ is the minimum mass of halos above which lenses can produce images with separations greater than $\Delta\theta$. The minimum mass $M_{\mathrm{min}}$ can be derived directly from the relationship between the mass of a lens halo and the corresponding image separation, as follows. It is obvious from Figure 1 that the separation between the outer two images is almost independent of the source position $y$. Thus, similar to the analysis for NFW profiles \citep{li02}, the image separation $\Delta x(y)$ produced by an NTIS halo for a source at $y$ can be well approximated by $\Delta x(0)=2x_0$, where $x_0$ is the Einstein radius determined from the lens equation with $y=0$. The angular image separation is \begin{equation} \Delta\theta=\frac{\Delta x\,r_0}{D_\mathrm{L}}=\frac{2x_0r_0}{D_\mathrm{L}}. 
\end{equation} Then, from this equation and equation (\ref{r0}) we have \begin{equation} M_{\mathrm{min}}=1.26\times 10^{12}\left(\frac{1}{x_0}\right)^3 \left(\frac{\Delta\theta} {1''}\right)^3\left(\frac{D_\mathrm{L}}{c/H_0}\right)^3 [\Omega_\mathrm{m}(1+z)^3+\Omega_\Lambda] \mathrm{h}^{-1}M_{\sun}, \end{equation} which follows from cubing the image-separation relation, and where $c/H_0=2997.9h^{-1}$Mpc is the present Hubble radius. The magnification bias $B(M,z)$ is calculated numerically with \begin{equation} B(M,z)=(2/y_\mathrm{cr})\int^{y_\mathrm{cr}}_0dyy[\mu(y)]^{1.1} \end{equation} given by \citet{oguri02}, where $\mu(y)$ is the total magnification of the two outer images for a source at $y$; this can also be computed numerically. The numerical results of equation (\ref{prob1}) for NTIS lens halos are plotted in Figure 2 (\textit{thin solid line}). For comparison, the survey results of JVAS/CLASS and the predicted probability for lensing by SIS + NFW and NFW profiles are also shown. A subset of 8958 sources from the combined JVAS/CLASS survey forms a well-defined statistical sample containing 13 multiply imaged sources (lens systems) suitable for analysis of the lens statistics \citep{myers,browne03,patnaik92,king99}. The observed lensing probabilities can be easily calculated \citep{chenb,chenc} by $P_{\mathrm{obs}}(>\Delta\theta)=N(>\Delta\theta)/8958$, where $N(>\Delta\theta)$ is the number of lenses with separation greater than $\Delta\theta$ among the 13 lenses. The observational probability $P_{\mathrm{obs}}(>\Delta\theta)$ is plotted as a histogram in Figure 2. In the two-population model SIS + NFW, the galaxy-size and the cluster-size lens halos are approximated by SIS and NFW profiles, respectively \citep{sarbu01,li02, chena,chenb,chenc,chend,zhang2004}. In the one-population model NFW, lens halos of all sizes are approximated by NFW profiles \citep{li02}; this is similar to the NTIS model. The theoretically predicted lensing probabilities shown in Figure 2 are calculated separately for the three cases (NTIS, SIS+NFW and NFW) according to equation (\ref{prob1}). The differences in the lensing probability distributions for the three cases arise from their different values of the lensing cross section $\sigma(M,z)$ and magnification bias $B(M,z)$, since these two quantities are determined uniquely by the corresponding density profile. Here we have recalculated the lensing probabilities for SIS + NFW and NFW profiles according to equation (\ref{prob1}). Since the density profiles, lensing equations, and lensing cross sections for SIS and NFW profiles have been discussed many times in the literature, here we only give the final results of the lensing probabilities for these two models \citep[for details about SIS+NFW and NFW profiles, see][and references therein]{li02,chenc}. \section{Discussion and Conclusions}\label{dis} One can see clearly from Figure 2 that the probability for lensing by NTIS halos with an image separation greater than $\Delta\theta$ is far too low to match the observational results of CLASS/JVAS; it is even much less than that for NFW lenses. We thus conclude that, at least, NTIS as a model to approximate density profiles of dark matter halos is ruled out by statistical strong lensing. Within the framework of statistics of strong lensing as displayed in the literature, no mechanism can save this model. 
In fact, the above conclusion is general for any mass profile with a flat soft core characterized by the core density $\rho_\mathrm{core}$ and core radius $r_\mathrm{core}$ (these two parameters are determined by rotation curves of dark matter dominated dwarf and LSB disk galaxies). To see this, we revisit another such density profile, an isothermal sphere with a soft core (cored isothermal sphere, CIS): $\rho(r)=\sigma_\mathrm{v}^2/[2\pi G(r^2+r_\mathrm{core}^2)]$ \citep{hinshaw,chiba,cheng}, where $\sigma_\mathrm{v}$ is the one-dimensional velocity dispersion. The lens equation is \begin{equation} y=x-2\kappa_\mathrm{c}^{\mathrm{CIS}}(\sqrt{x^2+1}-1)/x, \label{lenseq2} \end{equation} where $y$ and $x$ are defined in the same way as in equation (\ref{lenseq1}) (here $x=\xi/r_\mathrm{core}$) and \begin{equation} \kappa_\mathrm{c}^{\mathrm{CIS}}=\frac{\sigma_\mathrm{v}^2}{2Gr_\mathrm{core} \Sigma_\mathrm{crit}} \label{kc2} \end{equation} is the central convergence. Similarly, multiple images can be produced if and only if $\kappa_\mathrm{c}^{\mathrm{CIS}}>1$. The corresponding lens equation is plotted in Figure 3 for three different values of $\kappa_\mathrm{c}^{\mathrm{CIS}}$; the curves are quite similar to those of the NTIS model, even though their density profiles seem quite different. It is not difficult to understand the extremely low value of the probability for lensing by dark halos with a flat soft core. In fact, in our previous calculations, for the NTIS profile, $\kappa_\mathrm{c}$ can be written in the form \begin{equation} \kappa_\mathrm{c}=475.7\frac{\rho_\mathrm{c}^{2/3}(z_\mathrm{coll})} {\Sigma_\mathrm{crit}}M_{200}^{1/3}=3.65M^{1/3}_{15} [\Omega_\mathrm{m}(1+z)^3+\Omega_{\Lambda}]^{2/3} \frac{D_\mathrm{R}}{c/H_0}, \label{kc3} \end{equation} where $D_\mathrm{R}=D_\mathrm{L}D_\mathrm{LS}/D_\mathrm{S}$. We have defined, as usual, the mass of a dark matter halo to be $M=M_{200}=4\pi\int_0^{r_{200}}\rho(r)r^2dr$ ($r_{200}$ has its usual definition) and assume that the redshift $z_\mathrm{coll}$ at which the dark halo collapsed is equal to the lens redshift $z_\mathrm{L}$; such an assumption does not affect our results \citep{shapiro04}. For a source at $z_\mathrm{s}=3.0$ and the lens at $z_\mathrm{L}=0.5$, equation (\ref{kc3}) gives $\kappa_\mathrm{c}=1.08M_{15}^{1/3}$. In this case, $\kappa_\mathrm{c}=1$ implies $M_{15}\sim 1$, and this means that for this typical lens system, multiple images can be produced only when the lens mass is higher than $10^{15}\mathrm{h}^{-1}M_{\sun}$, which is the typical mass of galaxy clusters. By setting $\kappa_\mathrm{c}=1$ in equation (\ref{kc3}), the minimum lens mass for producing multiple images is determined as a function of both $z_\mathrm{L}$ and $z_\mathrm{s}$. In Figure 4, we plot the mass $M(\kappa_\mathrm{c}=1)$ (measured in $M_{15}$) as a function of $z_\mathrm{s}$ for three given values of $z_\mathrm{L}$: 0.1, 0.5, and 1.0. For a given $z_\mathrm{L}$, the minimum mass decreases with increasing $z_\mathrm{s}$. Since the mean value of the redshift of the sources is $\langle z_\mathrm{s}\rangle=1.27$ in our calculations, the actual mean value of the mass for producing multiple images (for lenses at $z_\mathrm{L}=0.5$) is larger than $10^{15}\mathrm{h}^{-1}M_{\sun}$. It is also obvious in Figure 4 that in most cases, multiple images can be produced only when the halo mass is larger than $10^{15}\mathrm{h}^{-1}M_{\sun}$. 
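The figure quoted above, $\kappa_\mathrm{c}=1.08M_{15}^{1/3}$ for $z_\mathrm{s}=3.0$ and $z_\mathrm{L}=0.5$, can be checked with a few lines of code (our addition), using the angular diameter distances of the flat $\Lambda$CDM model and noting that $[\Omega_\mathrm{m}(1+z)^3+\Omega_\Lambda]^{2/3}=E(z)^{4/3}$.
\begin{verbatim}
# Check of kappa_c = 3.65 M15^(1/3) E(z)^(4/3) D_R/(c/H0).
import numpy as np
from scipy.integrate import quad

OM, OL = 0.3, 0.7
E = lambda z: np.sqrt(OM * (1 + z) ** 3 + OL)
Dc = lambda z: quad(lambda u: 1.0 / E(u), 0.0, z)[0]  # comoving, c/H0 units

z_L, z_s = 0.5, 3.0
D_L = Dc(z_L) / (1 + z_L)                 # angular diameter distances
D_S = Dc(z_s) / (1 + z_s)
D_LS = (Dc(z_s) - Dc(z_L)) / (1 + z_s)
D_R = D_L * D_LS / D_S

print(3.65 * E(z_L) ** (4.0 / 3.0) * D_R)  # ~1.08: kappa_c per M15^(1/3)
\end{verbatim}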
From equation (\ref{prob1}), we know that the lensing probability is determined by the number density of dark halos and the cross section. As pointed out by \citet{shapiro04}, an NFW profile would produce more multiple-image lenses than an NTIS one at relatively lower masses, and this trend is reversed at higher masses. Statistically, however, it is the requirement of $\kappa_\mathrm{c}>1$, and thus the existence of a large core radius, that strongly limits the population of lensing halos [number density $\bar{n}(M,z)$] to quite a small fraction. Thus, the extremely low value of the probability for lensing by NTIS halos arises from the quite low value of the corresponding number density of galaxy clusters. Furthermore, the lensing cross section $\sigma\sim y_\mathrm{cr}^2$, and from Figure 1 and Figure 3 we see that $y_\mathrm{cr}$ is quite sensitive to $\Delta\kappa_\mathrm{c}=\kappa_\mathrm{c}-1$ (for $\kappa_\mathrm{c}>1$) rather than to $\kappa_\mathrm{c}$ itself. Therefore, a slight change in $\kappa_\mathrm{c}\sim M^{1/3}$ will result in a large change both in $y_\mathrm{cr}$ and in $x_0$ (the Einstein radius), and since $x_0\sim \Delta\theta$, this explains the quite flat curve for the NTIS lensing probability in Figure 2 (\textit{thin solid line}). Namely, the insensitivity of the NTIS lensing probability to $\Delta\theta$ reflects the fact that within image separations of a few arcseconds, the lens mass, and thus the corresponding number density of dark halos around that mass, changes only a little. Similarly, for a CIS profile, after solving the equations \begin{eqnarray} M&=&800\pi\rho_\mathrm{c}r_{200}^3/3 \label{mball} \\ &=&4\pi\int^{r_{200}}_0\mathrm{d}r\,r^2 \sigma_\mathrm{v}^2/[2\pi G(r^2+r_\mathrm{core}^2)]\\ &\approx&2\sigma_\mathrm{v}^2(r_{200}-\pi r_\mathrm{core}/2)/G \label{sigmav} \end{eqnarray} for $r_{200}$ and $\sigma_\mathrm{v}^2$, $\kappa_\mathrm{c}^{\mathrm{CIS}}$ can be related to the dark halo mass $M$. The above analysis for the NTIS profile then also holds for the CIS model, provided the core radius $r_\mathrm{core}$ is large enough to fit the observed rotation curves of the dwarf and LSB disk galaxies. Note that our conclusion that a CIS model as a cored mass profile is ruled out by statistical strong gravitational lensing seems inconsistent with previous works \citep{hinshaw,chiba,cheng}, since these authors used this model to constrain cosmological parameters. A simple analysis, however, shows that there is no discrepancy. The central convergence can be expressed in terms of the mass and the core radius of a lens halo. For NTIS, from equation (\ref{r0}) and equation (\ref{kc1}), we have $\kappa_\mathrm{c}\propto M/r_0^2$; for CIS, from equation (\ref{kc2}), equation (\ref{mball}) and equation (\ref{sigmav}) we have (approximately) $\kappa_\mathrm{c}^{\mathrm{CIS}}\propto M^{2/3}/r_\mathrm{core}$. A larger $r_\mathrm{core}$ requires a larger $M$ to ensure $\kappa_\mathrm{c}^{\mathrm{CIS}}\geq 1$; conversely, for a fixed mass $M$, a suitably smaller value of $r_\mathrm{core}$ would ensure $\kappa_\mathrm{c}^{\mathrm{CIS}}\geq 1$. Therefore, if $r_\mathrm{core}$ is not determined by the currently observed rotation curves of the dwarf and LSB disk galaxies, but rather is adjustable \citep{hinshaw,chiba,cheng}, then no discrepancy appears. A more realistic mass profile should not be divergent at the center (cusp), that is, it should have a flat core, but the core radius $r_\mathrm{core}$ should be small enough to ensure that the lensing probability matches the observations of CLASS/JVAS. 
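For completeness, the CIS mapping from $(M, r_\mathrm{core})$ to $\kappa_\mathrm{c}^{\mathrm{CIS}}$ via equations (\ref{mball})--(\ref{sigmav}) and (\ref{kc2}) takes only a few lines (our addition); the unit conventions are stated in the comments, and $\Sigma_\mathrm{crit}$ must be supplied for a chosen lens geometry.
\begin{verbatim}
# CIS mapping (M, r_core) -> kappa_c^CIS.  Assumed units: M in h^-1 Msun,
# radii in h^-1 Mpc, Sigma_crit in h Msun/Mpc^2;
# G = 4.302e-9 Mpc Msun^-1 (km/s)^2, rho_c(0) = 2.777e11 h^2 Msun/Mpc^3.
import numpy as np

RHO_C0, G = 2.777e11, 4.302e-9

def kappa_c_cis(M, r_core, Sigma_crit, z=0.0):
    rho_c = RHO_C0 * (0.3 * (1 + z) ** 3 + 0.7)
    r200 = (3 * M / (800 * np.pi * rho_c)) ** (1.0 / 3.0)   # eq. (mball)
    sigma_v2 = G * M / (2 * (r200 - np.pi * r_core / 2))    # eq. (sigmav)
    return sigma_v2 / (2 * G * r_core * Sigma_crit)         # eq. (kc2)
\end{verbatim}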
\begin{acknowledgements} I thank the anonymous referee for quite useful suggestions. This work was supported by the National Natural Science Foundation of China under grant 10233040. \end{acknowledgements}
\section{Introduction}\label{intro} In the last ten years or so, Internet-of-Things (IoT) has become mainstream in communications technology. By connecting billions of devices to the Internet, IoT allows collecting and sharing a massive amount of data with ease. IoT enables objects to perform various tasks without human interaction. A ``thing'' in an IoT network can be a farm animal carrying a biochip transponder, a home security sensor that alerts its owner when there is a break-in, a heart monitor implanted inside a person, or anything else that can be assigned an address and is able to transfer data over the Internet \cite{Fuq15,Zan14}. In many cases, the ubiquitous application of IoT requires exceptional energy efficiency of the end nodes (end devices), where battery lifetime is expected to last for many years. Moreover, business opportunities for new services in various fields such as logistic tracking, traffic control, environmental monitoring and personal healthcare \cite{Gha19} demand reliable communications over a long range (on the order of ten kilometers). These low-power and long-range requirements have led to the development of Low-Power Wide-Area Network (LPWAN) solutions for IoT applications. Different from short-range wireless networks such as Bluetooth, Zigbee and Z-Wave (which are designed to cover very short distances with minimal power consumption) and cellular networks (which are geared towards very high data rate transmission with higher transmission power), LPWANs trade transmission rate for very large coverage and excellent energy efficiency \cite{Raz17}. Prominent LPWAN technologies include Ingenu \cite{Ingenu}, DASH7 \cite{Weyn15}, LoRaWAN \cite{lorawan,Weyn15} and SigFox \cite{Sigfox}. These LPWANs are expected to co-exist in order to provide connectivity for billions of IoT devices \cite{Centenaro16,Augustin16}. This paper is specifically concerned with the physical layer of LoRaWAN. This technology is experiencing tremendous commercial growth in more than 100 countries around the world\footnote{https://lora-alliance.org/}. The physical layer of LoRaWAN is built on the type of modulation known as chirp spread spectrum (CSS), which is also commonly referred to as LoRa modulation \cite{Rey16,Vangelista17}. In this paper, the terms CSS and LoRa are used interchangeably. CSS is known for its great flexibility in providing a trade-off between reception sensitivity and data rate. The spreading factor (SF) is the most important parameter in CSS modulation. Increasing the SF can significantly extend the communication range, but it comes at the cost of a lower transmission rate. Bandwidth is another adjustable parameter in LoRaWAN. As expected, using a larger bandwidth enhances the transmission rate and, at the same time, provides better immunity to narrow-band noise and ingress. Evaluations of the link-level as well as system-level performance of LoRaWAN can be found in \cite{Feltrin18,Nguyen19}. Since the introduction of LoRaWAN, there has been active research in improving various aspects of LoRa modulation. For example, an efficient design of the LoRa transmitter is presented in \cite{Nguyen19,TungUS21} that eliminates the use of ROM and also introduces pulse shaping to improve the spectral compactness of LoRa signals. Since having low data rates is considered a major disadvantage of the conventional LoRa modulation, a large research effort was also devoted to improving the data rates of LoRaWAN.
Using the starting phases of CSS symbols to carry additional information bits is explored in \cite{Nguyen19,Bom19} and the resulting scheme is called phase-shift keying CSS (PSK-CSS), or phase-shift keying LoRa (PSK-LoRa). In \cite{Almeida20}, superposition of in-phase and quadrature CSS signals is introduced to double the spectral efficiency of the conventional LoRa. The authors in \cite{InterleavedCSS} propose to use interleaved chirps along with linear up chirps to double the number of chirp signals, hence increasing the data rate by one bit per symbol. Another approach to double the number of chirp signals is introduced in \cite{SSKLoRa}, in which down chirps are used instead of the interleaved chirps. It is demonstrated in \cite{SSKLoRa} via correlation and bit error rate (BER) analysis that using down chirps instead of interleaved chirps reduces the peak correlation between the added signal set and the original signal set, hence improving the BER performance. More recently, the concept of index modulation has been applied to chirps in \cite{HanifUS20,hanif2021frequencyshift}, which leads to a novel CSS-based modulation scheme that can provide much higher data rates than the conventional LoRa modulation scheme. While much research has been conducted on improving the conventional LoRa modulation and transmitter, detailed studies and designs of \emph{practical} LoRa receivers are missing in the literature. The commercialized LoRa physical-layer solution is currently patented by Semtech \cite{SemtechAN22,Horn10} and not described in detail \cite{sforza13}. In all the research papers discussed above, the LoRa receivers operate under the idealized assumption of \emph{perfect} synchronization, i.e., of having no timing and frequency errors at the receiver \cite{sforza13,Rey16,Ouyang17,Van17,Mar19}. In practice, however, timing errors are unavoidable due to the imperfection of the preamble detection process, whereas frequency errors arise from the fact that the crystal oscillators at the transmitter and the receiver are physically separated and different. In reality, synchronization plays a crucial role in the final detection of the information bits. Large synchronization errors, if not accounted for, lead to severe interference (non-orthogonality) among chirps, and consequently unacceptable/unreliable detection performance. Against the above background, this paper develops and presents a detailed design of practical non-coherent and coherent receivers for CSS modulation. In particular, the proposed design includes circuits for timing, frequency and phase synchronization. The coarse timing and frequency synchronization is accomplished based on the preamble and can be used in a non-coherent receiver. Moreover, fine timing, frequency and phase synchronization is achieved using data symbols in the CSS burst. It is emphasized that, with fine synchronization, the proposed design not only enables coherent detection of conventional CSS modulation under practical scenarios with timing and frequency offsets, but also allows a practical implementation of the PSK-CSS scheme proposed in \cite{Nguyen19,Bom19} to provide higher data rates than the conventional CSS modulation scheme. Specifically, once the receiver's phase is locked, the PSK modulated bits can be recovered by examining the phase of the discrete Fourier transform (DFT) bin having the highest magnitude. The main ideas and algorithms presented in this paper were originally described in \cite{CSS-TxRx}.
The remainder of this paper is organized as follows. Section \ref{sec-TX} reviews the transmitter of the conventional CSS system, as well as both non-coherent and coherent detection methods under the ideal channel condition, i.e., without any timing and frequency offsets between the transmitter and the receiver. It also reviews the PSK-CSS scheme proposed in \cite{Nguyen19,Bom19} to increase the data rates of the conventional CSS system. Section \ref{sec-RX} presents the proposed design of practical non-coherent and coherent receivers, which includes circuits for timing, frequency and phase synchronization. Simulation results and discussion are provided in Section \ref{sec-res}. Section \ref{sec-con} concludes the paper. \section{Conventional CSS and PSK-CSS Systems}\label{sec-TX} \subsection{Conventional CSS Transmitter} CSS or LoRa modulation is built on a set of $M$ chirps, which are signals whose frequency linearly increases or decreases over time. All $M$ chirps are orthogonal over the symbol duration $T_{\rm sym}$ and distinguished by properly chosen starting frequencies. This means that each chirp, or LoRa symbol, can carry ${\rm SF}=\log_2 M$ bits, where ${\rm SF}$ is known as the \emph{spreading factor}. In practice, ${\rm SF}$ takes values in the set $\{7, 8, \ldots, 12\}$. Let $B$ be the bandwidth and $T_s = 1/B$ the sampling period. Then each LoRa symbol can be represented by exactly $M = T_{\rm sym}/T_s = 2^{\textup{SF}}$ samples. Furthermore, the baseband discrete-time basic up chirp (of length $M$ samples) is given as \cite{Nguyen19}: \begin{equation}\label{x0n} x_0[n]=\exp\left(j2\pi\left(\frac{ n^2}{2M}-\frac{n}{2}\right)\right),\; n=0,1,\ldots, M-1. \end{equation} Then, the set of $M$ orthogonal chirps can be simply constructed from $x_0[n]$ as $x_m[n]=x_0[n+m]$, $m=0,1,\ldots, M-1$; the shift is effectively cyclic because $x_0[n]$ in \eqref{x0n} is periodic with period $M$. \begin{figure*}[htb!] \centering \includegraphics[scale=0.80]{figures/CSS_transmitter_HN_TN.pdf} \caption{Block diagram of a LUT-based LoRa transmitter with spectral shaping.} \label{fig-css-tx} \end{figure*} Fig.~\ref{fig-css-tx} shows the block diagram of a conventional lookup-table (LUT) based LoRa transmitter. The most important component of the transmitter is a ROM that stores the $M$ samples of the basic chirp in \eqref{x0n}. The binary data is converted to a stream of LoRa symbols, each carrying ${\rm SF}$ bits. Mapping from a LoRa symbol to a transmitted chirp is performed by cyclically shifting the basic chirp by an amount equivalent to the symbol value $m$, producing a corresponding digital complex baseband chirp $x_m[n]$. Each digital baseband chirp is then up-sampled (by a factor of $L$) and spectrally shaped by a pair of square-root raised cosine (SRRC) filters before being converted to an analog baseband signal. The benefit of implementing SRRC filters in the LoRa transmitter is demonstrated in \cite{Nguyen19}. Furthermore, a very efficient implementation of the LoRa transmitter that does not require a ROM is also presented in \cite{Nguyen19}. Since the focus of this paper is on the receiver design, the LUT-based transmitter is used for ease of explanation. \subsection{Detection of CSS Signals under Perfect Synchronization} Before presenting the proposed design of non-coherent and coherent receivers for LoRa signals in the next section, it is relevant to review the detection algorithms of LoRa signals under \emph{perfect} timing and frequency synchronization that are widely adopted in the literature.
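As a numerical companion to the chirp construction above, the following minimal sketch (an illustration under the stated definitions, not the authors' implementation) builds the chirp set for a hypothetical ${\rm SF}=8$ and verifies that the $M$ chirps are orthonormal over one symbol:

```python
# Minimal sketch (not the authors' implementation): build the chirp set of
# eq. (x0n) and verify that the M chirps are orthogonal over one symbol.
import numpy as np

SF = 8
M = 2**SF
n = np.arange(M)

x0 = np.exp(1j * 2 * np.pi * (n**2 / (2 * M) - n / 2))   # basic up chirp

# x0 is periodic with period M, so a plain index rotation realizes the
# cyclic shift: x_m[n] = x0[(n + m) mod M].
chirps = np.stack([np.roll(x0, -m) for m in range(M)])   # row m = x_m

gram = chirps @ chirps.conj().T / M                      # inner products
print(np.allclose(gram, np.eye(M), atol=1e-10))          # True: orthonormal
```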
Without the use of SRRC filters and under perfect timing and frequency synchronization and additive white Gaussian noise (AWGN), the baseband received signal (after being down-converted and sampled) can be expressed as \begin{equation} y_m[n] = \exp(j \psi)x_m[n] + w[n], \; n = 0,1,\ldots,M-1. \end{equation} In the above expression, $\psi$ is a random phase rotation caused by the fact that the oscillators at the transmitter and receiver are not phase-locked, and $w[n]$ represents AWGN samples that are independent and identically distributed with zero mean and variance $N_0$. For such an input/output model, it is simple to see that the signal-to-noise ratio (SNR) is $\text{SNR}~=~\frac{1}{N_0}$. Detection of LoRa signals starts with a multiplication by the complex conjugate of the basic up chirp, a process known as \emph{de-chirping}. This yields \begin{eqnarray} \label{vmn} u_m[n]&=& y_m[n] x^*_0[n] {\nonumber} \\ &=& \underbrace{\exp\left\{j \psi_m\right\}}_{\text{constant phase}} \underbrace{\exp \left\{ \frac{ j 2 \pi nm}{M} \right\}}_\text{linear phase} + \hat{w}[n],{\nonumber}\\ &&\qquad n=0,1,\ldots, M-1, \end{eqnarray} where the constant phase term is $\psi_m = \psi + 2 \pi \left( \frac{m^2}{2M} - \frac{m}{2}\right)$ and $\hat{w}[n]=w[n] x^*_0[n]$ is also a Gaussian noise sample with zero mean and variance $N_0$. Thus, in the absence of noise, the received signal after de-chirping is a pure sinusoid with a frequency of ${m}/{M}$ (cycles/sample). This suggests performing an $M$-point DFT on the de-chirped signal to obtain \begin{eqnarray} \lefteqn{v_m[k] = \frac{1}{\sqrt{M}}\sum_{n=0}^{M-1} u_m[n] \exp\left\{ \frac{-j 2 \pi n k}{M} \right\}}{\nonumber}\\ & & = \frac{1}{\sqrt{M}}\sum_{n=0}^{M-1} \exp\left\{j \psi_m \right\} \exp \left\{ \frac{ j 2 \pi nm}{M} \right\} \exp \left\{ \frac{-j 2 \pi n k}{M} \right\} {\nonumber}\\ && + \underbrace{\frac{1}{\sqrt{M}} \sum_{n=0}^{M-1} \hat{w}[n] \exp \left\{ \frac{-j 2 \pi n k}{M} \right\}}_{W[k]} {\nonumber} \\ & & = \frac{\exp\left\{j \psi_m \right\}}{\sqrt{M}} \sum_{n=0}^{M-1} \exp \left\{ \frac{ j 2 \pi n(m-k)}{M} \right\} + W[k] {\nonumber} \\ & & = \begin{cases} \sqrt{M} \exp\left\{j \psi_m \right\} + W[m], \quad \text{if } k = m \\W[k], \quad \text{otherwise} \end{cases} \end{eqnarray} The above analysis shows that all the power in the de-chirped signal is concentrated at a single frequency bin, namely the $m$th bin if the transmitted LoRa symbol is $m$, whereas all the other $M-1$ bins contain only noise. At this point, it is useful to define a peak SNR (PSNR) as \begin{equation} \label{psnr} \text{PSNR} = \frac{ \mathbb{E} \left\{ \left\vert \sqrt{M} \exp\left\{j \psi_m \right\}\right\vert^2 \right\} }{\mathbb{E}\left\{\vert W[k] \vert^2\right\}}=\frac{M}{N_0}, \end{equation} which indicates the power difference between the ``signal'' bin and the remaining (``noise'') bins. It is obvious that a higher PSNR leads to a better chance of detecting the transmitted symbol correctly. The ratio between PSNR and SNR, which is $M$ in linear scale, is called the \emph{processing gain}. Using a higher SF leads to a higher processing gain and thus improves the performance of the transmission, but at the cost of a lower transmission rate. Finally, the transmitted LoRa symbol can be detected coherently or non-coherently. If the receiver can be phase-locked to the transmitter (which requires a phase-locked loop (PLL) at the receiver), the phase offset $\psi$ can be estimated and the constant phase term $\psi_m$ can be computed for each LoRa symbol $m$.
It follows that coherent detection can be performed as $\hat{m}=\underset{k=0,\ldots,M-1}{\text{argmax}}\, \Re\left\{v_m[k] \exp\{ -j \psi_k \}\right\}$, where $\psi_k$ is the constant phase term computed under the hypothesis that symbol $k$ was sent. On the other hand, non-coherent detection does not require estimating $\psi$ and is performed as $\hat{m}=\underset{k=0,\ldots,M-1}{\text{argmax}}\left\vert v_m[k] \right\vert$. \subsection{PSK-CSS Scheme} Performance analysis and comparison of both coherent and non-coherent detection algorithms under perfect timing, frequency and phase synchronization are presented in \cite{Nguyen19} for the conventional CSS system. While performing slightly better than non-coherent detection, the extra complexity required by coherent detection does not justify its use in existing LoRa systems. However, the situation is different when there is a need to increase the data rates of existing LoRa systems by means of embedding additional information bits into the starting phases of LoRa symbols. This scheme is known as PSK-CSS or PSK-LoRa \cite{Nguyen19,Bom19}. Specifically, using a $Q$-ary PSK constellation, a transmitted chirp (symbol) in the PSK-CSS system is defined by two values: the symbol number $m$, $0\leq m\leq M-1$ (as in the conventional CSS), and the phase symbol $p$, $0\leq p\leq Q-1$. The discrete-time baseband samples of such a transmitted chirp are simply $x_m[n] \mathrm{exp}\left\{j 2\pi (p/Q)\right\} = x_0[n+m] \mathrm{exp}\left\{j 2\pi (p/Q)\right\}$. It is obvious that for such an improved design of the PSK-CSS system, the use of a coherent receiver is a must. An efficient design of a practical coherent receiver, including timing, frequency and phase synchronization, is presented in the next section. \section{Proposed Design of Non-Coherent and Coherent Receivers}\label{sec-RX} Any practical receiver for digital communications over radio frequency (RF) needs to have a timing recovery circuit. Furthermore, a coherent receiver also requires phase and frequency recovery circuits. In this section, we first present a method to detect the timing error by processing the received preamble symbols under perfect frequency synchronization (Section \ref{sec-timing}). The method is then extended in Section \ref{sec-timing-freq} to detect both timing and frequency errors, and hence serves as coarse timing and frequency synchronization that is based on preamble symbols only. The developed coarse timing and frequency synchronization circuits are employed in the proposed non-coherent receiver. Then Section \ref{sec-fine} presents fine timing, frequency and phase synchronization that is based on data-directed symbols. Such fine synchronization is employed in the proposed coherent receiver to achieve excellent BER performance of the PSK-CSS scheme. \subsection{Timing Estimation Under Perfect Frequency Synchronization}\label{sec-timing} A data burst sent from the transmitter to the receiver always starts with a preamble, which consists of one or multiple basic down chirps, followed by data symbols. To aid timing detection, the basic down chirp is used to create a preamble sequence. A basic down chirp is simply the complex conjugate of the basic up chirp, given as \begin{equation} x_0^*[n] = \exp \left\{-j 2 \pi \left( \frac{n^2}{2M} - \frac{n}{2} \right)\right\}, \; n = 0,1, \ldots, M-1. \end{equation} The processing of LoRa signals is similar to that of OFDM signals in the sense that the received signal is processed block-by-block (i.e., symbol-by-symbol), where each block (i.e., symbol) consists of $M$ samples.
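To make this block-by-block processing concrete, the sketch below (again an illustration with hypothetical parameter values, not the authors' code) implements the perfect-synchronization detection chain reviewed in Section \ref{sec-TX}: de-chirping, an $M$-point DFT, and a non-coherent peak search:

```python
# Sketch of the detection chain under perfect synchronization: de-chirp,
# M-point DFT, then non-coherent detection via the peak-magnitude bin.
import numpy as np

rng = np.random.default_rng(0)
SF = 8
M = 2**SF
n = np.arange(M)
x0 = np.exp(1j * 2 * np.pi * (n**2 / (2 * M) - n / 2))

m_tx = 42                                   # transmitted LoRa symbol (hypothetical)
psi = rng.uniform(0, 2 * np.pi)             # unknown phase rotation
N0 = 10**(12 / 10)                          # SNR = 1/N0 = -12 dB
w = np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

y = np.exp(1j * psi) * np.roll(x0, -m_tx) + w   # received symbol
u = y * np.conj(x0)                             # de-chirping, eq. (vmn)
v = np.fft.fft(u) / np.sqrt(M)                  # M-point DFT

m_hat = int(np.argmax(np.abs(v)))               # non-coherent detection
print(m_hat == m_tx)                            # True with high probability
```

At this SNR the peak-SNR is $M/N_0 \approx 12$ dB, so the signal bin typically stands well above the noise bins, as predicted by \eqref{psnr}.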
Typically, at the receiver, there is a burst detector that detects the beginning of a LoRa burst based on a sudden rise in the incoming signal power. In other words, the burst detector is a \emph{coarse} timing detector. Because the coarse timing detector is not very accurate, there would be inter-symbol interference (ISI) in the obtained preamble samples, i.e., each obtained preamble symbol might be affected by data samples. However, the ISI can be avoided by prepending a cyclic prefix to the preamble symbol, or by simply repeating the preamble symbol multiple times. Repetition guarantees that only the first received preamble symbol might suffer ISI, while the subsequent preamble symbols do not. The repetition also gives better preamble detection under the influence of noise. The preamble sequence can be written as \begin{multline} p[n] = x_0^*[(n-N_{\rm CP}) \mod M], \\ n = 0, 1, \ldots, MN_{\rm PR}+N_{\rm CP}-1, \end{multline} where $N_{\rm CP}$ is the number of cyclic prefix samples and $N_{\rm PR}$ is the number of repeated basic chirps (symbols) in the preamble sequence. Fig.~\ref{fig-preamble} shows an example of a LoRa preamble with $N_{\rm CP} = M/2$ and $N_{\rm PR} = 2$, followed by 4 data symbols. \begin{figure*}[htb!] \centering \includegraphics[width=0.850\textwidth]{figures/preamble-eps-converted-to.pdf} \caption{Example of a preamble with $N_{\rm CP} = M/2$ and $N_{\rm PR} = 2$.} \label{fig-preamble} \end{figure*} With the implementation of pulse shaping filters, the received preamble after removing the CP can be shown to be \begin{multline} \label{rn0} r[n] = \sum_{l=0}^{L-1} \hat{h}[l] p[n + N_{\rm CP}-l] + \eta[n], \\ n = 0, 1, \ldots, MN_{\rm PR}+N_{\rm CP}-1, \end{multline} where $\eta[n]$ is an AWGN sample and $\hat{h}[l]$ represents the \emph{composite} channel consisting of all filters between the transmitter and the receiver. Specifically, \begin{equation} \hat{h}[l] = h_{\rm tx}[l] * h_{\rm up}[l] * h_{\rm ch}[l] * h_{\rm down}[l] * h_{\rm rx}[l], \end{equation} where $h_{\rm tx}[l]$ is the pulse shaping filter at the transmitter (which is an SRRC filter), $h_{\rm up}[l]$ is the interpolation filter at the transmitter (to remove aliases caused by upsampling), $h_{\rm ch}[l]$ is the discrete-time impulse response of the channel, $h_{\rm down}[l]$ is the decimation filter at the receiver (to reject aliases prior to downsampling), and $h_{\rm rx}[l]$ is the matched filter at the receiver (which is also an SRRC filter). Because LoRa signals have narrow bandwidths (125, 250, or 500 kHz), the channel is assumed to have a constant frequency response and can be represented as a pure delay, $h_{\rm ch}[l] = \delta(l - \tau)$, where $\tau$ represents the composite effect of the channel delay and the coarse timing estimation error. Note that $\tau$ has both integer and fractional components. Since the pair of SRRC filters has the narrowest bandwidth in the system, and because the combination of the decimation filter, $h_{\rm down}[l]$, and the interpolation filter, $h_{\rm up}[l]$, yields a flat spectrum over the bandwidth of the SRRC filters, the composite channel is well approximated as \begin{equation} \hat{h}[l] \approx h_{\rm RC}(l - \tau), \end{equation} where $h_{\rm RC}(l)$ is the impulse response of a raised cosine (RC) filter, which results from the convolution of the two SRRC filters, $h_{\rm tx}[l]$ and $h_{\rm rx}[l]$.
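This approximation is easy to verify numerically. The sketch below assumes a standard textbook closed form of the SRRC pulse (the paper does not state one, so that expression is an assumption on our part) and checks that its self-convolution matches the RC pulse defined next:

```python
# Numerical check (illustrative; a standard textbook SRRC form is assumed):
# the convolution of two SRRC pulses approximates the RC pulse h_RC used
# for the composite channel model.
import numpy as np

BETA = 0.25          # roll-off factor (value used later in the simulations)
OS = 16              # samples per symbol interval for this check
SPAN = 16            # pulse span, in symbol intervals, on each side

t = np.arange(-SPAN * OS, SPAN * OS + 1) / OS   # time in symbol intervals

def srrc(t, b):
    """Square-root raised-cosine pulse (unit symbol interval, unit energy)."""
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    t0 = np.isclose(t, 0.0)                     # special point t = 0
    ts = np.isclose(np.abs(t), 1.0 / (4 * b))   # special points t = +-1/(4b)
    reg = ~(t0 | ts)
    tr = t[reg]
    num = np.sin(np.pi * tr * (1 - b)) + 4 * b * tr * np.cos(np.pi * tr * (1 + b))
    den = np.pi * tr * (1 - (4 * b * tr)**2)
    out[reg] = num / den
    out[t0] = 1 - b + 4 * b / np.pi
    out[ts] = (b / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * b))
                                  + (1 - 2 / np.pi) * np.cos(np.pi / (4 * b)))
    return out

def rc(t, b):
    """Raised-cosine pulse: sinc(t) cos(pi b t) / (1 - (2 b t)^2)."""
    den = 1.0 - (2 * b * t)**2
    den = np.where(np.abs(den) < 1e-12, 1e-12, den)  # numerator vanishes there too
    return np.sinc(t) * np.cos(np.pi * b * t) / den

g = srrc(t, BETA)
h = np.convolve(g, g) / OS                      # SRRC * SRRC
mid = len(h) // 2
ctr = len(t) // 2
diff = h[mid - 4 * OS: mid + 4 * OS + 1] - rc(t[ctr - 4 * OS: ctr + 4 * OS + 1], BETA)
print(np.max(np.abs(diff)))   # small; shrinks as OS and SPAN grow (truncation)
```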
The impulse response of the RC filter is given as \begin{multline} h_{\rm RC}(l) = \frac{\sin(\pi l ) }{ \pi l} \frac{\cos(\beta \pi l)}{ 1 - (2 \beta l)^2 }, \; l = 0, 1, \ldots, L-1, \end{multline} where $\beta$ is the roll-off factor and $L$ is the truncated length of the RC filter. Substituting the above approximation of $\hat{h}[l]$ into \eqref{rn0} yields \begin{multline} \label{rn} r[n] = \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) p [n + N_{\rm CP}-l] + \eta[n], \\ n = 0, 1, \ldots, MN_{\rm PR}+N_{\rm CP}-1. \end{multline} Assuming $N_{\rm CP} \geq L$, i.e., a \emph{sufficient} cyclic prefix, there is no ISI in the preamble processing. Moreover, to reduce the effect of noise, the $N_{\rm PR}$ consecutive preamble symbols in the preamble sequence are accumulated into one block of $M$ samples for preamble processing, i.e., \begin{eqnarray} \small \label{yn} & y[n] &= \frac{1}{N_{\rm PR}} \sum_{b=0}^{N_{\rm PR}-1} r[n + bM] ,\quad n=0,1,\ldots, M-1 {\nonumber} \\ & &\hspace{-1cm} = \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) x_0^*[n-l] + \frac{\sum_{b=0}^{N_{\rm PR}-1}\eta[n+bM]}{N_{\rm PR}}. \label{snrpr} \end{eqnarray} Because the noise samples are uncorrelated, the noise power is reduced by a factor of $N_{\rm PR}$ by averaging. As a result, the SNR in preamble processing is proportional to the number of repeated preamble symbols used in the preamble sequence. The accumulated preamble symbol is then de-chirped to obtain \begin{eqnarray} \label{uyx} & u[n] &= y[n] x_0[n] = \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) x_0^*[n-l] x_0[n] + \hat{\eta}[n] {\nonumber} \\ & &\hspace{-1cm}= \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) \exp \left\{j 2 \pi \left( -\frac{(n-l)^2}{2M} + \frac{n-l}{2} \right) \right\} {\nonumber} \\ & & \hspace{2cm} \times \exp \left\{j 2 \pi \left( \frac{n^2}{2M} - \frac{n}{2} \right) \right\} + \hat{\eta}[n] {\nonumber} \\ & &\hspace{-1cm}= \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) \exp\left\{-j \pi \left(\frac{l^2}{M} + l \right) \right\} {\nonumber} \\ & & \hspace{3cm} \times \exp\left\{\frac{j 2 \pi nl}{M} \right\} + \hat{\eta}[n]. \end{eqnarray} The de-chirping process is followed by an $M$-point DFT, whose output is \begin{eqnarray} \label{vk} &v[k] &= \frac{1}{M}\sum_{n=0}^{M-1} u[n] \exp\left\{ \frac{-j 2 \pi nk}{M} \right\} {\nonumber} \\ & &= \frac{1}{M}\sum_{n=0}^{M-1} \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) \exp\left\{-j \pi \left(\frac{l^2}{M} + l \right) \right\} {\nonumber} \\ & & \hspace{3cm} \times \exp\left\{\frac{j 2 \pi n(l-k)}{M} \right\} + \bar{\eta}[k] {\nonumber} \\ & &= \frac{1}{M}\sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) \exp\left\{-j \pi \left(\frac{l^2}{M} + l \right) \right\} {\nonumber} \\ & & \hspace{2cm} \times \sum_{n=0}^{M-1} \exp\left\{\frac{-j 2 \pi n(k-l)}{M} \right\}+ \bar{\eta}[k] {\nonumber} \\ & &= h_{\rm RC}(k - \tau) \exp\left\{-j \pi \left(\frac{k^2}{M} + k \right) \right\} + \bar{\eta}[k]. \end{eqnarray} A very important and useful observation from \eqref{vk} is that, in the absence of noise, the magnitude of the DFT samples follows the magnitude of the RC pulse, i.e., $\big\vert v[k] \big\vert \approx\big\vert h_{\rm RC}(k - \tau) \big\vert$. This suggests that the integer timing delay can be detected by simply locating the peak magnitude of $v[k]$: \begin{equation} \label{tau_int} \tau_{\rm int} = \underset{k}{\text{argmax}} \vert v[k] \vert. \end{equation} Next, the fractional timing delay can be found by obtaining two additional samples at half-a-sample delay on both sides of the peak, denoted as $v[\tau_{\rm int}-0.5]$ and $v[\tau_{\rm int}+0.5]$.
How these samples can be obtained is explained in detail at the end of this section. Using these three samples, parabolic interpolation can be performed to estimate the true location of the peak. Specifically, the fractional timing delay is obtained as \begin{equation} \label{tau_frac} \tau_{\rm frac} = \frac{ \vert v[\tau_{\rm int}-0.5] \vert - \vert v[\tau_{\rm int}+0.5]\vert}{ 4 (\vert v[\tau_{\rm int}-0.5] \vert + \vert v[\tau_{\rm int}+0.5]\vert - 2 \vert v[\tau_{\rm int}]\vert ) }. \end{equation} Finally, combining \eqref{tau_int} and \eqref{tau_frac} yields the absolute timing delay as \begin{equation} \tau = \tau_{\rm int} + \tau_{\rm frac} \quad ({\rm UI}), \end{equation} where UI stands for unit-interval of timing delay, which is equivalent to the sampling period $T_s=1/B$ in this paper. \begin{figure}[thb!] \centering \includegraphics[width=0.45\textwidth]{figures/rc_pulse_HN-eps-converted-to.pdf} \caption{Timing detection using 3-point interpolation.} \label{fig-rc_pulse} \end{figure} Fig. \ref{fig-rc_pulse} illustrates the idea of using 3 samples around the peak magnitude to detect the true peak of an RC pulse, and hence the timing error. It can be seen that, if the fractional delay is not accounted for, the signal power spreads out over multiple taps, causing SNR reduction. Therefore, it is very important to compensate for any timing error so that all the signal power is concentrated in a single tap, namely $v[\tau_{\rm int}]$, whose location determines the demodulated symbol. \begin{figure*}[htb!] \centering \includegraphics[scale=0.70]{figures/timing_detector_TN2.pdf} \caption{Block diagram of the demodulator and timing detector under ideal frequency synchronization.} \label{fig-detector} \end{figure*} Fig. \ref{fig-detector} shows a detailed block diagram of a CSS demodulator with the proposed timing detector (under ideal frequency synchronization). The received RF signal is first amplified using a low-noise amplifier (LNA), then mixed with a local oscillator (LO) to move the analog signal to baseband. The baseband analog signal is converted to digital form using an analog-to-digital converter (ADC). Corresponding to the implementation at the transmitter, the digital signal at the output of the ADC has an up-sampling factor of $L$. This digital signal is then down-sampled to two times the baseband rate before being processed by the SRRC filter to limit the noise bandwidth and reject ISI. After the SRRC filter, the signal is split into two branches: the first branch performs CSS symbol demodulation and the second branch detects the timing of the received symbol. As shown in the figure, the symbol timing detection uses some of the information obtained from the first branch. The two branches are included inside a block named ``demodulator and timing detector'' (DTD). In the first branch, the signal is further down-sampled to baseband to obtain $y[n]$. The signal $y[n]$ is then de-chirped and processed with the $M$-point DFT, as expressed in \eqref{yn}, \eqref{uyx} and \eqref{vk}, respectively. A peak detection is performed on $v[k]$ to demodulate the symbol and to obtain the integer delay $\tau_{\rm int}$ as in \eqref{tau_int}. Compared to the signal in the first branch, the signal in the second branch is offset by a one-sample delay. Since this single-sample delay is applied at twice the baseband rate, after down-sampling by a factor of two the baseband sequence effectively experiences a half-sample delay.
With a slight abuse of notation, we denote by $y[n-0.5]$ a sequence having a half-sample delay with respect to $y[n]$, which is given as \begin{equation} \small \label{yn05} y[n-0.5] = \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau - 0.5) x_0^*[n-l] + \text{noise}. \end{equation} Performing the same signal processing operations as in \eqref{yn}--\eqref{vk} on $y[n\pm 0.5]$, we obtain $v[k\pm 0.5]$ after the DFT. It is pointed out that, since only two samples around the peak, i.e., $v[\tau_{\rm int} \pm 0.5]$, are needed, the second branch does not need a full-size $M$-point DFT, but rather only a 2-output $M$-point DFT. This should reduce the computational complexity of the timing detector. \subsection{Joint Timing and Frequency Estimation for the Proposed Non-Coherent Receiver}\label{sec-timing-freq} Besides the timing error, there is always a frequency error (or frequency offset) between the transmitter and the receiver. The amount of error depends on the accuracy of the local oscillator (LO), which is specified in parts-per-million (ppm). The typical accuracy of an LO is about $\pm 20$ ppm, which is equivalent to a frequency offset of up to 18.3 kHz between the transmitter and the receiver if the LO frequency is set at 915 MHz. The frequency offset, normalized to the DFT bin spacing $B/M$ (where $B$ is the signal bandwidth), is defined as \begin{equation} \epsilon = \frac{M \Delta_f}{B} \quad ({\rm UI}), \end{equation} where $\Delta_f$ is the offset in Hz. For example, for a system with ${\rm SF}=10$ and $B=125$ kHz, an offset of $\Delta_f=18.3$ kHz is equivalent to $\epsilon=149.91$ UI. Under both timing and frequency offsets, \eqref{rn} is rewritten as \begin{multline} \label{rn2} r[n] = \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) p[n + N_{\rm CP}-l] \exp\left\{\frac{j 2 \pi n \epsilon}{M} \right\} + \eta[n], \\ n = 0, 1, \ldots, MN_{\rm PR}+N_{\rm CP}-1. \end{multline} Let $r_b[n]$ denote the received signal corresponding to the $b$th preamble symbol after removing the cyclic prefix.
It can be expressed as \begin{eqnarray} \label{rbn} &r_b[n] &= \sum_{l=0}^{L-1} h_{\rm RC}(l - \tau)x_0^*[n-l] \exp\left\{j 2 \pi \frac{(n + Mb)\epsilon}{M} \right\} {\nonumber} \\ & & + {\eta}[n + Mb] , {\nonumber} \\ & & n = 0, 1, \ldots, M-1, \; b= 0,1, \ldots, N_{\rm PR}-1. \end{eqnarray} Performing de-chirping on $r_b[n]$ yields \begin{eqnarray} & u_b[n] & =\sum_{l=0}^{L-1} h_{\rm RC}(l-\tau) \exp\left\{-j \pi \left( \frac{l^2 + l}{M} \right) \right\} {\nonumber} \\ & & \hspace{-1cm} \times \exp\left\{ \frac{j 2 \pi nl}{M} \right\} \exp\left\{j 2 \pi \frac{(n + Mb)\epsilon}{M} \right\} + \hat{\eta}[n + Mb] {\nonumber} \\ & & = \sum_{l=0}^{L-1} h_{\rm RC}(l-\tau) \exp\left\{-j \pi \left( \frac{l^2 + l}{M} \right) \right\} {\nonumber} \\ & & \hspace{-0.3cm} \times \exp\left\{ \frac{j 2 \pi n(l+\epsilon)}{M} \right\} \exp\left\{j 2 \pi b\epsilon \right\} + \hat{\eta}[n + Mb].{\nonumber}\\ \end{eqnarray} Next, $u_b[n]$ is processed by the DFT to produce \begin{eqnarray} \label{vkb} & v_b[k]& = \frac{1}{M}\sum_{l=0}^{L-1} h_{\rm RC}(l - \tau) \exp\left\{-j \pi \left(\frac{l^2}{M} + l \right) \right\} {\nonumber} \\ & & \hspace{-1cm} \times \sum_{n=0}^{M-1} \exp\left\{\frac{-j 2 \pi n(k-\epsilon-l)}{M} \right\} \exp\left\{j 2 \pi b\epsilon \right\} + \bar{\eta}_b[k]. {\nonumber} \\ \end{eqnarray} For the sake of simplicity, define \begin{equation} \label{hh} \hat{h}_{\rm RC}(l-\tau) = h_{\rm RC}(l - \tau) \exp\left\{-j \pi \left(\frac{l^2}{M} + l \right) \right\}, \end{equation} which is a pulse whose magnitude is the same as the magnitude of the RC pulse. Furthermore, define \begin{eqnarray}\label{psinc} &\hspace{0cm}\Upsilon(k - \epsilon) &= \frac{1}{M}\sum_{n=0}^{M-1} \exp\left\{\frac{-j 2 \pi n(k-\epsilon)}{M} \right\} = {\nonumber} \\ & &\hspace*{-2.0cm} \begin{cases} \frac{\sin(\pi (k-\epsilon))}{M \sin(\pi (k-\epsilon)/ M)} \exp\left\{-j \pi \left( k-\epsilon - \frac{k-\epsilon}{M} \right) \right\}, & \text{if } k \ne \epsilon, \\ 1 & \text{otherwise}, \end{cases}{\nonumber}\\ \end{eqnarray} which is a periodic sinc function \cite{Nguyen16}, whose shape also resembles an RC pulse with an infinitesimal roll-off factor. In fact, it can be easily seen that $\Upsilon(k - \epsilon)$ is an all-pass response that introduces a delay of $\epsilon$ samples. Then, substituting \eqref{hh} and \eqref{psinc} into \eqref{vkb} yields \begin{eqnarray} \label{vkb2} &v_b[k] &= \sum_{l=0}^{L-1} \hat{h}_{\rm RC}(l-\tau) \Upsilon(k-\epsilon -l) \exp\left\{j 2 \pi b\epsilon \right\} + \bar{\eta}_b[k] {\nonumber} \\ & & \approx \hat{h}_{\rm RC}(k-\tau-\epsilon) \exp\left\{j 2 \pi b\epsilon \right\} + \bar{\eta}_b[k]. \end{eqnarray} An important and useful observation from \eqref{vkb2} is that, in the absence of noise, the peak magnitude of $v_b[k]$ is shifted by an amount equivalent to $\tau+\epsilon$ when both timing and frequency offsets exist. One simple way to separate these two offsets is to use a preamble sequence that contains both \emph{down-chirp} and \emph{up-chirp} symbols. Specifically, performing the same signal processing operations as in \eqref{rn2}--\eqref{vkb2} on up-chirp preamble symbols yields a signal whose magnitude follows the magnitude of an RC pulse, but shifted by an amount $\epsilon-\tau$. Therefore, by combining the peak detection results from both down-chirp and up-chirp preamble symbols, we can find the timing and frequency offsets separately. This is described in more detail in the following.
Let $N_{\rm D}$ and $N_{\rm U}$, respectively, denote the numbers of down-chirp and up-chirp symbols. First, the DFT outputs of all the down-chirp preamble symbols are summed together in terms of power to obtain \begin{equation} V_{\rm D}[k] = \sum_{b=0}^{N_{\rm D}-1} |v_b[k]|^2, \end{equation} from which the timing delay $\tau_{\rm D}$ is calculated based on the location of the peak sample. Similarly, the DFT outputs of all the up-chirp preamble symbols are summed together to give \begin{equation} V_{\rm U}[k] = \sum_{b=N_{\rm D}}^{N_{\rm D}+N_{\rm U}-1} |v_b[k]|^2, \end{equation} from which the timing delay $\tau_{\rm U}$ is also calculated. It follows that the coarse estimates of the timing and frequency offsets can be obtained as \begin{equation}\label{tau-coarse} \tau_{\rm coarse} = 0.5 ( \tau_{\rm D} - \tau_{\rm U}) \quad (\text{UI}) \end{equation} and \begin{equation}\label{epsilon-coarse} \epsilon_{\rm coarse} = 0.5 ( \tau_{\rm D} + \tau_{\rm U}) \quad (\text{UI}). \end{equation} \begin{figure*}[htb!] \centering \includegraphics[scale=0.80]{figures/noncoh_receiver.pdf} \caption{Block diagram of the proposed non-coherent receiver.} \label{fig-noncoh-receiver} \end{figure*} With the coarse estimates of the timing and frequency offsets, a complete block diagram of the proposed non-coherent receiver design is shown in Fig. \ref{fig-noncoh-receiver}. Specifically, the DTD block contains the demodulator and timing detector modules as detailed in Fig. \ref{fig-detector}, which produces a raw timing delay $\tau$ for each preamble symbol. From the raw timing delays, the preamble processing block produces the coarse timing and frequency errors, i.e., \eqref{tau-coarse} and \eqref{epsilon-coarse}. These coarse timing and frequency errors are corrected before making non-coherent detection of the data symbols. Note that the detected frequency error is accumulated from sample to sample to create a continuous linear phase ramp, which is used for frequency correction. While the main ideas and algorithms presented in this paper were originally described in \cite{CSS-TxRx}, a similar idea of using both up chirps and down chirps in the preamble for frame synchronization of LoRa signals was recently presented in \cite{Bernier20}. It should be pointed out, however, that the timing, frequency and phase detection techniques developed in this paper are different from those presented in \cite{Bernier20}. More importantly, while the work in \cite{Bernier20} is mainly concerned with the preamble detection performance and the frame synchronization failure probability, the present paper addresses not only coarse timing and frequency synchronization, but also fine timing, frequency and phase synchronization based on data-directed symbols. This fine synchronization is presented in the next subsection, and it plays a key role in achieving the excellent BER performance of the recently proposed PSK-CSS scheme. \subsection{Fine Timing, Frequency and Phase Synchronization for the Proposed Coherent Receiver}\label{sec-fine} While the solutions for \emph{coarse} timing and frequency synchronization in \eqref{tau-coarse} and \eqref{epsilon-coarse} can be used in the proposed non-coherent receiver, further refinement can be achieved via \emph{fine} timing and frequency synchronization that makes use of data-directed symbols. More importantly, accurate phase detection of the data symbols is also achieved, which facilitates the practical implementation of the higher-rate PSK-CSS scheme proposed in \cite{Nguyen19,Bom19}. \begin{figure*}[htb!]
\centering \includegraphics[scale=0.80]{figures/receiver-TN2.pdf} \caption{Block diagram of the proposed coherent receiver.} \label{fig-receiver} \end{figure*} A complete block diagram of the proposed coherent receiver design is shown in Fig. \ref{fig-receiver}. As before, the DTD block contains the demodulator and timing detector modules as detailed in Fig. \ref{fig-detector}. The receiver has a \emph{timing loop} to track the timing difference between the transmitter and the receiver using all the transmitted symbols, including preamble and data, in a burst. The detected timing error from the DTD module is sent to a \emph{loop filter} that drives the \emph{timing correction module}. The loop filter is designed as a first-order phase-locked loop (PLL) with a transfer function of \begin{equation} \label{loop1} H^{(1)}(z) = \frac{k_g z^{-1}}{1-z^{-1}}, \end{equation} where $k_g$ is the proportional loop gain. The timing correction module can be implemented using the well-known Farrow filter structure, or any of the timing interpolation methods in the literature \cite{Ves96}. The loop gain is an important factor that influences the receiver's performance. In particular, the gain provides a trade-off between convergence speed and timing correction accuracy. A higher gain makes the loop converge faster, whereas a lower gain is more effective at reducing the phase detector's noise. \begin{figure}[htb!] \centering \includegraphics[scale=0.8]{figures/type_1_pll_TN2.pdf} \caption{First-order phase-locked loop for tracking timing.} \label{fig-1st-order} \end{figure} Fig. \ref{fig-1st-order} shows a simplified model of the timing loop filter, in which the timing offset between the transmitter and the receiver is denoted as $\tau$. The PLL runs at the CSS symbol rate, which is $1/M$ of the CSS sample rate since there are $M$ samples in one CSS symbol. As shown in the figure, the first-order loop contains a single accumulator whose value indicates the best estimate of the timing, hence the name \emph{timing accumulator}. Let $s=1,2,3,\ldots$ denote the indices of CSS symbols in a CSS burst. At each iteration, the estimated timing $\hat{\tau}[s]$ is applied to the next symbol using a timing correction circuit as shown in Fig. \ref{fig-receiver}. Since timing detection is affected by noise and by the tracking performance of the loop itself, the detected timing error can be modelled as \begin{equation} e_\tau[s] = \tau - \hat{\tau}[s-1] + w_\tau[s], \end{equation} where $w_\tau[s]$ is timing detection noise. To accelerate loop convergence, the timing accumulator is seeded with the coarse timing detected by processing the preamble, i.e., \begin{equation} \hat{\tau}[0] = \tau_{\rm coarse}. \end{equation} The loop employs a gain factor $k_g$ to filter out the detection noise. It can be shown that the best, unbiased estimate is obtained by using the gain factor \begin{equation} k_g = \frac{1}{s}, \quad s=1,2,3,\ldots \end{equation} While a formal proof of the above gain can be carried out by applying the Kalman filter formula to the PLL circuit, it can be intuitively demonstrated by the following operations of the circuit in Fig. \ref{fig-1st-order} (ignoring the coarse seed for simplicity): \begin{itemize} \item At the beginning, $\hat{\tau}[0] = 0$ and thus $e_\tau[1] = \tau + w_\tau[1]$. \item The accumulator is loaded with the new value: $\hat{\tau}[1] = \tau + w_\tau[1]$. \item The next symbol comes in, and a new timing error is detected: $e_\tau[2] = \tau + w_\tau[2] - (\tau + w_\tau[1]) = w_\tau[2] - w_\tau[1]$.
\item The accumulator's new value is: $\hat{\tau}[2] = \tau + w_\tau[1] + \frac{w_\tau[2] - w_\tau[1]}{2} = \tau + \frac{w_\tau[2] + w_\tau[1]}{2}$. \item The next timing error is: $e_\tau[3] = \tau + w_\tau[3] - \tau - \frac{w_\tau[2] + w_\tau[1]}{2} = w_\tau[3] - \frac{w_\tau[2] + w_\tau[1]}{2}$. \item The accumulator's next value is: $\hat{\tau}[3] = \tau + \frac{w_\tau[2] + w_\tau[1]}{2} + \frac{w_\tau[3] - \frac{w_\tau[2] + w_\tau[1]}{2}}{3} = \tau + \frac{w_\tau[3] + w_\tau[2] + w_\tau[1]}{3}$. \end{itemize} Therefore, it can be established iteratively that \begin{equation} \label{hattaus} \hat{\tau}[s] = \tau + \frac{1}{s}\sum_{j=1}^s w_\tau[j], \end{equation} showing that the timing estimate is \emph{unbiased} and improves at every iteration. This leads to a large performance improvement compared to a conventional PLL with a fixed loop gain. Finally, the receiver also contains a frequency-and-phase tracking loop that tracks the differences in frequency and phase between the transmitter and the receiver using all the transmitted symbols in a CSS packet. The tracking loop contains a \emph{phase detector}, a \emph{phase loop filter} and a \emph{phase correction circuit}. The phase error is extracted from the angle of $v_s[k_{\rm peak}]$ as \begin{equation} \label{phi} e_\phi[s] = {\tt smod} \left( \frac{\angle{ v_s[k_{\rm peak}] }}{ 2 \pi}, 1/Q \right) \quad ({\rm UI}), \end{equation} where $Q$ denotes the order of the PSK modulation employed for the initial phase of each chirp (e.g., $Q=4$ for QPSK), and {\tt smod} denotes the signed modulo function, defined as \begin{equation} {\tt smod}(a, b) = a - {\tt round}(a/b) \times b. \end{equation} The output of the phase detector is sent to a phase loop filter to drive the phase correction circuit. In order to track both frequency and phase, the loop filter is designed as a second-order PLL with a transfer function of \begin{equation} \label{loop2} H^{(2)}(z) = \left(k_p + \frac{k_i}{1-z^{-1}} \right) \frac{z^{-1}}{1-z^{-1}}, \end{equation} where $k_p$ and $k_i$ are the proportional and integral gains, respectively. \begin{figure}[htb!] \centering \includegraphics[scale=0.80]{figures/type_2_pll_TN2.pdf} \caption{Second-order phase-locked loop for tracking frequency and phase.} \label{fig-2nd-order} \end{figure} Fig. \ref{fig-2nd-order} shows a simplified model of the phase loop system using a second-order PLL. Unlike the first-order loop, the second-order loop contains two accumulators, namely a frequency accumulator and a phase accumulator. After each iteration, the values inside the accumulators represent the best estimates of frequency and phase at that particular moment. To accelerate loop convergence, the frequency accumulator is initially seeded with the coarse frequency estimate, i.e., $\epsilon_{\rm coarse}$, and the phase accumulator is seeded with the phase extracted from the last preamble symbol. Similar to the timing loop, the phase loop can be optimized by representing the second-order PLL as a Kalman filter \cite{Patap99}. Specifically, define a dynamic system model with a state transition function of \begin{equation} \mathbf{X}_{s+1} = \mathbf{F} \mathbf{X}_s + \mathbf{W}_s, \end{equation} and an observation function of \begin{equation} \phi[s] = \mathbf{H} \mathbf{X}_s + w_\phi[s], \end{equation} where \begin{itemize} \item $\mathbf{X}_s$ is the hidden state, estimated at the $s$th iteration. \item $\mathbf{F} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ is the state transition matrix.
\item $\mathbf{W}_s$ is the random process noise. \item $\mathbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}$ is the observation matrix. \item $w_\phi[s]$ is the observation noise, i.e., the phase detector's noise. \end{itemize} From the above model, the Kalman gains for the second-order PLL can be iteratively calculated as \begin{equation} \begin{bmatrix} k_p[s] \\ k_i[s] \end{bmatrix} = \mathbf{K}_s = \frac{\mathbf{P}_s \mathbf{H}^T}{\mathbf{H} \mathbf{P}_s \mathbf{H}^T + \sigma^2_\phi }. \end{equation} It then follows that a \emph{prediction} of the measurement can be calculated as \begin{equation} \hat{\phi}[s] = \mathbf{H} \hat{\mathbf{X}}_s, \end{equation} and an estimate of the new state is \begin{equation} \hat{\mathbf{X}}_{s+1} = \hat{\mathbf{X}}_s + \mathbf{K}_s (\phi[s] - \hat{\phi}[s]) = \hat{\mathbf{X}}_s + \mathbf{K}_s e_{\phi}[s]. \end{equation} Furthermore, the predicted state covariance matrix is calculated as \begin{equation} \mathbf{P}_{s+1} = \mathbf{F}( \mathbf{I} - \mathbf{K}_s \mathbf{H}) \mathbf{P}_s \mathbf{F}^T + \mathbf{Q}. \end{equation} In the above expression, $\mathbf{Q}$ is the process noise covariance matrix, given as \begin{equation} \mathbf{Q} = \begin{bmatrix} \sigma^2_{\eta} & 0\\ 0 & \sigma^2_{\zeta} \end{bmatrix}, \end{equation} where $\sigma^2_{\eta}$ and $\sigma^2_{\zeta}$ are the variances of the phase and frequency disturbances of the local oscillator. These variances are typically characterized by phase noise analysis during the circuit calibration process. It is important to seed the predicted covariance matrix with proper values prior to the first iteration, which has the form \begin{equation} \mathbf{P}_0 = \begin{bmatrix} \sigma^2_{\phi} & 0\\ 0 & \sigma^2_{f} \end{bmatrix}. \end{equation} To this end, the phase variance is approximated from the complex Gaussian noise affecting $v_s[k_{\rm peak}]$ as \cite{Berscheid11}: \begin{equation} \sigma^2_{\phi} = 0.5 \frac{10^{-{\rm PSNR}/10}}{ (2 \pi)^2}. \end{equation} On the other hand, the frequency variance can be shown to be inversely proportional to the number of preamble symbols employed, i.e., \begin{equation} \sigma^2_f = \frac{\sigma^2_{\phi}}{N_{\rm D}+N_{\rm U}}. \end{equation} \section{Simulation Results}\label{sec-res} Performance of the proposed non-coherent and coherent receivers is investigated with Monte Carlo simulations in this section. First, the convergence performance of the timing loop and the phase loop is evaluated, followed by the BER performance of the complete system under realistic channel conditions. In all simulations, the signal bandwidth is selected as 125 kHz, and the channel is assumed to have a constant gain with an unknown delay and additive white Gaussian noise. Each CSS burst has 256 data symbols, preceded by a preamble having $N_{\rm D}=8$ down chirps and $N_{\rm U}=8$ up chirps. Since the detection of integer timing and frequency errors (i.e., the amount of timing or frequency error at integer multiples of 1 UI) is budgeted as part of burst (frame) detection \cite{Bernier20}, the channel model in this section is assumed to introduce fractional errors only, where both $\tau$ and $\epsilon$ are uniformly distributed between $-0.5$ and $0.5$ UI. The digital models of both the transmitter and the receiver operate with the up-sampling factor $L=2$. The SRRC filters are selected to have a roll-off factor of 0.25 and are designed as truncated finite impulse response (FIR) filters with 33 coefficients, i.e., a filter order of 32.
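Before turning to the results, the timing estimator of Section \ref{sec-timing} can be illustrated in isolation with this roll-off factor. The sketch below is a simplified, noiseless illustration with a hypothetical delay: it samples the RC-pulse model implied by \eqref{vk} (in the actual receiver, the half-sample values come from the second, half-sample-delayed branch) and applies the three-point parabolic estimator of \eqref{tau_frac}:

```python
# Simplified sketch (noiseless, hypothetical delay) of the timing detector:
# locate the peak of the RC pulse h_RC(k - tau) and refine it with the
# 3-point parabolic interpolation of eq. (tau_frac).
import numpy as np

BETA = 0.25                                  # roll-off, as in the simulations

def h_rc(t):
    """Raised-cosine response: sinc(t) cos(b pi t) / (1 - (2 b t)^2)."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * BETA * t)**2
    denom = np.where(np.abs(denom) < 1e-12, 1e-12, denom)  # numerator -> 0 there too
    return np.sinc(t) * np.cos(BETA * np.pi * t) / denom

tau_true = 3.0 + 0.21                        # integer + fractional delay (UI)
k = np.arange(16)
v = np.abs(h_rc(k - tau_true))               # |v[k]| ~ |h_RC(k - tau)|, eq. (vk)

tau_int = int(np.argmax(v))                  # eq. (tau_int)
vm = np.abs(h_rc(tau_int - 0.5 - tau_true))  # stands in for v[tau_int - 0.5]
vp = np.abs(h_rc(tau_int + 0.5 - tau_true))  # stands in for v[tau_int + 0.5]
v0 = v[tau_int]
tau_frac = (vm - vp) / (4 * (vm + vp - 2 * v0))   # eq. (tau_frac)

print(tau_int + tau_frac)                    # ~3.20, close to tau_true = 3.21
```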
At the receiver side, the timing correction block is designed as a fifth-order Farrow interpolator that runs at an over-sampling factor of 2. \subsection{Convergence of the Timing Loop} \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{figures/timing_mse_HN-eps-converted-to.pdf} \caption{Convergence performance of the timing loop.} \label{fig-timing-mse} \end{figure} Fig. \ref{fig-timing-mse} plots simulation results illustrating the convergence of the timing loop with ${\rm SF}=8$ and four different SNR values, namely $-12$, $-11$, $-10$ and $-9$ dB. These SNR values correspond to $E_b/N_0=3.051$, $4.051$, $5.051$, and $6.051$ dB. The detected timing, which is the output of the timing loop filter, is recorded after each symbol and compared with the true channel delay to determine the mean square error (MSE) of the timing detection. It can be seen that the accuracy of the proposed timing detector gradually improves with each detected symbol, thanks to the timing loop filter. When the SNR is above a certain threshold, e.g., $E_b/N_0 > 5$ dB, the MSE curves decline at a rate of $-10$ dB per decade, which matches the theoretical prediction in \eqref{hattaus}. On the other hand, when the SNR is below the threshold (i.e., the cases $E_b/N_0 = 3.051$ dB and $E_b/N_0=4.051$ dB), the MSE curves approach error floors. The reason is that, at low SNR, the peak detection $k_{\rm peak}$ in the first branch of the DTD is not so reliable, causing large errors in the timing detection in the second branch. However, the case of $E_b/N_0 < 5$ dB is not common in practical settings of LoRa systems (see, e.g., \cite{Elsha18,Nguyen19,Bernier20}) since it leads to poor BER for ${\rm SF}=8$. Therefore, the MSE floors exhibited in Fig. \ref{fig-timing-mse} are not a concern. \subsection{Convergence of the Phase Loop} \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{figures/freq_mse_HN-eps-converted-to.pdf} \caption{MSE in tracking frequency.} \label{fig-freq-mse} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{figures/phase_mse_HN-eps-converted-to.pdf} \caption{MSE in tracking phase.} \label{fig-phase-mse} \end{figure} Figs. \ref{fig-freq-mse} and \ref{fig-phase-mse} show typical frequency and phase convergence curves, respectively, of the phase tracking loop for the same CSS system with ${\rm SF}=8$. The frequency error is obtained by comparing the value of the frequency accumulator inside the phase loop filter against the actual frequency error introduced by the channel, whereas the phase error is obtained at the output of the phase detector. It can be seen that the MSE of the phase estimate decreases at a rate of about $-10$ dB per decade before it reaches an error floor after about 16 symbols. The error floor is equivalent to $\sigma^2_{\phi}$, which agrees with the theoretical analysis. Since frequency, by definition, is the derivative of phase, the convergence rate of the frequency MSE is $-20$ dB per decade, as can be seen from Fig. \ref{fig-freq-mse}. Again, the frequency MSE floors observed for $E_b/N_0 = 3.051$ and $4.051$ dB are due to erroneous peak detection at low SNR, a phenomenon similar to what is observed in the timing MSE plot in Fig. \ref{fig-timing-mse}. \subsection{Performance of the Proposed Non-Coherent Receiver} The BER performance of the proposed non-coherent receiver in Fig. \ref{fig-noncoh-receiver} is illustrated in Fig.
\ref{fig-noncoh-det} for two different spreading factors ${\rm SF}=8$ and ${\rm SF}=10$. To show the importance of timing and frequency synchronization, the performance of a \emph{naive} receiver is also shown in this figure. The naive receiver is a non-coherent receiver that includes neither the timing nor the frequency correction module, and it is evaluated under the presence of random timing and frequency errors. As expected, the performance of such a naive non-coherent receiver is very poor (unacceptable) without timing and frequency synchronization. On the contrary, as can be seen from Fig. \ref{fig-noncoh-det}, the proposed non-coherent receiver implementing only coarse timing and frequency detection enjoys BER performance very close to that of an ideal non-coherent receiver. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{figures/noncoh_det_HN-eps-converted-to.pdf} \caption{BER performance of the CSS system with the proposed non-coherent receiver.} \label{fig-noncoh-det} \end{figure} \subsection{Performance of the Proposed Coherent Receiver} Having demonstrated the convergence performance of the timing, frequency and phase synchronization, this section presents the BER performance of the proposed coherent receiver for two different spreading factors ${\rm SF}=8$ and ${\rm SF}=10$. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{figures/coh_det_TN2_HN-eps-converted-to.pdf} \caption{BER performance of the CSS system with the proposed coherent receiver.} \label{fig-coh-det} \end{figure} First, a BER performance comparison between coherent (CH) and non-coherent (NC) detection methods is shown in Fig. \ref{fig-coh-det} in the presence of fractional timing and frequency errors. The system parameters used in the simulation are ${\rm SF}=8$ and $N_{\rm D}=N_{\rm U}=8$. Each transmission burst contains 256 CSS symbols. For each burst, both the timing and frequency errors are uniformly distributed between $-0.5$ and $0.5$ UI. Because coherent detection relies on good phase tracking performance, in practice it can be turned on after the phase loop filter has fully converged. Observation of the phase MSE plot in Fig. \ref{fig-phase-mse} suggests that phase convergence is achieved after 16 symbols. For that reason, the first 16 symbols are detected using the non-coherent method while the remaining symbols of the data burst (i.e., 240 CSS symbols) are detected using the proposed coherent method. To see how well the proposed receiver performs, the BER performance of the \emph{ideal} non-coherent receiver and the \emph{ideal} coherent receiver is also shown in the figure. The ideal coherent receiver performs ideal timing, frequency and phase compensation assuming perfect knowledge of the errors introduced by the channel. It can be seen from Fig. \ref{fig-coh-det} that the ideal coherent receiver provides about 0.5 dB gain compared to the ideal non-coherent receiver for both cases of ${\rm SF}=8$ and ${\rm SF}=10$. With ${\rm SF}=8$ and $E_b/N_0 > 5$ dB, the proposed receiver yields almost the same performance as its ideal counterpart, in both cases of non-coherent and coherent detection. The observation that the proposed receiver with non-coherent detection performs identically to the ideal non-coherent receiver indicates that both the timing and frequency loops perform well enough that fractional timing and frequency errors are no longer a performance-limiting factor.
For ${\rm SF}=10$, the proposed coherent receiver performs about 0.25 dB worse than the ideal coherent receiver. While such a performance gap is small enough to justify the practical use of the proposed coherent receiver, it also suggests that there is room for improvement when it comes to phase synchronization at higher SF values. \begin{figure}[htb!] \centering \includegraphics[width=0.45\textwidth]{figures/pscss_det_TN2_HN-eps-converted-to.pdf} \caption{BER performance of the PSK-CSS scheme with the proposed coherent receiver.} \label{fig-pscss-det} \end{figure} Finally, Fig. \ref{fig-pscss-det} plots the BER performance of the PSK-CSS system proposed in \cite{Nguyen19} under a realistic channel condition with fractional timing and frequency errors. Among the 256 CSS symbols in each burst, the first 16 symbols are not phase modulated (i.e., $Q=1$), considering the fact that phase synchronization is not fully converged over the first 16 symbols (see Fig. \ref{fig-phase-mse}). The remaining 240 data symbols carry extra information bits embedded in the initial phases of the CSS symbols. Using QPSK modulation (i.e., $Q=4$) allows sending two additional bits per each of these 240 CSS symbols. With ${\rm SF}=8$, this corresponds to a data rate that is $\frac{(8\times 16+ 10\times 240)}{8\times 256}=123.44\%$ of the conventional rate, whereas the figure is $118.75\%$ for ${\rm SF}=10$. As can be seen, the ideal receiver with perfect synchronization provides 1.0 and 0.8 dB gains over the ideal non-coherent receiver of the conventional CSS system for ${\rm SF}=8$ and ${\rm SF}=10$, respectively. More importantly, the BER performance of the PSK-CSS scheme achieved with the proposed practical coherent receiver shows a gap of only 0.25 dB to the ideal coherent receiver for both considered SF values. \section{Conclusions}\label{sec-con} With the objective of designing non-coherent and coherent receivers that can work under realistic channel conditions with timing and frequency offsets, this paper has developed high-performance synchronization methods for recovering the timing, frequency and phase of CSS signals. A novel method was proposed to perform coarse timing and frequency synchronization that makes use of a preamble consisting of both down chirps and up chirps, together with the shape of the pulse shaping/matched filters. Furthermore, to enable fine synchronization, the timing and phase loop filters were designed as simple first-order and second-order PLL circuits with dynamic gain control. The obtained results show that, in practical scenarios, timing and frequency synchronization is essential for non-coherent detection of CSS signals. The proposed non-coherent receiver, with only coarse timing and frequency detection, performs almost identically to an ideal non-coherent receiver. By further employing the fine timing, frequency and phase synchronization, the proposed coherent receiver yields about 0.5 dB gain as compared to the proposed non-coherent receiver. Finally, impressive BER performance obtained with the proposed coherent receiver is demonstrated for the higher-rate PSK-CSS system, which provides a gain of about 0.75 dB as compared to non-coherent detection of conventional CSS signals, while at the same time delivering data rates of $123.44\%$ and $118.75\%$ of the conventional CSS rates for ${\rm SF}=8$ and ${\rm SF}=10$, respectively. \section*{Acknowledgement} This work is supported by the NSERC/Cisco Industrial Research Chair in Low-Power Wireless Access in Sensor Networks. \bibliographystyle{ieeetr}
\section{Introduction}% \label{Sec:Intro}
Learning is an intrinsic human ability. We learn both from our own experience and from our peers. However, learning is not just about improving performance or assimilating new knowledge. It is also about analyzing new situations, understanding different perspectives, using knowledge to find commonalities between distinct situations, discussing, and even becoming competent at resolving conflicts. Social learning, whereby we learn with and from our peers, teachers, parents and others, plays a fundamental role in most of these broad forms of learning \cite{piaget2013play}. It is not surprising, then, that the educational environment stimulates social collaboration between peers during learning. Collaborative learning has been associated with improving attitudes towards school, fostering achievement, developing thinking skills, and promoting interpersonal and intergroup relations \cite{blumenfeld1996learning}. However, as technology evolves, so do learning paradigms. Currently, technology offers novel learning tools that complement the classical learning paradigms. Massive open online courses (MOOCs) \cite{li2014watching} and intelligent tutoring systems (ITS) \cite{nwana1990overview} are just two illustrative examples of new learning tools made available through technology. Another change in the teaching and learning paradigm comes from serious games \cite{ritterfeld2009book}. Serious games are designed for a purpose other than pure entertainment. They have been around since at least the 1950s, and their applications in education are well-documented \cite{degloria2014serious}\footnote{Interestingly, research suggests that while serious games may be more effective in terms of learning, they are not always more motivating than conventional instruction methods \cite{wouters2013meta}.}. Collaborative serious games, in particular, combine the advantages of serious games with social learning, and some studies have suggested that they support learners in articulating knowledge that would otherwise have remained intuitive \cite{van2011learning}. Although some researchers have questioned the effectiveness of collaborative serious games, research in this area is still scarce and often ambiguous \cite{wouters2013meta}.

In this paper, we describe the use of a {\em robotic tutor in the context of a collaborative serious game}. In particular, the research described herein seeks to combine the positive aspects of intelligent and peer tutoring, such as personalization and adaptation to the learner, with those of collaborative serious games, such as social learning and gamification. This offers a new role for robots as technological tools in education. Previous studies have explored the use of artificial robotic tutors and peers both in classroom environments \cite{kanda2004interactive,tanaka2007socialization, miyake2012robot} and as entertainment partners \cite{michalowski2007dancing}. However, such use is often limited to one-robot-one-user interactions, which significantly limits the social component of learning. By adopting a collaborative serious game as the interaction domain, we move from the typical one-robot-one-user interaction to a much richer scenario that involves one robot and two users, and thus a group interaction.
In addition, some studies have investigated the effect of long-term interactions between children and robots on the quality of the relationship \cite{westlund2018measuring} and other variables \cite{leite2013social}; however, few studies have looked at the impact of long-term interactions on learning gains, which is also a contribution of this paper. Our ambitious application scenario poses several technological challenges, particularly in endowing the robot with behavior that is both socially plausible and able to successfully accomplish the pedagogical goals of the activity. Our robotic tutor must be both socially competent and successful in teaching. It should also be able to perceive the difficulties faced by the students, not only from their explicit behavior but, more importantly, from their implicit behavior. It should be able to understand their individual affective states and the ``emotional climate'' between them in order to intervene in an adequate way. In other words, the robot should be able to {\em empathize} with the human users, individually and as a group. A recent survey discussed the importance of empathy for an artificial agent (robotic or virtual) that interacts with humans \cite{paiva2017empathy}. In the context of education, the empathy of the teacher towards the students has also been shown to impact the learning process and outcomes \cite{feshbach2009empathy}. In a meta-analysis conducted by \citeN{cornelius2007learner}, empathy was one of the variables that, if present in teachers, is associated with positive student outcomes. In a subsequent meta-analysis conducted by \citeN{roorda2011influence}, empathy was included as one of the variables in the teacher-student relationship that are associated with students' engagement and achievements in school. This demonstrates the importance of empathy in tutor-student relationships and how it influences students' educational outcomes. However, few studies have measured this impact on learning outcomes. This paper describes an autonomous and empathic robotic tutor that interacts with multiple learners via a collaborative serious game and investigates the impact, both immediate and long-term, of the robot on the students' learning performance. Two field studies were performed, leading to a deeper understanding of empathy in learning and of the long-term effects of having an autonomous robot in school for educational purposes.

\subsection{Contributions}
The work presented herein was developed in the context of the {{EMOTE}}{} project\footnote{{EMOTE project: \url{http://www.emote-project.eu}}, which stands for EMbOdied Perceptive Tutors for Empathy-based learning.}. We investigated the use of an {\em autonomous and empathic robot tutor for} \emph{collaborative group learning}. We explored the impact of a robot interacting with and teaching groups of students in their classroom in relation to learning gains. The contributions of this paper include the following:
\begin{enumerate}
\item We conducted a short-term evaluation study to investigate the immediate impact of the empathic robot in fostering collaborative learning.
\item We conducted a long-term study to investigate the long-term impact of the empathic robot on learning.
\item We deployed an autonomous robot in the context of real-world classrooms for group learning.
\end{enumerate}
The road-map of this work is as follows: section \ref{ch:state_art} presents the state of the art regarding robots in education, groups of humans and robots, and empathy in social robots; section \ref{ch:collaborative_learning_activity} details the collaborative learning activity, including the learning goals and the game-play dynamics; section \ref{ch:robot_tutor_behavior} describes the development of the educational and social behaviors of the robotic tutor; section \ref{ch:method} explains the study method, hypotheses, measures, and materials used; section \ref{ch:short_term_study} describes the short-term study and section \ref{ch:long_term_study} the long-term study; section \ref{ch:conclusion} presents the general discussion and conclusion, including limitations and future work.

\subsection{Our previous work}
This paper focuses on the evaluation of the robotic system with respect to learning gains in students. The details of the design and implementation of the robot are not a main contribution of this paper and can be found in previous publications. In particular, the robot behaviors and the development of the empathy module of the autonomous robot are detailed in \citeN{alves2015ec} and are summarized in section \ref{Subsec:EC} of this paper. The development of the collaborative \ac{AI} that sustains the robot's game-playing and pedagogical decision-making abilities is detailed in \citeN{sequeira2015ai} and is overviewed here in section~\ref{Subsec:GameAI}. The educational dialogue dimensions for collaborative learning designed for the robot are detailed in \citeN{alves2014towards} and summarized in section \ref{Subsec:Hybrid}. Finally, the design, development, and validation of the ``Restricted-perception Wizard-of-Oz'' methodology, which allowed the successful development of the social and educational behaviors of the autonomous robot, are detailed in \citeN{sequeira2016method}; we summarize the main steps in section~\ref{Subsec:Restricted-woz}\footnote{The source code of all the robot's components is publicly available and can be retrieved at: \url{https://github.com/emote-project/Scenario2}. Further details on each component can be found in the ``Downloads / Components'' and ``Publications / Deliverables'' sections of the project's website at: \url{http://www.emote-project.eu/}.}.

\section{State of the art}
\label{ch:state_art}
The experiences we have during childhood shape, to some extent, the way we think, grow, feel, and behave. It is thus important to surround children with nurturing and safe learning environments. The way children learn, however, is being transformed by new technologies for education, such as computers and tablets enhanced with serious games \cite{savage1990conceptual} or computer-supported learning activities that foster learning. Robots, in particular, hold promise for facilitating learning outcomes and promoting enjoyment during learning \cite{kennedy2015comparing}. Recent events, such as the R4L (Robots 4 Learning) series of workshops\footnote{The workshops and special issue can be found at \url{https://r4l.epfl.ch/}.}, are an example of the interest that this area is attracting in research and of the potential that robots have in education. In this paper, we present research that advances the state of the art in the area of social and empathic robots for education, in particular for collaborative group learning scenarios.
Thus, we now provide an overview of these areas to contextualize the contributions of the paper.

\subsection{Robots in education}
Educational robots are a subset of educational technologies in which robots are used as platforms or tools for students' learning, usually in subjects such as maths, problem solving, and chemistry. The use of robots as a medium to learn and understand curricular subjects started in the 60s with Papert's work, in which he introduced the concept of ``robots as manipulatives'' (or physical objects) that are specifically designed to foster learning \cite{papert1980mindstorms}. An example is LEGO Mindstorms\textregistered, which is used to teach STEAM-related curricular topics, providing a new tool for education \cite{hendler2000robots, khine2017robotics}. Furthermore, as robot technology matures, robots can be used as social actors in a classroom as a way to deliver educational content, instruct, foster discussion, challenge, scaffold, and support the learning of children in a socially intelligent manner. In fact, reviews on the applicability and potential of robots in education show that robots are being developed for children across different learning domains \cite{mubin2013review, belpaeme2018social}. Despite the potential of robots in learning, a systematic review has shown that, to have an impact on learning gains, they need to be skilfully used by teachers to attend to the students' needs; otherwise, learning gains are not visible \cite{spolaor2017robotics}. There is substantial investment in the field of \ac{HRI} in developing robots for learning. Pioneering research by \citeN{kanda2004interactive} features ROBOVIE, a robot for teaching English to Japanese children in an elementary school context. This was one of the first field studies in educational \ac{HRI}, conducted over a period of 18 consecutive days in a school. The results showed that children who exhibited a lot of interest during the starting phase achieved significantly higher English scores, and the robot indeed acted as a motivational factor for learning in these cases. In South Korea, the IROBI robot was also endowed with didactic content to support young children in learning English as a second language \cite{han2008comparative}. The robot was placed in a class with very young children over a long period of time. In the same application domain, the EU H2020 L2TOR project\footnote{L2TOR project: {\protect\url{http://www.l2tor.eu/}}.} aims to study whether robots can be used as tutors to support teaching preschool children a second language \cite{kennedy2016social}. A review of the applicability of robots for second language acquisition was performed by \citeN{chang2010exploring}, in which the authors characterized the robots used and the instructional media explored for this type of task. Other projects with robots for children have dedicated attention to the investigation of robots as tools for learning and for supporting positive interactions, such as the Socially Assistive Robotics, an NSF Expedition in Computing project\footnote{Socially Assistive Robotics, an NSF Expedition in Computing project: {\protect\url{https://robotshelpingkids.yale.edu/}}.}. Robots have also been used to support handwriting abilities using the ``learning-by-teaching'' paradigm \cite{frager1970learning}, in which children act as tutors of the robot, providing it with feedback to improve its writing performance.
This paradigm is known to benefit children's self-esteem, provide handwriting practice without children noticing it, and foster engagement through the so-called ``prot\'{e}g\'{e} effect'', a sense of responsibility over the robot's performance (since children are instructed to be the robot's teachers) \cite{chase2009teachable}. For example, \citeN{tanaka2012children} conducted a 6-day field trial with young students and a robot in school, and the results showed that a robot can help children efficiently learn new English verbs when children give instructions to the robot. Additionally, projects such as the Co-writer project\footnote{Co-writer project: \protect\url{http://chili.epfl.ch/cowriter}.} have studied the role of students as teachers of the robot. A study in the scope of the Co-writer project was conducted in a school, in which small groups of children gathered around the NAO robot and were responsible for teaching it to write better, providing feedback on its mistakes as the robot improved according to the children's feedback \cite{lemaignan2016learning}. In a related study, in which the NAO robot acted as a student that needed to learn how to write and the children served as teachers who helped it write better, the results showed high levels of commitment and engagement from the children embracing this task \cite{hood2015children}, demonstrating the promise of this educational system. Another study investigated the interpersonal distancing of children towards either a human adult or a robot facilitator within a collaborative activity \cite{chandra2015can}. The scenario involved two children performing a collaborative learning activity following a learning-by-teaching approach, which included writing a word/letter on a tactile tablet. The study showed that children felt more responsible and provided more corrective feedback when the robot was present than when a human mediator was present (replacing the robot). This suggests that the role a robot plays can have an impact on the type of interactions that emerge, particularly corrective feedback. Beyond typical curricular activities, social robots are also being used for social and emotional learning. The research by \citeN{jimenez2014effect} described a study in which the behavior of a robot prompts constructive interaction with a learner, compared with two students learning the same task together. The types of prompts and interactions built into the robot as it learns together with the children led to better performance in the robotic condition. Another study focused on fostering a growth mindset in children as part of social and emotional training \cite{park2017growing}. In a scenario featuring a child playing puzzle games with a robot, a fully autonomous robot was built with the capability to exhibit ``behaviors suggestive of it having either a growth mindset or a neutral mindset'' \cite{park2017growing}. The results of a study that compared the two types of robots in the same scenario showed that children who played with the growth-mindset robot self-reported having a stronger growth mindset and tried harder during the task. In a long-term study, \citeN{serholt2018breakdowns} investigated interaction breakdowns between children and a robot in a learning task.
In this individual learning scenario, breakdowns in the interaction were associated with ``the robot's inability to evoke initial engagement and identify misunderstandings, confusing scaffolding, lack of consistency and fairness, and controller problems.'' In another long-term study in which a robot acted as an agent for learning, \citeN{jones2018adaptive} concluded that if a robot provides personalized and adaptive scaffolding to students based on their learning, they can better regulate their own learning \cite{jones2017know}. Furthermore, in a study spanning two consecutive weeks in a school, a peer-robot that exhibited behavioral personalization was found to have a positive influence on learning when interacting with children, compared to a robot that exhibited no personalization. Personalization of the robot was defined in terms of non-verbal behavior, linguistic content, and performance alignment. Specifically, the results from this study showed that children exhibited significantly increased learning only in the novel learning task in the personalized condition \cite{baxter2017robot}. Although these three scenarios are extremely rich and challenging for social learning, especially because a robot was deployed in a school for long-term educational gains, they were built for one-robot-one-student interactions. This paper goes beyond the current state of the art by exploring collaboration in a group context using a robot designed to support learning. In addition, we evaluated learning gains, an aspect rarely measured in studies that use robots for education; studies usually evaluate other variables, such as likability and engagement between children and robots.

\subsection{Groups of humans and robots}
As mentioned, the majority of the application scenarios developed thus far to study humans and robots are designed for one-on-one interactions, in which one robot interacts with one person. Even in scenarios in which the robot is placed in a classroom with many children, the interactions are often designed as one-on-one interactions, e.g., \citeN{belpaeme2018guidelines}. For this study, we were interested in scenarios using groups of two or three students who can learn together with the support of a social robot. According to \citeN{du2016impact}, dyads and triads are considered groups, with dyads being the smallest type of group if they share common and dependent elements. In the case of our research, dyads of students share the same learning context as part of the group. Groups in \ac{HRI} can be studied from different perspectives, such as (1) groups of humans interacting with one robot, (2) groups of robots interacting with one human, or (3) groups of humans interacting with groups of robots. In addition to workshops organized on this topic \cite{jung2017robots}, relevant studies have been conducted. For example, a study by \citeN{fraune2015rabble} showed that the number of robots (namely, a single robot or a group of robots) and the type of robots (anthropomorphic, zoomorphic, or mechanomorphic) determine the attitudes, emotions, and stereotypes that people hold when interacting with them, with anthropomorphic robots in groups being among the preferred choices. In a field study at a university, \citeN{fraune2015three} studied how participants respond when robots (individually and in groups) enter their common space.
The results showed that although participants reported enjoying interacting with both individual robots and robots in groups, they interacted more with groups of robots. In addition, the characteristics of social robots can affect the interactions. For example, entitative groups of robots (robots designed to look and act similar to each other, such as those that have a similar appearance and a shared goal) were evaluated more negatively than single robots \cite{fraune2017threatening}. In another study, the frequency with which robots acted as moderators affected social and task-related features, namely group cohesion and task performance, in multi-party interactions \cite{short2017robot}. These studies show that a robot's behavior in a group should be carefully designed, as both behavior and appearance influence the interactions. Nonetheless, the perception of robots in groups depends not only on the robots' behaviors but also on the characteristics of the people. A study conducted by \citeN{correia2018choose} showed that when people only observe robots (before any direct interaction with them), they tend to choose robots that exhibit relationship behaviors (e.g., robots that foster a group climate) over competitive robots (e.g., robots that are more focused on succeeding and winning). However, after a direct interaction, the results seem to change, with participants who were competitive preferring a more competitive robot, and participants who were not as competitive preferring robots with relationship-driven characteristics. This study shows that membership preferences between groups of humans and groups of robots go beyond the robots' characteristics, extending to the characteristics of each person. In another study regarding group membership, \citeN{sembroski2017he} found that in-group and out-group perception between humans and robots can lead to more conformist behaviors from people, depending on the request type and level of authority. A different study investigated a robot's potential to shape trust in groups of humans, concluding that a robot that exhibited vulnerable behavior (in comparison with a neutral robot) promoted more group engagement and social signs of trust, with people providing more support in times of failure \cite{strohkorb2018ripple}. Furthermore, a study dedicated to the investigation of non-verbal behavior between a robot and multiple people concluded that the gaze of the robot influences people's perception of the robot's motion, which, in turn, affects the perception of the robot's gaze \cite{vazquez2017towards}. Additionally, when incorporating robots into unstructured social environments, such as shopping malls or crowded city streets, it is important for the robot to exhibit adequate collision-avoidance motion to navigate fluently between groups of people \cite{mavrogiannis2018social}. Therefore, considering the mutual dependency between the behavior of robots and the behavior of humans is important when designing and evaluating robotic systems intended to be social in group contexts. In relation to educational contexts and groups, \citeN{strohkorb2015classification} used interactive role-playing to help children improve their emotional intelligence skills. Focusing on ways to analyze non-verbal and verbal behavior to detect whether a child was high or low in social dominance, the scenario featured groups of children, with the robot helping them in their social relations.
Despite being a group scenario, the collaborative nature of the task was not the focus of that work. In fact, according to \citeN{dillenbourg1999you}, collaborative learning pertains to a situation in which particular forms of interaction among people, ones that trigger learning, are expected to occur. In robots applied to education, social robots can play roles such as that of a peer (learning together with a group of students); a tutor (teaching a group of students); a facilitator/mediator (mediating the learning interactions and interventions and helping the group engage in productive learning); a supervisor (supervising the work being done by students and providing feedback); or even a friend (supporting the students emotionally) \cite{zaga2015effect, alves2016role, broadbent2018could}. This demonstrates the richness of interactions that can be explored in the educational domain when using robots to foster learning. Although there is a wide variety of work in \ac{HRI} exploring robots as tutors in a classroom context, thus far the activities and interactions established with robots have been fundamentally individual.

\subsection{Empathy in social robots}
Empathy is an essential ingredient of any successful relationship. When attempting to define empathy, we come across various definitions. Empathy has been related to affective responses towards others (affective empathy) and to a cognitive understanding of the emotional state of others (cognitive empathy). \citeN{hoffman2001empathy} brought attention to the processes that underlie empathic responses, defining empathy as the psychological process that leads to feelings that are congruent with the other person's situation more than with one's own. From a more behavioral perspective, \citeN{davis2018empathy} associated empathy with someone's responses to the experiences of another person. Holistically, \citeN{preston2002empathy} considered empathy as a concept that relates sympathy and emotional contagion. The expression of empathy is foundational to interpersonal relationships and to our ability to communicate. Indeed, it is what connects us emotionally and intellectually. In an educational context, there is a general understanding among teachers that empathy promotes positive interactions and a more supportive classroom climate, and that it enhances student-centered practices \cite{mcallister2002role}. In a meta-analysis conducted by \citeN{cornelius2007learner}, empathy was one of the variables associated with positive student outcomes when present in teachers. When robots are placed as collaborative partners within a group, features such as their ability to communicate and relate become relevant. Interactions with robots that are more open-ended and involve a high degree of collaboration must be natural and engaging; thus, empathy needs to be addressed and modeled in those robots \cite{paiva2017empathy}. Recent research has been devoted to the implementation of empathy in robots in diverse application areas, such as health care with socially assistive robotics \cite{tapus2007emulating}. In these cases, robots are perceived as having more empathy if they provide appropriate answers during a dialogue \cite{riek2010my}. In this line of research, robots that display accurate empathic behaviors enable the perception of a closer relationship with a human \cite{cramer2010effects}. Additionally, robots with empathy were perceived as friendlier in an entertainment scenario \cite{leite2013influence}.
Empathy also plays a role in child-robot interactions. In a study conducted by \citeN{leite2014empathic}, the authors explored whether a robot with empathic capabilities can influence the perception that children have of the robot, specifically in terms of social presence, engagement and support. Indeed, in a chess game scenario, the robot that displayed empathic behaviors was perceived positively by the children, and the perceived levels of social presence, engagement and support remained stable and high during a long-term interaction \cite{leite2014empathic}. This research was important for the study of robots with empathy and children in real-world settings. In a broader view of empathy in robots, \citeN{paiva2018towards} dedicated a book chapter to the creation of more humane machines, in which successful interactions between humans and robots are associated with robots endowed with emotional processing and responses. Our research goes beyond the current state of the art as it considers how a robot can model empathy in a group setting in a collaborative learning activity.

\section{Collaborative Learning about Sustainable Development}
\label{ch:collaborative_learning_activity}
A well-designed scenario for collaborative learning increases the probability that positive interactions occur, thus leading to learning experiences \cite{dillenbourg1999you}. The design of the scenario for collaborative learning is therefore of utmost importance, since the interactions between the members are crucial. As stated before, thus far social robots have been used extensively in domains in which the robot establishes one-on-one interactions with the learner, thereby supporting problem solving, personalization, feedback and scaffolding. In collaborative learning, however, other elements come into play, such as perspective taking and understanding the consequences of actions. For the scenario in our studies, we used a collaborative multi-player game targeting issues of sustainable energy consumption. This game, which we shall refer to as {{M-Enercities}}{} (for Multi-player Enercities), corresponds to a multi-player, collaborative version of the original game {{Enercities}}{} \cite{knol2011enercities} (see an image of the game's interface in Fig.~\ref{Fig:enercities}).

\subsection{Learning goals}
The game was adjusted to the schools' current curriculum and was easily integrated into school activities, allowing either three children to play the game, two children and a teacher, or two children and a robot (see Fig.~\ref{Fig:scenario} for an overview of the group learning setting). The group of three works together to build a sustainable city, thus learning about sustainable development. This game is in line with Constructivism's basic idea that knowledge and meaning are built during social interaction and cooperation \cite{steffe1995constructivism}. As a collaborative group learning activity, {{M-Enercities}}{} motivates players to discuss and decide together how to build a sustainable city, providing exactly the type of social interaction dynamic that fosters communication and cooperation as means to learn about sustainability. In fact, according to scholars, sustainable education is all about having meaningful discussions in which multiple perspectives and trade-offs are exchanged by expressing personal values \cite{fior2010children, antle2014}.
In light of how sustainable education should be taught, the {{M-Enercities}}{} game supports learning goals related to factual knowledge about sustainable development, raised awareness of trade-offs and multiple perspectives concerning sustainability, and the role of personal values. These learning goals are elaborated in Table~\ref{tab:measure1}.

\begin{table}
\caption{Learning goals supported by the collaborative group learning game, {{M-Enercities}}{}.}
\label{tab:measure1}
\renewcommand{\arraystretch}{2}
\begin{tabular}{ L{70pt} L{300pt} }
\hline
\textbf{Learning goal 1} & \emph{Factual knowledge} about different energy sources\\
\textbf{Learning goal 2} & Raise awareness about \emph{trade-offs} and the existence of multiple and possibly conflicting \emph{perspectives} in relation to sustainable development\\
\textbf{Learning goal 3} & Raise awareness of \emph{personal values} in relation to sustainable development\\
\hline
\end{tabular}
\end{table}

\subsection{Game-play}
The game is structured in levels that players need to master to keep developing their city. For example, to proceed to the second level, players must make the population of the city grow to a certain size by building residential areas. At the same time, however, if the city runs out of non-renewable resources, players may get into trouble because the city is not sustainable. Additionally, the game-play is based on a turn-taking dynamic where, at each turn, one player decides what to do and the rest of the group assists in this decision process, discussing how they would like to build their city. In order to decide, players have to take into consideration the city indicators and how their actions influence the sustainability of the city. Each decision can have positive and negative effects on the ``environment'', the ``economy'' and the citizens' ``well-being'', each of which is indicated by a score. Players can choose among a set of actions that advance the game:

\begin{figure*}[!tb] \centering \begin{subfigure}{0.45\columnwidth} \centering \includegraphics[width=\columnwidth]{enercities} \caption{}\label{Fig:enercities} \end{subfigure}\hspace{20pt}% \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{scenario} \caption{}\label{Fig:scenario} \end{subfigure} \caption{Learning activity used in this study: \subref{Fig:enercities} a screenshot of the {{M-Enercities}}{} game used in the study, which is a multi-player, turn-based, collaborative version of the {{Enercities}}{} game about sustainable development; \subref{Fig:scenario} the learning space shows the interaction of either two students with a robotic tutor or three students playing the {{M-Enercities}}{} game over an interactive touch-table.} \label{Fig:activity} \end{figure*}

\begin{description}
\item[Making a construction --] Players can build a construction, such as a park, a market, or an industry. The different constructions influence the city indicators differently: a park positively affects the environmental indicator of the city, an industry positively influences the economy, and a market positively influences the well-being of the citizens. At the same time, however, these constructions can have negative effects on other indicators. Players can also invest in the city's energy by building energy-supplying constructions (e.g., windmills), or increase the city's size and thereby its population by constructing more residential areas.
\item[Performing an upgrade --] Players can invest in upgrading previously created constructions. They can, e.g., add a recycling facility to an already built industry to upgrade it and improve the environment without wasting additional resources. Depending on the upgrade performed, different indicators are influenced.
\item[Applying a policy --] Players can implement new policies in their city, such as an energy education program. Policies are applied in the city mayor's building as a way to represent how the real world functions.
\item[Skipping a turn --] Players can choose to skip a turn to let constructions and policies come into full effect. When a turn is skipped, the constructions, upgrades, and policies present in the city increase their effect; e.g., if the players have chosen to build an industry to increase the economy of the city, the city will be richer after more turns have passed. Turns passing can be thought of as time passing (a time indicator). Therefore, the more turns the players skip, the more benefits accrue from previous constructions, upgrades, and policies.
\end{description}

To support turn-taking and to make every member of the group participate, virtual buttons located at different sides of the table allow players to perform different actions in turns. Since each decision can have both positive and negative effects, players must be aware of the trade-offs and of the impossibility of creating a perfect city without any sacrifices. After reaching a group decision, the player closest to the corresponding button presses it, performing a game action on the city.

\section{Robotic tutor}
\label{ch:robot_tutor_behavior}
To fulfill its role as an autonomous empathic tutor in the collaborative {{M-Enercities}}{} game, the behavior of our robot can be seen at two distinct levels: an \emph{activity-related} level and a \emph{social interactive} level. Activity-related behaviors concern the decisions of the robot in the pedagogical activity itself. In our case, this involves all game actions for {{M-Enercities}}{} and the necessary game-play adaptation mechanisms. The social interactive behaviors involve all verbal and non-verbal expressive behaviors of the robot during interactions with the group of students, such as dialogue management and physical animations. Both types of behaviors are selected based on information about the state of the game and contextual information about the physical environment provided by different audio-visual capture devices. Given the intended empathic nature of the robot, its behavior also depends on the individual and collective affective states of the students. This section overviews the design and implementation of the behavioral components of the robot, including task-specific behaviors, empathic behaviors, adaptation to the students, and behaviors driven by the learning goals.

\subsection{Architecture overview}
\label{Subsec:Architecture}
\begin{figure}[!tb]
\centering
\includegraphics[width=0.95\textwidth]{architecture}
\caption{The overall robot system architecture implemented to autonomously control the robotic tutor during the interaction with the students in our studies. Light-blue arrows denote the robot's actions; dotted orange lines represent communication between the system and the {{M-Enercities}}{} game application.}
\label{fig:architecture}
\end{figure}
The overall architecture supporting the robot used in our studies is depicted in Fig.~\ref{fig:architecture}.
The system comprises the following main modules\footnote{Technical details about each component can be found at \href{http://www.emote-project.eu/components}{www.emote-project.eu/components}.}:
\begin{description}
\item[Perception --] This module is responsible for processing all information from the robot's environment, including information on the students' emotional states, social behavior and actions performed during the learning activity. It comprises three main components: the \emph{User Awareness} component processes the students' states and actions; the \emph{User Actions} component manages the students' {{M-Enercities}}{} game input, providing the robot with the necessary task-related context for the interaction; and the \emph{Emotional Climate} component is responsible for detecting the group-level emotional state.
\item[Memory --] This module keeps track of past events and is subdivided into \emph{recent} and \emph{past event} memories. The \emph{Recent Event Memory} stores the students' recent actions in the game, collected directly from the User Actions perception component. Long-term interaction, however, requires the robot to track information from previous sessions. For this reason, the information stored in the recent memory is moved to the \emph{Past Event Memory} at the end of each session.
\item[Task Management --] This module manages the execution of the learning activity itself, e.g., starting and ending the activity and specifying which students are interacting.
\item[Student Modeling --] This module is responsible for collecting information on the students' performance during the game session, using it to track possible changes in the learning and emotional states of the students. Such information is used by the robot in later sessions to address specific learning challenges.
\item[Rapport --] This module regulates the robot's rapport during the interaction with the students. E.g., it adjusts the robot's speech volume to the students' to ensure smooth communication; it shifts the robot's gaze towards the active speaker to provide a more natural interaction; it interrupts the robot's speech when a student is speaking; and it performs back-channeling behaviors after the users' responses.
\item[Game \ac{AI} --] This module manages all the robot's actions in the {{M-Enercities}}{} game. It uses the students' past actions (retrieved from Memory) to learn the strategies they are using during the activity, and generates possible actions according to the current state of the game.
\item[Hybrid Behavior Controller --] This is the core module controlling the robot's social interaction behavior. It decides which actions the robot should play in the {{M-Enercities}}{} game (informed by the Game \ac{AI}) and how to structure the dialogue with the students (informed by the Emotional Climate and the other modules).
\end{description}
While some of the aforementioned modules are standard in \ac{HRI} domains, others are specific to our scenario, namely with regard to the interaction with multiple students. Below, we discuss in greater detail the methodological and technological considerations that drove the design of these components.

\subsection{Designing the interaction behaviors of the robot}%
\label{Subsec:Restricted-woz}
As an empathic tutor, the robot should be able to play the game and to interact in a social and empathic manner with the students, raising awareness of personal values when considering sustainability.
The robot should do so by setting a good example when choosing its game actions, explaining the reason behind each play in light of its ``personal values'', namely, achieving a balanced development of the city. We note that, in the context of our scenario, no personal values are wrong or right, and the students are not expected to adopt the robot's personal values. Instead, these values are simply a means to open up the discussion and raise awareness of others' perspectives, as active participation is a sign of learning in sustainable development education \cite{lave1991situated}. Therefore, to design the social behavior of the robot, we adopted the restricted-perception \ac{WoZ} methodology detailed in \citeN{sequeira2016method}, which can be summarized in the following steps:
\begin{enumerate}
\item We gathered data from mock-up sessions where two students collaboratively interacted with a school teacher while playing {{M-Enercities}}{}. The goal was to gain insight into common pedagogical and empathic strategies used by real teachers in our learning activity.
\item The data collected during the mock-ups were used to build the modules responsible for the perception, basic game behavior, and interaction behavior of the robot.
\item We conducted restricted-perception \ac{WoZ} studies, in which experts remotely controlled the robot during the interaction with students in the {{M-Enercities}}{} game. Unlike in the standard \ac{WoZ} paradigm in \ac{HRI}, when applying a restricted-perception \ac{WoZ} method the experts controlling the robot have access only to processed observations from the interaction, similar to those that will drive the robot's autonomous behavior. This means that the decision-making of the wizard is limited to the same perceptions that the autonomous robot will have over the interaction. This paradigm allows the operator to focus on the relevant aspects of the robot's social interaction.
\item Using the data collected during the restricted-perception \ac{WoZ} studies, we created the Hybrid Behavior Controller. The controller was built from two key elements: \emph{interaction strategy rules}, in which we encoded expert domain knowledge, e.g., explicit behavior patterns observed during the mock-up sessions and \ac{WoZ} studies; and a \emph{mapping function}, which captures more complex behavior patterns discovered from the data using a \ac{ML} approach.
\end{enumerate}

\subsection{Implementing the interaction behaviors of the robot}
Although this section describes work that is not a direct contribution of this paper, it is crucial for understanding the design and implementation decisions that were made regarding the behavior of the autonomous robotic tutor used in the studies reported here. In particular, we discuss the core modules that support the robot's social and empathic interaction with multiple students in the context of our learning activity (see Fig.~\ref{fig:architecture}).

\subsubsection{Emotional Climate}
\label{Subsec:EC}
For the robot to interact with a group of students in an empathic and emotionally intelligent way, it is fundamental that it be able to {\em detect} and {\em recognize the emotional state of the group}. Emotional climate is a central element in social group interactions between humans and has been studied in many group contexts. We consider emotional climate to be the valence state of a group-level emotion at a given time.
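As a minimal illustration of this notion, Listing~\ref{List:ECSketch} shows a toy aggregation of per-student cues of the kind described next. This is a hypothetical simplification; all names, weights and ranges are assumptions for illustration, not the actual detector of \citeN{alves2015ec}.

\begin{lstlisting}[language=Python, caption=Toy sketch of a group-level emotional climate estimate., label=List:ECSketch]
# Illustrative only: names, weights and ranges are assumptions,
# not the actual EMOTE emotional climate detector.
def emotional_climate(valences, attending):
    """Toy group-level valence in [-1, 1].

    valences:  per-student facial-expression valence estimates in [-1, 1]
    attending: per-student flags, True if the student attends to the task
    """
    assert valences and len(valences) == len(attending)
    score = sum(v + (0.5 if a else -0.5)
                for v, a in zip(valences, attending))
    return max(-1.0, min(1.0, score / len(valences)))

# Two engaged, smiling students yield a positive climate:
assert emotional_climate([0.4, 0.6], [True, True]) > 0
\end{lstlisting}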
Following the discussion in \citeN{alves2015ec}, in our scenario the behavioral and emotional state of the students at any given time changes the emotional climate of the group at that time. This means that a positive emotional climate can be detected from the students expressing positive facial expressions, demonstrating joint attention on the educational task by looking at the table where the interaction takes place, etc. Conversely, a negative emotional climate is detected whenever students look away from the task and seem distracted or bored. The emotional climate influences the behavior of the robotic tutor by changing the content and the way certain utterances are performed. For example, if students are taking more time than usual to decide what to do and a negative emotional climate valence is detected (e.g., boredom), the robot intervenes to maintain engagement by saying: \emph{``what should we do now?''} On the other hand, if a positive emotional climate valence is detected (e.g., engagement), the robot may say: \emph{``we are playing well''} as positive reinforcement. These are examples of the empathic behaviors designed for the robot.

\subsubsection{Game \ac{AI}}
\label{Subsec:GameAI}
The Game \ac{AI} module is detailed in \citeN{sequeira2015ai} and ensures that the robot tutor is not only able to play the game competently, but is also able to discuss the impact of each action performed by the group on the construction of a sustainable city. The robot's game-play promotes collaboration within the group and comprises a \emph{game-playing} and a \emph{social} component. The \emph{game-playing component} (planner) is designed to accommodate a specific educational strategy; e.g., if the goal is to achieve a ``balanced'' strategy, it favors actions leading to game states where all scores (environment, economy and well-being) are as high and as even as possible. It also detects game situations with the potential to provide interesting pedagogical opportunities; e.g., when the level of natural resources is low, it suggests game actions that spend fewer resources. The \emph{social component} uses information about recent plays to build a model of the students' game strategies. It allows the robot to intervene during and after the players' actions in the game; e.g., the robot is able to suggest more suitable alternative plays in certain situations and explain the desired effect of such decisions on the city's development. This game-play model also allows the robot to adapt its own strategy so as to follow the perceived group strategy, a fundamental aspect of its game behavior given the collaborative nature of the activity. For example, if the students are playing in an environmentally-aware fashion, the robot's strategy will also be more environment-friendly.

\begin{minipage}{\linewidth}
\begin{lstlisting}[caption=Example of a robot behavior.,label=List:BehaviorExample]
Category: Strategy
Sub-category: Wellbeing
Behavior: {
  <gaze> game-ui
  <animate> sadness
  <animate> slow-eye-blink
  <speech> "Our population is not very happy."
  <glance> subject-1
  <speech> "This worries me."
  <glance> subject-2
  <speech> "What can we do?"
  <gaze> game-ui
}
\end{lstlisting}
\end{minipage}

\subsubsection{Student Modeling}
The Student Modeling component in Fig.~\ref{fig:architecture} follows the discussion in \citeN{jones2015empathic}. It provides \emph{task and domain knowledge}, i.e., information both on the learning activity and on the knowledge and skills that the student is expected to acquire.
The Student Modeling module uses the information from the User Actions module to update a high-level description of each student. This description includes the student's task performance and estimated domain knowledge. Both are stored in the Memory module to be used pedagogically during the interaction; e.g., the robot can show its support with respect to the users' difficulties and summarize the main results achieved at the end of each learning session. In the long-term study, the students' performance is especially useful for revisiting specific tasks within {{M-Enercities}}{} that were or were not completed, as well as information about how they were completed. This allows the tutor to ``recall'' previous sessions, highlighting learning gains and discussing specific challenges that the students went through. It also allows the tutor to provide the group with hints on how to address such challenges, thereby adapting to their learning needs.

\subsubsection{Hybrid Behavior Controller}
\label{Subsec:Hybrid}
The interaction behaviors of the robot are governed by the Hybrid Behavior Controller module, whose design and implementation details can be found in \citeN{sequeira2016method}. The controller comprises a set of \emph{interaction strategy rules} and a \emph{mapping function}. The input of the controller is a set of \emph{perceptual features}, namely: facial features, encoding the students' expressive information; auditory features, identifying the active speaker and detecting a set of keywords spoken by the students that are relevant for the learning task; and game-related features, providing information about critical moments of the game, such as when a level changes or some resource of the city becomes scarce. All features are automatically extracted and encoded from raw data captured by microphones, cameras and other sensing devices strategically positioned in the environment. The output of the controller module is a \emph{social interaction behavior}, comprising the animations, gaze functions and utterances performed by the robot during the interaction with the students. The design of the behaviors, as discussed in \citeN{alves2014towards}, was inspired by the teacher-student interactions observed during the aforementioned mock-up sessions. In addition to the robot's dialog, each interaction behavior encodes the robot's non-verbal behavior, which was also inspired by the way real teachers and students interacted, e.g., by shifting the robot's gaze between the game and the players in order to drive their focus of attention towards relevant aspects of the task. An example of a full behavior definition is given in Listing~\ref{List:BehaviorExample}, designed to address a situation where the well-being of the city's population in {{M-Enercities}}{} was low\footnote{The list of encoded behaviors can be retrieved from the ``Publications / Deliverables'' section of the project's website at: \url{http://www.emote-project.eu/}.}. The Hybrid Behavior Controller is then comprised of:

\begin{figure}[!tb]
\centering
\includegraphics[width=\textwidth]{ml-based-module}
\caption{A depiction of the \ac{ML} procedure involved in creating the robot controller's \emph{Mapping Function}. Adapted from \citeN{sequeira2016method}.}
\label{fig:learningMF}
\end{figure}

\begin{description}
\item [Interaction Strategy Rules --] Correspond to manually-encoded \emph{behavior rules} of the form \texttt{If}-perceptual state-\texttt{Then}-interaction behavior.
The idea is that when the features have the values specified in the rule's \texttt{If} statement, the rule becomes active, which in turn automatically triggers the associated interaction behavior of the robot. A set of rules was defined to encode domain knowledge that is relevant for improving the students' comprehension of the task and for understanding their learning progress. Some rules were inspired by pedagogical strategies observed in teachers during the mock-up sessions. We also performed informal interviews with the teachers in order to understand their reasoning process and to gather information about interaction dynamics and common strategies used during the several interaction studies. This led to the design of rules such that, whenever the game starts, the robot gives a short tutorial explaining the game rules, and, after the game ends, a rule is triggered that ``wraps up'' the session by summarizing the main achievements and analyzing the group's performance. Other rules encode interaction-management functions, such as announcing the next player or other game-related information. The rationale was that the behaviors in these rules occur at well-defined moments and in a consistent manner; hence, we did not need to learn interaction strategies for such cases.
\item [Mapping Function --] In order to endow the robot with a more robust behavioral repertoire, the hand-designed strategy rules were complemented by interaction strategies discovered using a \ac{ML} technique. In particular, we used \ac{ML} to identify behavioral patterns that are less common and, therefore, harder to identify explicitly by the experts or through observation and annotation. An important aspect of the restricted-perception \ac{WoZ} method is that the behavior data from the operator depend on the same perceptual features that will drive the behavior of the robot during autonomous interaction \cite{sequeira2016method}. For this reason, such data are particularly amenable to \ac{ML} analysis. We used the data from these studies to train a classifier that maps perceived situations to the robot's actions, i.e., a model of \emph{which} behaviors should be triggered and \emph{when} to trigger them. The procedure is illustrated in Fig.~\ref{fig:learningMF}. It starts with a \emph{Data Preparation} phase involving the transformation of the collected demonstrations into a data-set of state features-behavior pairs, which are referred to as training instances. The \emph{Training} phase learns a mapping function encoding the observed interaction strategies from the given data-set.\footnote{We note that the interaction controller module is agnostic to the \ac{ML} algorithm used to learn the Mapping Function. In that regard, standard \ac{ML} classification algorithms may be suitable for learning interaction strategies based on the collected \ac{WoZ} data.} Specifically, we used a technique based on an associative metric within frequent-pattern mining that is detailed in \citeN{sequeira2010fpm} and in \citeN{sequeira2013assoc}. As illustrated in Fig.~\ref{fig:learningMF}, for each wizard behavior sampled from the log file, the corresponding ``Behavior'' tree is updated according to the perceptual features that were active at that time. This indicates that states where those perceptions are active are an example of \emph{when} to execute the behavior. For all other behaviors, the corresponding ``Not Behavior'' trees are updated, indicating that the features are an example of \emph{when not} to execute them (a simplified sketch of this counting idea, together with the rule-first selection logic, is given below).
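Listing~\ref{List:HybridSketch} gives a minimal, hypothetical sketch of these two components: hand-coded rules that fire first, and a counting-based stand-in for the ``(Not) Behavior'' trees. The actual controller uses the associative metric within frequent-pattern mining cited above, so all names and the scoring below are illustrative simplifications only.

\begin{lstlisting}[language=Python, caption=Illustrative sketch of the hybrid rule-plus-mapping behavior selection., label=List:HybridSketch]
from collections import defaultdict

# Hand-coded layer: If-<perceptual state>-Then-<behavior> rules
# (feature names and behavior ids are illustrative only).
RULES = [
    (lambda s: s.get("game_started"), "give-tutorial"),
    (lambda s: s.get("game_over"), "wrap-up-session"),
]

class MappingFunction:
    """Toy stand-in for the 'Behavior'/'Not Behavior' trees: counts how
    often each feature was active when the wizard did (pos) or did not
    (neg) trigger each behavior."""

    def __init__(self):
        self.pos = defaultdict(lambda: defaultdict(int))
        self.neg = defaultdict(lambda: defaultdict(int))
        self.behaviors = set()

    def train(self, demos):
        """demos: list of (active_feature_set, chosen_behavior) pairs."""
        self.behaviors = {b for _, b in demos}
        for features, chosen in demos:
            for b in self.behaviors:
                table = self.pos if b == chosen else self.neg
                for f in features:
                    table[b][f] += 1

    def select(self, features):
        """Return the best-scoring behavior, or None rather than risk
        triggering a behavior at the wrong time."""
        def score(b):
            return sum(self.pos[b][f] - self.neg[b][f] for f in features)
        best = max(self.behaviors, key=score, default=None)
        return best if best is not None and score(best) > 0 else None

def hybrid_select(state, features, mapping):
    """Hand-coded rules fire first; otherwise defer to the learned mapping."""
    for condition, behavior in RULES:
        if condition(state):
            return behavior
    return mapping.select(features)
\end{lstlisting}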
By the end of training, each ``(Not) Behavior'' tree stores patterns that indicate the perceptual states in which the corresponding behavior should (not) be executed. Once the mapping function has been learned, the system can choose an appropriate interaction behavior at run-time upon request, given the robot's perceptual state. We note that while the Rules module covers the question of when and when not to execute some behavior (the rules were handcrafted to ensure this), the \ac{ML} module had to be designed such that behaviors are not automatically triggered incorrectly or at the wrong times, which could potentially ``break'' the interaction flow between the robot and the students.
\end{description}

\section{Method}
\label{ch:method}
This section presents the design of the method used to meet the proposed goal of this work.

\subsection{Hypotheses}
\label{Sub:Hypothesis}
A few points can be highlighted to frame our studies: teachers with empathy competencies are associated with positive student outcomes; collaborative learning environments can be more beneficial, depending on the educational topic being taught; and social robots have been used with children in educational applications, with a positive impact on students' engagement during learning. Therefore, we formulated the following study hypotheses:\\
\noindent%
\textbf{Hypothesis 1 -- In a collaborative group learning environment, an empathic robot improves students' learning outcomes.}
We performed a between-subjects design study in which an empathic robotic tutor interacts with groups of children in a school classroom, performing a collaborative activity about sustainable development. This was a short-term study in which groups of children performed one session, distributed randomly across one of three study conditions: (1) two children learn with an empathic robot, (2) two children learn with a non-empathic robot, and (3) three children learn without the presence of a robot. We hypothesize that students will have higher learning achievements in the condition in which they perform the learning activity with the empathic robot.\\
\noindent%
\textbf{Hypothesis 2 -- In a collaborative learning environment, groups of children learn over time with an empathic robot.}
This hypothesis concerns a deeper understanding of empathy in robots, as it involves a long-term study. Thus, we performed a within-subjects design study in which groups of students interacted with the empathic robot over a period of two months (4 sessions, 1 session every other week), and we evaluated their learning outcomes. The learning content in our research is related to the sustainable development curriculum, a domain of knowledge that requires group discussion and an understanding of other people's opinions and perspectives in order to make sustainable decisions. We hypothesize that learning gains will increase over time.

\subsection{Ethical considerations}
We developed a robotic tutor that forms a social bond with lower-secondary students in order to promote learning in a personalized way. As \citeN{fridin2014kindergarten} and \citeN{serholt2017case} describe, this entails ethical concerns, especially in relation to long-term interactions. These concerns include attachment to the robot, deception about the robot's abilities, and robot autonomy and authority.
Regarding the attachment to the robot, it was explained to all children that participated in the study exactly how long the robotic tutor would be present in their school and when it would be removed, similar to introducing a temporary teacher. We explained the robot's workings to the children and answered any questions about it, in order to avoid deception about the robot's abilities. In relation to the robot's authority, as children are aware of the balance between expertise and authority \cite{alves2016students}, we explained that while the robot would try to help them accomplish learning tasks, it would not be responsible for grading and did not have the authority to keep them engaged in the task. Moreover, the caregivers of all participants provided written informed consent prior to participation, and the children assented to participate in the study when asked before the start of each session. The guidelines of the Declaration of Helsinki and the standards of the American Psychological Association were followed.
\subsection{Measures}
\label{ch:measures}
\begin{table} \caption{Learning goals supported by the robotic tutor and by the {{M-Enercities}}{} game, matched with the measurement media used to evaluate sustainable development learning outcomes.} \label{tab:measure2} \renewcommand{\arraystretch}{1.5} \begin{tabular}{ L{110pt} L{180pt} L{60pt} } \textbf{Learning goals in sustainable education} & \textbf{Measurement media} & \textbf{Section}\\\toprule Factual knowledge & Multiple-choice questionnaire & Section~\ref{ch:factual_knowledge}\\ Trade-offs and multiple perspectives & Writing assessments: (1) trade-offs were measured according to the number of options considered to solve a sustainable problem; (2) multiple perspectives were measured according to the number of arguments. & Section~\ref{ch:tradeoffs_perpectives}\\ Personal values & Behavioral analysis of: (1) scores comments, (2) in-depth discussions, and (3) meaningful conversations & Section~\ref{ch:personal_values}\\ \bottomrule \end{tabular} \end{table}
We have designed, developed, and evaluated two different assessment media to measure learning goals in sustainable development education. The two assessment media used were \emph{writing assignments} and \emph{behavioral analysis}. They are fully described below and summarized in Table~\ref{tab:measure2}\footnote{All writing assignments used in the work are made available online on Deliverable 7.2 of the {{EMOTE}}{} project at \url{http://www.emote-project.eu/publications/deliverables}.}.
\subsubsection{Factual knowledge measure}
\label{ch:factual_knowledge}
A multiple-choice questionnaire was designed as a measure of factual knowledge about energy sources. The questions were designed according to the knowledge available in the {{M-Enercities}}{} game by a researcher of the {{EMOTE}}{} project who was also a school teacher. The multiple-choice questionnaire about sustainability was piloted to determine whether the difficulty level of the pre- and post-test assignments was appropriate, which would mean no statistical difference between pre- and post-test scores. The pilot test was performed with $48$ children from grades 4 and 5 (the same age-group as the target students of our main study), and the difficulty level was evaluated based on the percentage of correct answers to each question.
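A difficulty check of this kind can be sketched as follows. The score vectors are random placeholders standing in for the pilot responses, and the choice of an independent-samples t-test is our assumption (the text does not name the test used):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder scores: number of correct answers per child on each form.
pre_form = rng.integers(3, 8, size=24)
post_form = rng.integers(3, 8, size=24)

# Comparable difficulty would show as a non-significant difference.
t, p = stats.ttest_ind(pre_form, post_form)
print(f"t = {t:.2f}, p = {p:.3f} (forms comparable if p > 0.05)")
\end{verbatim}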
Results from the pilot test showed no significant difference between the pre-test (\emph{M} = 5.0, SD = 0.16) and the post-test (\emph{M} = 4.9, SD = 0.16), \emph{p} > 0.05, therefore indicating a similar level of difficulty. Both pre- and post-tests comprised 12 items each (24 items in total).
\subsubsection{Trade-offs and multiple perspectives measure}
\label{ch:tradeoffs_perpectives}
To test students' ability to understand that there are many different perspectives from which to debate sustainable development, we created a writing exercise that reflects a sustainable problem on which different stances can be taken. Students were instructed to provide two types of answers: (1) choose one or more solutions to solve the problem, as a measure of trade-offs; (2) argue for the chosen solution(s), as a measure of multiple perspectives. We piloted the exercises with the same $48$ children. Two researchers coded the data, and the reliability score (Cohen's kappa) for the number of perspectives mentioned in the argument was .86, denoting a strong agreement between coders. Results from the pilot study indicated no significant differences between the pre- (N = 23, \emph{M} = 0.70, SD = 0.93) and post-test (N = 25, \emph{M} = 0.52, SD = 0.77) sustainable problems, enabling their use as a measure for the study.
\subsubsection{Personal values}
\label{ch:personal_values}
Learning about sustainability is not a straightforward process. According to \citeN{fior2010children}, sustainable development education is not primarily about changing attitudes; instead, \emph{``environmental learning in the presence of complexity, uncertainty, risk and necessity} [they argue] \emph{must be accepting of multiple perspectives supportive of meta-learning across perspectives, and detached from the making of decisions in its (and learners') own immediate context''} \cite{fior2010children}. Thus, instead of changing attitudes and behaviors, the learning measures were designed to capture children's awareness of different perspectives and the ever-present trade-offs around sustainability. Furthermore, according to \citeN{antle2014}, sustainability curricula for elementary schools \emph{``often focus on key concepts such as balancing conservation and consumption''} \cite[p. 37]{antle2014}, while ignoring the important role that children's personal values have in learning. They argue that sustainability education for elementary school-aged children should be modeled according to the Emergent Dialogue Model, especially in the area of digital media games such as Youtopia \cite{antle2013youtopia}. The core idea of the Emergent Dialogue Model is that children should be invited to participate in \emph{personally meaningful dialogues} during game-play. In the case of our study, we have used an autonomous robot that interacts with children as a way to foster meaningful dialogues about sustainable development during the game-play of {{M-Enercities}}. Thus, personal values were measured by analyzing the video recordings collected during the study. Using the dedicated software ELAN v4.8.1 \cite{wittenburg2006elan}, each study session was coded for behavioral analysis using the video recordings of the participants while they performed the collaborative group learning activity. We based our coding scheme on the one created by \citeN{antle2013youtopia} for the analysis of the Emergent Dialogue Model.
We were interested in the analysis of the verbal behavior of the participants during the learning activity, to be able to gain insights into their meaningful participation in discussions of personal values as a way to measure learning outcomes. The coding scheme used was the following:
\begin{itemize}
\item \textbf{Scores comment --} Discussion or comments about the game scores. This category relates to children's comments on, or observations of, the impact of their game actions on the game scores, i.e., an increase or decrease of the score on any game parameter. An inclusion example would be: \emph{``We are running out of money''}.\\
%
\item \textbf{In-depth discussions --} Includes events in which one or more children talk about decisions on what resources and developments to use. An in-depth event involves a sense of the world or individual values, which differs from simple preferences. It must also involve reasoning using those values, typically around trade-offs between human and natural needs. As such, a statement like \emph{``I think we should have houses not trees''} expresses a preference and was not coded. However, a statement such as \emph{``No, let's build houses instead of apartments because they use less lumber, and we can make more trees into nature reserves.''} was coded as in-depth discussion because it involves values in the context of reasoning about trade-offs related to sustainability.\\
\item \textbf{Meaningful conversations --} Includes verbal and/or physical disagreement with another's action(s), or utterances related to the sustainability domain. Meaningful conversations require an objection or stance on an issue; therefore, merely presenting available options or suggestions is not considered. Meaningful conversations may result in resolution, abandonment (unresolved), or unilateral decision-making. Inclusion example: \emph{``I disagree, without industry you cannot progress.''}
\end{itemize}
\begin{figure}[!tb] \centering \includegraphics[width=0.70\textwidth]{emote-eval-study} \caption{Two students playing the {{M-Enercities}}{} with our autonomous robot in a school classroom.} \label{fig:robot-interaction} \end{figure}
\subsection{Materials}
The list of materials used in the set-up of both the short-term and long-term studies is given below. See also Fig.~\ref{fig:robot-interaction} for a picture of the actual set-up.
\begin{enumerate}
\item NAO torso robot from Aldebaran Robotics;
\item Large interactive multi-touch table running the {{M-Enercities}}{};
\item Four video cameras for a full interaction recording;
\item Two lavaliere microphones for voice pitch recognition (no voice recognition was used; additional details can be found in Section~\ref{Subsec:Architecture});
\item Voice recorder for behavioral analysis.
\end{enumerate}
\section{Short-term study}
\label{ch:short_term_study}
An experimental study was conducted to evaluate the impact of a robot with empathy competencies on students' sustainability-education outcomes in a collaborative group learning environment. This relates to our Hypothesis~$1$, detailed in Section~\ref{Sub:Hypothesis}.
To achieve the proposed goal, the study was designed considering three experimental conditions:\\
\begin{itemize}
\item\textbf{Condition $1$} -- Two children interacted with a robotic tutor with empathy competencies while playing the {{M-Enercities}}{} game.\\
\item\textbf{Condition $2$} -- Two children interacted with a robotic tutor without empathy competencies while playing the {{M-Enercities}}{} game.\\
\item \textbf{Condition $3$} -- Three children played the {{M-Enercities}}{} game without the presence of a robotic tutor.
\end{itemize}
\subsection{Empathic vs. non-empathic robot: impact on behavior}
In order to create Conditions~$1$ and $2$, featuring the robot with and without empathic competencies, we chose which modules should be activated or deactivated. We refer to Section~\ref{Subsec:Architecture} for the technical architecture overview of the robotic system. Table~\ref{tab:non-empathic} details which modules from the overall system architecture, depicted in Fig.~\ref{fig:architecture}, are fully or partially (de)activated in each version of the robotic tutor. Although some modules are deactivated or only partially activated in one version of the robot, it is crucial to note that during the learning activity the percentage of interventions by the robot toward the students was similar across versions. This was ensured because the behaviors of the robot were designed and developed to be balanced between conditions. In practice, this means that the robot talks and gestures the same amount of time in both the empathic and non-empathic versions. Additionally, basic idle behavior, animations, and speech capabilities remain intact; the non-empathic robot will, however, appear less aware of the students.
\begin{table}[!tb] \centering \caption{Overview of activation of all the modules in the empathic and non-empathic version of the robotic tutor.}% \label{tab:non-empathic} \begin{tabular}{lll} \multirow{2}{*}{\bf Module} & \multicolumn{2}{c}{\bf Activated} \\ & \bf{Empathic} & \bf{Non-Empathic} \\ \toprule \emph{Rapport} & Yes & Partially \\ \emph{Game-\ac{AI}} & Yes & Yes \\ \emph{Emotional Climate} & Yes & No \\ \emph{Past Event Memory} & Yes & No \\ \emph{Recent Event Memory} & Yes & Yes \\ \emph{Hybrid Behavior Controller} & Yes & Partially \\ \emph{Sustainable learning dialogue} & Yes & Yes \\ \bottomrule \end{tabular} \end{table}
As we can see from Table~\ref{tab:non-empathic}, the \emph{Past Event Memory module} is deactivated in the non-empathic condition, which means that the robot is unable to recall previous learning sessions and summarize activities that occurred therein. The rationale is that using memory of other people's past experiences is a way to simulate how they feel in situations similar to those they are currently facing, which consequently leads to empathic behavior \cite{ciaramelli2013individualized}. Notwithstanding, the robot has the \emph{Recent Event Memory module} activated in both conditions, which means it remembers the performance of students \emph{during} each learning session in both versions. The \emph{Emotional Climate module} is also deactivated in the non-empathic condition. This module is responsible for perceiving the emotional state of the group of students during the learning activity, a perception inherent to performing empathic behaviors. With this module deactivated in the non-empathic condition, the robot provides more generic suggestions for students during the game that are not related to emotional perceptions.
Nonetheless, the \emph{Sustainable Learning Dialogue module} is activated in both the empathic and non-empathic conditions and is responsible for ensuring that the robot performs similar dialogues about sustainable learning in both of the study conditions. The \emph{Rapport module} is partially deactivated in the non-empathic condition, meaning that some contingent behaviors are still activated, but not all of them. An example that illustrates the impact on the behavior of the robot concerns gaze, an important social signal \cite{emery2000eyes}. In the empathic condition, the robot moves its eye-gaze to the student currently speaking by accurately locating the student using the sound coming from the microphone (each student uses a microphone, as can be seen in Fig.~\ref{fig:robot-interaction}, so that the robot can accurately follow the student; however, no speech recognition system is used). In the non-empathic condition, the robot still looks at the student who is speaking, but uses predefined coordinates of the students' locations in front of the table. This means that subtle changes in the students' locations, especially of their faces, are not tracked by the robot, resulting in less context-aware and contingent behavior towards the students. However, it is important to note that this does not amount to random gaze behavior \cite{park2017telling}, but to a less precise gaze orientation that does not lead to drastic effects on the perception of the robot between conditions. Another feature of the Rapport module that is deactivated in the non-empathic condition concerns the voice volume and pitch of the robot, which are not adjusted to the perceived volume and pitch of the students' voices, contrary to what occurs in the empathic condition. As these adjustments are related to empathic behavior \cite{imel2014association}, the characteristics of the robot's voice are kept constant in the non-empathic version. As for the \emph{Hybrid Behavior Controller module}, only the Interaction Strategy Rules are active in the non-empathic version, which means that no behaviors are triggered by the Mapping Function. Although this part of the controller does not necessarily lead to empathic behavior, we note that it is the result of applying a \ac{ML} algorithm aimed at discovering the more subtle interaction strategies used by the wizard in the Restricted-Perception \ac{WoZ}, which cannot be hand-crafted and put in the Interaction Strategy Rules list. Notwithstanding, this does not mean that the robot will intervene inappropriately and/or at the wrong times, as all behavior is still controlled by manually encoded rules. Thus, the robot was kept as a knowledgeable and informative interlocutor in all study conditions, as previous studies have shown that children can easily distinguish between reliable and unreliable robots as information sources \cite{breazeal2016young}. Overall, the deactivated modules concern perceptions of the cognitive and emotional states of the students. Notwithstanding, the social and pedagogical behaviors are similar in both study conditions, with the robot having a similar frequency of interventions. This ensured that the robot behaved as a social and intelligent autonomous tutor in all conditions.
\subsection{Sample}
A total of $63$ children ($M=13.74$, $\mathrm{SD}=0.73$, $25$ female) participated in this study.
Participants were grouped by their school teachers into groups of students who they knew worked well together in a learning context, and the groups were randomly assigned across study conditions. Therefore, $22$ participants interacted with the empathic robotic tutor, in a total of $11$ learning sessions, each consisting of $2$ children and $1$ robot; $20$ participants interacted with the non-empathic robotic tutor, in a total of $10$ learning sessions, each consisting of $2$ children and $1$ robot; and $21$ participants were allocated to the condition with no robot, in a total of $7$ learning sessions consisting of $3$ children. Two researchers were responsible for the study in the school: a psychologist who interacted with the participants and acted as the leading researcher, and a computer scientist who was responsible for the technical equipment.
\subsection{Procedure}
\label{ch:procedure_st}
Each group of students arrived at the designated classroom where the study took place. The leading researcher provided an explanation about the study they would undergo. Participants were invited to fill in the pre-tests about sustainability explained in Section~\ref{ch:measures}. After completing the pre-tests, participants were introduced to the robotic tutor (in the case of Conditions $1$ and $2$) and to the {{M-Enercities}}{} game, while they performed an initial trial round of the game to have hands-on experience with the activity. When the trial round of the game was over, the researcher left the room, leaving the participants to perform the learning activity with the robot (in the case of Conditions 1 and 2) or by themselves (in the case of Condition 3). Although the students were left alone in the classroom performing the learning activity, they had permanent indirect supervision by both researchers. This was ensured since the classroom had a large window to an external room, which allowed the researchers to monitor the progress while at the same time providing privacy to the students' learning process. Furthermore, this set-up enabled participants to reach out to the researcher to ask for help, e.g., if technical problems with the learning activity or with the robot occurred. Finally, when the learning activity was over, the researcher entered the room and closed the learning activity application, thus ending the activity. Participants were able to say goodbye to the robot and were afterwards invited to fill in the post-tests about sustainability knowledge. At the end, some time was given to discuss their experience during the study, providing an open space for children to ask questions or share thoughts. Each session had a duration of $1$ hour, of which $30$ minutes were allocated to the activity of playing the {{M-Enercities}}{} game and the remaining $30$ minutes were dedicated to the pre- and post-tests.
\subsection{Results}
We present the learning gains for the different sustainable-education measures.
\subsubsection{Factual knowledge}
We compared the results from the pre- and post-tests across the $3$ study conditions to assess the learning outcomes in the participants' factual knowledge about sustainable learning. According to a mixed ANOVA, there was no significant difference between the study conditions for learning outcomes on factual knowledge about sustainable education, $F\left(2,59\right)=0.586$, $p=0.560$.
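An analysis of this form can be sketched with standard statistical tooling; the following minimal example uses the \texttt{pingouin} package and assumes the scores are stored in a long-format table (the file name and column names are hypothetical, not part of the study materials):
\begin{verbatim}
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per child and test time, with
# columns child (id), condition (empathic / non-empathic / no-robot),
# time (pre / post), and score (number of correct answers).
df = pd.read_csv("factual_knowledge.csv")

aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="child", between="condition")
print(aov[["Source", "DF1", "DF2", "F", "p-unc"]])
\end{verbatim}
Any mixed-model ANOVA implementation would serve equally well; the key design point is the combination of a between-subjects factor (condition) with a within-subjects factor (test time).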
The means for the pre-test were the following: $M=9.29$, $\mathrm{SD}=1.42$; $M=9.20$, $\mathrm{SD}=1.06$; and $M=9.33$, $\mathrm{SD}=1.02$, corresponding to the empathic robot, non-empathic robot, and no-robot conditions, respectively. The means for the post-tests were the following: $M=7.90$, $\mathrm{SD}=1.67$; $M=8.30$, $\mathrm{SD}=1.46$; and $M=8.38$, $\mathrm{SD}=1.24$, for the same conditions, respectively.
\subsubsection{Trade-offs and multiple perspectives}
The assessment of participants' understanding of trade-offs and perspectives was performed as outlined in Table~\ref{tab:measure2}.
\paragraph*{Trade-offs (or number of solutions)}
We ran a mixed ANOVA to analyze whether there were differences in the number of solutions chosen by the participants across the $3$ study conditions to solve the sustainable exercise problems. We took into account their performance in the pre-test compared to their performance in the post-test, as well as the condition they were allocated to. Results showed no significant differences across conditions when comparing the results from the pre- and post-tests, $F\left(2,60\right)=1.726$, $p=0.187$.
\paragraph*{Multiple perspectives (or number of arguments)}
The number of arguments mentioned by the participants to justify their solutions was the measure of multiple perspectives in solving a sustainable problem. Results show that the number of arguments did not change significantly, $F\left(2,493\right)=0.925$, $p=0.402$, when participants learned with the empathic robotic tutor compared to the other study conditions ($M=1.05$, $\mathrm{SD}=0.19$; $M=1.00$, $\mathrm{SD}=0.20$, for pre- and post-tests, respectively).
\subsubsection{Personal values}
We performed verbal behavior analysis across the $3$ study conditions to measure personal values, considering \textit{scores comment}, \textit{in-depth discussions}, and \textit{meaningful conversations} (see the coding scheme in Section~\ref{ch:personal_values} for more details). A Chi-squared test showed a statistically significant association between the conditions of the study and how children exchanged personal values about sustainability, $\chi^{2}\left(4\right)=9.375$, $p=0.05$, with the strength of the relationship given by Cram\'{e}r's V, $\phi_{c}=0.109$, revealing a medium effect. We performed a post-hoc analysis of the contingency tables to understand in which study conditions the exchanges of personal values were statistically significant \cite{beasley1995multiple}.
\paragraph*{Scores comment}
We found that participants in the empathic robotic tutor condition commented on scores statistically significantly less compared to the other study conditions, $p=0.01$. Thus, participants commented less on the scores in the condition with the empathic robotic tutor ($64.52\%$) compared to the condition with the non-empathic robotic tutor ($77.97\%$) and the no-robot condition ($75.00\%$), as illustrated in Fig.~\ref{Fig:scores_short}. The abovementioned results are significant at $p=0.016$ according to the residual-analysis procedure.
\paragraph*{In-depth discussions}
No significant result was found for in-depth discussions between study conditions, \emph{p} > 0.05.
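The omnibus test and the cell-wise post-hoc procedure used above can be sketched as follows. The contingency counts are placeholders (only percentages are reported here), and the residual computation follows the standard adjusted standardized residuals formula underlying the cited procedures:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder 3x3 table of utterance counts: rows = empathic /
# non-empathic / no-robot, columns = scores comment / in-depth /
# meaningful conversations (not the real study counts).
obs = np.array([[100., 16., 39.],
                [ 92., 12., 14.],
                [ 90.,  8., 22.]])

chi2, p, dof, exp = chi2_contingency(obs)
n = obs.sum()
v = np.sqrt(chi2 / (n * (min(obs.shape) - 1)))   # Cramer's V

# Adjusted standardized residuals: cells beyond ~|1.96| drive the
# overall association.
row = obs.sum(axis=1, keepdims=True) / n
col = obs.sum(axis=0, keepdims=True) / n
adj_res = (obs - exp) / np.sqrt(exp * (1 - row) * (1 - col))
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}, Cramer's V = {v:.3f}")
print(adj_res.round(2))
\end{verbatim}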
\paragraph*{Meaningful conversations}
By using the adjusted standardized residuals method of analysis \cite{garcia2003cellwise}, we discovered significant differences in meaningful conversations between study conditions. When children learned with the empathic robotic tutor, more meaningful conversations emerged ($25.16\%$, $p=0.013$), followed by the no-robot condition ($18.33\%$, $p=0.012$), while the fewest meaningful conversations occurred when children learned with the non-empathic robotic tutor ($11.86\%$, $p=0.016$) (see Fig.~\ref{Fig:meaningful_short}).
\begin{figure*}[!tb] \centering \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{scores_short} \caption{Score comments}\label{Fig:scores_short} \end{subfigure}\hspace{5pt}% \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=\columnwidth]{meaningful_short} \caption{Meaningful conversations}\label{Fig:meaningful_short} \end{subfigure} \caption{Results for comments and meaningful conversations about sustainable development during the short-term learning activity across the 3 study conditions. Results are presented in frequencies and significant results are represented with the symbol $*$, $p<0.05$.} \label{Fig:st_results2} \end{figure*}
\subsection{Discussion}
This section discusses the results regarding the learning gains across the $3$ study conditions for this short-term study in a school classroom.\\
\noindent%
{\bf Children learned to have meaningful conversations about sustainability and worried less about scores when learning with an empathic robot.}
\noindent%
Sustainable development education is about engaging in deep discussions that consider the existence of \emph{``complexity, uncertainty, risk and necessity''} to solve sustainability-related problems \cite{fior2010children}, and the results showed that this was successfully accomplished when children learned with a robot with empathy competencies. However, when children interacted with a robot without empathy, or without a robot at all, they seemed to be more concerned about passing levels (the traditional way of playing any game) instead of engaging in dialogue about sustainable education. We emphasize that the tutoring behaviors in the empathic and non-empathic conditions were the same, i.e., the sustainable learning dialogue that the robots had was similar in both study conditions. This makes the result particularly important, since it shows that empathic competencies in a robot impacted the way children engaged in the learning process, partially supporting our first study hypothesis. In fact, engaging in meaningful conversations implies that children share their personal values by objecting or taking a stance that generates discussion about sustainability while playing the game. The empathic robotic tutor fostered and motivated the children to engage in this type of dialogue as a way to increase their knowledge about sustainable education. Future studies should include a more in-depth analysis of educational interactions, specifically related to the emergence of educational dialogues (e.g., whether they are facilitated by the robot or instigated by the children themselves).\\
\noindent%
{\bf No other impact on sustainable development learning was found.}
\noindent%
Results from the multiple-choice questionnaire on factual knowledge about sustainability did not present significant results.
Additionally, the writing exercise in which children were invited to choose solutions to a problem related to sustainable development, and to argue for their options for solving it, also did not show significant results. These results might be a reflection of the short-term interaction with the robot. In fact, learning takes time \cite{fisher1981teaching}, especially when we consider sustainable education, a challenging curricular topic that is complex to learn \cite{moore2005barriers}. The short-term interaction that students had with the {{M-Enercities}}{} game, which was their learning environment, can also help explain the lack of learning gains. Indeed, {{M-Enercities}}{} enables students to explore the virtual world of the game in an unrestricted way. Because students are free to open only the game menus that seem interesting for the game action they want to perform, they may not open all of the game menus and may thus be exposed to only part of the overall knowledge that the game can offer. Therefore, given the short-term nature of the interaction with the learning environment and with the robotic tutor, children would benefit from a more extended period of interaction in which they would be exposed to more learning content. This aspect is explored in the long-term study described in Section~\ref{ch:long_term_study}.
\section{Long-term study}
\label{ch:long_term_study}
A descriptive long-term study was conducted in order to investigate the learning outcomes of students who learned in groups with an empathic robotic tutor over an extended period of time in school. To achieve our goal, we deployed a robot with empathy capabilities for 2 consecutive months in a school setting (4 sessions, 1 session every other week) to teach small groups of students about sustainable education, using {{M-Enercities}}{} as the collaborative learning environment. To sustain the achievements of the children across these weeks, the robotic tutor would recall the learnings and difficulties of previous sessions upon starting each learning session, thus ensuring reflection on previous acquisitions. This study relates to Hypothesis~$2$, explained in Section~\ref{Sub:Hypothesis}.
\subsection{Sample}
A total of 20 children ($M=13.78$, $\mathrm{SD}=0.65$, $9$ female) participated in the study. Due to technical problems, one session was excluded and the final sample consisted of $18$ children. The results presented for this study exclude the session with technical problems.
\subsection{Procedure}
Although this was a different study, the procedure is similar to the one presented in Section~\ref{ch:procedure_st}; here we present only the variations in the procedure. Participants filled in tests about sustainable education at three time periods: (1) at \emph{baseline}, to measure their initial knowledge on sustainable development before interacting with the robot and with the learning environment; (2) at the \emph{end of the first collaborative session}, to measure learning achievements after one interaction with the robotic tutor, which is typical of many studies in the \ac{HRI} field; and (3) at the \emph{end of the 2-month period}, to be able to compare learning achievements and understand the learning curve after a long-term interaction with the empathic robotic tutor. Learning sessions were performed once every other week, i.e., two sessions per month, for a total of four sessions over two consecutive months.
Each session lasted about $30$ minutes, with the first and last sessions taking longer as the assessments of sustainable education were applied in these sessions. The session dynamics were organized with the school teachers in order to minimize disturbances to the usual daily activities that children are involved in while in school.
\begin{figure*}[!tb] \centering \begin{subfigure}{0.4\columnwidth} \centering \includegraphics[width=\columnwidth]{facts} \caption{}\label{Fig:facts} \end{subfigure}\hspace{20pt} \begin{subfigure}{0.4\columnwidth} \centering \includegraphics[width=\columnwidth]{gameplay} \caption{}\label{Fig:logs} \end{subfigure} \caption{Results of the long-term study: \subref{Fig:facts} factual knowledge achievements; \subref{Fig:logs} actions performed by students in the learning environment across the $4$ learning sessions with the empathic robotic tutor. Results were significant for the upgrade and skip-turn actions.} \label{Fig:long-term} \end{figure*}
\subsection{Results}
In this section we present the results of the long-term study with an empathic robot in a school classroom.
\subsubsection{Factual knowledge}
Factual knowledge learning about sustainability was analyzed using Friedman's test, and students' achievements in sustainability education showed a statistically significant difference over time, $\chi^2\left(2\right)=15.464$, $p<0.001$, with a Kendall's W of $.43$ indicating a moderate effect \cite{tomczak2014need}. Post-hoc analysis with Wilcoxon signed-rank tests was conducted with a Bonferroni correction applied, revealing a significant difference when comparing the baseline ($M=9.00$, $\mathrm{SD}=1.41$) to the short-term learning results ($M=6.89$, $\mathrm{SD}=1.28$), $Z=-3.344$, $p=0.001$, $r=-0.56$, and when comparing the baseline with the long-term learning results ($M=7.28$, $\mathrm{SD}=1.23$), $Z=-3.084$, $p=0.002$, $r=-0.51$. No other statistically significant result was found, $p>0.05$. From Fig.~\ref{Fig:facts}, we can see that participants' knowledge about sustainability topics started high and decreased after interacting with the robot in the pedagogical activity, albeit with a slight increase in the long term, possibly showing a tendency to return to the baseline. This result possibly reflects normative patterns in children's learning, in which they question previously accommodated knowledge about the topics.
\subsubsection{Trade-offs and multiple perspectives}
Similar to the analysis performed for the short-term evaluation, we evaluated the trade-offs and multiple perspectives about sustainability according to the \emph{number of solutions} proposed to solve a given sustainable dilemma, and the \emph{number of arguments} considered to justify the solutions. We explain the results below.
\paragraph*{Trade-offs (or number of solutions)}
We analyzed the number of solutions that participants considered possible to solve the given environmental problem. Participants considered more options over time; however, this increase was not significant, $\chi^2\left(2\right)=1.125$, $p>0.05$.
\paragraph*{Multiple perspectives (or number of arguments)}
Although there was a slight increase in the number of perspectives considered by participants, Friedman's test showed that this was not statistically significant, $\chi^2\left(2\right)=3.375$, $p>0.05$.
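The repeated-measures analyses above can be sketched with standard tooling; the score arrays below are random placeholders standing in for the real per-child data:
\begin{verbatim}
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(1)
# Placeholder scores: one entry per child at baseline, after the first
# session, and at the end of the two-month period (not the real data).
baseline = rng.integers(7, 12, size=18)
short    = baseline - rng.integers(1, 4, size=18)
long_    = short + rng.integers(0, 3, size=18)

stat, p = friedmanchisquare(baseline, short, long_)
print(f"Friedman chi2(2) = {stat:.2f}, p = {p:.4f}")

# Post-hoc Wilcoxon signed-rank tests, Bonferroni-corrected over the
# three pairwise comparisons (alpha = 0.05 / 3).
pairs = {"baseline vs short": (baseline, short),
         "baseline vs long":  (baseline, long_),
         "short vs long":     (short, long_)}
for name, (a, b) in pairs.items():
    w, pw = wilcoxon(a, b)
    print(f"{name}: W = {w:.1f}, p = {pw:.4f}")
\end{verbatim}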
\subsubsection{Personal values}
A repeated-measures ANOVA with a Greenhouse-Geisser correction determined that mean personal values did not differ significantly between the first and the last learning session with the empathic robotic tutor, $F\left(1,1.756\right)=3.530$, $p=0.061$. Since personal values were measured using behavioral analysis of the $4$ learning sessions, we have no baseline result (the baseline comprises assessments conducted prior to the start of the intervention).
\begin{figure}[!tb] \centering \includegraphics[width=0.7\columnwidth]{memo} \caption{Example of a MemoLine filled out by one of the children and adapted to the context of the present study. Children were asked to use pencils with different colors to express (1) how hard/easy it was for them to play the game (red signals ``I did not understand what I should do in the game'' and green signals ``I understood what I should do in the game''), and (2) how much the robotic tutor helped them understand how to play the game (purple signals ``The robotic tutor did not help me understand the game'' and yellow signals ``The robotic tutor helped me understand how to play the game''). The MemoLine should be read in a timeline manner; the line in the center marks the middle point of the sessions in order to situate children in their assessment. The left side of the line concerns Sessions~1 and 2 and the right side concerns Sessions~3 and 4.} \label{Fig:memo} \end{figure}
\subsubsection{Game-play}
The behavior (or actions) of the participants during game-play also served as a measure for analyzing sustainability learning. By using the game logs, we were able to extract the actions children performed during the game, representing where they invested and the dynamics of their game-play. The exact McNemar's test showed that participants' actions related to applying policies, $p=1.00$, and performing constructions, $p=0.125$, did not differ statistically between the first and the last session. However, participants performed statistically significantly more construction upgrades from the first to the last session, $p<0.001$, and they also skipped more turns (representing the passage of time in the game, as a way to benefit from the city's resources), $p<0.001$ (see Fig.~\ref{Fig:logs}). This means that the proportion of upgrades and skipped turns was higher in the last session compared with the first learning session. In order to better understand the changes in game-play behavior, we analyzed how helpful the robot was towards a better understanding of how to play {{M-Enercities}}{}. For this, we used MemoLine, a retrospective evaluation method dedicated to children that asks them to recall previous experiences with a given product or application \cite{vissers2013memoline}. See Fig.~\ref{Fig:memo} for an example of a MemoLine filled in by a participant. A MemoLine was given to all participants at the end of the long-term study. According to its developers, MemoLine should be evaluated by comparing the areas colored by children in a timeline perspective, in order to extract patterns of use. Our results indicated consistency in the responses: most MemoLines showed that children were slightly confused by the game and did not find the robotic tutor particularly helpful during the initial sessions. However, as time passed, they perceived more help from the robotic tutor and could better understand the game itself, revealing mastery in game-play.
This result is representative of the overall sample of the study, since 18 children out of 20 filled in the MemoLine in the same way. Taken together, the participants' change in game behaviors seems to be in line with their better understanding of the game dynamics, an understanding that was guided by the interaction with the robotic tutor. More importantly, understanding the game requires mastering concepts of sustainable education, which can explain the changes in game-play.
\subsection{Discussion}
This section presents the discussion of the learning gains across a long-term study of $4$ sessions distributed over a period of $2$ consecutive months in school.\\
\noindent%
{\bf No learning gains were found after children interacted with the empathic robotic tutor for long periods of time.}
\noindent%
The overall results from the long-term study seem to show that although some variations in the learning outcomes of students occurred, those differences were not deemed significant after repeated educational interactions with the empathic robotic tutor. Although research about null results in learning gains with technology use is still scarce, a meta-analysis conducted by \citeN{wouters2013meta} studied factors that impacted learning gains when students used serious games for learning. The authors concluded that more learning occurred if (1) learning sessions were complemented with other instruction methods, (2) multiple training sessions were involved, and (3) learners worked in groups \cite{wouters2013meta}. Although our study involved multiple learning sessions and occurred in a group educational context, the learning sessions with the robotic tutor were not supplemented with other instruction methods. This deserves further investigation to understand the role of robots in learning contexts and their long-term effects on students' learning gains. Similarly, \citeN{hew2007integrating} discussed factors related to successful technology integration in schools, such as the reconsideration of assessment metrics. The authors note that \textit{``because curriculum and assessment are closely intertwined, there is a need to either completely reconsider the assessment approaches when technology is integrated into the school curriculum, or consider more carefully how the use of technology can meet the demands of standards-based accountability''}. This suggests that the metrics used to measure students' learning gains when technology (e.g., robots) is included require further investigation to meet curricular goals. In the case of our study, despite the support provided to the students, both by the robot and stimulated by {{M-Enercities}}{}, it did not measurably impact students' learning of the sustainability-education content present in the formal school curriculum. Another explanation for the lack of learning progress is provided by \citeN{gerber2001teacher} and concerns the lack of clearly defined roles for educational aids, which can hinder learning gains due to an undefined presence in school. A result presented in a previous publication related to our learning scenario showed that students assigned the role of a classmate to the robot, despite being explicitly instructed that they would be learning with a robotic tutor \cite{alves2016role}. Therefore, we may be facing a need to develop precise design guidelines that define specific roles for robots in the educational sector, with the goal of increasing learning outcomes.
In addition to this, more research is needed on designing and including robots for education. For example, studies have shown that the mere physical presence of a robot can lead to learning gains \cite{leyzberg2012physical, kennedy2015robot}; however, the variables that affect learning gains are not yet established. Furthermore, the design of robots for education should be tailored to the time required to learn certain curricular concepts, and we hypothesize that sustainable learning can be a case in which more learning time is required. The null results from our long-term study seem to indicate that long-term \ac{HRI} installations for education require more investigation and may even require a change in the interaction design between the robot and children. Our work introduced this discussion, highlighting the need for a better understanding of the long-term deployment of social robots in the educational sector.\\
\noindent%
{\bf Game-play behavior changes over time due to a better understanding of the game guided by the robotic tutor.}
\noindent%
The way children played the game about sustainable education changed over time, and this change seems to be related to the interaction with the robotic tutor. Indeed, there was a statistically significant result found in the game-play behavior of children during the long-term study. Children's actions in the game showed that, over time, they performed more upgrades to their city and skipped more turns (representing the passage of time in the game, such as days passing by). By performing upgrades, the city can become more sustainable, and by skipping some turns the players allow the structures and upgrades they have chosen to implement to take their full effect before advancing to the next level. This behavior thus seems to indicate a more thoughtful design of the city, which matches Antle's design principles for sustainability games \cite{antle2014}. These changes in game-play are not trivial, as children needed to move away from the traditional competitiveness of passing levels (a so-called traditional mind-set of game-play) to become concerned about spending less money and taking more advantage of the resources from previous constructions. This seems to show that the change in game-play behavior goes hand in hand with the perceived ease of understanding and playing the game, guided by the interactions with the robotic tutor.
\section{General discussion and conclusions}
\label{ch:conclusion}
In this paper, we presented a novel educational scenario for social robots, in which a group of children interacts with an autonomous robot in a serious collaborative game. The goal of the interaction was to infer learning outcomes with regard to environmental sustainability and the associated trade-offs involved when designing a city. We conducted a short-term study that addressed the effects of the empathic robot and a long-term study that addressed the long-term deployment of the robot in school. The short-term study compared three conditions: empathic robot, non-empathic robot, and no robot. The results showed no significant differences in the majority of learning outcomes; however, there was an increase in meaningful conversations with the empathic robotic tutor, which is a stated goal for collaborative learning scenarios targeting sustainability. During the long-term study, the empathic robot was deployed for $2$ months in school. Results showed no significant change in learning gains over time.
Additionally, a change in game-play behaviors was observed, in which children performed more game actions towards sustainability over the sessions. The lack of learning progress may be due to several aspects, such as the quality of the interaction, the role of robots in school, and group dynamics. This reflects the need to conduct additional research on group interactions between humans and robots for educational purposes. Summarizing, the highlights of our research are:
\begin{enumerate}
\item We designed, developed, and evaluated a robot tutor for education that autonomously interacted with students in the real-world environment of a school for $2$ months.
\item We contribute to the field of group interaction studies between humans and robots by framing the educational context as a collaborative group learning scenario.
\item We concluded that an autonomous robot with empathy (compared with a robot without empathy, or learning without a robot) fosters meaningful discussions about sustainability between students, which is a learning outcome in sustainability education.
\item We concluded that long-term educational interactions with an empathic robot did not significantly impact learning gains, although there was a change in game actions towards achieving more sustainability during game-play.
\end{enumerate}
\subsection{Limitations and future work}
Empathy is a complex construct that is highly dependent on the content of what people say to each other. Our empathic robot was able to perform contingent behaviors that convey empathy while receiving limited input from the children, but it had no access to their verbal discussions. For effective empathic robotic tutors to operate in group learning environments, they should be able to understand what children say and personalize their empathic responses accordingly. To this end, developments in speech recognition for children are needed to build better empathic interactions. We also did not observe as many learning gains as expected, highlighting the importance of conducting more research on collaborative group learning environments with robots. Aspects such as the time needed for a learning gain to occur must be considered when deploying a robot in a school setting. Additional qualitative research is also needed to understand the factors that favor learning gains and the factors that can hinder them. Regarding the wider use of robotic tutors in learning, there is already a large body of research investigating robots for second language acquisition, handwriting skills, and even the understanding of other complex curricular topics, such as chemistry and wind formation. Although this reflects positive and promising directions for \ac{HRI} in education, more educational scenarios need to be considered to understand how robots can be used for the best impact on learning. This work has provided a corpus of reflection for future research, leading to questions such as ``which variables lead to learning gains when using a robot for collaborative group education?'' and ``what role can a robot have in school that fosters learning gains?'' With this work, we have begun to explore the potential of robots in group learning, bringing attention to empathy as an important competence for a robot to have when interacting with students.
\section*{Acknowledgements} This work was partially supported by national funds through Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia (FCT) with reference UID/CEC/50021/2013, through the project AMIGOS, grant n. PTDC/EEISII/7174/2014, the Carnegie Mellon Portugal Program and its Information and Communications Technologies Institute, under project CMUP-ERI/HCI/0051/2013, and by the EU-FP7 project EMOTE under grant agreement no. 317923. P. Alves-Oliveira acknowledges an FCT PhD grant, ref. SFRH/BD/110223/2015. We show our gratitude to the teachers, students, and school staff of Escola Quinta do Marqu\^{e}s (Oeiras, Portugal) for their involvement in the studies. We also thank Daniel Silva for collaborating in the coding of the verbal behavior. The authors are solely responsible for the content of this publication. It does not represent the opinion of the European Commission (EC), and the EC is not responsible for any use that might be made of data appearing therein. \bibliographystyle{ACM-Reference-Format}
\section{Optimal Path for Example~\ref{ex:4node-optimal}}
Since the graph $G_{\mathsf e}$ is strongly connected, the optimal path is independent of the initial state. We can also neglect a finite prefix of any path with respect to the computation of the optimal limit-average collected rewards. Any path in the graph $G_{\mathsf e}$ can be (after neglecting a finite prefix) $(abc)^\omega$ or $(ad)^\omega$, or can be presented as $\pi = (abc)^{n_0}(ad)^{m_0}(abc)^{n_1}(ad)^{m_1}\ldots$ for unique sequences $\{n_k\}_k$ and $\{m_k\}_k$ where $n_k,m_k\in\mathbb Z_+$. In the following we focus on the limit-average cost function $c_{\sav}(\pi)$ and provide a path-independent lower bound which is also achievable by some paths in the graph. To study $c_{\sav}(\pi)$, we need to compute the $\limsup_{N\rightarrow\infty}$ of the fraction $c_{\ssum}^{(N)}(\pi)/(N+1)$. Since we intend to provide a lower bound for this quantity, we focus on the limiting behavior of the fraction only for the subsequence $\{\sum_{k=0}^{n_{\mathfrak a}}(3n_k+2m_k)-1\ |\ n_{\mathfrak a} = 1,2,\ldots\}$. The finite-horizon cost associated to $\pi$ for the horizon $N = \sum_{k=0}^{n_{\mathfrak a}}(3n_k+2m_k)-1$ is the following
\begin{align*}
c_{\ssum}^{(N)}(\pi) = g_0(\gamma,n_0,m_0)& +\sum_{k=1}^{n_{\mathfrak a}}\left[\gamma^2+2\gamma^{2m_{k-1}+3}+(3n_k-3)\gamma^3\right]\\
&+\sum_{k=1}^{n_{\mathfrak a}}\left[\gamma^3+\gamma^{3n_k+2}+(2m_k-2)\gamma^2\right],
\end{align*}
where $g_0(\gamma,n_0,m_0)$ is the polynomial that includes the terms generated by the prefix $(abc)^{n_0}(ad)^{m_0}$. Then
\begin{align*}
\frac{c_{\ssum}^{(N)}(\pi)}{N+1} &= \frac{\sum_{k=1}^{n_{\mathfrak a}} f(\gamma,n_k,m_k)}{\sum_{k=0}^{n_{\mathfrak a}}(3n_k+2m_k)} +\frac{g_0(\gamma,n_0,m_0)+2\gamma^{2m_0+3}-2\gamma^{2m_{n_{\mathfrak a}}+3}}{\sum_{k=0}^{n_{\mathfrak a}}(3n_k+2m_k)}
\end{align*}
where $f(\gamma,n_k,m_k) := (2m_k-1)\gamma^2+(3n_k-2)\gamma^3+2\gamma^{2m_k+3}+\gamma^{3n_k+2}$. The second fraction on the right-hand side goes to zero as $n_{\mathfrak a}\rightarrow\infty$, thus the limit of the fraction is the same as that of
\begin{align}
\frac{\sum_{k=1}^{n_{\mathfrak a}} f(\gamma,n_k,m_k)}{\sum_{k=1}^{n_{\mathfrak a}}(3n_k+2m_k)} = \frac{\sum_{i,j=1}a_{ij}^{(n_{\mathfrak a})}f(\gamma,i,j)}{\sum_{i,j=1}a_{ij}^{(n_{\mathfrak a})}(3i+2j)} = \sum_{i,j=1}\frac{a_{ij}^{(n_{\mathfrak a})}(3i+2j)}{\sum_{i_1,j_1=1}a_{i_1j_1}^{(n_{\mathfrak a})}(3i_1+2j_1)}\frac{f(\gamma,i,j)}{3i+2j},\label{eq:convex_comb}
\end{align}
where $a_{ij}^{(n_{\mathfrak a})}$ is the number of times the pair $(i,j)$ appears in the finite sequence $\{(n_k,m_k)\ |\ k=1,2,\ldots,n_{\mathfrak a}\}$. This equality shows that \eqref{eq:convex_comb} is a convex combination of the fractions $f(\gamma,i,j)/(3i+2j)$; thus $c_{\sav}(\pi)$ is lower bounded by
\begin{equation}
\label{eq:lower_bound_cost}
c_{\sav}(\pi)\ge \inf\left\{\frac{f(\gamma,i,j)}{3i+2j}\,\bigg|\,i,j\in\mathbb Z_+\right\}.
\end{equation}
Note that $f(\gamma,i,j)/(3i+2j)$ is exactly the cost of the path $\pi$ with the constant sequences $n_k=i$ and $m_k=j$ for all $k$. Therefore, inequality \eqref{eq:lower_bound_cost} states that for the graph of Example~\ref{ex:4node-optimal}, the cost of non-periodic paths cannot be less than the minimum cost of periodic paths. It remains to compare a countable number of polynomials and find their minimum over the interval $\gamma\in(0,1)$. Such a comparison reveals that there are two support polynomials, with $(i,j)$ equal to $(1,1)$ and $(2,1)$.
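This comparison can also be checked numerically. The following sketch (ours, not part of the proof) scans $\gamma$ over a grid, minimizes $f(\gamma,i,j)/(3i+2j)$ over a finite range of $(i,j)$, and also accounts for the limiting value $\gamma^3$ discussed next:
\begin{verbatim}
import numpy as np

def ratio(gamma, i, j):
    # f(gamma, i, j) / (3i + 2j) from the derivation above
    f = ((2*j - 1) * gamma**2 + (3*i - 2) * gamma**3
         + 2 * gamma**(2*j + 3) + gamma**(3*i + 2))
    return f / (3*i + 2*j)

for gamma in np.linspace(0.05, 0.95, 19):
    grid = [((i, j), ratio(gamma, i, j))
            for i in range(1, 60) for j in range(1, 60)]
    (i, j), val = min(grid, key=lambda t: t[1])
    # gamma**3 is the limit of ratio(gamma, i, j) for fixed j, i -> infinity
    best = min(val, gamma**3)
    print(f"gamma = {gamma:.2f}: argmin (i,j) = ({i},{j}), min = {best:.4f}")
\end{verbatim}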
There is also one limit polynomial, in which $j$ is fixed and $i$ goes to infinity. This polynomial, $\gamma^3$, is not present in \eqref{eq:lower_bound_cost} explicitly but is generated by the path $(abc)^\omega$, which we set aside from the beginning. This completes the proof of the computation in Example~\ref{ex:4node-optimal}.
\section{Missing Proofs from Section~\ref{section:FiniteHorizon}}
\medskip
The NP-hardness result (Theorem~\ref{thm:complexity-finite}) for $\gamma <1$ is based on the following lemma.
\begin{lemma}
For any finite graph $G = (V,E)$ with $|V| = n_{\mathsf v}$, $v\in V$ and $\gamma\in(0,1)$, we have that
\begin{equation}
\label{eq:Hamilt_finite}
R_{\ssum}^{n_{\mathsf v}-1}(v) \le \frac{\lambda\left(n_{\mathsf v}-(n_{\mathsf v}+1)\gamma+\gamma^{n_{\mathsf v}+1}\right)}{(1-\gamma)^2}.
\end{equation}
The equality holds if and only if the graph has a Hamiltonian path starting from $v$.
\end{lemma}
\begin{proof}
The definition of $\Last_\pi(t,v)$ implies that $\Last_\pi(t,v)\le t+1$. Then for any path $\pi$ with $|\pi| = n_{\mathsf v}-1$ and $\pi[0]=v$,
\begin{align*}
c_{\ssum}^{n_{\mathsf v}-1}(\pi) & = \sum_{t=0}^{n_{\mathsf v}-1}\gamma^{\Last_\pi(t,\pi[t])}\ge \sum_{t=0}^{n_{\mathsf v}-1} \gamma^{t+1} = \frac{\gamma(1-\gamma^{n_{\mathsf v}})}{1-\gamma}\\
\Rightarrow& r_{\ssum}^{n_{\mathsf v}-1}(\pi) \le \frac{n_{\mathsf v}\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma} \frac{\gamma(1-\gamma^{n_{\mathsf v}})}{1-\gamma},
\end{align*}
which proves \eqref{eq:Hamilt_finite}. If the path $\pi$ is Hamiltonian, then it does not contain any repeated node and $\Last_\pi(t,\pi[t]) = t+1$; thus $\pi$ is optimal and achieves the upper bound in \eqref{eq:Hamilt_finite}. For the other direction, we show that a path $\pi$ attaining equality is a Hamiltonian path. Suppose that this is not the case. Since the length of $\pi$ is $n_{\mathsf v}-1$, $\pi$ contains at least one repeated node. Thus for at least one index $t$ we have $\Last_\pi(t,\pi[t])< t+1$, and then $\sum_{t=0}^{n_{\mathsf v}-1}\gamma^{\Last_\pi(t,\pi[t])}> \sum_{t=0}^{n_{\mathsf v}-1} \gamma^{t+1}$. Therefore, $r_{\ssum}^{n_{\mathsf v}-1}(\pi)$ is strictly less than the right-hand side of \eqref{eq:Hamilt_finite}, which contradicts the choice of the path $\pi$.
\end{proof}
\begin{proof}[\textbf{Theorem~\ref{thm:complexity-finite}}]
The case $\gamma\in (0,1)$ is proved by reducing the Hamiltonian path problem to the finite horizon value decision problem. Given a graph $G = (V,E)$ with $|V| = n_{\mathsf v}$, we fix some constants $\lambda$ and $\gamma$. By the inequality shown in \eqref{eq:Hamilt_finite}, $G$ contains a Hamiltonian path if and only if for some $v \in V$ we have $R_{\ssum}^{n_{\mathsf v}-1}(v) \geq \lambda\left[n_{\mathsf v}-(n_{\mathsf v}+1)\gamma+\gamma^{n_{\mathsf v}+1}\right]/(1-\gamma)^2$. This completes the reduction. The proof for $\gamma =1$ is similar to the proof of Theorem~\ref{theorem:GeneralFiniteNP}.
\end{proof}
\section{Missing Proofs from Section~\ref{section:InfiniteHorizon}}
\noindent
\textbf{Lemma~\ref{lemma:Hamilt}}
For any graph $G = (V,E)$ and any node $v_0\in V$, $R_{\sav}(v_0)$ is bounded as
\begin{equation}
\label{eq:boundsAppendix}
\lambda\frac{1-\gamma^p}{1-\gamma}\le \,R_{\sav}(v_0)\,\le \lambda\frac{1-\gamma^{n_{\mathsf v}}}{1-\gamma},
\end{equation}
where $n_{\mathsf v} = |V|$ and $p$ is the length of the longest simple cycle in $G$.
\begin{proof}
Let $\pi$ be an infinite path and denote by $\pi_N = \pi[0\ldots N]$ the prefix of $\pi$ of length $N$. We apply the inequality of arithmetic and geometric means for $n\in\mathbb N$,
\begin{equation*}
\frac{a_1\!+\!a_2\!+\!\ldots\!+\!a_n}{n}\ge \sqrt[n]{a_1a_2\ldots a_n}, \quad a_1,a_2,\ldots,a_n\ge 0,
\end{equation*}
where the equality holds if and only if all $a_i$'s are equal. Then we have
\begin{align*}
c_{\ssum}^{(N)}(\pi) &\ge (N\!+\!1)\gamma^{\frac{1}{N+1}\sum_{t=0}^{N}\Last_\pi(t,\pi[t])} = (N\!+\!1)\gamma^{\sum_v y_v/(N\!+\!1)},
\end{align*}
where $y_{v} = 1+\max\{j\in\mathbb N \mid j\le N,\,\, \pi[j] = v\}$ if $v$ is visited by $\pi_N$, otherwise $y_v = 0$; the last equality holds because, for each node $v$, the visitation gaps $\Last_\pi(t,\pi[t])$ telescope to $y_v$. This implies that $y_{v}\le N+1$ and, since $\gamma<1$, $c_{\ssum}^{(N)}(\pi)\ge (N+1)\gamma^{n_{\mathsf v}(N+1)/(N+1)} = (N+1)\gamma^{n_{\mathsf v}}$. Dividing both sides by $(N+1)$ and taking $\limsup_{N\rightarrow\infty}$ gives a general path-independent lower bound $\gamma^{n_{\mathsf v}}$ for $c_{\sav}(\pi)$, which also implies $C_{\sav}(v_0) = \inf_{\pi}\left\{c_{\sav}(\pi)\ |\ \pi[0] = v_0\right\} \ge \gamma^{n_{\mathsf v}}$. This proves the upper bound in \eqref{eq:boundsAppendix}. The particular path $\pi_p = (v_1v_2\ldots v_p)^\omega$, with $v_1v_2\ldots v_pv_1$ being the longest simple cycle, gives the bound $R_{\sav}(v_0) \ge r_{\sav}(\pi_p)$, which is the lower bound in \eqref{eq:boundsAppendix}.
\end{proof}
\begin{proposition}
Strategies based only on last visitation records do not suffice for the infinite-horizon optimal path problem.
\end{proposition}
\begin{proof}
Recall that for the graph $G_{\mathsf e} = (V_{\mathsf e},E_{\mathsf e})$ in Figure~\ref{fig:graph1} and $\gamma=0.26$ the path $(abcabcad)^\omega$ is optimal. Upon visiting node $a$, the corresponding strategy chooses one of the nodes $b$ or $d$ depending on the \emph{number} of visits to $b$ since the last occurrence of $d$. We will now show that every strategy based \emph{only on the order of last visits} results in an ultimately periodic path whose loop is one of the cycles $(abc)^\omega$, $(ad)^\omega$, or $(abcad)^\omega$, and is hence not optimal. To this end, consider the graph $\widehat G_{\mathsf e} = (\widehat V_{\mathsf e},\widehat E_{\mathsf e})$ with set of states $\widehat V_{\mathsf e} = V_{\mathsf e} \times O$, where the set $O$ consists of the vectors of elements of $V$, of length less than or equal to $4$, that represent the order of last occurrences of the nodes $V$ in the traversed path. For example, $\langle a,c,b\rangle$ denotes the fact that the last occurrence of $a$ was after the last occurrence of $c$, which was after the last occurrence of $b$, and node $d$ has not occurred yet in the path. The edge relation $\widehat E_{\mathsf e} \subseteq \widehat V_{\mathsf e} \times \widehat V_{\mathsf e}$, where $(v,o) \rightarrow (v',o')$ implies $(v,v') \in E_{\mathsf e}$, updates the last occurrence order in the expected way, depending on the taken edge $(v,v')$.
For example, the unique path in $\widehat G_{\mathsf e}$ that corresponds to $(abcabcad)^\omega$ is \[\widehat\pi = (a,\langle\rangle)(b,o_1)(c,o_2)(a,o_3)(b,o_4)(c,o_5)(a,o_3)(d,o_4) \cdot \widehat{ \rho}^\omega,\] where \[\widehat{ \rho} = (a,o_6)(b,o_7)(c,o_8)(a,o_9)(b,o_{10})(c,o_{11})(a,o_{9})(d,o_{10})\] and \[ \begin{array}{l} o_1 = \langle a \rangle, o_2 = \langle b,a\rangle, o_3 = \langle c, b, a\rangle, o_4 = \langle a,c,b \rangle, o_5 = \langle b,a,c \rangle, o_6 = \langle d,a,c,b\rangle, o_7 = \langle a,d,c,b\rangle,\\ o_8 = \langle b,a,d,c\rangle, o_9 = \langle c,b,a,d\rangle, o_{10} = \langle a,c,b,d \rangle, o_{11} = \langle b,a,c,d \rangle. \end{array} \] Since there are occurrences of node $(a,o_3)$ in the path followed by different nodes, $(b,o_4)$ and $(d,o_4)$ respectively, and similarly for node $(a,o_9)$, followed by either $(b,o_{10})$ or $(d,o_{10})$, the path $\widehat\pi$ cannot be generated by a memoryless strategy in $\widehat G_{\mathsf e}$. In general, in $\widehat G_{\mathsf e}$ there are $5$ possible reachable states of the form $(a,o)$, namely, $(a,\langle\rangle)$, $(a,\langle c,b,a\rangle)$, $(a,\langle d, a\rangle)$, $(a,\langle d,a,c,b\rangle)$ and $(a,\langle c,b,a,d\rangle)$. A memoryless strategy in $\widehat G_{\mathsf e}$ (that is, a strategy that only records the order of last visits) must map each of these nodes to a unique successor node. Thus, such a strategy results in one of the paths $(abc)^\omega$, $abc(ad)^\omega$, $(abcad)^\omega$, $(ad)^\omega$, $ad(abc)^\omega$, or $(adabc)^\omega$. For $\gamma=0.26$ each of these paths has reward strictly smaller than that of $(abcabcad)^\omega$ and is hence not optimal. This concludes the proof. \end{proof} \subsection{Decay Profiles} \smallskip\noindent\textbf{General Decay Profiles.} In this section we assume the expected value of the reward generated at a node to be time independent. Given a node $v\in V$, the associated \emph{decay profile $\Gamma_v$} for the node is defined to be a strictly monotonically decreasing sequence $1, a_1, a_2, \dots$ (starting from $1$) converging to $0$. We denote the $i$-th element of the sequence as $\Gamma_v(i)$, and the sequence portion $\Gamma_v(i), \Gamma_v(i+1), \dots \Gamma_v(j)$ as $\Gamma_v[i..j]$. If the reward generated at a node $v$ was $\lambda$, then after $\Delta$ time units only $\lambda \cdot \Gamma_v(\Delta)$ of that reward remains if it is uncollected. Denote the sum of the elements of $\Gamma_v[i..j]$ as $\mysum\left(\Gamma_v[i..j]\right)$, that is $\mysum\left(\Gamma_v[i..j]\right) \ = \ \sum_{k=i}^j \Gamma_v(k)$. \noindent For a path $\pi = v_0,v_1,\ldots,$ (and under fixed decay profiles $\Gamma_v$ for the nodes in $V$) we define the finite horizon cumulative (expected) reward as \[ r_{\ssum}^{(N)}(\pi) \ =\ \sum_{t=0}^N \lambda(v_t) \cdot \mysum\left(\Gamma_{v_t}[0\, ..\, \Last_\pi(t,v_t)\! -\!1 ]\right). \] Note that the quantities $\lambda(v_t)\cdot \Gamma_{v_t}(0), \, \lambda(v_t)\cdot \Gamma_{v_t}(1), \ldots, \lambda(v_t)\cdot \Gamma_{v_t}( \Last_\pi(t,v_t) -1 )$ are the expectations of the uncollected rewards at node $v_t$ generated at times $t, t-1, \ldots, t- \Last_\pi(t,v_t) +1$ respectively, in analogy with the multiplicative discounting case \eqref{eq:rsum-orig}. We define the infinite horizon limit-average (expected) reward as for the multiplicative discounting in \eqref{eq:rav-orig}. Value decision problems are defined analogously to Definition~\ref{def:ValueDecision}.
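To make the definition concrete, the following minimal sketch (in Python; the helper names and the geometric profile are ours and serve only as an illustration) computes $\Last_\pi$ and the finite horizon reward $r_{\ssum}^{(N)}$ for a node-invariant $\lambda$ and a node-invariant decay profile. With $\Gamma(j) = \gamma^j$ it recovers the multiplicative-discounting reward of \eqref{eq:rsum-orig}.
\begin{verbatim}
def last(pi, t, v):
    # Last_pi(t, v): distance to the previous occurrence of v
    # strictly before position t; t + 1 if there is none.
    for j in range(t - 1, -1, -1):
        if pi[j] == v:
            return t - j
    return t + 1

def r_sum(pi, lam, profile, N):
    # Finite-horizon expected reward under a decay profile:
    # at step t we collect lam * sum(profile(0), ..., profile(Last-1)).
    total = 0.0
    for t in range(N + 1):
        L = last(pi, t, pi[t])
        total += lam * sum(profile(j) for j in range(L))
    return total

gamma, lam = 0.5, 1.0
profile = lambda j: gamma ** j   # multiplicative discounting
pi = list("adabcad")             # the running-example path
print(r_sum(pi, lam, profile, N=6))
\end{verbatim}
For $\gamma = 1$ (the constant profile, which is not a decay profile in the strict sense) the same loop returns $\lambda \sum_t \Last_\pi(t,\pi[t])$.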
The following theorem shows that the $r_{\ssum}$-value decision problem remains NP-hard under any decay profile (the proof is given below); and that the $r_{\ssum}$-values can be computed in EXPTIME (a version of the augmented weighted graph can be defined similarly to the multiplicative discounting case to compute the finite horizon reward value $R_{\ssum}^{(N)}(v)$). \begin{theorem} \label{theorem:GeneralFiniteNP} Let $G = (V,E)$ be a finite directed graph, and let the expected rewards and reward decay profiles be node independent (and non-zero). \begin{compactenum} \item The finite horizon $r_{\ssum}$-value decision problem is NP-complete in case $N$ is given in unary, and NP-hard otherwise. \item The value $R_{\ssum}^{(N)}(v)$ for $v\!\in\! V$ (given $N$ in binary) is computable in EXPTIME. \end{compactenum} \end{theorem} \begin{proof} Consider $N = \abs{V}-1$. Let $\beta = \lambda\cdot \sum_{i=0}^N \mysum\left(\Gamma[0\, ..\, i]\right)$. We reduce the problem of deciding the existence of a Hamiltonian path to the $r_{\ssum}$-value decision problem. Clearly if there exists a Hamiltonian path in $G$, then the answer to the $r_{\ssum}$-value decision problem is affirmative. Suppose the answer to the $r_{\ssum}$-value decision problem is affirmative. We show in this case that $G$ must have a Hamiltonian path. Consider any non-Hamiltonian path $\pi= v_0, \dots, v_N$ of length $N$. We show $r_{\ssum}^{(N)}(\pi) < \beta$. We have \[ \frac{\beta-r_{\ssum}^{(N)}(\pi)}{\lambda} = \sum_{i=0}^N a_i, \] where \[ a_i = \begin{cases} & 0 \quad \text{ if } v_i \text{ does not appear in } v_0, v_1, \dots, v_{i-1}\\ & \mysum\left(\Gamma[0 \,..\, i]\right) \ - \ \mysum\left(\Gamma[0\, ..\ \Last_\pi(i,v_i) -1 ]\right) \quad \text{ otherwise.} \end{cases} \] Observe that \[\mysum\left(\Gamma[0 \,..\, i]\right) \ - \ \mysum\left(\Gamma[0\, ..\ \Last_\pi(i,v_i) -1 ]\right) > 0\] in case $\Last_\pi(i,v_i) < i+1$, \emph{i.e.}\xspace, if $v_i$ has appeared in the path prefix $v_0, v_1, \dots, v_{i-1}$. Since we must have at least one node which appears twice in $\pi$, we have that at least one $a_i$ in $\sum_{i=0}^N a_i$ must be strictly positive. This concludes the proof. \end{proof}
In the infinite horizon case, a truncated weighted graph can be defined along the lines of $\widetilde G_K$ to compute $\epsilon$-approximate bounds on $R_{\sav}(v)$ (\emph{even if} $\sum_{i=0}^{\infty} \Gamma_v(i) = \infty$). In this case, given $\epsilon\! >\! 0$, the constant $K$ depends on the rate of decay of the sequences $\Gamma_v$. \smallskip\noindent\textbf{Two-Player Turn-Based Game Setting.} In this setting, the nodes of the graph are partitioned into player-1 and player-2 nodes, with the outgoing edges from a node being controlled by the respective player. The objective of player~1 (the collecting agent) is to maximize its reward ($r_{\ssum}^{(N)}(\pi)$ or $r_{\sav}(\pi)$) when facing an adversarial player~2 (as earlier, in this setting the collecting agent does not receive information about the actual generated reward instances). Plays and strategies are defined in the standard way (see \emph{e.g.}\xspace,~\cite{ZwickP96}).
It can be checked that the augmented weighted graphs of Sections~\ref{section:FiniteHorizon} and~\ref{section:InfiniteHorizon} also work in this game-based setting, and thus we obtain algorithms for computing $R_{\ssum}^{(N)}(v)$ and $\epsilon$-optimal $R_{\sav}(v)$ values by applying algorithms for shortest-path and mean-payoff games~\cite{Darmann2017,ZwickP96}. \section{Introduction}\label{sec:intro} Reward collecting problems on metric spaces are at the core of many applications, and are studied classically in combinatorial optimization under many well-known monikers: the traveling salesman problem, the knapsack problem, the vehicle routing problem, the orienteering problem, and so on. Typically, these problems model the metric space as a discrete graph whose nodes or edges constitute rewards, either deterministic or stochastic, and ask how to traverse the graph to maximize the collected rewards. In most versions of the problem, rewards are either fixed or cumulative. In particular, once a reward appears, it stays there until collection. However, in many applications, existing rewards may disappear (e.g., a customer changing her mind) or have more ``value'' if they are collected fast. We introduce the \emph{Robot Routing problem}, which combines the spatial aspects of traveling salesman and other reward collecting problems on graphs with stochastic reward generation and with the possibility that uncollected rewards may disappear at each stage. The robot routing problem consists of a finite graph and a reward process for each node of the graph. The reward process models dynamic requests which appear and disappear. At each (discrete) time point, a new reward is generated for the node according to a stochastic process with expectation $\lambda$. However, at each point, a previously generated reward disappears with a fixed probability $\delta$. When the node is visited, the entire reward is collected. The optimization problem for robot routing asks, given a graph and a reward process, what is the optimal (or $\epsilon$-optimal) path a robot should traverse in this graph to maximize the expected reward? As an illustrative example for our setting, consider a vendor planning her path through a city. At each street corner, and at each time step, a new customer arrives with expectation $\lambda$, and an existing customer leaves with probability $\delta$. When the vendor arrives at the corner, she serves all the existing requests at once. We ignore other possible real-world features and behaviors, e.g., customers leaving the queue when it is long. How should the vendor plan her path? Similar problems can be formulated for traffic pooling \cite{Wongpiromsarn13}, for robot control \cite{HSCC2017}, for patrolling \cite{HoshinoU16}, and many other scenarios. Despite the usefulness of robot routing in many scenarios involving dynamic appearance and disappearance of rewards, algorithms for its solution have not, to the best of our knowledge, been studied before. In this paper, we study two optimization problems: the \emph{value computation problem}, which asks for the maximal expected reward over a \emph{finite} or \emph{infinite} horizon, and the \emph{path computation problem}, which asks for a path realizing the optimal (or $\epsilon$-optimal) reward. The key observation for solving these problems is that the reward collection can be formulated as a discounted sum problem over an extended graph, using the correspondence between stopping processes and discounted sum games.
For finite horizon robot routing we show that the value \emph{decision} problem (deciding if the maximal expected reward is at least a certain amount) is NP-complete when the horizon bound is given in unary, and the value and an optimal path can be computed in exponential time using dynamic programming. For the infinite horizon problem, where the accumulated reward is defined as the long run average, we show that the value decision problem is NP-hard if the probability of a reward disappearing is more than a threshold dependent on the number of nodes. We show that computing the optimal long run average reward can be reduced to a 1-player mean-payoff game on an \emph{infinite graph}. By solving the mean-payoff game on a finite truncation of this graph, we can approximate the solution up to arbitrary precision. This gives us an algorithm that, for any given $\epsilon$, computes an $\epsilon$-optimal path in time exponential in the size of the original graph and logarithmic in $1/\epsilon$. Unlike in finite mean-payoff 2-player games, strategies that generate optimal paths for robot routing can require memory, even in the 1-player setting. For the \emph{non-discounted} infinite horizon problem (that is, when rewards do not disappear) we show that the optimal path and value problems are solvable in polynomial time.\looseness=-1 \vspace*{-4mm} \subparagraph*{Related work} The robot routing problem is similar in nature to a number of other problems studied in robot navigation, vehicle routing, patrolling, and queueing network control, but to the best of our knowledge it has not been studied so far. There exists a plethora of versions of the famous traveling salesman problem (TSP) which explore the trade-off between the cost of the constructed path and its reward. Notable examples include the orienteering problem~\cite{VansteenwegenSO11}, in which the number of locations visited in a limited amount of time is to be maximized, vehicle routing with time-windows~\cite{Kolen87} and deadlines-TSP~\cite{BansalBCM04}, which impose restrictions or deadlines on when locations should be visited, as well as discounted-reward-TSP~\cite{BlumCKLMM07} in which soft deadlines are implemented by means of discounting. Unlike in our setting, in all these problems the rewards are static: there is no generation and accumulation of rewards, which is a key feature of our model. In the dynamic version of vehicle routing~\cite{BulloFPSS11} and the dynamic traveling repairman problem~\cite{BertsimasR91}, tasks are dynamically introduced and the objective is to minimize the expected task waiting time. In contrast, we focus on limit-average objectives, which are a classical way to combine rewards over infinite system runs. Patrolling~\cite{BrazdilHKRA15} is another graph optimization problem, motivated by operational security. The typical goal in patrolling is to synthesize a strategy for a defender against a single attack at an undetermined time and location; this setting is thus incomparable to ours. A single-robot multiple-intruders patrolling setting that is close to ours is described in~\cite{HoshinoU16}, but there again the objective is merely to detect whether there is a visitor/intruder at a given room. Thus, the patrolling environment in~\cite{HoshinoU16} is described by the probability of detecting a visitor for each location. In contrast, our model can capture \emph{counting patrolling problems}, where the robot is required not only to detect the presence of visitors but to register/count as many of them as possible.
Another related problem is the information gathering problem~\cite{Stranders13}. The key difference between the information gathering setting and ours is that~\cite{Stranders13} assumes that an observation is more valuable when few observations have been made before it. This restriction on the reward function is \emph{not} present in our model, since the reward value collected when visiting node $v$ at time $t$ (making observation $(v,t)$, in their terms) only depends on the last time when $v$ was previously visited, and not on the rest of the path (the other observations made, in their terms). Average-energy games~\cite{ChatterjeeP15, Bouyer2017} are a class of games on finite graphs in which the limit-average objective is defined by a double summation. The setting discussed in~\cite{ChatterjeeP15, Bouyer2017} considers static edge weights and no discounting. Moreover, the inner sum in an average-energy objective is over the whole prefix so far, while in our setting the inner sum spans from the last to the current visit of the current node, which is a crucial difference between these two settings. Finally, there is a rich body of work on multi-robot routing~\cite{WeiHJ16a,AmadorOZ14,MelvinKKTO07,EkiciKK09,EkiciR13} which is closely related to our setting. However, the approaches developed there are limited to static tasks with fixed or linearly decreasing rewards. The main focus in the multi-robot setting is the task allocation and coordination between robots, which is a dimension orthogonal to the aggregate reward collection problem which we study. Markov decision processes (MDP) \cite{Puterman94} seem superficially close to our model. In an MDP, the rewards are determined statically as a function of the state and action. In contrast, the dynamic generation and accumulation of rewards in our model, especially the individual discounting of each generated reward, leads to algorithmic differences: for example, while MDPs admit memoryless strategies for long run average objectives, strategies require memory in our setting and there is no obvious reduction to, e.g., an exponentially larger MDP. We employed the reward structure of this article in \cite{HSCC2017} with the goal of synthesizing controllers for reward collecting Markov processes in continuous space. The work \cite{HSCC2017} focuses on the continuous dynamics of the underlying Markov process, using abstraction techniques \cite{SA13} to provide approximately optimal controllers with formal guarantees on the performance while maintaining the probabilistic nature of the process. In contrast, here the underlying dynamical model of the robot is a deterministic graph, and we study the computational complexity of the proposed algorithms thoroughly. \vspace*{-2mm} \subparagraph*{Contributions} We define a novel optimization problem for formalizing and solving reward collection in a metric space where stochastic rewards appear as well as disappear over time. \begin{itemize} \item We consider reward-collection problems in a novel model with \emph{dynamic generation} and \emph{accumulation} of rewards, where each reward \emph{can disappear with a given probability}. \item We study the value decision problem, the value computation problem, and the path computation problem over a finite horizon. We show that the value decision problem is NP-complete when the horizon is given in unary.
We describe a dynamic programming approach for computing the optimal value and an optimal path in exponential time. \item We study the value decision problem, the value computation problem, and the path computation problem over an infinite horizon. We show that for sufficiently large values of the disappearing factor $\delta$ the value decision problem is NP-hard. We provide an algorithm which, for any given $\epsilon > 0$, computes an $\epsilon$-optimal path in time exponential in the size of the original graph and logarithmic in $1/\epsilon$. We demonstrate that strategies (in the 1-player robot routing games) which generate infinite-horizon optimal paths can require memory. \end{itemize} \section{Problem Formulation}\label{sec:problem} \vspace*{-2mm} \subparagraph*{Preliminaries and notation} A finite directed graph $G = (V,E)$ consists of a finite set of nodes $V$ and a set of edges $E \subseteq V \times V$. A path $\pi = v_0,v_1,\ldots$ in $G$ is a finite or infinite sequence of nodes in $G$, such that $(v_i,v_{i+1}) \in E$ for each $i \geq 0$. We denote with $|\pi| = N$ the length (number of edges) of a finite path $\pi = v_0,v_1,\ldots,v_N$ and write $\pi[i]= v_i$ and $\pi[0\ldots N] = v_0,v_1,\ldots v_N$. For an infinite path $\pi$, we define $|\pi| = \infty.$ We also denote the cardinality of a finite set $U$ by $|U|$. We denote by $\mathbb N\!=\!\{0,1,\ldots\}$ and $\mathbb Z_+ \!= \!\{1,2,\ldots\}$ the sets of non-negative and positive integers respectively. We define $\mathbb Z[n,m] = \{n,n+1,\ldots,m\}$ for any $n,m\in\!\mathbb N,\,n\le m$. We denote with $\mathbb I(\cdot)$ the indicator function, which takes a Boolean-valued expression as its argument and returns $1$ if this expression evaluates to true and $0$ otherwise. \vspace*{-4mm} \subparagraph*{Problem setting} Fix a graph $G = (V,E)$. We consider a discrete-time setting where at each time step $t\in \mathbb{N}$, at each node $v\in V$ a reward process generates rewards according to some probability distribution. Once generated, each reward at a node decays according to a decaying function. A \emph{reward-collecting} robot starts out at some node $v_0\in V$ at time $t=0$, and traverses one edge in $E$ at each time step. Every time the robot arrives at a node $v\in V$, it collects the reward accumulated at $v$ since the last visit to $v$. Our goal is to compute the maximum expected reward that the robot can possibly collect, and to construct an optimal path for the robot in the graph, i.e., a path whose expected total reward is maximal. To formalize reward accumulation, we define a function $\Last_{\pi}$ which (for a path $\pi$) maps an index $t \leq |\pi|$ and a node $v \in V$ to the length of the path segment from the previous occurrence of $v$ in $\pi$ to position $t$, and to $t+1$ if $v$ does not occur in $\pi$ before time $t$: \begin{equation*} \Last_\pi(t,v) := \min\left(\{t+1\} \cup \left\{t-j \mid j\in\mathbb{N},\ j < t,\ \pi[j] = v\right\}\right). \end{equation*}
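As a concrete reading of this definition, here is a direct transcription (a minimal Python sketch; the function name is ours):
\begin{verbatim}
def last(pi, t, v):
    # Last_pi(t, v): length of the segment from the previous
    # occurrence of v (strictly before position t) to position t;
    # t + 1 if v does not occur in pi before time t.
    for j in range(t - 1, -1, -1):
        if pi[j] == v:
            return t - j
    return t + 1

# For the path a d a b c a d of the running example below:
pi = list("adabcad")
assert last(pi, 0, 'a') == 1
assert last(pi, 2, 'a') == 2
assert last(pi, 5, 'a') == 3
\end{verbatim}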
\vspace*{-4mm} \subparagraph*{Reward functions} Let $\xi : \Omega\times V\to \mathbb R$ be a family of random variables defined on a sample space $\Omega$ and indexed by the set of nodes $V$. Then $\xi(\cdot,v)$, $v\in V,$ is a measurable function from $\Omega$ to $\mathbb R$ that generates a random reward at node $v$ at any time step. Let $\pi$ be the path in $G$ traversed by the robot. At time $t$, the position of the robot is the node $\pi[t]$, and the robot collects the uncollected decayed reward generated at node $\pi[t]$ (since its last visit to $\pi[t]$) up till and including time $t$. Then, the robot traverses the edge $(\pi[t], \pi[t+1])$, and at time $t+1$ it collects the rewards at node $\pi[t+1]$. The uncollected reward at time $t$ at a node $v$ given a path $\pi$ traversed by the robot is defined by the random variable \begin{equation*} \acc_{\pi}(t, v)\ := \hspace{-0.1in}\sum_{j = 0}^{\Last_\pi(t,v)-1}\hspace{-0.1in}\gamma(v)^{j} \xi(w(t-j),v),\quad w(\cdot)\in\Omega. \end{equation*} The value $\gamma(v)$ in the above definition is a \emph{discounting factor} that models the probability that a reward at node $v$ survives for one more round; that is, the probability that a given reward instance at node $v$ disappears at any step is $1-\gamma(v)$. Note that the previous time a reward was collected at node $v$ was at time $t- \Last_\pi(t,v)$, the time node $v$ was last visited before $t$. Thus $\acc_{\pi}(t, v)$ corresponds to the rewards generated at node $v$ at times $t,\, t-1, \dots,\, t- \Last_\pi(t,v)+1$, which have decayed by factors of $\gamma(v)^0, \gamma(v)^1, \dots, \gamma(v)^{\Last_\pi(t,v)-1},$ respectively. When traversing a path $\pi$, the robot collects the accumulated reward $\acc_{\pi}(t, \pi[t])$ at time $t$ at node $\pi[t]$. We define the \emph{expected finite $N$-horizon sum reward} for a path $\pi$ as: \begin{equation*} r_{\ssum}^{(N)}(\pi) \ :=\ \mathbb E\left[\sum_{t=0}^N \acc_{\pi}(t, \pi[t])\right]. \end{equation*} Let $\lambda : V \to \mathbb{R}_{\geq 0}$ be a function that maps each node $v \in V$ to the \emph{expected value of the reward generated at node $v$} for each time step, $\lambda(v) = \mathbb E\left[\xi(\cdot,v)\right]$. We assume that the rewards generated at each node are independent of the agent's move. Thus, the function $\lambda$ will be sufficient for our study, since we have \begin{align} &r_{\ssum}^{(N)}(\pi) \ =\ \sum_{t=0}^N \Eacc_{\pi}(t, \pi[t]) \text{, where } \Eacc_{\pi}(t, v):=\hspace{-0.1in} \sum_{j = 0}^{\Last_\pi(t,v)-1}\hspace{-0.1in}\gamma(v)^{j} \lambda(v). \label{eq:rsum-orig} \end{align} For an infinite path $\pi$, the \emph{limit-average} expected reward is defined as \begin{equation} \label{eq:rav-orig} r_{\sav}(\pi) \ =\ \liminf_{N \to \infty}\frac{r_{\ssum}^{(N)}\left( \pi\right)}{N+1}. \end{equation} The finite and infinite-horizon \emph{reward values} for a node $v$ are defined as the best rewards over all paths originating in $v$: $R_{\ssum}^{(N)}(v) = \sup_{\pi}\left\{r_{\ssum}^{(N)}(\pi)\,|\,\pi[0]=v,|\pi| = N\right\}$ and $R_{\sav}(v) = \sup_{\pi}\left\{r_{\sav}(\pi)\,|\,\pi[0]=v,|\pi| = \infty\right\}$, respectively. The choice of limit-average in \eqref{eq:rav-orig} is due to the unbounded sum reward $r_{\ssum}^{(N)}(\pi)$ when $N$ goes to infinity. For a given path $\pi$, the sequence $r_{\ssum}^{(N)}\left( \pi\right)/(N\!+\!1)$ in \eqref{eq:rav-orig} may not converge. Thus we opt for the worst case limiting behavior of the sequence. Alternatively, one may select the best case limiting behavior $\limsup_{N \to \infty}$ in \eqref{eq:rav-orig} with no substantial change in the results of this paper. \vspace*{-4mm} \subparagraph*{Node-invariant functions $\lambda$ and $\gamma$ and definition of cost functions} In the case when the functions $\lambda$ and $\gamma$ are constant, we write $\lambda$ and $\gamma$ for the respective constants.
In this case, the expressions for $r_{\ssum}^{(N)}(\pi)$ and $r_{\sav}(\pi)$ can be simplified using the identity $1+\gamma + \gamma^2 + \dots + \gamma^{q-1} = \frac{1-\gamma^q}{1-\gamma}$ for $\gamma < 1$. Then we have \begin{equation} \label{eq:rsumNew} \begin{split} r_{\ssum}^{(N)}(\pi)& = \sum_{t=0}^N\sum_{j = 0}^{\Last_\pi(t,\pi[t])-1}\gamma^{j} \lambda = \lambda\cdot \sum_{t=0}^N \left(1 + \gamma +\ldots+\gamma^{\Last_\pi(t,\pi[t])-1}\right)\\ & = \lambda\cdot \sum_{t=0}^N \frac{1 - \gamma^{\Last_\pi(t,\pi[t])}}{1-\gamma} = \frac{(N+1)\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma}\sum_{t=0}^N \gamma^{\Last_\pi(t,\pi[t])}. \end{split} \end{equation} The expression $r_{\sav}(\pi)$ can be simplified as: \begin{equation} \label{eq:ravNew} r_{\sav}(\pi) = \liminf_{N \to \infty}\frac{1}{N+1}r_{\ssum}^{(N)}(\pi) = \frac{\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma}\limsup_{N \to \infty}\frac{1}{N+1} \sum_{t=0}^N \gamma^{\Last_\pi(t,\pi[t])}. \end{equation} For the special case $\gamma=1$ (i.e., when the rewards are not discounted), the expression for the finite-horizon reward is $ r_{\ssum}^{(N)}(\pi) = \lambda\sum_{t=0}^{N}\Last_\pi(t,\pi[t])$. We define \emph{cost functions} that map a path $\pi$ to a real valued finite- or infinite-horizon cost:\looseness=-1 \begin{equation} \label{eq:cost_functions} c_{\ssum}^{(N)}(\pi) := \sum_{t=0}^{N} \gamma^{\Last_\pi(t,\pi[t])}\quad \text{ and }\quad c_{\sav}(\pi) := \limsup\limits_{N \to \infty}\frac{ c_{\ssum}^{(N)}(\pi)}{N+1}. \end{equation} From Equations~\eqref{eq:rsumNew} and~\eqref{eq:ravNew}, the computation of optimal paths for the reward functions $r_{\ssum}^{(N)}$ and $r_{\sav}$ corresponds to computing paths that minimize the cost functions $c_{\ssum}^{(N)}(\pi)$ and $c_{\sav}(\pi)$, respectively. Analogously to $R_{\ssum}^{(N)}(v)$ and $R_{\sav}(v)$, the infima of the cost functions in \eqref{eq:cost_functions} over paths are denoted by $C_{\ssum}^{(N)}(v)$ and $C_{\sav}(v)$ respectively. \begin{wrapfigure}{l!}{0.4\textwidth} \centering \begin{tikzpicture}[node distance=2 cm,auto,>=latex',line join=bevel,transform shape] \node[circle,draw] at (0,0) (a) {$a$}; \node [left of=a,circle,draw] (d) {$d$}; \node [above right of =a,yshift=-.7cm,circle,draw] (b) {$b$}; \node [below right of =a,yshift=.7cm,circle,draw] (c) {$c$}; \draw [->] (a) edge[bend left] (b); \draw [->] (b) edge[bend left] (c); \draw [->] (c) edge[bend left] (a); \draw [->] (a) edge[bend right] (d); \draw [->] (d) edge[bend right] (a); \end{tikzpicture} \caption{A graph $G_{\mathsf e} = (V_{\mathsf e},E_{\mathsf e})$ with two simple cycles sharing a single node.} \label{fig:graph1} \end{wrapfigure} \begin{example} \label{ex:4node-reward} Consider the graph $G_{\mathsf e} = (V_{\mathsf e},E_{\mathsf e})$ in Figure \ref{fig:graph1} with $V_{\mathsf e} = \{a,b,c,d\}$, which we will use as a running example throughout the paper. The functions $\lambda$ and $\gamma$ are constant. Consider the finite path $\pi_1 = adabcad$. For the occurrences of node $a$ in $\pi_1$ we have $\Last_{\pi_1}(0,a) = 1$, $\Last_{\pi_1}(2,a) = 2$, $\Last_{\pi_1}(5,a) = 3$, and similarly for the other nodes in $\pi_1$. The reward for $\pi_1$ as a function of $\lambda$ and $\gamma$ is $r_{\ssum}^{(6)}(\pi_1) = \frac{7\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma}(\gamma + \gamma^2 + \gamma^2 + \gamma^4 + \gamma^5 + \gamma^3 + \gamma^5)$ for $\gamma<1$ and $r_{\ssum}^{(6)}(\pi_1) = 22\lambda$ for $\gamma=1$.
For the infinite path $\pi_2 = (abc)^\omega$ we have $\Last_{\pi_2}(0,a) = 1$, $\Last_{\pi_2}(1,b) = 2$, $\Last_{\pi_2}(2,c) = 3$ and the value of $\Last$ is $3$ in all other cases. Thus we have $r_{\sav}(\pi_2) = \frac{\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma}\gamma^3$ for $\gamma<1$ and $r_{\sav}(\pi_2) = 3\lambda$ for $\gamma=1$. Similarly, for $\pi_3 = (abcad)^\omega$ we have $r_{\sav}(\pi_3) = \frac{\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma} \cdot \frac{(\gamma^2+\gamma^3+3\gamma^5)}{5}$ for $\gamma<1$ and $r_{\sav}(\pi_3) = 4\lambda$ for $\gamma=1$. \end{example} \subparagraph*{Problem statements} We investigate optimization and decision problems for finite and infinite-horizon robot routing. The \emph{value computation problems} ask for the computation of $R_{\ssum}^{(N)}(v)$ and $R_{\sav}(v)$. The corresponding decision problems ask whether the respective quantity is greater than or equal to a given threshold $R\in\mathbb R$.\looseness=-1 \begin{definition}[Value Decision Problems] \label{def:ValueDecision} Given a finite directed graph $G = (V,E)$, an expected reward function $\lambda:V\rightarrow\mathbb R_{\ge 0}$, a discounting function $\gamma:V\rightarrow (0,1]$, an initial node $v_0 \in V$ and a threshold value $R \in \mathbb{R}$, \begin{compactitem} \item The \emph{finite horizon value decision problem} is to decide, given $N$, if $R_{\ssum}^{(N)}(v_0) \geq R$. \item The \emph{infinite horizon value decision problem} is to decide if $R_{\sav}(v_0) \geq R$. \end{compactitem} \end{definition} For a finite directed graph $G = (V,E)$, expected reward and discounting functions $\lambda:V\rightarrow\mathbb R_{\ge 0}$ and $\gamma:V\rightarrow (0,1]$ and $v_0 \in V$, a finite path $\pi$ is said to be an \emph{optimal path} for time-horizon $N$ if (a)~$\pi[0] = v_0$ and $|\pi|=N$, and (b)~for every path $\pi'$ in $G$ with $\pi'[0] = v_0$ and $|\pi'|=N$ it holds that $r_{\ssum}^{(N)}(\pi) \geq r_{\ssum}^{(N)}(\pi')$. Similarly, an infinite path $\pi$ is said to be \emph{optimal for the infinite horizon} if $\pi[0] = v_0$ and for every infinite path $\pi'$ with $\pi'[0] = v_0$ in $G$ it holds that $r_{\sav}(\pi) \geq r_{\sav}(\pi')$. We can also define corresponding \emph{threshold paths}: given a value $R$, a path $\pi$ is said to be threshold $R$-optimal if $r_{\ssum}^{(N)}(\pi) \geq R$ or $r_{\sav}(\pi) \geq R$, respectively. An \emph{$\epsilon$-optimal} path is one which is threshold $\left(R_{\ssum}^{(N)}(v_0) -\epsilon\right)$-optimal or threshold $\left(R_{\sav}(v_0) - \epsilon\right)$-optimal (for the finite or infinite horizon, respectively). \begin{example} \label{ex:4node-optimal} Consider again the graph $G_{\mathsf e}$ shown in Figure \ref{fig:graph1}. Examining the expressions computed in Example~\ref{ex:4node-reward}, we have that $r_{\sav}(\pi_2) > r_{\sav}(\pi_3)$ for $\gamma=0.1$ and $r_{\sav}(\pi_3) > r_{\sav}(\pi_2)$ for $\gamma=0.9$. Thus, in general, the optimal value depends on $\gamma$.
Due to the structure of the set of infinite paths in $G_{\mathsf e}$ we can analytically compute the optimal value $R_{\sav}(v)$ for each $v \in V_{\mathsf e}$ as a function of $\gamma$, together with a corresponding optimal path (the proof is in the appendix): \begin{itemize} \item if $\gamma \in [0,a_1]$, then $R_{\sav}(v) = \frac{\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma}\gamma^3$ and the path $(abc)^\omega$ is optimal; \item if $\gamma \in [a_1,a_2]$, then $R_{\sav}(v) = \frac{\lambda}{1-\gamma} - \frac{\lambda}{1-\gamma}\cdot\frac{(\gamma^2+4\gamma^3+2\gamma^5+\gamma^8)}{8}$ and $(abcabcad)^\omega$ is optimal; \item if $\gamma \in [a_2,1]$, then $R_{\sav}(v)\! = \! \frac{\lambda}{1-\gamma} \!- \!\frac{\lambda}{1-\gamma}\! \cdot\! \frac{(\gamma^2+\gamma^3+3\gamma^5\!)}{5}$ and the path $(abcad)^\omega$ is optimal. \end{itemize} The constants $a_1 \approx 0.2587$ and $a_2 \approx 0.2738$ are respectively the unique real roots of the polynomials $\gamma^6+2\gamma^3-4\gamma+1$ and $5\gamma^6-14\gamma^3+12\gamma-3$ in the interval $(0,1)$. Note that for $\gamma=1$ we have $R_{\sav}(v) = 4\lambda$, which is achieved by $(abcad)^\omega$. The path $(abc)(ad)(abc)^2(ad)^2\hspace{-0.05in}\ldots(abc)^n(ad)^n\hspace{-0.05in}\ldots$, which is not ultimately periodic, also achieves the optimal reward. \end{example}
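The crossover constants can be checked numerically. The following sketch (Python; written here only as an illustration, not part of the formal development) compares the limit-average costs of the three candidate loops and locates $a_1$ and $a_2$ by bisection:
\begin{verbatim}
# Limit-average costs of the three candidate loops (cf. the example):
# (abc):      g^3
# (abcabcad): (g^2 + 4g^3 + 2g^5 + g^8)/8
# (abcad):    (g^2 + g^3 + 3g^5)/5
# Rewards decrease in these costs, so crossovers of the costs are
# exactly the crossovers of the rewards.
c1 = lambda g: g**3
c2 = lambda g: (g**2 + 4*g**3 + 2*g**5 + g**8) / 8
c3 = lambda g: (g**2 + g**3 + 3*g**5) / 5

def bisect(f, lo, hi, it=60):
    # Root of f on [lo, hi], assuming a single sign change there.
    for _ in range(it):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(bisect(lambda g: c1(g) - c2(g), 0.2, 0.3))  # ~0.2587 (= a_1)
print(bisect(lambda g: c2(g) - c3(g), 0.2, 0.3))  # ~0.2738 (= a_2)
\end{verbatim}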
\vspace*{-7mm} \subparagraph*{Paths as strategies} We often refer to infinite paths as resulting from strategies (of the collecting agent). A \emph{strategy} $\sigma$ in $G$ is a function that maps finite paths $\pi[0\ldots m]$ to nodes such that if $\sigma(\pi)$ is defined then $(\pi[m],\sigma(\pi)) \in E$. Given an initial node $v_0$, the strategy $\sigma$ generates a unique infinite path $\pi$, denoted as $\outcome(v_0, \sigma)$. Thus, every infinite path $\pi = v_0,v_1,\ldots$ defines a unique strategy $\sigma_\pi$ where $\sigma_\pi(\pi[0\ldots i]) = v_{i+1}$, and $\sigma_\pi(\epsilon) = v_0$, and $\sigma_\pi$ is undefined otherwise. Clearly, $\outcome(v_0, \sigma_\pi) = \pi$. We say a strategy $\sigma$ is optimal for a path problem if the path $\outcome(v_0, \sigma)$ is optimal. A strategy $\sigma$ is \emph{memoryless} if for every two paths $\pi'[0\ldots m'], \pi''[0\ldots m'']$ for which $\pi'[m']=\pi''[m'']$, it holds that $\sigma(\pi') = \sigma(\pi'')$. We say that memoryless strategies suffice for the optimal path problem if there always exists a memoryless strategy $\sigma$ such that $\outcome(v_0, \sigma)$ is an optimal path. \section{Finite Horizon Rewards: Computing $R_{\ssum}^{(N)}(v)$} \label{section:FiniteHorizon} In this section we consider the finite-horizon problems associated with our model. The following theorem summarizes the main results. \begin{theorem} \label{thm:complexity-finite} Given $G = (V,\!E)$, expected reward and discounting functions, node $v\!\in\!V$, and horizon $N\!\in\! \mathbb N$: \begin{compactenum} \item The finite-horizon value decision problem is NP-complete if $N$ is in unary. \item The value $R_{\ssum}^{(N)}(v)$ for $v\!\in\! V$ is computable in exponential time even if $N$ is in binary. \end{compactenum} \end{theorem} Analogous results hold for the related reward problem where in addition to the initial node $v$, we are also given a destination node $v_f$, and the objective is to go from $v$ to $v_f$ in at most $N$ steps while maximizing the reward. The finite-horizon value problem is NP-hard by reduction from the Hamiltonian path problem (the proof is in the appendix), even in the case of node-invariant $\lambda$ and $\gamma$. Membership in NP in case $N$ is in unary follows from the fact that we can guess a path of length $N$ and check that the reward for that path is at least the desired threshold value. To prove the second part of the theorem, we construct a finite \emph{augmented weighted graph}. For simplicity, we give the proof for node-invariant $\lambda$ and $\gamma$, working with the cost functions $c_{\ssum}$ and $c_{\sav}$. The augmented graph construction in the general case is a straightforward generalization obtained by changing the weights of the nodes, and the dynamic programming algorithm used for computing the optimal cost values is easily modified to compute the corresponding reward values instead. For $\gamma < 1$ the objective is to minimize $c_{\ssum}^{(N)}(\pi) = \sum_{t=0}^N \gamma^{\Last_\pi(t,\pi[t])}$ and for $\gamma = 1$ the objective is to maximize $r_{\ssum}^{(N)}(\pi) = \lambda\sum_{t=0}^{N}\Last_\pi(t,\pi[t])$ over paths $\pi$. \vspace*{-4mm} \subparagraph*{Augmented weighted graph} Given a finite directed graph $G = (V,E)$ we define the \emph{augmented weighted graph} $\widetilde G = (\widetilde V, \widetilde E)$ which ``encodes'' the values $\Last_\pi(t,v) $ for the paths in $G$ explicitly in the augmented graph node. We can assume w.l.o.g.\ that $V= \set{1,2,\dots, \abs{V}}$. \begin{compactitem} \item The set of nodes $\widetilde V $ is $ V \times \mathbb Z_+^{\abs{V}}$ (the set $\widetilde V $ is infinite). A node $(v, b_1, b_2, \dots, b_{\abs{V}}) \in \widetilde V$ represents the fact that the current node is $v$, and that for each node $u\in V$ the last visit to $u$ (before the current time) was $b_u$ time units before the current time. \item The weight of a node $(v, b_1, b_2, \dots, b_{\abs{V}}) \in \widetilde V$ is $\gamma^{b_v}$. \item The set of edges $\widetilde E $ consists of the edges $(v, b_1, b_2, \dots, b_{\abs{V}}) \ \rightarrow\ (v', b'_1, b'_2, \dots, b'_{\abs{V}})$ such that $(v,v') \in E$, $b'_v = 1$, and $b'_u= b_u+1$ for all $u\neq v$. \end{compactitem} Let $\pi$ be a path in $G$. In the graph $\widetilde G$ there exists a unique path $\widetilde \pi$ that corresponds to $\pi$: \begin{equation} \label{eq:aug_path} \widetilde \pi = (\pi[0], 1, 1, \dots, 1)\, ,\, (\pi[1], b^1_1, b^1_2, \dots, b^1_{\abs{V}})\, ,\, (\pi[2], b^2_1, b^2_2, \dots, b^2_{\abs{V}})\, , \dots \end{equation} starting from the node $(\pi[0], 1, 1, \dots, 1)$ such that for all $t$ and for all $v\in V$, we have $\Last_\pi(t,v) = b^t_{v}$. Dually, for each path $\widetilde \pi $ in $\widetilde G$ starting from $(v_0, 1, 1, \dots, 1)$, there exists a unique path $\pi$ in $G$ from the node $v_0$ such that $\Last_\pi(t,v) = b^t_{v}$ for all $t$ and $v$. For a path $\widetilde \pi$ in the form of \eqref{eq:aug_path} let \begin{equation*} {\widetilde c}_{\ssum}^{(N)} (\widetilde \pi) := \sum_{t=0}^N \gamma^{b^t_{v_t}}\quad \text{ and }\quad {\widetilde c}_{\sav}(\widetilde \pi) := \limsup_{N \to \infty}\frac{1}{N+1} \sum_{t=0}^N \gamma^{b^t_{v_t}}. \text{ Observe that:} \end{equation*} \begin{compactitem} \item ${\widetilde c}_{\ssum}^{(N)} (\widetilde \pi) $ is the sum of the weights associated with the first $N+1$ nodes of $\widetilde \pi$. \item ${\widetilde c}_{\sav}(\widetilde \pi) $ is the limit-average of the weights associated with the nodes of $\widetilde \pi$. \end{compactitem} Thus, ${\widetilde c}_{\ssum}^{(N)}$ and ${\widetilde c}_{\sav}$ define the classical total finite sum (shortest paths) and limit average objectives on weighted (infinite) graphs~\cite{ZwickP96}. Additionally, $ {\widetilde c}_{\ssum}^{(N)} (\widetilde \pi) \ = c_{\ssum}^{(N)} ( \pi)$, and $ {\widetilde c}_{\sav} (\widetilde \pi) \ = c_{\sav}( \pi)$, where $\pi$ is the path in $G$ corresponding to the path $\widetilde \pi$. Now, define ${\widetilde C}_{\ssum}^{(N)} \left((v_0, 1,1,\dots, 1) \right)$ as the infimum of ${\widetilde c}_{\ssum}^{(N)}(\widetilde \pi)$ over all paths $\widetilde \pi$ with $\widetilde{\pi}[0] = (v_0, 1,1,\dots, 1)$, and similarly for $ {\widetilde C}_{\sav}\left((v_0, 1,1,\dots, 1) \right)$. Then it is easy to see that $ C_{\ssum}^{(N)} ( v_0) ={\widetilde C}_{\ssum}^{(N)} \left((v_0, 1,1,\dots, 1) \right)$ and $C_{\sav}( v_0) = {\widetilde C}_{\sav} \left((v_0, 1,1,\dots, 1) \right)$. Thus, we can reduce the optimal path and value problems for $G$ to standard objectives in $\widetilde G$. The major difficulty is that $\widetilde G$ is infinite. However, note that only the first $N+1$ nodes of $\widetilde \pi$ are relevant for the computation of $ {\widetilde c}_{\ssum}^{(N)} (\widetilde \pi)$. Thus, the value of $ {\widetilde C}_{\ssum}^{(N)} ( (v_0, 1, \ldots, 1)) $ can be computed on a \emph{finite} subgraph of $\widetilde G$, obtained by considering only the finite subset of nodes $V\!\times \! \mathbb Z[1,N+1]^{\abs{V}} \ \subseteq \widetilde V$. For $\gamma < 1$, we obtain the value ${\widetilde C}_{\ssum}^{(N)} \left((v_0, 1, \ldots, 1)\right)$ by a standard dynamic programming algorithm which computes the shortest path of length $N$ on this finite subgraph starting from the node $(v_0, 1, 1,\dots, 1)$ (and keeping track of the number of steps). For $\gamma = 1$, where the objective is to maximize $r_{\ssum}^{(N)}(\pi) = \lambda\sum_{t=0}^{N}\Last_\pi(t,\pi[t])$ over paths $\pi$, we proceed analogously. Note that the subgraph used for the dynamic programming computations is of exponential size in terms of the size of $G$ and the description of $N$. This gives the desired result in Theorem \ref{thm:complexity-finite}.
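To make the construction concrete, the following sketch (Python; the graph encoding, names, and memoization scheme are ours, offered only as an illustration) evaluates $C_{\ssum}^{(N)}(v_0)$ by exactly this dynamic program. The clock tuple \texttt{b} plays the role of the augmentation $(b_1,\dots,b_{\abs{V}})$, and the memoization table corresponds to the nodes of the finite subgraph $V\times\mathbb Z[1,N+1]^{\abs{V}}$ paired with the number of remaining steps.
\begin{verbatim}
from functools import lru_cache

def c_sum_opt(G, v0, N, gamma):
    # C_sum^{(N)}(v0): minimize the sum of node weights gamma^{b_v}
    # over paths of length N in the augmented graph, starting from
    # (v0, 1, ..., 1).  G: dict node -> list of successors; every
    # node is assumed to have at least one successor.
    V = sorted(G)
    idx = {u: i for i, u in enumerate(V)}

    @lru_cache(maxsize=None)
    def best(v, b, k):
        # b: clocks Last(t, u) for all u; k: remaining steps.
        cost = gamma ** b[idx[v]]
        if k == 0:
            return cost
        # Moving from v resets v's clock to 1, increments the rest.
        step = min(
            best(w, tuple(1 if u == v else b[idx[u]] + 1 for u in V),
                 k - 1)
            for w in G[v])
        return cost + step

    return best(v0, tuple(1 for _ in V), N)

# Running-example graph of Figure 1:
G = {'a': ['b', 'd'], 'b': ['c'], 'c': ['a'], 'd': ['a']}
print(c_sum_opt(G, 'a', 6, 0.5))
\end{verbatim}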
\section{Infinite Horizon Rewards: Computing $R_{\sav}(v)$} \label{section:InfiniteHorizon} Since we consider finite graphs, every infinite path eventually stays in some strongly connected component (SCC). Furthermore, the value of the reward function $r_{\sav}(\pi)$ does not change if we alter/remove a finite prefix of the path $\pi$. Thus, it suffices to restrict our attention to the SCCs of the graph: the problem of finding an optimal path from a node $v \in V$ reduces to finding the SCC that gives the highest reward among the SCCs reachable from node $v$. Therefore, we assume that the graph is strongly connected. \subsection{Hardness of Exact $R_{\sav}(v)$ Value Computation} Since it is sufficient for the hardness results, we consider node-invariant $\lambda$ and $\gamma$. \vspace*{-4mm} \subparagraph*{Insufficiency of memoryless strategies.} Before we turn to the computational hardness of the value decision problem, we look at the \emph{strategy complexity} of the optimal path problem and show that optimal strategies need memory. \begin{proposition} Memoryless strategies do not suffice for the infinite horizon problem.\looseness=-1 \end{proposition} \begin{proof} Consider Example~\ref{ex:4node-optimal}. A memoryless strategy results in paths which cycle exclusively either in the left cycle or in the right cycle (since from node $a$ it prescribes a move to either always $b$ or always $d$). As shown in Example~\ref{ex:4node-optimal}, the optimal path for $\gamma\!\ge\! a_1$ needs to visit both cycles.
Thus, memoryless strategies do not suffice for this example. \end{proof} For $\omega$-regular objectives, strategies based on \emph{latest visitation records} \cite{GH82,Thomas95}, which depend only on the \emph{order} of the last node visits (\emph{i.e.}\xspace, for all node pairs $v_1\neq v_2 \in V$, whether the last visit of $v_1$ was before that of $v_2$ or vice versa), are sufficient. However, we can show that such strategies do not suffice either. To see this, recall the graph in Figure~\ref{fig:graph1} for which the optimal path for $\gamma=0.26$ is $(abcabcad)^\omega$. Upon visiting node $a$ the corresponding strategy chooses one of the nodes $b$ or $d$ depending on the \emph{number} of visits to $b$ since the last occurrence of $d$. On the other hand, a strategy based \emph{only on the order of last visits} cannot count the number of visits to $b$ and thus results in a path that ends up in one of the cycles $(abc)^\omega$, $(ad)^\omega$, or $(abcad)^\omega$, none of which is optimal for this $\gamma$. The proof is given in the appendix. It remains open whether finite-memory strategies are sufficient for the infinite-horizon optimal path problem. \vspace*{-4mm} \subparagraph*{NP-Hardness of the value decision problem} To show NP-hardness of the infinite-horizon value decision problem, we first give bounds on $R_{\sav}(v_0)$. The following lemma, proven in the appendix, establishes these bounds. \begin{lemma} \label{lemma:Hamilt} For any graph $G = (V,E)$ and any node $v_0\in V$, $R_{\sav}(v_0)$ is bounded as \begin{equation} \label{eq:bounds} \lambda\frac{1-\gamma^p}{1-\gamma}\le \,R_{\sav}(v_0)\,\le \lambda\frac{1-\gamma^{n_{\mathsf v}}}{1-\gamma}, \end{equation} where $n_{\mathsf v} = |V|$ and $p$ is the length of the longest simple cycle in $G$. \end{lemma} \begin{corollary} \label{cor:Hamilt} If the graph $G = (V,E)$ contains a Hamiltonian cycle, any path $\pi = (v_1v_2\ldots v_{|V|})^\omega$, with $v_1v_2\ldots v_{|V|}v_1$ being a Hamiltonian cycle, is optimal and the optimal value $R_{\sav}(v_0)$ is exactly the upper bound in \eqref{eq:bounds}. \end{corollary} The following lemma establishes a relationship between the value of optimal paths and the existence of a Hamiltonian cycle in the graph, and is useful for providing a lower bound on the computational complexity of the value decision problem. \begin{lemma} \label{lemma:existHamilt} If the upper bound in \eqref{eq:bounds} is achieved with a path $\pi$ for some $\gamma< 1/|V|$, then the graph contains a Hamiltonian cycle $\rho$ that occurs in $\pi$ infinitely often. \end{lemma} \begin{proof} The proof is by contradiction. Suppose $\pi$ does not visit any Hamiltonian cycle infinitely often. Then it visits each such cycle at most a finite number of times. Since the total number of Hamiltonian cycles is finite, and since removing a finite prefix of $\pi$ changes neither $c_{\sav}(\pi)$ nor the achieved bound, we can assume without loss of generality that the path does not visit any Hamiltonian cycle at all. We have for $n_{\mathsf v}=|V|$ \begin{align*} c_{\ssum}^{(N)}(\pi) &\ge \sum_{t=0}^N \gamma^{\Last_\pi(t,\pi[t])}\mathbb I(\Last_\pi(t,\pi[t])<n_{\mathsf v}) \ge \gamma^{n_{\mathsf v}-1}\sum_{t=0}^N\mathbb I(\Last_\pi(t,\pi[t])<n_{\mathsf v}). \end{align*} Now consider $\pi_{n,n_{\mathsf v}} = \pi[n(n+1)\ldots(n+n_{\mathsf v})]$, a finite sub-path of length $n_{\mathsf v}$. At least one node is repeated in $\pi_{n,n_{\mathsf v}}$, since it contains $n_{\mathsf v}+1$ positions while the graph has only $n_{\mathsf v}$ distinct nodes. Note that if $\pi[n]=\pi[n+n_{\mathsf v}]$, there must be another repetition, since by assumption the path contains no Hamiltonian cycle.
In either case, there is an $i_n\in\mathbb Z[n,n+n_{\mathsf v}]$ such that $\Last_\pi(i_n,\pi[i_n])<n_{\mathsf v}$. \begin{align*} & c_{\ssum}^{(N)}(\pi) \ge \gamma^{n_{\mathsf v}-1}\sum_{t=0}^N\mathbb I(\Last_\pi(t,\pi[t])<n_{\mathsf v})\ge \gamma^{n_{\mathsf v}-1}\left\lfloor\frac{N}{n_{\mathsf v}}\right\rfloor\\ & \Rightarrow c_{\sav}(\pi)\ge \gamma^{n_{\mathsf v}-1}\limsup_{N\rightarrow\infty}\frac{1}{N+1}\left\lfloor\frac{N}{n_{\mathsf v}}\right\rfloor = \gamma^{n_{\mathsf v}-1}\frac{1}{n_{\mathsf v}} \Rightarrow \gamma^{n_{\mathsf v}} \ge \left(\gamma^{n_{\mathsf v}-1}\right)/n_{\mathsf v} \ \Rightarrow\ \gamma \ge 1/n_{\mathsf v}, \end{align*} where the second implication uses that $\pi$ achieves the upper bound in \eqref{eq:bounds}, \emph{i.e.}\xspace, $c_{\sav}(\pi) = \gamma^{n_{\mathsf v}}$. This contradicts the assumption that $\gamma< 1/n_{\mathsf v}$. Hence a necessary condition for $c_{\sav}(\pi) \!= \!\gamma^{n_{\mathsf v}}$ with some $\gamma\!<\!1/n_{\mathsf v}$ is the existence of a Hamiltonian cycle. \end{proof} \noindent\emph{Remark.} Following the same reasoning as in the above proof, it is possible to improve the upper bound in \eqref{eq:bounds} to $R_{\sav}(v_0)\le \lambda\left(1-\gamma^p/n_{\mathsf v}\right)/(1-\gamma)$ for small values of $\gamma$, where $p$ is the length of the longest simple cycle of the graph. \begin{theorem}\label{thm:complexity-infinite} The infinite horizon value decision problem is NP-hard for $\gamma\! < \!1/|V|$. \end{theorem} \begin{proof} We reduce the Hamiltonian cycle problem to the infinite horizon optimal path problem. Given a graph $G = (V,E)$, we fix some $\lambda$ and $\gamma < 1/|V|$. We show that $G$ is Hamiltonian iff $R_{\sav}(v_0) \geq \lambda\left(1-\gamma^{|V|}\right)/(1-\gamma)$. If $G$ has a Hamiltonian cycle $v_1v_2\ldots v_{|V|}v_1$, then the infinite path $\pi = (v_1v_2\ldots v_{|V|})^\omega$ has reward $r_{\sav}(\pi) =\lambda\left(1-\gamma^{|V|}\right)/(1-\gamma)$, for any choice of $\gamma$. For the other direction, applying Lemma~\ref{lemma:existHamilt} with $\gamma < 1/|V|$ implies that $G$ is Hamiltonian. \end{proof} \vspace*{-4mm} \subparagraph*{Non-discounted rewards ($\gamma = 1$) and node-invariant function $\lambda$} In contrast to the finite-horizon non-discounted case, the infinite-horizon optimal path and value problems for $\gamma=1$ can be solved in polynomial time. To see this, note that the reward expression $r_{\ssum}^{(N)}(\pi)$ can be written as $r_{\ssum}^{(N)}(\pi) = \lambda\sum_{v\in V}y(v,\pi,N)$, where $y(v,\pi,N)$ is defined as $y(v,\pi,N) = 1+\max U,\,\,\text{for } U = \{j\in \mathbb{N}\mid j\le N, \pi[j]=v\}\cup\{-1\}$. Denote by ${\mathsf{Inf}}(\pi)$ the set of nodes visited infinitely often in $\pi$; nodes visited only finitely often contribute an eventually constant $y(v,\pi,N)$, which vanishes in the limit average. Since the values $y(v,\pi,N)$ of distinct visited nodes are pairwise distinct and at most $N+1$, we can bound the reward by\looseness=-1 \hspace{-3mm}\begin{align*} r_{\sav}(\pi)&\le \lim_{N\rightarrow\infty}\frac{\lambda \sum_{i=1}^{|{\mathsf{Inf}}(\pi)|}(N+1-i+1)}{N+1} = \lim_{N\rightarrow\infty}\frac{\lambda |{\mathsf{Inf}}(\pi)|(2N-|{\mathsf{Inf}}(\pi)|+3)}{2(N+1)} = \lambda|{\mathsf{Inf}}(\pi)|. \end{align*} This indicates that the maximum reward is bounded by $\lambda$ times the maximal size of a reachable SCC in the graph $G$. This upper bound is also achievable: we can construct an optimal path by finding a maximal-size SCC reachable from the initial node and a (not necessarily simple) cycle $v_1\ldots v_n v_1$ that visits all the nodes in this SCC. Then, a subset of optimal paths contains the paths of the form $\pi_0\cdot(v_1\ldots v_n)^\omega$, where $\pi_0$ is any finite path that reaches $v_1$. This procedure runs in time polynomial in the size of $G$.
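A minimal sketch of this polynomial-time procedure (Python; a Kosaraju-style SCC computation with our own naming and encoding, assuming the largest reachable SCC is nontrivial):
\begin{verbatim}
def sccs(G):
    # Kosaraju: DFS finishing order, then DFS on the reversed graph
    # in reverse finishing order labels the components.
    order, seen = [], set()
    def dfs(v):
        seen.add(v)
        for w in G[v]:
            if w not in seen:
                dfs(w)
        order.append(v)
    for v in G:
        if v not in seen:
            dfs(v)
    R = {v: [] for v in G}
    for v in G:
        for w in G[v]:
            R[w].append(v)
    comp = {}
    for v in reversed(order):
        if v not in comp:
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp[u] = v
                    stack.extend(x for x in R[u] if x not in comp)
    return comp  # node -> component representative

def value_gamma_one(G, v0, lam):
    # gamma = 1: optimal limit-average reward is lam times the size
    # of the largest SCC reachable from v0.
    comp, reach, stack = sccs(G), {v0}, [v0]
    while stack:
        for w in G[stack.pop()]:
            if w not in reach:
                reach.add(w)
                stack.append(w)
    sizes = {}
    for v in reach:
        sizes[comp[v]] = sizes.get(comp[v], 0) + 1
    return lam * max(sizes.values())

G = {'a': ['b', 'd'], 'b': ['c'], 'c': ['a'], 'd': ['a']}
print(value_gamma_one(G, 'a', 1.0))  # 4.0, matching the example below
\end{verbatim}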
\begin{example} Consider the graph in Figure \ref{fig:graph1}. The optimal reward for the infinite-horizon non-discounted case is $4\lambda$, achievable by the path $\pi = (abcad)^\omega$. Another path, which is not ultimately periodic but achieves the same reward, is $\pi' = (abc)(ad)(abc)^2(ad)^2\ldots(abc)^n(ad)^n\ldots$. \end{example} Note that for the case $\gamma=1$ there always exists an ultimately periodic optimal path; such a path is generated by a finite-memory strategy. \subsection{Approximate Computation of $R_{\sav}(v)$} In the previous section we discussed how to solve the infinite-horizon value and path computation problems for the non-discounted case. Now we show how the infinite-horizon path and value computation problems for $\gamma < 1$ can be effectively approximated. We first define functions that over- and under-approximate $C_{\sav}(v)$ (and thus also $R_{\sav}(v)$) and establish bounds on the error of these approximations. Given an integer $K \in \mathbb{N}$, approximately optimal paths and an associated interval approximating $C_{\sav}(v)$ can be computed using a finite augmented graph $\widetilde G_K$ based on the augmented graph $\widetilde G$ of Section~\ref{section:FiniteHorizon}. Intuitively, $\widetilde G_K$ is obtained from $\widetilde G$ by pruning nodes that have a component greater than $K$ in their augmentation. By increasing the value of $K$, the approximation error can be made arbitrarily small. \looseness=-1 We describe the approximation algorithm for node-invariant $\lambda$ and $\gamma$. The results generalize directly to the case when $\lambda$ and $\gamma$ are not node-invariant, by choosing $K$ large enough to satisfy the condition that bounds the approximation error for each $\lambda(v)$ and $\gamma(v)$. \vspace*{-4mm} \subparagraph*{Approximate cost functions} Consider the following functions from $V^*\!\!\times\!\mathbb N$ to $\mathbb R_{\geq 0}$: \begin{align*} \bar{c}_{\mathsf{sum}}^{(N)}(\pi,K) & := \sum_{t=0}^N \gamma^{\min\{K,\Last_\pi(t,\pi[t])\}} \quad \text{ and }\\ \underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K) & := \sum_{t=0}^N \gamma^{\Last_\pi(t,\pi[t])}\mathbb I(\Last_\pi(t,\pi[t])\le K). \end{align*} Informally, for $\bar{c}_{\mathsf{sum}}^{(N)}(\pi,K)$, if the last visit to node $\pi[t]$ occurred more than $K$ time units before time $t$, the cost is $\gamma^K$, rather than the original smaller amount $ \gamma^{\Last_\pi(t,\pi[t])}$. For $\underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K)$, if the last visit to $\pi[t]$ occurred more than $K$ time steps before time $t$, then the cost is $0$. For both, if the last visit to the node $\pi[t]$ occurred less than or equal to $K$ steps before, we pay the actual cost $\gamma^{\Last_\pi(t,\pi[t])}$. The above definition implies that $\underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K) \le c_{\ssum}^{(N)}(\pi)\le \bar{c}_{\mathsf{sum}}^{(N)}(\pi,K)$ for every $\pi$. Then we have $\underline C(v_0,K) \leq C_{\sav}(v_0) \leq \overline C(v_0,K)$, where we define \begin{align*} \underline C(v_0,K)& := \inf_{\pi,\pi[0] = v_0}\limsup_{N \to \infty}\frac{\underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K)}{N+1} \quad\text{ and }\\ \overline C(v_0,K) & := \inf_{\pi,\pi[0] = v_0}\limsup_{N \to \infty}\frac{\bar{c}_{\mathsf{sum}}^{(N)}(\pi,K)}{N+1}.
\end{align*} The difference between the upper and lower bounds can be tuned by selecting $K$: \begin{align*} & \bar{c}_{\mathsf{sum}}^{(N)}(\pi,K)- \underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K) = \sum_{t=0}^N \gamma^K\mathbb I(\Last_\pi(t,\pi[t])\ge K+1)\\ & \Longrightarrow 0\le \bar{c}_{\mathsf{sum}}^{(N)}(\pi,K)- \underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K) \le (N+1)\gamma^K\\ & \Longrightarrow \underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K)\le \bar{c}_{\mathsf{sum}}^{(N)}(\pi,K)\le \underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K) + (N+1)\gamma^K\\ & \Longrightarrow \underline C(v_0,K)\le \overline C(v_0,K) \le \underline C(v_0,K)+\gamma^K. \end{align*} Therefore $C_{\sav}(v_0)$ belongs to the interval $[\underline C(v_0,K),\overline C(v_0,K)]$, whose length is at most $\gamma^K$. In order to guarantee a total error of $\epsilon>0$ for the actual reward $R_{\sav}(v_0)$\footnote{Since $R_{\sav}(v_0)$ is upper bounded by $\lambda/(1-\gamma)$, we assume that the required accuracy $\epsilon$ is less than this upper bound.}, we can select $K\in \mathbb N$ according to $ \frac{\lambda}{1-\gamma}\gamma^K\le\epsilon\Longrightarrow K\ge \ln\left[\frac{\epsilon(1-\gamma)}{\lambda}\right]/\ln\gamma. $ The value $C_{\sav}(v_0)$ can be computed up to the desired degree of accuracy by computing either $\overline C(v_0,K)$ or $\underline C(v_0,K)$. Next, we give the procedure for computing $\overline C(v_0,K)$ (the procedure for $\underline C(v_0,K)$ is similar). It utilizes a finite augmented weighted graph $\widetilde G_K$. \vspace*{-4mm} \subparagraph*{Truncated augmented weighted graph $\widetilde G_K$} Recall the infinite augmented weighted graph $\widetilde G$ from Section~\ref{section:FiniteHorizon}. We define a truncated version $\widetilde G_K= (\widetilde V_K, \widetilde E_K)$ of $\widetilde G$ in which we only keep track of $\Last_\pi(t,v)$ values less than or equal to $K$. For $\widetilde G_K$ we define two weight functions $\overline w$ and $\underline w$, for $\bar{c}_{\mathsf{sum}}^{(N)}$ and $\underline{c}_{\,\mathsf{sum}}^{(N)}$ respectively. \begin{compactitem} \item The set of nodes $\widetilde V_K $ is $ V \times \mathbb Z[0,K]^{\abs{V}}$, where $b_u = 0$ encodes that the last visit to $u$ was more than $K$ steps ago. \item For a node $\widetilde v = (v, b_1, b_2, \dots, b_{\abs{V}}) \in \widetilde V_K$, \begin{itemize} \item the weight $\overline w (\widetilde v)$ is $\gamma^{b_v}$ if $b_v > 0$ and $\gamma^{K}$ otherwise; \item the weight $\underline{w} (\widetilde v)$ is $\gamma^{b_v}$ if $b_v > 0$ and $0$ otherwise. \end{itemize} \item The set of edges $\widetilde E_K $ consists of the edges $(v, b_1, b_2, \dots, b_{\abs{V}}) \ \rightarrow\ (v', b'_1, b'_2, \dots, b'_{\abs{V}})$ such that $(v,v') \in E$, $b'_v = 1$, and for $u\neq v$: \begin{compactitem} \item $b'_u= b_u+1$ if $b_u > 0$ and $b_u+1 \leq K$, \item $b'_u=0$ if $b_u=0$ or $b_u+1 > K$. \end{compactitem} Thus, in $\widetilde G_K$ we specify two different weights for a path $\pi$ at time step $t$ in the case when the previous visit to $\pi[t]$ in $\pi$ was more than $K$ time steps ago. \end{compactitem} Similarly to the infinite augmented graph, for the path $\widetilde\pi$ in $\widetilde G_K$ corresponding to $\pi$ we have \begin{compactitem} \item $\bar{c}_{\mathsf{sum}}^{(N)}(\pi,K)$ is the sum of the weights assigned by $\overline w$ to the first $(N+1)$ nodes of $\widetilde \pi$. \item $\underline{c}_{\,\mathsf{sum}}^{(N)}(\pi,K)$ is the sum of the weights assigned by $\underline w$ to the first $(N+1)$ nodes of $\widetilde \pi$.
It is easy to see that $\overline C(v_0,K)$ is the least possible limit-average cost with respect to $\overline w$ in $\widetilde G_K$ starting from the node $\widetilde v_0 = \left(v_0, 1,1,\dots, 1\right)$. The same holds for $\underline C(v_0,K)$ with $\underline w$. Below we show how to compute $\overline C(v_0,K)$. The case of $\underline C(v_0,K)$ is analogous and thus omitted. \vspace*{-4mm} \subparagraph*{Algorithm for computing $\overline C(v_0,K)$} We now describe a method to compute $\overline C(v_0,K)$ as the least possible limit-average cost in $\widetilde G_K$ with respect to $\overline w$. It is well known that this can be reduced to the computation of the minimum cycle mean in the weighted graph \cite{ZwickP96}, which in turn can be done using the algorithm from~\cite{Karp78} that we now describe. As before, we first assume that $\widetilde G_K$ is strongly connected. For every $\widetilde v\in\widetilde V_K$ and every $n \in \mathbb Z_+$, we define $W_n(\widetilde v)$ as the minimum weight of a path of length $n$ from $\widetilde v_0$ to $\widetilde v$; if no such path exists, then $W_n(\widetilde v) = \infty$. The values $W_n(\widetilde v)$ can be computed by the recurrence \begin{equation*} W_n(\widetilde v) = \min_{(\widetilde u, \widetilde v)\in\widetilde E_K}\{W_{n-1}(\widetilde u)+ \overline w(\widetilde v)\},\quad n = 1,2,\ldots,|\widetilde V_K|, \end{equation*} with the initial conditions $W_0(\widetilde v_0) = 0$ and $W_0(\widetilde v) = \infty$ for any $\widetilde v\ne \widetilde v_0$. Then, we can compute \begin{equation*} \overline C(v_0,K) = \min_{\widetilde v\in\widetilde V_K}\max_{n\in\mathbb Z[0,|\widetilde V_K|-1]}\left[\frac{W_{|\widetilde V_K|}(\widetilde v)-W_n(\widetilde v)}{|\widetilde V_K|-n} \right]. \end{equation*} A cycle $\widetilde \rho$ with the computed minimum mean can be extracted by fixing the node $\widetilde v$ and the path length $n$ that achieve the minimum in the above expression, finding a minimum-weight path of length $|\widetilde V_K|$ from $\widetilde v_0$ to $\widetilde v$, and taking a cycle of length $|\widetilde V_K| - n$ within this path. Thus, the path $\widetilde \pi$ in $\widetilde G_K$ obtained by repeating $\widetilde \rho$ infinitely often realizes this value. A path $\pi$ from $v_0$ in $G$ with $c_{\sav}(\pi) = \overline C(v_0,K)$ is obtained from $\widetilde \pi$ by projection on $V$. In the general case, when $\widetilde G_K$ is not strongly connected, we have to consider each of its SCCs reachable from $\widetilde v_0$, and determine the one with the least minimum cycle mean. For each SCC with $m$ edges and $n$ nodes the computation of the quantities $W$ requires $O(n\cdot m)$ operations. The computation of the minimum cycle mean for this component requires $O(n^2)$ further operations. Since $n \leq m$ because of the strong connectivity, the overall computation time for the SCC is $O(n\cdot m)$. Finally, the SCCs of $\widetilde G_K$ can be computed in time $O(|\widetilde V_K| + |\widetilde E_K|)$ \cite{Tarjan71}. This gives us the following result. \begin{lemma}\label{lem:approx-cost} The value $\overline C(v_0,K)$ and a path $\pi$ with limit-average cost $c_{\sav}(\pi) = \overline C(v_0,K)$ can be computed in time polynomial in the size of $\widetilde G_K$. \end{lemma} The same result can be established for the underapproximation $\underline C(v_0,K)$. \noindent\emph{Remark.} The number of nodes of $\widetilde G_K$ is $|\widetilde V_K| = |V| \times (K+1)^{|V|}$. For the approximation procedure described above it suffices to augment the graph with the information about which nodes were visited in the last $K$ steps and in what order. Thus, we can alternatively consider a graph with $|V| \times {|V|}^K$ nodes in the case when the computed $K$ is smaller than $|V|$.
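For concreteness, here is a minimal sketch of Karp's minimum cycle mean computation on a strongly connected component, reusing the successor lists and node weights produced by the sketch above (cycle extraction is omitted):

\begin{verbatim}
import math

def min_cycle_mean(succ, weight, source):
    """Karp's algorithm on a strongly connected node-weighted graph."""
    nodes = list(succ)
    n = len(nodes)
    # W[k][v]: minimum weight of a path of length k from source to v
    W = [{v: math.inf for v in nodes} for _ in range(n + 1)]
    W[0][source] = 0.0
    for k in range(1, n + 1):
        for u in nodes:
            if W[k - 1][u] < math.inf:
                for v in succ[u]:
                    W[k][v] = min(W[k][v], W[k - 1][u] + weight[v])
    # Karp's formula: min over v of max over k of (W_n(v)-W_k(v))/(n-k)
    return min(max((W[n][v] - W[k][v]) / (n - k)
                   for k in range(n) if W[k][v] < math.inf)
               for v in nodes if W[n][v] < math.inf)
\end{verbatim}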
\begin{example} \label{ex:4node-approx} Figure \ref{fig:graph2} shows the SCC of the augmented graph $\widetilde G_K$ reachable from the initial node $(a,1,1,1,1)$ for the graph given in Figure \ref{fig:graph1} and $K=5$. The nodes in the SCC are denoted by shorthands in the picture, \emph{e.g.}\xspace, $a_1 = (a,3,2,1,4)$. The labels on the nodes correspond to the values of the weight functions $\overline w$ and $\underline w$. As we can see, $\widetilde G_5$ already contains the simple cycles $(a_2b_2c_2a_2),(a_3b_3c_2a_1b_1c_1a_2d_2a_3),(a_3b_3c_2a_1d_1a_3)$, which correspond to the optimal paths presented in Example~\ref{ex:4node-optimal}. The minimum cycle mean in $\widetilde G_5$ is the same for the two sets of weights only on the first and third intervals for $\gamma$ determined in Example~\ref{ex:4node-optimal}; it differs in the second case, where the term $\gamma^8$ is replaced by $\textcolor{blue}{\gamma^5}$ for the upper bound and by $\textcolor{red}{0}$ for the lower bound. \begin{figure} \begin{minipage}{0.55\textwidth} \includegraphics[scale=0.7]{fig2.pdf} \end{minipage} \begin{minipage}{0.45\textwidth} $a_1 = (a,3,2,1,4)$ \quad $a_2 = (a,3,2,1,0)$\\ $a_3 = (a,2,4,3,1)$ \quad $a_4 = (a,2,0,5,1)$\\ $a_5 = (a,2,0,0,1)$\\[2mm] $b_1 = (b,1,3,2,5)$ \quad $b_2 = (b,1,3,2,0)$\\ $b_3 = (b,1,5,4,2)$ \quad$ b_4 = (b,1,0,0,2)$\\[2mm] $c_1 = (c,2,1,3,0) $ \quad $c_2 = (c,2,1,5,3) $\\ $c_3 = (c,2,1,0,3)$\\[2mm] $d_1 = (d,1,3,2,5)$ \quad $d_2 = (d,1,3,2,0)$\\ $d_3 = (d,1,5,4,2)$ \quad $d_4 = (d,1,0,0,2)$ \end{minipage} \caption{The SCC of the finite augmented graph $\widetilde G_5$ for the graph in Figure~\ref{fig:graph1}. The node labels are the values of the functions $\overline w$ and $\underline w$ (in black if $\overline w = \underline w$, otherwise respectively in \textcolor{blue}{blue} and \textcolor{red}{red}).} \label{fig:graph2} \vspace{-.5cm} \end{figure} \end{example} Theorem~\ref{thm:approx-reward}, a consequence of Lemma~\ref{lem:approx-cost}, states the approximate computation result. \begin{theorem}\label{thm:approx-reward} Given a graph $G = (V,E)$, a node $v_0 \in V$, rational values $\lambda$ and $0< \gamma < 1$, and an error bound $\epsilon > 0$, we can compute, in time polynomial in $|V|(K+1)^{|V|}$ for $K = \left\lceil\ln\left[\frac{\epsilon(1-\gamma)}{\lambda}\right]/\ln\gamma\right\rceil$ (i.e., exponential in $|V|$), infinite paths $\pi_{\mathsf{under}}$ and $\pi_{\mathsf{over}}$ and values $r_{\sav}(\pi_\mathsf{under})$ and $r_{\sav}(\pi_\mathsf{over})$ such that $r_{\sav}(\pi_\mathsf{under})\leq R_{\sav}(v_0) \leq r_{\sav}(\pi_\mathsf{over})$ and $r_{\sav}(\pi_\mathsf{over}) -r_{\sav}(\pi_\mathsf{under})\le \epsilon$. \end{theorem} \subsection{Approximation via Bounded Memory} The algorithm presented earlier is based on an augmentation of the graph with a specific structure that is updated deterministically and whose size depends on the desired quality of approximation. Furthermore, in this graph there exists a memoryless strategy with approximately optimal reward. We show that this allows us to quantify how far from the optimal reward value a strategy can be that is optimal among the strategies with bounded memory of a fixed size.
First, we give the definition of memory structures. A \emph{memory structure} $\mathcal M = (M,m_0,\delta)$ for a graph $G = (V,E)$ consists of a set $M$, an initial memory $m_0 \in M$, and a memory update function $\delta : M \times V \to M$. The memory update function can be extended to $\delta^* : V^* \to M$ by defining $\delta^*(\epsilon) = m_0$ and $\delta^*(\pi\cdot v) = \delta(\delta^*(\pi),v)$. A memory structure $\mathcal M$ together with a function $\tau : V \times M \to V$ such that $(v,\tau(v,m))\in E$ for all $v \in V$ and $m \in M$, and an initial node $v_0 \in V$, defines a strategy $\sigma : V^* \to V$ where $\sigma(\epsilon) = v_0$ and $\sigma(\pi\cdot v) = \tau(v,\delta^*(\pi))$. In this case we say that the strategy $\sigma$ has memory $\mathcal M$. Given a bound $B \in \mathbb N$ on the memory size, we define the finite graph $G \times B = (V^{G\times B}, E^{G \times B})$, where $V^{G\times B} = V \times \mathbb Z[1,B]$ and $E^{G \times B} = \{((v,i),(v',j)) \in (V\! \times\! \mathbb Z[1,B])\times (V\! \times\! \mathbb Z[1,B]) \mid (v,v') \in E\}$. Memoryless strategies in this graph precisely correspond to strategies that have memory of size $B$. More precisely, for each strategy $\sigma$ in $G = (V,E)$ that has memory $\mathcal M = (M,m_0,\delta)$ there exists a memoryless strategy $\sigma_{\mathcal M}$ in $G \times |M|$ such that the projection of $\outcome((v_0,m_0),\sigma_{\mathcal M})$ on $V$ is $\outcome(v_0,\sigma)$. Conversely, for each memoryless strategy $\sigma_{\mathcal M}$ in $G \times B$ there exists a memory structure $\mathcal M = (M,m_0,\delta)$ with $|M| = B$ and a strategy $\sigma$ with memory $\mathcal M$ in $G$ such that the projection of $\outcome((v_0,m_0),\sigma_{\mathcal M})$ on $V$ is $\outcome(v_0,\sigma)$. \begin{example} Recall that in Example~\ref{ex:4node-optimal} we established that an optimal path for $\gamma=0.26$ is the path $(abcabcad)^\omega$. In the graph $G \times 3$ there exists a simple cycle corresponding to this path, namely $(a,1)(b,1)(c,2)(a,2)(b,2)(c,3)(a,3)(d,1)(a,1)$. Thus, the optimal path $(abcabcad)^\omega$ corresponds to a strategy with memory of size $3$. \end{example} An optimal strategy among those with memory of a given size $B$ can be computed by inspecting the memoryless strategies in $G \times B$ and selecting one with maximal reward (there are finitely, but exponentially, many such strategies). A strategy returned by the approximation algorithm presented earlier uses a memory structure of size $(K+1)^{|V|}$; by the remark at the end of the previous subsection, it can also be implemented with a memory structure of size at most $|V|^{K+1}$, recording which nodes were visited in the last $K$ steps and in what order. If $|V|^{K+1} \leq B$, then this strategy is isomorphic to some memoryless strategy $\sigma$ in $G \times B$. Since the reward achieved by the optimal memoryless strategy in $G \times B$ is at least that of $\sigma$, its value is at most $\frac{\lambda}{1-\gamma}\gamma^{K}$ away from $R_{\sav}(v_0)$. Now, $|V|^{K+1} \leq B$ iff $K \leq \left\lfloor\frac{\ln B}{\ln \abs{V}} \right\rfloor -1$. Thus, under a memory size $B$, we are guaranteed to find a strategy whose reward is at most $\frac{\lambda}{1-\gamma}\gamma^{ \left\lfloor\frac{\ln B}{\ln \abs{V}} \right\rfloor -1 }$ away from the optimal. Next we sketch an algorithm for computing optimal $B$-memory bounded strategies. As mentioned previously, we can restrict our attention to memoryless strategies in $G\times B$. Memoryless strategies in this graph lead to lasso-shaped infinite paths, with an acyclic initial path followed by a simple cycle which is repeated infinitely often.
This means that we can restrict our attention to paths of length $\abs{V}\cdot B$ which complete the lasso. Now, we apply this length bound to restrict our attention to the finite portion of the augmented graph $\widetilde G$ from Section~\ref{section:FiniteHorizon} that corresponds to path lengths at most $\abs{V}\!\cdot\! B$. This finite subgraph contains the nodes $V\times \mathbb{Z}[1, \abs{V}\!\cdot\!B]^{\abs{V}}$. The dynamic programming algorithm runs in time polynomial in the size of this graph; hence we get a running time which is polynomial in $\abs{V}\cdot \left(\abs{V}\cdot B\right)^{\abs{V}}$. \begin{theorem} Given a graph $G = (V,E)$, a node $v_0 \in V$, rational values $\lambda$ and $0< \gamma < 1$, and a memory bound $B > 1$, we can compute a $B$-memory optimal strategy $\sigma$ with reward $r_{\sav}$ at most $\frac{\lambda}{1-\gamma}\gamma^{ \left\lfloor\frac{\ln B}{\ln \abs{V}} \right\rfloor -1 }$ away from the optimal $R_{\sav}(v_0)$, in time polynomial in $\abs{V}^{(\abs{V}+1)}\cdot B^{\abs{V}}$. \end{theorem} \section{Generalizations of the Model} \input{general-decay.tex} \section{Conclusion and Open Problems} We have introduced the robot routing problem, a reward collection problem on graphs in which the reward structure combines spatial aspects with dynamic and stochastic reward generation. We have studied the computational complexity of the finite- and infinite-horizon versions of the problem, as well as the strategy complexity of the infinite-horizon case. We have shown that optimal strategies need memory in general, and that strategies based on last visitation records do not suffice. However, the important question of whether finite memory suffices for the infinite-horizon optimal path problem, or whether infinite memory is needed, remains open. If finite memory suffices, it would be desirable to provide an algorithm that solves the infinite-horizon value problem exactly; so far we have only given methods for approximating the optimal solution. Another interesting line of future work is the generalization of the model to incorporate timing constraints. More specifically, in this work we have assumed that all the rewards accumulated at a node are collected instantaneously. This assumption is justified by the fact that in many request-serving scenarios the time taken to serve the requests at a given location is negligible compared to the time necessary to travel between the locations. A more precise model, however, would have to incorporate the serving time per node, which would depend on the amount of collected reward. This would imply that the elapsed time becomes a random variable, while in our current model it is deterministic. Other extensions include the settings where the robot can react to the environment by observing the collected rewards or the rewards accumulated at the nodes of the graph, or where there is probabilistic uncertainty in the transitions of the robot in the graph.\looseness=-1
\chapter*{{\Huge Abstract}} \addcontentsline{toc}{chapter}{Abstract} The realm of this thesis is cryptographic protocol theory in the quantum world. We study the security of quantum and classical protocols against adversaries that are assumed to exploit quantum effects to their advantage. Security in the quantum world means that quantum computation does not jeopardize the assumptions underlying the protocol construction. Moreover, we encounter additional setbacks in the security proofs, which are mostly due to the fact that some well-known classical proof techniques are ruled out by certain properties of a quantum environment. Interestingly, we can exploit some of the very same properties to the benefit of quantum cryptography. Thus, this work lies right at the heart of the conflict between highly promising effects and rather demanding conditions in the quantum world. \clearemptydoublepage \chapter*{{\Huge Acknowledgments}} \addcontentsline{toc}{chapter}{Acknowledgments} Thanks to all the people I met on the way of my PhD education for introducing me to new ways of thinking, inspiring me with fascinating ideas, and helping me in so many other ways. Thanks to all the people who stayed with me on this way without questioning these very same aspects. To mention just a few $\ldots$\\[-2ex] $\ldots$ special thanks to my supervisors Ivan Damg{\aa}rd---who taught me so much about cryptography---and Louis Salvail---who taught me so much about quantum---and often vice versa. Thanks to my co-authors Christian Schaffner, Jesper Buus Nielsen, and Serge Fehr for their support throughout my PhD studies and for teaching me many fascinating details. $\ldots$ thanks to Prof.\ Claude Cr\'{e}peau from McGill University in Montr\'eal and Prof.\ Stefan Wolf from ETH Z\"urich as well as Prof.\ Susanne B{\o}dker from Aarhus University for agreeing to serve on the evaluation committee for my PhD thesis. $\dots$ thanks to Chris, Dorthe, and Jesper for proof-reading some parts of the thesis, and to Claude and Louis for valuable comments. $\ldots$ thanks to the Crypto-Group with all its members that left, stayed, or newly arrived. This environment is a great place to do research in (while drinking espresso and eating cake), and many non-research activities (like launching julefrokost, fridaybaring, dancing Danish folk dances, and often absurd discussions about everything under the sun) are unforgettable. Thanks to the always helpful staff at the department, especially to Dorthe, Ellen, and Hanne. $\ldots$ thanks to Aarhus University for enabling me to travel around the world, and in that respect also to IACR, Universit\'e de Montr\'eal, INTRIQ, and Princeton University. Furthermore, my studies were supported by the MOBISEQ research project, which is funded by Det Strategiske Forskningsr{\aa}ds Programkomite for Nanovidenskab og -teknologi, Bioteknologi og IT (NABIIT). Many thanks also to the Institut Mittag-Leffler of the Royal Swedish Academy of Sciences for providing time, a grant, and space to think---as well as to prepare the defense and to revise the thesis. Thanks to Prof.\ Mary Beth Ruskai for pointing to the quantum information program at the institute. $\dots$ thanks to Prof.\ Christoph Kreitz from the University of Potsdam for sparking my interest in crypto in the first place. $\ldots$ and thanks to my family and many friends for their support, often from a distance. Thanks to Claudio, with whom I went through all phases of three ever-changing years of PhD studies.
Many thanks to MaPa, Steffi, and Dande, whose support and (blind) acceptance were most helpful during my education. And last but not least, many thanks to Jesper. \vspace{0.5ex} \begin{flushright} \emph{Carolin Lunemann}\\ \emph{{\AA}rhus, August 31, 2010} \end{flushright} \clearemptydoublepage \tableofcontents \clearemptydoublepage \setcounter{secnumdepth}{3} \setcounter{tocdepth}{2} \pagestyle{fancy} \pagenumbering{arabic} \clearemptydoublepage \chapter{Introduction} \label{chapter:intro} \section{On Cryptography} \label{section:crypto} \mycitation {\\The multiple human needs and desires that demand privacy among two or more people \\ in the midst of social life must inevitably lead to cryptology wherever men thrive \\ and wherever they write.} {David Kahn} Cryptography is the art of \emph{secret writing} (from Greek $\kappa \rho \upsilon \pi \tau o \varsigma$ and $\gamma \rho \alpha \varphi \omega$) and may be considered almost as old as writing itself. Cryptography played a crucial role throughout the history of any society that depended on information, from the Greek Scytale and the Roman Caesar cipher, through the Vigen\`ere cipher, electromechanical rotor machines, and encryption standards, to forming the backbone of electronic infrastructures in modern life (see e.g.~\cite{Singh00} for a historic survey of cryptography). The first cryptographic methods are known as \emph{secret-key cryptography}, based on one secret key shared between the communicating parties and used both for encryption and decryption. The main problem is already apparent from this description and lies in the logistics of distributing the key securely: prior to any secret communication, the involved parties must be in possession of the same secret key. Nevertheless, secret-key cryptography was in use for thousands of years, adjusting its complexity to ever-increasing developments in technique and technology. \emph{Public-key cryptography} was the technological revolution that solved the key distribution problem. The idea was discovered by Diffie and Hellman in~\cite{DH76}, with Rivest, Shamir, and Adleman providing the first implementation~\cite{RSA78}, and independently---slightly earlier but in secrecy---by Ellis, followed by Cocks' and Williamson's practical application (e.g.~\cite{Cocks08}). Public-key cryptography is based on a pair of keys for each communicating party, namely a public key for encryption and a corresponding secret key for decryption, where it must be computationally infeasible (in polynomial time) to derive the secret key from the public one. Furthermore, we require a family of trapdoor one-way functions defining the encryption and decryption procedures. Informally, this means that encryption is a one-way operation that is efficiently computable given the public key, whereas the decryption function is hard to compute unless the trapdoor, i.e.\ the secret key, is known. Thus, the public key can be published without compromising security, and hence, public-key cryptography does not suffer from key distribution problems. Due to this, and due to the fact that the technique additionally allows for digital signatures that are verifiable with the public key and yet unforgeable without the secret key, the concept of public-key cryptography is widely used and indispensable in the age of the Internet and the proliferation of electronic communication systems.
New potential in cryptography emerged with \emph{quantum cryptography}, starting with Wiesner's groundbreaking paper~\cite{Wiesner83}\footnote{The paper was written in the early 1970s but rejected and only published retroactively in 1983.}, suggesting that ``quantum mechanics allows us novel forms of coding without analogue [in classical physics]'' (p.~78). His approach of conjugate coding did not only lay the foundations of the new cryptographic technique but also suggested a system for sending ``two mutually exclusive messages'' (p.~83), which is today known as the powerful primitive of oblivious transfer. It took several years (and the Caribbean sea) to establish quantum cryptography as a scientific discipline, accomplished by Bennett and Brassard, mainly through the BB84-protocol for quantum key distribution (QKD) in~\cite{BB84}, after preceding work such as~\cite{BBBW82, BB83}, and culminating in the first practical realizations~\cite{BB89, BBBSS92}. An alternative QKD scheme was independently proposed by Ekert in~\cite{Ekert91}, based on a different approach using quantum entanglement. Since then, QKD has been researched further, on both the theoretical and the experimental level. Today, conjugate BB84-coding also forms the basis for various more general quantum cryptographic tasks other than key distribution. Modern cryptography concerns, besides the secrecy and authenticity of communication, also the security of various other tasks. For instance, theoretical research in the sub-field of \emph{cryptographic protocol theory} covers cryptographic primitives with fundamental properties for secure multi-party computation. Each primitive can be seen as a building block that implements some basic functionality. Composition of such primitives within outer protocols yields applications that implement a specific task securely over a distance. \section{On the Quantum World} \label{sec:quantum} \mycitation {\\Anyone who is not shocked by quantum theory has not understood it.}{Niels Bohr} In the quantum world, we consider the behavior of systems at the smallest scale, which can be neither explained nor described by classical physics. A \emph{quantum}\footnote{quantus (Latin) - how much} is the smallest unit of a physical entity, and the fundamental concept in \Index{quantum information theory} is a quantum bit, or short, a\index{qubit} qubit. Quantum theory was established at the beginning of the last century, but has been subject to different interpretations ever since---both scientific and philosophical. This thesis covers two subareas of quantum information theory, constituting the following two main parts, Part~\ref{part:quantum.cryptography} and Part~\ref{part:cryptography.in.quantum.world} (Part~\ref{part:preliminaries} is dedicated to preliminaries). Part~\ref{part:quantum.cryptography} is in the realm of quantum cryptography, where---informally speaking---the transmission of qubits followed by some classical post-processing is employed to accomplish a cryptographic task. The security is mainly derived from the special properties of the qubits during and after transmission, and therewith, directly from physical laws. Part~\ref{part:cryptography.in.quantum.world} on cryptography in a quantum world refers to the study of cryptography with completely classical message exchange, but where the surrounding environment is quantum. In other words, the security of the classical schemes must withstand powerful quantum computing capabilities.
We now present---in brief and on a (counter-)intuitive level---the aspects unique to the quantum world which are relevant in the context of this work. Interestingly, these quantum features can be exploited to the benefit of quantum cryptography. However, the very same properties impose intriguing new challenges in classical cryptography. In other words, ``what quantum mechanics takes away with one hand, it gives back with the other''~\cite[p.~582]{NC00}. And so, this work lies right at the heart of the conflict between highly promising effects, but likewise rather demanding conditions. \paragraph{\sc Information gain vs.\ disturbance.} This aspect might be argued to constitute the most outstanding advantage of quantum cryptography over the classical world, and forms ``the engine that powers quantum cryptography''~\cite[p.~1]{Fuchs96}. In the classical case, a bit can simply be read in transmission, and the information gain solely depends on the security of the respective encryption used. In quantum cryptography, information is typically encoded in two complementary types of photon polarization or, in other words, a qubit is prepared in one out of two conjugate bases with orthogonal basis states. To gain information about such an unknown qubit, it must be observed, but observing in the quantum world means measuring. Measuring, or more precisely distinguishing between two non-orthogonal quantum states, is destructive, and therewith any measurement disturbs the system. This is captured by the \emph{Heisenberg uncertainty principle}, which states that certain pairs of quantum properties are complementary in that measuring one of them necessarily disturbs the other. Consequently, eavesdropping on a qubit transmission disturbs the system and can therefore be noticed in a statistically detectable way. Moreover, the quantitative trade-off between information gain and disturbance is useful not only against an external adversary, but is also a main ingredient when proving security against a dishonest player. This fact is inherent in the basic security aspects of all our quantum two-party protocols, discussed later in Part~\ref{part:quantum.cryptography}. \paragraph{\sc An unknown quantum state cannot be copied.} This fact---unheard of in the case of classical data---is formalized in the \emph{no-cloning theorem}\index{no-cloning theorem}~\cite{WZ82}. This peculiar property constitutes another major security feature in quantum communications and underlies all our quantum protocols in Part~\ref{part:quantum.cryptography}. However, it also imposes severe restrictions in the theory of quantum computing. This becomes apparent in Part~\ref{part:cryptography.in.quantum.world}, where the commonly used classical proof technique of \emph{rewinding}, briefly discussed below, requires copying certain data, and so has to be carefully reviewed in the quantum world. \paragraph{\sc Quantum memory is limited.} \index{quantum storage} A more practical issue concerns the limitation of the number of qubits that can be stored and then retrieved undisturbed. This may be seen as a snapshot of the current state of the art. However, ongoing research strongly suggests that it is---and will be---much easier to transmit and measure qubits than it is to store them for a non-negligible time.
We will make use of this fact in our quantum protocols in Chapter~\ref{chap:hybrid.security.applications}, which are designed such that dishonest parties would need large quantum memory to attack successfully---a security property that classical protocols cannot achieve. Yet, we do not rely on this condition alone, but investigate a wider diversification of security that is not threatened by potential breakthroughs in developing quantum storage. \paragraph{\sc Quantum rewinding is tricky.} \index{rewinding!quantum} As already indicated, this statement is a key aspect in Part~\ref{part:cryptography.in.quantum.world}, and originates from most of the above-mentioned properties ``all wrapped up together''. Rewinding is a very powerful technique in simulation-based proofs against a classical dishonest party: We can prove security against a cheating player by showing that a run of a protocol between him and the honest player can be efficiently simulated without interacting with the honest player, but with a simulator instead. A simulator is a machine that does not know the secrets of the honest party, yet sends messages as the honest player would, but with more freedom, e.g.\ in how and when to generate them. To conclude the proof, we then have to show that the running time of the simulation as well as the distribution of the conversation are as expected. A simulator basically prepares a valid conversation and tries it on the dishonest party. Now, in case this party does not send the expected reply, we need the possibility to rewind him.\footnote{More precisely, we model the player---similar to the simulator---as a machine, and thus, we can just reset this machine to an earlier state, i.e., erase parts of the memory and start a new conversation. In that sense, rewinding can be thought of as, for instance, rebooting a computer after it crashed.} Unfortunately, rewinding as a proof technique can generally not be directly applied in the quantum world, i.e., if the dishonest machine is a quantum computer. First, we cannot trivially copy and store an intermediate state of a quantum system, and second, quantum measurements are in general irreversible. In order to produce a classical transcript, the simulator would have to partially measure the quantum system without copying it beforehand, but then it would become impossible to reconstruct all information necessary for correct rewinding. Due to these difficulties, no simple and straightforward security proof for the quantum case was known. However, Watrous recently showed that in a limited setting an efficient quantum simulation, relying on the newly introduced \emph{quantum rewinding theorem} (see~\cite{Watrous09} and Section~\ref{sec:quantum.rewinding}), is possible. We will discuss this aspect in more detail in Chapters~\ref{chap:coin.flip} and \ref{chap:framework}: We will show that the quantum rewinding argument can also be applied to classical non-constant-round coin-flipping in the quantum world, and propose a framework to weaken certain assumptions on the coin, in quest of a quantum-secure constant-round protocol.
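To make the classical technique concrete, the following schematic simulator for a toy commit-then-reveal coin-flip illustrates the rewinding step (a purely illustrative Python sketch of our own; the adversary interface \texttt{A} and all names are hypothetical). The crucial---and classically harmless---operation is copying the adversary's state and retrying, which is exactly what fails against a quantum machine:

\begin{verbatim}
import os, hashlib

def simulate(A, target_bit, max_tries=1000):
    """Produce an accepting transcript whose outcome is target_bit."""
    for _ in range(max_tries):
        state = A.initial_state()        # copy/reset the adversary:
        r = os.urandom(16)               # fine classically, impossible
        c = hashlib.sha256(r).digest()   # for unknown quantum states
        b_adv = A.reply(state, c)        # adversary's bit, given c
        b_sim = r[0] & 1                 # the bit hidden in c
        if b_sim ^ b_adv == target_bit:  # outcome hits the ideal coin?
            return (c, b_adv, r)         # yes: output the conversation
        # no: "rewind" by discarding state and trying a fresh commitment
    raise RuntimeError("simulation failed")
\end{verbatim}

Against a classical adversary whose reply is (close to) independent of the hidden bit, each attempt succeeds with probability about one half, so the expected number of rewinds is constant.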
\paragraph{\sc Spooky actions at a distance.} This famous naming by Einstein\footnote{``Spooky actions at a distance'' was originally put down as ``spukhafte Fernwirkung'' in~\cite{Einstein71}.} describes the phenomenon of \Index{entanglement}. Informally, two qubits are called entangled if their state can only be described with reference to each other. This has the effect that a measurement on one particle has an instantaneous impact on the other one---despite any distance separating the qubits spatially. Entanglement is a resource unique to the quantum world. In the words of Schr\"{o}dinger, entanglement is not ``\emph{one} but rather \emph{the} characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought''~\cite[p.~555]{Schroedinger35}. Besides constituting a disturbing aspect---intuitively and philosophically---entanglement opens up interesting applications such as \emph{quantum teleportation}~\cite{BBCJPW93} and \emph{superdense coding}~\cite{BW92}, as well as various aspects in quantum cryptography and computing. We will use entanglement as a thought experiment in our quantum protocols when analyzing an equivalent \emph{purified EPR-version}\footnote{An EPR-pair denotes a pair of entangled qubits. The name (ironically) originates from the paper~\cite{EPR35} by Einstein, Podolsky, and Rosen, in which they criticized quantum mechanics as an incomplete theory---due to entanglement.} (Chapter~\ref{chap:hybrid.security}). \section{Contributions} \label{sec:contributions} This dissertation is based on research done during the three years of my PhD studies at the Department of Computer Science, Aarhus University, Denmark. Part of the research was conducted while visiting Universit\'e de Montr\'eal, Qu\'ebec, Canada. The realm of this work is quantum cryptography and classical cryptography in the quantum world. More specifically, the thesis covers aspects of (quantum) cryptographic protocol theory, based on cryptographic primitives. The main results are outlined in the following sections and pictorially represented in Figure~\ref{fig:pic.thesis}. \subsection{The Importance of Mixed Commitments} \label{sec:cont.commit} \index{rewinding} Classical mixed (or dual-mode) commitments are of great significance for most constructions discussed in this work. Here, we explain the challenges that the quantum world imposes on commitments in general and summarize the results of~\cite{DFLSS09,DL09,LN10} in this respect.\\ Security for classical constructions in the quantum world means that quantum computation does not jeopardize the underlying mathematical assumption that guarantees the security---in the context of commitments, for instance, the hiding and binding properties. However, we encounter further setbacks when actually proving such constructions secure within an outer protocol; given this work's main focus on simulation-based security, these are mostly due to the strong restrictions on rewinding in the quantum world. The first difficulty in any attempt to rewind the adversary concerns the fact that the reduction from the computational security of an outer protocol to the \emph{computationally binding} property of a commitment does not simply translate from the classical to the quantum world. Computational binding means that if a dishonest party can open a commitment to two different values, then the computational assumption does not hold. In the classical case, a simulator simulates a run of the outer protocol with the committer, such that the latter outputs a valid commitment at some point during the execution. Later in the protocol he must then provide a correct opening. The simulator has the possibility to rewind the player to any step in the protocol execution, e.g.\ to a point after the commitment was sent.
Then it can repeat the simulation of the outer protocol, which can now be adapted to the simulator's knowledge of the committed value. If the dishonest committer opened the same commitment to a value different from the previous one, he could break the underlying assumption guaranteeing computational binding. In other words, two valid openings of the same commitment imply the inversion of the underlying one-way function, which concludes the proof. Such a technique, however, is impossible to justify in the quantum world, since we cannot trivially copy and store an intermediate state, and measurements are in general irreversible. In order to succeed, the simulator would have to partially measure the quantum system without copying it beforehand to obtain the first transcript, but then it would become impossible to reconstruct all information necessary for correct rewinding. The second challenge we encounter is to prove an outer protocol with an embedded \emph{computationally hiding} commitment secure. Generally speaking, in a classical simulation of the outer protocol, the simulator aims, e.g., at hitting a given ideal outcome with a value to which it then commits. If the reply from the possibly dishonest counterpart matches this prepared value, such that both sides conclude on the ideal value as their result and the transcript is indistinguishable from a real run of the protocol, the simulation was successful. Otherwise, the simulator rewinds the dishonest player completely and repeats the simulation. We show a natural and direct translation of this scenario to the quantum world in Chapter~\ref{chap:coin.flip}, where we use a technique that allows quantum rewinding in this very setting when using bit commitments (see Section~\ref{sec:cont.coin.flip}). In the case of string commitments, however, we cannot rewind the other player in polynomial time to hit the guess, since that guess consists of a bit-string. A possible solution for simulating against a classical adversary is to let him commit to his message before the simulator commits; then the player's message can be extracted and the simulation can be matched accordingly. This technique, however, is again doomed to fail in the quantum realm, since it reduces to the previous case, where the simulator cannot preserve the other party's intermediate state as required during such a simulation. We will circumvent both of the above aspects by introducing mixed commitment schemes in our protocols. Generally speaking, the notion of mixed commitments requires some trapdoor information, given to the simulator in the ideal world. Depending on the instantiation, the trapdoor provides the possibility for \emph{extraction} of information out of the commitments or for \emph{equivocability} when opening the commitments. This allows us to circumvent the necessity of rewinding in the proof, while achieving high security in the real protocol. The idea of mixed commitment schemes is described in more detail in Section~\ref{sec:mixed.commit.idea} and a quantum-secure instantiation is proposed in Section~\ref{sec:mixed.commit.instantiation}. Various extensions are then discussed to match the construction to the respective requirements in different outer protocols (Sections~\ref{sec:extended.commit.compiler},~\ref{sec:extended.commit.coin}, and~\ref{sec:mixed.commit.trapdoor.opening}).
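To illustrate the two trapdoor modes on a toy scale, consider the following insecure, textbook-style sketch of our own (emphatically \emph{not} the quantum-secure, lattice-based instantiation of Section~\ref{sec:mixed.commit.instantiation}): in a group of prime order $q$ with generators $g$ and $h = g^x$, a Pedersen-style commitment is equivocable given the trapdoor $x$, while an ElGamal-style commitment is extractable given $x$. A genuine mixed commitment must in addition make the two key modes computationally indistinguishable---even to a quantum adversary---which this toy does not attempt.

\begin{verbatim}
import random

p, q, g = 23, 11, 4              # <4> has prime order 11 modulo 23
x = random.randrange(1, q)       # trapdoor, known only to the simulator
h = pow(g, x, p)                 # commitment key

def commit_E(m, r):              # equivocable mode (perfectly hiding)
    return (pow(g, m, p) * pow(h, r, p)) % p

def equiv_open(m, r, m2):        # open commit_E(m, r) to any value m2
    return (r + (m - m2) * pow(x, -1, q)) % q   # Python 3.8+ inverse

def commit_X(m, r):              # extractable mode (perfectly binding)
    return (pow(g, r, p), (pow(h, r, p) * pow(g, m, p)) % p)

def extract(a, b):               # recover g^m using the trapdoor
    return (b * pow(pow(a, x, p), -1, p)) % p

m, r = 7, random.randrange(q)
assert commit_E(3, equiv_open(m, r, 3)) == commit_E(m, r)
assert extract(*commit_X(m, 5)) == pow(g, m, p)
\end{verbatim}

In a simulation, the extractable mode lets the simulator read off the committed value without waiting for the opening, and the equivocable mode lets it delay its own choice until after the adversary's reply---in both cases avoiding rewinding altogether.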
\subsection{Improving the Security of Quantum Protocols} \label{sec:cont.hybrid.security} The following results are joint work with Damg{\aa}rd, Fehr, Salvail, and Schaffner~\cite{DFLSS09} and will be addressed in detail in Chapter~\ref{chap:hybrid.security}.\\ We propose a general compiler for improving the security of a large class of two-party quantum protocols, implementing different cryptographic tasks and running between mutually distrusting players Alice and Bob. Our main result states that if the original protocol is secure against a so-called \emph{benign} Bob, who is only required to treat the qubits ``almost honestly'' but can deviate arbitrarily afterwards, then the compiled protocol is secure against a \emph{computationally bounded} quantum Bob. The unconditional security against Alice is preserved during compilation, which requires only a constant increase in the number of transmitted qubits and classical messages. The consequences of such a compiler are twofold. First, the basic assumption in designing new protocols for any two-party functionality is reduced to the relatively weak assumption of benignity; moreover, the proofs for already existing protocols within the specific class typically go through under this assumption (at least after some minor adaptations). Second, security in the bounded-quantum-storage model implies benign security. Therefore, by compilation of such protocols, we can achieve \emph{hybrid security}, which means that the adversary now needs \emph{both} large quantum memory \emph{and} large quantum computing power to break these new protocols. In more detail, the protocols we consider here start with a qubit transmission from Alice to Bob, where each qubit is encoded in one of two conjugate bases. This implies that, whenever Bob measures in the complementary basis, he obtains a random outcome. The second part of the protocol consists of arbitrary classical messages and local computations, depending on the task at hand but typically relying on the fact that a dishonest Bob has high uncertainty about a crucial piece of information. The basic technique to construct the compiler was already suggested in the first quantum oblivious transfer protocol~\cite{CK88}. We want to force Bob to measure by asking him to commit (using a classical scheme) to all his basis choices and measurement results, and by then requiring him to open some of them later. While classical intuition suggests that the commitments should force Bob to measure (almost) all the qubits, it was previously very unclear what exactly this would achieve in the quantum world. To the best of our knowledge, it was never formally proven that the classical intuition also holds for a quantum Bob. We now give a full characterization of the commit\&open approach in general quantum settings, namely that it forces Bob to be benign. We propose a formal definition of \emph{benignity}, which might be of independent interest. A benign Bob is characterized by the following two conditions, which must be satisfied after the qubit transmission. First, his quantum storage is very small, and second, there exists a basis-string such that the uncertainty about Alice's encoded bit is essentially one bit whenever the encoding basis does not match the basis indicated in that string.
These two conditions imply that a successfully passed opening of his commitments for a random test subset puts Bob in a situation which is close to a scenario in which he measured as he was supposed to: His quantum memory is essentially of size zero, and furthermore, measuring the untested qubits in a basis complementary to the one Bob (claims to have) used leads to a result with large uncertainty. The bounds on Bob's uncertainty and his quantum memory are proven for an ideal state that is negligibly close to the real state. For the ideal state, we can then show that the remaining subsystem after the test is a superposition of states with \emph{relative Hamming distance upper bounded by the test estimate}. To conclude the proof, we assume that the original protocol implements some ideal functionality with statistical security against a benign Bob. Then we show that the compiled protocol with the commitments also implements that functionality, but now with security against any computationally bounded (quantum) Bob. To preserve the unconditional security of the original protocol, we require an unconditionally hiding commitment scheme. Since the common reduction from the computational security of the protocol to the computational binding property of a commitment would require rewinding, we use a mixed dual-mode commitment, which allows us to avoid rewinding Bob in this step (see also Section~\ref{sec:cont.commit}). We generalize our result to noisy quantum communication and show that the compilation does not render sequential composability insecure. We then extend the underlying commitment scheme for a more general composability guarantee and obtain that any compiled protocol \emph{computationally quantum-UC-emulates} its corresponding ideal functionality. \subsection{Classical Coin-Flipping in the Quantum World} \label{sec:cont.coin.flip} The result on quantum-secure single coin-flipping is based on~\cite{DL09}, co-authored with Damg{\aa}rd, and will be fully discussed in Chapter~\ref{chap:coin.flip}. The proposed amplification framework for obtaining strong coin-strings from weak initial assumptions on the coins is joint work with Nielsen~\cite{LN10} and will be addressed in more detail in Chapter~\ref{chap:framework}.\\ We first investigate the standard coin-flipping protocol with classical message exchange but where the adversary is assumed to be capable of quantum computing. The output of the protocol is a uniformly random unbiased bit, and the construction does not require any set-up assumptions. Therewith, the communicating parties can interactively generate true randomness from scratch in the quantum world. Our result constitutes the most \emph{direct quantum analogue} of the classical security proof by using a recent result of Watrous~\cite{Watrous09} that allows for quantum rewinding in this restricted setting and when flipping a single coin. The full potential of coin-flipping lies in the possibility of flipping a string of coins instead of a bit, such that the parties can interactively generate a common random string from scratch. Therewith, it is possible, for instance, to implement the theoretical assumption of the common-reference-string-model, which then implies that various interesting applications can be realized in a simple manner without any set-up assumptions. We show that, with our definitions, the single coin-flipping protocol composes sequentially.
Additionally, we sketch an extended construction of the underlying commitment scheme, allowing for efficient simulation on both sides, with which we achieve more general composition guarantees. Neither composition, however, is fully satisfactory. Sequential coin-flipping allows for implementations without set-up assumptions but leads to a non-constant-round application. In contrast, parallel composition achieves much better efficiency with constant round complexity but requires some set-up assumptions in our proposed construction. Unfortunately, we do not know how to extend Watrous' quantum rewinding to the case of bit-strings while keeping the running time of the simulator polynomial. The proof technique from the purely classical setting is impossible to apply in the quantum world (see also Section~\ref{sec:cont.commit}). Other techniques to achieve constant-round coin-flipping are not known to date. Our framework in Chapter~\ref{chap:framework} can be understood as a step towards \emph{constant-round coin-flipping}. We first investigate different security degrees of a string of coins. We then propose protocol constructions that allow us to amplify the respective degrees of security such that weaker coins are converted into very strong ones. The final result constitutes an amplification towards a coin-flipping protocol with long outcomes, which is fully poly-time simulatable on both sides against quantum adversaries. The protocol can be implemented with quantum-computational security in the plain model without any set-up assumptions. It only assumes mixed commitment schemes, which we know how to construct with quantum security, and no other assumptions are put forward. With this solution, we still have to compose the single coin-flip as sketched above sequentially to obtain long outcomes, but we achieve coins with stronger security. Our method of amplifying the security strength of coins also applies to potential constant-round coin-flipping. If the underlying weak protocol already produces string outcomes and is constant-round, then the resulting strong protocol is also constant-round, and we consider it a contribution in itself to define the weakest security notion for any potential candidate that allows amplification to the final strong protocol using a constant-round reduction. \subsection{Applications} \label{sec:cont.applications} We consider our applications in both parts of the thesis (Chapters~\ref{chap:hybrid.security.applications} and~\ref{chap:coin.flip.applications}) well suited as examples for the respective preceding main results, since they all have some special properties. Depending on the context they are proposed in, they appeared in~\cite{DFLSS09,DL09,LN10}.\\ The first quantum protocol in Section~\ref{sec:hybrid.security.ot} implements \emph{oblivious transfer} (OT), which constitutes a highly relevant cryptographic primitive that is complete for general two-party computation. Interestingly, the idea behind this primitive was introduced in the context of quantum cryptography, namely, in the pioneering paper of Wiesner~\cite{Wiesner83} that also paved the way for quantum cryptography by introducing the concept of conjugate coding. The very nature of conjugate coding implies oblivious transfer, and with that, it can be understood as a natural quantum primitive. Classical and quantum OT cannot be implemented without any additional restrictions. However, in contrast to classical OT, quantum OT reduces to classical commitment.
The idea of using a classical commitment within quantum protocols was already suggested in the first quantum oblivious transfer protocol~\cite{CK88} and its follow-up work in~\cite{BBCS91}. Various partial results followed, such as assuming a perfect ideal commitment~\cite{Yao95,Mayers96,Unruh10} or a (theoretical) quantum string commitment~\cite{CDMS04}. Based on the analysis of our compilation (sketched in Section~\ref{sec:cont.hybrid.security}), we can now rather simply apply our compiler to (a variant of) the original quantum OT-protocol, and therewith give a complete proof for a concrete commitment scheme. In a rather straightforward way, oblivious transfer as a building block easily extends to \emph{password-based identification}, which is needed for any authenticated set-up. The quantum identification scheme in Section~\ref{sec:hybrid.security.id} allows for identification by solely proving the knowledge of a secret password without actually announcing it in the clear. Furthermore, it has some special properties which indicate its utility in practice. First, the only option for a party not in possession of the password is to guess it, which implies that the same password may be safely reused for a long time. Second, the scheme tolerates a possibly non-uniform password, which translates to the realistic assumption of user-memorizable passwords. And last, a typical setting for identification is not necessarily required to run over large distances to be considered useful, and as such, it can actually be implemented with existing technology. Naturally, an identification scheme, secure under diversified assumptions and against any external adversary, is an important step towards an actual implementation. The classical \emph{generation of commitment keys} in Section~\ref{sec:key.generation.coin} nicely combines the above applications with the results on quantum-secure coin-flipping, fulfilling the requirement on our mixed commitment construction. By running a coin-flipping protocol as an initial step in the quantum protocols above, the communicating players can interactively generate their commitment keys for compilation. This allows us to avoid the common-reference-string-model and yields implementations of \emph{entire} protocols in the quantum world without any set-up assumptions. The two applications in the context of zero-knowledge are interesting in that the interactive generation of coins at the beginning of, or during, outer protocols allows for quantum-secure realizations of classical schemes from scratch. First, in Section~\ref{sec:coin.iqzk}, we show a simple transformation from non-interactive zero-knowledge to \emph{interactive quantum zero-knowledge}. Then, in Section~\ref{sec:coin.zkpk}, we propose a \emph{quantum-secure zero-knowledge proof of knowledge}, which relies not only on initial randomness but also on enforceable randomness, and is based on a witness encoding scheme providing a certain degree of extractability, defined for the quantum context to resemble the special soundness of classical schemes. Both zero-knowledge constructions nicely highlight that the realization of coin-flipping as a stand-alone tool allows it to be used rather freely in various contexts.
\begin{figure}[h] \includegraphics[width=\textwidth]{pic.pdf} \vspace{-3ex} \caption{Picture of the Thesis.} \label{fig:pic.thesis} \end{figure} \clearemptydoublepage \part{Setting The Stage} \label{part:preliminaries} \chapter{Cryptographic Toolbox} \label{chap:crypto.tools} In this work, we are interested in classical and quantum cryptographic two-party protocols, i.e., our focus lies on enabling two players to accomplish a specific task securely by communicating over a distance. In a perfect world of gentlemen, we could, of course, just communicate over a distance without using cryptographic security precautions. In an ideal world, we can simply assume a ``black-box'' that solves what we want while not leaking anything of importance. However, we operate in the real world. This means not only that we have to take various dishonest players into account when implementing our protocols, but also that we have to work within a restricted framework of given conditions and existing resources.\footnote{Note that, throughout this work, we will use the terms \emph{ideal world} and \emph{real world} also in the more formal context of the so-called \emph{two-world paradigm} (see Section~\ref{sec:worlds}) for simulation-based proofs.} In the following sections we formalize this intuitive description in cryptographic terms. The chapter is not intended to provide a full introduction to cryptographic protocol theory, but rather to give a brief but complete overview of the notation, tools, conditions, and settings we will use, and to fix terminology that may vary in the standard literature. In short, we are setting the stage for the results in this thesis. \section{Players} \label{sec:players} Our main characters are Alice and Bob, who are subject to different roles and cheating capabilities. The \emph{correctness} of our two-party protocols is ensured if they implement the task at hand in the desired way. This scenario only concerns \emph{honest}\index{player!honest} parties Alice and Bob, who may have different roles, such as sender, receiver, committer, verifier, user, and server, depending on the respective functionality to be carried out. An honest player is denoted by~${\sf P}$. \emph{Security} is shown by investigating the case where one of the parties is \emph{dishonest}\index{player!dishonest}. More precisely, a dishonest party ${\sf P}'$ can try, for instance, to bias the outcome of the protocol or to succeed illegitimately. Between these two extremes, there are various nuances of cheating. For instance, the common notion of \emph{semi-honest}\index{player!semi-honest} describes an ``honest-but-curious'' player who is curious enough to try to gain additional information while following the protocol honestly. In Chapters~\ref{chap:hybrid.security} and~\ref{chap:hybrid.security.applications} we will use another intermediate notion that captures \emph{benignly dishonest}\index{player!benign} behavior in quantum protocols. The protocols consist of a quantum transmission phase and some classical post-processing. A benign receiver of qubits is assumed to treat these ``almost honestly'', which means he immediately measures most of the qubits upon reception in the specified bases. Afterwards, during the classical post-processing, he can deviate arbitrarily. Thus, in some sense, he wants to cheat but is incapable of mastering the quantum information in any other way than simply measuring it.
We will define this newly introduced notion in greater detail later on, as it forms the foundation of our improved quantum protocols. A very different, external adversary is the so-called \emph{man-in-the-middle}\index{player!man-in-the-middle} Eve (denoted by ${\sf E}$), who tries to eavesdrop on the classical and quantum communication between Alice and Bob, with the intention to break the protocol---or at least gain some information---without being detected. Quantum cryptography provides its protocols with \emph{automatic intrusion detection}, due to the fact that here any kind of intrusion will inevitably disturb the system. However, we have to thoroughly implement the testing of qubits for interference as well as investigate the potential information leakage of the classical communication. \section{Security Flavors, Assumptions, and Models} \label{sec:flavors} The purpose and objective of theoretical cryptography is to design protocols with the highest security possible under \emph{any} condition, that is, without any restriction on adversarial resources such as computing power and memory size. However, this \emph{unconditional security}\index{security!unconditional} is extremely hard to obtain for both players simultaneously, in the classical as well as in the quantum world. In fact, some tasks are proven to be impossible to achieve with unconditional security for both players. The best-known example thereof might be the impossibility results on unconditionally secure classical and quantum bit commitment (proven in the quantum case by~\cite{Mayers97,LC97}). Furthermore, for two distrusting parties, the only applications actually proven to be unconditionally secure regarding confidentiality are Vernam's symmetric one-time pad encryption~\cite{Vernam26,Shannon49} as well as quantum key distribution~\cite{BB84,SP00}. Thus, the level of security has to be lowered for implementing other functionalities, and we have to achieve a reasonable balance between realistic assumptions under consideration of current and future technology---as weak as possible---and yet meaningful security---as strong as possible. For that purpose, we specify cryptographic models to capture various notions of security and to impose realistic restrictions on the adversary. To mention just a few, such models consider limited computing power, limited memory size~\cite{Maurer92,DFSS05}, a common resource with special properties (e.g.\ initially shared randomness), noisy storage~\cite{WST08}, or restricted quantum measurement (e.g.\ a limited set of measurements~\cite{KMP04} or a limited set of qubits to be measured at the same time~\cite{Salvail98}). \paragraph{\sc Computational Security.} \index{security!computational} Restricting the adversarial classical computing power and time is currently the most applied model in practical public-key cryptography. This setting is known as the \emph{plain model}\index{plain model}, achieving \emph{computational security} based on classical hardness assumptions, stating that some problems are computationally infeasible to solve in polynomial\index{polynomial} time\footnote{An algorithm is poly-time if its running time is upper bounded by a polynomial in the size of its input, i.e.~$O(n^c)$. In more detail, there exist constants $c > 1$ and $n_0$ such that $poly(n) \leq n^{c}$ for all $n > n_0$. As synonyms, we often use \emph{feasible} or \emph{efficient}.}. Usually, security is shown by reducing the security of the actual scheme to the hardness of a well-known mathematical problem.
However, the hardness of such complexity assumptions is unproven. It should also not go unnoted that, with the emergence of quantum computers, which have great potential to solve several of the underlying problems in polynomial time, the security of various crypto-systems would collapse. To give examples, Shor gave efficient quantum algorithms for factoring large integers~\cite{Shor97}, which would jeopardize the RSA assumption, and for the related problem of computing discrete logarithms, underlying e.g.\ the ElGamal encryption system. Grover's algorithm for searching an unstructured search space achieves a quadratic speed-up over classical computation. This, for instance, also reduces the time for performing exhaustive search over the set of possible keys used in symmetric crypto-systems (e.g.\ DES). Of course, these algorithms only yield profitable results if large-scale quantum computers can be built. Interestingly, the very quantum effects that make them so powerful also make them so difficult to control---so far. \paragraph{\sc Quantum-Computational Security.} \index{security!quantum-computational} Recently, the new sub-field of so-called post-quant\-um crypto\-gra\-phy has emerged within public-key cryptography.\footnote{The common classification might be slightly confusing, in that the notion ``post-quantum'' relates to the time after the successful development of large-scale \emph{quantum computers}, as opposed to \emph{quantum cryptography}.} There, the focus lies on researching assumptions which are believed to be hard even on a quantum computer, and thus, on achieving \emph{quantum-computational security}. Post-quantum crypto-schemes include, for instance, the McEliece crypto-system based on a coding-theoretic problem~\cite{McEliece78} and lattice-based crypto-systems (e.g.~\cite{Ajtai96, Regev05}). The latter provide, besides good efficiency when en- and decoding, the merit that breaking the security of such protocols implies solving a hard lattice problem in the \emph{worst case}. However, we should stress also in this context that this hardness is again assumed; formal proofs are still to come. In this work, we will use lattice-based crypto-systems for implementing mixed commitment schemes, secure in the quantum world (Chapters~\ref{chap:hybrid.security} and \ref{chap:framework}). \paragraph{\sc Quantum Security.} In contrast to security through mathematical hardness assumptions in classical cryptography, the security in quantum cryptography is based on quantum mechanical laws. Proofs based on such physical limitations are not given by reduction, as for computational limitations, but in information-theoretic terms. That means that in such models, an adversary does not learn \emph{any} information, except with at most negligible\index{negligible} probability.\footnote{A function is negligible in $n$ if it is smaller than the inverse of any polynomial, provided $n$ is sufficiently large, i.e., for all constants $c$ there exists a constant $n_c$ such that $\negl{n} \leq n^{-c}$ for all $n > n_c$.} \paragraph{\sc Bounded-Quantum-Storage Model.} \index{bounded-quantum-storage model} In the quantum cryptographic setting, one such physical limitation is formalized in the \emph{bounded-quantum-storage model} (BQSM), proposed in~\cite{DFSS05}. The intuitive idea behind the model is that the most sensitive information is encoded in qubits that are transmitted in the first phase of the protocol.
Then, at some later point, typically an announcement of the encoding bases follows to complete the task at hand. Now, under the assumption that an adversary's quantum memory size is limited, he cannot store all of the qubits but has to measure some fraction. Thus, by converting quantum information into classical information without complete knowledge of the right bases, information gets irreversibly destroyed. The protocols in this model achieve unconditional protection against cheating by one of the players, while if the other is corrupted, the protocols are secure under the sole assumption that his quantum storage is of limited size, namely of size at most a constant fraction of the qubits sent. Such a bound can also be applied to an external eavesdropper's quantum memory by slightly extending the respective original protocol. The underlying motivation for the BQSM is the fact that transmission and measurement of qubits is well within reach of current technology. Storing quantum information, however, requires keeping the qubit state stable under controlled conditions for a non-negligible time, which still constitutes a major technological challenge; an attack would require large quantum storage with a long lifetime. In contrast, honest parties, following the protocol, do not need quantum memory at all. Furthermore, neither honest nor dishonest parties are bounded with respect to their classical storage or computing power. We want to stress that the impossibility results for the bounded-classical-storage model (see e.g.~\cite{Maurer90,Maurer92,CCM98,DM04}) do not hold in the quantum setting.\footnote{The bounded-classical-storage model ensures security as long as the adversary's memory size is at most quadratic in the memory size of the honest players. A larger, more favorable gap between the storage assumptions on honest and dishonest parties was shown to be impossible~\cite{DM04}.} Hence, the BQSM is realistic for fundamental physical reasons and potentially useful in practice. Many two-party applications investigated in the BQSM (like identification) are not necessarily required to run over large distances to be considered useful. Thus, such protocols can actually be implemented with existing devices, and many applications have been proven BQSM-secure~\cite{DFSS05,DFSS07,Schaffner07}. We will work in this model in Chapter~\ref{chap:hybrid.security.applications}, where it constitutes one of the security layers in our quantum protocols. \paragraph{\sc Common-Reference-String-Model.} \index{common-reference-string-model} Another useful model, which we will consider, is the \emph{common-reference-string-model} (CRS-model). In this model, as the name suggests, the parties are provided with a classical common public string before communication, taken from some fixed distribution that only depends on the security parameter. For efficiency and composability, we will often assume this model, as it allows for techniques that require an initially shared random string. However, we consider a random string ``in the sky'' a set-up that is only of theoretical use. To meet more practical demands, we suggest in Chapter~\ref{chap:coin.flip} a quantum-secure implementation of the CRS-model ``from scratch''.
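As a minimal illustration of this set-up assumption, consider the following Python sketch, in which a trusted sampler plays the role of the CRS generation; the uniform distribution over fixed-length strings is assumed here purely for illustration, and concrete schemes may require a different distribution.
\begin{verbatim}
# Toy picture of the CRS-model: a trusted sampler draws a string from a
# fixed distribution depending only on the security parameter, and both
# parties then start out from the very same string.
import secrets

def sample_crs(security_parameter: int) -> bytes:
    # assumed distribution: uniform over strings of the given bit-length
    return secrets.token_bytes(security_parameter // 8)

crs = sample_crs(256)
alice_view = bob_view = crs
\end{verbatim}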
\section{Worlds} \label{sec:worlds} \paragraph{\sc Classical vs.\ Quantum World.} We are interested in cryptography in the quantum world, covering both quantum and classical cryptographic protocol theory, which is evident in the separation of the thesis into the two main parts, Part~\ref{part:quantum.cryptography} on quantum cryptography and Part~\ref{part:cryptography.in.quantum.world} on classical cryptography in the quantum world. Thus, throughout this work, we consider quantum potential---achieving very high security in the former case but also facing new demands in the latter. In contrast, the (purely) classical world of cryptography traditionally does not assume adversarial quantum effects. However, we emphasize our very strong requirement that also all classical protocols and proofs be quantum-computationally secure, which implies both the exclusive use of post-quantum crypto-schemes and the avoidance or careful adaptation of classical proof techniques. \paragraph{\sc Ideal vs.\ Real World.} \index{two-world paradigm} \index{ideal world} \index{ideal-world adversary} \index{ideal functionality} \index{real world} \index{real-world adversary} For the definition of security, we work in two different worlds, which are captured in the \emph{two-world paradigm} of simulation-based proofs. The basic idea of the paradigm is to first specify the ideal functionality $\mathcal{F}$ that models the intended behavior of the protocol, or in other words, the properties we would have in an \emph{ideal world}. The ideal functionality can be thought of as a trusted third party or simply a black-box that gets private inputs from the players, accomplishes a specific task without leaking any information, and then outputs the result to the respective player. Honest and dishonest players in the ideal world are modeled by probabilistic poly-time machines, denoted by $\hat{\sf P}$ and $\hat{\sf P}'$, respectively. The \emph{real world} captures the actual protocol $\Pi$, consisting of message exchange between the parties and local computations. Recall that real-world players are indicated by honest ${\sf P}$ and dishonest ${\sf P}'$. Now, the input-output behavior of $\mathcal{F}$ defines the required input-output behavior of $\Pi$. Intuitively, if the executions are indistinguishable, security of the protocol in real life follows. In other words, a dishonest real-world player ${\sf P}'$ that attacks protocol $\Pi$ cannot achieve (significantly) more than an ideal-world adversary $\hat{\sf P}'$, attacking the corresponding ideal functionality $\mathcal{F}$. We will make this aspect more formal in Section~\ref{sec:security.definition}. \section{Primitives} \label{sec:primitives} In the following, we will describe those two-party cryptographic \Index{primitives}, along with some known facts about them, that are relevant in the context of this work. Primitives are fundamental problems that are later used as basic building blocks in larger outer protocols. Discussed on their own, primitives might seem somewhat limited but still constitute intriguing thought experiments. For clarification, an identification scheme, as discussed in Section~\ref{sec:primitives.id}, may commonly not count as a primitive per se, although it may well constitute a building block in a larger outer protocol. Our prime purpose for introducing it in the context of primitives, however, is the close relation of its construction to oblivious transfer.
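Before turning to the individual primitives, the following Python sketch gives a toy illustration of the black-box view of an ideal functionality described in Section~\ref{sec:worlds}; the equality test shown is a hypothetical example chosen purely for illustration, not one of the functionalities studied in this thesis.
\begin{verbatim}
# A toy ideal functionality F in the sense of the two-world paradigm:
# a black box receiving the players' private inputs and returning only
# the intended result, leaking nothing else.
def ideal_equality_test(input_alice: bytes, input_bob: bytes) -> bool:
    # both parties learn the single output bit "equal or not", and
    # nothing about the other party's input beyond that bit
    return input_alice == input_bob
\end{verbatim}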
\subsection{Commitments} \label{sec:primitives.commitment} \index{commitment} \index{commitment!binding} \index{commitment!hiding} Commitment schemes constitute a very important building block within cryptographic protocols. In fact, all protocols proposed here, implementing a wide range of cryptographic tasks, make use of various types of commitment schemes, which indicates the significance of the construction. Commitments can be realized with classical schemes or through quantum communication. Here, we will only discuss and construct commitments from classical crypto-schemes, but with a strong requirement of quantum-computational security. Intuitively, a commitment scheme allows a player to commit to a value while keeping it hidden ({\em hiding property}), yet preserving the possibility to reveal the value fixed at commitment time later during the so-called opening phase ({\em binding property}). More formally, a basic commitment scheme $\commitx{m}{r}$ takes a message $m$ and some random variable $r$ as input. Depending on the respective scheme, the message $m$ can be a single bit (\emph{bit commitment}) or a bit sequence (\emph{string commitment}). The length of the randomness $r$ is polynomial in the security parameter. It is also possible to construct a so-called \emph{keyed commitment scheme} of the form $\commitk{m}{r}{K}$, which takes key $K$ as additional input. The most common way of opening commitment $\commitx{m}{r}$ to reveal the committed message $m$ when time is ripe is to send the values $m$ and $r$ in plain, so that the receiver of the commitment can check its validity. In Chapter~\ref{chap:framework}, we will change this way of opening a commitment, due to the special requirements of the particular construction there. The hiding property is formalized by the non-existence of a distinguisher able to distinguish with non-negligible advantage between two commitments, i.e., we have indistinguishability between two commitments with $\commitx{m_1}{r_1} \approx \commitx{m_2}{r_2}$. The binding property is fulfilled if it is infeasible for a forger to open a commitment to more than one valid value, i.e., we have $\commitx{m_1}{r_1} \neq \commitx{m_2}{r_2}$ for $m_1 \neq m_2$. Each property, hiding and binding, can be satisfied unconditionally or subject to a complexity assumption. The ideal case of unconditionally secure commitments, i.e.\ unconditionally hiding and unconditionally binding at the same time, is impossible. Consequently, we have to decide on one of the two flavors of commitment schemes, namely unconditionally hiding and computationally binding or unconditionally binding and computationally hiding.\footnote{Note that certain applications---beyond the scope of this work---have computational security simultaneously for both properties, hiding and binding.} For completeness, it is worth noting that the same applies in quantum cryptography~\cite{Mayers97,LC97}, where perfect commitments can only be achieved when assuming some restrictions on the adversary, for instance, the BQSM~\cite{DFSS05,DFRSS07}. In the context of oblivious transfer (OT; see Section~\ref{sec:primitives.ot}), we know that a classical commitment does not imply classical OT without any additional requirement (such as key agreement). In contrast, a classical commitment implies quantum OT, which is all the more interesting as OT is complete for secure two-party computation. This implication in the quantum case was realized in~\cite{CK88} and proven partially in~\cite{Yao95,Mayers96,CDMS04}; we will give the first full proof in Section~\ref{sec:hybrid.security.ot}.
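To make the commit/open interface above concrete, here is a minimal Python sketch of a hash-based commitment; it idealizes the hash function (instantiated by SHA-256 purely for illustration) and is not one of the commitment constructions used later in this thesis.
\begin{verbatim}
# Sketch of commit(m, r): hiding rests on the secret randomness r,
# binding on the collision resistance of the (idealized) hash function.
import hashlib, secrets

def commit(m: bytes):
    r = secrets.token_bytes(32)            # randomness of poly-length
    c = hashlib.sha256(r + m).digest()     # the commitment to m
    return c, r

def verify_opening(c: bytes, m: bytes, r: bytes) -> bool:
    # opening phase: m and r are sent in plain; the receiver re-computes
    return hashlib.sha256(r + m).digest() == c

c, r = commit(b"my bit string")
assert verify_opening(c, b"my bit string", r)
\end{verbatim}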
Commitments are equivalent to \Index{one-way function}s, i.e., functions $f: \{0,1\}^* \rightarrow \{0,1\}^*$ for which it is easy to compute $f(x)$ given $x$, but for which, given only $y = f(x)$ for random $x$, it is computationally infeasible to compute any element of $f^{-1}(y)$ in poly-time. Thus, from an appropriate one-way function, secure against quantum adversaries, we can construct quantum-secure commitment schemes (e.g.~\cite{Naor91}). Bit commitments, in turn, imply a quantum-secure coin-flip, which we will show in Chapter~\ref{chap:coin.flip}. Naturally, the hiding, respectively binding, property holds with unconditional security in the classical and the quantum setting, if the distinguisher, respectively the forger, is unrestricted with respect to his (quantum) computational power. Recall that in case of a poly-time bounded classical distinguisher, respectively forger, the commitment is computationally hiding, respectively binding. The computationally hiding property translates to the quantum world by simply allowing the distinguisher to be quantum. However, the case of a quantum forger cannot be handled in such a straightforward manner, since the commonly used classical proof technique relies on rewinding the possibly dishonest committer, which is in general prohibited by the laws of quantum mechanics. Another restriction on rewinding occurs when committing to a string instead of a single bit. Solutions for proving string commitments secure are known for the classical case, but they cannot be adapted to the quantum world. Thus, solutions for quantum-secure constant round coin-flipping are yet to come (see Chapter~\ref{chap:framework} and also Section~\ref{sec:primitives.coin.flip}). \subsection{Oblivious Transfer} \label{sec:primitives.ot} \index{oblivious transfer} As already indicated, another highly relevant primitive in cryptography is oblivious transfer, commonly abbreviated by OT. Interestingly, the basic idea for OT was first proposed by Wiesner in the context of quantum cryptography, where he suggests conjugate coding as ``a means for transmitting two messages either but not both of which may be received''~\cite[p.~79]{Wiesner83}. OT as a cryptographic concept was then introduced by Rabin ($\tt Rabin\text{--}OT$ in~\cite{Rabin81}) and Even, Goldreich, and Lempel ($\ot{1}{2}$ in~\cite{EGL85}). OT is a \emph{complete} cryptographic primitive, i.e., it is sufficient for secure two-party computation~\cite{Kilian88}, meaning that secure 1-2 OT allows for implementing any cryptographic two-party functionality. In this work, we are mainly interested in $\ot{1}{2}^\ell$, i.e.\ one[message]-out-of-two[messages] oblivious transfer, with message length $\ell$. In an $\ot{1}{2}^\ell$ protocol, the sender inputs two $\ell$-bit strings $s_0$ and $s_1$. The receiver can choose which string to receive, i.e.~$s_c$ according to his choice bit $c$, but does not learn anything about the other message $s_{1-c}$. At the same time, the sender does not learn $c$, i.e., he does not learn which string the other party has chosen. As in the classical case, quantum OT cannot be implemented without any additional restrictions, such as bounded quantum memory in the BQSM~\cite{DFSS05, DFRSS07}. However, in contrast to classical OT, quantum OT reduces to classical commitment, as already discussed before (more in Section~\ref{sec:hybrid.security.ot}).
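The input-output behavior of $\ot{1}{2}^\ell$ can be summarized by the following Python sketch of the corresponding ideal functionality; real protocols, of course, have to realize this black box through interaction.
\begin{verbatim}
# Ideal 1-out-of-2 OT: the sender inputs s0 and s1, the receiver his
# choice bit c; the box hands s_c to the receiver. It leaks neither
# s_(1-c) to the receiver nor c to the sender.
def ideal_ot(s0: bytes, s1: bytes, c: int) -> bytes:
    assert c in (0, 1) and len(s0) == len(s1)   # two l-bit strings
    return s1 if c == 1 else s0
\end{verbatim}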
$\tt Rand\text{--}OT$ is a randomized variation of general $\ot{1}{2}$ and essentially coincides with it, except that the sender does not input the two messages himself; rather, they are generated uniformly at random during the protocol (and then output to the sender). For completeness, we note that $\tt{Rabin\text{--}OT}$ is another slightly varied but equivalent version of $\ot{1}{2}$, in which the sender transmits a message $s$ that the receiver obtains with probability $1/2$. However, the sender remains oblivious about whether or not the receiver actually got $s$. Thus, $\tt{Rabin\text{--}OT}$ can be seen as a \emph{secure erasure channel}. We conclude this introduction by mentioning two natural generalizations of $\ot{1}{2}$. First, $\ot{1}{n}$ allows the receiver to obtain exactly one element out of a set of $n$ elements. This application is similar to private information retrieval in database settings but constitutes a stronger notion than the latter, as it additionally requires that the user is oblivious to all other items (i.e.\ database privacy). An even further generalization is $\ot{m}{n}$, in which the receiver can choose a subset of $m$ elements out of the entire set of size $n$. Interestingly, $\ot{1}{n}$ underlies the construction of a quantum identification scheme in~\cite{DFSS05}, which exemplifies the significance of the primitive. More details on this transformation are given in Section~\ref{sec:primitives.id}. \subsection{Password-Based Identification} \label{sec:primitives.id} \index{identification} A password-based identification scheme (ID, in short) allows a user to identify himself to a server by proving his knowledge of a previously agreed secret password. In addition, we will put forward the following security requirement: any party that is not in possession of the valid password can (essentially) not succeed by any means other than guessing. This means that a user without the password---or in other words, a user who pretends to be someone else---cannot deceive the server with a probability that exceeds the probability of guessing the respective password. Similarly, the server can only guess a user's password and then learn whether the guess is correct or not---but no information beyond that. This in particular implies that the same password may be safely reused in further runs of the protocol. Furthermore, our aim is to develop a scheme that tolerates a possibly non-uniform password, or in short, a realistic user-memorizable password (such as a PIN code), without jeopardizing security. Owing to their significance in any authenticated set-up, a wide range of classical and quantum ID-schemes can be found in the literature (see Section~\ref{sec:hybrid.security.id}). Here, we will however focus on the quantum identification scheme proposed in~\cite{DFSS05} and proven secure against any dishonest server with bounded quantum storage. Interestingly, in the context of primitives, it is constructed out of an extension of a \emph{randomized} $\ot{1}{2}^\ell$ to a \emph{randomized} $\ot{1}{n}^\ell$. We will briefly sketch the intuitive idea here: recall that such a $\ot{1}{n}^\ell$ supplies the user with $n$ random $\ell$-bit strings but yields only one of the strings on the server's side. Such a scheme can then be used for the purpose of identification, when the server ``chooses'' the one specific string indexed by the password, and the user proves which of the $n$ strings he obtained is the one whose index matches the password.
Note that this last step of comparison must be secured by another cryptographic technique, such as a hash-function, and that the strings must have large Hamming distance; neither is covered by the OT application itself. However, by the nature of secure OT, a dishonest user does not gain any information on the server's choice and thus does not know which string is the one getting accepted. A dishonest server can likewise not do better than guessing a choice, and so the string he later receives from the user is most probably random to him and hence contains no information on the password. We want to stress again that, for simplicity, we skip many subtle but important details of the final ID-scheme as well as measures for better efficiency. More details are given in Section~\ref{sec:hybrid.security.id}, where we propose an extension of the scheme towards higher and more diverse security. \subsection{Coin-Flipping} \label{sec:primitives.coin.flip} \index{coin-flipping} True randomness is a crucial ingredient in cryptographic applications. Therefore, coin-flipping (or coin-tossing) is yet another essential primitive in this work. Secure coin-flipping allows two parties to agree on a uniformly random bit in a fair way, which means that neither party can influence the value of the coin to his advantage. Intuition suggests that this should be easily obtainable for an actual coin-toss if the parties met, flipped a coin together, and simply looked at the outcome. Now, we want to achieve a similar fairness even when the parties are communicating over a distance. This problem was first formalized in cryptographic terms by Blum as \emph{coin-flipping by telephone}~\cite{Blum81}. An ideal coin-flip can be modeled as follows: each player inputs a bit of his choice, independently of the other, and the box then outputs the exclusive disjunction of the two bits as the coin. When implementing the primitive, however, we must consider that one party must make the first move during communication, and therefore the other one may choose his bit accordingly. The most straightforward way to achieve fairness also over a distance is by bit commitments as follows. The first player chooses a random bit $x_1$ and commits to it, the other one then sends his bit $x_2$ in plain, then the commitment is opened, and the resulting coin is $x_1 \oplus x_2$. Thus, bit commitment implies secure coin-flipping, since the first player is bound to his bit but can still keep it hidden until the second player makes his move. Secure implementations for coin-flipping have been proposed also by means of quantum communication, for instance, solutions for a strong\index{coin-flipping!strong} coin-flip with a potential, optimal coin bias of approx.\ $0.2$ and for the weaker\index{coin-flipping!weak} notion with arbitrarily small bias. Note that in the quantum literature, ``strong'' or ``weak'' indicates whether the dishonest party cannot bias the coin more than specified, or the dishonest party can influence the coin entirely towards one outcome but only by the specified bias towards the other value, respectively (see e.g.~\cite{Wehner08} for an overview). We want to stress that throughout this work, we use the (intuitive) literal interpretation of a ``weak'' and ``strong'' coin, indicating its degree of security. We are interested in the standard coin-flipping protocol with classical message exchange, but where the adversary is assumed to be capable of quantum computing.
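The commitment-based protocol just described fits in a few lines of Python; the sketch below instantiates the commitment with a hash function as in the illustration of Section~\ref{sec:primitives.commitment} and makes no claim of quantum security, which is exactly the issue discussed next.
\begin{verbatim}
# Blum-style coin-flip from a bit commitment (illustrative sketch only).
import hashlib, secrets

# Player 1 commits to a random bit x1 and sends the commitment c.
x1 = secrets.randbelow(2)
r = secrets.token_bytes(32)
c = hashlib.sha256(r + bytes([x1])).digest()

# Player 2, seeing only c, sends his bit x2 in plain.
x2 = secrets.randbelow(2)

# Player 1 opens (x1, r); player 2 verifies the opening against c.
assert hashlib.sha256(r + bytes([x1])).digest() == c
coin = x1 ^ x2        # the resulting coin, fair if the commitment holds
\end{verbatim}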
Even when basing the embedded commitment on a computational assumption that withstands quantum attacks, the security proof of the entire coin-flipping and its integration into other applications could previously not be naturally translated from the classical to the quantum world. We will propose a solution based on Watrous' quantum rewinding in Chapter~\ref{chap:coin.flip}. Certainly, the desirable protocol would be constant round\index{constant round complexity}, meaning that a string of coins can be flipped in a constant number of rounds, instead of the number of rounds depending on the number of coins. Towards this aim, we present a framework that transforms weaker demands on the coins into very strong properties, with the final result of a fully simulatable coin-flipping protocol, secure against poly-sized quantum adversaries, which can be implemented in the plain model from scratch (see Chapter~\ref{chap:framework}). On a side note, implementing constant round coin-flipping remains an open problem in the quantum setting. Interestingly, the first quantum application, namely quantum key distribution (QKD), enables two parties to produce a secret random bit-string (which is then used as a key in symmetric crypto-systems). However, by assumption on its purpose, the QKD-setting does not have to withstand an internal dishonest party. The requirements for secure coin-flipping are much stronger in this sense, and it turns out that in a typical QKD-protocol, the key could theoretically always be biased by one of the parties. We conclude here by stressing the importance of truly random, fair coins for cryptographic purposes. Namely, by producing a string of coins, the communicating parties can interactively generate a common random string from scratch. The generation can then be integrated into other (classical or quantum) cryptographic protocols that work in the common-reference-string-model. This way, various interesting applications can be implemented in a simple manner without any set-up assumptions. We will discuss some examples thereof in Chapter~\ref{chap:coin.flip.applications}. \subsection{Zero-Knowledge} \label{sec:primitives.zk} \index{zero-knowledge} Informally, a zero-knowledge (ZK) proof system is ``both convincing and yet yield[s] nothing beyond the validity of the assertion''~\cite[p.~1]{Goldreich10}. Thus, only this one bit of knowledge is communicated from prover to verifier. Such building blocks are typically used in outer cryptographic protocols for enforcing that potentially dishonest players behave according to the protocol specification, namely, they are required to prove in zero-knowledge the correctness of a secret-based action without leaking the secret. As examples, we want to mention zero-knowledge proofs for Graph Isomorphism and Graph 3-Coloring, proven secure in the classical and quantum setting by~\cite{GMW91} and~\cite{Watrous09}, respectively. For a survey about zero-knowledge, we refer e.g.\ to~\cite{Goldreich01,Goldreich02,Goldreich10}. On a very intuitive level, such proof systems typically proceed in several rounds of a protocol. In each round, the prover must answer a \emph{challenge} from the verifier which he does not know beforehand. In order to be able to answer all challenges in all rounds, the prover must know whatever he claims. We differentiate between \emph{proofs}\index{zero-knowledge!proofs} and \emph{proofs of knowledge}. The respective definitions are given by two properties, which vary between the two notions and are informally stated below.
Loosely speaking, the distinction between proofs and proofs of knowledge is drawn in the content of the assertion: in a proof, the prover claims the existence of an object. In contrast, in a proof of knowledge, he claims knowledge of an object. We stress that a proof of existence cannot be modeled via an ideal functionality in the natural way, whereas a proof of knowledge can. The third property of zero-knowledge does not differ between the two systems. \paragraph{\sc Zero-Knowledge Proofs.} \index{zero-knowledge!proofs!completeness} \index{zero-knowledge!proofs!soundness} Informally, a zero-knowledge proof for set $\cal L$ on common input $x$ yields no other knowledge than the validity of membership $x \in \cal{L}$, which holds if the following three requirements are satisfied. First, if the statement is true, i.e.~$x \in \cal{L}$, an honest verifier will be convinced of this fact by an honest prover, and thus accept the proof (\emph{completeness}). This holds with overwhelming probability. Second, if the statement is false, i.e.~$x \notin \cal{L}$, a dishonest prover cannot convince an honest verifier of the contrary, except with low probability (\emph{soundness}). And last, if the statement is true, a dishonest verifier learns nothing beyond this fact (\emph{zero-knowledge}). The latter is shown by formally arguing that, given only the statement, a simulator can (by itself) produce a transcript that is indistinguishable from a real interaction between honest prover and dishonest verifier. The degree of indistinguishability then specifies the flavor of zero-knowledge. Note also that the first two properties are general aspects of interactive proof systems. However, in this context, they are defined in probabilistic terms, and we require the completeness and the soundness error to be negligible, at least after sufficiently many (sequential) repetitions. The notion of (interactive) zero-knowledge first appeared in~\cite{GMR85} by Goldwasser \emph{et al.} Then in~\cite{GMW86}, it was shown that ZK proofs exist for any \ensuremath{\mathcal{NP}}-language under the assumption that commitments exist, which in turn is implied by the existence of one-way functions~\cite{Naor91,HILL99}.\footnote{As in standard literature, $\ensuremath{\mathcal{NP}}$ (\emph{non-deterministic polynomial time})\index{complexity class!NP ($\ensuremath{\mathcal{NP}}$)} refers to the set of all decision problems where the ``yes''-instances can be recognized in polynomial time by a non-deterministic Turing machine. The class $\ensuremath{\mathcal{P}}$ (\emph{deterministic polynomial time})\index{complexity class!P ($\ensuremath{\mathcal{P}}$)} contains all decision problems which can be solved by a deterministic Turing machine in polynomial time. Note that every set in $\ensuremath{\mathcal{P}}$ has a trivial zero-knowledge proof in which the verifier checks membership by himself.} Blum \emph{et al.} showed that the interaction between prover and verifier in any ZK proof can be replaced by sharing a short common reference string available to all parties from the start of the protocol~\cite{BFM88}. Note that a reference string is a weaker requirement than interaction. The requirement for non-interactive zero-knowledge is simpler than for general zero-knowledge, since all information is communicated in one direction only, from prover to verifier. The verifier does not influence the distribution in the real world.
Thus, in the ideal world, we require only a simulator that produces output indistinguishable from the distribution of the output in the real world. We will use such a generic construction in Section~\ref{sec:coin.iqzk}, where we show a simple transformation from non-interactive zero-knowledge to interactive zero-knowledge in the quantum world. \paragraph{\sc Zero-Knowledge Proofs of Knowledge.} \index{zero-knowledge!proofs of knowledge} \index{zero-knowledge!proofs of knowledge!completeness} \index{zero-knowledge!proofs of knowledge!special soundness} Intuitively, a zero-knowledge proof of knowledge for relation $\Rel$ with common instance $x$ and prover's private witness $w$ yields no other knowledge to the verifier than the validity of $(x,w) \in \Rel$. In particular, the witness $w$ is not leaked. This is formulated by the following three requirements. First, if the prover follows the protocol and knows $w$ such that $(x,w) \in \Rel$, he will always convince the verifier. Note that this holds with probability 1, or in other words, \emph{completeness} is defined deterministically rather than probabilistically. Second, if the (possibly dishonest) prover can, with whatever strategy, convince the verifier to accept, then he knows $w$. This holds except with a probability determined by the knowledge error, which again must be negligible in the length of the challenge (\emph{special soundness}). Note here that in the context of machines, we interpret knowledge via behavior. In more detail, to define knowledge, we specify a knowledge extractor for which it holds that if the extractor can extract $w$ from the prover, for instance, by obtaining two accepting conversations via rewinding, we say that the prover knows $w$. This idea avoids requiring the prover to output the knowledge itself, and therewith, the last requirement, i.e.\ the property of \emph{zero-knowledge}, capturing that a dishonest verifier learns (essentially) nothing, remains unchanged from the description above. The concept of proofs of knowledge was first introduced also in~\cite{GMR85} and formulated in greater detail in~\cite{BG93}. We will propose a quantum-secure zero-knowledge proof of knowledge based on simulatable witness encoding in Section~\ref{sec:coin.zkpk}. \paragraph{\sc $\Sigma$-protocols.} A \Index{$\Sigma$-protocol} is a special case of the above, in that it is an honest-verifier zero-knowledge proof of knowledge. Such a protocol is of three-move form, starting with the prover's message $\tt a^\Sigma$, followed by the verifier's challenge $\tt c^\Sigma$, and concluded with the prover's response $\tt z^\Sigma$. Its name originates from this form, as the ``$\Sigma$'' visualizes first the common input $x$, and then the flow of communication (from top to bottom). The flavor of honest-verifier zero-knowledge\index{zero-knowledge!honest-verifier} (HVZK), although weaker than general zero-knowledge, still allows for useful building blocks, which would be impossible to implement with a stronger notion in certain settings. As the name suggests, it captures a scenario in which, instead of covering any feasible verifier strategy, the verifier behaves honestly (or rather honest-but-curiously), and maintains and outputs a transcript of the entire interaction. By its nature of being a proof of knowledge, special soundness holds for a $\Sigma$-protocol, meaning that from two accepting conversations with the same first message but different challenges, a witness $w$ can be extracted such that $(x,w) \in \Rel$.
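As a concrete classical example, the following Python sketch runs Schnorr's well-known $\Sigma$-protocol for proving knowledge of a discrete logarithm, including the honest-verifier simulator and the extractor underlying special soundness; the tiny group parameters are toy values for illustration and offer no security.
\begin{verbatim}
# Schnorr's Sigma-protocol: prove knowledge of w with x = g^w mod p.
import secrets

p, q, g = 23, 11, 2          # toy subgroup of prime order q (insecure!)
w = 7                        # prover's private witness
x = pow(g, w, p)             # common input

r = secrets.randbelow(q)     # prover's randomness
a = pow(g, r, p)             # move 1: commitment a = g^r
c = secrets.randbelow(q)     # move 2: verifier's random challenge
z = (r + c * w) % q          # move 3: response z = r + c*w
assert pow(g, z, p) == (a * pow(x, c, p)) % p    # verifier's check

# Honest-verifier simulator: sample (c, z) first, then solve for a.
c_s, z_s = secrets.randbelow(q), secrets.randbelow(q)
a_s = (pow(g, z_s, p) * pow(x, -c_s, p)) % p
assert pow(g, z_s, p) == (a_s * pow(x, c_s, p)) % p

# Special soundness: two accepting conversations with the same a but
# different challenges c1 != c2 reveal w = (z1 - z2)/(c1 - c2) mod q.
c1, c2 = 3, 5
z1, z2 = (r + c1 * w) % q, (r + c2 * w) % q
assert ((z1 - z2) * pow(c1 - c2, -1, q)) % q == w
\end{verbatim}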
We will use an honest-verifier simulator as a black-box in Sections~\ref{sec:extended.commit.compiler} and~\ref{sec:extended.commit.coin} to receive, on input $x$, a valid conversation $\big( {\tt a^{\Sigma}, c^\Sigma, z^{\Sigma}} \big)$. Intuitively, the purpose of using $\Sigma$-protocols then lies in the fact that, without knowing the witness, a valid conversation can be produced for only one challenge. \subsection{Secure Secret Sharing} \label{sec:primitives.sss} \index{secret sharing} Secure secret sharing refers---as the name suggests---to a method for distributing one secret, split into several shares, amongst the players. The secret can only be reconstructed by combining a sufficient number of shares (the threshold), while any individual share or any number of shares below the threshold contains no useful information on its own. Classical secret sharing schemes were introduced independently in~\cite{Shamir79} and~\cite{Blakely79}, and quantum secret sharing was first proposed in~\cite{HBB99,CGL99}. Classical secret sharing is an extremely powerful primitive and is widely used in multi-party computation. We will use secret sharing as a building block for equipping our mixed commitments with trapdoor openings (Section~\ref{sec:mixed.commit.trapdoor.opening}). This extended construction will then constitute one essential step in bootstrapping fully simulatable coin-flipping from weak coin-flipping (Chapter~\ref{chap:framework}). \chapter{Quantum Tools} \label{chap:quantum.tools} \emph{Quantum} refers to a discrete unit of a physical quantity at the smallest scale, for which quantum mechanics constitutes the underlying mathematical framework. For the main part of this thesis, we will work with abstract mathematical objects, as our focus lies on theory, as opposed to realizing, for instance, a qubit as an actual physical system such as a ``light quantum'', encoded in the polarization of a photon. In this chapter, we give an overview of the aspects of quantum mechanics essential for this work. The connection between the mathematical description and physical reality is best reflected in the postulates of quantum mechanics, which are covered in Section~\ref{sec:postulates}. This section is also intended to fix the terminology we will use later on. Next, we will describe distance measures (Section~\ref{sec:distinguishability}) and uncertainty measures (Section~\ref{sec:entropies}). Then we will discuss the concept of information reconciliation and privacy amplification (Section~\ref{sec:reconciliation.amplification}) as well as the problems of rewinding in general quantum systems and the technique of quantum rewinding (Section~\ref{sec:rewinding}). Finally, in Section~\ref{sec:security.definition}, we will introduce the definitions of security, which underlie all our following main results. \section{Postulates and Terminology} \label{sec:postulates} We now briefly introduce the field of quantum mechanics on the basis of its postulates, capturing quantum-physical events and processes in mathematical formalisms. We will closely follow the descriptions given in~\cite{NC00} and refer thereto for more details. \paragraph{\sc Description of an isolated system.} \index{quantum state} A general $d$-dimensional quantum state, where $d \in \N$, is described mathematically by a positive semi-definite \emph{density matrix} $\rho$ with unit trace, defined on the complex Hilbert space of dimension $d$, i.e., a complete inner product space, denoted by $\mathcal{H}_d$.
The standard notation for a \emph{pure quantum state}\index{quantum state!pure} is Dirac's \emph{bra-ket notation}, representing the state by a vector $\ket{\Psi} \in \mathcal{H}_d$, given, for complex coefficients $\alpha_i \in \C$, as \begin{eqnarray} \ket{\Psi} = \sum_{i = 0}^{d-1} \alpha_i \ket{i} \, . \label{eq.state} \end{eqnarray} The orthonormal \emph{basis}\index{basis} is denoted by the set $\{\ket{0},\ldots,\ket{d-1}\}$, i.e.\ the linearly independent spanning set of mutually orthogonal unit vectors. The form of a pure state as given in Eq.~\eqref{eq.state}, i.e.\ as a linear combination, nicely reflects a phenomenon unique to the quantum world, namely the \emph{superposition}\index{superposition} of basis states. Informally speaking, it highlights the fact that a quantum particle is in all possible basis states at once. Thus, a complete description of such a particle must include the description of every possible basis state as well as the probability of the particle being found in that state, given by $|\alpha_i|^2$ for each respective $\ket{i}$. By the normalization condition, the total sum of probabilities, i.e.\ $\sum_i |\alpha_i|^2$, equals 1. A \emph{mixed quantum state}\index{quantum state!mixed} is a statistical ensemble of pure states $\{ \lambda_i, \ket{i} \}$, where again $\{ \ket{i} \}_i$ forms a basis, and can be represented as a density matrix by \begin{eqnarray} \rho = \sum_{i} \lambda_i \proj{i} \, , \label{eq.mixed.state} \end{eqnarray} with eigenvalues $\lambda_i$ and eigenstates $\ket{i}$. Again, it holds that the system is in state $\ket{i}$ with probability $\lambda_i$, where $\lambda_i \geq 0$ and, by the normalization condition, we have $\sum_i \lambda_i = 1$. More specifically, a \emph{qubit}\index{qubit} is a two-dimensional pure quantum state living in $\mathcal{H}_2$. The \emph{computational basis}\index{basis!computational} (also called $\+\,$-basis, standard basis, canonical basis, or rectilinear basis) is defined by the pair $\{ \ket{0}, \ket{1} \}$, where \begin{equation} \ket{0} = \left(\begin{array}{c} 1\\ 0 \end{array}\right) \text{ and } \ket{1} = \left(\begin{array}{c} 0\\ 1 \end{array}\right) \, . \end{equation} The pair $\{ \ket{+}, \ket{-} \}$ denotes the \emph{diagonal basis}\index{basis!diagonal} (also named the $\x$-basis or Hadamard basis), where \begin{align} & \ket{+} = (\ket{0}+\ket{1})/\sqrt{2} \ \text{ and}\\ & \ket{-} =(\ket{0}-\ket{1})/\sqrt{2} \, . \end{align} Another common denotation is $\{ \ket{0}_+, \ket{1}_+ \}$ for the computational basis and $\{ \ket{0}_\x, \ket{1}_\x \}$ for the diagonal basis. We use $\{ +,\x \}$ as shorthand to refer to the set of these two most commonly used \emph{conjugate bases}. \paragraph{\sc Evolution in a closed system.} The dynamics that apply in a closed system as described above are captured in the description of a \emph{unitary transform}\index{transformation!unitary} $\op{U}$. $\op{U}$ is unitary if it holds that $\op{U^\dag U} = \id$. Unitary operations preserve inner products between vectors, which yields their more intuitive expression in outer product representation as follows. Define $\ket{out_i} = \op{U} \ket{in_i}$ to be the transformation from ``input'' basis $\{ \ket{in_i} \}_i$ into ``output'' basis $\{ \ket{out_i} \}_i$. Then, \begin{eqnarray} \op{U} = \sum_{i} \ket{out_i}\bra{in_i} \, . \end{eqnarray} From the requirement of unitarity, it is evident that such a transformation must be \emph{reversible}\index{reversible}.
That means that undoing operation $\op{U}$ on $\ket{in}$ corresponds to applying its inverse $\op{U^\dag}$ to $\ket{out}$, which recreates $\ket{in}$. For completeness we note that, although part of this postulate, we will not consider the refined version of time evolution, defined by the Schr\"odinger equation. In the more specific case of single qubits, the transformation from the computational basis to the diagonal basis, and vice versa, is obtained by applying the \emph{Hadamard operation}\index{transformation!Hadamard} $\op{H}$, where \begin{equation} \op{H} = \frac{1}{\sqrt{2}} \left(\begin{array}{lr} 1 & 1\\ 1 &-1 \end{array}\right) \, , \end{equation} and note that $\op{H} = \op{H}^\dag$. The two-dimensional \emph{Identity operator}\index{transformation!Identity} $\id$ is represented by the matrix \begin{equation} \id = \left(\begin{array}{lr} 1 & 0\\ 0 & 1 \end{array}\right)\, . \end{equation} Other important operations are described by the Pauli matrices \begin{equation} \op{\sigma_X} = \left(\begin{array}{lr} 0 & 1\\ 1 & 0 \end{array}\right) \ \ \text{and} \ \ \op{\sigma_Z} = \left(\begin{array}{lr} 1 & 0\\ 0 &-1 \end{array}\right) \, . \end{equation} Operator $\op{\sigma_X}$ describes a \emph{bit-flip}\index{transformation!bit-flip}. Matrix $\op{\sigma_Z}$ defines a \emph{phase-flip} operation\index{transformation!phase-flip}, adding a phase factor of $-1$ to the basis state $\ket{1}$ and leaving $\ket{0}$ invariant. For completeness, we also explicitly state \begin{equation} \op{\sigma_Y} = \left(\begin{array}{lr} 0 &-i\\ i & 0 \end{array}\right) \, , \end{equation} but note that $\op{\sigma_Y} = i\op{\sigma_X\sigma_Z}$. The \emph{controlled-NOT operation}\index{transformation!controlled-NOT} $\op{CNOT}$ is a combination of $\id$ and $\op{\sigma_X}$ and is defined for two input qubits as \begin{equation} \op{CNOT} = \left(\begin{array}{lccr} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \end{array}\right) \, . \end{equation} Thus, if the control qubit is 1, $\op{CNOT}$ flips the target qubit. Otherwise, $\id$ is applied to the target qubit. Or in other words, the value of the second output qubit corresponds to the classical exclusive disjunction (XOR) of the two input qubits. \paragraph{\sc Quantum measurements.} \index{quantum measurement} To extract information from a quantum system, it must be measured. The following descriptions of measurements illustrate the \emph{irreversible}\index{irreversible} nature of quantum measurements in general, and therewith, the disturbance caused by observation. In other words, some information about a state before measurement is lost after measurement. This fact stands in sharp contrast to the reversible transformations within a closed system as described previously. Quantum measurements are described by a collection of measurement operators\index{quantum measurement!operator} $\mathcal{M} = \{ \op{M}_m \}_m$, where $m$ denotes the measurement outcome. The \emph{probability} $\prob{m}$ to obtain outcome $m$ when measuring state $\ket{\psi}$ with $\mathcal{M}$ is given by \begin{eqnarray} \prob{m} = \bra{\psi} \op{M^\dag}_m \op{M}_m \ket{\psi} \, , \label{eq.general.measurement.prob} \end{eqnarray} with completeness equation $\sum_m \op{M^\dag}_m \op{M}_m = \id$, or equivalently, $\sum_m \bra{\psi} \op{M^\dag}_m \op{M}_m \ket{\psi} = 1$.
Conditioned on having obtained $m$, the \emph{post-measurement state}\index{quantum measurement!post-measurement state} must be renormalized to \begin{eqnarray} \ket{\psi_m} = \frac{\op{M}_m \ket{\psi}}{\sqrt{\bra{\psi} \op{M}^\dag_m \op{M}_m \ket{\psi}}} \, . \label{eq.general.measurement.state} \end{eqnarray} We also want to stress that quantum measurements do not necessarily commute, which means that different measurement orders may yield different measurement outcomes. If all operators $\op{M}_m$ are orthogonal projectors, denoted by $\op{P}_m = \op{M}_m^\dag \op{M}_m$, we call the measurement \emph{projective}\index{quantum measurement!projective} and $\op{M} = \sum_m m \op{P}_m$ its\index{quantum measurement!observable} \emph{observable}. The respective probability and post-measurement state are then given by \begin{eqnarray} \prob{m} = \bra{\psi}\op{P}_m\ket{\psi} \end{eqnarray} and \begin{eqnarray} \frac{\op{P}_m\ket{\psi}}{\sqrt{\prob{m}}} \, . \end{eqnarray} Measuring in basis $\{ \ket{m} \}_m$ means to apply a projective measurement defined by the projectors $\op{P}_m = \proj{m}$. When only specifying mappings $\op{E}_m = \op{M}_m^\dag \op{M}_m$, we obtain an expression in the \emph{positive operator-valued measure formalism}\index{quantum measurement!positive operator-valued measure} (POVM), similar to Eq.~\eqref{eq.general.measurement.prob}, namely, \begin{eqnarray} \prob{m} = \tr(\op{E}_m \rho) \, , \end{eqnarray} where $\mathcal{E} = \{ \op{E}_m \}_m$ is the POVM, denoting the set of Hermitian operators such that $\sum_m \op{E}_m = \id$ and $\op{E}_m \geq 0$. This formalism is simpler than the general expressions in Eqs.~\eqref{eq.general.measurement.prob} and~\eqref{eq.general.measurement.state}, but sufficient for many purposes, as it yields simple measurement statistics. It also becomes evident here that, for a complete description of measuring an observable of a quantum system, the formulation of the state must encode the probabilities of all possible outcomes. Again more specifically, measuring a single qubit in the computational or diagonal basis\index{basis!computational} \index{basis!diagonal} means applying the measurement described by projectors $\proj{0}$ and $\proj{1}$ or projectors $\proj{+}$ and $\proj{-}$, respectively. We want to point out a very important consequence of using such conjugate bases (also called mutually unbiased bases). Measuring a qubit, prepared in one of two conjugate bases, is equivalent to distinguishing between two non-orthogonal quantum states. Non-orthogonal states, however, cannot be distinguished perfectly, which can be derived from the above formalisms. Thus, any measurement must destroy information and therewith disturb the system---except, of course, a measurement of a basis state in its own basis. In other words, a state with fixed measurement outcome in one basis implies maximal uncertainty about the measurement outcome in the conjugate basis.
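The effect of measuring in conjugate bases can be made tangible with a few lines of NumPy; the following sketch (an illustration only, not part of any protocol) measures $\ket{+}$ once in its own basis and once in the computational basis.
\begin{verbatim}
# Projective single-qubit measurement with Pr[i] = |<b_i|psi>|^2.
import numpy as np

rng = np.random.default_rng()
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
plus, minus = (ket0 + ket1)/np.sqrt(2), (ket0 - ket1)/np.sqrt(2)

def measure(psi, basis):
    b0, b1 = basis
    p0 = abs(np.vdot(b0, psi))**2       # probability of outcome 0
    out = 0 if rng.random() < p0 else 1
    return out, (b0, b1)[out]           # outcome and post-measurement state

print(measure(plus, (plus, minus))[0])  # always 0: a basis state in its basis
print(measure(plus, (ket0, ket1))[0])   # 0 or 1, each with probability 1/2
\end{verbatim}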
\paragraph{\sc Composite systems.} The joint state of a \emph{multipartite system} of $n$ qubits in $\mathcal{H}_2^{\otimes n}$ is given by the tensor product $\ket{\Psi}_1 \otimes \cdots \otimes \ket{\Psi}_n$. For simplicity, we consider a bipartite joint state $\rho_{AB} \in \mathcal{H}^A \otimes \mathcal{H}^B$ shared between Alice and Bob, i.e., \begin{eqnarray} \rho_{AB} = \ket{\Psi}_A \ket{\Psi}_B = \sum_i \alpha_i \ket{i}_A \sum_j \beta_j \ket{j}_B \, , \label{eq.product.state} \end{eqnarray} with orthonormal bases $\{ \ket{i}_A \}_i$ for $\mathcal{H}^A$ and $\{ \ket{j}_B \}_j$ for $\mathcal{H}^B$. The form of the state in Eq.~\eqref{eq.product.state} indicates a \emph{product state}\index{product state}, which is \emph{separable}, since it can be decomposed into two definite pure states. For a string $x = (x_1,\ldots,x_n) \in \{0,1\}^n$, encoded in bases $\theta = (\theta_1,\ldots,\theta_n) \in \{\+,\x\}^n$, we write $\ket{x}_\theta = \ket{x_1}_{\theta_1} \otimes \cdots \otimes \ket{x_n}_{\theta_n}$. For $S \subseteq \{ 1, \ldots, n \}$ of size $s$, we define $x|_S \in \{ 0,1 \}^s$ and $\theta|_S \in \{ +,\x \}^s$ to be the restrictions $(x_i)_{i \in S}$ and $(\theta_i)_{i \in S}$, respectively. If all qubits are encoded in the same basis $\theta \in \{ +,\x \}$, then $\ket{x}_\theta = \ket{x_1 \ldots x_n}_\theta$. In contrast to the product states of Eq.~\eqref{eq.product.state}, we can also have pure composite systems in \emph{entangled states}\index{entanglement} of the form \begin{eqnarray} \rho_{AB} &=& \sum_{i,j} \gamma_{ij} \, \ket{i}_A \ket{j}_B \label{eq.entangled} \end{eqnarray} with coefficients $\gamma_{ij}$ that cannot be decomposed into a product $\alpha_i \beta_j$. Entanglement means that the components can only be described with reference to each other. Special cases thereof are the maximally entangled \emph{EPR-pairs} (or Bell states): \begin{equation} \begin{array}{ll} &\ket{\Phi_{00}} = (\ket{00} + \ket{11}) / \sqrt{2} \, , \\ &\ket{\Phi_{11}} = (\ket{00} - \ket{11}) / \sqrt{2} \, , \\ &\ket{\Phi_{01}} = (\ket{01} + \ket{10}) / \sqrt{2} \, , \text{ and} \\ &\ket{\Phi_{10}} = (\ket{01} - \ket{10}) / \sqrt{2} \, . \end{array} \end{equation} Important for cryptographic purposes are the following observations. First, as Eq.~\eqref{eq.entangled} indicates, upon observing one of the two particles entangled in one single state, the system will collapse, and thus, the other particle will at least partially be determined---even though the particles may be spatially separated. On a side note, the outcome of the first measurement is random, and therewith the state to which the composite system collapses is so as well. Hence, information (i.e.\ a non-random message) cannot be transmitted faster than the speed of light by shared entanglement. Second, entanglement is basis-independent, e.g.\ $\ket{\Phi_{00}} = (\ket{00} + \ket{11}) / \sqrt{2} = (\ket{++} + \ket{--}) / \sqrt{2} \ $. And last, if an entangled state $\rho_{AB}$ is pure, then it cannot be entangled with any other state, for instance, one in Eve's hands, so it holds that $\rho_{ABE} = \rho_{AB} \otimes \rho_E$. Thus, under the assumption of it being pure, entanglement is monogamous. Subsystems of a composite system can be described by the \emph{reduced density operator} computed via the \emph{partial trace}. Let $\rho_{AB} = \big(\ket{a_1}\bra{a_2} \, \otimes \, \ket{b_1}\bra{b_2} \big)$ and assume that only subsystem $A$ is accessible. Then, we have \begin{eqnarray} \label{eq.partial.trace} \tr_B(\rho_{AB}) = \langle b_2 | b_1 \rangle \, |a_1\rangle\langle{a_2}| \, . \end{eqnarray} Trivially, when tracing system $B$ out of a product state, we have $\tr_B(\rho_A \otimes \rho_B) = \rho_A$.
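Both the product-state case just computed and the entangled case discussed next can be checked numerically; the following NumPy sketch (illustration only) traces subsystem $B$ out of a product state and out of an EPR-pair.
\begin{verbatim}
import numpy as np

ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
density = lambda psi: np.outer(psi, psi.conj())

def trace_out_B(rho_AB):
    # view the 4x4 matrix as rho[a, b, a', b'] and sum over b = b'
    return np.einsum('abcb->ac', rho_AB.reshape(2, 2, 2, 2))

plus = (ket0 + ket1) / np.sqrt(2)
rho_prod = np.kron(density(ket0), density(plus))   # |0><0| tensor |+><+|
print(trace_out_B(rho_prod))                       # -> |0><0|

epr = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(trace_out_B(density(epr)))                   # -> id/2, fully mixed
\end{verbatim}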
However, the reduced density operator of either qubit of an entangled EPR-pair is the complete mixture $\id/2$ (see the next Section~\ref{sec:distinguishability}). Thus, interestingly, the joint state of two entangled qubits is pure and can be completely determined, yet its subsystems alone are completely mixed. \section{Distance, Distinguishability, and Dependence} \label{sec:distinguishability} We will need various measures to determine the distance between classical and quantum states. Distance measures possess an important operational meaning in the context of distinguishability between two systems. \paragraph{\sc Distance.} For classical information, the distance between two binary strings of equal length can be measured by means of the \emph{Hamming distance}\index{Hamming distance} $d_H$, which is the number of positions at which the strings differ, or more formally, for strings $x,y \in \set{0,1}^n$, we have \begin{eqnarray}\label{eq.hamming} d_H(x,y) \assign \left|\Set{i}{x_i \neq y_i}\right| \, . \end{eqnarray} We will also need the\index{Hamming distance!relative} \emph{relative Hamming distance} \begin{eqnarray} \label{eq.relative.hamming} r_H(x,y) \assign \frac{d_H(x,y)}{n} \, \ . \end{eqnarray} For completeness, we note that the \emph{Hamming weight}\index{Hamming weight} $w_H$ is the Hamming distance of $x$ to the all-zero string (of the same length), i.e.~$w_H(x) \assign \left|\Set{i}{x_i = 1}\right|$. In the classical world, the \emph{statistical or variational distance}\index{statistical distance} between two classical probability distributions $P$ and $Q$ over the same finite set $\mathcal{X}$ with events $E \subseteq \mathcal{X}$ is determined by \begin{eqnarray} \delta\big( P,Q \big) \assign \frac{1}{2} \sum_{x \in \mathcal{X}} |P(x) - Q(x)| = \max_{E} | P(E) - Q(E)| \, . \label{eq.classical.distance} \end{eqnarray} A measure of proximity is given by the \emph{fidelity}\index{fidelity} \begin{eqnarray} F\big( P,Q \big) \assign \sum_{x \in \mathcal{X}} \sqrt{P(x) Q(x)} \, . \label{eq.classical.fidelity} \end{eqnarray} The classical notions of distance and fidelity can be generalized to the distance and proximity of two quantum states $\rho$ and $\sigma$. The quantum analogue to the classical distance in Eq.~\eqref{eq.classical.distance} is the \emph{trace distance}\index{trace distance}, given as \begin{eqnarray} \delta \big( \rho,\sigma \big) \assign \frac{1}{2} \tr \big( |\rho - \sigma| \big)\label{eq.trace.distance} \, , \end{eqnarray} where $|A| = \sqrt{A^\dag A}$, so that $\tr|A|$ is the trace norm of an operator $A$. The notion of fidelity translates to \emph{quantum fidelity}\index{fidelity!quantum} by \begin{eqnarray} F \big( \rho,\sigma \big) \assign \tr\sqrt{\sqrt{\rho} \ \sigma \sqrt{\rho}} \, . \label{eq.quantum.fidelity} \end{eqnarray} The relation between classical variational distance and quantum trace distance can be made more explicit by \begin{eqnarray} \delta \big( \rho,\sigma \big) = \max_\mathcal{E} \ \delta \big( \mathcal{E}(\rho),\mathcal{E}(\sigma)\big) \, , \label{eq.trace.distance.povm} \end{eqnarray} where the maximum is taken over all POVMs $\mathcal{E}$, and $\mathcal{E}(\rho),\mathcal{E}(\sigma)$ indicate the probability distributions obtained when measuring $\rho$ or $\sigma$ using $\mathcal{E}$.
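As a quick numerical illustration of Eqs.~\eqref{eq.trace.distance} and~\eqref{eq.quantum.fidelity}, the following NumPy/SciPy sketch (illustration only) evaluates both measures for the pure states $\ket{0}$ and $\ket{+}$, yielding the value $1/\sqrt{2}$ in both cases.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def trace_distance(rho, sigma):
    # 0.5 * tr|rho - sigma| via the eigenvalues of the Hermitian difference
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

def fidelity(rho, sigma):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ sigma @ s)))

rho   = np.array([[1, 0], [0, 0]], dtype=complex)      # |0><0|
sigma = np.array([[.5, .5], [.5, .5]], dtype=complex)  # |+><+|
print(trace_distance(rho, sigma))    # 1/sqrt(2), approx. 0.7071
print(fidelity(rho, sigma))          # |<0|+>| = 1/sqrt(2), approx. 0.7071
\end{verbatim}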
Moreover, it is worth pinpointing that, for mixtures of pure quantum states $\rho = \sum_i \lambda_i \proj{i}$ and $\sigma = \sum_i \gamma_i \proj{i}$ with the same orthonormal basis $\set{\ket{i}}_i$ but potentially different eigenvalues $\lambda_i$ and $\gamma_i$, the quantum measure naturally reduces to the classical one between the eigenvalue distributions $\lambda = \{ \lambda_i \}_i$ and $\gamma = \{ \gamma_i \}_i$ by \begin{eqnarray} \delta(\rho,\sigma) = \frac{1}{2}\tr|\rho - \sigma| = \frac{1}{2} \Big| \sum_i (\lambda_i - \gamma_i) \proj{i} \Big| = \frac{1}{2} \sum_i |\lambda_i - \gamma_i | = \delta(\lambda,\gamma) \, . \end{eqnarray} A similar reduction can be obtained for the fidelity. Trace distance and quantum fidelity are, in general, equivalent concepts---but with partly different characteristics and properties, so we will use one or the other, depending on the respective context (see~\cite{FG99} or~\cite{NC00} for a more detailed discussion). However, they are closely related in that we have \begin{eqnarray} 1 - F \big( \rho,\sigma \big) \leq \delta \big( \rho,\sigma \big) \leq \sqrt{1-F \big( \rho,\sigma \big)^2} \, . \end{eqnarray} For pure states $\rho = \proj{\psi}$ and $\sigma = \proj{\phi}$, expressions \eqref{eq.trace.distance} and \eqref{eq.quantum.fidelity} simplify to \begin{eqnarray} \label{eq.trace.distance.pure} \delta \big( \rho,\sigma \big) = \sqrt{1- | \langle \psi | \phi \rangle |^2} \; \text{ and } \ F \big( \rho, \sigma \big) = | \langle \psi | \phi \rangle | \, , \end{eqnarray} where the square of the latter can be seen as a transition probability. Furthermore, the fidelity measure for a pure state $\rho = \proj{\psi}$ and an arbitrary quantum state $\sigma$ is given by \begin{eqnarray} F \big( \rho,\sigma \big) = \sqrt{\bra{\psi}\sigma\ket{\psi}} \label{eq.quantum.fidelity.simple} \, , \end{eqnarray} which shows that the square root of the overlap between the states determines the fidelity. \paragraph{\sc Distinguishability.} \index{indistinguishability} The importance of both quantum measures is due to their operational meaning of distinguishability. The fidelity can be seen as an ``upside down'' trace distance, in that the limits 0 and 1 in $0 \leq F \big( \rho,\sigma \big) \leq 1$ mean perfectly distinguishable and perfectly indistinguishable, respectively. In contrast, the trace distance $0 \leq \delta \big( \rho,\sigma \big) \leq 1$ increases with decreasing indistinguishability, such that we get $\delta \big( \rho,\sigma \big) = 0$ for $\rho = \sigma$ and $\delta \big( \rho,\sigma \big) = 1$ for $\rho$ orthogonal to $\sigma$. Coming back to Eq.~\eqref{eq.trace.distance.povm} in this context, it is worth noting that the POVM $\mathcal{E}$ that achieves the maximum is the optimal POVM for distinguishing $\rho$ and $\sigma$. Furthermore, we want to single out two important properties by means of the trace distance. First, we have unitary invariance with $\delta \big( \rho, \sigma \big) = \delta \big( \op{U} \rho \op{U^\dag}, \op{U} \sigma \op{U^\dag} \big)$, meaning that the distance between the states does not change when a unitary operation $\op{U}$ is applied to both of them. And second, any trace-preserving quantum operation $\op{T}$ is contractive (monotonicity under quantum operations) with $\delta \big( \op{T}(\rho), \op{T}(\sigma) \big) \leq \delta \big( \rho, \sigma \big)$. Informally, no physical process can achieve an increased distance, or in other words, no modification of the states can help to better distinguish two states.
An important special case relates to the partial trace: $\delta \big( \tr_B(\rho_{AB}),\tr_B(\sigma_{AB}) \big) \leq \delta \big( \rho_{AB},\sigma_{AB} \big)$, which again informally states that two systems are at least as hard to distinguish when only a part of them is accessible. Two families of probability distributions $\set{P_n}_{n \in \naturals}$ and $\set{Q_n}_{n \in \naturals}$ are called \emph{perfectly} indistinguishable\index{indistinguishability!perfect}, denoted by $P \approxp Q$, if the distributions are identical for each index, namely $P_n = Q_n$ for all $n \in \naturals$. In other words, not even an unbounded adversary can distinguish the outcomes. Relaxing this condition defines \emph{statistical}\index{indistinguishability!statistical} indistinguishability ($P \approxs Q$), which holds if the statistical distance $\delta\big( P_n,Q_n \big)$ is negligible (in the length of the input). This covers the setting in which an unbounded adversary cannot distinguish the outcomes, except with negligible probability. For $\delta\big( P_n,Q_n \big) \leq \varepsilon$, and therewith, indistinguishability except with probability $\varepsilon$, we also call the distributions \emph{$\varepsilon$-close}. Thus, perfect and statistical indistinguishability are defined in the information-theoretic sense and we call the resulting security flavor \emph{unconditional}. In the computational setting, we require that the two distributions cannot be distinguished by any computationally efficient procedure. More formally, let $\prob{{\cal A}(P(x)) = 1 \, | \, x \leftarrow P}$ denote the probability that an algorithm $\cal A$ is successful in that it outputs ``P'' if the input $x$ comes from $P$, and analogously for $Q$. To claim \emph{computational}\index{indistinguishability!computational} indistinguishability between $P$ and $Q$, denoted by $P \approxc Q$, it must hold for any probabilistic poly-time algorithm $\cal A$ that the (distinguishing) advantage $$ adv ({\cal A}) = \vert \, \prob{{\cal A}(P_n) = 1} - \prob{{\cal A}(Q_n) = 1} \, \vert $$ is negligible in the length of the input. \emph{Quantum-computational} \index{indistinguishability!quantum-computational} indistinguishability ($P \approxq Q$) is defined similarly for the case of a \emph{quantum} algorithm $\cal A$. In other words, (quantum) computational security holds with overwhelming probability against a poly-time (quantum) adversary. Consider a quantum algorithm consisting of a uniform family $\{ C_n\}_{n \in \naturals}$ of quantum circuits, which is said to run in polynomial time if the number of gates of $C_n$ is polynomial in $n$. Then, two families of quantum states $\set{\rho_n}_{n \in \naturals}$ and $\set{\sigma_n}_{n \in \naturals}$ are called \emph{perfectly} indistinguishable\index{indistinguishability!perfect} with $\rho \approxp \sigma$, if $\delta \big( \rho_n,\sigma_n \big) = 0$ in the case of unrestricted running time. We have \emph{statistical} indistinguishability\index{indistinguishability!statistical} with $\rho \approxs \sigma$, if $\delta \big( \rho_n,\sigma_n \big) \leq \varepsilon$, for $\varepsilon$ negligible in~$n$, and without any restriction on the running time. Again, for $\delta(\rho,\sigma) \leq \varepsilon$, we call the quantum states $\varepsilon$-close---or indistinguishable, except with probability $\varepsilon$.
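The operational meaning of the statistical distance can also be illustrated directly (our own sketch below): the event $E = \Set{x}{P(x) \geq Q(x)}$ attains the maximum in Eq.~\eqref{eq.classical.distance}, so even the optimal, unbounded distinguisher has advantage exactly $\delta(P,Q)$.
\begin{verbatim}
# Sketch (ours): the optimal distinguisher outputs "P" exactly on
# the event E = {x : P(x) >= Q(x)}; its advantage equals delta(P,Q).
def stat_dist(P, Q):
    return 0.5 * sum(abs(P[x] - Q[x]) for x in P)

P = {0: 0.5, 1: 0.3, 2: 0.2}
Q = {0: 0.4, 1: 0.3, 2: 0.3}
E = {x for x in P if P[x] >= Q[x]}
adv = abs(sum(P[x] for x in E) - sum(Q[x] for x in E))
assert abs(stat_dist(P, Q) - adv) < 1e-12    # both equal 0.1
\end{verbatim}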
To prove sufficient closeness between the ideal and the real system, we then require $\varepsilon$ to be negligible (in the security parameter). Last, we have \emph{quantum-computational} indistinguishability\index{indistinguishability!quantum-computational}, denoted by $\rho \approxq \sigma$, if any polynomial-time quantum algorithm has negligible advantage $\varepsilon$ of distinguishing $\rho_n$ from $\sigma_n$. \paragraph{\sc Dependence.} \index{independence} We will often use upper case letters for the random variables (in proofs) that describe the respective values (in the actual protocol). Let $P_X$ denote the probability distribution of a classical random variable $X \in \mathcal{X}$ over finite set $\mathcal{X}$. Let \begin{eqnarray} \rho_X = \sum_{x \in \mathcal{X}} P_X(x) \proj{x} \label{eq:quantum.representation.X} \end{eqnarray} denote the quantum representation of the classical random variable $X$. Let $\rho_E^x$ denote a state in register $E$, depending on value $x \in \mathcal{X}$ of random variable $X$ over $\mathcal{X}$ with distribution $P_X$. Then, from the view of an observer, who holds register $E$ but does not know $X$, the system is in state \begin{eqnarray} \rho_E = \sum_{x \in \mathcal{X}} P_X(x) \rho_E^x \, , \label{eq.dependence.classical.quantum} \end{eqnarray} where $\rho_E$ depends on $X$ in the sense that $E$ is in state $\rho_E^x$ exactly if $X = x$. The joint state of such a bipartite system with a classical and a quantum part can be expressed as \begin{eqnarray} \rho_{XE} = \sum_{x \in \mathcal{X}} P_X(x) \proj{x} \otimes \rho_E^x \, . \end{eqnarray} Such a state is formally called a \emph{cq-state}. Note that naturally, $\rho_E = \tr_X(\rho_{XE}) = \sum_x P_X(x) \rho_E^x$, and that the notation can be extended to states depending on more classical variables, i.e.\ \emph{ccq-states}, \emph{cccq-states} etc. Full independence of the classical and quantum parts within one state is given iff $\rho_{E}^x = \rho_E$ for any $x$ and therewith $\rho_{XE} = \rho_X \otimes \rho_E$. This means in particular that no information on $X$ is gained by observing only $\rho_E$. However, full independence is often too strong a requirement. For our purposes, it suffices that the real state is close to the ideal situation. Last in this context, we want to express that a random variable $X$ is independent of a quantum state $\rho_E$ when given a random variable $Y$. Independence in this case means that, when given $Y$, the state $E$ gives no additional information on $X$. Yet another way to understand \emph{conditional independence}\index{independence!conditional} is that $E$ is obtained from $X$ and $Y$ by solely processing $Y$. Formally, adopting the notion introduced in~\cite{DFSS07}, we require that $\rho_{X Y E}$ equals $\rho_{X\leftrightarrow Y \leftrightarrow E}$, where the latter is defined as \begin{eqnarray} \label{eq:conditional.independence} \rho_{X\leftrightarrow Y \leftrightarrow E} \assign \sum_{x,y}P_{XY}(x,y)\proj{x} \otimes \proj{y} \otimes \rho_{E}^y \, , \end{eqnarray} where $\rho_{E}^y = \sum_x P_{X|Y}(x|y) \rho_E^{x,y}$. In other words, $\rho_{X Y E} = \rho_{X\leftrightarrow Y \leftrightarrow E}$ precisely if $\rho_E^{x,y} = \rho_E^{y}$ for all $x$ and $y$. \section{Entropies} \label{sec:entropies} \index{entropy} Entropies are useful measures of ``information, choice and uncertainty''. We will give a brief recap here, only covering the concepts most important in the context of this work.
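Before turning to the entropy measures, the following toy sketch (ours) makes the cq-state formalism of the previous section concrete: it builds the state of Eq.~\eqref{eq.dependence.classical.quantum} explicitly and shows that a state in which $E$ reveals $X$ perfectly is far from the product form $\rho_X \otimes \rho_E$ of full independence.
\begin{verbatim}
# Toy sketch (ours): a cq-state where E perfectly reveals X.
import numpy as np

P_X   = {0: 0.5, 1: 0.5}
rho_E = {0: np.diag([1., 0.]), 1: np.diag([0., 1.])}   # rho_E^x

def proj(x, dim=2):
    v = np.zeros(dim); v[x] = 1.
    return np.outer(v, v)

rho_XE = sum(P_X[x] * np.kron(proj(x), rho_E[x]) for x in P_X)
rho_Xm = sum(P_X[x] * proj(x) for x in P_X)
rho_Em = sum(P_X[x] * rho_E[x] for x in P_X)           # tr_X(rho_XE)
print(np.allclose(rho_XE, np.kron(rho_Xm, rho_Em)))    # False: dependent
\end{verbatim}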
For a general introduction we refer to e.g.~\cite{NC00,Renner05,Schaffner07} for more details and proofs. The \emph{Shannon entropy}\index{entropy!Shannon}~\cite{Shannon48} \begin{eqnarray} H(X) \assign - \sum_{x} P_X(x) \log P_X(x) = - \sum_{x} p_x \log p_x \end{eqnarray} applies to a classical probability distribution $P_X$ over $\mathcal{X}$ with probabilities $p_x$, and as such quantifies the information gain on average after learning $X$, or, complementarily, the average uncertainty before learning $X$.\footnote{Note that the logarithmic base is 2 for a result in bits.} The binary version thereof, namely the \emph{binary entropy function}\index{entropy!binary}, is defined for the case of two possibilities as \begin{eqnarray} h(\mu) \assign -\big(\mu\log({\mu}) + (1-\mu)\log{(1-\mu)}\big) \end{eqnarray} with $0 \leq \mu \leq \frac{1}{2}$. We will use that, given the ball of all $n$-bit strings at Hamming distance at most $\mu n$ from $x$, denoted as $\mathrm{B}^{\mu n}(x)$, we have that $|\mathrm{B}^{\mu n}(x)| \leq 2^{h(\mu) n}$. For a cryptographic scenario with not necessarily independent repetitions, its generalization is given by the \emph{R\'enyi entropy}\index{entropy!Renyi@R\'enyi}~\cite{Renyi61} of order $\alpha$ as \begin{eqnarray} H_\alpha(X) = \frac{1}{1-\alpha} \log \left( \sum_{x \in \mathcal{X}} P_X(x)^\alpha \right) \end{eqnarray} for $\alpha \geq 0$. Note that the Shannon entropy is the special case in the limit $\alpha \rightarrow 1$. The \emph{joint entropy}\index{entropy!joint} of a pair of random variables $(X_0, X_1)$ measures the total uncertainty about the pair and is naturally defined as \begin{eqnarray} H(X_0 X_1) = - \sum_{{x_0},{x_1}} P_{{X_0}{X_1}}(x_0,x_1) \log P_{{X_0}{X_1}}(x_0,x_1) \, . \end{eqnarray} Assume now that $X_1$ is learned, and therewith, $H(X_1)$ bits of information about $(X_0,X_1)$. Then, the remaining uncertainty of $X_0$, conditioned on knowing $X_1$, is given by the \emph{conditional entropy}\index{entropy!conditional} \begin{eqnarray} H(X_0|X_1) \assign H(X_0 X_1) - H(X_1) \, . \end{eqnarray} R\'enyi entropies can also be defined for the quantum world, i.e., where a density matrix $\rho$ replaces the probability distribution, and we have \begin{eqnarray} H_\alpha(\rho) \assign \frac{1}{1-\alpha} \log \big( \tr(\rho^\alpha) \big) \, , \end{eqnarray} for $\alpha \in [0,\infty]$. The \emph{von Neumann} entropy\index{entropy!von Neumann} is then given by \begin{eqnarray} H(\rho) \assign - \tr(\rho \log \rho) \, , \end{eqnarray} which corresponds to the Shannon entropy of the eigenvalue distribution, i.e., $H(\rho) = - \sum_x \lambda_x \log \lambda_x$, where $\lambda_x$ are the eigenvalues of $\rho$; for the state $\rho_X$ of Eq.~\eqref{eq:quantum.representation.X} this is the Shannon entropy obtained when measuring in basis $\{ \ket{x}\bra{x} \}$. Thus, it naturally holds that $H_\alpha(\rho_X) = H_\alpha(X)$, whenever classical variable $X$ is encoded in quantum state $\rho_X$. A special entropy measure is obtained when taking the limit $\alpha \rightarrow \infty$, namely the \emph{min-entropy}\index{entropy!min-entropy}. The notion of min-entropy is used in the context of randomness extraction and privacy amplification in the presence of a dishonest receiver or an eavesdropper on the transmission (see Section~\ref{sec:reconciliation.amplification}). Intuitively, the (classical) min-entropy is determined by the highest peak in a distribution, and therewith describes the adversary's maximal guessing probability, which in turn formalizes worst-case security for cryptographic applications.
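For concreteness, the following sketch (ours; all helper names are our own) evaluates the classical entropy measures introduced so far, including the min-entropy that is defined formally below, and checks the Hamming-ball bound $|\mathrm{B}^{\mu n}(x)| \leq 2^{h(\mu) n}$.
\begin{verbatim}
# Sketch (ours) of the entropy formulas above.
from math import comb, log2

def shannon(p):                  # H(X) = -sum_x p_x log p_x
    return -sum(px * log2(px) for px in p if px > 0)

def renyi(p, alpha):             # H_alpha(X), alpha != 1
    return log2(sum(px**alpha for px in p)) / (1 - alpha)

def h_min(p):                    # min-entropy: -log max_x p_x
    return -log2(max(p))

def h(mu):                       # binary entropy function
    return 0. if mu in (0, 1) else -(mu*log2(mu) + (1-mu)*log2(1-mu))

p = [0.5, 0.25, 0.25]
print(shannon(p))                # 1.5
print(renyi(p, 2))               # approx. 1.415 (collision entropy)
print(h_min(p))                  # 1.0: the highest peak is 1/2

n, mu = 20, 0.25                 # Hamming-ball bound |B| <= 2^(h(mu) n)
assert sum(comb(n, k) for k in range(int(mu*n) + 1)) <= 2**(h(mu)*n)
\end{verbatim}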
Formally, the min-entropy is determined by the success probability of an adversary's best guess about an unknown value. \begin{definition}[Min-Entropy] \label{def:min.entropy} Let $X$ be a random variable over alphabet $\mathcal{X}$ with probability distribution $P_X$. The min-entropy of $X$ is defined as $$\hmin{X} = -\log\bigl( \max_x P_X(x) \bigr) \, .$$ \end{definition} Another important special case is the \emph{max-entropy}\index{entropy!max-entropy}, obtained for $\alpha$ approaching zero. Its definition captures a R\'enyi entropy, in which all possible events are considered equally, regardless of their probabilities. Its operational meaning lies in information reconciliation (see also Section~\ref{sec:reconciliation.amplification}). \begin{definition}[Max-Entropy] \label{def:max.entropy} The max-entropy of a density matrix $\rho$ is defined as $$\hzero{\rho} = \log\bigl( \rank{\rho}\bigr)\, .$$ \end{definition} For completeness, we note that another notion of R\'enyi entropies with a (non-negative) smoothing parameter $\epsilon$ was introduced in \cite{Renner05,RW05}\index{entropy!smooth}. Intuitively, it holds that for two random variables $X_0$ and $X_1$ with almost the same probability distribution (e.g.\ $X_0 = X_1$ with high probability), the difference between $H^\epsilon_\alpha(X_0)$ and $H^\epsilon_\alpha(X_1)$ is small. However, in this work we will only use the ``un-smoothed'' R\'enyi entropies as discussed above. Last, we conclude with the following lemma, which we will need in the context of oblivious transfer. Informally, it states that if the joint entropy of two random variables $X_0$ and $X_1$ is large, then at least one of them has half of the original entropy---in a randomized sense. \begin{lemma}[Min-Entropy-Splitting Lemma~\cite{Wullschleger07,DFRSS07}] \label{lemma:splitting} Let $X_0, X_1$ be random variables with $\hmin{X_0 X_1} \geq \alpha$. Then, there exists a binary random variable $K \in \{ 0,1 \}$ such that $\hmin{X_{1-K}K} \geq \alpha/2$. \end{lemma} \section{Information Reconciliation and Privacy Amplification} \label{sec:reconciliation.amplification} \index{information reconciliation} \index{privacy amplification} Errors and eavesdropping affect the communication in our quantum protocols such that the honest parties might end up with bit-strings of measurement outcomes that differ or have leaked in some positions. Countermeasures were proposed already in the first practical implementation of QKD~\cite{BBBSS92}. The honest parties first \emph{reconcile} their shared data by public discussion to obtain consistent strings. Note that this process has to be accomplished without revealing more information than absolutely necessary to an adversary eavesdropping on the public (classical) channel. The simplest procedure involves a test on a subset of all shared (qu)bits to compute the error rate, i.e., the relative number of all positions with different outcomes. In that case, these publicly announced bits must later be discarded, which in turn means that more qubits have to be sent at the beginning of the protocol. According to the error rate in the test set, error correction must be applied to the untested remaining set. Since the transmission of qubits is very efficient in practice and good error correction techniques are known, we will use this simple technique in our quantum protocols. After successful reconciliation, the honest parties are in possession of identical bit-strings.
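The sample-based error-rate estimation just described can be sketched in a few lines (our own toy example; all parameters are arbitrary):
\begin{verbatim}
# Sketch (ours): estimating the error rate from a random test subset;
# the announced test positions are discarded afterwards.
import numpy as np

rng = np.random.default_rng(3)
N, test_size, noise = 2000, 500, 0.05
alice = rng.integers(0, 2, N)
bob   = alice ^ (rng.random(N) < noise)    # each bit flips w.p. 5%

test = rng.choice(N, size=test_size, replace=False)
rate = np.mean(alice[test] != bob[test])
print(rate)                                # approx. 0.05
\end{verbatim}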
To turn these strings into completely secure ones, \emph{privacy amplification}~\cite{BBR88} can be applied, which intuitively distills a shorter but (essentially) private shared string. More concretely, privacy amplification employs \emph{two-universal hashing}\index{two-universal hashing} (see Definition~\ref{def:hashing}) to transform a partially secret string into a highly secure ``hashed down'' string, about which any adversary only has negligible information and which looks essentially random to him. Note that two-universal hashing also works against quantum adversaries, i.e., in the case when the attacker holds quantum information about the initial string~\cite{KMR05,RK05,Renner05}. In fact, it is essentially the only efficient way to perform privacy amplification against quantum adversaries. \begin{definition}[Two-Universal Hashing] \label{def:hashing} A class $\mathcal{F}: \{ 0,1 \}^n \rightarrow \{ 0,1 \}^\ell$ of hashing functions is called {\em two-universal}, if for any pair $x, y \in \{ 0,1 \}^n$ with $x \neq y$, and $F$ uniformly chosen from $\mathcal{F}$, it holds that $$ \prob{F(x)=F(y)} \leq \frac{1}{2^{\ell}} \, . $$ \end{definition} In the slightly stronger notion of {\em strongly two-universal} hash-functions\index{two-universal hashing!strong}, we require the random variables $F(x)$ and $F(y)$ to be independent and uniformly distributed over $\{ 0,1 \}^\ell$. Let classical $X$ be correlated with classical part $U$ and quantum part $E$, i.e., $\rho_{XUE} = \sum_{x \in \{ 0,1 \}^n} P_X(x) \proj{x} \otimes \rho^x_{UE}$. Let $F$ be a hash-function chosen uniformly from $\mathcal{F}$. After applying $F$ to $X$, we obtain the \emph{cccq-state} $\rho_{F(X)FUE}$ of form \begin{eqnarray} \rho_{F(X)FUE} = \sum_{f \in \mathcal{F}} \sum_{z \in \{ 0,1 \}^\ell} \proj{z} \otimes \proj{f} \otimes \sum_{x \in f^{-1}(z)} P_X(x) \rho^x_{UE} \, . \end{eqnarray} The basic theorem for privacy amplification in the quantum world was introduced in~\cite{RK05} and~\cite{Renner05}, and refined in~\cite{Schaffner07}. Here, we give the version from~\cite[Corollary 2.25]{Schaffner07} but in its un-smoothed form and tailored to our context. \begin{theorem}[Privacy Amplification] \label{theo:privacy-amplification} Let $\rho_{XUE}$ be a \emph{ccq-state} with classical $X$ distributed over $\{ 0,1 \}^n$, classical $U$ in the finite domain $\mathcal{U}$, and quantum state $\rho_E$. $U$ and $\rho_E$ may depend on $X$. Let $F$ be the random and independent choice of a member of a two-universal class of hash-functions $\mathcal{F}: \{ 0,1 \}^n \rightarrow \{ 0,1 \}^\ell$. Then, $$\delta\bigl(\rho_{F(X)FUE},{\textstyle\frac{1}{2^\ell}} \mathbbm{1} \otimes \rho_{FUE}\bigr) \leq \frac{1}{2} 2^{-\frac{1}{2}\big(\hmin{X|U}-\hzero{\rho_E}-\ell\big)} \, . $$ \end{theorem} Note that if the rightmost term of the theorem is negligible, then we are in a situation where $F(X)$ is essentially uniform and independent of $F$, $U$, and $E$. \section{Rewinding} \label{sec:rewinding} \index{rewinding} For classical schemes in the quantum world, we require that quantum computation does not jeopardize the underlying mathematical assumption that guarantees the security. But we encounter even more setbacks in the context of actually proving a cryptographic protocol secure in a quantum environment, which in the realm of this work are mostly due to the strong restrictions on general rewinding---a common proof technique for showing the security of different protocols in the computational setting.
\subsection{Problems with General Rewinding} \label{sec:general.rewinding} \index{rewinding!problems} Recall that in the context of simulation-based security, we prove security against a cheating player by showing that a run of a protocol between him and the honest player can be efficiently simulated without interacting with the honest player but with a simulator instead. Basically, such a simulator prepares a valid conversation and tries it on the dishonest party. In case this party does not send the expected replies, a classical simulator rewinds the machine of the corrupted player to an earlier state and repeats the simulation. Note that if the dishonest party sends an invalid reply, the simulation is aborted. To conclude the proof, we then show that the running time of the simulation as well as the distribution of the conversation are according to expectations. Such a technique, however, is impossible to justify in the quantum world. Generally speaking, the simulator would have to partially measure the quantum system to obtain the protocol transcript, without being able to copy the system beforehand due to the no-cloning theorem. But then it becomes impossible to reconstruct all information necessary for correct rewinding. The problem of rewinding in general quantum systems was originally observed in~\cite{vdGraaf97}; detailed discussions can also be found e.g.\ in~\cite{DFS04,Watrous09}. In the context of this work, there are two relevant rewinding settings. The first setting applies to simulations intended to collect several transcripts of conversations. An example thereof is the classical simulation for protocols with embedded \emph{computationally binding} commitments. Recall that computational binding means that if a dishonest party can open a commitment to two different values, then the computational assumption does not hold. In a classical simulation, the simulator simulates a run of the outer protocol with the committer, such that the latter outputs a valid commitment and later provides a correct opening. Now, the simulator has the possibility to rewind the player to a point after the commitment was sent and repeat the simulation, which can be adapted to the simulator's knowledge of the committed value. The event of obtaining a different opening for the same commitment in this second run implies the inversion of the underlying one-way function, which is assumed to be infeasible. In such a simulation, the simulator must store the previous transcript before rewinding. Another example of this setting occurs when proving \emph{special soundness} in a proof of knowledge. There, a classical simulator simulates a run of a protocol against a dishonest prover. It then keeps a transcript of the simulation and rewinds the prover. From two accepting conversations, the simulator can extract the prover's witness. Again, the simulator must store transcripts of the communication before rewinding. The second setting requires the simulator to rewind the dishonest player to the beginning of the protocol whenever the reply from the dishonest party does not match the prepared outcome, so that in the end both sides conclude on the ideal values as their result. This setting applies, for instance, when proving an outer protocol with an embedded \emph{computationally hiding} commitment secure. Fortunately, if such a simulation complies with a restricted setting, the newly introduced \emph{quantum rewinding lemmas} of~\cite{Watrous09} can be applied, and rewinding becomes possible in this restricted quantum setting.
We will discuss this technique in more detail in the following section, but in short, it requires a one-bit reply from the dishonest party (e.g.\ a bit reply to a previous bit commitment), the simulation circuit must be unitary, and in case of rewinding, we do not intend to keep intermediate transcripts nor collect all possible results (see Section~\ref{sec:quantum.rewinding}). Unfortunately, we do not know how to translate this technique to a multi-bit reply, while keeping the running time of the simulator polynomially bounded. In that case, the classical simulation would again reduce to the first setting above, in which the simulator must store previous transcripts, namely a previous message from the dishonest party that commits him to his multi-bit reply beforehand. \subsection{Quantum Rewinding} \label{sec:quantum.rewinding} \index{rewinding!quantum} Recall that we consider the second setting of the previous section. In a classical simulation against dishonest Bob, a poly-time simulator guesses, for instance, a valid reply $b'$ of dishonest Bob and prepares the protocol transcript according to it. When the simulator finally receives Bob's actual reply $b$, it checks if the values coincide ($b = b'$), i.e., if its guess was correct and therewith, if the simulation was successful. If that is not the case, the simulator rewinds Bob and repeats the simulation until $b = b'$. No previous information has to be stored nor collected. Recently, Watrous proposed a quantum analogue of such a simulator with the potential of rewinding, and thereby proved that quantum zero-knowledge is possible in an unrestricted model. We will sketch the most important aspects of his construction here but refer to~\cite{Watrous09} for further details and proofs. More specifically, Watrous showed how to construct a quantum zero-knowledge proof system for Graph Isomorphism and introduced two so-called \emph{quantum rewinding lemmas}; one for an exact setting and one that holds for slightly weaker assumptions and therewith covers a scenario with perturbations. The investigated protocol proceeds as a $\Sigma$-protocol, i.e., a protocol in three-move form, where the verifier flips a single coin in the second step and sends this challenge to the prover. Thus, the setting applies to the case where the reply $b$ from above is a single bit. This will also be the case for our simulation in Chapter~\ref{chap:coin.flip}, and therefore, we can use Watrous' result in a black-box manner. As noted above, we do not know how to translate his technique to a multi-bit reply while keeping the simulator's running time polynomially bounded. The quantum rewinding procedure is implemented by a general quantum circuit $R$, which receives Bob's input registers $\big( W,X \big)$, where $W$ contains any $n$-qubit auxiliary input $\ket{\psi}$ and $X$ is a working register, initialized to the all-zero state of size $k$. As a first step, $R$ applies a unitary quantum circuit $Q$ to all registers to simulate the conversations, obtaining as output a multi-qubit register $Y$ and a single-qubit register $G$. Register $G$ contains the outcome of the $\op{CNOT}$-operation on the dishonest party's bit $b$ (as control) and the simulator's guess $b'$, i.e., the bit $b \oplus b'$. Thus, by measuring this register in the computational basis, the simulator can determine whether the simulation was successful.
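As will be derived in detail next, one such rewinding iteration maps the success probability $p$ to $4p(1-p)$; this quantitative effect is easily checked with a small numerical sketch (ours):
\begin{verbatim}
# Sketch (ours): amplitudes in the two-dimensional good/bad subspace
# after one rewinding step; p -> 4p(1-p), the state stays normalized.
import numpy as np

for p in (0.1, 0.25, 0.5):
    good, bad = 2*np.sqrt(p*(1 - p)), 1 - 2*p
    assert np.isclose(good**2 + bad**2, 1.0)
    print(p, "->", good**2)      # 4p(1-p); equals 1 for p = 1/2
\end{verbatim}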
In more detail, the transformation from $\big( W,X \big)$ to $\big( G,Y \big)$ by applying $Q$ can be written as \begin{eqnarray*} Q \ket{\psi}_{W} \ket{0^{k}}_{X} = \sqrt{p} \ket{0}_{G} \ket{\phi_{good}(\psi)}_{Y} + \sqrt{1-p} \ket{1}_{G} \ket{\phi_{bad}(\psi)}_{Y}\, , \end{eqnarray*} where $0 < p < 1$ and $\ket{\phi_{good}(\psi)}$ denotes the state we want the system to be in for a successful simulation. The qubit in register $G$ is then measured with respect to the standard basis, which indicates success or failure of the simulation. A successful execution (where $b = b'$) results in outcome 0 with probability $p$. In that case, $R$ outputs $Y$. A measurement outcome 1 indicates $b \neq b'$, and hence, implies an unsuccessful simulation. In that case, $R$ quantumly rewinds the system by applying the reverse circuit $Q^\dag$, and then a phase-flip transformation (on register $X$) before another iteration of $Q$ is applied, i.e., \begin{eqnarray*} && Q \bigg( 2 \Big( \mathbb{I} \otimes \ket{0^{k}}\bra{0^{k}} \Big) - \mathbb{I} \bigg) Q^\dag \ket{1}_G\ket{\phi_{bad}(\psi)}_Y \\ &=& 2\sqrt{p(1-p)}\ket{0}_G\ket{\phi_{good}(\psi)}_Y + (1-2p) \ket{1}_G\ket{\phi_{bad}(\psi)}_Y \, . \end{eqnarray*} Thus, after this rewinding, the amplitudes of the ``good'' and the ``bad'' states are increased and decreased, respectively. Hence, a measurement of register $G$ in the computational basis will result in outcome $0$ with higher probability $4p(1-p)$. Note that for the special case where $p$ equals $1/2$ and is independent of $\ket{\psi}$, the simulation terminates after at most one rewinding. Watrous' ideal quantum rewinding lemma (without perturbations) then states the following: Under the condition that the probability $p$ of a successful simulation is non-negligible and independent of any auxiliary input, $R$ is poly-time and its output $\rho(\psi)$ has square-fidelity close to 1 with state $\ket{\phi_{good}(\psi)}$ of a successful simulation, i.e., $$ \bra{\phi_{good}(\psi)}\rho(\psi)\ket{\phi_{good}(\psi)} \geq 1 - \varepsilon \, , $$ with error bound $\varepsilon > 0$. However, we cannot apply the exact version of Watrous' rewinding lemma in our simulation in Chapter~\ref{chap:coin.flip}, since we simulate against a dishonest party with an underlying commitment that only provides quantum-computational hiding against this party. Therefore, we can only claim that the party's input is \emph{close to independent} from the probability $p$. In other words, we must allow for small perturbations in the quantum rewinding procedure, and the slightly weaker notion of Watrous' quantum rewinding lemma, as stated below, applies. \begin{lemma} [Quantum Rewinding Lemma with small perturbations~\cite{Watrous09}] \label{lemma:qrewind} Let $Q$ be the unitary $(n,k)$-quantum circuit and let $R$ be the general quantum circuit describing the quantum rewinding procedure. Let $p_0, q \in (0,1)$ and $\varepsilon \in (0,\frac{1}{2})$ be real numbers such that \begin{enumerate} \item $|p - q| < \varepsilon$ \item $p_0(1-p_0) \leq q(1-q)$, and \item $p_0 \leq p$ \end{enumerate} for all $n$-qubit states $\ket{\psi}$. Then there exists $R$ of size $$O \left( \frac{\log(1/\varepsilon) \, size(Q)}{p_0(1-p_0)} \right)$$ such that, for every $n$-qubit state $\ket{\psi}$, the output $\rho(\psi)$ of $R$ satisfies \begin{eqnarray*} \bra{\phi_{good}(\psi)}\rho(\psi)\ket{\phi_{good}(\psi)} \geq 1 - \varepsilon' \end{eqnarray*} where $\varepsilon' = 16 \varepsilon \frac{\log^2( 1 / \varepsilon)}{p_0^2(1-p_0)^2}$.
\end{lemma} Intuitively, Requirement~(1.)~allows for a small perturbation between the actual probability $p$ and the ideal probability $q$. Thus, $\varepsilon$ can be understood as the advantage of the dishonest party. It follows that if $\varepsilon$ is negligible, we can argue that $p$ is \emph{close} to $q$ and therefore, close to independent of the auxiliary input. Probability $p_0$ in Requirement~(3.)~denotes the lower bound on the actual probability for which the procedure guarantees correctness and terminates in poly-time. Instead of using $p$ in circuit $R$, we use $p_0$. Furthermore, $Q$ is replaced by $U$ with $U = VQ$. Lemma~\ref{lemma:qrewind} reflects these replacements. On a very intuitive level, the general input state $\ket{\psi}$ is analyzed in more detail, i.e.~$\ket{\psi} = \sum_{i=1}^{2^n} \alpha_i \ket{\psi_i}$ leading to $$ \ket{\phi_{good}(\psi)} = \sum_{i = 1}^{2^n} \alpha_i \sqrt{\frac{p(\psi_i)}{p(\psi)}} \ket{\phi_{good}(\psi_i)} \, , $$ and similarly for $\ket{\phi_{bad}(\psi)}$. This more detailed description accounts for the fact that the probability may be only nearly independent of the input in each position. The slight variations must then be addressed by an operator $V$, such that $U = VQ$ is close to $Q$ but satisfies the exact case of rewinding. In other words, applying $U$ on the perturbed input state gives the ideal outcome \begin{eqnarray*} U \ket{\psi}_{W} \ket{0^{k}}_{X} = \sqrt{q} \ket{0}_{G} \ket{\delta_{good}(\psi)}_{Y} + \sqrt{1-q} \ket{1}_{G} \ket{\delta_{bad}(\psi)}_{Y}\, . \end{eqnarray*} Transformation $V$ can therewith be understood as a correction. The bound in Requirement~(2.)~follows from proof details which will not be addressed here. Finally, note that the bounds are not necessarily tight. Important for our proof is, however, that all operations can be performed by polynomial-size circuits, and thus, the simulator has polynomial size (in the worst case). Furthermore, for negligible $\varepsilon$, it follows that the ``closeness'' of output $\rho(\psi)$ with good state $\ket{\phi_{good}(\psi)}$ is slightly reduced, but quantum rewinding remains possible and the output $\rho(\psi)$ of $R$ still has square-fidelity close to 1 with the state $\ket{\phi_{good}(\psi)}$ of a successful simulation. \section{Definition of Security} \label{sec:security.definition} \index{security!definitions} \index{composition} \index{composition!sequential} We will now define security for our two-party protocols, along the lines informally described in Section~\ref{sec:flavors}. To this end, we will work in the framework put forward by Fehr and Schaffner in~\cite{FS09}. There, they propose general definitions for correctness and security for any quantum protocol that implements a \emph{classical non-reactive two-party functionality}, meaning that in- and output must be classical. We stress that the framework also allows functionalities which behave differently in case of a dishonest player. They then show that such a quantum protocol, complying with the framework, composes \emph{sequentially} in a classical environment, or in other words, within an outer classical protocol. Their security definitions are phrased in simple information-theoretic conditions, depending on the functionality, which implies strong simulation-based security. For the sake of simplicity, the framework does not assume additional entities such as e.g.\ an environment, without of course compromising correctness in the given setting.
Throughout this work, we are interested in quantum and classical protocols that implement classical functionalities. As already mentioned, such primitives are often used as building blocks in more complicated classical (multi-party) protocols which implement more advanced tasks. Therefore, it can be justified in Part~\ref{part:quantum.cryptography} to restrict the focus to such quantum protocols that run in a classical environment and have classical in- and outputs. Furthermore, although the framework was originally proposed for quantum protocols that compose in a classical environment, we adapt it here for classical protocols against quantum attacks, which compose equally well when we impose the suggested restriction regarding the in- and outputs. Thus, we will use it also in Part~\ref{part:cryptography.in.quantum.world} for defining security of our classical protocols. Although various other security and composition frameworks have been proposed (such as~\cite{BM04,Unruh04,Unruh10,WW08}), we consider the security level achieved in this framework as a reasonable balance between modest requirements and meaningful security. Furthermore, its structure is as simple and clear as possible and compliance with the definitions gives us sequential composition. Towards a general composition, we must, of course, extend the basic protocols as shown in Sections~\ref{sec:composability.compiler} and~\ref{sec:composability.coin}. We will now introduce the framework more formally for a general functionality. We will use information-theoretic definitions in our notions of unconditional security as investigated in~\cite{FS09}. In addition, we will also show that computational security can be defined similarly, although with some modifications. \subsection{Correctness} \label{sec:security.definition.correctness} A protocol $\Pi$ consists of an infinite family of interactive quantum circuits for players Alice and Bob, indexed by the security parameter. For instance, in our quantum protocols this security parameter $m$ corresponds to the number of qubits transmitted in the first phase. However, to ease notation, we often leave the dependence on the security parameter implicit. Since we assume the common input state $\rho_{UV}$ to be classical, i.e., $$\rho_{UV} = \sum_{u,v} P_{UV}(u,v) \proj{u} \otimes \proj{v} \, , $$ for some probability distribution $P_{UV}$, we understand $U,V$ as random input variables. The same holds for the classical output state $\rho_{XY}$ with output $X,Y$. The input-output behavior of the protocol is uniquely determined by $P_{XY|UV}$, and we write $\Pi(U,V) = (X,Y)$. Then, a classical non-reactive two-party ideal functionality $\mathcal{F}$ is given by a conditional probability distribution $P_{\mathcal{F}(U,V)|UV}$ with $\mathcal{F}(U,V)$ denoting the ideal-world execution, where the players forward their inputs $U,V$ to $\mathcal{F}$ and output whatever they obtain from $\mathcal{F}$. The definition of correctness of a protocol is now straightforward. \begin{definition}[Correctness] \label{def:correctness} A protocol $\Pi(U,V) = (X,Y)$ correctly implements an ideal classical functionality $\mathcal{F}$, if for every distribution of the input values $U$ and $V$, the resulting common output satisfies \[ (U,V,X,Y) \approxs (U,V, \mathcal{F}(U,V)) \, . \] \end{definition} \subsection{Information-Theoretic Security} \label{sec:security.definition.unconditional} We define information-theoretic security based on~\cite[Proposition 4.3]{FS09}.
Note that in the following, we simplify the joint output representation (compared to~\cite{FS09}) in that we denote the output in the real world by $out_{{\sf A},{\sf B}}^\Pi$ (which is equivalent to $\Pi_{{\sf A},{\sf B}}\rho_{UV}$), and the output in the ideal world by $out_{\hat{\sf A},\hat{\sf B}}^\mathcal{F}$ (equivalent to $(\mathcal{F}_{\hat{\sf A},\hat{\sf B}}) \rho_{UV}$). Recall that $U$ denotes honest Alice's classical input, and let $Z$ and $V'$ denote dishonest Bob's classical and quantum information. Then, any input state $\rho_{U Z V'}$ is restricted to be of form $$ \rho_{\MC{U}{Z}{V'}} = \sum_{u,z} P_{UZ}(u,z) \proj{u} \otimes \proj{z} \otimes \rho_{V'}^z \, , $$ where it holds here that $\rho_{V'}^z = \rho_{V'}^{u,z}$. This implies that Bob's quantum part~$V'$ is correlated with Alice's part only via his classical~$Z$. \begin{definition} [Unconditional security against dishonest Bob] \label{def:qmemoryBob} A protocol $\ \Pi$ implements an ideal classical functionality $\mathcal{F}$ unconditionally securely against dishonest Bob, if for any real-world adversary ${\sf B}'$, there exists an ideal-world adversary $\hat{\sf B}'$ such that, for any input state with $\rho_{UZV'} = \rho_{\MC{U}{Z}{V'}}$, it holds that the outputs in the real and the ideal world are statistically indistinguishable, i.e., $$ out_{{\sf A},{\sf B}'}^\Pi \approxs out_{\hat{\sf A},\hat{\sf B}'}^\mathcal{F} \, . $$ \end{definition} For completeness, we state these output states explicitly, i.e., $$ out_{{\sf A},{\sf B}'}^\Pi = \rho_{UXZY'} \ \text{ and } \ out_{\hat{\sf A},\hat{\sf B}'}^\mathcal{F} = \rho_{\MC{UX}{Z}{Y'}} \, , $$ which shows that Bob's possibilities in the ideal world are limited. He can produce some classical input $V$ for $\mathcal{F}$ from his quantum input state $V'$, and then he can obtain a quantum state $Y'$ by locally processing $V$ and possibly $\mathcal{F}$'s classical reply $Y$. This description is also depicted in Figure~\ref{fig:protocol.functionality}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol $\Pi_{{\sf A},{\sf B}'}$ \hspace{29ex}Functionality $\mathcal{F}_{\hat{\sf A},\hat{\sf B}'}$} \\ \begin{center} \vspace{-3ex} \input{real.ideal.world.pic.new} \vspace{-2ex} \end{center} \end{framed} \vspace{-1.5ex} \caption{Real World vs.\ Ideal World~\cite{Chris10}.} \label{fig:protocol.functionality} \end{figure} Analogously, we can define unconditional security for honest Bob against dishonest Alice. In this case, we assume a classical $Z$ and a quantum state $U'$ as dishonest Alice's input and a classical input $V$ of honest Bob. \begin{definition} [Unconditional security against dishonest Alice] \label{def:unboundedAliceNiceOrder} A protocol $\ \Pi$ implements an ideal classical functionality $\mathcal{F}$ unconditionally securely against dishonest Alice, if for any real-world adversary ${\sf A}'$, there exists an ideal-world adversary $\hat{\sf A}'$ such that, for any input state with $\rho_{U'ZV} = \rho_{\MC{U'}{Z}{V}}$, it holds that the outputs in the real and the ideal world are statistically indistinguishable, i.e., $$ out_{{\sf A}',{\sf B}}^\Pi \approxs out_{\hat{\sf A}',\hat{\sf B}}^\mathcal{F} \, . $$ \end{definition} Note that in the definitions above, we do not require the running time of ideal-world adversaries to be polynomial whenever the real-life adversaries run in polynomial time. This way of defining unconditional security can lead to the (unwanted) effect that unconditional security does not necessarily imply computational security. 
However, as mentioned before, by extending our basic constructions we can achieve efficient ideal-life adversaries. Intuitively, the composition theorem below states that if quantum protocols $\pi_1\cdots\pi_\ell$ securely implement ideal functionalities $\mathcal{F}_1\cdots\mathcal{F}_\ell$, then a protocol $\Sigma^{\pi_1\cdots\pi_\ell}$ is \emph{essentially} as secure as a classical hybrid protocol $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ with sequential calls to $\mathcal{F}_1\cdots\mathcal{F}_\ell$. Note that by the hybrid protocol being {\em classical} we mean that it has classical in- and output (for the honest players), and also that all communication between the parties is classical.\footnote{We want to stress that a \emph{hybrid protocol} is a protocol that makes sequential calls to ideal functionalities. This term should not be confused with the notion of \emph{hybrid security} in Chapter~\ref{chap:hybrid.security}, which refers to quantum protocols providing twofold security in case of an adversary who is either bounded in quantum storage or bounded in quantum-computational power.} The above facts imply that such protocols compose sequentially. Below, we state (a simplified variant of) the theorem in~\cite{FS09}. We omit its proof here but note that it proceeds along lines similar to the proof of Theorem~\ref{thm:composition.computational}, translating sequential composition to the case of computational security. \begin{theorem}[Composition Theorem I~\cite{FS09}] \label{thm:composition.unconditional} Let $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ be a classical hybrid protocol which makes at most $k$ calls to $\mathcal{F}_1\cdots\mathcal{F}_\ell$, and for every $i \in \set{1,\ldots,\ell}$, let protocol $\pi_i$ be an $\varepsilon$-secure implementation of $\mathcal{F}_i$ against $\mathfrak{A}$ and $\mathfrak{B}$. Then the output of $\Sigma^{\pi_1\cdots\pi_\ell}$ is at distance at most $O(k\varepsilon)$ from the output produced by $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$. \end{theorem} We want to explicitly state here that if the hybrid protocol is secure, then so is the real-life protocol, and as such it could itself be used as a sub-protocol in yet another classical outer protocol. \begin{corollary} If $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ is a $\delta$-secure implementation of $\cal G$ against $\mathfrak{A}$ and $\mathfrak{B}$, and if $\pi_i$ is an $\varepsilon$-secure implementation of $\mathcal{F}_i$ against $\mathfrak{A}$ and $\mathfrak{B}$ for every $i \in \set{1,\ldots,\ell}$, then $\Sigma^{\pi_1\cdots\pi_\ell}$ is an $O(\delta + \varepsilon)$-secure implementation of $\cal G$. \end{corollary} \subsection{Computational Security} \label{sec:security.definition.computational} One can define security against a computationally bounded dishonest Bob in the CRS-model analogously to information-theoretic security, with the two differences that the input given to the parties has to be sampled by an efficient quantum algorithm and that the output states of Definition~\ref{def:qmemoryBob} should be computationally indistinguishable. Recall that in the CRS-model, all participants in the real world have access to a classical public common reference string~$\omega$, which is chosen before any interaction starts, according to a distribution only depending on the security parameter.
On the other hand, the participants in the ideal-world execution $\mathcal{F}_{\hat{\sf A},\hat{\sf B}}$, interacting only with the ideal functionality, do not make use of string~$\omega$. Hence, an ideal-world adversary $\hat{\sf B}'$ that operates by simulating a real-world adversary ${\sf B}'$ is free to choose $\omega$ in any way he wishes. In order to define computational security against a dishonest Bob in the CRS-model, we consider a polynomial-size quantum circuit, called \emph{input sampler}, which takes as input the security parameter and the common reference string $\omega$ (chosen according to its distribution) and which produces the input state $\rho_{U Z V'}$. Again, $U$, $Z$, and $V'$ denote Alice's classical, Bob's classical, and Bob's quantum information, respectively, and we require from the input sampler that $\rho_{U ZV'} = \rho_{\MC{U}{Z}{V'}}$. In the following, we let $\dBob_{\mathrm{poly}}$ be the family of all {\em polynomial-time strategies} for dishonest Bob. \begin{definition} [Computational security against dishonest Bob] \label{def:polyboundedBobCRS} A protocol $\ \Pi$ implements an ideal classical functionality $\mathcal{F}$ computationally securely against dishonest Bob, if for any real-world adversary ${\sf B}' \in \dBob_{\mathrm{poly}}$, who has access to the common reference string $\omega$, there exists an ideal-world adversary $\hat{\sf B}' \in \dBob_{\mathrm{poly}}$, not using $\omega$, such that, for any efficient input sampler as described above, it holds that the outputs in the real and the ideal world are quantum-computationally indistinguishable, i.e., $$ out_{{\sf A},{\sf B}'}^\Pi \approxq out_{\hat{\sf A},\hat{\sf B}'}^\mathcal{F} \, . $$ \end{definition} Protocols fulfilling the definition above compose sequentially in a naturally weaker but otherwise similar sense as unconditionally secure protocols. We can therefore adapt the original composition theorem to the case of computational security. For completeness, we will include its proof as given in~\cite{DFLSS09}. Consider a dishonest ${\sf B}'$ and the common state \smash{$\rho_{U_j V'_j}$} at any point during the execution of the hybrid protocol when a call to functionality $\mathcal{F}_i$ is made. The requirement for the oracle protocol to be \emph{classical} is now expressed in that there exists a classical $Z_j$---to be understood as consisting of $\hat{\sf B}'$'s classical communication with $\hat{\sf A}$ and with the $\mathcal{F}_{i'}$'s up to this point---such that given $Z_j$, Bob's quantum state $V'_j$ is not entangled with Alice's classical input and auxiliary information: \smash{$\rho_{U_j Z_j V'_j} = \rho_{\MC{U_j}{Z_j}{V'_j}}$}. Furthermore, we may assume $Z_j$ to be part of $V'_j$, in the sense that any adversary $\hat{\sf B}'$ can be replaced by an adversary $\hat{\sf B}''$ that keeps a copy of $Z_j$ within $V'_j$. This definition is motivated by the observation that if Bob can communicate only classically with Alice, then he can entangle his quantum state with information on Alice's side only by means of the classical communication. We also consider the protocol we obtain by replacing the ideal functionalities by quantum two-party sub-protocols $\pi_1\cdots\pi_\ell$ with classical in- and outputs for the honest parties, i.e., whenever $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ instructs $\hat{\sf A}$ and $\hat{\sf B}$ to execute $\mathcal{F}_i$, they instead execute $\pi_i$ and take the resulting outputs.
We then write $\Sigma^{\pi_1\cdots\pi_\ell}$ for the real quantum protocol we obtain this way. Recall that we require from the input sampler that $\rho_{U ZV'} = \rho_{\MC{U}{Z}{V'}}$, i.e., that $V'$ is correlated with Alice's part only via the classical~$Z$. When considering the real-world protocols $\Sigma^{\pi_1\cdots\pi_\ell}$, where the calls of the classical hybrid protocol are replaced with quantum protocols using a common reference string, it is important that every real protocol $\pi_i$ uses a separate instance (or part) of the common reference string which we denote by $\omega_i$. \begin{theorem}[Composition Theorem II] \label{thm:composition.computational} Let $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ be a classical two-party hybrid protocol which makes at most $k=\mathit{poly}(n)$ calls to the functionalities, and for every $i \in \set{1,\ldots,\ell}$, let protocol $\pi_i$ be a computationally secure implementation of $\mathcal{F}_i$ against $\dBob_{\mathrm{poly}}$. Then, for every real-world adversary ${\sf B}' \in \dBob_{\mathrm{poly}}$ who accesses the common reference string $\omega=\omega_1,\ldots, \omega_k$ there exists an ideal-world adversary $\hat{\sf B}' \in \dBob_{\mathrm{poly}}$ who does not use $\omega$ such that for every efficient input sampler, it holds that the outputs in the real and the ideal world are quantum-computationally indistinguishable, i.e., $$ out_{{\sf A},{\sf B}'}^{\Sigma^{\pi_1\cdots\pi_\ell}} \approxq out_{\hat{\sf A},\hat{\sf B}'}^{\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell} } \, . $$ \end{theorem} Note that we do not specify what it means for the hybrid protocol to be secure. In fact, Theorem~\ref{thm:composition.computational} guarantees that {\em whatever} the hybrid protocol achieves, an indistinguishable output is produced by the real-life protocol with the functionality calls replaced by protocols. Of course, if the hybrid protocol {\em is} secure in the sense of Definition~\ref{def:polyboundedBobCRS}, then so is the real-life protocol. \begin{corollary} If $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ is a computationally secure implementation of $\cal G$ against $\dBob_{\mathrm{poly}}$, and if $\pi_i$ is a computationally secure implementation of $\mathcal{F}_i$ against $\dBob_{\mathrm{poly}}$ for every $i \in \set{1,\ldots,\ell}$, then $\Sigma^{\pi_1\cdots\pi_\ell}$ with at most $k=\mathit{poly}(n)$ oracle calls is a computationally secure implementation of~$\cal G$ against $\dBob_{\mathrm{poly}}$. \end{corollary} \begin{proof} \def\mark#1{\bar{#1}} We prove the claim in Theorem~\ref{thm:composition.computational} by induction on $k$. If no calls are made, we can set $\hat{\sf B}' \assign {\sf B}'$ and the claim holds trivially. Consider now a protocol $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ with at most $k > 0$ oracle calls. For simplicity, we assume that the number of oracle calls equals $k$, otherwise we instruct the players to make some ``dummy calls''. Let $\rho_{U_k Z_k V'_k}$ be the common state right before the $k$-th, and thus, last call to one of the sub-protocols $\pi_1,\ldots,\pi_\ell$ in the execution of the real protocol $\Sigma^{\pi_1,\ldots,\pi_\ell}$. To simplify notation in the rest of the proof, we omit the index $k$ and write \smash{$\rho_{\mark{U} \mark{Z} \mark{V}'}$} instead (see Figure~\ref{fig:composition.proof}).
We know from the induction hypothesis for $k-1$ that there exists an ideal-world adversary $\hat{\sf B}' \in \dBob_{\mathrm{poly}}$ not using the common reference string such that $\rho_{\mark{U} \mark{Z} \mark{V}'} \approxq \sigma_{\mark{U} \mark{Z} \mark{V}'}$ where $\sigma_{\mark{U} \mark{Z} \mark{V}'}$ is the common state right before the $k$-th call to a functionality in the execution of the hybrid protocol $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ with input $\rho_{U Z V'}$. As described, $\mark{U}$ and $\mark{Z},\mark{V}'$ are to be understood as follows. $\mark{U}$ denotes ${\sf A}$'s (respectively $\hat{\sf A}$'s) input to the sub-protocol (respectively functionality) that is to be called next. $\mark{Z}$ collects the classical communication dictated by $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ as well as $\hat{\sf B}'$'s classical inputs to and outputs from the previous calls and $\mark{V}'$ denotes the dishonest player's current quantum state. Note that the existence of $\mark{Z}$ is guaranteed by our formalization of {\em classical} hybrid protocols and $\sigma_{\mark{U} \mark{Z} \mark{V}'} = \sigma_{\MC{\mark{U}}{\mark{Z}}{\mark{V}'}}$. Let $\omega_i$ be the common reference string used in protocol $\pi_i$. For simplicity, we assume that the index $i$, which determines the sub-protocol $\pi_i$ (or functionality~$\mathcal{F}_i$) to be called next, is {\em fixed} and we just write $\pi$ and $\mathcal{F}$ for $\pi_i$ and $\mathcal{F}_i$, respectively. \begin{figure} \begin{framed} \centering \input{CompProofFig.new} \end{framed} \caption{Steps of the Composability Proof} \label{fig:composition.proof} \end{figure} It follows from Definition~\ref{def:polyboundedBobCRS} of computational security that there exists $\hat{\sf B}' \in \dBob_{\mathrm{poly}}$ (independent of the input state) not using $\omega_i$ such that the corresponding output states $\sigma_{\mark{X}\mark{Z}\mark{Y}'}$ and $\tau_{\mark{X}\mark{Z}\mark{Y}'}$ produced by $\mathcal{F}_{\hat{\sf A},\hat{\sf B}'}$ (as prescribed by the oracle protocol) and $\pi_{{\sf A},{\sf B}'}$ run on the state $\sigma_{\mark{U} \mark{Z} \mark{V}'} = \sigma_{\MC{\mark{U}}{\mark{Z}}{\mark{V}'}}$ are quantum-computationally indistinguishable. The induction step is then completed with $$ out_{{\sf A},{\sf B}'}^{\Sigma^{\pi}} = \rho_{\mark{X}\mark{Z} \mark{Y}'} = (\pi_{{\sf A},{\sf B}'})\, \rho_{\mark{U}\mark{Z}\mark{V}'} \approxq (\pi_{{\sf A},{\sf B}'})\,\sigma_{\mark{U}\mark{Z}\mark{V}'} = \sigma_{\mark{X}\mark{Z}\mark{Y}'} \approxq \tau_{\mark{X}\mark{Z}\mark{Y}'} = out_{\hat{\sf A},\hat{\sf B}'}^{\Sigma^{\mathcal{F}}} \, , $$ where $(\pi_{{\sf A},{\sf B}'})\, \rho_X$ should be understood as running protocol $\pi_{{\sf A},{\sf B}'}$ with input $\rho_X$. Note that the strategy of $\hat{\sf B}'$ does not depend on the state $\sigma_{\mark{U}\mark{Z}\mark{V}'}$, and hence, the overall ideal-world adversary $\hat{\sf B}'$ does not depend on the input state either. Furthermore, the concatenation of two polynomially bounded players is polynomially bounded, i.e.~$\hat{\sf B}' \in \dBob_{\mathrm{poly}}$. \end{proof} \clearemptydoublepage \part{Quantum Cryptography} \label{part:quantum.cryptography} \chapter{Introduction} \label{chap:intro.quantum.cryptography} In this part of the thesis, we present our research in quantum cryptography, which offers a secure alternative to some conventional cryptographic schemes that are rendered insecure by the potential emergence of large-scale quantum computing.
We also want to mention an actual implementation of quantum protocols within the research project MOBISEQ (``Mobile Quantum Security''), which is a joint project of the cryptology group from the computer science department and the iNano center at the physics department, both at Aarhus University. The main goal of MOBISEQ is the development of technology for secure quantum communication that can compete with conventional methods on practicality, speed and security and that can be integrated into existing infrastructures. However, at the time of writing, the implementation is still ``under construction''. In the next sections, we will introduce the concept of mixed (classical) commitment schemes, since they are an important underlying construction in our quantum protocols. In Chapter~\ref{chap:hybrid.security}, we discuss our main result on improving the security of quantum protocols via a commit\&open step, based on these mixed commitments. We first introduce the setting and then propose a general compiler therein. We further show that the construction remains secure in the case of noisy communication. We then proceed with combining the compilation technique with the bounded-quantum-storage model. Last, we show sequential composability and further use the extended commitment construction, discussed in Section~\ref{sec:extended.commit.compiler}, towards a more general composition. In Chapter~\ref{chap:hybrid.security.applications}, we discuss that the compiler can be applied to known protocols and show two example applications, with the result of achieving hybrid-secure protocols. \section{Mixed Commitments} \label{sec:mixed.commit} \index{commitment!mixed} Commitments were introduced on an intuitive level in Section~\ref{sec:primitives.commitment}. They capture the process of a party being committed to his message (the binding property) without immediately revealing it to the other party (the hiding property). \subsection{Motivation} \label{sec:mixed.commit.motivation} Our compiler construction in the following chapters requires a classical yet quantum-secure commitment from ${\sf B}$ to ${\sf A}$. Since we aim at preserving the unconditional security against ${\sf A}$ in the outer quantum protocols, the commitment can only be quantum-computationally binding. As described in Section~\ref{sec:rewinding}, the standard reduction from the computational security of the protocol to the computational binding property of the commitment would require rewinding ${\sf B}'$, which is not possible in the assumed protocol scenario. Therefore, we construct keyed commitment schemes, which are special in that they are \emph{mixed commitments} or \emph{dual-mode commitments}.\footnote{The notions are interchangeable. The term of mixed commitments was introduced in~\cite{DN02}. In~\cite{DFLSS09}, the name dual-mode commitments was used to relate to the notion of a dual-mode crypto-system~\cite{PVW08}, which is similar in spirit, but slightly more involved. Last we want to mention that our schemes are similar to the commitment schemes used in~\cite{DFS04} but with extensions.} Generally speaking, the notion of mixed commitments requires some trapdoor information, related to the common reference string and given to the simulator in the ideal world. This trapdoor provides the possibility for \emph{extracting} information out of the commitments\index{commitment!extractability}, which finally allows us to circumvent the necessity of rewinding ${\sf B}'$.
We will discuss this in detail in Section~\ref{sec:mixed.commit.idea}. Additionally, we require that the basic mathematical assumption, which guarantees the hiding and binding properties of the commitments, withstands quantum attacks. We will propose an actual instantiation in Section~\ref{sec:mixed.commit.instantiation}. \subsection{Idea} \label{sec:mixed.commit.idea} Recall that a keyed bit or string commitment $C = \commitk{m}{r}{pk}$ takes as input a message $m$ and some randomness $r$ of size polynomial in the security parameter, as well as a public key $pk$. The message $m$ can be a single bit $b$ for the implementation of bit commitments or, in order to achieve string commitments, a bit-string $m = b_0,\ldots,b_s$. In order to open the commitment, message $m$ and randomness $r$ are sent in the clear, and the receiver therewith checks the correctness of $C$. Hiding is typically formalized by the requirement $\big( pk,\commitk{m_1}{r_1}{pk} \big) \approx \big( pk,\commitk{m_2}{r_2}{pk} \big)$ with different flavors of indistinguishability, while binding prohibits that there exist $C,m_1,r_1,m_2,r_2$, such that $m_1 \neq m_2$, but $\commitk{m_1}{r_1}{pk} = C = \commitk{m_2}{r_2}{pk}$. We construct our commitments in the CRS-model such that they provide dual modes depending on the public key. In more detail, let $\tt commitK = (\mathtt{commit}, {\cal G}_{\tt H}, {\cal G}_{\tt B}, \tt{xtr})$ denote a (keyed) mixed commitment scheme. The commitment key $pk$ is generated by one of the two possible key-generation algorithms, ${\cal G}_{\tt H}$ or ${\cal G}_{\tt B}$. Generator ${\cal G}_{\tt B}$ takes as input the security parameter $\kappa$ and generates a key pair $(pk,sk) \leftarrow {\cal G}_{\tt B}$, where $pk \in \{0,1\}^\kappa$ is a public key and $sk$ is the corresponding secret key. $\tt{xtr}$ is a poly-time extraction algorithm that takes $sk$ and $C$ as input and produces $m$ as output, i.e., $\xtr{C}{sk}= \xtr{\commitk{m}{r}{pk}}{sk} = m$, which must hold for all pairs $(pk,sk)$ generated by ${\cal G}_{\tt B}$ and for all values $m,r$. In other words, the secret key $sk$ allows to efficiently extract $m$ from $C$, and as such the commitment is unconditionally binding. We therefore often denote this type of key by ${\tt pkB}$. For a key $pk \leftarrow {\cal G}_{\tt H}$, the commitment scheme is unconditionally hiding (and we often refer to this type as ${\tt pkH}$). Furthermore, we need the unconditionally binding key ${\tt pkB}$ and the unconditionally hiding key ${\tt pkH}$ to be computationally indistinguishable even against quantum attacks, i.e., ${\tt pkB} \approxq {\tt pkH}$. We want to stress that we can even weaken the assumption on the hiding key in that we merely require that there exists a public-key encryption scheme where a regular public key is quantum-computationally indistinguishable from a uniformly random string, i.e., looks pseudo-random to poly-time quantum circuits. Thus, $\mathtt{commit}$ does not require actual unconditionally hiding keys, but we can use uniformly random strings from $\{0,1\}^\kappa$ as such. This is feasible in our proposed construction, sketched below, and still provides unconditional hiding, except with negligible probability. This fact also ensures that most keys of a specific domain are in that sense unconditionally hiding keys. Finally, to avoid rewinding we use the following proof method: In the real-world protocol, ${\sf B}$ uses the unconditionally hiding key ${\tt pkH}$ to maintain unconditional security against any unbounded ${\sf A}$.
To argue security against a computationally bounded ${\sf B}'$, an information-theoretic argument involving the simulator $\hat{\sf B}'$ is given to prove that ${\sf B}'$ cannot cheat with the unconditionally binding key ${\tt pkB}$. Security in real life then follows from the quantum-computational indistinguishability of ${\tt pkH}$ and ${\tt pkB}$. \subsection{Instantiations} \label{sec:mixed.commit.instantiation} As a candidate for instantiating our commitment construction, we propose the lattice-based public-key encryption scheme of Regev~\cite{Regev05}. The crypto-system is based on the (conjectured) hardness of the learning with errors (LWE) problem, which can be reduced from the worst-case hardness of approximating the shortest vector problem (in its decision version). Thus, breaking Regev's crypto-system implies an efficient algorithm for approximating the lattice problem in the worst case, which is assumed to be hard even with quantum computing power. In more detail, the crypto-system uses dimension $n$ as security parameter and is para\-metrized by two integers $m$ and $p$, where $p$ is a prime bounded by $n^2 \leq p \leq 2n^2$, and a probability distribution on $\mathbb{Z}_p$. A regular public key (in $\mathbb{Z}_p^{m \times n}$) for Regev's scheme is proven to be quantum-computationally indistinguishable from the case where a public key is chosen from the uniform distribution, and therewith, independently from a secret key. In this case, the ciphertext carries essentially no information about the message~\cite[Lemma 5.4]{Regev05}. This proof of semantic security for Regev's crypto-system is in fact the property we require for our commitment, as the public key of a regular key pair can be used as the unconditionally binding commitment key ${\tt pkB}$ in the ideal-world simulation. Then, for the real protocol, an unconditionally hiding commitment key ${\tt pkH}$ can simply be constructed by uniformly choosing numbers in $\mathbb{Z}_p^{m \times n}$. Both public keys will be of size $\tilde{O}(n^2)$, and the encryption process involves only modular additions, which makes its use simple and efficient.\footnote{The notation $\tilde{O}(\cdot)$ is similar to the asymptotic Landau notation $O(\cdot)$ but ignores logarithmic factors.} For simplicity and efficiency, we use a common reference string, which allows us to use Regev's scheme in a simple way and, since it is relatively efficient, we get a protocol that is potentially practical. More specifically, in the CRS-model we assume the key ${\tt pkH}$ for the commitment scheme, generated by ${\cal G}_{\tt H}$, to be contained in the common reference string. We want to stress, however, that we show in Part~\ref{part:cryptography.in.quantum.world}, Section~\ref{sec:key.generation.coin}, how to avoid the CRS-model at the cost of a non-constant round construction, where we let the parties generate a common reference string jointly by coin-flipping. For the compiler construction here, we will use Regev's original version, as we require bit commitments. However, a multi-bit variant of Regev's scheme is given in the full version of~\cite{PVW08}. All requirements as described above are maintained in this more efficient variant, which improves the performance of Regev's scheme by essentially a factor of $n$, e.g., the scheme can encrypt $n$ bits using $\tilde{O}(n)$ bits. Later, in Part~\ref{part:cryptography.in.quantum.world}, Chapter~\ref{chap:framework}, we will use the fact that this implies that a $\lambda$-bit string can be flipped using $O(\lambda)$ bits of communication when $\lambda$ is large enough. We also rely on this multi-bit version for our extended commitment construction, which we will describe in the next Section~\ref{sec:extended.commit.compiler} and then use in Section~\ref{sec:general.composition.compiler}, where we show how to achieve efficient simulation also against a dishonest ${\sf A}'$.
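To make the dual-mode interface $(\mathtt{commit}, {\cal G}_{\tt H}, {\cal G}_{\tt B}, \tt{xtr})$ concrete, the following toy sketch mimics Regev-style keys and extraction. It is a minimal illustration under our own simplified assumptions: the noise distribution is a crude stand-in for Regev's discretized Gaussian, the sizes are chosen so that extraction decodes correctly rather than for security, and with these small sizes the uniform key does not actually provide statistical hiding---the sketch only shows the mechanics of the interface.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 40, 200, 1601     # toy sizes; p prime with n^2 <= p <= 2n^2

def gen_B():
    """Binding key: a regular key pair (pk, sk) with pk = (A, A s + e)."""
    s = rng.integers(0, p, n)                 # secret key
    A = rng.integers(0, p, (m, n))
    e = rng.integers(-1, 2, m)                # crude noise stand-in
    return (A, (A @ s + e) % p), s

def gen_H():
    """Hiding-type key: (A, b) uniform, independent of any secret."""
    return (rng.integers(0, p, (m, n)), rng.integers(0, p, m))

def commit(bit, r, pk):
    """Commit to a bit; r in {0,1}^m selects a random subset of rows."""
    A, b = pk
    return (r @ A) % p, (int(r @ b) + bit * (p // 2)) % p

def xtr(C, sk):
    """Extract the bit from a commitment under pkB, using sk."""
    a, c = C
    d = (c - int(a @ sk)) % p                 # = r.e + bit*(p//2) (mod p)
    return 0 if min(d, p - d) < p // 4 else 1

pkB, sk = gen_B()
r = rng.integers(0, 2, m)
assert xtr(commit(1, r, pkB), sk) == 1        # extractable under pkB
\end{verbatim}

Under ${\tt pkB}$, a commitment is an encryption of the bit, hence unconditionally binding and extractable; under a uniform key, it hides the bit for suitably chosen (much larger) parameters, and the two key types are indistinguishable exactly under the LWE assumption.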
\subsection{Extended Construction} \label{sec:extended.commit.compiler} \index{commitment!extended} To achieve efficient simulation against both players, i.e.\ additional efficient simulation also against ${\sf A}'$ (in Section~\ref{sec:general.composition.compiler}), we need to extend our commitments by yet another trapdoor, which provides the commitment with \emph{equivocability}\index{commitment!equivocability}. Intuitively, this means that we enable the simulator in the ideal world to construct commitments equivocally, such that it can later open them to either bit. As we still need, in addition, the properties of the mixed commitment scheme of Section~\ref{sec:mixed.commit.idea} in its multi-bit variant, we will build the new scheme around it, such that its trapdoor can still be used for extraction. The new extension is based on the idea of UC-commitments~\cite{CF01} and requires a $\Sigma$-protocol for a (quantumly) hard relation $\Rel = \{(x,w)\}$, i.e.\ an honest-verifier perfect zero-knowledge proof of knowledge with instance $x$ and witness $w$ (see also Section~\ref{sec:primitives.zk}). Conversations are of the form $\tt \big( a^{\Sigma}, c^{\Sigma}, z^{\Sigma} \big)$, where the prover sends $\tt a^{\Sigma}$, the verifier challenges him with bit $\tt c^{\Sigma}$, and the prover replies with $\tt z^{\Sigma}$. For practical candidates of $\Rel$, see e.g.~\cite{DFS04}. By special soundness, it holds that from two accepting conversations with different challenges, i.e.\ $\big( {\tt a^\Sigma},0,{\tt z^\Sigma_0} \big)$ and $\big( {\tt a^\Sigma},1,{\tt z^\Sigma_1} \big)$, the simulator can extract $w$ such that $(x,w) \in \Rel$. In real life, the common reference string consists of commitment key ${\tt pkH}$ and instance $x$. To commit to a bit $b$, the committer ${\sf B}$ first runs the honest-verifier simulator to get, on input $x$, a conversation $\big( {\tt a^{\Sigma}}, b, {\tt z^{\Sigma}} \big)$. Then, he commits by sending $\big( {\tt a^\Sigma}, C_0, C_1 \big)$, where $C_b= \commitk{{\tt z^\Sigma}}{r_b}{{\tt pkH}}$ and $C_{1-b} = \commitk{0^{z'}}{r_{1-b}}{{\tt pkH}}$ with randomness $r_b,r_{1-b}$ and $z' = |{\tt z^\Sigma}|$. To open a commitment, ${\sf B}$ reveals $b$ and opens $C_b$ by sending ${\tt z^\Sigma}$ and $r_b$. The receiver checks that $\big({\tt a^{\Sigma}}, b, {\tt z^{\Sigma}} \big)$ is a valid conversation and that $C_b$ was correctly opened. Assuming that the $\Sigma$-protocol is honest-verifier perfect zero-knowledge and ${\tt pkH}$ provides unconditional hiding, the new commitment construction is again unconditionally hiding. In the ideal world, we assume that the simulator (simulating against ${\sf A}'$) knows $w$ such that $(x,w) \in \Rel$ (and public key ${\tt pkH}$). Therewith, it can compute two valid conversations $\big( {\tt a^\Sigma},0,{\tt z^\Sigma}_0 \big)$ and $\big( {\tt a^\Sigma},1,{\tt z^\Sigma}_1 \big)$ and set $C_0= \commitk{{\tt z^\Sigma}_0}{r_0}{{\tt pkH}}$ and $C_1= \commitk{{\tt z^\Sigma}_1}{r_1}{{\tt pkH}}$.
This enables the simulator to open the commitment both ways, given knowledge of the trapdoor $w$. We maintain extraction, since in the respective simulation against ${\sf B}'$, the public key is chosen in a different but indistinguishable way, namely as $(x,{\tt pkB})$, where ${\tt pkB}$ is the binding commitment key, generated together with $sk$. Now, given a commitment $({\tt a^\Sigma},C_0,C_1)$, the simulator can decrypt $C_0,C_1$ to determine which of them contains a valid reply ${\tt z^\Sigma}_b$ of the $\Sigma$-protocol. The only way this could fail is in the case where both $C_0$ and $C_1$ contain valid replies, which would imply that the committer ${\sf B}'$ could compute a valid $w$. For a polynomial-time bounded committer and a (quantumly) hard relation $\Rel$, however, this can occur only with negligible probability.
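The following sketch continues the toy code from Section~\ref{sec:mixed.commit.instantiation} (reusing $\mathtt{commit}$, ${\tt rng}$ and $m$ from there) and shows the committer's and the trapdoor simulator's side of the extended construction. Schnorr's protocol over a toy group plays the role of the $\Sigma$-protocol; note that the discrete-logarithm relation is \emph{not} quantumly hard, so this choice merely makes the message flow concrete.

\begin{verbatim}
P, Q, G = 227, 113, 4        # toy group: order-Q subgroup of Z_P^*

def sim_conv(x, c):
    """HVZK simulator: accepting conversation (a, c, z) for challenge c."""
    z = int(rng.integers(0, Q))
    a = pow(G, z, P) * pow(x, -c, P) % P   # chosen so that G^z = a * x^c
    return a, z

def bits(z):                 # 7 bits suffice for z < Q = 113
    return [(z >> i) & 1 for i in range(7)]

def commit_str(bs, pk):      # bit-wise stand-in for the multi-bit variant
    return [commit(b, rng.integers(0, 2, m), pk) for b in bs]

def ext_commit(b, pk, x):
    """Honest committer: real Sigma-reply in C_b, all-zero dummy in C_(1-b)."""
    a, z = sim_conv(x, b)
    C = [None, None]
    C[b], C[1 - b] = commit_str(bits(z), pk), commit_str([0] * 7, pk)
    return a, C[0], C[1]

def equiv_commit(pk, x, w):
    """Trapdoor committer (knows w with x = G^w): can open either way."""
    k = int(rng.integers(0, Q))
    a = pow(G, k, P)                       # real first message
    z0, z1 = k, (k + w) % Q                # valid replies for both challenges
    return a, commit_str(bits(z0), pk), commit_str(bits(z1), pk)

w = 5; x = pow(G, w, P)                    # toy instance-witness pair
a, z = sim_conv(x, 1)
assert pow(G, z, P) == a * pow(x, 1, P) % P   # conversation verifies
\end{verbatim}

To open to $b$, the committer reveals $b$, the reply inside $C_b$ and the randomness; the receiver checks the $\Sigma$-verification equation $G^z = a \cdot x^b$ and the openings of the bit commitments.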
\clearemptydoublepage \chapter{Improved Security for Quantum Protocols} \label{chap:hybrid.security} Here, we propose a general compiler\index{compiler} for improving the security of two-party quantum protocols, implementing different cryptographic tasks and running between mutually distrusting players Alice and Bob. The compiler extends security against an ``almost honest'' adversary to security against an arbitrary computationally bounded (quantum) adversary. Furthermore, we can achieve hybrid security such that certain protocols can only be broken by an adversary who has large quantum memory {\em and} large computing power. The results in this chapter are joint work with Damg{\aa}rd, Fehr, Salvail and Schaffner, and appeared in~\cite{DFLSS09}. \section{Motivation} \label{sec:motivation.hybrid.security} Our proposed compiler applies to a large class of quantum protocols, namely to so-called \emph{BB84-type} protocols that follow a particular but very typical construction design for quantum communication. Our main result states that if the original protocol is secure against a so-called\index{player!benign} \emph{benign} Bob who is only required to treat the qubits ``almost honestly'' but can deviate arbitrarily afterwards, then the compiled protocol is secure against a \emph{computationally bounded} quantum Bob. The unconditional security against Alice that BB84-type protocols usually achieve is preserved during compilation, which requires only a constant increase in the number of transmitted qubits and classical messages. In other words, with our compiler, one can build a protocol for any two-party functionality by designing a protocol that only has to be secure if Bob is benign, which is a relatively weak assumption. On the other hand, many protocols following the BB84-type pattern (at least after some minor changes) have been proposed, e.g.\ for Oblivious Transfer, Commitment, and Password-Based Identification \cite{CK88,DFSS08,DFRSS07,DFSS07}. Typically, their proofs go through under our assumption. For instance, our compiler can easily be applied to existing quantum protocols implementing ID and OT, which we will show as example applications in Chapter~\ref{chap:hybrid.security.applications}. In more detail, the compiler incorporates the mixed commitment scheme, discussed in Section~\ref{sec:mixed.commit}, into the basic protocols with Bob as committer. Recall that we need such a mixed commitment to preserve the unconditional security against Alice that BB84-type protocols typically achieve, but cannot apply the typical reduction from the computational security of the protocol to the computational binding property of the commitment, due to the restrictions on rewinding in the quantum world (see Section~\ref{sec:rewinding}). The idea of introducing a (plain) commitment in quantum protocols has already been sketched in other works, for instance, in~\cite{CK88,BBCS91}. Furthermore, there are partial results investigating this scenario, e.g.~\cite{Yao95,CDMS04,Mayers96}. We will go into more details of preceding work in Section~\ref{sec:hybrid.security.ot}. Previously, it was very unclear what exactly such a $\mathtt{Commit\&Open}$-step would achieve in the quantum world. The intuition is clearly that if Bob passes the test, he must have measured most of his qubits, also in the remaining untested subset. But---to the best of our knowledge---it was never formally proven that the classical intuition also holds for a quantum Bob. We now give a full characterization of $\mathtt{Commit\&Open}$ in our quantum setting, namely that it forces Bob to be benign, for which we propose a formal definition and which might be of independent interest. These aspects are covered in Section~\ref{sec:compiler}. In this context, we want to mention the follow-up work in~\cite{BF10}. They phrase the $\mathtt{Commit\&Open}$-approach more clearly as the quantum version of classical sampling, and additionally, investigate sampling in quantum settings more generally. In Section~\ref{sec:compiler.noise}, we generalize our result to noisy quantum communication. Furthermore, security in the bounded-quantum-storage model, which assumes the adversary's quantum storage to be of limited size, implies benign security. Therefore, by compiling such protocols, we can achieve hybrid security, which means that the adversary now needs \emph{both} large quantum memory \emph{and} large quantum computing power to break these new protocols. The preservation of BQSM-security allows us to get security properties that classical protocols cannot achieve, if the assumption on the limited quantum memory holds---which is definitely the case with the current state of the art (Section~\ref{sec:compiler.hybrid.security}). However, if the assumption should fail and the adversary could perfectly store all qubits sent, the known protocols can be easily broken. Thus, by applying our compiler, we obtain another security layer that equips such protocols with additional quantum-computational security. Last, we sketch that the compiled protocols in their basic form remain sequentially composable. Moreover, by using the extended commitment construction of Section~\ref{sec:extended.commit.compiler}, we achieve efficient simulations on both sides, and therewith, a more general composition. This result is discussed in Section~\ref{sec:composability.compiler}. \section{Introducing $\mathtt{Commit\&Open}$} \label{sec:compiler} We now discuss our compiler construction in detail, starting from describing the form of BB84-type protocols and formalizing our notion of benignity. Then, we show the transformation from benign security towards computational security and conclude with its proof. \subsection{Initial situation} \label{sec:initial.compiler} We consider quantum two-party protocols that follow a particular but very typical construction design.
These protocols consist of two phases, called \emph{preparation} and \emph{post-processing} phase. We call such a protocol a \emph{BB84-type} protocol\index{BB84-type}, as they have the same structure and the same encoding scheme as the first (complete) quantum protocol by Bennett and Brassard in 1984 for quantum key distribution~\cite{BB84}. However, we want to stress again that we are interested in protocols for cryptographic tasks other than key distribution, and therewith, we also consider the case of dishonest players. A generic BB84-type protocol $\Pi$ is specified in Figure~\ref{fig:BB84-type}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol $\Pi$} \\[-4ex] \begin{description}\setlength{\parskip}{0.5ex} \item[{\it Preparation:}]$ $ \\ ${\sf A}$ chooses $x \in_R \set{0,1}^n$ and $\theta \in_R \set{\+,\x}^n$ and sends $\ket{x}_{\theta}$ to~${\sf B}$, and ${\sf B}$ chooses $\hat{\theta} \in_R \set{\+,\x}^n$ and obtains $\hat{x} \in \set{0,1}^n$ by measuring $\ket{x}_{\theta}$ in bases $\hat{\theta}$. \item[{\it Post-processing:}]$ $\\ Arbitrary classical communication and classical local computations. \end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{The Generic BB84-type Quantum Protocol $\Pi$. } \label{fig:BB84-type} \end{figure} In the preparation phase, Alice transmits $n$ random BB84-qubits to Bob. More specifically, Alice chooses a random bit-string $x= x_1,\ldots,x_n$ and a random basis-string $\theta =\theta_1,\ldots,\theta_n$ from a set of two conjugate bases, encodes her qubits accordingly, i.e., $x_i$ is encoded in the state of the $i$th particle using basis $\theta_i$, and sends them to Bob. Bob chooses a basis-string $\hat{\theta} = \hat{\theta}_1,\ldots,\hat{\theta}_n$ and measures the $i$th particle in basis $\hat{\theta}_i$. If Bob plays honestly, he learns $x_i$ whenever the bases match, i.e.\ $\hat{\theta}_i= \theta_i$. Otherwise, he gets a random independent result (cf.\ the classical sketch at the end of this subsection). The second phase of the protocol, the post-processing, consists of arbitrary classical messages and local computations, depending on the task at hand. However, what all BB84-type protocols have in common is that the classical post-processing typically relies on Bob's subsets of correct and random outcomes, or in other words, on the fact that a dishonest Bob has high uncertainty about a crucial piece of information. Thus, BB84-type protocols---in their basic form---may be broken by a dishonest Bob who does not measure the qubits immediately. This is due to the fact that Alice typically reveals $\theta$ at a later stage so that Bob knows the correct subset. However, a dishonest Bob could measure all stored qubits in matching bases $\hat{\theta} = \theta$, and thus, learn more information than he was supposed to. This aspect is captured in our definition of security against a \emph{benign} Bob, or more precisely a ``benignly dishonest'' Bob, who treats the qubits ``almost honestly'' in the preparation phase but can deviate arbitrarily otherwise. Note that, in contrast to Bob's situation, BB84-type protocols typically achieve unconditional security against cheating by Alice in their default form. On a very intuitive level, it should now be evident that we want to enforce Bob's measurement upon qubit reception before any further announcement by Alice. In the next section, we will make this definition more formal.
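Before turning to the formal definition, the following small classical simulation (of measurement outcomes only, not of actual quantum states) reproduces the honest preparation-phase statistics: Bob learns $x_i$ exactly where $\hat{\theta}_i = \theta_i$ and sees an independent, uniformly random bit elsewhere.

\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

n = 16
x         = rng.integers(0, 2, n)   # Alice's random bits
theta     = rng.integers(0, 2, n)   # Alice's bases: 0 = '+', 1 = 'x'
theta_hat = rng.integers(0, 2, n)   # honest Bob's measurement bases

# Measuring in the matching basis reproduces x_i; measuring in the
# conjugate basis yields a fresh, uniformly random bit.
match = theta == theta_hat
x_hat = np.where(match, x, rng.integers(0, 2, n))

assert (x_hat[match] == x[match]).all()   # Bob knows x_i where bases match
\end{verbatim}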
\subsection{Security against Benign Bob} \label{sec:defbenign.compiler} \index{player!benign} The following security definition captures information-theoretic security against a benign Bob. Recall that such a dishonest Bob is benign in that, in the preparation phase, he does not deviate (too much) from what he is supposed to do. In the post-processing phase, though, he may be arbitrarily dishonest. To make this description formal, we fix an arbitrary choice of $\theta$ and an arbitrary value for the classical information, $z$, which Bob may obtain as a result of the preparation phase (i.e.~$z = (\hat{\theta},\hat{x})$ in case Bob is actually honest). Let $X$ denote the random variable describing the bit-string $x$, where we understand the distribution $P_X$ of $X$ to be conditioned on the fixed choices for $\theta$ and~$z$. Furthermore, let $\rho_E$ be the state of Bob's quantum register $E$ after the preparation phase. Note that, still with fixed $\theta$ and~$z$, $\rho_E$ is of the form $\rho_E = \sum_x P_X(x) \rho^x_E$, where $\rho^x_E$ is the state of Bob's quantum register in case $X$ takes on the value $x$. In general, the $\rho^x_E$ may be mixed, but we can think of them as being reduced pure states with $\rho^x_E = \tr_R \big( \proj{\psi^x_{ER}} \big)$ for a suitable register $R$ and pure states $\ket{\psi^x_{ER}}$. We then call the state $\rho_{ER} = \sum_x P_X(x) \proj{\psi^x_{ER}}$ a point-wise purification (with respect to $X$) of $\rho_E$. Obviously, in case Bob is honest, $X_i$ is fully random whenever $\theta_i \neq \hat{\theta}_i$, and we have $$H_{\infty}\bigl(X|_I \,\big|\,X|_{\bar{I}} = x|_{\bar{I}}\bigr) = d_H\bigl(\theta|_I,\hat{\theta}|_I\bigr) \, , $$ for every $I \subseteq \set{1,\ldots,n}$ and every $x|_{\bar{I}}$, where $\bar{I}$ denotes the complementary set. In that case, Bob does not store any non-trivial quantum state so that $R$ is ``empty'' and $$ H_0(\rho_{ER}) = H_0(\rho_E) = 0 \, . $$ A benign Bob ${\sf B}'$ is now specified to behave close-to-honestly in the preparation phase in that, after the preparation, he produces an auxiliary output $\hat{\theta}$. Given this output, we are in a certain sense close to the ideal situation where Bob really measured in basis $\hat{\theta}$, as far as the values of $H_{\infty}\bigl(X|_I \,\big|\,X|_{\bar{I}} = x|_{\bar{I}}\bigr)$ and $H_0(\rho_{ER})$ are concerned.\footnote{The reason why we consider the point-wise purification of $\rho_E$ is to prevent Bob from artificially blowing up $H_0(\rho_{ER})$ by locally generating a large mixture or storing an unrelated mixed input state.} Informally speaking, the following definition states (under Point~(1.)) that there exists a string $\hat{\theta}$ of ${\sf B}'$'s measurement bases, such that the uncertainty about ${\sf A}$'s bit $x_i$ is essentially one bit whenever $\theta_i \neq \hat{\theta}_i$. Furthermore, ${\sf B}'$'s quantum storage is small.\index{entropy!min-entropy}\index{entropy!max-entropy} \begin{definition} [Unconditional security for Alice against {\em benign} Bob] \label{def:BenignBob} A BB84-type quantum protocol $\Pi$ securely implements $\mathcal{F}$ against a {\em $\beta$-benign} ${\sf B}'$ for some parameter $\beta \geq 0$, if it securely implements $\mathcal{F}$ according to Definition~\ref{def:qmemoryBob}, with the following two modifications: \begin{enumerate} \item The quantification is over all ${\sf B}'$ with the following property: After the preparation phase ${\sf B}'$ either aborts, or else produces an auxiliary output $\hat{\theta} \in \set{\+,\x}^n$.
Moreover, the joint state of ${\sf A}$ and ${\sf B}'$ after $\hat{\theta}$ has been output is statistically indistinguishable from a state for which it holds that, for any fixed values for $\theta$, $\hat{\theta}$ and $z$, for any subset $I \subseteq \set{1,\ldots,n}$, and for any $x|_{\bar{I}}$, \begin{equation}\label{eq:staterequirements} H_{\infty}\bigl(X|_I \,\big|\,X|_{\bar{I}} = x|_{\bar{I}}\bigr) \geq d_H\bigl(\theta|_I,\hat{\theta}|_I\bigr) - \beta n \qquad\text{and}\qquad H_0\bigl(\rho_{ER}\bigr) \leq \beta n \, , \end{equation} where $\rho_{ER}$ is a point-wise purification of $\rho_E$ with respect to $X$. \item $\hat{\sf B}'$'s running time is polynomial in the running time of ${\sf B}'$. \end{enumerate} \end{definition} \subsection{From Benign to Computational Security} \label{sec:begnign.computational.compiler} We now show a generic compiler which transforms any BB84-type protocol into a new quantum protocol for the same task. The compiler achieves that, if the original protocol is unconditionally secure against dishonest Alice and unconditionally secure against \emph{benign} Bob, then the compiled protocol remains unconditionally secure against dishonest Alice but is now \emph{computationally secure} against an \emph{arbitrary} dishonest Bob. The idea behind the construction of the compiler is to incorporate a commitment scheme and force Bob to behave benignly by means of the $\mathtt{Commit\&Open}$-procedure. More precisely, we let Bob classically and position-wise commit to all his measurement bases and outcomes. Then Alice chooses a random test subset of size $\alpha m$ and checks, using Bob's openings, that the bits coincide whenever the bases match. If the test is passed, the post-processing is conducted on the remaining unopened positions. Otherwise, Alice aborts. Figure~\ref{fig:compiled} shows the compilation of an arbitrary BB84-type protocol $\Pi$. The quantum communication is increased from $n$ to $m = n/(1-\alpha)$ qubits, where $0 < \alpha < 1$ is an additional parameter that can be arbitrarily chosen, and the compiled protocol requires three more rounds of interaction. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol ${\cal C}^{\alpha}(\Pi)$ :} \\[-4ex] \begin{description}\setlength{\parskip}{0.5ex} \item[{\it Preparation:}] $ $\\ ${\sf A}$ chooses $x \in_R \set{0,1}^m$ and $\theta \in_R \set{\+,\x}^m$ and sends $\ket{x}_{\theta}$ to~${\sf B}$. Then, ${\sf B}$ chooses $\hat{\theta} \in_R \set{\+,\x}^m$ and obtains $\hat{x} \in \set{0,1}^m$ by measuring $\ket{x}_{\theta}$ in bases $\hat{\theta}$. \item[{\it Verification:}] \begin{enumerate} \item[] \item ${\sf B}$ commits to $\hat{\theta}$ and $\hat{x}$ position-wise by $c_i := \commitx{(\hat{\theta}_i,\hat{x}_i)}{r_i}$ with randomness $r_i$ for $i = 1,\ldots,m$. He sends the commitments to ${\sf A}$. \item\label{step:check} ${\sf A}$ sends a random test subset $T \subset \{1,\ldots,m \}$ of size $\alpha m$. ${\sf B}$ opens $c_i$ for all $i \in T$. ${\sf A}$ checks that the openings are correct and that $x_i = \hat{x}_i$ whenever $\theta_i = \hat{\theta}_i$. If all tests are passed, ${\sf A}$ accepts. Otherwise, she rejects and aborts. \item The tested positions are discarded by both parties: ${\sf A}$ and ${\sf B}$ restrict $x$ and $\theta$, respectively $\hat{\theta}$ and $\hat{x}$, to $i \in \bar{T}$. \end{enumerate} \item[{\it Post-processing:}] $ $\\ As in $\Pi$ (with $x, \theta,\hat{x}$ and $\hat{\theta}$ restricted to positions $i \in \bar{T}$).
\end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{The Compiled Protocol ${\cal C}^{\alpha}(\Pi)$.} \label{fig:compiled} \end{figure} Although apparently simple---intuition clearly suggests that if Bob passes the measurement test, he must have measured most of his qubits, also in the remaining untested subset---this $\mathtt{Commit\&Open}$ approach is not trivial to rigorously prove for a quantum Bob. Moreover, in order to preserve unconditional security against dishonest Alice, the commitment scheme needs to be unconditionally hiding, and so can be at best quantum-computationally binding. For a plain commitment scheme, however, the common reduction from computational security of the protocol ${\cal C}^{\alpha}(\Pi)$ to the computational binding property of a commitment scheme would require rewinding, but we do not know of any such technique for our protocol structure (see also Section~\ref{sec:rewinding} for an elaborated discussion). Therefore, we use our mixed dual-mode commitment construction $\mathtt{commit}$ from Section~\ref{sec:mixed.commit} that allows us to circumvent the necessity of rewinding. Recall that $\mathtt{commit}$ is a keyed dual-mode commitment scheme with unconditionally hiding key ${\tt pkH}$, generated by ${\cal G}_{\tt H}$, and unconditionally binding key ${\tt pkB}$, generated by ${\cal G}_{\tt B}$ along with a secret key ${\tt sk}$ that allows one to efficiently extract $m$ from $\commitk{m}{r}{{\tt pkB}}$. Furthermore, we have that ${\tt pkH} \approxq {\tt pkB}$. For simplicity and efficiency, we consider the CRS-model, and we assume the key ${\tt pkH}$ for the commitment scheme, generated according to ${\cal G}_{\tt H}$, to be contained in the common reference string. We discuss in Section~\ref{sec:key.generation.coin} how to avoid the CRS-model, at the cost of a non-constant round construction where the parties generate a common reference string jointly by coin-flipping. Such an approach allows us to implement the entire application without any set-up assumptions.
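Before stating the compiler theorem, the following sketch (purely classical, with the commitments and openings abstracted away) makes Alice's test in Step~(\ref{step:check}.)~concrete: she accepts if and only if, on the opened positions, the bits agree wherever the bases match.

\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

m, alpha = 256, 0.25
x         = rng.integers(0, 2, m)
theta     = rng.integers(0, 2, m)
theta_hat = rng.integers(0, 2, m)
x_hat = np.where(theta == theta_hat, x, rng.integers(0, 2, m))  # honest Bob

T = rng.choice(m, size=int(alpha * m), replace=False)  # random test subset

def test_passes(x, theta, x_hat, theta_hat, T):
    """Accept iff the opened bits agree wherever the bases match."""
    agree = (theta[T] != theta_hat[T]) | (x[T] == x_hat[T])
    return bool(agree.all())

assert test_passes(x, theta, x_hat, theta_hat, T)  # honest Bob passes
rest = np.setdiff1d(np.arange(m), T)   # the n = m - alpha*m kept positions
\end{verbatim}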
With our dual-mode commitment scheme, we arrive at the following theorem, capturing the compilation of any protocol from benign security towards computational security. \begin{theorem}[Compiler] \label{thm:compiler} Let $\Pi$ be a BB84-type protocol, unconditionally secure against dishonest Alice and against $\beta$-benign Bob for some constant $\beta \geq 0$. Consider the compiled protocol ${\cal C}^{\alpha}(\Pi)$ for arbitrary $\alpha > 0$, where the commitment scheme is instantiated by a dual-mode commitment scheme. Then, ${\cal C}^{\alpha}(\Pi)$ is unconditionally secure against dishonest Alice and quantum-computationally secure against dishonest Bob in the CRS-model. \end{theorem} \begin{proof} We sometimes write ${\cal C}^{\alpha}_{\tt pkH}(\Pi)$ for the compiled protocol ${\cal C}^{\alpha}(\Pi)$ to stress that key ${\tt pkH}$, produced by ${\cal G}_{\tt H}$, is used for the dual-mode commitment scheme. Analogously, we write ${\cal C}^{\alpha}_{\tt pkB}(\Pi)$ when key ${\tt pkB}$, produced by ${\cal G}_{\tt B}$, is used instead. Correctness is trivially checked. In order to show unconditional security against ${\sf A}'$, first note that the unconditionally hiding property of the commitment ensures that ${\sf A}'$ does not learn any additional information. Furthermore, as the ideal-world adversary $\hat{\sf A}'$ is not required to be poly-time bounded, according to Definition~\ref{def:unboundedAliceNiceOrder}, $\hat{\sf A}'$ can break the binding property of the commitment scheme, and thereby, perfectly simulate the behavior of honest ${\sf B}$ towards ${\sf A}'$ attacking ${\cal C}^{\alpha}(\Pi)$. The issue of efficiency of the ideal-life adversaries will be addressed in Section~\ref{sec:composability.compiler}. As for computational security against dishonest Bob, according to Definition~\ref{def:polyboundedBobCRS}, we need to prove that for every real-world adversary ${\sf B}' \in \dBob_{\mathrm{poly}}$ attacking ${\cal C}^{\alpha}(\Pi)$, there exists a suitable ideal-world adversary $\hat{\sf B}' \in \dBob_{\mathrm{poly}}$ attacking $\mathcal{F}$ such that \begin{equation} out_{{\sf A},{\sf B}'}^{{\cal C}^{\alpha}(\Pi)} \approxq out_{\hat{\sf A},\hat{\sf B}'}^\mathcal{F} \, . \end{equation} \newline First, note that by the computational indistinguishability of ${\tt pkH}$ and ${\tt pkB}$, \begin{equation}\label{eq:KeySwitch} out_{{\sf A},{\sf B}'}^{{\cal C}^{\alpha}(\Pi)} = out_{{\sf A},{\sf B}'}^{{\cal C}^{\alpha}_{\tt pkH}(\Pi)} \approxq out_{{\sf A},{\sf B}'}^{{\cal C}^{\alpha}_{{\tt pkB}}(\Pi)} \, . \end{equation} Then, we construct an adversary ${\sf B}'_\circ \in \dBob_{\mathrm{poly}}$ who attacks the unconditional security against benign Bob of protocol $\Pi$, and which satisfies \begin{equation}\label{eq:BenignBob} out_{{\sf A},{\sf B}'}^{{\cal C}^{\alpha}_{\tt pkB}(\Pi)} = out_{{\sf A}_\circ,{\sf B}'_\circ}^{\Pi} \, , \end{equation} where ${\sf A}_\circ$ honestly executes $\Pi$. We define ${\sf B}'_\circ$ in the following way. Consider the execution of ${\cal C}^{\alpha}(\Pi)$ between ${\sf A}$ and ${\sf B}'$. We split entity ${\sf A}$ into two players ${\sf A}_\circ$ and $\tilde{{\sf A}}$, where we think of $\tilde{{\sf A}}$ as being placed in between ${\sf A}_\circ$ and ${\sf B}'$. The split entities of this proof are also depicted in Figure~\ref{fig:splitted.entities}. ${\sf A}_\circ$ plays honest ${\sf A}$'s part of $\Pi$. $\tilde{{\sf A}}$ can be understood as completing $\mathtt{Commit\&Open}$. More specifically, $\tilde{{\sf A}}$ acts as follows. It receives $n$ qubits from ${\sf A}_\circ$ and produces $\alpha n/(1-\alpha)$ random BB84-qubits of its own. Then, it interleaves the produced qubits randomly with the received qubits and sends the resulting $m= n/(1-\alpha)$ qubits to ${\sf B}'$. $\tilde{{\sf A}}$ then completes the verification step of ${\cal C}^{\alpha}(\Pi)$ with ${\sf B}'$, asking him to open the commitments that correspond to $\tilde{{\sf A}}$'s produced qubits. If this results in accept, $\tilde{{\sf A}}$ lets ${\sf A}_\circ$ finish the protocol with ${\sf B}'$. Note that the pair $({\sf A}_\circ,\tilde{{\sf A}})$ does exactly the same as ${\sf A}$. However, we can also move the actions of $\tilde{{\sf A}}$ to ${\sf B}'$'s side, and define ${\sf B}'_\circ$ as follows. ${\sf B}'_\circ$ samples $({\tt pkB},{\tt sk})$ according to ${\cal G}_{\tt B}$ and executes $\Pi$ with ${\sf A}_\circ$ by locally running $\tilde{{\sf A}}$ and ${\sf B}'$, using ${\tt pkB}$ as commitment key. If $\tilde{{\sf A}}$ accepts the verification, then ${\sf B}'_\circ$ outputs $\hat{\theta} \in \set{\+,\x}^n$ (as required from a benign Bob), obtained by decrypting the unopened commitments with the help of ${\tt sk}$. Otherwise, ${\sf B}'_\circ$ aborts at this point. It is now clear that Eq.~\eqref{eq:BenignBob} holds.
Exactly the same computation takes place in both ``experiments''; the only difference is that it is executed partly by different entities. \newline \newline The last step is to show that, for some $\hat{\sf B}'$, \begin{equation}\label{eq:ByAssumption} out_{{\sf A}_\circ,{\sf B}'_\circ}^{\Pi} \approxs out_{\hat{\sf A},\hat{\sf B}'}^\mathcal{F} \, . \end{equation} Eq.~\eqref{eq:ByAssumption} actually claims that $\hat{\sf A}$ and $\hat{\sf B}'$ successfully simulate ${\sf A}_\circ$ and ${\sf B}'_\circ$ executing $\Pi$. This follows by assumption of benign security of $\Pi$, if we can show that ${\sf B}'_\circ$ is $\beta$-benign, according to Definition~\ref{def:BenignBob}, for any $\beta \geq 0$. We show this in the following subsection; more precisely, we prove that the joint state of ${\sf A}_\circ, {\sf B}'_\circ$ after the preparation phase is statistically indistinguishable from a state $\rho_{Ideal}$ which satisfies the bounds in Eq.~\eqref{eq:staterequirements} of Definition~\ref{def:BenignBob}. We conclude the current proof by noting that Theorem~\ref{thm:compiler} follows from Eqs.~\eqref{eq:KeySwitch}--\eqref{eq:ByAssumption} together. \end{proof} \begin{figure} \begin{framed} \begin{center} \begin{tikzpicture} \tikzstyle{every node}=[minimum size=6mm] \draw (0,0) node[draw] (zeroA) {${\sf A}_\circ$}; \draw (5,0) node[draw] (tildeA) {$\tilde{{\sf A}} $}; \draw (10,0) node[draw] (dB) {${\sf B}'$}; \draw[rounded corners] (zeroA.east) -- node[below] {$\Pi$} (tildeA); \draw[rounded corners] (tildeA.east) -- node[above] {${\cal C}^{\alpha}(\Pi)$} (dB); \draw[rounded corners] (zeroA.north) ++(-4mm,1mm) -- +(0mm, 2mm) -| node[pos=0.25,above] {${\sf A}$} ([shift={(4mm,1mm)}] tildeA.north); \draw[rounded corners] (tildeA.south) ++(-4mm,-1mm) -- +(0mm, -2mm) -| node[pos=0.25,below] {${\sf B}'_\circ$} ([shift={(4mm,-1mm)}] dB.south); \end{tikzpicture} \end{center} \end{framed} \vspace{-3.5ex} \caption{Constructing an attacker ${\sf B}'_\circ$ against $\Pi$ from an attacker ${\sf B}'$ against ${\cal C}^{\alpha}(\Pi)$.} \label{fig:splitted.entities} \end{figure} \subsection{Proof of Bounds on Entropy and Memory Size} \label{sec:bounds.compiler} Recall that ${\sf A}_\circ$ executing $\Pi$ with ${\sf B}'_\circ$ can equivalently be thought of as ${\sf A}$ executing ${\cal C}^{\alpha}_{\tt pkB}(\Pi)$ with ${\sf B}'$ (Figure~\ref{fig:splitted.entities}). Furthermore, a joint state of ${\sf A},{\sf B}'$ is clearly also a joint state of ${\sf A}_\circ, {\sf B}'_\circ$. To show the existence of $\rho_{Ideal}$ for ${\sf A}_\circ, {\sf B}'_\circ$ as promised above, it therefore suffices to show such a state for ${\sf A},{\sf B}'$. In other words, we need to show that the execution of ${\cal C}^{\alpha}_{\tt pkB}(\Pi)$ with honest ${\sf A}$ and arbitrarily dishonest ${\sf B}'$---after verification---will be close to a state where Eq.~\eqref{eq:staterequirements} holds. To show this closeness, we consider an equivalent EPR-version, where Alice creates $m$ EPR-pairs $(\ket{00}+\ket{11})/\sqrt{2}$, sends one qubit in each pair to Bob, and keeps the others in register $A$. Then, Alice measures her qubits only when needed, namely, she measures the qubits within $T$ in Step~(\ref{step:check}.)~of the verification phase, and the remaining qubits at the end of the verification phase.
With respect to the information Alice and Bob obtain, this EPR-version is {\em identical} to the original protocol ${\cal C}^{\alpha}_{\tt pkB}(\Pi)$ based on single qubits, since the only difference is the point in time when Alice obtains certain information. We can further modify the procedure without affecting Eq.~\eqref{eq:staterequirements} as follows. Instead of measuring her qubits in $T$ in {\em her} basis $\theta|_T$, Alice measures them in {\em Bob's} basis $\hat{\theta}|_T$. However, she still verifies only whether $x_i = \hat{x}_i$ for those $i \in T$ with $\theta_i = \hat{\theta}_i$. Because the positions $i \in T$ with $\theta_i \neq \hat{\theta}_i$ are not used in the protocol at all, this change has no effect. As the commitment scheme is unconditionally binding if key ${\tt pkB}$ is used, Bob's basis $\hat{\theta}$ is well defined by his commitments (although hard to compute), even if Bob is dishonest. The resulting scheme is given in Figure~\ref{fig:epr.version}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex}{\sc Protocol }EPR-${\cal C}^{\alpha}_{\tt pkB}(\Pi)$ : \\[-4ex] \begin{description}\setlength{\parskip}{0.5ex} \item[{\it Preparation:}] $ $\\ ${\sf A}$ prepares $m$ EPR-pairs $\frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ and sends the second qubit in each pair to ${\sf B}$, while keeping the other qubits in register $A = A_1\cdots A_m$. ${\sf B}$ chooses $\hat{\theta} \in_R \set{\+,\x}^m$ and obtains $\hat{x} \in \set{0,1}^m$ by measuring the received qubits in bases $\hat{\theta}$. \item[{\it Verification:}] \begin{enumerate} \item[] \item ${\sf B}$ commits to $\hat{\theta}$ and $\hat{x}$ position-wise by $c_i := \commitx{(\hat{\theta}_i,\hat{x}_i)}{r_i}$ with randomness $r_i$ for $i = 1,\ldots,m$. He sends the commitments to ${\sf A}$. \item\label{step:EPRcheck} ${\sf A}$ sends a random test subset $T \subset \{1,\ldots,m \}$ of size $\alpha m$. ${\sf B}$ opens $c_i$ for all $i \in T$. ${\sf A}$ chooses $\theta \in_R \set{\+,\x}^m$, measures registers $A_i$ with $i \in T$ in basis $\hat{\theta}_i$ to obtain $x_i$, and checks that the openings are correct and that $x_i = \hat{x}_i$ whenever $\theta_i = \hat{\theta}_i$ for $i \in T$. If all tests are passed, ${\sf A}$ accepts. Otherwise, she rejects and aborts. \item\label{step:EPRrest} ${\sf A}$ measures the remaining registers in basis $\theta|_{\bar{T}}$ to obtain $x|_{\bar{T}}$. The tested positions are discarded by both parties: ${\sf A}$ and ${\sf B}$ restrict $x$ and $\theta$, respectively $\hat{\theta}$ and $\hat{x}$, to $i \in \bar{T}$. \end{enumerate} \item[{\it Post-processing:}] $ $\\ As in $\Pi$ (with $x, \theta,\hat{x}$ and $\hat{\theta}$ restricted to positions $i \in \bar{T}$). \end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{The EPR-version of ${\cal C}^{\alpha}_{\tt pkB}(\Pi)$.} \label{fig:epr.version} \end{figure} We consider an execution of EPR-${\cal C}^{\alpha}_{\tt pkB}(\Pi)$ in Figure~\ref{fig:epr.version} with an honest ${\sf A}$ and a dishonest ${\sf B}'$, and we fix $\hat{\theta}$ and $\hat{x}$, determined by ${\sf B}'$'s commitments. Let $\ket{\varphi_{AE}} \in {\cal H}_A\otimes{\cal H}_E$ be the state of the joint system right before Step~(\ref{step:EPRcheck}.)~of the verification phase. Since we are anyway interested in the point-wise purification of ${\sf B}'$'s state, we may indeed assume this state to be pure. If it is not pure, we purify it and carry the purifying register $R$ along with $E$.
Clearly, if ${\sf B}'$ had honestly done his measurements then, for some $\ket{\varphi_E}\in {\cal H}_E$, \begin{equation*} \ket{\varphi_{AE}} = \ket{\hat{x}}_{\hat{\theta}} \otimes \ket{\varphi_E} \, . \end{equation*} In this case, the quantum memory $E$ would be empty, i.e., \begin{equation*} H_0(\proj{\varphi_E}) = 0 \, , \end{equation*} and the uncertainty about $X$, obtained when measuring $A|_{\bar{T}}$ in basis $\theta|_{\bar{T}}$, would be maximal in the sense that it would be exactly one bit in each position with non-matching bases, i.e., \begin{equation*} H_{\infty}(X) = d_H(\theta|_{\bar{T}},\hat{\theta}|_{\bar{T}}) \, . \end{equation*} We now show that the verification phase enforces these properties for an arbitrary dishonest ${\sf B}'$, at least approximately in the sense of Eq.~\eqref{eq:staterequirements}. Recall that $T \subset \set{1,\ldots,m}$ is random subject to $|T| = \alpha m$. Furthermore, for fixed $\hat{\theta}$ but randomly chosen $\theta$, the subset $T' = \Set{i \in T}{ \theta_i = \hat{\theta}_i}$ is a random subset (of arbitrary size) of $T$. Let the random variable $T\hspace{-0.3ex}e\hspace{-0.15ex}st$ describe the choice of $t\hspace{-0.15ex}e\hspace{-0.1ex}st = (T,T')$ as specified above, and consider the state $\rho_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$, consisting of the classical $T\hspace{-0.3ex}e\hspace{-0.15ex}st$ and the quantum state $\ket{\varphi_{AE}}$ with \begin{equation*} \rho_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE} = \rho_{T\hspace{-0.3ex}e\hspace{-0.15ex}st} \otimes \proj{\varphi_{AE}} = \sum_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} P_{T\hspace{-0.3ex}e\hspace{-0.15ex}st}(t\hspace{-0.15ex}e\hspace{-0.1ex}st) \proj{t\hspace{-0.15ex}e\hspace{-0.1ex}st} \otimes \proj{\varphi_{AE}} \, . \end{equation*} Recall that $r_H(\cdot,\cdot)$ denotes the relative Hamming distance between two strings (see Eq.~\eqref{eq.relative.hamming}). The following lemma shows that the state $\rho_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$ is close to an ``ideal state'' $\tilde{\rho}_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$, capturing a situation where, for {\em any} choice of $T$ and $T'$ and for {\em any} outcome $x|_T$ when measuring $A|_T$ in basis $\hat{\theta}|_T$, the relative error $r_H(x|_{T'},\hat{x}|_{T'})$ (the test estimate) gives an upper bound (which holds with probability 1) on the relative error $r_H(x|_{\bar{T}},\hat{x}|_{\bar{T}})$ one would obtain by measuring the remaining subsystems $A_i$ with $i \in \bar{T}$ in basis $\hat{\theta}_i$. \begin{lemma} \label{lemma:ideal.state} For any $\varepsilon > 0$, $\hat{x}\in\{0,1\}^m$ and $\hat{\theta}\in \{\+,\x\}^m$, the state $\rho_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$ is negligibly close (in $m$) to a state $$ \tilde{\rho}_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE} = \sum_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} P_{T\hspace{-0.3ex}e\hspace{-0.15ex}st}(t\hspace{-0.15ex}e\hspace{-0.1ex}st) \proj{t\hspace{-0.15ex}e\hspace{-0.1ex}st} \otimes \proj{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}} \, , $$ where for any $t\hspace{-0.15ex}e\hspace{-0.1ex}st = (T,T')$, we have $$ \ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}} = \sum_{x \in B_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}} \alpha^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_x \ket{x}_{\hat{\theta}} \ket{\psi_E^x}\, , $$ for $B_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} = \{x \in \{0,1\}^m \,|\, r_H(x|_{\bar{T}},\hat{x}|_{\bar{T}}) \leq r_H(x|_{T'},\hat{x}|_{T'}) + \varepsilon \}$ and arbitrary coefficients $\alpha^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_x \in \C$.
\end{lemma} We want to point out that the ``ideal state'' $\ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}}$ in the remaining subsystem after the test is a superposition of states with relative Hamming distance upper bounded by the test estimate (plus a small error $\varepsilon$). This is the case since we sum over all $x$ restricted to the set specifying exactly that; note also that ${\sf B}'$'s subsystem $\ket{\psi_E^x}$ depends on $x$, which means, informally speaking, that only such states survive. In other words, we are indeed left with a superposition over all strings whose relative Hamming distance is $\varepsilon$-close to the test estimate.\\ \begin{proof} For any $t\hspace{-0.15ex}e\hspace{-0.1ex}st$, we let $\ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}}$ be the renormalized projection of $\ket{\varphi_{AE}}$ into the subspace $\mathrm{span}\{\ket{x}_{\hat{\theta}} \,|\, x \in B_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} \} \otimes {\cal H}_E$ and we let $\ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st \perp}_{AE}}$ be the renormalized projection of $\ket{\varphi_{AE}}$ into the orthogonal complement, such that $\ket{\varphi_{AE}} = \varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} \ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}} + \varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}^\perp \ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st \perp}_{AE}}$ with $\varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} = \braket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}}{\varphi_{AE}}$ and $\varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}^\perp = \braket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st \perp}_{AE}}{\varphi_{AE}}$. By construction, $\ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}}$ is of the form as required in the statement of the lemma. A basic property of the trace norm of pure states leads to \begin{equation*} \delta\bigl( \proj{\varphi_{AE}}, \proj{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}} \bigr) = \sqrt{1 - |\braket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}}{\varphi_{AE}}|^2} = \sqrt{1 - |\varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}|^2} = |\varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}^\perp| \, . \end{equation*} This last term corresponds to the square root of the probability, given $t\hspace{-0.15ex}e\hspace{-0.1ex}st$, of observing a string $x \not\in B_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}$ when measuring subsystem $A$ of $\ket{\varphi_{AE}}$ in basis $\hat{\theta}$.
Furthermore, using elementary properties of the trace norm with Jensen's inequality\footnote{In this context, we use Jensen's inequality in the form $f\big( \sum_i p_i x_i \big) \leq \sum_i p_i f(x_i)$, for a probability distribution $p_i$ and a real convex function $f$.} gives \begin{align*} \delta\bigl( \rho_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE},\tilde{\rho}_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE} \bigr)^2 &= \bigg( \sum_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} P_{T\hspace{-0.3ex}e\hspace{-0.15ex}st}(t\hspace{-0.15ex}e\hspace{-0.1ex}st) \,\delta\bigl( \proj{\varphi_{AE}}, \proj{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}} \bigr) \bigg)^2 \\ &= \bigg( \sum_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} P_{T\hspace{-0.3ex}e\hspace{-0.15ex}st}(t\hspace{-0.15ex}e\hspace{-0.1ex}st) \, |\varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}^\perp| \bigg)^2 \leq \sum_{t\hspace{-0.15ex}e\hspace{-0.1ex}st} P_{T\hspace{-0.3ex}e\hspace{-0.15ex}st}(t\hspace{-0.15ex}e\hspace{-0.1ex}st) \, |\varepsilon_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}^\perp|^2 \, , \end{align*} where the last term is the probability to observe a string $x \not\in B_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}$ when choosing $t\hspace{-0.15ex}e\hspace{-0.1ex}st$ according to $P_{T\hspace{-0.3ex}e\hspace{-0.15ex}st}$ and measuring subsystem $A$ of $\ket{\varphi_{AE}}$ in basis $\hat{\theta}$. This situation, though, is a classical sampling problem, for which it is well known that for any measurement outcome $x$, the probability (over the choice of $t\hspace{-0.15ex}e\hspace{-0.1ex}st$) that $x \not\in B_{t\hspace{-0.15ex}e\hspace{-0.1ex}st}$ is negligible in $m$ (see~e.g.~\cite{Hoeffding63}). Thus, it follows that state $\rho_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$ is negligibly close (in $m$) to state $\tilde{\rho}_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$. \end{proof} Next, we need a preliminary lemma, bounding the min- and max-entropies of a pure state that is a ``small superposition'' of basis vectors. \begin{lemma} \label{lemma:small.superposition} Let $\ket{\varphi_{AE}} \in {\cal H}_A \otimes {\cal H}_E$ be a state of the form $\ket{\varphi_{AE}} = \sum_{i \in J} \alpha_i \ket{i}\ket{\varphi_E^i}$, where $\set{\ket{i}}_{i \in I}$ is a basis of ${\cal H}_A$ and $J \subseteq I$. Then, the following holds. \begin{enumerate} \item\label{it:boundHoo} Let $\tilde{\rho}_{AE} = \sum_{i \in J} |\alpha_i|^2 \proj{i}\otimes\proj{\varphi_E^i}$, and let $W$ and $\tilde{W}$ be the outcomes of measuring $A$ of $\ket{\varphi_{AE}}$, respectively of $\tilde{\rho}_{AE}$, in some basis $\set{\ket{w}}_{w \in \cal W}$. Then, $$ H_{\infty}(W) \geq H_{\infty}(\tilde{W}) - \log|J| \, . $$ \item\label{it:boundHo} The reduced density matrix $\rho_E = \tr_A(\proj{\varphi_{AE}})$ has max-entropy $$ H_0(\rho_E) \leq \log |J| \, .
$$ \end{enumerate} \end{lemma} Note that when using Renner's definition for conditional min-entropy of~\cite{Renner05} under Point~(\ref{it:boundHoo}.), one can actually show that $H_{\infty}(W|E) \geq H_{\infty}(\tilde{W}|E) - \log|J|$.\\ \begin{proof} For Point~(\ref{it:boundHoo}.), we may understand $\tilde{\rho}_{AE}$ as being in state $\ket{i}\ket{\varphi_E^i}$ with probability $|\alpha_i|^2$, so that we easily see that \begin{align*} P_{\tilde{W}}(w) &= \sum_{i \in J} |\alpha_i|^2 |\braket{w}{i}|^2 = \sum_{i \in J} |\alpha_i|^2 |\braket{w}{i}|^2 \cdot \sum_{i \in J} 1^2 \cdot \frac{1}{|J|} \\ &\geq \bigg|\sum_{i \in J} \alpha_i \braket{w}{i}\bigg|^2 \cdot \frac{1}{|J|} = \bigg|\bra{w}\sum_{i \in J} \alpha_i \ket{i}\bigg|^2 \cdot \frac{1}{|J|} = P_{W}(w) \cdot \frac{1}{|J|} \, , \end{align*} where the inequality is Cauchy-Schwarz\footnote{In this context, we use the Cauchy-Schwarz inequality in the form $\big(\sum_i | x_i|^2\big)\big(\sum_i |y_i|^2\big) \geq \big|\sum_i x_i y_i\big|^2$.}. The claim follows (with Definition~\ref{def:min.entropy}). For Point~(\ref{it:boundHo}.), note that $\rho_E = \tr_A( \proj{\varphi_{AE}} ) = \sum_{i \in J} |\alpha_i|^2 \proj{\varphi_E^i}$. The claim follows immediately from the sub-additivity of the rank, i.e., $$ \rank{\rho_E} \leq \sum_{i \in J} \rank{|\alpha_i|^2 \proj{\varphi_E^i}} \leq \sum_{i \in J} 1 = |J| \, , $$ where we use that all $\proj{\varphi_E^i}$ have rank (at most) 1. \end{proof} Now, combining Lemma~\ref{lemma:small.superposition} on ``small superpositions of product states'' with the fact that the binary entropy $h$ satisfies $\big|\{ y \in \{0,1\}^n \,|\, d_H(y,\hat{y}) \leq \mu n \}\big| \leq 2^{h(\mu) n}$ for any $\hat{y} \in \set{0,1}^n$ and $0 \leq \mu \leq \frac12$, we can conclude the following corollary. \begin{corollary} \label{corSerge} Let $\tilde{\rho}_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$ be of the form as in Lemma~\ref{lemma:ideal.state} (for given $\varepsilon$, $\hat{x}$ and $\hat{\theta}$). For any fixed $t\hspace{-0.15ex}e\hspace{-0.1ex}st = (T,T')$ and for any fixed $x|_T \in \{0,1\}^{\alpha m}$ with \smash{$e\hspace{-0.1ex}r\hspace{-0.15ex}r := r_H(x|_{T'},\hat{x}|_{T'}) \leq \frac12$}, let $\ket{\psi_{AE}}$ be the state to which \smash{$\ket{\tilde{\varphi}^{t\hspace{-0.15ex}e\hspace{-0.1ex}st}_{AE}}$} collapses when, for every $i \in T$, subsystem $A_i$ is measured in basis $\hat{\theta}_i$ and $x_i$ is observed, where we understand $A$ in $\ket{\psi_{AE}}$ to be restricted to the registers $A_i$ with $i \in \bar T$. Finally, let $\sigma_E = \tr_A(\proj{\psi_{AE}})$ and let the random variable $X$ describe the outcome when measuring the remaining $n = (1-\alpha) m$ subsystems of $A$ in basis $\theta|_{\bar{T}} \in \set{\+,\x}^n$. Then, for any subset $I \subseteq \set{1,\ldots,n}$ and any $x|_{\bar{I}}$,\footnote{Below, $\theta|_I$ (and similarly $\hat{\theta}|_I$) should be understood as first restricting the $m$-bit vector $\theta$ to $\bar{T}$, and then restricting the resulting $n$-bit vector $\theta|_{\bar{T}}$ to $I$: $\theta|_I:= {(\theta|_{\bar{T}})|}_I$.} $$ H_{\infty}\bigl(X|_I \,\big|\,X|_{\bar{I}} = x|_{\bar{I}}\bigr) \geq d_H\bigl(\theta|_I,\hat{\theta}|_I\bigr) - h(e\hspace{-0.1ex}r\hspace{-0.15ex}r + \varepsilon) n $$ and $$ H_0\bigl(\sigma_E\bigr) \leq h(e\hspace{-0.1ex}r\hspace{-0.15ex}r + \varepsilon) n \, .
$$ \end{corollary} Thus, the number of errors between the measured $x|_{T'}$ and the given $\hat{x}|_{T'}$ gives us a bound on the min-entropy of the outcome when measuring the remaining subsystems of $A$, and on the max-entropy of the state of subsystem $E$.\\ \begin{proof} To simplify notation, we write $\vartheta = \theta|_{\bar{T}}$ and $\hat{\vartheta} = \hat{\theta}|_{\bar{T}}$. By definition of $\tilde{\rho}_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$, for any fixed values of $\varepsilon,\hat{x}$ and $\hat{\theta}$, the state $\ket{\psi_{AE}}$ is of the form $\ket{\psi_{AE}} = \sum_{y \in {\cal Y}} \alpha_y \ket{y}_{\hat{\vartheta}} \otimes \ket{\psi^y_E}$, where ${\cal Y} = \Set{y \in \set{0,1}^n}{d_H(y,\hat{x}|_{\bar{T}}) \leq (e\hspace{-0.1ex}r\hspace{-0.15ex}r + \varepsilon)n}$. Recall here that $r_H(y,\hat{x}|_{\bar{T}}) = d_H(y,\hat{x}|_{\bar{T}})/n$. Consider the corresponding mixture $\tilde{\sigma}_{AE} = \sum_{y \in {\cal Y}} |\alpha_y|^2 \ket{y}_{\hat{\vartheta}} \bra{y}_{\hat{\vartheta}} \otimes\proj{\psi_E^y}$ and define $\tilde{X}$ as the random variable for the outcome when measuring register $A$ of $\tilde{\sigma}_{AE}$ in basis $\vartheta$. Note that $H_{\infty}(\tilde{X})\geq d_H(\vartheta,\hat{\vartheta})$, since any state $\ket{y}_{\hat{\vartheta}}$, when measured in basis $\vartheta$, produces a random bit for every position $i$ with $\vartheta_i \neq \hat{\vartheta}_i$ (see also the definition of the min-entropy (Definition~\ref{def:min.entropy}) and note that each of the $2^{d_H(\vartheta,\hat{\vartheta})}$ possible outcomes on these positions is equally likely). Lemma~\ref{lemma:small.superposition} allows us to conclude that $$ H_0(\sigma_E) \leq \log |{\cal Y}| \leq \log 2^{h(e\hspace{-0.1ex}r\hspace{-0.15ex}r+\varepsilon)n} = h(e\hspace{-0.1ex}r\hspace{-0.15ex}r+\varepsilon)n \, , $$ and similarly, $$ H_{\infty}(X) \geq H_{\infty}(\tilde{X})-\log{|{\cal Y}|} \geq d_H(\vartheta,\hat{\vartheta})-h(e\hspace{-0.1ex}r\hspace{-0.15ex}r+\varepsilon)n \, . $$ This proves the claim for $I = \set{1,\ldots,n}$. For arbitrary $I \subset \set{1,\ldots,n}$ and $x|_{\bar{I}}$, we can consider the pure state obtained by measuring the registers $A_i$ with $i \not\in I$ in basis $\vartheta_i$, when $x|_{\bar{I}}$ is observed. This state is still a superposition of at most $|{\cal Y}|$ vectors, and thus we can apply the exact same reasoning to obtain Eq.~\eqref{eq:staterequirements}. \end{proof} The initial claim to be shown now follows by combining Lemma~\ref{lemma:ideal.state} and Corollary~\ref{corSerge}. Indeed, the ideal state $\rho_{Ideal}$ we promised, for which \eqref{eq:staterequirements} holds, is produced by putting ${\sf A}$ and ${\sf B}'$ in state $\tilde{\rho}_{T\hspace{-0.3ex}e\hspace{-0.15ex}st AE}$, defined in Lemma~\ref{lemma:ideal.state}, and then running Steps~(\ref{step:EPRcheck}.)~and~(\ref{step:EPRrest}.)~of the verification phase. This state is negligibly close to the real state, since by Lemma~\ref{lemma:ideal.state}, we were negligibly close to the real state before these operations. Corollary~\ref{corSerge} ensures that the bounds for benign Bob, as stated in the definition of benignity in Eq.~\eqref{eq:staterequirements}, are satisfied. \section{In the Presence of Noise} \label{sec:compiler.noise} In the description of the compiler and in its analysis in the previous section, we assumed the quantum communication to be noise-free. Indeed, in the case of transmission errors, honest Alice is likely to reject an execution with honest Bob. However, it is straightforward to generalize the result to noisy quantum communication as follows. In Step~(\ref{step:check}.)~in the verification phase of ${\cal C}^{\alpha}(\Pi)$, Alice rejects and aborts if the relative number of errors between $x_i$ and $\hat{x}_i$ for $i \in T$ with $\theta_i = \hat{\theta}_i$ exceeds the error probability $\phi$, induced by the noise in the quantum communication, by some small $\varepsilon' > 0$. By Hoeffding's inequality~\cite{Hoeffding63}, this guarantees that honest Alice does not reject honest Bob, except with exponentially small probability. Furthermore, proving the security of this ``noise-resistant'' compiler goes along the exact same lines as for the original compiler. The only difference is that when applying Corollary~\ref{corSerge}, the parameter $e\hspace{-0.1ex}r\hspace{-0.15ex}r$ has to be chosen as $e\hspace{-0.1ex}r\hspace{-0.15ex}r = \phi + \varepsilon'$, such that the bounds in Eq.~\eqref{eq:staterequirements} hold for $$ \beta = h(e\hspace{-0.1ex}r\hspace{-0.15ex}r+\varepsilon) = h(\phi+\varepsilon'+\varepsilon) \, . $$ Thus, the claim of our compiler-theorem (Theorem~\ref{thm:compiler}) holds for any $\beta$-benign Bob with $\beta > h(\phi)$ (by choosing $\varepsilon,\varepsilon' > 0$ small enough).
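To get a feeling for the numbers, the following snippet evaluates the modified acceptance test and the resulting benignity parameter $\beta = h(\phi+\varepsilon'+\varepsilon)$; the values for $\phi$, $\varepsilon$ and $\varepsilon'$ are illustrative choices of ours, not prescribed by the analysis.

\begin{verbatim}
import math

def h(mu):
    """Binary entropy, with h(0) = h(1) = 0."""
    if mu <= 0.0 or mu >= 1.0:
        return 0.0
    return -mu * math.log2(mu) - (1 - mu) * math.log2(1 - mu)

phi, eps, eps_p = 0.02, 0.005, 0.005  # noise rate and slack (illustrative)

# Alice's modified test: reject iff the relative error rate on the tested,
# basis-matching positions exceeds phi + eps_p.
errors, tested = 7, 300
accept = (errors / tested) <= phi + eps_p

beta = h(phi + eps_p + eps)           # bound for the noisy-case compiler
print(f"accept={accept}, beta={beta:.4f}, h(phi)={h(phi):.4f}")
\end{verbatim}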
\section{Bounded-Quantum-Storage Security} \label{sec:compiler.hybrid.security} \index{bounded-quantum-storage model} In this section, we show that our compiler preserves security in the bounded-quantum-storage model. Recall that in the BQSM, one of the players, in our case Bob, is assumed to be able to store only a limited number of qubits beyond a certain point in the protocol. BQSM-secure OT- and ID-protocols are known~\cite{DFSS07}, but can be efficiently broken if the memory bound does not hold. Therefore, we show here that applying the compiler produces protocols with better security, namely, that the adversary needs large quantum storage {\em and} large computing power to succeed. In Chapter~\ref{chap:hybrid.security.applications}, we will then discuss the compiled protocols with hybrid security in more detail. Consider a BB84-type protocol $\Pi$. For a constant $0 < \gamma < 1$, let $\dBob_{\text{\sc bqsm}}^\gamma(\Pi)$ be the set of dishonest players ${\sf B}'$ that store only $\gamma n$ qubits after a certain point in $\Pi$, where $n$ is the number of qubits sent initially. Protocol $\Pi$ is said to be unconditionally secure against such a $\gamma$-BQSM Bob, if it satisfies Definition~\ref{def:qmemoryBob} with the restriction that the quantification is over all dishonest ${\sf B}' \in \dBob_{\text{\sc bqsm}}^\gamma(\Pi)$. \begin{theorem} \label{thm:BQSM} If protocol $\Pi$ is unconditionally secure against $\gamma$-BQSM Bob, then the compiled protocol ${\cal C}^{\alpha}(\Pi)$ is unconditionally secure against $\gamma(1\!-\!\alpha)$-BQSM Bob, where $0 < \alpha < 1$. \end{theorem} \begin{proof} The proof proceeds as the proof for our compiler-theorem (Theorem~\ref{thm:compiler}). We have a dishonest ${\sf B}'$ that attacks ${\cal C}^{\alpha}(\Pi)$, and we construct a ${\sf B}'_\circ$ that attacks the original protocol $\Pi$. The only difference here is that we let ${\sf B}'_\circ$ generate the common reference string ``correctly'' as ${\tt pkH}$ sampled according to ${\cal G}_{\tt H}$.
It follows by construction of ${\sf B}'_\circ$ that $out^{{\cal C}^{\alpha}(\Pi)}_{{\sf A},{\sf B}'} = out^{\Pi}_{{\sf A}_\circ, {\sf B}'_\circ} \, .$ Furthermore, since ${\sf B}'_\circ$ requires the same amount of quantum storage as ${\sf B}'$ but communicates an $\alpha$-fraction fewer qubits, it follows that ${\sf B}'_\circ \in \dBob_{\text{\sc bqsm}}^\gamma(\Pi)$ if ${\sf B}' \in \dBob_{\text{\sc bqsm}}^{\gamma(1-\alpha)}({\cal C}^{\alpha}(\Pi))$. Thus, it follows that there exists $\hat{\sf B}'$ such that $out^{\Pi}_{{\sf A}_\circ, {\sf B}'_\circ} \stackrel{s}{\approx} out^{\mathcal{F}}_{\hat{\sf A},\hat{\sf B}'} \, .$ This proves the claim. \end{proof} \section{Composability} \label{sec:composability.compiler} Several composition frameworks for the quantum setting have been proposed, for instance, sequential composability in a classical environment~\cite{FS09}, sequential composability in a quantum environment but restricted to the BQSM~\cite{WW08}, or attempts at generalizing the universal classical composability framework (UC in~\cite{Canetti01}) to universal quantum composability~\cite{BM04,Unruh04,Unruh10}. Here, we will briefly investigate our protocols in the particular composition frameworks we consider most appropriate for our setting. \subsection{Sequential Composition} \label{sec:sequential.composition.compiler} \index{composition!sequential} All our definitions for correctness and security of our two-party quantum protocols comply with the composition framework of~\cite{FS09}, as described in detail in Section~\ref{sec:security.definition}. In particular, we will show in the next chapter that all of our quantum protocols $\pi$ securely implement their corresponding ideal functionalities $\mathcal{F}$. Thus, according to the Composition Theorems~\ref{thm:composition.unconditional} and~\ref{thm:composition.computational}, we arrive at a situation where an outer protocol $\Sigma^{\pi_1\cdots\pi_\ell}$, composed of possibly different inner sub-protocols $\pi_i$, is essentially as secure as any hybrid protocol $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ with sequential calls to the corresponding ideal functionalities $\mathcal{F}_i$. Sequential composition in a classical environment follows immediately. \subsection{General Composition} \label{sec:general.composition.compiler} \index{composition!general} Our strong simulation-based security approach is clearly closely related to the concept of universal composability, but our definitions do not imply UC-security. The definitions of unconditional security leading to sequential composability do not require the running time of ideal-world adversaries to be polynomial whenever the real-life adversaries run in polynomial time. Fortunately, by extending our basic commitment construction, we can achieve efficient ideal-life adversaries. Therewith, we get efficient simulation on both sides without rewinding any dishonest player. One might hope that our compilation already preserves efficiency of the simulator, namely, that if protocol $\Pi$ is secure against dishonest ${\sf A}'$ with an efficient simulator $\hat{\sf A}'$, then so is ${\cal C}^{\alpha}(\Pi)$. Although this would be desirable, it does not seem to be the case for our basic construction, for the following reason. In order to show such a result, we would need to simulate the pre-processing phase against dishonest ${\sf A}'$ efficiently and without measuring the qubits that are not opened during pre-processing.
Then, after preparation and verification, we could give the remaining qubits to $\hat{\sf A}'$ to simulate the rest of the protocol as specified previously. However, the whole point of the pre-processing is to ensure that a real Bob measures all qubits, unless he can break the binding property of the commitments. Thus, the only way to resolve this situation is to give the simulator some trapdoor with which it can make commitments and open them any way it wants, or in other words, to equip the simulator with the possibility to equivocate\index{commitment!equivocability} its commitments. With such an equivocability trapdoor, the simulation of the verification phase is straightforward. $\hat{\sf A}'$ just waits until ${\sf A}'$ reveals the test subset, measures the qubits in the test subset, and opens the commitments according to the measurement results. Then, $\hat{\sf A}'$ simulates the protocol with the remaining unopened qubits. Our basic commitment construction, introduced in Section~\ref{sec:mixed.commit}, does not provide such an equivocability trapdoor. However, we can extend the scheme as discussed in Section~\ref{sec:extended.commit.compiler} by first extending our mixed commitment to the multi-bit crypto-system of~\cite{PVW08} and then combining it with an HVZK-$\Sigma$-protocol construction for some quantumly hard $\ensuremath{\mathcal{NP}}$-relation $\Rel$. As previously shown, equivocability emerges in this construction from the simulator's knowledge of a valid witness $w$ such that $(x,w)\in \Rel$. In that case, the simulator can compute two accepting conversations for the $\Sigma$-protocol, and therewith, answer both challenges. The extension preserves the different but indistinguishable dual-modes of the underlying commitment scheme, such that the committed bit can still be extracted by a simulator $\hat{\sf B}'$ by decrypting both commitments $C_0,C_1$ to determine which contains a valid reply in the $\Sigma$-protocol. In~\cite{Unruh10}, a special case of our generic construction, namely the quantum oblivious transfer protocol of Section~\ref{sec:hybrid.security.ot}, is related to the quantum-UC context\index{composition!quantum-UC}. It is shown that the protocol \emph{statistically quantum-UC-emulates} its ideal functionality in the case of corrupted Alice and corrupted Bob, if it is instantiated with an \emph{ideal commitment functionality}. Furthermore, it is established that security as specified in~\cite{FS09} implies quantum-UC-security in the case of our OT-protocol.\footnote{The security we achieve here is called \emph{quantum stand-alone security} in~\cite{Unruh10}, but we prefer to describe the statements in the terms used throughout this work.} Last, the OT-protocol, in its randomized version and when instantiated with an unconditionally binding commitment scheme, implements its corresponding ideal functionality with \emph{statistical security} in the case of corrupted Bob. Even though the last result is based on an actual commitment scheme, it only covers a dishonest Bob; by using an unconditionally binding scheme in the real world, we would lose unconditional security against dishonest Alice.
However, by combining our extended construction as described above with the results of Section~\ref{sec:extended.commit.compiler} and with~\cite[Theorem 20]{Unruh10}, we get the following stronger result that applies to our generic compiler construction: Let $\Pi$ be a BB84-type protocol as specified in Theorem~\ref{thm:compiler} and let ${\cal C}^{\alpha}(\Pi)$ be its compilation, instantiated with an extended mixed commitment construction in the CRS-model as described above. Then, ${\cal C}^{\alpha}(\Pi)$ \emph{computationally quantum-UC-emulates} its corresponding ideal functionality $\mathcal{F}$ for \emph{both dishonest players}. \chapter{Applications} \label{chap:hybrid.security.applications} Our compiler, discussed in the previous chapter, can be easily applied to known protocols. Here, we show two example applications, namely oblivious transfer and password-based identification. Since the original protocols are BQSM-secure, we also obtain hybrid security by compilation. These results appeared in~\cite{DFLSS09}. We then show that the compiled identification protocol is secure against man-in-the-middle attacks, which was sketched in~\cite{DFLSS09}, but formal proofs were omitted there. \section{Oblivious Transfer} \label{sec:hybrid.security.ot} \index{oblivious transfer} Oblivious transfer, as introduced in Section~\ref{sec:primitives.ot}, constitutes a highly relevant cryptographic primitive, which is complete for general two-party computation and, in its quantum variant, reduces to classical commitment. As a building block it can be securely used in outer quantum or classical protocols and extends, for instance, to quantum identification. \subsection{Motivation and Related Work} \label{sec:hybrid.security.ot.history} As mentioned already, the idea of introducing a $\mathtt{Commit\&Open}$-step to improve the security of quantum protocols was suggested in the first quantum OT protocol of Cr\'epeau and Kilian~\cite{CK88}, which, in its original form, proposes a protocol for $\tt{Rabin\text{--}OT}$, and in the practical follow-up work of Bennett, Brassard, Cr\'epeau and Skubiszewska~\cite{BBCS91}, implementing $\ot{1}{2}^1$. The $\mathtt{Commit\&Open}$ approach is sketched as a ``conceptually easy fix''~\cite[p.~14]{BBCS91} in a situation where a dishonest Bob has large quantum storage. Various partial results for OT in that context followed. For instance, in~\cite{Yao95} such a construction is proven secure against any receiver in the case of noiseless communication. To make the proof work, however, an ideal black-box commitment scheme is assumed. This approach was then generalized to noisy channels and perfect string commitments in~\cite{Mayers96}. Another approach in the computational setting was taken in~\cite{CDMS04}. There it was shown that a computationally binding quantum string commitment would enforce an apparent collapse of Bob's quantum information, which in turn would imply secure OT. The paper concludes with the open question of how to construct an actual commitment scheme as required to get an applicable protocol. Based on our analysis of Section~\ref{sec:compiler.hybrid.security}, we can now rather simply apply our compiler to (a variant of) the protocol in~\cite{BBCS91}, and therewith, give a complete proof for a concrete unconditionally hiding commitment scheme. \subsection{The Protocol} \label{sec:hybrid.security.ot.protocol} The variant we consider here achieves $\ot{1}{2}^\ell$.
Recall that in such a protocol, the sender ${\sf A}$ holds two $\ell$-bit strings $s_0$ and $s_1$. The receiver ${\sf B}$ can select one string $s_k$ to receive, but he does not learn anything about $s_{1-k}$. Finally, ${\sf A}$ does not learn ${\sf B}$'s choice bit $k$. The ideal oblivious transfer functionality $\mathcal{F_{\sf{OT}}}$ is shown in Figure~\ref{fig:ideal.functionality.ot}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex}{\sc Functionality $\mathcal{F_{\sf{OT}}}$ :}\\ Upon receiving $\tt s_0$ and $\tt s_1$ from Alice and $\tt k$ from Bob, $\mathcal{F_{\sf{OT}}}$ outputs $\tt s_k$ to Bob. \vspace{-1ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal Functionality for String OT.}\label{fig:otF} \label{fig:ideal.functionality.ot} \end{figure} Our protocol is almost identical to $\ot{1}{2}^1$ introduced in~\cite{BBCS91}, but instead of using parity values to mask the bits in the last protocol message, we follow the approach of~\cite{DFRSS07}. Their BQSM-secure protocol $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$ for the randomized version uses hash-functions that allow for transferring an $\ell$-bit string instead of a single bit in the final message. Let $\mathcal{F}$ denote a suitable family of two-universal hash-functions $\set{0,1}^n \rightarrow \set{0,1}^\ell$ as specified in Definition~\ref{def:hashing}. Note that if the input to the function is shorter than $n$, we can pad it with zeros without decreasing its entropy. We further assume that $\ell = \lfloor \lambda n \rfloor$ for some constant $\lambda > 0$. Then, after the modifications described above, we obtain the basic ${\tt 1\text{-}2 \, QOT^\ell}$ protocol, depicted in Figure~\ref{fig:basic.ot}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol ${\tt 1\text{-}2 \, QOT^\ell}:$ } \\[-4ex] \begin{description}\setlength{\parskip}{0.5ex} \item[{\it Preparation:}]$ $ \\ ${\sf A}$ chooses $x \in_R \set{0,1}^n$ and $\theta \in_R \set{+,\x}^n$ and sends $\ket{x}_{\theta}$ to~${\sf B}$. ${\sf B}$ chooses $\hat{\theta} \in_R \set{0,1}^n$ and obtains $\hat{x} \in \set{0,1}^n$ by measuring $\ket{x}_{\theta}$ in bases $\hat{\theta}$. \item[{\it Post-processing:}] \begin{enumerate} \item[] \item ${\sf A}$ sends $\theta$ to ${\sf B}$. \item ${\sf B}$ partitions all positions $1\leq i \leq n$ into two subsets according to his choice bit $k \in \{ 0,1 \}$: the ``good'' subset $I_k := \{ i: \theta_i = \hat{\theta}_i \}$ and the ``bad'' subset $I_{1-k} := \{ i: \theta_i \neq \hat{\theta}_i \}$. ${\sf B}$ sends $(I_0,I_1)$ in this order. \item ${\sf A}$ sends descriptions of $f_0,f_1\in_R {\cal F}$ together with $m_0 := s_0 \oplus f_0(x|_{I_0})$ and $m_1 := s_1 \oplus f_1(x|_{I_1})$. \item ${\sf B}$ computes $s_k = m_k \oplus f_k(\hat{x}|_{I_k})$. \end{enumerate} \end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Basic Protocol for String OT.} \label{fig:basic.ot} \end{figure} \begin{proposition} Protocol ${\tt 1\text{-}2 \, QOT^\ell}$ satisfies correctness and achieves unconditional security against dishonest Alice, according to Definitions~\ref{def:correctness} and~\ref{def:unboundedAliceNiceOrder}, respectively. \end{proposition} \begin{proof} Correctness for honest players is obvious: ${\sf B}$ selects one string to receive, which is masked by the hash of the outcomes measured in matching bases. In the positions with non-matching bases, he does not know the outcomes, and therewith he does not learn anything about $s_{1-k}$.
Finally, ${\sf A}$ does not learn which is the ``good'' subset, and hence, which is ${\sf B}$'s choice $k$. Security against dishonest Alice is derived in a straightforward way from $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$ of~\cite{DFRSS07} as follows. Note that in $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$, the receiver measures all his qubits in one basis, depending on his choice bit $k$, i.e.\ $\theta \in [0,1]_{k}$. As described previously in Chapter~\ref{chap:hybrid.security}, our compiler requires measurement in random bases $\theta \in_R \{ 0,1 \}^n$. Otherwise, the opened and tested positions during $\mathtt{Commit\&Open}$ would obviously leak $k$. Due to the non-interactivity in $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$, ${\sf A}'$ cannot learn ${\sf B}$'s choice bit $k$, so the protocol is perfectly receiver-secure. More formally, the proof compares the real output to an ideal output, which is obtained by letting ${\sf A}'$ run the protocol with an unbounded receiver who measures his qubits in ${\sf A}'$'s bases $\theta$, samples an \emph{independent} $K$ from the correct distribution, and sets $S_K$ correspondingly. The only difference between the two executions is the point in time at which, and the choice of bases in which, the positions $i \in I_{1-k}$ are measured. However, these parameters do not influence the output states, once $K$ is fixed. Now, the preparation phase combined with Step~(2.)~of the post-processing in ${\tt 1\text{-}2 \, QOT^\ell}$ is equivalent to ${\sf B}$ measuring all qubits in the basis dictated by $K$. Thus, the same analysis can be applied to ${\tt 1\text{-}2 \, QOT^\ell}$, achieving unconditional security against ${\sf A}'$. \end{proof} \begin{theorem}\label{thm:ot} Protocol ${\tt 1\text{-}2 \, QOT^\ell}$ is unconditionally secure against $\beta$-benign Bob for any $\beta < \frac18 - \frac{\lambda}{2}$. \end{theorem} \begin{proof} For any given benign Bob ${\sf B}'$, we construct $\hat{\sf B}'$ the following way: $\hat{\sf B}'$ runs locally a copy of ${\sf B}'$ and simulates an execution by running $\hat{\sf A}$ up to but not including Step~(3.). Since ${\sf B}'$ is benign, $\hat{\sf B}'$ obtains $\hat{\theta}$ after the preparation phase. When the simulation of $\hat{\sf A}$ reaches the point just after the announcement of $f_0$ and $f_1$ in Step~(3.), $\hat{\sf B}'$ finds $k'$ such that $d_H(\hat{\theta}|_{I_{k'}},\theta|_{I_{k'}})$ is minimal over $k'\in\{0,1\}$. $\hat{\sf B}'$ then calls $\mathcal{F_{\sf{OT}}} $ with input $k'$ and obtains output $s_{k'}$. $\hat{\sf B}'$ sets $m'_{k'} = s_{k'} \oplus f_{k'}(x|_{I_{k'}})$ and $m'_{1-k'} \in_R\{0,1\}^\ell$ before sending $(m'_0,m'_1)$ to ${\sf B}'$. Finally, $\hat{\sf B}'$ outputs whatever ${\sf B}'$ outputs. We now argue that the state output by $\hat{\sf B}'$ is statistically close to the state output by ${\sf B}'$ when executing ${\tt 1\text{-}2 \, QOT^\ell}$ with the real ${\sf A}$. The only difference is that, while $\hat{\sf B}'$ outputs $m'_{1-k'}\in_R\{0,1\}^\ell$, ${\sf B}'$ outputs $m_{1-k'} = s_{1-k'} \oplus f_{1-k'}(x|_{I_{1-k'}})$. Thus, we simply have to show that $m_{1-k'}$ is statistically indistinguishable from uniform in the view of ${\sf B}'$. Note that, since $\theta$ and $\hat{\theta}$ are independent and $\theta$ is a uniform $n$-bit string, we have that for any $\epsilon>0$, $$ d_H(\theta,\hat{\theta})> \frac{(1-\epsilon)n}{2} \, ,$$ except with negligible probability.
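To make this bound explicit, note that $d_H(\theta,\hat{\theta})$ is a sum of $n$ independent indicator variables, each with expectation $\frac{1}{2}$; Hoeffding's inequality thus yields $$ \Pr\Bigl[\, d_H(\theta,\hat{\theta}) \leq \frac{(1-\epsilon)n}{2} \,\Bigr] \leq \exp\bigl(-\epsilon^2 n/2\bigr) \, . $$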
Since $k'$ was chosen such that the Hamming distance on $I_{k'}$ is minimal, the complementary subset $I_{1-k'}$ accounts for at least half of the total distance, and hence with overwhelming probability $$ d_H(\theta|_{I_{1-k'}},\hat{\theta}|_{I_{1-k'}})\geq \frac{(1-\epsilon)n}{4} \, . $$ Now, since ${\sf B}'$ is $\beta$-benign, we get with Definition~\ref{def:BenignBob} that $$ H_{\infty} \bigl( X|_{I_{1-k'}} \,\big|\, X|_{I_{k'}} = x|_{I_{k'}} \bigr) \geq \frac{(1-\epsilon)n}{4}-\beta n \ \text{ and } \ H_0(\rho_E)\leq \beta n \, . $$ It follows from privacy amplification (Theorem~\ref{theo:privacy-amplification}) that $f_{1-k'}(x|_{I_{1-k'}})$ is statistically indistinguishable from uniform for ${\sf B}'$, provided that $$ \frac{\ell}{n} < \frac{1}{4}-2\beta-\epsilon'$$ for some $\epsilon'>0$. Finally, by the properties of the exclusive-OR, we can now also conclude that $m_{1-k'}$ is statistically close to uniform. Solving the last inequality for $\beta$, we obtain $$ \beta < \frac18 - \frac{\lambda}{2} - \frac{\epsilon'}{2} \, ,$$ and, since $\epsilon' > 0$ can be chosen arbitrarily small, Theorem~\ref{thm:ot} follows. \end{proof} Informally, the next Corollary~\ref{cor:ot} states that, when compiling the basic protocol ${\tt 1\text{-}2 \, QOT^\ell}$, we obtain an improved protocol ${\cal C}^{\alpha}({\tt 1\text{-}2 \, QOT^\ell})$ with \emph{hybrid security}\index{hybrid security}, such that a dishonest Bob is required to have large quantum computing power \emph{and} large quantum storage to succeed. For completeness, ${\cal C}^{\alpha}({\tt 1\text{-}2 \, QOT^\ell})$ is given explicitly in Figure~\ref{fig:compiled.ot}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol ${\cal C}^{\alpha}({\tt 1\text{-}2 \, QOT^\ell}):$ }\\[-4ex] \begin{description}\setlength{\parskip}{0.5ex} \item[{\it Preparation:}]$ $ \\ ${\sf A}$ chooses $x \in_R \set{0,1}^m$ and $\theta \in_R \set{+,\x}^m$ and sends $\ket{x}_{\theta}$ to~${\sf B}$. ${\sf B}$ chooses $\hat{\theta} \in_R \set{0,1}^m$ and obtains $\hat{x} \in \set{0,1}^m$ by measuring $\ket{x}_{\theta}$ in bases $\hat{\theta}$. \item[{\it Verification:}] \begin{enumerate} \item[] \item ${\sf B}$ sends $c_i := \mathtt{commit_{pkH}}((\hat{\theta}_i,\hat{x}_i),r_i)$ with randomness $r_i$ for all $i = 1,\ldots,m$. \item ${\sf A}$ sends random $T \subset \{1,\ldots,m \}$ with $|T| = \alpha m$. ${\sf B}$ opens $c_i \ \forall \ i \in T$, and ${\sf A}$ checks that the openings were correct and that $x_i = \hat{x}_i$ whenever $\theta_i = \hat{\theta}_i$. If all tests are passed, ${\sf A}$ accepts. Otherwise, she rejects and aborts. \item ${\sf A}$ and ${\sf B}$ restrict $x, \theta$ and $\hat{x}, \hat{\theta}$, respectively, to the remaining $n$ positions $i \in \bar{T}$. \end{enumerate} \item[{\it Post-processing:}] \begin{enumerate} \item[] \item ${\sf A}$ sends $\theta$ to ${\sf B}$. \item ${\sf B}$ partitions all positions $1\leq i \leq n$ into two subsets according to his choice bit $k \in \{ 0,1 \}$: the ``good'' subset $I_k := \{ i: \theta_i = \hat{\theta}_i \}$ and the ``bad'' subset $I_{1-k} := \{ i: \theta_i \neq \hat{\theta}_i \}$. ${\sf B}$ sends $(I_0,I_1)$ in this order. \item ${\sf A}$ sends descriptions of $f_0,f_1\in_R {\mathcal{F}}$ together with $m_0 := s_0 \oplus f_0(x|_{I_0})$ and $m_1 := s_1 \oplus f_1(x|_{I_1})$. \item ${\sf B}$ computes $s_k = m_k \oplus f_k(\hat{x}|_{I_k})$.
\end{enumerate} \end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Improved Protocol for String OT.} \label{fig:compiled.ot} \end{figure} \begin{corollary}\label{cor:ot} If $\lambda < \frac14$, then protocol ${\cal C}^{\alpha}({\tt 1\text{-}2 \, QOT^\ell})$ is computationally secure against dishonest Bob and unconditionally secure against $\gamma (1\!-\!\alpha)$-BQSM Bob with $\gamma < \frac14 - 2 \lambda$. Correctness and unconditional security against dishonest Alice are maintained during compilation. \end{corollary} \begin{proof} The corollary is obtained by the following steps: First, we sketch that the analysis of protocol $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$ in~\cite{DFRSS07} can be applied almost analogously to ${\tt 1\text{-}2 \, QOT^\ell}$. Then, we combine this result with our BQSM-theorem (Theorem~\ref{thm:BQSM}). And finally, we apply Theorem~\ref{thm:ot} with our compiler-theorem (Theorem~\ref{thm:compiler}). Note that, by definition, these transformations affect neither correctness nor unconditional security against ${\sf A}'$. In more detail, the main difference between $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$ and ${\tt 1\text{-}2 \, QOT^\ell}$ is that in the former, ${\sf B}$ measures all his qubits in the basis corresponding to his choice bit $k$, i.e.\ $\theta \in [0,1]_{k}$. Since we require these measurements to be in random bases $\theta \in_R \{ 0,1 \}^n$, we lose the non-interactivity and must include the additional message $(I_0,I_1)$ from ${\sf B}$ to ${\sf A}$ in Step~(2.), so that ${\sf A}$ obtains the same partitions. However, the partitions are sent in fixed order and do not allow ${\sf A}$ to infer the ``good'' subset $I_k$. No other message is sent by ${\sf B}$. Furthermore, recall that in randomized OT, ${\sf A}$ does not input the two messages $s_0, s_1$ herself and mask them with the hashed measurement outcomes. Instead, only these hash-values, generated uniformly at random during the protocol, are output to her. However, due to the properties of the exclusive-OR, the security guarantees do not change in this aspect. Thus, ${\tt 1\text{-}2 \, QOT^\ell}$ inherits the BQSM-security of $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$, and we can claim that ${\tt 1\text{-}2 \, QOT^\ell}$ is unconditionally secure against $\gamma$-BQSM Bob for all $\gamma$ strictly smaller than $\frac{1}{4} - 2\lambda$. Then, by Theorem~\ref{thm:BQSM}, we obtain unconditional security for ${\cal C}^{\alpha}({\tt 1\text{-}2 \, QOT^\ell})$ against $\gamma(1\!-\!\alpha)$-BQSM Bob. Last, we know from Theorem~\ref{thm:ot} that ${\tt 1\text{-}2 \, QOT^\ell}$ is unconditionally secure against a $\beta$-benign Bob for $\beta < \frac18 - \frac{\lambda}{2}$. It follows with Theorem~\ref{thm:compiler} that $\mathtt{Commit\&Open}$, instantiated by our dual-mode commitment scheme, leads to quantum-computational security for ${\cal C}^{\alpha}({\tt 1\text{-}2 \, QOT^\ell})$ against any ${\sf B}'$. \end{proof} \section{Password-Based Identification} \label{sec:hybrid.security.id} \index{identification} Password-based identification is introduced in Section~\ref{sec:primitives.id}, where we also describe a construction from randomized $\ot{1}{n}^\ell$ and the OT-security aspects inherited therewith. Secure identification is highly significant in any authenticated set-up of outer protocols and, if cleverly implemented, allows the initial user-memorizable passwords to be re-used.
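Like the OT protocol above, the identification protocols below rely on two-universal hashing for privacy amplification. As a concrete illustration, the following minimal sketch implements one standard two-universal family, the random linear maps over $GF(2)$; the choice of family and all names in the code are ours for illustration only, and not the instantiation fixed by Definition~\ref{def:hashing}.
\begin{verbatim}
import secrets

class RandomLinearHash:
    # Two-universal family h_A(x) = A*x over GF(2): for fixed x != x',
    # Pr_A[h_A(x) = h_A(x')] = 2^(-l), since A*(x XOR x') is uniform.
    def __init__(self, n, l):
        self.n, self.l = n, l
        # each row is a random n-bit integer contributing one output bit
        self.rows = [secrets.randbits(n) for _ in range(l)]

    def __call__(self, x):
        # x: n-bit integer; output: l-bit integer of inner products mod 2
        out = 0
        for row in self.rows:
            out = (out << 1) | (bin(row & x).count("1") % 2)
        return out

# toy usage, mirroring m_k = s_k XOR f_k(x|I_k) with n = 32 and l = 8
f = RandomLinearHash(n=32, l=8)
x = secrets.randbits(32)      # Alice's outcomes restricted to I_k
s = 0b10110100                # the string s_k to be transferred
m = s ^ f(x)                  # the masked message sent to Bob
assert m ^ f(x) == s          # Bob, knowing x|I_k, recovers s_k
\end{verbatim}
The masking in the last lines corresponds to Step~(3.)~of the post-processing of ${\tt 1\text{-}2 \, QOT^\ell}$: without $x|_{I_k}$, the hash value acts as a one-time pad on $s_k$.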
\subsection{Motivation and Related Work} \label{sec:hybrid.security.id.motivation} There exist various approaches for classical and quantum identification, based on different techniques, e.g.\ on zero-knowledge~\cite{FS86,FFS87}, on password-based key-agreement~\cite{KOY01}, and on quantum memory restrictions~\cite{DFSS07}. Here, we will subject the quantum identification scheme of~\cite{DFSS07}, denoted in the following by $\tt BQSM\text{--}QID$, to our compiler technique, yielding more diverse security assumptions. $\tt BQSM\text{--}QID$ was proven to be unconditionally secure against arbitrary dishonest Alice and against quantum-memory-bounded dishonest Bob by using quantum-information-theoretic security definitions. In~\cite{FS09} it was then shown that these security definitions imply simulation-based security as considered here, with respect to the functionality $\mathcal{F_{\sf{ID}}}$ given in Figure~\ref{fig:ideal.functionality.id}. Actually, the definition and proof from~\cite{DFSS07} guarantee security only for a slightly weaker functionality, which gives some unfair advantage to dishonest Alice in case she guesses the password correctly. However, as discussed in~\cite{FS09}, the protocol from~\cite{DFSS07} does implement functionality $\mathcal{F_{\sf{ID}}}$. \subsection{The Protocol} \label{sec:hybrid.security.id.protocol} Recall that we require from an identification scheme that a user ${\sf A}$ succeeds in identifying herself to a server ${\sf B}$ if she knows an initial, secret password $w$. Additionally, a dishonest user ${\sf A}'$ should not succeed with higher probability than by guessing, and similarly, a dishonest server ${\sf B}'$ should only be able to guess ${\sf A}$'s password without learning anything beyond the (in)correctness of his guess. These last requirements provide re-usability of the password. To achieve security under realistic assumptions, we further want to allow memorizable passwords with low entropy. Let $\cal W$ be the set of possible passwords, not necessarily large in size, with $w \in \cal{W}$ denoting the initially shared password. For clarity, we will often use $w_A$ and $w_B$ to indicate ${\sf A}$'s and ${\sf B}$'s inputs to the protocol; the protocol accepts only if $w_A = w_B$, in which case both equal $w$. Let $\mathfrak{c}:{\cal W} \rightarrow \set{+,\x}^n$ be the encoding function of a binary code of length $n$ with $|{\cal W}|$ codewords and minimal distance $d$. Families of codes as required for our subsequent results, correcting a constant fraction of errors efficiently and having constant information rate, are indeed known~\cite{SS96}. And finally, let $\mathcal{F}$ and $\G$ denote suitable families of (strongly) two-universal hash-functions, as specified in Definition~\ref{def:hashing}, of the form $\mathcal{F}: \set{0,1}^n \rightarrow \set{0,1}^\ell$ and $\G: \mathcal{W} \rightarrow \set{0,1}^\ell$, respectively. Again we stress that we can pad the input to the functions with zeros if it is shorter than expected. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex}{\sc Functionality $\mathcal{F_{\sf{ID}}}$ :}\\ Upon receiving $\mathtt{w_A, w_B} \in \cal W$ from Alice and Bob, respectively, $\mathcal{F_{\sf{ID}}}$ outputs the bit \smash{$\mathtt{y}:= (\mathtt{w_A} \stackrel{\raisebox{-1ex}{\tiny?}}{=} \mathtt{w_B})$} to Bob. In case Alice is dishonest, she may choose $\mathtt{w_A} = \,\perp$ (where $\perp \, \not\in \cal W$), and (for any choice of $\tt w_A$) the bit $\tt y$ is also output to Alice.
\vspace{-1ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal Functionality for Password-Based Identification.} \label{fig:ideal.functionality.id} \end{figure} We cannot directly apply our compiler to the original $\tt BQSM\text{--}QID$, since it is {\em not} a BB84-type protocol. Similar to $\tt RAND \ {\tt 1\text{-}2 \, QOT^\ell}$, described in the previous Section~\ref{sec:hybrid.security.ot}, ${\sf B}$ does not measure the qubits in random bases but in a basis-string $c$ determined by his password $w_B \in {\cal W}$ via $c = \mathfrak{c}(w_B)$. After ${\sf A}$'s basis announcement, both players compute the set $I_w = \set{i:\theta_i = \mathfrak{c}(w)_i}$ of positions on which they base the last steps of the post-processing. We now briefly sketch the transformation of $\tt BQSM\text{--}QID$ into a BB84-type protocol, without affecting security and without losing efficiency. The first step is naturally to let ${\sf B}$ measure in random bases $\hat{\theta} \in_R \set{\+,\x}^n$. The most straightforward next step would be to include a new message from ${\sf B}$ to ${\sf A}$ during post-processing, in which ${\sf B}$ announces $I_B = \set{i:\hat{\theta}_i = \mathfrak{c}(w)_i}$. Then, ${\sf A}$ sends $\theta$ and the remaining post-processing could be conducted on $I_w = \set{i \in I_B: \theta_i = \hat{\theta}_i}$. Note, however, that this solution is less efficient than the original protocol, since only approx.\ $1/4$ of all measurement outcomes could be used. So instead, we let Bob apply a {\em random shift} $\kappa$ to the code, which ${\sf B}$ announces to ${\sf A}$ in the post-processing phase; namely, he computes $\kappa$ such that $\hat{\theta}= \mathfrak{c}(w) \oplus \kappa$ with $\kappa\in \{ +, \times \}^n$ and $\+ \equiv 0$ and $\x \equiv 1$ for computing the $\oplus$-operation. Then, we define $\mathfrak{c'}(w):= \mathfrak{c}(w) \oplus \kappa$. Finally, after ${\sf A}$'s announcement of $\theta$, the protocol is completed with the shifted code, i.e., based on the positions in $I_w := \set{i:\theta_i=\mathfrak{c'}(w)_i}$. This has the effect that the post-processing is actually based on positions $i$ with $\theta_i = \hat{\theta}_i$, and thus, on approx.\ $1/2$ of all qubits as in protocol $\tt BQSM\text{--}QID$. Our resulting protocol $\mathtt{QID}$ is described in Figure~\ref{fig:basic.id}. We show in the following proofs that the modification does not affect security as given in~\cite{DFSS07} (and~\cite{FS09}). \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol $\mathtt{QID}:$ } \\[-4ex] \begin{description} \item[{\it Preparation:}] $ $ \\ ${\sf A}$ chooses $x \in_R \set{0,1}^n$ and $\theta \in_R \set{+,\x}^n$ and sends $\ket{x}_{\theta}$ to~${\sf B}$. ${\sf B}$ chooses $\hat{\theta} \in_R \set{0,1}^n$ and obtains $\hat{x} \in \set{0,1}^n$ by measuring $\ket{x}_{\theta}$ in bases $\hat{\theta}$. \item[{\it Post-processing:}] \begin{enumerate} \item[] \item ${\sf B}$ computes a string $\kappa\in \{ +, \times \}^n$ such that $\hat{\theta}= \mathfrak{c}(w) \oplus \kappa$, where we think of $\+$ as 0 and $\x$ as 1 so that $\oplus$ makes sense. He sends $\kappa$ to ${\sf A}$ and we define $\mathfrak{c'}(w):= \mathfrak{c}(w) \oplus \kappa$. \item ${\sf A}$ sends $\theta$ and $f \in_R \mathcal{F}$ to ${\sf B}$. Both compute $I_w := \set{i: \theta_i=\mathfrak{c'}(w)_i}$. \item ${\sf B}$ sends $g \in_R \mathcal{G}$. \item ${\sf A}$ sends $z:= f(x|_{I_{w}}) \oplus g(w)$ to ${\sf B}$. \item ${\sf B}$ accepts if and only if $z=f(\hat{x}|_{I_w}) \oplus g(w)$.
\end{enumerate} \end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Basic Protocol for Password-Based Identification.} \label{fig:basic.id} \end{figure} \begin{proposition} Protocol $\mathtt{QID}$ satisfies correctness and achieves unconditional security against dishonest Alice, according to Definitions~\ref{def:correctness} and~\ref{def:unboundedAliceNiceOrder}, respectively. \end{proposition} \begin{proof} Correctness for honest players is obvious: If both ${\sf A}$ and ${\sf B}$ know $w$, i.e.~$w_A = w_B$, they can compute $\mathfrak{c}(w)$ and $\mathfrak{c'}(w)$. Following the remaining steps as prescribed, they conclude with $f(x|_{I_w}) \oplus g(w_A) = f(\hat{x}|_{I_w}) \oplus g(w_B)$. Security against dishonest ${\sf A}'$ is derived in a straightforward way from $\tt BQSM\text{--}QID$ as follows. Recall that in $\tt BQSM\text{--}QID$, ${\sf B}$ measures all his qubits in one basis-string, depending on $c = \mathfrak{c}(w)$. In $\mathtt{QID}$, the preparation phase combined with Step~(1.)~of the post-processing, where ${\sf B}$ sends $\kappa$, can be seen as an equivalent situation from the view of ${\sf A}'$. The important positions are now defined by $\mathfrak{c'}(w)$, which, however, can only be deduced if $\mathfrak{c}(w)$ is known as well, since otherwise $\kappa$ looks completely random. All subsequent steps are exactly as in $\tt BQSM\text{--}QID$, and thus, the same analysis can be applied to $\mathtt{QID}$. In the following, we sketch the intuitive idea thereof. ${\sf A}'$ runs the protocol with a memory-unbounded server who measures his qubits in ${\sf A}'$'s bases $\theta$ and therefore obtains $x$. He then computes $s_j = f(x|_{I_j}) \oplus g(j)$ for all candidate passwords $j = 1,\ldots,|\cal W|$, where $s_{w}$ would be expected from ${\sf A}'$ for an accepting run of the protocol. By the strongly universal-two property of $g$, all $s_j$ are pairwise independent, and thus, it follows that all $s_j$'s are distinct, except with negligible probability. Assume that the accepting message is one of the $s_j$'s for a \emph{random} variable $w'$, i.e.~$z = s_{w'}$. ${\sf A}'$ will only succeed if $w' = w$, and ${\sf A}'$ does not learn anything beyond that. A further analysis of ${\sf A}'$'s state before the final accept/reject-message shows its independence of $w$, given $w'$ and conditioned on $w' \neq w$ and on all $s_j$'s being pairwise distinct. And finally, for ${\sf A}'$'s state after the final message it is shown that the event of all $s_j$'s being distinct is independent of $w$ and $w'$. Statistical security against ${\sf A}'$ follows. \end{proof} \begin{theorem}\label{thm:id} If $\mathfrak{c}:{\cal W} \rightarrow \set{+,\x}^n$ has minimal distance $d \geq \delta n$ and is polynomial-time decodeable, then protocol $\mathtt{QID}$ is unconditionally secure against $\beta$-benign Bob for any $\beta < \frac{\delta}{4}$. \end{theorem} \begin{proof} For any given benign Bob ${\sf B}'$, we construct $\hat{\sf B}'$ as follows. $\hat{\sf B}'$ runs locally a copy of ${\sf B}'$ and simulates Alice's actions by running ${\sf A}$ faithfully except for the following modifications. After the preparation phase, $\hat{\sf B}'$ gets $\hat{\theta}$ and $\kappa$ from ${\sf B}'$. It then computes $w' \in \cal W$ such that $\mathfrak{c}(w')$ has minimal Hamming distance to $\hat{\theta} \oplus \kappa$. Note that this can be done in polynomial-time by assumption on the code.
Then, $\hat{\sf B}'$ submits $w'$ as input $w_B$ to $\mathcal{F_{\sf{ID}}}$ and receives output $y \in \set{0,1}$. If $y = 1$, then $\hat{\sf B}'$ faithfully completes ${\sf A}$'s simulation using $w'$ as $w$. Otherwise, $\hat{\sf B}'$ completes the simulation by using a random $z'$ instead of $z$. In the end, $\hat{\sf B}'$ outputs whatever ${\sf B}'$ outputs. We need to show that the state output by $\hat{\sf B}'$ (respectively ${\sf B}'$) above is statistically close to the state output by ${\sf B}'$ when executing $\mathtt{QID}$ with the real ${\sf A}$. For simpler notation, we use $w$ for honest Alice's input $w_A$. Note that if $w' = w$, then the simulation of ${\sf A}$ is perfect and thus the two states are equal. If $w' \neq w$, then the simulation is not perfect, as the real ${\sf A}$ would use $z= f(x|_{I_{w}}) \oplus g({w})$ instead of a random $z'$. It thus suffices to argue that $f(x|_{I_{w}})$ is statistically close to random and independent of the view of ${\sf B}'$ for any fixed ${w} \neq w'$. Note that this is also what had to be proven in~\cite{DFSS07}, but under a different assumption, namely that ${\sf B}'$ has bounded quantum memory, rather than that he is benign. Nevertheless, we can recycle part of the proof. Recall from the definition of a benign Bob that the common state after the preparation phase is statistically close to a state for which it is guaranteed that $H_{\infty}(X|_I) \geq d_H(\theta|_I,\hat{\theta}|_I) - \beta n$ for any $I \subseteq \set{1,\ldots,n}$, and $H_0(\rho_E) \leq \beta n$. By the closeness of these two states, switching from the real state of the protocol to the ideal state satisfying these bounds has only a negligible effect on the state output by $\hat{\sf B}'$. Thus, we may assume these bounds to hold. Recall that $\hat{\theta} \oplus \kappa$ is at Hamming distance at most $d/2$ from $\mathfrak{c}(w')$. Since the distance from $\mathfrak{c}(w')$ to the (distinct) codeword $\mathfrak{c}({w})$ is at least $d$, the triangle inequality shows that $\hat{\theta}\oplus\kappa$ is at least $d/2$ away from $\mathfrak{c}({w})$. It follows that $ \mathfrak{c}'(w) = \mathfrak{c}(w) \oplus \kappa$ has Hamming distance at least $d/2$ from $\hat{\theta}$. Furthermore, for arbitrary $\varepsilon > 0$ and except with negligible probability, the Hamming distance between $\theta|_{I_{w}} = \mathfrak{c}'({w})|_{I_w}$ and $\hat{\theta}|_{I_{w}}$ is at least $d/4 - \varepsilon n$. Therefore, we can conclude that $$ H_{\infty}(X|_{I_{w}}) \geq d/4 - \varepsilon n - \beta n \ \text { and } \ H_0(\rho_E) \leq \beta n \, . $$ We require $H_{\infty}(X|_{I_{w}}) - H_0(\rho_E) - \ell$ to be positive and linear in $n$, which is the case here for parameters $$ \beta n \leq d/8 - (\varepsilon/2)n - \ell/2 \, . $$ We conclude by privacy amplification that $f(x|_{I_w})$, and therewith $z$, are close to random and independent of $E$, conditioned on $w \neq w'$. This concludes the proof. \end{proof} The next corollary informally states that, when applying our compiler to the basic protocol $\mathtt{QID}$, we obtain a hybrid-secure protocol\index{hybrid security} ${\cal C}^{\alpha}(\mathtt{QID})$. Thus, any dishonest Bob needs large quantum computing power \emph{and} large quantum storage to launch a successful attack. For completeness, we again give ${\cal C}^{\alpha}(\mathtt{QID})$ explicitly in Figure~\ref{fig:compiled.id}.
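To illustrate the decoding step performed by $\hat{\sf B}'$ in the proof of Theorem~\ref{thm:id} above, the following minimal sketch recovers $w'$ by exhaustive search over a toy code; the toy code and all names are ours for illustration only, whereas the theorem relies on polynomial-time decodeable code families such as those of~\cite{SS96}.
\begin{verbatim}
def hamming(a, b):
    # Hamming distance between two equal-length strings over {+, x}
    return sum(1 for u, v in zip(a, b) if u != v)

def extract_password(theta_hat, kappa, code):
    # Decode theta_hat XOR kappa (reading + as 0 and x as 1) to the
    # nearest codeword and return the corresponding password w'.
    shifted = ["+" if t == k else "x" for t, k in zip(theta_hat, kappa)]
    return min(code, key=lambda w: hamming(code[w], shifted))

# toy code c: W -> {+,x}^6 with |W| = 2 and minimal distance 6
code = {"w0": "++++xx", "w1": "xxxx++"}
w_prime = extract_password("+x+xx+", "xx++x+", code)   # yields "w1"
\end{verbatim}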
\begin{corollary}\label{cor:id} If $|{\cal W}| \leq 2^{\nu n}$, and if $\mathfrak{c}:{\cal W} \rightarrow \set{+,\x}^n$ has minimal distance $d \geq \delta n$ for $\delta > 0$ and is polynomial-time decodeable, then protocol ${\cal C}^{\alpha}(\mathtt{QID})$ is computationally secure against dishonest Bob and unconditionally secure against $\gamma (1\!-\!\alpha)$-BQSM Bob with $\gamma < \frac{\delta}{4} - \nu$. Correctness and unconditional security against dishonest Alice are maintained during compilation. \end{corollary} \begin{proof} We can show hybrid security by first adapting and connecting the results of~\cite{DFSS07} with our BQSM-Theorem~\ref{thm:BQSM}, and then combining Theorem~\ref{thm:id} with our compiler-theorem (Theorem~\ref{thm:compiler}). As before, these transformations preserve correctness and unconditional security against ${\sf A}'$. In more detail, the main difference between $\tt BQSM\text{--}QID$ of~\cite{DFSS07} and $\mathtt{QID}$ is that in the former, ${\sf B}$ measures all his qubits in the bases corresponding to $c = \mathfrak{c}(w_B)$. Then, after ${\sf A}$'s basis announcement, both players base the remaining post-processing on $I_w = \set{i:\theta_i = \mathfrak{c}(w)_i}$. In $\mathtt{QID}$ instead, ${\sf B}$ measures in random bases, computes $\kappa$ such that $\hat{\theta}= \mathfrak{c}(w) \oplus \kappa$, and announces $\kappa$ to ${\sf A}$. Then, after ${\sf A}$'s basis announcement, the protocol is completed based on the positions in $I_w := \set{i:\theta_i=\mathfrak{c'}(w)_i}$ with $\mathfrak{c'}(w):= \mathfrak{c}(w) \oplus \kappa$. Note, however, that both situations are equivalent. First, the important positions are those $i$ where $\theta_i = \hat{\theta}_i$ in both cases. And second, $\kappa$ looks completely random and is of no value without the knowledge of $\mathfrak{c}({w})$. Thus, $\mathtt{QID}$ inherits the BQSM-security of $\tt BQSM\text{--}QID$, and we can claim that $\mathtt{QID}$ is unconditionally secure against $\gamma$-BQSM Bob for all $\gamma < \frac{\delta}{4} - \nu$. From Theorem~\ref{thm:BQSM}, unconditional security of ${\cal C}^{\alpha}(\mathtt{QID})$ against $\gamma(1\!-\!\alpha)$-BQSM Bob follows. $\mathtt{QID}$ is guaranteed by Theorem~\ref{thm:id} to achieve unconditional security against a $\beta$-benign Bob for $\beta < \frac{\delta}{4}$, and it follows with Theorem~\ref{thm:compiler} that $\mathtt{Commit\&Open}$, instantiated by our dual-mode commitment scheme, yields quantum-computational security for ${\cal C}^{\alpha}(\mathtt{QID})$ against any ${\sf B}'$. \end{proof} \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol ${\cal C}^{\alpha}(\mathtt{QID}):$ } \\[-4ex] \begin{description} \item[{\it Preparation:}] $ $ \\ ${\sf A}$ chooses $x \in_R \set{0,1}^m$ and $\theta \in_R \set{+,\x}^m$ and sends $\ket{x}_{\theta}$ to~${\sf B}$. ${\sf B}$ chooses $\hat{\theta} \in_R \set{0,1}^m$ and obtains $\hat{x} \in \set{0,1}^m$ by measuring $\ket{x}_{\theta}$ in bases $\hat{\theta}$. \item[{\it Verification:}] \begin{enumerate} \item[] \item ${\sf B}$ sends $c_i := \mathtt{commit_{{\tt pkH}}}((\hat{\theta}_i,\hat{x}_i),r_i)$ with randomness $r_i$ for all $i = 1,\ldots,m$. \item ${\sf A}$ sends random $T \subset \{1,\ldots,m \}$ with $|T| = \alpha m$. ${\sf B}$ opens $c_i \ \forall \ i \in T$, and ${\sf A}$ checks that the openings were correct and that $x_i = \hat{x}_i$ whenever $\theta_i = \hat{\theta}_i$. If all tests are passed, ${\sf A}$ accepts. Otherwise, she rejects and aborts.
\item ${\sf A}$ and ${\sf B}$ restrict $x, \theta$ and $\hat{x}, \hat{\theta}$, respectively, to the remaining $n$ positions $i \in \bar{T}$. \end{enumerate} \item[{\it Post-processing:}] \begin{enumerate} \item[] \item ${\sf B}$ computes a string $\kappa\in \{ +, \times \}^n$ such that $\hat{\theta}= \mathfrak{c}(w) \oplus \kappa$, where we think of $\+$ as 0 and $\x$ as 1 so that $\oplus$ makes sense. He sends $\kappa$ to ${\sf A}$ and we define $\mathfrak{c'}(w):= \mathfrak{c}(w) \oplus \kappa$. \item ${\sf A}$ sends $\theta$ and $f \in_R \mathcal{F}$ to ${\sf B}$. Both compute $I_w := \set{i: \theta_i=\mathfrak{c'}(w)_i}$. \item ${\sf B}$ sends $g \in_R \mathcal{G}$. \item ${\sf A}$ sends $z:= f(x|_{I_w}) \oplus g(w)$ to ${\sf B}$. \item ${\sf B}$ accepts if and only if $z=f(\hat{x}|_{I_w}) \oplus g(w)$. \end{enumerate} \end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Improved Protocol for Password-Based Identification.} \label{fig:compiled.id} \end{figure} \section{Man-in-the-Middle Security for Identification} \label{sec:hybrid.security.mitm} \index{player!man-in-the-middle} In a man-in-the-middle attack, we assume an external adversary who attacks an execution of the protocol between honest communicating parties, while having full control over the classical and the quantum communication. \subsection{Motivation} \label{sec:hybrid.security.mitm.motivation} The compiled quantum protocols from Sections~\ref{sec:hybrid.security.ot} and \ref{sec:hybrid.security.id} protect against (arbitrary) dishonest Alice and against (computationally or quantum-storage bounded) dishonest Bob. However, in particular in the context of identification, it is also important to protect against a man-in-the-middle attacker Eve (${\sf E}$). Both $\mathtt{QID}$ and ${\cal C}^{\alpha}(\mathtt{QID})$ are insecure in this model. Eve might measure one of the transmitted qubits, say, in the $\+$-basis, and this way learn information on the basis $\hat{\theta}_i$ used by ${\sf B}$, and thus on the password $w$, simply by observing whether ${\sf B}$ accepts or rejects in the end.\footnote{Note that this attack does not immediately apply to the scheme sketched in the previous section, but similar, though more sophisticated, attacks may still apply.} In~\cite{DFSS07} it was shown how to enhance $\tt BQSM\text{--}QID$ in order to obtain security (in the bounded-quantum-storage model) against man-in-the-middle attacks. The very same techniques can also be used to obtain {\em hybrid security} against man-in-the-middle attacks for ${\cal C}^{\alpha}(\mathtt{QID})$. The techniques from~\cite{DFSS07} consist of the following two add-ons to the original protocol. \begin{enumerate} \item A test on a random subset of qubits in order to detect disturbance of the quantum communication. \item Authentication of the classical communication. \end{enumerate} First note that ${\cal C}^{\alpha}(\mathtt{QID})$ already performs such a check as required in Point~(1.), namely in the verification phase, so this requirement is already taken care of. Point~(2.)~requires that Alice and Bob, in addition to the password, share a high-entropy key $k$ that could be stored, e.g.\ on a smart card. This key will be used for a so-called {\em extractor MAC}. Besides being a MAC, i.e.\ a message authentication code, such a construction has the additional property that it also acts as an extractor. This means that if the message to be authenticated has high enough min-entropy, then the key-tag pair is close to randomly and independently distributed.
As a consequence, the tag gives away (nearly) no information on $k$, and thus, $k$ can be re-used in the next execution of the protocol.\footnote{This is in sharp contrast to the standard way of authenticating the classical communication, where the authentication key can only be used a bounded number of times.} For further details, we refer to~\cite{DFSS07,DKRS06}. More specifically, in order to obtain hybrid security against man-in-the-middle attacks for ${\cal C}^{\alpha}(\mathtt{QID})$, ${\sf A}$ will, in her last move of the protocol, use the extractor MAC to compute an authentication tag on all the classical messages exchanged plus the string $x|_{I_w}$. This, together with the test of a random subset, prevents Eve from interfering with the (classical and quantum) communication without being detected, and security against Eve essentially follows from the security against impersonation attacks. Note that including $x|_{I_w}$ in the authenticated message guarantees the necessary min-entropy, and as such the re-usability of the key $k$. We emphasize that the protocol is still secure against impersonation attacks (i.e.\ dishonest Alice or Bob), even if the adversary knows $k$, but with slightly weaker parameters due to the ``entropy-loss'' within $x|_{I_w}$, caused by the additional information for authentication and private error correction that is now available. \subsection{The Set-Up} \label{sec:hybrid.security.mitm.setup} In addition to the previous setting in Section~\ref{sec:hybrid.security.id}, we now make the following assumptions. Let $MAC^*: \mathcal{K} \times \mathcal{M} \rightarrow \{0, 1\}^\ell$ be the extractor MAC with arbitrary key space $\mathcal{K}$, message space $\mathcal{M}$, and error probability $2^{-\ell}$. Its extractor property guarantees that for any message $M$ and quantum state $E$ (which may depend on $M$), the tag $T = MAC^*(K,M)$ of $M$ is such that $\rho_{T K E}$ is $2^{-(H_{\infty}(M)-H_0(\rho_E)-\ell)/2}$-close to $\frac{1}{2^\ell} \mathbb{I} \otimes \rho_K \otimes \rho_E$. Recall that $\mathfrak{c}:{\cal W} \rightarrow \set{+,\x}^n$ is the encoding function of a binary code with minimal distance $d$, and we have strongly universal-2 classes of hash-functions $\mathcal{F}: \{0,1\}^n \rightarrow \{ 0,1 \}^{\ell}$ and $\mathcal{G}: \mathcal{W} \rightarrow \{ 0,1 \}^{\ell}$. In order to do error correction, let $\set{syn_j}_{j \in \cal J}$ be the family of syndrome functions\footnote{Note that we have the following convention: For a bit string $y$ of arbitrary length, $syn_j(y)$ is to be understood as $syn_j(y0\cdots0)$ with enough padded zeros if its bit length is smaller than $n'$, and as $\big(syn_j(y'),y''\big)$, where $y'$ consists of the first $n'$ and $y''$ of the remaining bits of $y$, if its bit length is bigger than $n'$.}, corresponding to a family ${\cal C} = \set{C_j}_{j \in \cal J}$ of linear error-correcting codes of length $n' = n/2$, where $n = (1-\alpha)m$. We require the property that any $C_j$ allows to efficiently correct a $\phi''$-fraction of errors for some constant $\phi'' > 0$. For a random $j \in \cal J$, the syndrome of a string with $t$ bits of min-entropy is $2^{-\frac14(t-2q)}$-close to uniform, given $j$ and any quantum state with max-entropy at most $q$. We refer to~\cite{DS05,DFSS07,FS08} for the existence of such families and example constructions. Protocol ${\cal C}^{\alpha}(\QID^+)$ can tolerate noisy quantum communication up to any error rate $\phi < \phi''$.
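To illustrate how the syndrome functions are used, the following minimal sketch employs the $[7,4]$ Hamming code as a toy stand-in for the code families referenced above; it corrects a single error, whereas the protocol requires codes correcting a $\phi''$-fraction of errors, and all names are ours for illustration.
\begin{verbatim}
# parity-check matrix of the [7,4] Hamming code: column i is the binary
# representation of i+1, so a nonzero syndrome names the error position
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def syn(y):
    # syndrome H*y over GF(2); Alice sends syn(x|I_w) to Bob
    return [sum(h * b for h, b in zip(row, y)) % 2 for row in H]

def correct(y_hat, syn_x):
    # Bob flips the single position at which his noisy y_hat disagrees
    # with Alice's string, read off from the difference of syndromes
    d = [a ^ b for a, b in zip(syn(y_hat), syn_x)]
    pos = 4 * d[0] + 2 * d[1] + d[2]
    if pos:                       # zero syndrome: nothing to fix
        y_hat[pos - 1] ^= 1
    return y_hat

x     = [1, 0, 1, 1, 0, 0, 1]    # Alice's x|I_w
x_hat = [1, 0, 1, 0, 0, 0, 1]    # Bob's outcome, one bit flipped
assert correct(x_hat, syn(x)) == x
\end{verbatim}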
We stress that for security against man-in-the-middle attacks, error correction with $\phi'' > 0$ needs to be done even if we assume perfect quantum communication (with $\phi = 0$), as should become clear from the analysis of the protocol given below. Finally, we let $\phi'$ be a constant such that $\phi < \phi' < \phi''$.\\ The ideal functionality $\mathcal{F}_\IDplus$ is given in Figure~\ref{fig:ideal.functionality.mitm}. The following definition captures unconditional security against a man-in-the-middle attacker, where ${\sf E}$ gets classical input $W'$ and quantum state $E$, and both honest players ${\sf A}$ and ${\sf B}$ get classical inputs $W$ and $K$. The joint state is then of the form $$ \rho_{K W W' E|W' \neq W} = \rho_K \otimes \rho_{W\leftrightarrow W'\leftrightarrow E| W' \neq W} \, . $$ Note that we require that the adversary's quantum register $E$ is correlated with the honest players' parts only via her classical input $W'$, conditioned on $W \neq W'$. \begin{definition}[Unconditional security against a Man-in-the-middle] \label{def:unconditional.Eve} A protocol \\ $\Pi$ implements an ideal classical functionality $\mathcal{F}$ unconditionally securely against a man-in-the-middle attacker, if for any real-world adversary ${\sf E}$, there exists an ideal-world adversary $\hat{\E}$, such that, for any input state as specified above, the outputs in the real and the ideal world are statistically indistinguishable, i.e., $$ out_{{\sf A},{\sf B},{\sf E}}^\Pi \approxs out_{\hat{\sf A},\hat{\sf B},\hat{\E}}^\mathcal{F} \, . $$ \end{definition} \begin{figure} \begin{framed} \noindent\hspace{-1.5ex}{\sc Functionality $\mathcal{F}_{\IDplus}$ :}\\ The ideal functionality $\mathcal{F}_{\IDplus}$ receives pairs of strings $(w_A,k_A)$ and $(w_B,k_B)$ from honest Alice and Bob, and a string $w_E$ from Eve, where $w_A,w_B \in \cal W$ and $k_A,k_B \in \mathcal{K}$. If $w_E = w_A$, then $\mathcal{F}_{\IDplus}$ sends $({\sf correct},k_A)$ to Eve. Otherwise, $\mathcal{F}_{\IDplus}$ sends ${\sf incorrect}$. Last, Eve is asked to input an ``override bit'' $d$, and $\mathcal{F}_{\IDplus}$ outputs the bit $(w_A \stackrel{\raisebox{-1ex}{\tiny?}}{=} w_B) \wedge d$ to Bob and to Eve. \vspace{-1ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal Functionality with Man-in-the-Middle Security.} \label{fig:ideal.functionality.mitm} \end{figure} Computational security against a man-in-the-middle is defined as follows. For a given value of the security parameter $m$, the common reference string $\omega$ is chosen first. The polynomial-size input sampler takes as input $m$ and $\omega$ and samples an input state of the form $$ \rho_{W_A K_A W_B K_B Z E} = \rho_{\MC{W_A K_A W_B K_B}{Z}{E}} \, , $$ where honest Alice gets as input password $W_A$, honest Bob gets $W_B$, and Eve's quantum register $E$ is correlated with the honest players' parts only via her classical input $Z$. In addition to their passwords $W_A,W_B$, the honest players are given high-entropy keys $K_A,K_B$. We restrict the input sampler to choose $K_A$ uniformly at random from $\mathcal{K}$ and guarantee that $K_A=K_B$ whenever $W_A = W_B$.
\begin{definition}[Computational Security against a Man-in-the-middle] \label{def:computational.Eve} A protocol \\ $\Pi$ implements an ideal classical functionality $\mathcal{F}$ computationally securely against a man-in-the-middle attacker, if for any poly-time real-world adversary ${\sf E}$ who has access to the common reference string $\omega$, there exists a poly-time ideal-world adversary $\hat{\E}$, not using $\omega$, such that for any input sampler as described above, the outputs in the real and the ideal world are quantum-computationally indistinguishable, i.e., $$ out_{{\sf A},{\sf B},{\sf E}}^\Pi \approxq out_{\hat{\sf A},\hat{\sf B},\hat{\E}}^\mathcal{F} \, . $$ \end{definition} \subsection{The Protocol} \label{sec:hybrid.security.mitm.protocol} The extended and compiled protocol ${\cal C}^{\alpha}(\QID^+)$ is depicted in Figure~\ref{fig:qid.plus}. Corollary~\ref{cor:Eve} states hybrid security against man-in-the-middle attacks, such that a computationally or quantum-storage bounded Eve can do no better than trying to guess the password. If the guess is incorrect, she learns (essentially) nothing. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol ${\cal C}^{\alpha}(\QID^+)$} \\[-4ex] \begin{description}\setlength{\parskip}{0.5ex} \item[{\it Preparation:}] $ $\\ ${\sf A}$ chooses $x \in_R \{0,1\}^m$ and $\theta \in_R \set{+,\x}^m$ and sends $\ket{x}_{\theta}$ to~${\sf B}$. ${\sf B}$ chooses $\hat{\theta} \in_R \set{0,1}^m$ and obtains $\hat{x} \in \set{0,1}^m$ by measuring $\ket{x}_{\theta}$ in bases $\hat{\theta}$. \item[{\it Verification:}] \begin{enumerate} \item[] \item ${\sf B}$ sends $c_i := \commitk{(\hat{\theta}_i,\hat{x}_i)}{r_i}{{\tt pkH}}$ with randomness $r_i$ for $i = 1,\ldots,m$. \item\label{step:check} ${\sf A}$ sends random $T \subset \set{1,\ldots,m}$ with $|T| = \alpha m$. ${\sf B}$ opens $c_i \ \forall \ i \in T$, and ${\sf A}$ checks that the openings were correct and that $x_i = \hat{x}_i$ whenever $\theta_i = \hat{\theta}_i$. ${\sf A}$ accepts if this is the case for all but a $\phi^\prime$-fraction of the tested bits. Otherwise, she rejects and aborts. \item ${\sf A}$ and ${\sf B}$ restrict $x,\theta$ and $\hat{\theta},\hat{x}$, respectively, to the remaining $n$ positions $i \in \bar{T}$. \end{enumerate} \item[{\it Post-processing:}] \begin{enumerate} \item[] \item ${\sf B}$ computes a string $\kappa\in \{ +, \times \}^n$ such that $\hat{\theta}= \mathfrak{c}(w) \oplus \kappa$. He sends $\kappa$ to ${\sf A}$ and we define $\mathfrak{c'}(w):= \mathfrak{c}(w) \oplus \kappa$. \item ${\sf A}$ sends $\theta$, $f \in_R \mathcal{F}$, $j \in_R \cal J$, and $syn=syn_j(x|_{I_w})$, where $I_w := \{ i: \theta_i = \mathfrak{c'}(w)_i \}$. \item ${\sf B}$ sends $g \in_R \mathcal{G}$. \item ${\sf A}$ sends $z:= f(x|_{I_w}) \oplus g(w)$ to ${\sf B}$. Additionally, she sends the authentication tag of all previously transmitted classical information, i.e.~$tag^* := MAC^*_k (\theta, j, syn, f, g, z, \kappa, T, test, x|_{I_w})$ with $test = \{(c_i, \hat{x}_i, \hat{\theta}_i, r_i)\}_{i \in T}$. \item ${\sf B}$ uses $syn$ to correct the errors within $\hat{x}|_{I_w}$, and he accepts if and only if $tag^*$ verifies correctly and $z = f(\hat{x}|_{I_w}) \oplus g(w)$.
\end{enumerate} \end{description} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Extended and Compiled Protocol for Password-Based Identification.} \label{fig:qid.plus} \end{figure} \begin{corollary} \label{cor:Eve} Assume that $|{\cal W}| \leq 2^{\nu n}$ and that $\mathfrak{c}:{\cal W} \rightarrow \set{+,\x}^n$ has minimal distance $d \geq \delta n$ for $\delta > 0$ and is polynomial-time decodeable. Then, protocol ${\cal C}^{\alpha}(\QID^+)$ is computationally secure against Eve with $\beta < \frac{\delta}{6}$, and unconditionally secure against $\gamma (1\!-\!\alpha)$-BQSM Eve with $\gamma < \frac{\delta}{2} - \nu - 2\ell$. \end{corollary} We split the proof of Corollary~\ref{cor:Eve} into two parts. First, we show computational security in Proposition~\ref{pro:computational.Eve}, and second, we show unconditional security in the bounded-quantum-storage model in Proposition~\ref{pro:unconditional.Eve}. \begin{proposition} \label{pro:computational.Eve} Let $\mathfrak{c}:{\cal W} \rightarrow \set{+,\x}^n$ have minimal distance $d \geq \delta n$ and be polynomial-time decodeable. Then, ${\cal C}^{\alpha}(\QID^+)$ is computationally secure against Eve with $\beta < \frac{\delta}{6}$, according to Definition~\ref{def:computational.Eve}. \end{proposition} \begin{proof} We start with the real-life execution of ${\cal C}^{\alpha}(\QID^+)$ with honest ${\sf A}$ and ${\sf B}$ with respective inputs $(w_A,k_A)$ and $(w_B,k_B)$, and a man-in-the-middle attacker ${\sf E}$. We then modify it step by step without (essentially) changing the common output state, such that in the end we have a simulation of the protocol as required. First, we change the action of ${\sf B}$ in that we assume that ${\sf B}$ learns in the final step of ${\cal C}^{\alpha}(\QID^+)$ ``by magic'' whether one of the classical messages communicated was modified by ${\sf E}$ and whether $w_A = w_B$ or not. He accepts the execution if none of the messages was modified, if $w_A = w_B$, and if $z$ verifies correctly. This changes the outcome of the protocol only by a negligible amount. Indeed, if $w_A = w_B$, the restriction on the input sampler guarantees that $k_A = k_B$, and the claim follows from the security of the MAC. If $w_A \neq w_B$, then ${\sf B}$ rejects anyway in both versions, except with negligible probability. Next, we further change the action of ${\sf B}$ in that ${\sf B}$ accepts the execution in the final step of ${\cal C}^{\alpha}(\QID^+)$ if none of the messages was modified and if $w_A = w_B$ (without verifying $z$). We argue that this modification does not change the common output state, up to a negligible amount. Note that by Lemma~\ref{lemma:ideal.state}, we may replace the real state, consisting of the qubits obtained by ${\sf A}$ and the choice of $T$ and $T' = \Set{i \in T}{\theta_i = \hat{\theta}_i}$, by a negligibly close ideal state (with the same $T$ and $T'$) such that the error rate within $T'$, i.e.\ the fraction of $i \in T'$ with $x_i \neq \hat{x}_i$, gives an exact upper bound on the error rate outside of $T$. Thus, it follows that if ${\sf A}$ does not reject during verification, then ${\sf B}$ will recover the correct string $x|_{I_w}$ in the final step (except with negligible probability) and correctly verify $z$ if and only if $w_A = w_B$. The next modification is that ${\sf B}$ runs the modified protocol with some ``dummy input'' instead of his real input $w_B$, but he still accepts only if $w_A$ equals his real input $w_B$ and no transmitted message was modified by ${\sf E}$.
Since ${\sf B}$ does not reveal any information on his input before the last step, this modification does not change the common output state at all. We write ${\sf B}^*$ for this modified ${\sf B}$. As a last modification, we choose an unconditionally binding key ${\tt pkB}$ as reference string, together with the decryption key ${\tt sk}$. The new common output state is computationally indistinguishable from the previous one by assumption on the commitment keys. Now, this modified protocol can be simulated by an ideal-world adversary $\hat{\E}$ via the following two arguments.\\ \noindent(1) $\hat{\E}$ can simulate ${\sf A}$ as $\hat{\sf B}'$ does in the proof of security against dishonest Bob (see Theorem~\ref{thm:id}) by sampling an unconditionally binding key ${\tt pkB}$ such that $\hat{\E}$ also knows the decryption key ${\tt sk}$, extracting $w_B$ from ${\sf B}$'s commitments, and querying the ideal functionality $\mathcal{F}_{\IDplus}$. In more detail, upon receiving $\kappa$ from ${\sf B}$, $\hat{\E}$ attempts to decode the string $\hat{\theta} \oplus \kappa$. If this is successful (a codeword at distance at most $d/2$ is returned), it computes the password $w'$ such that $\mathfrak{c}(w')$ is the decoded codeword. If decoding fails, $\hat{\E}$ chooses an arbitrary $w'$. It then sends $w'$ to $\mathcal{F}_{\IDplus}$. If the functionality replies with $({\sf correct},k_A)$, then $\hat{\E}$ completes the simulation by following the protocol with inputs $w' = w_A$ and $k_A$. In that case, the simulation is perfect and the final outputs are equal. In case the extracted password $w'$ does not match $w_A$, $\hat{\E}$ follows the protocol but uses random values $syn'$, $tag^{*'}$ and $z'$. Note that the real ${\sf A}$ would use $z= f(x|_{I_{w_{A}}}) \oplus g(w_A)$ instead of a random $z'$. Thus, we have to argue that $f(x|_{I_{w_{A}}})$ is statistically close to random and independent of the view of ${\sf E}$ (for any fixed $w'$). Recall that the common state after the verification phase is statistically close to a state for which it is guaranteed that $H_{\infty}(X|_I) \geq d_H(\theta|_I, \hat{\theta}|_I) - \beta n$ for any $I \subseteq \set{1,\ldots,n}$, and $H_0(\rho_E) \leq \beta n$. Hence, switching between these two states has only a negligible effect on the final output, and thus we may assume these bounds also to hold here. By the way $w'$ was chosen, it is guaranteed that $\hat{\theta} \oplus \kappa$ has Hamming distance at most $d/2$ from $\mathfrak{c}(w')$, which is at distance at least $d$ from $\mathfrak{c}(w)$. Thus, the Hamming distance between $\hat{\theta} \oplus \kappa$ and $\mathfrak{c}(w)$ is at least $d/2$, except with negligible probability. The same holds if decoding fails, since $\hat{\theta} \oplus \kappa$ is then at least $d/2$ away from any codeword, and $\mathfrak{c}(w) \oplus \kappa$ has distance at least $d/2$ from $\hat{\theta}$. It follows that the Hamming distance between $\theta|_{I_{w_A}}$ and $\hat{\theta}|_{I_{w_A}}$ is at least $d/4 - \varepsilon n$, for arbitrary $\varepsilon > 0$ and except with negligible probability. Therefore, we can conclude that $H_{\infty}(X|_{I_{w_A}}) \geq d/4 - \varepsilon n - \beta n$. Finally, note that by the property of the code family as described previously, it follows that if $H_{\infty}(X|_{I_{w_A}}) > 2 H_0(\rho_E)$ with a linear gap, then $syn$ is close to uniformly distributed from ${\sf E}$'s point of view. Then, from the extractor property of $MAC^*$, it follows that $tag^*$ is essentially random and independent of $k, f, test, T, w, w', \theta$ and $E$, conditioned on $w \neq w'$.
Furthermore, privacy amplification guarantees that $f(x|_{I_{w_A}})$ is uniformly distributed and thus $z$ is close to random and independent of $E$ (conditioned on $w_A \neq w'$). Now, given the two $\ell$-bit strings $tag^*$ and $z$, the bound on the min-entropy is slightly reduced by $2\ell$.\\ \noindent(2) $\hat{\E}$ can also simulate modified ${\sf B}^*$ up to before the final step, as ${\sf B}^*$ uses a ``dummy input''. If the simulated ${\sf A}$ rejects in the verification, or ${\sf E}$ has modified one of the communicated messages, then $\hat{\E}$ sends ``override bit'' $d = 0$ to the ideal functionality. Otherwise, it sends $d = 1$ and therewith learns whether $w_A = w_B$ or not. In both cases, $\hat{\E}$ can easily complete the simulation for ${\sf B}^*$. The claim follows. \end{proof} \begin{proposition} \label{pro:unconditional.Eve} If $|{\cal W}| \leq 2^{\nu n}$, then protocol ${\cal C}^{\alpha}(\QID^+)$ is unconditionally secure against $\gamma (1\!-\!\alpha)$-BQSM Eve with $\gamma < \frac{\delta}{2} - \nu - 2\ell$, according to Definition~\ref{def:unconditional.Eve}. \end{proposition} \begin{proof} Here, we can reason similarly to the proof in~\cite{DFSS07} against a man-in-the-middle. By the security of the MAC, ${\sf E}$ cannot modify any classical message without being detected (and the extractor property guarantees re-usability). Therefore, one can show security against ${\sf E}$ up to the point {\em before} ${\sf B}$ announces whether to accept the protocol execution or not. In order to show security even after ${\sf B}$ has announced his decision, one can make the following case distinction. If ${\sf E}$ modifies the quantum communication in such a way that she only introduces a few errors in the test set, then she has also introduced only a few errors in the remaining positions, except with small probability. Those positions will be corrected by the error correction, and thus, ${\sf B}$ accepts---independent of what $w$ is. In the other case, namely if ${\sf E}$ modifies the quantum communication in such a way that she introduces many errors in the test set, then ${\sf A}$ already rejects early in the protocol---independent of what $w$ is. Hence, this case distinction does not depend on $w$. It follows that ${\sf B}$'s announcement of whether he accepts or rejects gives away no information on $w$. Let $w'$ denote ${\sf E}$'s guess on the password. Then, if $w' \neq w$, $x|_{I_w}$ has $d/4 - \nu$ bits of entropy, given $w$, $w'$ and $\theta$. Furthermore, given $tag^*$ and $f(x|_{I_w})$, the min-entropy is reduced by $2\ell$. By the properties of the code family and the privacy amplification property of $MAC^*$ and the hash-function, we get that $syn$ as well as $tag^*$ and $f$ are essentially random and independent, conditioned on $w \neq w'$, for $\gamma < d/4 - \nu - 2\ell$. \end{proof} \clearemptydoublepage \part{Cryptography in the Quantum World} \label{part:cryptography.in.quantum.world} \clearemptydoublepage \chapter{Introduction} \label{chap:intro.cryptography.in.quantum.world} In this part of the thesis, we want to investigate classical cryptography in the quantum world, which means that we consider the security of classical protocols subject to quantum attacks. This scenario is of practical importance independently of any progress towards large-scale quantum computing. In the following sections, we introduce various commitment schemes and extended variants thereof, which we will use as underlying constructions of the protocols in the subsequent chapters.
In Chapter~\ref{chap:coin.flip}, we show that a quantum-secure bit commitment, as discussed in Section~\ref{sec:bit.commit}, implies a quantum-secure single coin-flip. Then, we will use the mixed commitments, described in Part~\ref{part:quantum.cryptography}, Section~\ref{sec:mixed.commit}, together with a variation of their extended construction (described in Section~\ref{sec:extended.commit.coin}) to equip the underlying commitment construction with extraction and equivocability such that we achieve an efficiently simulatable and more generally composable single coin-flip. In Chapter~\ref{chap:framework}, we propose a framework for the quantum-secure amplification of the security degree of coins, where we rely on the mixed commitments of Section~\ref{sec:mixed.commit}. One step towards a fully simulatable coin-flipping protocol, however, requires an extended construction allowing for an untypical way of opening a commitment in that, instead of sending the plaintext together with the randomness, we do a trapdoor opening (Section~\ref{sec:mixed.commit.trapdoor.opening}). In Chapter~\ref{chap:coin.flip.applications}, we show different example applications, where the interactive generation of coins at the beginning or during outer protocols results in implementations without set-up assumptions and allows for quantum-secure realizations of classical schemes. \section{Regular Bit Commitment} \label{sec:bit.commit} \index{commitment} In Chapter~\ref{chap:coin.flip}, we will show a natural and direct translation of standard coin-flipping to the quantum world. Recall from Section~\ref{sec:primitives.commitment} that commitments imply coin-flipping. More specifically, we require an {\it unconditionally binding} and {\it quantum-computationally hiding} bit commitment scheme from ${\sf A}$ to ${\sf B}$ that takes a bit and some randomness $r$ of length $\ell$ as input, i.e.\ ${\tt commit}: \set{0,1} \times \set{0,1}^\ell \rightarrow \set{0,1}^*$. As discussed, the unconditionally binding property is fulfilled if it is impossible for any forger $\Forg$ to open one commitment to both 0 and 1, i.e.\ to compute $r,r'$ such that $\commitx{0}{r} = \commitx{1}{r'}$. Quantum-computational hiding is ensured if no quantum distinguisher $\Dist$ can distinguish between $\commitx{0}{r}$ and $\commitx{1}{r'}$ for random $r,r'$ with non-negligible advantage. Note that we will use this simple notation for the commitments in the following sections. For a specific scheme, the precise notation has to be naturally adapted. For an actual instantiation we can use, for instance, Naor's commitment based on a pseudorandom generator~\cite{Naor91}. A pseudorandom generator is a function that maps a short, randomly chosen seed to a long pseudorandom sequence, which is computationally indistinguishable from a truly random string for any polynomial-time bounded adversary. Informally speaking, pseudorandomness ensures unpredictability of the next bit in the sequence after learning the previous ones. There are two main arguments for commitments based on pseudorandomness. First, this construction does not require any initially shared information between the players. This aspect is of particular importance when we later propose sequential coin-flipping for actually implementing the CRS-model assumption, and therewith, implementing other functionalities from scratch without \emph{any} set-up assumptions. The second reason relates to our claim of quantum security.
Given any one-way function, a pseudorandom generator can be constructed, whose security parameter is defined by the length of the seed. A brute-force search through the key space would find all seeds, and thus, all pseudorandom sequences could be computed. Now, under the assumption of a quantum-secure one-way function, Grover's optimal quantum search algorithm provides only quadratic speed-up for brute-force search. More efficient attacks are not known, and therewith, we claim that for any poly-time bounded quantum adversary, we achieve quantum-secure schemes. More formally~\cite{Naor91}, let $f(n)$ denote a function with $f(n) > n$. Then, $G: \{0,1\}^n \rightarrow \{0,1\}^{f(n)}$ defines a pseudorandom generator, if for all polynomial-time (quantum) distinguishers $\Dist$, it holds that $$|Pr[\Dist(y) = 1] - Pr[\Dist(G(s)) = 1]| \leq \varepsilon \, ,$$ where $y \in_R \{0,1 \}^{f(n)}$, $s \in_R \{ 0,1 \}^n$, and $\varepsilon$ is negligible in the security parameter $n$. A bit commitment scheme using pseudorandomness is now constructed as follows. Let $a$ be the bit to which Alice wants to commit, and let $G_i(s)$ denote the $i$th bit of the pseudorandom sequence on seed $s$. To ensure the binding property, the receiver Bob sends a random vector $R_B = (r_1, \ldots, r_{3n})$ where $r_i \in_R \{ 0,1 \}$ for $1 \leq i \leq 3n$. Alice selects $s \in_R \{ 0,1 \}^n$ and sends the vector $R_A = (r^\prime_1, \ldots, r^\prime_{3n})$, where $$ r^\prime_i = \begin{cases} G_i(s) & \text{if } r_i = 0 \\ G_i(s) \oplus a & \text{if } r_i = 1 \, . \end{cases} $$ To open the commitment, Alice sends $s$ and Bob then verifies that for all $i$, $r^\prime_i = G_i(s)$ for $r_i = 0$ and $r^\prime_i = G_i(s) \oplus a$ for $r_i = 1$. Assuming that a dishonest receiver is polynomial-time bounded, he cannot learn anything about $a$. Otherwise, he could be used to construct a distinguisher $\Dist$ between pseudorandom and truly random outputs. This also holds in the quantum world, since the reduction does not require rewinding. It follows that any quantum-computationally bounded receiver can only guess $a$ with probability essentially $1/2$, so the commitment scheme is {\it quantum-computationally hiding}. For any (unbounded) dishonest committer, opening a commitment to both values 0 and 1 requires a seed pair $(s_1, s_2)$, such that sequences $G_{3n}(s_1)$ and $G_{3n}(s_2)$ agree for all $i$ where $r_i = 0$ and disagree for all $i$ where $r_i = 1$, i.e.~$r_i = G_i(s_1) \oplus G_i(s_2)$ for all $i$, which holds for exactly one $R_B$ chosen by the other player. The probability for the existence of such a pair is at most $2^{2n}/2^{3n} = 2^{-n}$. It follows that the committer can reveal only one possible $a$, except with probability less than $2^{-n}$, which satisfies {\it statistical binding}.
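To make the scheme concrete, the following is a minimal Python sketch of Naor's commitment. The pseudorandom generator is instantiated here, purely for illustration, by iterating SHA-256; this is only a stand-in for a generator built from a (quantum-secure) one-way function, and the parameter \texttt{N} is an arbitrary toy choice.
\begin{verbatim}
import hashlib
import secrets

N = 128  # toy security parameter n; the PRG stretches n bits to 3n bits

def prg_bits(seed: bytes, num_bits: int) -> list[int]:
    """Illustrative PRG: iterate SHA-256 on (seed, counter). A stand-in
    only -- Naor's scheme assumes a provably secure generator."""
    bits = []
    counter = 0
    while len(bits) < num_bits:
        block = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in block for i in range(8))
        counter += 1
    return bits[:num_bits]

def receiver_challenge() -> list[int]:
    """Bob's random vector R_B of length 3n."""
    return [secrets.randbits(1) for _ in range(3 * N)]

def commit(a: int, r_b: list[int]) -> tuple[bytes, list[int]]:
    """Alice commits to bit a: r'_i = G_i(s) if r_i = 0, else G_i(s) XOR a."""
    s = secrets.token_bytes(N // 8)
    g = prg_bits(s, 3 * N)
    r_a = [g_i ^ (a & r_i) for g_i, r_i in zip(g, r_b)]
    return s, r_a  # the seed s stays secret until the opening phase

def verify(s: bytes, a: int, r_a: list[int], r_b: list[int]) -> bool:
    """Bob checks the opening (s, a) against the commitment R_A."""
    g = prg_bits(s, 3 * N)
    return all(r == g_i ^ (a & r_i) for r, g_i, r_i in zip(r_a, g, r_b))

# usage: an honest run
r_b = receiver_challenge()
s, r_a = commit(1, r_b)           # commitment phase
assert verify(s, 1, r_a, r_b)     # opening to the committed bit succeeds
assert not verify(s, 0, r_a, r_b) # opening to the other bit fails
\end{verbatim}
The binding argument is visible in the code: once $R_B$ is fixed, the seed determines the entire vector $R_A$, so opening to the other bit would require a second seed whose output pattern matches $R_B$ exactly.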
\section{Extended Construction for Mixed Commitments} \label{sec:extended.commit.coin} \index{commitment!extended} \index{commitment!extractability} \index{commitment!equivocability} We will, also in the context of a single coin-flip, need an extended construction, which is similar to the extension of Section~\ref{sec:extended.commit.compiler} but adapted to the case of an underlying commitment from ${\sf A}$ to ${\sf B}$ with flavors {\it unconditionally binding} and {\it quantum-computationally hiding}. We again aim at providing the respective simulator with a trapdoor for either extraction to efficiently simulate in case of ${\sf A}'$ or equivocability to avoid rewinding ${\sf B}'$. As in Section~\ref{sec:extended.commit.compiler}, we require a $\Sigma$-protocol for a (quantumly) hard relation $\Rel = \{(x,w)\}$ with conversations $\tt \big(a^{\Sigma}, c^{\Sigma}, z^{\Sigma} \big)$. Furthermore, we will also use the keyed dual-mode commitment scheme described in Section~\ref{sec:mixed.commit.idea}, based on the multi-bit version of~\cite{PVW08} with keys ${\tt pkH}$ and ${\tt pkB}$, where it holds that ${\tt pkH} \approxq {\tt pkB}$. In the real protocol, the common reference string consists of commitment key ${\tt pkB}$ and an instance $x'$ for which it holds that $\nexists \ w'$ such that $(x',w') \in \Rel$, where we assume that $x \approxq x'$. To commit to bit $a$, ${\sf A}$ runs the honest-verifier simulator to get a conversation $\big( {\tt a^{\Sigma}}, a, {\tt z^{\Sigma}} \big)$. She then sends ${\tt a^{\Sigma}}$ and two commitments $C_0, C_1$ to ${\sf B}$, where $C_a = \commitk{{\tt z^{\Sigma}}}{r_a}{{\tt pkB}}$ and $C_{1-a} = \commitk{0^{z'}}{r_{1-a}}{{\tt pkB}}$ with randomness $r_a,r_{1-a}$ and $z' = |{\tt z^\Sigma}|$. Then, $\big( a, ({\tt z^{\Sigma}}, r_a) \big)$ is sent to open the relevant commitment $C_a$, and ${\sf B}$ checks that $\big( {\tt a^{\Sigma}}, a, {\tt z^{\Sigma}} \big)$ is an accepting conversation. Assuming that the $\Sigma$-protocol is honest-verifier zero-knowledge and ${\tt pkB}$ leads to unconditionally binding commitments, the new commitment construction is again unconditionally binding. During simulation, $\hat{\sf A}'$ chooses a ${\tt pkB}$ such that it knows the matching decryption key $sk$. Then, it can extract ${\sf A}'$'s choice bit $a$ by decrypting both $C_0$ and $C_1$ and checking which contains a valid $\tt z^\Sigma$. Again, not both $C_0$ and $C_1$ can contain a valid reply, since otherwise, ${\sf A}'$ would know a $w'$ such that $(x',w') \in \Rel$. In order to simulate in case of ${\sf B}'$, $\hat{\sf B}'$ chooses ${\tt pkH}$ and $x$. Hence, the commitment is unconditionally hiding in this simulation. Furthermore, it can be equivocated, since now $\exists \ w$ with $(x,w) \in \Rel$ and therefore, $C_0, C_1$ can both be computed with valid replies, i.e.~$C_0 = \commitk{{\tt z^\Sigma}_0}{r_0}{{\tt pkH}}$ and $C_1 = \commitk{{\tt z^\Sigma}_1}{r_1}{{\tt pkH}}$. Quantum-computational security against ${\sf B}'$ follows from the indistinguishability of the keys ${\tt pkB}$ and ${\tt pkH}$ and the indistinguishability of the instances $x$ and $x'$, and efficiency of both simulations is ensured due to extraction and equivocability. \section{Trapdoor Opening for Mixed Commitments} \label{sec:mixed.commit.trapdoor.opening} \index{commitment!trapdoor opening} \index{commitment!equivocability} The typical notion of mixed commitment schemes is stronger than we require for our basic construction of mixed commitments, namely, it postulates trapdoors for both extraction and equivocability. As previously discussed, it suffices in our basic construction to only rely on an extraction trapdoor. This aspect is very convenient, since it allows us to weaken the assumption on its underlying construction, i.e., we can build it from a public-key crypto-system with regular keys $pk$ and $sk$ as binding commitment key and extraction key, and require only an indistinguishable hiding key, generated as a random string in the key space. This, in turn, offers the possibility of generating the hiding key solely by a preceding interactive coin-flipping procedure without any set-up assumptions.
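As a minimal sketch of the extraction mechanism, assume an ElGamal-style encryption is used as the binding commitment: committing is encrypting, and the simulator's trapdoor is the decryption key. The group parameters, the constants \texttt{P} and \texttt{G}, and the brute-force message decoding below are illustrative toy choices and not part of the construction; the dual hiding mode, in which the key is replaced by an indistinguishable random string, is a property of dual-mode schemes such as~\cite{PVW08} that this sketch does not reproduce.
\begin{verbatim}
import secrets

# Toy ElGamal over Z_P^* -- illustrative parameters only, not secure ones.
P = 2**127 - 1  # a Mersenne prime as toy modulus
G = 3

def keygen() -> tuple[int, int]:
    """Binding commitment key pk with extraction trapdoor sk."""
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk

def commit(pk: int, m: int) -> tuple[int, int]:
    """commit_pk(m; r) = Enc_pk(m; r): unconditionally binding, since the
    ciphertext uniquely determines the plaintext."""
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), (pow(pk, r, P) * pow(G, m, P)) % P

def extract(sk: int, c: tuple[int, int]) -> int:
    """The simulator, knowing sk, extracts the committed value without any
    opening (brute-forcing a small message space, for the sketch)."""
    c1, c2 = c
    gm = (c2 * pow(pow(c1, sk, P), -1, P)) % P
    for m in range(256):
        if pow(G, m, P) == gm:
            return m
    raise ValueError("extraction failed")

pk, sk = keygen()
assert extract(sk, commit(pk, 42)) == 42
\end{verbatim}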
For a more advanced usage of commitments as in our strong coin-flipping notion in Chapter~\ref{chap:framework}, however, we have (in some sense) the requirement of equivocability. We want to maintain the interactive generation of the key at any rate, which means that we do not have enough control of its generation and even less control to equip it with a trapdoor (as done in Sections~\ref{sec:extended.commit.compiler} and~\ref{sec:extended.commit.coin}). We therefore develop a special notion of trapdoor opening, where the ability to do a trapdoor opening is not associated with a special knowledge of the hiding key, but is rather done by cheating in the opening phase. Specifically, we do the opening not by sending the plaintext and the randomness committed to in the first phase, but instead by sending only the plaintext and then doing an interactive proof that this plaintext is indeed what was committed to. The ability to do trapdoor openings will then be associated with being able to control the challenge in the interactive proof. We will get this control by using a weak coin-flipping protocol as sub-protocol. This will be one of the essential steps in bootstrapping fully simulatable strong coin-flipping from weak coin-flipping. As before, we denote the mixed string commitment scheme of Section~\ref{sec:mixed.commit} by $\mathtt{commit}_{pk}$. Let $\kappa$ be the security parameter defining the key space $\{0,1\}^\kappa$ and let $\sigma$ be the secondary security parameter controlling the soundness error in the interactive proof, which we want to be negligible in $\sigma$ when $\mathtt{commit}_{pk}$ is unconditionally binding. We equate the plaintext space $\{0,1\}^\ell$ of $\mathtt{commit}_{pk}$ with the Galois field $\mathbb{F} = \mathbb{F}_{2^\ell}$. The new extended commitment scheme, equipped with the possibility to do trapdoor openings, is denoted by $\mathtt{COMMIT}_{pk}$. We assume its plaintext space to be $\mathbb{F}^\sigma$ and denote by $\mathtt{sss}$ a secret sharing scheme over $\mathbb{F}$. Given message $m = (m_1, \ldots, m_\sigma) \in \mathbb{F}^\sigma$ and randomizer $s = (s_1, \ldots, s_\sigma) \in \mathbb{F}^\sigma$, let $f_{m,s}({\tt X})$ denote the unique polynomial of degree at most $2\sigma-1$, for which $f_{m,s}(-i+1) = m_i$ for $i = 1, \ldots, \sigma$ and $f_{m,s}(i) = s_i$ for $i = 1, \ldots, \sigma$. Furthermore, we ``fill up'' positions $i = \sigma+1, \ldots, \Sigma$, where $\Sigma = 4 \sigma$, by letting $s_i = f_{m,s}(i)$. The shares are now $s = (s_1,\ldots,s_\Sigma)$. The new commitment scheme $\mathtt{COMMIT}_{pk}$ is described in Figure~\ref{fig:sss.commit}. We stress two simple facts about this scheme. First, for any message $m \in \mathbb{F}^\sigma$ and any subset $S \subset \{ 1, \ldots, \Sigma \}$ of size $\vert S \vert = \sigma$, the shares $s|_S$ are uniformly random in $\mathbb{F}^\sigma$, when the randomizer $s$ is chosen uniformly at random in $\mathbb{F}^\sigma$, independently of $m$. This aspect is trivial for $S = \{ 1, \ldots, \sigma \}$, as we defined it that way, and it extends to the other subsets using Lagrange interpolation. And second, if $m^1, m^2 \in \mathbb{F}^\sigma$ are two distinct messages, then $\mathtt{sss}(m^1;s^1)$ and $\mathtt{sss}(m^2;s^2)$ have Hamming distance at least $\Sigma - 2 \sigma$. Again, this follows by Lagrange interpolation, since the polynomial $f_{m^1,s^1}({\tt X})$ has degree at most $2\sigma-1$, and hence, can be computed from any $2 \sigma$ shares $s_i$ using Lagrange interpolation. The same holds for $f_{m^2,s^2}({\tt X})$.
Thus, if $2\sigma$ shares are the same, then $f_{m^1,s^1}({\tt X})$ and $f_{m^2,s^2}({\tt X})$ are the same, which implies that the messages $m^1 = f_{m^1,s^1}(-\sigma+1),\ldots, f_{m^1,s^1}(0)$ and $m^2 = f_{m^2,s^2}(-\sigma+1),\ldots, f_{m^2,s^2}(0)$ are the same. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Commitment Scheme $\mathtt{COMMIT}_{pk}$:} \begin{itemize} \item[]{\sc Commitment Phase:} \begin{enumerate} \item Let $m \in \mathbb{F}^\sigma$ be the message to be committed to. The committer samples uniformly random $s \in \mathbb{F}^\sigma$ and computes the shares $\mathtt{sss}(m;s) = (s_1, \ldots, s_\Sigma)$, where $s_i \in \mathbb{F}$. \item He computes $\Commitk{m}{(s,r)}{pk} = \big( M_1,\ldots,M_\Sigma \big)$. In more detail, for $i = 1, \ldots, \Sigma$, the committer computes $M_i = \commitk{s_i}{r_i}{pk}$ with shares $s = (s_1,\ldots,s_\Sigma)$ and randomness $r = (r_1,\ldots,r_\Sigma)$. \item The committer sends $(M_1,\ldots,M_\Sigma)$.\\[-2ex] \end{enumerate} \item[]{\sc Opening Phase:} \begin{enumerate} \item The committer sends the shares $s = (s_1, \ldots, s_\Sigma)$ to the receiver. \item If the shares are not consistent with a polynomial of degree at most $2\sigma-1$, the receiver aborts. Otherwise, he picks a uniformly random subset $S \subset \set{1,\ldots,\Sigma}$ of size $\vert S \vert = \sigma$ and sends $S$ to the committer. \item The committer sends $r|_S$. \item The receiver verifies that $M_i = \commitk{s_i}{r_i}{pk}$ for all $i \in S$. If the test fails, he aborts. Otherwise, he computes the message $m \in \mathbb{F}^\sigma$ consistent with $s$. \end{enumerate} \end{itemize} \vspace{-1ex} \end{framed} \vspace{-2ex} \caption{The Commitment Scheme $\mathtt{COMMIT}_{pk}$.} \label{fig:sss.commit} \vspace{-1ex} \end{figure}
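To fix ideas, the sharing step of Figure~\ref{fig:sss.commit} can be sketched in a few lines of Python. The prime field used below is an illustrative simplification of $\mathbb{F}_{2^\ell}$, and the modulus and parameter choices are arbitrary.
\begin{verbatim}
import secrets

Q = 2**61 - 1  # toy prime modulus standing in for F_{2^ell}
SIGMA = 4      # sigma; the number of shares is Sigma = 4 * sigma

def lagrange_eval(points, x):
    """Evaluate at x the unique polynomial through the given (xi, yi) pairs."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % Q) % Q
                den = den * ((xi - xj) % Q) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

def sss(m, s):
    """sss(m; s): f(-i+1) = m_i and f(i) = s_i for i = 1..sigma fix a
    polynomial of degree at most 2*sigma - 1; the shares are its values
    at the points 1..Sigma."""
    assert len(m) == len(s) == SIGMA
    points = [((-i + 1) % Q, m[i - 1]) for i in range(1, SIGMA + 1)]
    points += [(i, s[i - 1]) for i in range(1, SIGMA + 1)]
    return [lagrange_eval(points, i) for i in range(1, 4 * SIGMA + 1)]

m = [11, 22, 33, 44]
s = [secrets.randbelow(Q) for _ in range(SIGMA)]
shares = sss(m, s)
# any 2*sigma shares determine f and hence m: reconstruct from shares 5..12
pts = [(i, shares[i - 1]) for i in range(5, 13)]
assert [lagrange_eval(pts, (-i + 1) % Q) for i in range(1, SIGMA + 1)] == m
\end{verbatim}
The final assertion illustrates the second fact stated above: any $2\sigma$ shares determine $f_{m,s}$ by Lagrange interpolation, and thereby the message.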
First note that if the underlying commitment $\mathtt{commit}_{pk}$ is unconditionally hiding, then so is $\mathtt{COMMIT}_{pk}$. In the following, we investigate the extraction property of $\mathtt{COMMIT}_{pk}$, under the assumption that we work in the unconditionally binding mode of $\mathtt{commit}_{pk}$. Given any commitment $M = \big( M_1,\ldots,M_\Sigma \big)$, we extract $$ \big( \xtr{M_1}{sk},\ldots,\xtr{M_\Sigma}{sk} \big) = (s_1,\ldots,s_\Sigma) = s \, . $$ Assume $s' = (s'_1,\ldots,s'_\Sigma)$ is the consistent sharing closest to $s$. That means that $s'$ is the vector which is consistent with a polynomial $f_{m',s'}({\tt X})$ of degree at most $2\sigma-1$ and which at the same time differs from $s$ in the fewest positions. Note that we can find $s'$ in poly-time when using a Reed-Solomon code, which has efficient minimum-distance decoding. We then interpolate this polynomial $f_{m',s'}({\tt X})$, let $m' = f_{m',s'}(-\sigma+1),\ldots,f_{m',s'}(0)$, and define $m'$ to be the message committed to by $\mathtt{COMMIT}_{pk}$. Any other sharing $s''= (s_1'',\ldots,s_\Sigma'')$ must have Hamming distance at least $2 \sigma$ to $s'$. Now, since $s$ is closer to $s'$ than to any other consistent sharing, it must, in particular, be closer to $s'$ than to $s''$. This implies that $s$ is at distance at least $\sigma$ to $s''$. We will use this observation for proving soundness of the opening phase. To determine the soundness error, assume that $\mathtt{COMMIT}_{pk}$ is not opened to the sharing $s'$ closest to $s$, but to some other consistent sharing. As observed, this implies that $\big( \xtr{M_1}{sk},\ldots,\xtr{M_\Sigma}{sk} \big)$ has Hamming distance at least $\sigma$ to the opened sharing. However, when $\mathtt{commit}_{pk}$ is unconditionally binding, all $M_i$ can only be opened to $\xtr{M_i}{sk}$. From the above two facts, we have that there are at least $\sigma$ indices $i \in \set{1,\ldots,\Sigma}$ for which the committer cannot open $M_i$ to the claimed share $s_i$. Since $\Sigma = 4 \sigma$, these $\sigma$ bad indices (bad for a dishonest sender) account for a fraction of $\frac14$ of all points in $\set{1,\ldots,\Sigma}$. Thus, the probability that none of the $\sigma$ points in $S$ is a bad index is at most $(\frac34)^\sigma$, which is negligible. Lemma~\ref{lemma:soundness.sss} follows. \begin{lemma} \label{lemma:soundness.sss} If $pk$ is unconditionally binding, then the probability that an unbounded cheating committer can open $M = \Commitk{m}{(s,r)}{pk}$ to a plaintext different from $\xtr{M}{sk}$ is at most $(\frac34)^\sigma$, assuming that the challenge $S$ is picked uniformly at random and independent of $M$. \end{lemma} In the context of simulation, we will use the challenge $S$ as the simulator's trapdoor, allowing him to equivocally open his commitments. In such a simulation, the ideal-world adversary $\hat{S}$ can---by means discussed later---enforce a specific challenge, i.e., it is guaranteed that this will be the challenge in the opening phase. Thus, for simplicity, we assume here that it simply gets a fixed challenge $S$ as input. The simulation is described in Figure~\ref{fig:simulation.sss}. Lemma \ref{lemma:simulation.sss} follows via a hybrid argument, which relies on the quantum-computational indistinguishability of switching between unconditionally binding and unconditionally hiding commitment keys. We omit a proof here but refer to Chapter~\ref{chap:framework}, where the construction will be explicitly proven within its outer construction. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Simulating $\mathtt{COMMIT}_{pk}$ with Trapdoor $S$:} \\[-4ex] \begin{enumerate} \item $\hat{S}$ gets as input a uniformly random subset $S \subset \set{1,\ldots,\Sigma}$ of size $\sigma$ and an initial message $m \in \mathbb{F}^\sigma$. \item $\hat{S}$ commits honestly to $m \in \mathbb{F}^\sigma$ by $M = \Commitk{m}{(s,r)}{pk}$, as specified in the commitment phase. \item $\hat{S}$ is given an alternative message $\tilde{m} \in \mathbb{F}^\sigma$, i.e., the aim is to open $M$ to $\tilde{m}$. \item $\hat{S}$ lets $s|_S$ be the $\sigma$ messages committed to by $M|_S$. Then it interpolates the unique polynomial $f_{\tilde{m},s}$ of degree at most $2\sigma-1$ for which $f_{\tilde{m},s}(i) = s_i$ for $i \in S$ and for which $f_{\tilde{m},s}(-i+1) = \tilde{m}_i$ for $i = 1,\ldots,\sigma$. Note that this is possible, as we have exactly $2\sigma$ points which restrict our choice of $f_{\tilde{m},s}$. $\hat{S}$ sends $s = \big( f_{\tilde{m},s}(1),\ldots,f_{\tilde{m},s}(\Sigma) \big)$ to the receiver. \item The receiver sends the challenge $S$. \item For all $i \in S$, the sender opens $M_i$ to $f_{\tilde{m},s}(i)$. This is possible, since $f_{\tilde{m},s}(i) = s_i$ is exactly the message committed to by $M_i$ when $i \in S$. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal-World Simulation of $\mathtt{COMMIT}_{pk}$.} \label{fig:simulation.sss} \end{figure} \begin{lemma} \label{lemma:simulation.sss} If $\tilde{m} = m$, then the transcript of the protocol is identical to that of an honest commitment to $m$, followed by an honest opening phase to $m$, and run with a uniformly random challenge $S$.
If $\tilde{m} \ne m$, then the transcript of the protocol is quantum-computationally indistinguishable from that of an honest commitment to $\tilde{m}$, followed by an honest opening phase to $\tilde{m}$, and run with a uniformly random challenge $S$. \end{lemma} \clearemptydoublepage \chapter{Quantum-Secure Coin-Flipping} \label{chap:coin.flip} \index{coin-flipping} Coin-flipping is introduced in Section~\ref{sec:primitives.coin.flip} and allows two parties to agree on a uniformly random bit in a fair way. Security for both parties follows if neither party can influence the value of the coin to his advantage. Thus, it enables the parties to interactively generate true randomness from scratch. The chapter is based on parts of~\cite{DL09}. \section{Motivation and Related Work} \label{sec:coin.flip.motivation} We are interested in the standard coin-flipping protocol~\cite{Blum81} with classical message exchange, but here we assume that the adversary is capable of quantum computing. As already mentioned, bit commitment implies a secure coin-flipping, but even when basing the embedded commitment on a computational assumption that withstands quantum attacks, the security proof of the entire coin-flipping (and its integration into other applications) could previously not be translated from the classical to the quantum world. Typically, security against a classical adversary is argued in such a context by rewinding the adversary in a simulation. Recall that, in general, rewinding as a proof technique cannot be directly applied in the quantum world. Based on a recent result of Watrous~\cite{Watrous09}, which originally allowed one to prove unconditionally that quantum zero-knowledge of certain interactive proofs is possible and that the classical definitions can be translated into the quantum world, we show the most natural and direct quantum analogue of the classical security proof for standard coin-flipping. We want to mention an alternative approach, which was independently investigated but never published~\cite{Smith09}. Its authors propose a classical protocol for zero-knowledge proofs of knowledge secure against quantum adversaries. The protocol consists of a commitment phase and two zero-knowledge proofs. Instead of opening the commitment, the committer claims the value of the committed coins and gives the first zero-knowledge proof that the claim is correct. To simulate this zero-knowledge proof, Watrous' technique is used. Note that this approach allows for flipping a string of coins in the commitments, and thus, at first arrives at a coin-flipping protocol with round complexity independent of the length of the flipped string. However, the required zero-knowledge proof has round complexity depending on the security parameter, i.e.\ how many proofs must be completed to achieve a negligible soundness error. Finally, the coin-string is used as key to encode the witness and the second zero-knowledge proof is given that this statement is actually true. As encryption scheme, they suggest a scheme with similar properties as in our mixed commitment constructions---but to the best of our knowledge, the question of its actual secure implementation was left open. We stress that we aim at establishing coin-flipping as a stand-alone tool that can be used in several contexts and different generic constructions. Some example applications thereof are discussed in Chapter~\ref{chap:coin.flip.applications}, including an independently proposed zero-knowledge proof of knowledge.
In order to include coin-flipping securely in other applications, we conclude this chapter by proving the basic construction secure under sequential composition and by proposing an extended construction for general composability. \section{The Protocol} \label{sec:coin.flip.protocol} The standard coin-flipping protocol $\mathtt{COIN}$ is shown in Figure~\ref{fig:coin.flip}, allowing players ${\sf A}$ and ${\sf B}$ to interactively generate a random and fair $\mathtt{coin}$ in one execution without any set-up requirements. As underlying commitment scheme, we use the \emph{unconditionally binding} and \emph{quantum-computationally hiding} scheme described in Section~\ref{sec:bit.commit} with security parameter $n$. We will use its simpler notation here, namely $\commitx{a}{r}$ with input $a \in \{0,1\}$, randomness $r \in \{0,1\}^\ell$ and output in $\{0,1\}^*$. To indicate the opening phase, where ${\sf A}$ sends $a$ and $r$, we will write $\open{a}{r}$. The corresponding ideal coin-flipping functionality $\cF$ is depicted in Figure~\ref{fig:cF}. Note that dishonest ${\sf A}'$ may refuse to open $\commitx{a}{r}$ in the real world after learning ${\sf B}$'s input. For this case, $\cF$ allows her a second input $\bot$, modeling the abort of the protocol. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol $\mathtt{COIN}$:} \\[-4ex] \begin{enumerate}\setlength{\parskip}{0.5ex} \item ${\sf A}$ chooses $a \in_R \set{0,1}$ and computes $\commitx{a}{r}$. She sends $\commitx{a}{r}$ to ${\sf B}$. \item ${\sf B}$ chooses $b \in_R \set{0,1}$ and sends $b$ to ${\sf A}$. \item ${\sf A}$ sends $\open{a}{r}$ and ${\sf B}$ checks if the opening is valid. \item Both compute $\mathtt{coin} = a \oplus b$. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{The Coin-Flipping Protocol.} \label{fig:coin.flip} \end{figure} \begin{figure} \normalsize \begin{framed} \noindent\hspace{-1.5ex} {\sc Functionality $\cF:$}\\ Upon receiving requests $\mathtt{start}$ from Alice and Bob, $\cF$ outputs uniformly random $\mathtt{coin}$ to Alice. It then waits to receive Alice's second input $\top$ or $\bot$ and outputs $\mathtt{coin}$ or $\bot$ to Bob, respectively. \vspace{-1ex} \end{framed} \vspace{-2ex} \caption{The Ideal Functionality for a Coin-Flip.} \label{fig:cF} \vspace{-1ex} \end{figure} \begin{proposition} Protocol $\mathtt{COIN}$ satisfies correctness, according to Definition~\ref{def:correctness}. \end{proposition} Correctness is obvious by inspection of the protocol: If both players are honest, they independently choose random bits $a$ and $b$. These bits are then combined via exclusive disjunction, resulting in a uniformly random $\mathtt{coin}$. \begin{theorem} Protocol $\mathtt{COIN}$ is unconditionally secure against any unbounded dishonest Alice according to Definition~\ref{def:unboundedAliceNiceOrder}, provided that the underlying commitment scheme is unconditionally binding. \end{theorem} \begin{proof} We construct an ideal-world adversary $\hat{\sf A}'$, such that the real output of the protocol is statistically indistinguishable from the ideal output produced by $\hat{\sf A}'$, $\cF$ and ${\sf A}'$. The ideal-world simulation is depicted in Figure~\ref{fig:simulationA}.
\begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Simulation $\hat{\sf A}'$:} \\[-4ex] \begin{enumerate} \item Upon receiving $\commitx{a}{r}$ from ${\sf A}'$, $\hat{\sf A}'$ sends $\mathtt{start}$ and then $\top$ to $\cF$ as first and second input, respectively, and receives a uniformly random $\mathtt{coin}$. \item\label{step:compute-a} $\hat{\sf A}'$ computes $a$ and $r$ from $\commitx{a}{r}$. \item\label{step:compute-b} $\hat{\sf A}'$ computes $b = \mathtt{coin} \oplus a$ and sends $b$ to ${\sf A}'$. \item $\hat{\sf A}'$ waits to receive ${\sf A}'$'s last message and outputs whatever ${\sf A}'$ outputs. \end{enumerate} \vspace{-1ex} \end{framed} \vspace{-2ex} \caption{The Ideal-World Simulation against dishonest Alice.} \label{fig:simulationA} \vspace{-1ex} \end{figure} First note that $a, r$ and $\commitx{a}{r}$ are chosen and computed as in the real protocol. From the statistical binding property of the commitment scheme, it follows that ${\sf A}'$'s choice bit $a$ is uniquely determined from $\commitx{a}{r} = c$, since for any $c$, there exists at most one pair $(a,r)$ such that $c = \commitx{a}{r}$, except with probability negligible in the security parameter $n$. Hence, in the real world, ${\sf A}'$ is unconditionally bound to her bit before she learns ${\sf B}$'s choice bit, which means $a$ is independent of $b$. Therefore in Step~(\ref{step:compute-a}.), the simulator can correctly (but not necessarily efficiently) compute $a$ (and $r$). Note that, in the case of unconditional security, we do not have to require the simulation to be efficient. However, we show in Section~\ref{sec:general.composition.coin} how to extend the underlying commitment in order to extract ${\sf A}'$'s inputs. This extraction requires an extraction trapdoor and yields an efficient simulation in the CRS-model. Finally, due to the properties of XOR, ${\sf A}'$ cannot tell the difference between the random $b$ computed from the ideal, random $\mathtt{coin}$ in the simulation in Step~(\ref{step:compute-b}.)~and the randomly chosen $b$ of the real world. It follows that the simulated output is statistically indistinguishable from the output in the real protocol. \end{proof} To prove security against any dishonest quantum-computationally bounded ${\sf B}'$, we will follow the lines of argument as in Section~\ref{sec:security.definition.computational}, in particular Definition~\ref{def:polyboundedBobCRS}, with slight modifications. More specifically, we do not require a common reference string, so we can omit this part of the definition. Thus, we show that there exists an ideal-world simulation $\hat{\sf B}'$ with output quantum-computationally indistinguishable from the output of the protocol in the real world. For the ideal world, we consider the poly-size input sampler, which takes as input only the security parameter and produces a valid input state $\rho_{U ZV'} = \rho_{\MC{U}{Z}{V'}}$ as specified in Section~\ref{sec:security.definition.computational}. In a simulation against a \emph{classical} adversary, a classical poly-time simulator would work as follows. It requests $\mathtt{coin}$ from $\cF$, chooses random $a$ and $r$, and computes $b' = \mathtt{coin} \oplus a$ as well as $\commitx{a}{r}$. It then sends $\commitx{a}{r}$ to ${\sf B}'$ and receives ${\sf B}'$'s choice bit $b$. If $b = b'$, the simulation was successful. Otherwise, the simulator rewinds ${\sf B}'$ and repeats the simulation.
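The classical simulator just described is easily made concrete. In the following minimal Python sketch, the hash-based \texttt{commit} is only a placeholder for the scheme of Section~\ref{sec:bit.commit}, and ${\sf B}'$ is modeled as a stateless callable, so that ``rewinding'' amounts to simply re-running him with fresh randomness.
\begin{verbatim}
import hashlib
import secrets

def commit(a: int, r: bytes) -> bytes:
    """Stand-in commitment (hash-based) -- only a placeholder for the
    unconditionally binding, quantum-computationally hiding scheme."""
    return hashlib.sha256(bytes([a]) + r).digest()

def simulate(coin: int, bob):
    """Classical rewinding simulator: commit to a with guess b' = coin
    XOR a, and retry (rewind) until Bob's reply b matches the guess."""
    while True:
        a = secrets.randbits(1)
        r = secrets.token_bytes(16)
        c = commit(a, r)
        b = bob(c)               # run Bob on the commitment
        if a ^ b == coin:        # success: transcript yields the ideal coin
            return c, b, (a, r)  # commitment, Bob's bit, opening
        # otherwise rewind Bob; for a stateless Bob this is just a retry

# usage with an honest (stateless) Bob strategy
bob = lambda c: secrets.randbits(1)
c, b, (a, r) = simulate(coin=1, bob=bob)
assert a ^ b == 1
\end{verbatim}
Since each guess $b'$ is correct with probability about $1/2$, the expected number of rewinds is constant. It is exactly the re-running step that has no direct quantum counterpart, which is why the proof below resorts to Watrous' technique.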
For a security proof against any \emph{quantum} adversary\index{rewinding!quantum}, we construct a poly-time quantum simulator proceeding similarly to its classical analogue. However, it requires quantum registers as work space and relies on Watrous' {\it quantum rewinding lemma} (see Lemma~\ref{lemma:qrewind}). Recall from Section~\ref{sec:quantum.rewinding} that Watrous constructs the quantum simulator for a $\Sigma$-protocol, i.e.\ a protocol in three-move form, where the verifier flips a single coin in the second step and sends this challenge to the prover. Since these are the essential aspects also in our protocol $\mathtt{COIN}$, we can apply Watrous' quantum rewinding technique (with slight modifications) as a black box to our protocol. We also follow his notation and line of argument here. For a more detailed description and proofs, we refer to~\cite{Watrous09} and Section~\ref{sec:quantum.rewinding}. \begin{theorem} \label{thm:computational.security.single.coin} For $p_0 \geq \frac14$, protocol $\mathtt{COIN}$ is quantum-computationally secure against any poly-time bounded dishonest Bob (according to Definition~\ref{def:polyboundedBobCRS} but with the modification described above), provided that the underlying commitment scheme is quantum-computationally hiding. \end{theorem} \begin{proof} Let $\ket{\psi}$ denote ${\sf B}'$'s $n$-qubit auxiliary input. Let $W$ denote ${\sf B}'$'s auxiliary input register, containing $\ket{\psi}$. Let $V$ and $B$ denote ${\sf B}'$'s work space, where $V$ is an arbitrary polynomial-size register and $B$ is a single-qubit register. ${\sf A}$'s classical messages are considered in the following as being stored in quantum registers $A_1$ and $A_2$. In addition, the quantum simulator uses registers $R$, containing all possible choices of a classical simulator, and $G$, representing its guess $b'$ on ${\sf B}'$'s message $b$ in the second step. Finally, let $X$ denote a working register of size $k$, which is initialized to the state $\ket{0^{k}}$ and corresponds to the collection of all registers as described above except $W$. The quantum rewinding procedure is implemented by a general quantum circuit $R_{\mathtt{coin}}$ with input $(W,X, {\sf B}', \mathtt{coin})$. As a first step, it applies a unitary $(n,k)$-quantum circuit $Q$ to $(W,X)$ to simulate the conversation, obtaining registers $(G,Y)$. Then, a test takes place to observe whether the simulation was successful. In that case, $R_{\mathtt{coin}}$ outputs the resulting quantum register. Otherwise, it {\it quantumly rewinds} by applying the reverse circuit $Q^\dag$ on $(G,Y)$ to retrieve $(W,X)$ and then a phase-flip transformation on $X$ before another iteration of $Q$ is applied. Note that $R_{\mathtt{coin}}$ is essentially the same circuit as $R$ described in~\cite{Watrous09} (and Section~\ref{sec:quantum.rewinding}), but in our application it depends on the value of a given $\mathtt{coin}$, i.e., we apply $R_0$ or $R_1$ for $\mathtt{coin} = 0$ or $\mathtt{coin} = 1$, respectively. \\ \noindent In more detail, $Q$ transforms $(W,X)$ to $(G,Y)$ by the following unitary operations: \begin{enumerate} \item[(1.)] It constructs a superposition over all possible random choices of values in the real protocol, i.e., $$ \frac{1}{\sqrt{2^{\ell+1}}} \sum_{a,r} \ket{a,r}_{R} \ket{\commitx{a}{r}}_{A_1} \ket{b' = \mathtt{coin} \oplus a}_{G} \ket{\open{a}{r}}_{A_2} \ket{0}_{B} \ket{0^{k^*}}_{V} \ket{\psi}_{W} \, , $$ where $k^* < k$.
Note that the state of registers $\big( A_1, G, A_2 \big)$ corresponds to a uniform distribution of possible transcripts of the interaction between the players. \item[(2.)] For each possible $\commitx{a}{r}$, it simulates ${\sf B}'$'s possible actions by applying a unitary operator to $\big( W,V,B,A_1 \big)$ with register $A_1$ as control, i.e., $$ \frac{1}{\sqrt{2^{\ell+1}}} \sum_{a,r} \ket{a,r}_{R} \ket{\commitx{a}{r}}_{A_1} \ket{b'}_{G} \ket{\open{a}{r}}_{A_2} \ket{b}_{B} \ket{\tilde{\phi}}_{V} \ket{\tilde{\psi}}_{W} \, , $$ where ${\tilde{\phi}}$ and ${\tilde{\psi}}$ describe modified quantum states. Note that register $B$ now includes ${\sf B}'$'s reply $b$. \item[(3.)] Finally, a $\op{CNOT}$-operation is applied to pair $\big( B,G \big)$ with $B$ as control to check whether the simulator's guess of ${\sf B}'$'s choice was correct. The result of the $\op{CNOT}$-operation is stored in register $G$. $$\frac{1}{\sqrt{2^{\ell+1}}} \sum_{a,r} \ket{a,r}_{R} \ket{\commitx{a}{r}}_{A_1} \ket{b' \oplus b}_{G} \ket{\open{a}{r}}_{A_2} \ket{b}_{B} \ket{\tilde{\phi}}_{V} \ket{\tilde{\psi}}_{W} \, .$$ \end{enumerate} Note that the qubit in register $G$ gives the information about success or failure of the simulated run, and the other registers are combined in the residual $(n + k - 1)$-qubit register $Y$. Since the commitment scheme in the protocol is only quantum-computationally hiding, we must allow for small perturbations in the quantum rewinding procedure, according to Lemma~\ref{lemma:qrewind}: the bound $\varepsilon$ indicates ${\sf B}'$'s advantage over a random guess on the committed value with $q = 1/2$ (and therefore, his advantage in biasing the outcome), due to his computing power, i.e.~$\varepsilon = |p - 1/2|$. From the hiding property of the commitment scheme, it follows that $\varepsilon$ is negligible in the security parameter $n$. Thus, we can argue that probability $p$ is \emph{close} to independent of the auxiliary input. As a lower bound on the success probability, we chose $p_0 \geq 1/4$, which matches our setting. Thus, we have circuit $Q$ as described above and our setting achieves the given bounds. Lemma~\ref{lemma:qrewind} applies. We can now construct an ideal-world quantum simulator $\hat{\sf B}'$ (see Figure~\ref{fig:simulationB}), interacting with ${\sf B}'$ and the ideal functionality $\cF$ and executing Watrous' quantum rewinding algorithm. We then compare the output states of the real process and the ideal process. In case of indistinguishable outputs, quantum-computational security against ${\sf B}'$ follows. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Simulation $\hat{\sf B}'$:} \\[-4ex] \begin{enumerate} \item $\hat{\sf B}'$ gets ${\sf B}'$'s auxiliary quantum input $W$ and working registers $X$. \item\label{step:get-coin} $\hat{\sf B}'$ sends $\mathtt{start}$ and then $\top$ to $\cF$. It receives a uniformly random $\mathtt{coin}$. \item Depending on the value of $\mathtt{coin}$, $\hat{\sf B}'$ applies the corresponding circuit $R_{\mathtt{coin}}$ with input $W, X, {\sf B}'$ and $\mathtt{coin}$. \item $\hat{\sf B}'$ receives output register $Y$ with $\ket{\phi_{good}(\psi)}$ and ``measures the conversation'' to retrieve the corresponding $\big( \commitx{a}{r},b,\open{a}{r} \big)$. It outputs whatever ${\sf B}'$ outputs.
\end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal-World Simulation against dishonest Bob.} \label{fig:simulationB} \end{figure} First note that the superposition constructed as described above in circuit $Q$ in Step~(1.) corresponds to all possible random choices of values in the real protocol. Furthermore, the circuit models any possible strategy of quantum ${\sf B}'$ in Step~(2.), depending on control register $\ket{\commitx{a}{r}}_{A_1}$. The $\op{CNOT}$-operation on $(B,G)$ in Step~(3.), followed by a standard measurement of $G$, indicates whether the guess $b'$ on ${\sf B}'$'s choice $b$ was correct. If that was not the case (i.e.~$b \neq b'$ and measurement result 1), the system gets quantumly rewound by applying reverse transformations (3)-(1), followed by a phase-flip operation. The procedure is repeated until the measurement outcome is 0 and hence $b=b'$. Watrous' technique then guarantees that, for negligible advantage $\varepsilon$ and a lower bound $p_0 \geq \frac14$, $\varepsilon'$ is negligible. Thus, the final output of the simulation is close to the ``good'' state of a successful simulation. More specifically, the output $\rho(\psi)$ of $R_{\mathtt{coin}}$ has square-fidelity close to 1 with state $\ket{\phi_{good}(\psi)}$ of a successful simulation, i.e.~ $$\bra{\phi_{good}(\psi)}\rho(\psi)\ket{\phi_{good}(\psi)} \geq 1 - \varepsilon' \, , $$ where $\varepsilon' = 16 \ \varepsilon \log^2(1 / \varepsilon) / (p_0^2 \ (1-p_0)^2)$. Last, note that all operations in $Q$ (and therewith in $R_{\mathtt{coin}}$) can be performed by polynomial-size circuits, and thus, the simulator has polynomial size (in the worst case). It follows that the output of the ideal simulation is indistinguishable from the output in the real world for any quantum-computationally bounded ${\sf B}'$. \end{proof} \section{Composability} \label{sec:composability.coin} As already discussed in the previous part, there are several composition frameworks proposed for the quantum setting, but for sequential composition we will argue along the lines of our security framework (Section~\ref{sec:sequential.composition.coin}). In Section~\ref{sec:general.composition.coin}, we will use an extended commitment construction to achieve a more general composability in the CRS-model. Note that only sequential composition allows us to do coin-flipping from scratch. \subsection{Sequential Composition} \label{sec:sequential.composition.coin} \index{composition!sequential} Recall that we prove correctness and security for our single coin-flip according to the security framework as described in Section~\ref{sec:security.definition}, with the one modification that we do not assume a common reference string in the simulation against a dishonest Bob (see Theorem~\ref{thm:computational.security.single.coin}). However, we can still apply the Composition Theorems I and II (Theorems~\ref{thm:composition.unconditional} and~\ref{thm:composition.computational}), where we also omit the reference string in the latter. We will state the composition result explicitly here. \begin{corollary} \label{cor:sequential.coin.flipping} Let $\pi_i = \cPi{{\sf A}}{{\sf B}}$ and $\mathcal{F}_i = \cF$, and let $\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$ be a classical two-party hybrid protocol which makes at most $\ell=\mathit{poly}(n)$ calls to the functionalities.
Then, for every $i \in \set{1,\ldots,\ell}$, each protocol $\pi_i$ is a statistically secure implementation of $\mathcal{F}_i$ against $\mathfrak{A}$ and a computationally secure implementation of $\mathcal{F}_i$ against $\dBob_{\mathrm{poly}}$. \noindent It holds that there exists an ideal-world adversary $\hat{\sf A}' \in \mathfrak{A}$ such that $$ out_{{\sf A}',{\sf B}}^{\Sigma^{\pi_1\cdots\pi_\ell}} \approxs out_{\hat{\sf A}',\hat{\sf B}}^{\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell} } \, , $$ and an ideal-world adversary $\hat{\sf B}' \in \dBob_{\mathrm{poly}}$ such that for every efficient input sampler, we have $$ out_{{\sf A},{\sf B}'}^{\Sigma^{\pi_1\cdots\pi_\ell}} \approxq out_{\hat{\sf A},\hat{\sf B}'}^{\Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell} } \, . $$ \end{corollary} The ideal functionality for sequential coin-flipping, i.e.~$\cxF{\ell} = \Sigma^{\mathcal{F}_1\cdots\mathcal{F}_\ell}$, is depicted in Figure~\ref{fig:clambdaF}. Note that $\cxF{\ell}$ is in fact derived from composing the functionality $\cF$ of a single coin-flip sequentially but interpreted more directly, e.g.\ it does not output the bits one after another but as a string, and thus, does not output the preceding coins in case of an intermediate abort. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Functionality} $\cxF{\ell}$:\\[-4ex] \begin{enumerate} \item Upon receiving requests $\mathtt{start}$ from both Alice and Bob, $\cxF{\ell}\,$ outputs uniformly random $h \in_R \{0,1\}^\ell$ to Alice. \item It then waits to receive her second input $\top$ or $\bot$ and outputs $h$ or $\bot$ to Bob, respectively. \end{enumerate} \vspace{-1ex} \end{framed} \vspace{-2ex} \small \caption{The Ideal Functionality for Sequential $\ell$-bit Coin-Flipping.} \label{fig:clambdaF} \vspace{-1ex} \end{figure} \subsection{General Composition} \label{sec:general.composition.coin} \index{composition!general} For our coin-flipping protocol without set-up, we cannot claim universal composability. We do not require (nor obtain) an efficient simulator in case of unconditional security against dishonest Alice, and furthermore, we allow rewinding in case of dishonest Bob. These two aspects contradict the universal composability framework. Efficient simulation requires some trapdoor information in the commitment construction, which is available only to a simulator, so that it is able to extract dishonest Alice's choice bit efficiently. Therefore, we have to extend the commitment scheme by including an extraction trapdoor. To circumvent the necessity of rewinding dishonest Bob, we further extend the scheme with respect to equivocability, i.e., the simulator can now construct a valid commitment, which can later be opened to both bit values as desired. Note that with such requirements, the CRS-model seems unavoidable. An appropriate extended construction is proposed in Section~\ref{sec:extended.commit.coin}. The real-world key consists of commitment key ${\tt pkB}$ and (invalid) instance $x'$. During simulation against ${\sf A}'$, $\hat{\sf A}'$ chooses ${\tt pkB}$ with matching decryption key ${\tt sk}$ and therefore, it can extract ${\sf A}'$'s choice bit $a$ by decrypting both commitments $C_0$ and $C_1$. In both worlds, the commitment is unconditionally binding. During simulation against ${\sf B}'$, $\hat{\sf B}'$ chooses commitment key ${\tt pkH}$ and (valid) instance $x$. Hence, the commitment is unconditionally hiding and can be equivocated by using $w$ to compute two valid replies in the underlying $\Sigma$-protocol.
Quantum-computational security in real life follows from the indistinguishability of the keys ${\tt pkB}$ and ${\tt pkH}$ and the indistinguishability of the instances $x$ and $x'$, and efficiency of both simulations is ensured due to extraction and equivocability. Again, by combining our extended construction in the CRS-model providing efficient simulations on both sides with the results of Section~\ref{sec:extended.commit.coin} and~\cite[Theorem 20]{Unruh10}, we get the following result\index{composition!quantum-UC} that $\cPi{{\sf A}}{{\sf B}}$ \emph{computationally quantum-UC-emulates} its corresponding ideal functionality $\cF$ for \emph{both dishonest players}. In the next Chapter~\ref{chap:framework}, we will show another method of achieving full simulatability in the plain model without any set-up assumption, when both players are poly-time bounded. \clearemptydoublepage \chapter{Amplification Framework for Strong Coins} \label{chap:framework} \index{coin-flipping!amplification} Here, we present a framework that amplifies weak security requirements on coins into very strong properties, with the final result of a quantum-secure and fully simulatable coin-flipping protocol, which can be implemented in the plain model from scratch. The results in this chapter are joint work with Nielsen~\cite{LN10}. \section{Motivation} \label{sec:framework.motivation} Coin-flipping of a single coin is in itself an intriguing and prolific primitive in cryptographic protocol theory. Its full potential is tapped in the possibility of flipping a string of coins, which opens up various applications and implementations without any set-up assumptions. We will later in Chapter~\ref{chap:coin.flip.applications} discuss some examples thereof. In this chapter, we first investigate the different degrees of security that a string of coins can acquire. Then, we propose and prove constructions that allow us to amplify the respective degrees of security such that weaker coins are converted into very strong ones in a straightforward way.\footnote{For the sake of clarity, we note that we use the (intuitive) literal interpretation of ``weak'' and ``strong'' coins related to their degrees of security, which differs from their definitions in the quantum literature (see also Section~\ref{sec:primitives.coin.flip}).} Our method only assumes mixed commitment schemes, which we know how to construct with quantum security; no other assumptions are put forward. Our final result is a coin-flipping protocol, which is fully simulatable in polynomial time, even against poly-sized \emph{quantum} adversaries on both sides, and which can be implemented with quantum-computational security in the plain model from scratch. Our method of amplifying the security of coin-flipping also applies to potential \emph{constant-round} coin-flipping. Such a strong and efficient construction would require a basic quantum-secure coin-flip protocol with long outcomes (in constant round), and poly-time simulatability on one side. Its construction, however, is still a fascinating open problem in the quantum world. \section{Security Notions} \label{sec:notions.coin.flip} We denote a generic protocol with a $\lambda$-bit coin-string as output by $\cxPi{{\sf A}}{{\sf B}}{\lambda}$, corresponding to an ideal functionality $\cxF{\lambda}$.
Recall that the outcome of such a protocol is $c \in \{0,1\}^\lambda \cup \set{\bot}$, i.e., either a $\lambda$-bit string or an error message.\footnote{We want to stress that throughout the chapter, a reference to any \emph{coin-flip} is understood as one run of coin-flipping with a coin-string outcome.} We will use several security parameters, indicating the length of coin-strings for different purposes. The lengths of a coin-flip yielding a key and a challenge are denoted by $\kappa$ and $\sigma$, respectively, and the length of a final coin-flip is indicated by $\ell$, i.e., we allow that $\lambda$ is a function of the respective parameter, e.g.\ $\lambda(\kappa)$, but we write $\kappa$ instead. Throughout this chapter, we restrict both players Alice and Bob to the families $\dAlice_{\mathrm{poly}}$ and $\dBob_{\mathrm{poly}}$ of classical polynomial-time strategies, i.e.\ for the honest case, we require ${\sf A}, \hat{\sf A} \in \dAlice_{\mathrm{poly}}$ and ${\sf B}, \hat{\sf B} \in \dBob_{\mathrm{poly}}$, as well as for possibly quantum dishonest entities, we demand ${\sf A}', \hat{\sf A}' \in \dAlice_{\mathrm{poly}}$ and ${\sf B}', \hat{\sf B}' \in \dBob_{\mathrm{poly}}$. We want to stress here that, in contrast to previous chapters, both players are poly-time bounded. This means, in particular, that the ideal functionality is defined symmetrically such that the respective dishonest party always has the option to abort. For clarity, we will explicitly show the ideal functionalities in the case of both players being honest (Figure~\ref{fig:clambdaF.honest}) and in the case of dishonest Alice and honest Bob (Figure~\ref{fig:clambdaF.dishonest}). The latter then also applies to honest Alice and dishonest Bob by simply switching sides and names. \begin{figure}[h] \begin{framed} \noindent\hspace{-1.5ex} {\sc Functionality} $\cxF{\lambda}$ {\sc with honest players:}\\ Upon receiving requests $\mathtt{start}$ from both Alice and Bob, $\cxF{\lambda}\,$ outputs uniformly random $h \in_R \{0,1\}^\lambda$ to Alice and Bob. \vspace{-1ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal Functionality for $\lambda$-bit Coin-Flipping (without Corruption).} \label{fig:clambdaF.honest} \vspace{-1ex} \end{figure} \begin{figure}[h] \begin{framed} \noindent\hspace{-1.5ex} {\sc Functionality} $\cxF{\lambda}$ {\sc with dishonest Alice:}\\[-4ex] \begin{enumerate} \item Upon receiving requests $\mathtt{start}$ from both Alice and Bob, $\cxF{\lambda}\,$ outputs uniformly random $h \in_R \{0,1\}^\lambda$ to Alice. \item It then waits to receive her second input $\top$ or $\bot$ and outputs $h$ or $\bot$ to Bob, respectively. \end{enumerate} \vspace{-1ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal Functionality for $\lambda$-bit Coin-Flipping (with Corruption).} \label{fig:clambdaF.dishonest} \end{figure} Recall that the \emph{joint output representation} of a protocol execution is denoted by $out_{{\sf A},{\sf B}}^\Pi \,$ with $\Pi = \cxPi{{\sf A}}{{\sf B}}{\lambda}$ and given here for the case of honest players. The same notation with $\mathcal{F} = \cxF{\lambda}$ and $\hat{\sf A}, \hat{\sf B}$ applies in the ideal world as $out_{\hat{\sf A},\hat{\sf B}}^\mathcal{F}$, where the players invoke the ideal functionality $\cxF{\lambda}$ and output whatever they obtain from it. We need additional notation here, describing the \emph{outcome} of a protocol run between e.g.\ honest ${\sf A}$ and ${\sf B}$, namely $c \leftarrow \cxPi{{\sf A}}{{\sf B}}{\lambda}$.
\index{coin-flipping!uncontrollable} \index{coin-flipping!random} \index{coin-flipping!enforceable} We will define three flavors of security for coin-flipping protocols, namely \defterm{uncontrollable (uncont)}, \defterm{random} and \defterm{enforceable (force)}. The two sides can have different flavors. Then, if a protocol $\cxPi{{\sf A}}{{\sf B}}{\lambda}$ is, for instance, enforceable against Alice and random against Bob, we write $\pi^{(\texttt{force},\texttt{random})}$, and similarly for the eight other combinations of security. Note that for simplicity of notation, we will then omit the indexed name as well as the length of the coin, as they are clear from the context. Similar to the ideal functionality for the case of dishonest Alice, we define all three flavors for Alice's side only, as the definitions for Bob are analogous. The flavors are defined along the lines of the security framework introduced in Section~\ref{sec:security.definition} but with adaptations to reflect the particular context here. Recall that $U'$, $Z$, and $V$ denote dishonest Alice's quantum and classical input, and honest Bob's classical input, respectively. Note that an honest player's input is empty but models the invocation $\mathtt{start}$. Any input state $\rho_{U' Z V}$ is restricted to $ \rho_{U' Z V} = \rho_{\MC{U'}{Z}{V}}$, such that Alice's quantum and Bob's classical part are only correlated via Alice's classical~$Z$. We assume again a poly-size input sampler, which takes as input the security parameter, and then produces a valid input state $\rho_{U' Z V} = \rho_{\MC{U'}{Z}{V}}$ (and analogous $\rho_{U Z V'}$ in case of dishonest Bob). We stress that we require for all three security flavors and for all $c \in \{0,1\}^{\lambda}$ that $$\prob{c \leftarrow \cxPi{{\sf A}}{{\sf B}}{\lambda}} = 2^{-\lambda} \, ,$$ which implies that when both parties are honest, then the coin is unbiased. Below we only define the extra properties required for each of the three flavors.\\ We call a coin-flip \defterm{uncontrollable} against Alice if she cannot force the coin to hit some negligible subset, except with negligible probability. \begin{definition}[Uncontrollability against dishonest Alice] \label{def:uncont} We say that the protocol $\ \cxPi{{\sf A}}{{\sf B}}{\lambda} \ $ implements an \defterm{uncontrollable} coin-flip against dishonest Alice, if it holds for any poly-sized adversary ${\sf A}' \in \dAlice_{\mathrm{poly}}$ with inputs as specified above and all negligible subsets $Q \subset \{0,1\}^\lambda$ that the probability $$ \prob{c \leftarrow \cxPi{{\sf A}'}{{\sf B}}{\lambda}\, : \, c \in Q} \in \negl{\kappa} \, . $$ \end{definition} Note that we denote by $Q \subset \{0,1\}^\lambda$ a family of subsets $\set{Q(\kappa) \subset \{0,1\}^{\lambda(\kappa)}}_{\kappa \in \mathbb{N}}$ for security parameter $\kappa$. Then we call $Q$ negligible if $\vert Q(\kappa) \vert 2^{-\lambda(\kappa)}$ is negligible in $\kappa$. In other words, we call a subset negligible if it contains a negligible fraction of the elements in the set in which it lives.\\ We call a coin-flip \defterm{random} against Alice if she cannot enforce a non-uniformly random output string in $\{0,1\}^\lambda$, except by making the protocol fail on some chosen runs. That means she can at most lower the probability of certain output strings compared to the uniform case.
\begin{definition}[Randomness against dishonest Alice] \label{def:random} We say that protocol $\ \cxPi{{\sf A}}{{\sf B}}{\lambda} \ $ implements a \defterm{random} coin-flip against dishonest Alice, if it holds for any poly-sized adversary ${\sf A}' \in \dAlice_{\mathrm{poly}}$ with inputs as specified above that there exists an event $E$ such that $\prob{E} \in \negl{\kappa}$ and for all $x \in \{0,1\}^\lambda$ it holds that $$ \prob{c \leftarrow \cxPi{{\sf A}'}{{\sf B}}{\lambda} \, : \, c = x \, \vert \, \bar{E}} \leq 2^{-\lambda} \, . $$ \end{definition} It is obvious that if a coin-flip is random against Alice, then it is also an uncontrollable coin-flip against her: conditioned on $\bar{E}$, the probability of hitting a negligible subset $Q$ is at most $\vert Q \vert \cdot 2^{-\lambda}$, which is negligible. We will later discuss a generic transformation going in the other direction from uncontrollable to random coin-flipping.\\ We call a coin-flip \defterm{enforceable} against Alice, if it is possible, given a uniformly random $c$, to simulate a run of the protocol hitting exactly the outcome $c$, though we still allow that the corrupted party forces abort on some outcomes. \begin{definition}[Enforceability against dishonest Alice] \label{def:force} We call a protocol $ \ \cxPi{{\sf A}}{{\sf B}}{\lambda} \ $ \defterm{enforceable} against dishonest Alice, if it implements the ideal functionality $\ \cxF{\lambda} \ $ against her. \end{definition} In more detail, that means that for any poly-sized adversary ${\sf A}' \in \dAlice_{\mathrm{poly}}$, there exists an ideal-world adversary $\hat{\sf A}' \in \dAlice_{\mathrm{poly}}$ that simulates the protocol with ${\sf A}'$ as follows. $\hat{\sf A}'$ requests output $h \in \{0,1\}^\lambda$ from $\cxF{\lambda}$. Then it simulates a run of the coin-flipping protocol with ${\sf A}'$ and tries to enforce output $h$. If $\hat{\sf A}'$ succeeds, it inputs $\top$ as ${\sf A}'$'s second input to $\cxF{\lambda}$. In that case, $\cxF{\lambda}$ outputs $h$. Otherwise, $\hat{\sf A}'$ inputs $\bot$ to $\cxF{\lambda}$ as second input and $\cxF{\lambda}$ outputs $\bot$. The simulation is such that the ideal output is quantum-computationally indistinguishable from the output of an actual run of the protocol, i.e., $$ out_{{\sf A}',{\sf B}}^\Pi \approxq out_{\hat{\sf A}',\hat{\sf B}}^\mathcal{F} \, , $$ where $\Pi = \cxPi{{\sf A}'}{{\sf B}}{\lambda}$ and $\mathcal{F} = \cxF{\lambda}$. Note that an enforceable coin-flip is not necessarily a random coin-flip, as it is allowed that the outcome of an enforceable coin-flip is only quantum-computationally indistinguishable from uniformly random, whereas a random coin-flip is required to produce truly random outcomes on the non-aborting runs.\\ We defined an enforceable coin-flip against dishonest Alice to be a coin-flip simulatable on her side, implementing the corresponding ideal functionality against her. The same result with switched sides also holds for any poly-time bounded Bob. Thus, we obtain a coin-flip protocol for which we can simulate both sides\index{coin-flipping!fully simulatable} in polynomial time. Corollary~\ref{cor:double.simulatable} follows. \begin{corollary} \label{cor:double.simulatable} Let $\cxPi{{\sf A}}{{\sf B}}{\lambda}$ be an enforceable coin-flip against both parties Alice and Bob with ${\sf A} \in \dAlice_{\mathrm{poly}}$ and ${\sf B} \in \dBob_{\mathrm{poly}}$, i.e.~$\cxPi{{\sf A}}{{\sf B}}{\lambda} = \pi^{(\texttt{force},\texttt{force})}$.
Then $\pi^{(\texttt{force},\texttt{force})}$ is a \defterm{fully poly-time simulatable} coin-flipping protocol for the ideal functionality $\ \cxF{\lambda} \, $ with quantum-computational indistinguishability between the real and the ideal output. \end{corollary} Combining the part regarding simulatability in Corollary~\ref{cor:sequential.coin.flipping} (where we again omit the common reference string, in contrast to the original Composition Theorem II, Theorem~\ref{thm:composition.computational}) with the results of Corollary~\ref{cor:double.simulatable}, we can show that each protocol $\pi^{(\texttt{force},\texttt{force})}$ is a computationally secure implementation of $\cxF{\lambda}$ against both $\dAlice_{\mathrm{poly}}$ and $\dBob_{\mathrm{poly}}$. \begin{corollary} \label{cor:double.simulatable.sequential.composition} Protocol $\pi^{(\texttt{force},\texttt{force})}$ composes sequentially. \end{corollary} \section{Amplification Theorems} \label{sec:amplification.coin.flip} We now propose and prove theorems that allow us to amplify the security strength of coins. Ultimately, we aim at constructing a strong coin-flipping protocol $\pi^{(\texttt{force},\texttt{force})}$ with outcomes of any polynomial length $\ell$ in $\lambda$ from any weaker coin-flip protocol, i.e., either from a protocol $\pi^{(\texttt{force},\texttt{random})}$ producing one-bit outcomes (Section~\ref{sec:amplification.short.long}), or from a protocol $\pi^{(\texttt{force},\texttt{uncont})}$ giving outcomes of length $\kappa$, as described in Section~\ref{sec:amplification.uncont.random}. In both cases, the first step towards $\pi^{(\texttt{force},\texttt{force})}$ is to build a protocol $\pi^{(\texttt{force},\texttt{random})}$ with outcomes of length $\ell$. We want to stress that if the underlying protocol already produces $\ell$-bit outcomes and is constant round, then the resulting protocol $\pi^{(\texttt{force},\texttt{force})}$ will also be constant round. If we start from a protocol only producing constant-sized outcomes, then $\pi^{(\texttt{force},\texttt{force})}$ will use $O(\ell)$ times the number of rounds used by the underlying scheme. We note here that we do not know of any candidate protocol with flavor $(\texttt{force},\texttt{uncont})$ but not $(\texttt{force},\texttt{random})$. However, we consider it a contribution in itself to find the weakest security notion for coin-flipping that allows amplification to the final strong $(\texttt{force},\texttt{force})$ notion using a constant-round reduction. \subsection{From Short Outcomes to Long Outcomes} \label{sec:amplification.short.long} To obtain long coin-flip outcomes, we can repeat a given protocol $\pi^{(\texttt{force},\texttt{random})}$ with one-bit outcomes $\ell$ times in sequence to get a protocol $\pi^{(\texttt{force},\texttt{random})}$ with $\ell$-bit outcomes. A candidate for $\pi^{(\texttt{force},\texttt{random})}$ with one-bit outcomes is the protocol of Chapter~\ref{chap:coin.flip}, which is---in terms of this context---enforceable against one side in poly-time and random on the other side, with empty event $E$ according to Definition~\ref{def:random}, where the randomness guarantee even withstands an unbounded adversary. The protocol was argued to be sequentially composable according to Corollary~\ref{cor:sequential.coin.flipping}. Note that this protocol was previously described and proven secure as $\pi^{(\texttt{random},\texttt{force})}$.
However, due to the symmetric coin-flip definitions here and the restriction of entities to families of classical polynomial-time strategies, we can easily switch sides between ${\sf A}$ and ${\sf B}$. \subsection{From $(\texttt{force},\texttt{uncont})$ to $(\texttt{force},\texttt{random})$} \label{sec:amplification.uncont.random} Assume that we are given a protocol $\pi^{(\texttt{force},\texttt{uncont})}$ that only guarantees that Bob cannot force the coin to hit a negligible subset (except with negligible probability). We now amplify the security on Bob's side from $\defterm{uncontrollable}$ to $\defterm{random}$ and therewith obtain a protocol $\pi^{(\texttt{force},\texttt{random})}$, in which Bob cannot enforce a non-uniformly random output string, except by letting the protocol fail on some occasions. The stronger protocol $\pi^{(\texttt{force},\texttt{random})}$ is given in Figure~\ref{fig:force.random}. The underlying commitment $\mathtt{commit}$ denotes the commitment algorithm of the keyed mixed string commitment scheme as described in Section~\ref{sec:mixed.commit}. Recall that $\mathtt{commit}$ does not require actual unconditionally hiding keys, but rather it suffices to use uniformly random strings from $\{0,1\}^\kappa$, which unconditionally hide the plaintext, except with negligible probability. Using random strings is possible because most keys of the given domain are, in that sense, unconditionally hiding keys. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol} $\pi^{(\texttt{force},\texttt{random})}$: \\[-4ex] \begin{enumerate} \item ${\sf A}$ and ${\sf B}$ run $\pi^{(\texttt{force},\texttt{uncont})}$ to produce a public key $pk \in \{0,1\}^\kappa$. \item ${\sf A}$ samples $a \in_R \{0,1\}^\ell$, commits to it with $A = \commitk{a}{r}{pk}$ and randomizer $r \in_R \{0,1\}^\ell$, and sends $A$ to ${\sf B}$. \item ${\sf B}$ samples $b \in_R \{0,1\}^\ell$ and sends $b$ to ${\sf A}$. \item ${\sf A}$ opens $A$ towards ${\sf B}$. \item The outcome is $c = a \oplus b$. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \small \caption{Amplification from $(\texttt{force},\texttt{uncont})$ to $(\texttt{force},\texttt{random})$.} \label{fig:force.random} \end{figure} \begin{proposition} Protocol $\pi^{(\texttt{force},\texttt{random})}$ satisfies correctness, according to Definition~\ref{def:correctness}. \end{proposition} Correctness is obvious by inspection of the protocol. If both players are honest, they independently choose random strings $a$ and $b$. The result of these strings combined by the XOR-operation gives a uniformly random coin $c$ of length $\ell$. \begin{theorem} \label{thm:force.random} If $\pi^{(\texttt{force},\texttt{uncont})}$ is enforceable against Alice and uncontrollable against Bob, then protocol $\pi^{(\texttt{force},\texttt{random})}$ is enforceable against Alice and random against Bob. \end{theorem} \begin{proof}[ (\emphbf{Enforceability against Alice})] In case of corrupted ${\sf A}'$, $\hat{\sf A}'$ samples $(pk,sk) \leftarrow {\cal G}_{\tt B}$ as input. It then requests a uniformly random value $h$ from $\clF$. It runs $\pi^{(\texttt{force},\texttt{uncont})}$ with ${\sf A}'$, in which $\hat{\sf A}'$ enforces the outcome $pk$ in the first step. When ${\sf A}'$ sends commitment $A$, $\hat{\sf A}'$ uses $sk$ to decrypt $A$ to learn the unique string $a$ that $A$ can be opened to. $\hat{\sf A}'$ computes $b = h \oplus a$ and sends $b$ to ${\sf A}'$.
If ${\sf A}'$ opens commitment $A$ correctly, then the result is $c = a \oplus b = a \oplus (h \oplus a) = h$ as desired. In case she does not open correctly, $\hat{\sf A}'$ aborts with result $\bot$. Otherwise, $\hat{\sf A}'$ outputs whatever ${\sf A}'$ outputs. Since $h$ is uniformly random and independent of $A$ and $a$, it follows that $b = h \oplus a$ is uniformly random and independent of $A$, exactly as in the protocol. Therefore, the transcript of the simulation has the same distribution as the real protocol, except that $pk$ is uniform in $\mathcal{X}$ and not in $\{0,1\}^\kappa$. This is, however, quantum-computationally indistinguishable, as otherwise, ${\sf A}'$ could distinguish random access to samples from $\mathcal{X}$ from random access to samples from $\{0,1\}^\kappa$. The formal proof proceeds through a series of hybrids as described in full detail in the proof of Theorem~\ref{thm:force.force} in the next Section~\ref{sec:amplification.force.force}. The above two facts, namely that we hit $h$ whenever we do not abort and that the transcript of the simulation is quantum-computationally indistinguishable from the real protocol, show that the resulting protocol is enforceable against Alice and simulatable on Alice's side for functionality $\clF$, according to Definition~\ref{def:force}. \end{proof} \begin{proof}[ (\emphbf{Randomness against Bob})] For any ${\sf B}'$, $pk$ is uncontrollable, i.e.~$pk \in \{0,1\}^\kappa \setminus \mathcal{X}$, except with negligible probability, as $\mathcal{X}$ is negligible in $\{0,1\}^\kappa$. This, in particular, means that the commitment $A$ perfectly hides the value $a$. Therefore, $a$ is uniformly random and independent of $b$, and thus, $c = a \oplus b$ is uniformly random. This proves that the resulting coin-flip is random against Bob, according to Definition~\ref{def:random}. \end{proof} \subsection{From $(\texttt{force},\texttt{random})$ to $(\texttt{force},\texttt{force})$} \label{sec:amplification.force.force} We now show how to obtain a coin-flipping protocol, which is enforceable against both parties. Then, we can also claim by Corollary~\ref{cor:double.simulatable} that this protocol is a strong coin-flipping protocol, poly-time simulatable on both sides for the natural ideal functionality $\clF$. The protocol $\pi^{(\texttt{force},\texttt{force})}$ is described in Figure~\ref{fig:force.force}. Note that the final protocol makes two calls to a subprotocol with random flavor on one side and enforceability on the other side, but where the sides are interchanged for each instance, i.e.~$\pi^{(\texttt{force},\texttt{random})}$ and $\pi^{(\texttt{random},\texttt{force})}$. That means that we switch the players' roles as well as the direction of the messages. Furthermore, note that we use here the possibility of trapdoor openings in our extended commitment construction $\mathtt{COMMIT}$, based on secret sharing and mixed commitments, as described in detail in Section~\ref{sec:mixed.commit.trapdoor.opening}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol} $\pi^{(\texttt{force},\texttt{force})}$: \\[-4ex] \begin{enumerate} \item ${\sf A}$ and ${\sf B}$ run $\pi^{(\texttt{force},\texttt{random})}$ to produce a random public key $pk \in \{0,1\}^\kappa$. \item ${\sf A}$ computes and sends commitments $\Commitk{a}{(s,r)}{pk} = (A_1,\ldots,A_\Sigma) $ to ${\sf B}$. In more detail, ${\sf A}$ samples uniformly random $a, s \in \mathbb{F}^\sigma$.
She then computes $\mathtt{sss}(a;s) = (a_1,\ldots,a_\Sigma)$ and $A_i = \commitk{a_i}{r_i}{pk}$ for all $i = 1, \ldots, \Sigma$. \item ${\sf B}$ samples uniformly random $b \in \{0,1\}^\ell$ and sends $b$ to ${\sf A}$. \item ${\sf A}$ sends secret shares $(a_1,\ldots,a_\Sigma)$ to ${\sf B}$. If $(a_1, \ldots, a_\Sigma)$ is not consistent with a polynomial of degree at most $(2\sigma-1)$, ${\sf B}$ aborts. \item ${\sf A}$ and ${\sf B}$ run $\pi^{(\texttt{random},\texttt{force})}$ to produce a challenge $S \subset \set{1,\ldots,\Sigma}$ of length $\vert S \vert = \sigma$. \item ${\sf A}$ sends $r|_S$ to ${\sf B}$. \item ${\sf B}$ checks if $A_i = \commitk{a_i}{r_i}{pk}$ for all $i \in S$. If that is the case, ${\sf B}$ computes the message $a \in \mathbb{F}^\sigma$ consistent with $(a_1, \ldots, a_\Sigma)$ and the outcome of the protocol is $c = a \oplus b$. Otherwise, ${\sf B}$ aborts and the outcome is $c = \bot\,$. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \small \caption{Amplification from $(\texttt{force},\texttt{random})$ to $(\texttt{force},\texttt{force})$.} \label{fig:force.force} \end{figure} \begin{proposition} Protocol $\pi^{(\texttt{force},\texttt{force})}$ satisfies correctness, according to Definition~\ref{def:correctness}. \end{proposition} Again, correctness can be trivially checked, first by observing that honest players independently input uniformly random strings $a$ and $b$, and second by verifying that these strings combined by XOR result in a uniformly random coin $c$ of length $\ell$. \begin{theorem} \label{thm:force.force} If $\pi^{(\texttt{force},\texttt{random})}$ is enforceable against Alice and random against Bob, then protocol $\pi^{(\texttt{force},\texttt{force})}$ is enforceable against both Alice and Bob. \end{theorem} \begin{proof}[ (\emphbf{Enforceability against Alice})] If ${\sf A}'$ is corrupted, $\hat{\sf A}'$ samples $(pk,sk) \leftarrow {\cal G}_{\tt B}$ as input and enforces $\pi^{(\texttt{force},\texttt{random})}$ in the first step to hit the outcome $pk$. It then requests value $h$ from $\clF$. When ${\sf A}'$ sends commitments $(A_1,\ldots,A_\Sigma)$, $\hat{\sf A}'$ uses $sk$ to extract $a'$ with $\big( a'_1,\ldots,a'_\Sigma \big) = \big( \xtr{A_1}{sk},\ldots,\xtr{A_\Sigma}{sk} \big)$. $\hat{\sf A}'$ then sets $b = h \oplus a'$, and sends $b$ to ${\sf A}'$. Then $\hat{\sf A}'$ finishes the protocol honestly. In the following, we will prove that the transcript is quantum-computationally indistinguishable from the real protocol and that if $c \neq \bot$, then $c = h$, except with negligible probability. First, we show indistinguishability. The proof proceeds via a hybrid\index{hybrid argument} argument.\footnote{Briefly, a hybrid argument is a proof technique to show that two (extreme) distributions are computationally indistinguishable by proceeding through several (adjacent) hybrid distributions. If all adjacent distributions are pairwise computationally indistinguishable, it follows by transitivity that the two end points are so as well. We want to point out that we are not subject to any restrictions in how to obtain the hybrid distributions as long as we maintain indistinguishability.} Let $\D{0}$ denote the distribution of the output of the simulation as described. We now change the simulation such that, instead of sending $b = h \oplus a'$, we simply choose a uniformly random $b \in \{0,1\}^\ell$ and then output the corresponding $h = a' \oplus b$.
Let $\D{1}$ denote the distribution of the output of the simulation after this change. Since $h$ is uniformly random and independent of $a'$ in the first case, it follows that $b = h \oplus a'$ is uniformly random. Therefore, choosing a uniformly random $b$ in the second case does not change the distribution at all, and it follows that $\D{0} = \D{1}$. By sending a uniformly random $b$, we are in a situation where we do not need the decryption key $sk$ to produce $\D{1}$, as we no longer need to know $a'$. So we can now make the further change that, instead of forcing $\pi^{(\texttt{force},\texttt{random})}$ to produce a random public key $pk \in \mathcal{X}$, we force it to hit a random public key $pk \in \{0,1\}^\kappa$. This produces a distribution $\D{2}$ of the output of the simulation. Since $\D{1}$ and $\D{2}$ only differ in the key we enforce $\pi^{(\texttt{force},\texttt{random})}$ to hit and the simulation is quantum poly-time, there exists a poly-sized circuit $Q$, such that $Q(\mathcal{U}(\mathcal{X})) = \D{1}$ and $Q(\mathcal{U}(\{0,1\}^\kappa)) = \D{2}$, where $\mathcal{U}(\mathcal{X})$ and $\mathcal{U}(\{0,1\}^\kappa)$ denote the uniform distribution on $\mathcal{X}$ and the uniform distribution on $\{0,1\}^\kappa$, respectively. As $\mathcal{U}(\mathcal{X})$ and $\mathcal{U}(\{0,1\}^\kappa)$ are quantum-computationally indistinguishable, and $Q$ is poly-sized, it follows that $Q(\mathcal{U}(\mathcal{X}))$ and $Q(\mathcal{U}(\{0,1\}^\kappa))$ are quantum-computationally indistinguishable, and therewith, $\D{1} \approxq \D{2}$. A final change to the simulation is to run $\pi^{(\texttt{force},\texttt{random})}$ honestly instead of enforcing a uniformly random $pk \in \{0,1\}^\kappa$. Let $\D{3}$ denote the distribution obtained after this change. As given in Definition~\ref{def:force}, real runs of $\pi^{(\texttt{force},\texttt{random})}$ and runs enforcing a uniformly random value are quantum-computationally indistinguishable. Using a similar argument as above, where $Q$ is the part of the protocol following the run of $\pi^{(\texttt{force},\texttt{random})}$, we get that $\D{2} \approxq \D{3}$. Finally, by transitivity, it follows that $\D{0} \approxq \D{3}$. The observation that $\D{0}$ is the distribution of the simulation and $\D{3}$ is the actual distribution of the real protocol concludes the first part of the proof. We now argue the second part, i.e., if $c \neq \bot$, then $c = h$, except with negligible probability. This follows by arguing soundness of the commitment scheme $\mathtt{COMMIT}$, according to Lemma~\ref{lemma:soundness.sss}. Recall that, if $pk \in \mathcal{X}$, then the probability that ${\sf A}'$ can open any $A$ to a plaintext different from $\xtr{A}{sk}$ is at most $(\frac34)^\sigma$ when $S$ is picked uniformly at random and independent of $A$. The requirement on $S$ is, however, guaranteed (except with negligible probability) by the $\texttt{random}$ flavor of the underlying protocol $\pi^{(\texttt{random},\texttt{force})}$ producing $S$. This concludes the proof of enforceability against Alice, as given in Definition~\ref{def:force}. \end{proof} \begin{proof}[ (\emphbf{Enforceability against Bob})] To prove enforceability against corrupted ${\sf B}'$, we construct a simulator $\hat{\sf B}'$ as shown in Figure~\ref{fig:force.force.dhB}. It is straightforward to verify that the simulation always ensures that $c = h$, if ${\sf B}'$ does not abort.
However, we must explicitly argue that the simulation is quantum-computationally indistinguishable from the real protocol. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Simulation} $\hat{\sf B}'$ for $\pi^{(\texttt{force},\texttt{force})}$: \\[-4ex] \begin{enumerate} \item $\hat{\sf B}'$ requests $h$ from $\clF$ and runs $\pi^{(\texttt{force},\texttt{random})}$ honestly with ${\sf B}'$ to produce a uniformly random public key $pk \in \{0,1\}^\kappa$. \item $\hat{\sf B}'$ computes $\Commitk{a'}{(s,r)}{pk} = (A_1,\ldots,A_\Sigma)$ for uniformly random $a',s \in \mathbb{F}^\sigma$ and sends $(A_1,\ldots,A_\Sigma)$ to ${\sf B}'$. \item $\hat{\sf B}'$ receives $b$ from ${\sf B}'$. \item\label{step:trapdoor} $\hat{\sf B}'$ computes $a = b \oplus h$. It then picks a uniformly random subset $S \subset \set{1,\ldots,\Sigma}$ with $|S| = \sigma$, and lets $a'|_S$ be the $\sigma$ messages committed to by $A|_S$. Then, it interpolates the unique polynomial $f$ of degree at most $(2\sigma-1)$ for which $f(i) = a'_i$ for $i \in S$ and for which $f(-i+1) = a_i$ for $i \in \set{1,\ldots,\Sigma} \setminus S$. Finally, it sends $(f(1), \ldots, f(\Sigma))$ to ${\sf B}'$. \item During the run of $\pi^{(\texttt{random},\texttt{force})}$, $\hat{\sf B}'$ enforces the challenge $S$. \item\label{step:test.on.S} $\hat{\sf B}'$ sends $r|_S$ to ${\sf B}'$. \item $\hat{\sf B}'$ outputs whatever ${\sf B}'$ outputs. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Simulation for Bob's $\texttt{force}$ in $\pi^{(\texttt{force},\texttt{force})}$.} \label{fig:force.force.dhB} \end{figure} Indistinguishability follows by first arguing that the probability for $pk \notin \{0,1\}^\kappa \setminus \mathcal{X}$ is negligible. This follows since $\mathcal{X}$ is negligible in $\{0,1\}^\kappa$ and since $pk$, produced with flavor $\texttt{random}$ against ${\sf B}'$ by $\pi^{(\texttt{force},\texttt{random})}$, is uniformly random in $\{0,1\}^\kappa$, except with negligible probability. Second, we have to show that if $pk \in \{0,1\}^\kappa \setminus \mathcal{X}$, then the simulation is quantum-computationally close to the real protocol. This can be shown via the following hybrid argument. Let $\D{0}$ be the distribution of the output of the simulation and let $\D{1}$ be the distribution of the output of the simulation where we send all $a'_i$ for all $i \in \set{1,\ldots,\Sigma}$ at the end of Step~(\ref{step:trapdoor}.). Since commitments by $\commitk{\cdot}{\cdot}{pk}$ are unconditionally hiding in case of $pk\in \{0,1\}^\kappa \setminus \mathcal{X}$, commitments by $\Commitk{\cdot}{\cdot}{pk}$ are unconditionally hiding as well. Furthermore, both $a'$ and $a$ are uniformly random, so we obtain statistical closeness between $(a',\Commitk{a'}{(s,r)}{pk})$ and $(a,\Commitk{a'}{(s,r)}{pk})$. Since the distributions $\D{0}$ and $\D{1}$ can each be produced by a poly-sized circuit applied to either $(a',\Commitk{a'}{(s,r)}{pk})$ or $(a,\Commitk{a'}{(s,r)}{pk})$, it holds that $\D{0} \approxq \D{1}$. Now, let $\D{2}$ be the distribution obtained by not simulating the opening via the trapdoor, but instead doing it honestly to the value committed to, i.e.~$(a',r)$. We still use the challenge $S$ from the forced run of $\pi^{(\texttt{random},\texttt{force})}$ though. However, for uniformly random challenges, real runs are quantum-computationally indistinguishable from simulated runs, and we get $\D{1} \approxq \D{2}$.
Next, let $\D{3}$ be the distribution of the output of the simulation where we run $\pi^{(\texttt{random},\texttt{force})}$ honestly instead of enforcing outcome $S$. We then use the honestly produced $S'$ in the proof in Step~(\ref{step:test.on.S}.)~instead of the enforced $S$. We can do this, as we modified the process leading to $\D{2}$ towards an honest opening without any trapdoor, so we no longer need to enforce a particular challenge. Under the assumption that $\pi^{(\texttt{random},\texttt{force})}$ is enforceable against ${\sf B}'$, and observing that real runs are quantum-computationally indistinguishable from runs enforcing uniformly random outcomes, we obtain $\D{2} \approxq \D{3}$. Finally, we get by transitivity that $\D{0} \approxq \D{3}$ and conclude the proof by observing that after our changes, the process producing $\D{3}$ is the real protocol. This concludes the proof of enforceability against Bob, according to Definition~\ref{def:force} with switched sides. \end{proof} \chapter{Applications} \label{chap:coin.flip.applications} Coin-flipping as a stand-alone tool can be used rather freely in several contexts. Shared randomness is a crucial ingredient in many cryptographic implementations. Applications in the common-reference-string model, which assumes a random public string available before communication, achieve great efficiency and composability, and many protocols have been proposed in this model. In this chapter, we will discuss example applications that rely on shared randomness. Two applications relate to the context of zero-knowledge. First, we show a simple transformation from non-interactive zero-knowledge to interactive quantum zero-knowledge. This result appeared in~\cite{DL09}. Then, we propose a quantum-secure zero-knowledge proof of knowledge, which is interesting also in that the construction relies not only on initial randomness but also on enforceable randomness as discussed in Chapter~\ref{chap:framework}. This construction is part of the results in~\cite{LN10}. Last, we discuss the interactive generation of a common reference string for the proposed lattice-based instantiation of the compiler construction, proposed in Chapter~\ref{chap:hybrid.security} and applied in Chapter~\ref{chap:hybrid.security.applications}. This result appeared in~\cite{DFLSS09} and~\cite{DL09}. \section{Interactive Quantum Zero-Knowledge} \label{sec:coin.iqzk} \index{zero-knowledge!proofs} Zero-knowledge proofs, as described in Section~\ref{sec:primitives.zk}, are an important building block for larger cryptographic protocols, capturing the notion of convincing the verifier of the validity of a statement while revealing no information beyond that. \subsection{Motivation and Related Work} \label{sec:iqzk.motivation} As in the classical case, where ZK protocols exist if one-way functions exist, quantum zero-knowledge (QZK) is possible under the assumption that quantum one-way functions exist. In~\cite{K03}, Kobayashi showed that a common reference string or shared entanglement is necessary for non-interactive quantum zero-knowledge. Interactive quantum zero-knowledge protocols in restricted settings were proposed by Watrous in the honest-verifier setting~\cite{Watrous02} and by Damg{\aa}rd \emph{et al.} in the CRS-model~\cite{DFS04}, where the latter introduced the first $\Sigma$-protocols for QZK withstanding even active quantum attacks. In~\cite{Watrous09}, Watrous then proved that several interactive protocols are zero-knowledge against general quantum attacks.
It has also been shown that any honest-verifier zero-knowledge protocol can be made zero-knowledge against any classical and quantum verifier~\cite{HKSZ08}. In more detail, they showed how to transform a $\Sigma$-protocol with stage-by-stage honest-verifier zero-knowledge into a new $\Sigma$-protocol that is zero-knowledge against all verifiers. Special bit commitment schemes are proposed to limit the number of rounds, and each round is viewed as a stage in which an honest-verifier simulator is assumed. Then, by using a technique of~\cite{DGW94}, each stage can be converted to obtain zero-knowledge against any classical verifier. Finally, Watrous' quantum rewinding lemma is applied in each stage to prove zero-knowledge also against any quantum verifier. We now show a simple transformation from non-interactive (quantum) zero-knowledge to interactive quantum zero-knowledge by combining the coin-flip protocol with any non-interactive ZK protocol. Note that a non-interactive zero-knowledge proof system can be trivially turned into an interactive honest-verifier zero-knowledge proof system by just letting the verifier choose the reference string, and therefore, this consequence of our result also follows from~\cite{HKSZ08}. However, our proof is much simpler and the coin-flipping is not restricted to a specific zero-knowledge construction. In addition, we obtain the corollary that if there exist mixed commitments, then we can achieve interactive quantum zero-knowledge against any poly-sized quantum adversary without any set-up assumptions. \subsection{Formal Definition of Zero-Knowledge Proofs} \label{sec:iqzk.definitions} In Section~\ref{sec:primitives.zk}, we gave an intuitive introduction to zero-knowledge proof systems. Here, we make this description formal. Recall that a zero-knowledge proof for set $\cal L$ on common input $x$ yields no other knowledge than the validity of membership $x \in \cal{L}$. An interactive proof system must fulfill completeness and soundness, as given in Definitions~\ref{def:iqzk.complete} and~\ref{def:iqzk.sound}, and is quantum zero-knowledge (IQZK), if in addition Definition~\ref{def:iqzk.zero-knowledge} holds. Note that in the following, we let ${\sf A}$ be the prover and let ${\sf B}$ denote the verifier. \index{zero-knowledge!proofs!completeness} \index{zero-knowledge!proofs!soundness} \begin{definition}[Completeness] \label{def:iqzk.complete} If $x \in \cal L$, the probability that $({\sf A},{\sf B})$ rejects $x$ is negligible in the length of $x$. \end{definition} \begin{definition}[Soundness] \label{def:iqzk.sound} If $x \notin \cal L$, then for any unbounded prover ${\sf A}'$, the probability that $({\sf A}',{\sf B})$ accepts $x$ is negligible in the length of $x$. \end{definition} \begin{definition}[Zero-Knowledge] \label{def:iqzk.zero-knowledge} An interactive proof system $({\sf A},{\sf B})$ for language $\cal L$ is quantum zero-knowledge, if for any quantum verifier ${\sf B}'$, there exists a simulator $\hat{S}$ with output quantum-computationally indistinguishable from the real output, i.e., $$ out^{\hat{S}} \approxq out^{\pi(x,\omega)}_{{\sf A},{\sf B}'} \, , $$ on common input $x \in \cal L$ and arbitrary additional (quantum) input to ${\sf B}'$. \end{definition} According to~\cite{BFM88}, the interaction between prover and verifier can be replaced by a common reference string. Then, there is only a single message sent from prover to verifier, who makes the final decision whether to accept or not.
More precisely, both parties ${\sf A}$ and ${\sf B}$ get common input $x$. A common reference string $\omega$ of size $\kappa$ allows the prover ${\sf A}$, who knows a witness $w$, to give a non-interactive zero-knowledge proof $\pi(\omega,x)$ to a (possibly quantum) verifier, poly-time bounded in $\kappa$. For simplicity, we consider the proof of a single theorem of size smaller than $n$ (and $n \leq \kappa$), i.e.~${\cal L}_\kappa = \set{ x \in {\cal L} \; \vert \; |x| \leq \kappa}$. The extension to a more general notion is rather straightforward (see~\cite{BFM88} for details).\\ Completeness and soundness hold as defined above, but we explicitly state the definitions as given in~\cite{BFM88} and adapted to our context. \index{zero-knowledge!non-interactive} \begin{definition}[Completeness in NIZK] \label{def:nizk.complete} There exists a constant $c > 0$ such that for all $x \in {\cal L}_\kappa$, the acceptance probability is overwhelming, i.e., $$ \pro{complete} = \prob{\omega \leftarrow \{0,1\}^{n^c}, \pi(x,\omega) \leftarrow {\sf A}(\omega,x,w): {\sf B}(\omega,x,\pi(x,\omega)) = 1} > 1 - \varepsilon $$ where $\varepsilon$ is negligible in $n$ (and $\kappa$). \end{definition} \begin{definition}[Soundness in NIZK] \label{def:nizk.sound} There exists a constant $c > 0$ such that for all $x \notin {\cal L}_\kappa$ and for all provers ${\sf A}'$, the acceptance probability is negligible, i.e., $$ \pro{sound} = \prob{\omega \leftarrow \{0,1\}^{n^c}, \pi(x,\omega) \leftarrow {\sf A}'(\omega,x): {\sf B}(\omega,x,\pi(x,\omega)) = 1} \leq \varepsilon' $$ where $\varepsilon'$ is negligible in $n$ (and $\kappa$). \end{definition} The non-interactive zero-knowledge requirement is simpler than for general zero-knowledge for the following reason. Since all information is communicated unidirectionally from prover to verifier in the protocol, the verifier does not influence the distribution in the real world. Thus, in the ideal world, we require a simulator that only outputs pairs that are (quantum) computationally indistinguishable from the distribution of pairs $(\omega,\pi(x,\omega))$ in the real world, where $\pi$ is generated with uniformly chosen $\omega$ and random $x$.\footnote{Indistinguishability, in turn, implies that the proof construction withstands quantum-computationally bounded verifiers.} In other words, we can eliminate the quantification over all ${\sf B}'$ in the zero-knowledge definition. \begin{definition}[Non-Interactive Zero-Knowledge] \label{def:nizk.zero-knowledge} There exist a constant $c > 0$ and a simulator $\hat{S}$ with output quantum-computationally indistinguishable from the real output, i.e., $$ out^{\hat{S}(x)} \approxq out^{\pi(x,\omega)}_{{\sf A},{\sf B}'} \, , $$ where $out^{\hat{S}(x)} = \set{ \omega \leftarrow \{0,1\}^{|x|^c}, \pi(x,\omega) \leftarrow {\sf A}(x,\omega): (\omega,\pi(x,\omega))}$. \end{definition} \subsection{The Transformation} \label{sec:iqzk.transformation} We obtain a generic transformation of non-interactive zero-knowledge into interactive quantum zero-knowledge as follows. In each invocation, protocol $\mathtt{COIN}$ generates a truly random $\mathtt{coin}$ even in the case of a malicious quantum ${\sf B}'$. A string of such coins, obtained by sequential composition as described in Section~\ref{sec:sequential.composition.coin} and captured by the ideal functionality in Figure~\ref{fig:clambdaF}, is then used as reference string in any ($\NIZK$)-subprotocol with properties as defined previously.
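As an illustration of the composition just described, the following Python sketch assembles the reference string from sequential coin-flips and then runs the single-message ($\NIZK$)-subprotocol on top of it. The handles \texttt{flip\_coin}, \texttt{nizk\_prove} and \texttt{nizk\_verify} are hypothetical stand-ins for protocol $\mathtt{COIN}$ and the ($\NIZK$)-subprotocol, not concrete implementations.
\begin{verbatim}
def iqzk_prove_and_verify(x, w, kappa, flip_coin, nizk_prove, nizk_verify):
    """Sketch: assemble the reference string from kappa sequential coin-flips,
    then run the single-message NIZK subprotocol on top of it."""
    coins = []
    for _ in range(kappa):
        c = flip_coin()          # one run of protocol COIN; None models abort
        if c is None:
            return False         # a blocked coin-flip aborts the whole proof
        coins.append(c)
    omega = "".join(str(c) for c in coins)  # omega = coin_1 ... coin_kappa
    proof = nizk_prove(omega, x, w)         # single message from prover
    return nizk_verify(omega, x, proof)     # verifier's final decision
\end{verbatim}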
The final protocol $\mathtt{IQZK}$ is shown in Figure~\ref{fig:iqzk}. To prove that it is an interactive quantum zero-knowledge protocol, we first construct an intermediate protocol $\mathtt{IQZK^{\cxF{\kappa}}}$ (see Figure~\ref{fig:iqzkf}) that runs with the ideal functionality $\cxF{\kappa}$. Then we prove that $\mathtt{IQZK^{\cxF{\kappa}}}$ satisfies completeness, soundness and zero-knowledge according to Definitions~\ref{def:iqzk.complete}--\ref{def:iqzk.zero-knowledge}. To complete the proof, the calls to $\cxF{\kappa}$ are replaced with actual invocations of $\cxPi{{\sf A}}{{\sf B}}{\kappa}$, and we arrive at $\mathtt{IQZK}$. \begin{figure} \begin{myprotocol}{\sc protocol $\mathtt{IQZK^{\cxF{\kappa}}}$} \item[$(\mathtt{COIN})$] \item\label{step:coin} ${\sf A}$ and ${\sf B}$ invoke $\cxF{\kappa}$. If ${\sf A}$ aborts by sending $\bot$ as second input, ${\sf B}$ aborts the protocol. Otherwise, ${\sf A}$ and ${\sf B}$ set $\omega = h$. \item[$(\NIZK)$] \item ${\sf A}$ sends $\pi(x,\omega)$ to ${\sf B}$. ${\sf B}$ checks the proof and accepts or rejects accordingly. \end{myprotocol} \vspace{1ex} \caption{Intermediate Protocol for IQZK.} \label{fig:iqzkf} \end{figure} \begin{claim} \label{claim:zk.complete} Protocol $\mathtt{IQZK^{\cxF{\kappa}}}$ satisfies completeness, according to Definition~\ref{def:iqzk.complete}. \end{claim} \begin{proof} From the ideal functionality $\cxF{\kappa}$ it follows that $\omega$ is uniformly random. Then by Definition~\ref{def:nizk.complete} of any ($\NIZK$)-subprotocol, we know that, for $x \in {\cal L}_\kappa$, ${\sf B}$ accepts, except with negligible probability (in the length of $x$). Thus, completeness of $\mathtt{IQZK^{\cxF{\kappa}}}$ follows. \end{proof} \begin{claim} \label{claim:sound} Protocol $\mathtt{IQZK^{\cxF{\kappa}}}$ satisfies soundness, according to Definition~\ref{def:iqzk.sound}. \end{claim} \begin{proof} Assume that $x \notin {\cal L}_\kappa$. Any dishonest ${\sf A}'$ might stop $\mathtt{IQZK^{\cxF{\kappa}}}$ at any point during execution. For example, she can block the output in Step~(\ref{step:coin}.)~or she can refuse to send a proof $\pi$ in $(\NIZK)$. Furthermore, ${\sf A}'$ can use an invalid $\omega$ (or $x$) for $\pi$. In all of these cases, ${\sf B}$ will abort without even checking the proof. Therefore, ${\sf A}'$'s best strategy is to ``play the entire game'', i.e.\ to execute $\mathtt{IQZK^{\cxF{\kappa}}}$ without making obvious cheats. ${\sf A}'$ can only convince ${\sf B}$ in the $(\NIZK)$-subprotocol of a proof $\pi$ for any given (i.e.\ normally generated) $\omega$ with a probability that is negligible in the length of $x$ (see Definition~\ref{def:nizk.sound}). Therefore, the probability that ${\sf A}'$ can convince ${\sf B}$ in the full $\mathtt{IQZK^{\cxF{\kappa}}}$ in case of $x \notin {\cal L}_\kappa$ is also negligible and its soundness follows. \end{proof} \begin{claim} \label{claim:qzk} Protocol $\mathtt{IQZK^{\cxF{\kappa}}}$ is an interactive zero-knowledge proof, according to Definition~\ref{def:iqzk.zero-knowledge}. \end{claim} \begin{proof} We construct a simulator $\sf{\hat{S}}_{\IQZK^{\cxF{\kappa}}}$, interacting with dishonest ${\sf B}'$ and a simulator $\sf{\hat{S}}_{\NIZK}$. As given in Definition~\ref{def:nizk.zero-knowledge}, such a simulator generates, on input $x \in \cal L$, a random-looking $\omega$ together with a valid proof $\pi$ for $x$ (without knowing witness $w$).
$\sf{\hat{S}}_{\IQZK^{\cxF{\kappa}}}$, described in Figure~\ref{fig:simulationZKF}, receives a random string $\tilde{\omega}$ from $\sf{\hat{S}}_{\NIZK}$, which now replaces the coin-string $h$ produced by $\cxF{\kappa}$ in protocol $\mathtt{IQZK^{\cxF{\kappa}}}$. By assumption on $\sf{\hat{S}}_{\NIZK}$, this is quantum-computationally indistinguishable for ${\sf B}'$. Thus, the simulated proof $\pi(\omega,x)$ is indistinguishable from a real proof, which proves that $\mathtt{IQZK^{\cxF{\kappa}}}$ is zero-knowledge. \end{proof} \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Simulation $\sf{\hat{S}}_{\IQZK^{\cxF{\kappa}}}$:}\\[-4ex] \begin{enumerate} \item $\sf{\hat{S}}_{\IQZK^{\cxF{\kappa}}}$ gets input $x$ and invokes $\sf{\hat{S}}_{\NIZK}$ with $x$ to receive $\pi(\omega, x)$. \item It sets $h = \omega$. $\sf{\hat{S}}_{\IQZK^{\cxF{\kappa}}}$ sends $h$ to ${\sf B}'$. \item $\sf{\hat{S}}_{\IQZK^{\cxF{\kappa}}}$ sends $\pi(\omega, x)$ to ${\sf B}'$ and outputs whatever ${\sf B}'$ outputs. \end{enumerate} \vspace{-1ex} \end{framed} \vspace{-1.5ex} \caption{The Simulation of the Intermediate Protocol for IQZK.} \label{fig:simulationZKF} \end{figure} It would be natural to think that $\mathtt{IQZK}$ could be proven secure simply by showing that $\mathtt{IQZK^{\cxF{\kappa}}}$ implements some appropriate functionality and then use a composition theorem from Section~\ref{sec:security.definition}. Recall, however, that a zero-knowledge protocol---which is not necessarily a proof of knowledge---cannot be modeled by a functionality in a natural way. Instead, we prove the standard properties of a zero-knowledge proof system explicitly and therewith the following Theorem~\ref{thm:iqzk}. \begin{theorem}[Interactive Quantum Zero-Knowledge] \label{thm:iqzk} Protocol $\mathtt{IQZK}$ is an interactive proof system, satisfying completeness and soundness. Furthermore, for any quantum verifier ${\sf B}'$, there exists a simulator $\sf{\hat{S}}_{\IQZK}$ with output quantum-computationally indistinguishable from the real output, i.e., the protocol is quantum zero-knowledge. \end{theorem} \begin{figure} \begin{myprotocol}{\sc protocol $\mathtt{IQZK}$} \item[($\mathtt{COIN}$)] ${\sf A}$ and ${\sf B}$ run $\cxPi{{\sf A}}{{\sf B}}{\kappa}$ and set $\omega = h$.\medskip \item[$(\NIZK)$] ${\sf A}$ sends $\pi(\omega,x)$ to ${\sf B}$. ${\sf B}$ checks the proof and accepts or rejects accordingly. \end{myprotocol} \vspace{1ex} \caption{Interactive Quantum Zero-Knowledge.} \label{fig:iqzk} \end{figure} \begin{proof} From the analysis of protocol $\mathtt{COIN}$, its sequential composability, and the indistinguishability from the ideal functionality $\cxF{\kappa}$, it follows that if both players are honest, $\omega$ is a random common reference string of size $\kappa$ and the acceptance probability of the $(\NIZK)$-subprotocol as given previously holds. Completeness of $\mathtt{IQZK}$ follows. To show soundness, we again only consider the case where ${\sf A}'$ executes the entire protocol without making obvious cheats, since otherwise, ${\sf B}$ immediately aborts. Assume that ${\sf A}'$ could cheat in $\mathtt{IQZK}$, i.e., ${\sf B}$ would accept an invalid proof with non-negligible probability. Then we could combine ${\sf A}'$ with simulator $\hat{\sf A}'$ of protocol $\mathtt{COIN}$ (Figure~\ref{fig:simulationA}) to show that $\mathtt{IQZK^{\cxF{\kappa}}}$ was not sound.
This, however, is inconsistent with the previously given soundness argument in the proof of Claim~\ref{claim:sound}, and thus proves by contradiction that $\mathtt{IQZK}$ is sound. To further prove that the interactive proof system is also quantum zero-knowledge, we compose a simulator $\sf{\hat{S}}_{\IQZK}$ from simulator $\sf{\hat{S}}_{\IQZK^{\cxF{\kappa}}}$ (Figure~\ref{fig:simulationZKF}) and simulator $\hat{\sf B}'$ of protocol $\mathtt{COIN}$ (Figure~\ref{fig:simulationB}). In more detail, $\sf{\hat{S}}_{\IQZK}$ gets classical input $x$ as well as quantum input $W$ and $X$. It then receives a valid proof $\pi$ and a random string $\omega$ from $\sf{\hat{S}}_{\NIZK}$. The string $\omega$ is split into $coin_1 \ldots coin_\kappa$. For each $coin_i$, it will then invoke $\hat{\sf B}'$ to simulate one coin-flip execution with $\mathtt{coin} = coin_i$ as result. In other words, whenever $\hat{\sf B}'$ asks $\cF$ to output a bit (Step~(\ref{step:get-coin}.), Figure~\ref{fig:simulationB}), it instead receives this $coin_i$. We see that the transcript of the simulation is indistinguishable from the transcript of the protocol $\mathtt{IQZK}$ for any quantum-computationally bounded ${\sf B}'$. This concludes the proof. \end{proof} We conclude this section with the corollary, immediately following from the previous proof, that quantum-secure commitments, as defined in Section~\ref{sec:bit.commit}, imply interactive quantum zero-knowledge. \begin{corollary} If there exist quantum-secure commitment schemes, then we can obtain interactive quantum zero-knowledge against any quantum adversary ${\sf P}' \in \dPlayer_{\mathrm{poly}}$ without any set-up assumptions. \end{corollary} \section{Zero-Knowledge Proof of Knowledge} \label{sec:coin.zkpk} \index{zero-knowledge!proofs of knowledge} A zero-knowledge proof of knowledge is a special case of zero-knowledge proof systems, introduced in Section~\ref{sec:primitives.zk}. Here, we propose a quantum-secure construction based on witness encoding, which we define in the context of simulation. \subsection{Motivation and Related Work} \label{sec:zkpk.motivation} Recall that the purpose of a zero-knowledge proof of knowledge is to verify in classical poly-time in the length of the instance whether $w$ is a valid witness for instance $x$ in relation $\Rel$, i.e.~$(x,w) \in \Rel$. We call $\Rel$ an \ensuremath{\mathcal{NP}}-relation, as the language ${\cal L(R)} = \set{ x \in \{0,1\}^* \vert \ \exists \, w \ \text{s.t.} \ (x,w) \in \Rel }$ is seen to be an \ensuremath{\mathcal{NP}}-language. Interestingly, such a zero-knowledge proof of knowledge, in contrast to zero-knowledge proofs, can be modeled by an ideal functionality. Our protocol is based on a witness encoding scheme, providing a certain degree of extractability and simulatability, defined in Section~\ref{sec:zkpk.witness.encoding}. We want to stress that the extractability requirement resembles special soundness in proof systems, which are secure in the classical world and typically come along with a knowledge error negligible in the length of the challenge. We have to reformulate this aspect in stronger terms in the quantum world, since special soundness seems to be impossible to use in the quantum realm, due to the restrictions on rewinding. However, we obtain a similar result also with knowledge error negligible in the length of the challenge.
Furthermore, our construction requires a mixed bit commitment (see Section~\ref{sec:mixed.commit}) and two calls to the coin-flip protocol $\pi^{(\texttt{force},\texttt{force})}$, described in Figure~\ref{fig:force.force}, Chapter~\ref{chap:framework}, which is poly-time simulatable for both sides even against quantum adversaries. Since this protocol only assumes mixed commitments as well, we get the corollary that if there exists a mixed commitment scheme, then we can construct a classical zero-knowledge proof of knowledge against any poly-sized quantum adversary. This is of particular interest, as the problems of rewinding in the quantum realm complicate implementing proofs of knowledge from scratch. As already mentioned in Chapter~\ref{chap:coin.flip}, the unpublished approach of~\cite{Smith09} suggests another solution for this concept. Instead of composing the coin-string from single coins, they use a string commitment with special opening and compose the subsequent zero-knowledge proof. The coin-string is used as key to encode the witness and the second zero-knowledge proof is given to prove it. \subsection{Simulatable Witness Encodings of \ensuremath{\mathcal{NP}}} \label{sec:zkpk.witness.encoding} \index{simulatable witness encoding} We first specify a simulatable encoding scheme for binary relation $\Rel \subset \{0,1\}^* \times \{0,1\}^*$, which consists of five classical poly-time algorithms $(E,D,S,J,\hat{E})$. Then, we define completeness, extractability and simulatability for such a scheme in terms of the requirements of our zero-knowledge proof of knowledge. Let $E: \Rel \times \{0,1\}^m \rightarrow \{0,1\}^n$ denote an \defterm{encoder}, such that for each $(x,w) \in \Rel$, the $n$-bit output $e \leftarrow E(x,w,r')$ is a random encoding of $w$, with randomness $r' \in \{0,1\}^m$ and polynomials $m(\vert x \vert)$ and $n(\vert x \vert)$. The corresponding \defterm{decoder} $D: \{0,1\}^* \times \{0,1\}^n \rightarrow \{0,1\}^*$ takes as input an instance $x \in \{0,1\}^*$ and an encoding $e \in \{0,1\}^n$ and outputs $w \leftarrow D(x,e)$ with $w \in \{0,1\}^*$. Next, let $S$ denote a \defterm{selector} with input $s \in \{0,1\}^\sigma$ (with polynomial $\sigma(\vert x \vert)$) specifying a challenge, and output $S(s)$ defining a poly-sized subset of $\set{1,\ldots,n}$ corresponding to challenge $s$. We will use $S(s)$ to select which bits of an encoding $e$ to reveal to the verifier. For simplicity, we use $e_s$ to denote the collection of bits $e|_{S(s)}$. We denote by $J$ the \defterm{judgment} that checks a potential encoding $e$ by inspecting only bits $e_s$. In more detail, $J$ takes as input instance $x \in \{0,1\}^*$, challenge $s \in \{0,1\}^\sigma$ and the $\vert S(s) \vert$ bits $e_s$, and outputs a judgment $j \leftarrow J(x,s,e_s)$ with $j \in \set{\texttt{abort}, \texttt{success}}$. Finally, the \defterm{simulator} is called $\hat{E}$. It takes as input instance $x \in \{0,1\}^*$ and challenge $s \in \{0,1\}^\sigma$ and outputs a random collection of bits $t|_{S(s)} \leftarrow \hat{E}(x,s)$. Again for simplicity, we let $t|_{S(s)} = t_s$. Then, if this set has the same distribution as bits of an encoding $e$ in positions $S(s)$, the bits needed for the judgment to check an encoding $e$ can be simulated given just instance $x$ (see Definition~\ref{def:simulate}).
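To fix ideas, the following Python sketch renders the five-algorithm interface $(E,D,S,J,\hat{E})$ as an abstract class; the method names and type annotations are illustrative choices of ours and not part of the formal definitions given next.
\begin{verbatim}
from abc import ABC, abstractmethod
from typing import List

class SimulatableWitnessEncoding(ABC):
    """Sketch of the five classical poly-time algorithms (E, D, S, J, E-hat)."""

    @abstractmethod
    def encode(self, x: bytes, w: bytes, r: bytes) -> List[int]:
        """E: produce a random n-bit encoding e of witness w for instance x."""

    @abstractmethod
    def decode(self, x: bytes, e: List[int]) -> bytes:
        """D: recover a witness candidate from a full encoding e."""

    @abstractmethod
    def select(self, s: List[int]) -> List[int]:
        """S: map a sigma-bit challenge s to the positions of e to reveal."""

    @abstractmethod
    def judge(self, x: bytes, s: List[int], e_s: List[int]) -> bool:
        """J: accept or reject, inspecting only the revealed bits e|_{S(s)}."""

    @abstractmethod
    def simulate(self, x: bytes, s: List[int]) -> List[int]:
        """E-hat: sample bits distributed like e|_{S(s)}, given only x and s."""
\end{verbatim}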
\index{simulatable witness encoding!completeness} \begin{definition}[Completeness] \label{def:zkpk.complete} If an encoding $e\leftarrow~E(x,w,r)$ is generated correctly, then $\texttt{success} \leftarrow J(x,s,e_s)$ for all $s \in \{0,1\}^\sigma$. \end{definition} We will call an encoding $e$ \defterm{admissible} for $x$, if there \emph{exist} two distinct challenges $s,s' \in \{0,1\}^\sigma$ for which $\texttt{success} \leftarrow J(x,s,e_s)$ and $\texttt{success} \leftarrow J(x,s',e_{s'})$. \index{simulatable witness encoding!extractability} \index{simulatable witness encoding!admissible} \begin{definition}[Extractability] \label{def:extract} If an encoding $e$ is admissible for $x$, then $\big( x,D(x,e) \big)\in \Rel$. \end{definition} We want to stress that extractability is defined similarly to the special soundness property of a classical $\Sigma$-protocol, which allows one to extract $w$ from two accepting conversations with different challenges. Such a requirement would generally be inapplicable in the quantum setting, as the usual rewinding technique is problematic; in particular, in the context here, we cannot measure two accepting conversations during rewinding in the quantum world. Therefore, we define the stronger requirement that if there \emph{exist} two distinct answerable challenges for one encoding $e$, then $w$ can be extracted given only $e$. This condition works nicely in the quantum world, since we can obtain $e$ without rewinding, as we will show in our quantum-secure proof construction. \index{simulatable witness encoding!simulatability} \begin{definition}[Simulatability] \label{def:simulate} For all $(x,w) \in \Rel$ and all $s \in \{0,1\}^\sigma$, the distribution of $e \leftarrow E(x,w,r')$ restricted to positions $S(s)$ is identical to the distribution of $t_s \leftarrow \hat{E}(x,s)$, i.e., $$ {\cal D}(e_s) = {\cal D}(t_s) \, . $$ \end{definition} There are several commit\&open proofs for $\ensuremath{\mathcal{NP}}$. One can, for instance, start from the commit\&open protocol for circuit satisfiability, where the bits of the randomized circuit committed to by the sender are easily seen to form a simulatable encoding of a witness, namely a consistent evaluation of the circuit to output $1$. The challenge in the protocol is a single bit, and the prover replies by showing either the bits in positions $S'(0)$ or the bits in positions $S'(1)$. The details can be found in~\cite{BCC88}. This gives us a simulatable witness encoding for any $\ensuremath{\mathcal{NP}}$-relation $\Rel$ with $\sigma = 1$, using a reduction from $\ensuremath{\mathcal{NP}}$ to circuit satisfiability. By repeating it $\sigma$ times in parallel, we get a simulatable witness encoding for any $\sigma$, as sketched below. For $i = 1, \ldots, \sigma$, compute an encoding $e^i$ of $w$ and let $e = (e^1, \ldots, e^\sigma)$. Then for $s \in \{0,1\}^\sigma$, let $S(s)$ specify that the bits $S'(s_i)$ should be shown in $e^i$ and check these bits. Note, in particular, that if two distinct $s$ and $s'$ pass this judgment, then there exists $i$ such that $s_i \ne s_i'$, so $e^i$ passes the judgment for both $s_i = 0$ and $s_i = 1$, which by the properties of the protocol for circuit satisfiability allows one to compute a witness $w$ for $x$ from $e^i$. One can find $w$ from $e$ simply by trying to decode each $e^j$ for $j = 1, \ldots, \sigma$ and checking whether $(x,w_j) \in \Rel$.
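The following Python sketch summarizes this $\sigma$-fold parallel repetition, assuming a hypothetical base scheme \texttt{base} with one-bit challenges that exposes the interface sketched above; the helper names and the representation of challenges as bit lists are our own illustrative choices.
\begin{verbatim}
from typing import Optional

def parallel_encode(base, x, w, randomness, sigma):
    # sigma independent encodings e^1,...,e^sigma of the same witness w
    return [base.encode(x, w, randomness[i]) for i in range(sigma)]

def parallel_judge(base, x, s, revealed):
    # bit s[i] of the challenge selects which positions S'(s[i]) of e^i
    # are checked; all sigma sub-judgments must accept
    return all(base.judge(x, [s[i]], revealed[i]) for i in range(len(s)))

def parallel_decode(base, relation, x, encodings) -> Optional[bytes]:
    # try to decode each e^j and return the first candidate that is a witness
    for e_j in encodings:
        w = base.decode(x, e_j)
        if relation(x, w):
            return w
    return None
\end{verbatim}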
\subsection{The Protocol} \label{sec:zkpk.protocol} We now construct a quantum-secure zero-knowledge proof of knowledge from prover ${\sf A}$ to verifier ${\sf B}$. Recall that we are interested in the \ensuremath{\mathcal{NP}}-language ${\cal L(R)} = \set{ x \in \{0,1\}^* \, \vert \, \exists \, w \ \text{s.t.} \ (x,w) \in \Rel }$, where ${\sf A}$ has input $x$ and $w$, and both ${\sf A}$ and ${\sf B}$ receive positive or negative judgment of the validity of the proof as output. We assume in the following that on input $(x,w) \notin \Rel$, honest ${\sf A}$ aborts. The final protocol $\mathtt{ZKPK(\Rel)}$ is described in Figure~\ref{fig:zkpk}. As already mentioned, unlike zero-knowledge proofs, proofs of knowledge can be modeled by an ideal functionality, given as $\F_{\ZKPK}$ in Figure~\ref{fig:zkpkF}. $\F_{\ZKPK}$ can be thought of as a channel which only allows sending messages in the language $\cal L(R)$. It models \emph{zero-knowledge}, as it only leaks instance $x$ and judgment $j$ but not witness $w$. Furthermore, it models a \emph{proof of knowledge}, since Alice has to know and input a valid witness $w$ to obtain output $j = \texttt{success}$. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Functionality $\F_{\ZKPK}$:}\\[-4ex] \begin{enumerate} \item On input $(x,w)$ from Alice, $\F_{\ZKPK}$ sets $j = \texttt{success}$ if $(x,w) \in \Rel$. Otherwise, it sets $j = \texttt{abort}$. \item $\F_{\ZKPK}$ outputs $(x,j)$ to Alice and Bob. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{The Ideal Functionality for a Zero-Knowledge Proof of Knowledge.}\label{fig:zkpkF} \end{figure} Protocol $\mathtt{ZKPK(\Rel)}$ is based on our fully simulatable coin-flip protocol $\pi^{(\texttt{force},\texttt{force})}$, which we analyze here in the hybrid model by invoking the ideal functionality of sequential coin-flipping twice (but with different output lengths). Note that in the hybrid model, a simulator can enforce a particular outcome also when invoking the ideal coin-flipping functionality. We can then use Definition~\ref{def:force} to replace the ideal functionality by the actual protocol $\pi^{(\texttt{force},\texttt{force})}$. One call to the ideal functionality $\cxF{\kappa}$ with output length $\kappa$ is required to instantiate a mixed bit commitment scheme $\mathtt{COMMIT}$ as discussed in Section~\ref{sec:mixed.commit.trapdoor.opening}. Recall that it is therewith possible to sample an unconditionally binding key $pk \in \{0,1\}^\kappa$ along with an extraction key $sk$. Since such keys are quantum-computationally indistinguishable from random values in $\{0,1\}^\kappa$, the latter serve as unconditionally hiding instantiations of $\mathtt{COMMIT}$. The second call to the functionality $\cxF{\sigma}$ produces $\sigma$-bit challenges for a simulatable witness encoding scheme with $(E,D,S,J,\hat{E})$ as specified in the previous Section~\ref{sec:zkpk.witness.encoding}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Protocol $\mathtt{ZKPK(\Rel)}$ :}\\[-4ex] \begin{enumerate} \item ${\sf A}$ and ${\sf B}$ invoke $\cxF{\kappa}$ to get a commitment key $pk \in \{0,1\}^\kappa$. \item ${\sf A}$ samples $e \leftarrow E(x,w,r')$ with randomness $r' \in \{0,1\}^m$ and commits position-wise to all $e_i$ for $i = 1,\ldots,n$, by computing random commitments $E_i = \Commitk{e_i}{r_i}{pk}$ with randomness $r \in \{0,1\}^n$. She sends $x$ and all $E_i$ to ${\sf B}$.
\item ${\sf A}$ and ${\sf B}$ invoke $\cxF{\sigma}$ to flip a challenge $s \in_R \{0,1\}^\sigma$. \item ${\sf A}$ opens her commitments to all $e_s$. \item If any opening is incorrect, ${\sf B}$ outputs $\texttt{abort}$. Otherwise, he outputs $j \leftarrow J(x,s,e_s)$ with $j \in \set{\texttt{success},\texttt{abort}}$. \end{enumerate} \vspace{-1ex} \end{framed} \vspace{-1.5ex} \caption{Zero-Knowledge Proof of Knowledge.} \label{fig:zkpk} \end{figure} \begin{theorem}[Zero-Knowledge Proof of Knowledge]\label{thm:zkpk} For any simulatable witness encoding scheme $(E,D,S,J,\hat{E})$, satisfying completeness, extractability, and simulatability according to Definitions~\ref{def:zkpk.complete}--\ref{def:simulate}, and for negligible knowledge error $2^{-\sigma}$, protocol $\mathtt{ZKPK(\Rel)}$ is a zero-knowledge proof of knowledge and securely implements $\F_{\ZKPK}$. \end{theorem} Completeness is obvious. An honest party ${\sf A}$, following the protocol with $(x,w) \in \Rel$ and any valid encoding $e$, will be able to open all commitments in the positions specified by any challenge $s$. Honest Bob then outputs $J(x,s,e_s) = \texttt{success}$.\\ \begin{proof}[ (\emphbf{Security against dishonest Alice})] To prove security in case of corrupted ${\sf A}'$, we construct a simulator $\hat{\sf A}'$ that simulates a run of the actual protocol with ${\sf A}'$ and $\F_{\ZKPK}$. The proof is then twofold. First, we show indistinguishability between the distributions of simulation and protocol. Second, we verify that the extractability property of the underlying witness encoding scheme (see Definition~\ref{def:extract}) implies a negligible knowledge error. Note that if ${\sf A}'$ sends $\texttt{abort}$ at any point during the protocol, $\hat{\sf A}'$ sends some input $(x',w') \notin \Rel$ to $\F_{\ZKPK}$ to obtain output $(x,j)$ with $j = \texttt{abort}$, and the simulation halts. Otherwise, the simulation proceeds as shown in Figure~\ref{fig:simulation.zkpk.dhA}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Simulation $\hat{\sf A}'$ for $\mathtt{ZKPK(\Rel)}$ :}\\[-4ex] \begin{enumerate} \item $\hat{\sf A}'$ samples a random key $pk$ along with the extraction key $sk$. Then it enforces $pk$ as output from $\cxF{\kappa}$. \item When $\hat{\sf A}'$ receives $x$ and $(E_1,\ldots,E_n)$ from ${\sf A}'$, it extracts $e = (\xtr{E_1}{sk},\ldots,\xtr{E_n}{sk})$. \item $\hat{\sf A}'$ completes the simulation by following the protocol honestly. If any opening of ${\sf A}'$ is incorrect, $\hat{\sf A}'$ aborts. Otherwise, $\hat{\sf A}'$ inputs $\big( x,D(x,e) \big)$ to $\F_{\ZKPK}$ and receives $(x,j)$ back. $\hat{\sf A}'$ outputs the final state of ${\sf A}'$ as output in the simulation. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Simulation against dishonest Alice.} \label{fig:simulation.zkpk.dhA} \end{figure} Note that the only difference between the real protocol and the simulation is that $\hat{\sf A}'$ uses a random public key $pk$ sampled along with an extraction key $sk$, instead of a uniformly random $pk \in \{0,1\}^\kappa$. It then enforces $\cxF{\kappa}$ to hit $pk$. However, by assumption on the commitment keys and by the properties of the ideal coin-flipping functionality, the transcripts of simulation and protocol remain quantum-computationally indistinguishable under these changes. Next, we analyze the output in more detail.
It is clear that whenever honest ${\sf B}$ would output $\texttt{abort}$ in the actual protocol, $\hat{\sf A}'$ also aborts, namely, if ${\sf A}'$ deviates in the last steps of protocol and simulation, respectively. Furthermore, $\hat{\sf A}'$ accepts if and only if $(x,D(x,e)) \in \Rel$, or, in other words, if the judgment of the functionality is positive, denoted by $j_\mathcal{F} = \texttt{success}$. It is therefore only left to prove that the case of $j_\mathcal{F} = \texttt{abort}$ but $j_J = \texttt{success}$ occurs with negligible probability, where the latter denotes the judgment of algorithm $J(x,s,e_s)$ as in the protocol. In that case, we have $(x,D(x,e)) \notin \Rel$. This means that $w$ is not extractable from $D(x,e)$, which in turn implies that $(\xtr{E_1}{sk},\ldots,\xtr{E_n}{sk}) = e$ is not admissible. Thus, there are no two distinct challenges $s$ and $s'$ for which ${\sf A}'$ could correctly open her commitments to $e$. Hence, there exists at most one challenge $s$ which ${\sf A}'$ can answer. Since $s \in \{0,1\}^\sigma$ is produced uniformly at random, we obtain an acceptance probability of at most $2^{-\sigma}$. Thus, we conclude the proof with negligible knowledge error, as desired.\\ \end{proof} \begin{proof}[ (\emphbf{Security against dishonest Bob})] To prove security in case of corrupted ${\sf B}'$, we construct simulator $\hat{\sf B}'$ as shown in Figure~\ref{fig:simulation.zkpk.dhB}. Our aim is to verify that this simulation is quantum-computationally indistinguishable from the real protocol. The key aspect will be the simulatability guarantee of the underlying witness encoding scheme, according to Definition~\ref{def:simulate}. \begin{figure} \begin{framed} \noindent\hspace{-1.5ex} {\sc Simulation $\hat{\sf B}'$ for $\mathtt{ZKPK(\Rel)}$ :}\\[-4ex] \begin{enumerate} \item $\hat{\sf B}'$ invokes $\cxF{\kappa}$ to receive a uniformly random $pk$. \item $\hat{\sf B}'$ samples a uniformly random challenge $s \in \{0,1\}^\sigma$ and computes $t_s \leftarrow \hat{E}(x,s)$. $\hat{\sf B}'$ then computes commitments $E_i$ as follows: For all $i \in S(s)$, it commits to the previously sampled $t_s$ via $E_i = \Commitk{t_i}{r_i}{pk}$. For all other positions $i \in \bar{S}$ (where $\bar{S} = \set{1,\ldots,n} \setminus S(s)$), it commits to randomly chosen values $t'_i \in_R \{0,1\}$, i.e.~$E_i = \Commitk{t'_i}{r_i}{pk}$. It sends $x$ and all $E_i$ to ${\sf B}'$. \item $\hat{\sf B}'$ forces $\cxF{\sigma}$ to hit $s$. \item $\hat{\sf B}'$ opens $E_i$ to $t_i$ for all $i \in S(s)$, i.e.\ to all $t_s$. \item $\hat{\sf B}'$ outputs whatever ${\sf B}'$ outputs. \end{enumerate} \vspace{-1.5ex} \end{framed} \vspace{-1.5ex} \caption{Simulation against dishonest Bob.} \label{fig:simulation.zkpk.dhB} \end{figure} The proof proceeds via a hybrid argument. Let $\D{0}$ be the distribution of the simulation as described in Figure~\ref{fig:simulation.zkpk.dhB}. Let $\D{1}$ be the distribution obtained from the simulation but with the following change: We inspect $\F_{\ZKPK}$ to get a valid witness $w$ for instance $x$, and let $e \leftarrow E(x,w,r')$ be the corresponding encoding. Note that this is possible as a thought experiment for any adjacent distribution in a hybrid argument. From $e$ we then use bits $e_s$ for the same $S(s)$ as previously, instead of bits $t_s$ sampled by $\hat{E}(x,s)$. All other steps are simulated as before.
By the simulatability of the encoding scheme (Definition~\ref{def:simulate}), it holds that the bits $t_s$ in $\D{0}$ and the bits $e_s$ in $\D{1}$ have the same distribution. Thus, we obtain $\D{0} = \D{1}$. We further change the simulation in that we compute the bits in all positions $i \in \bar{S}$ by $e_i$ of the encoding $e$ defined in the previous step. Again, all other steps of the simulation remain unchanged. Let $\D{2}$ denote the new distribution. The only difference now is that for $i \in \bar{S}$, the commitments $E_i$ are to the bits $e_i$ of a valid $e$ and not to uniformly random bits $t'_i$. This, however, is quantum-computationally indistinguishable for ${\sf B}'$ with $pk \in_R \{0,1\}^\kappa$, as $\mathtt{COMMIT}$ is quantum-computationally hiding towards ${\sf B}'$. Note that $pk$ is guaranteed to be random by an honest call to $\cxF{\kappa}$ and recall that we do not have to open the commitments in these positions. Hence, we get that $\D{1} \approxq \D{2}$.\\ Note that after the two changes, leading to distributions $\D{1}$ and $\D{2}$, the commitment step and its opening now proceed as in the actual protocol, namely, we commit to the bits of $e \leftarrow E(x,w,r')$ and open the subset corresponding to $S(s)$. The remaining difference to the real protocol is the enforcement of challenge $s$, whereas $s$ is chosen randomly in the protocol. Now, let $\D{3}$ be the distribution of the modified simulation, in which we implement this additional change of invoking $\cxF{\sigma}$ honestly and then open honestly to the resulting $s$. Note that both processes, i.e., first choosing a random $s$ and then enforcing it from $\cxF{\sigma}$, or invoking $\cxF{\sigma}$ honestly and receiving a random $s$, result in a uniformly random distribution on the output of $\cxF{\sigma}$. Thus, we obtain $\D{2} = \D{3}$. By transitivity, we conclude that $\D{0} \approxq \D{3}$, and therewith, that the simulation is quantum-computationally indistinguishable from the actual protocol. \end{proof}\\ We conclude this section with the corollary that follows straightforwardly from the above construction and proof; it states that mixed commitments, as defined in Section~\ref{sec:mixed.commit.trapdoor.opening}, imply classical zero-knowledge proofs of knowledge against any poly-sized quantum adversary. \begin{corollary} If there exist mixed commitment schemes, then we can construct a classical zero-knowledge proof of knowledge against any quantum adversary ${\sf P}' \in \dPlayer_{\mathrm{poly}}$ without any set-up assumptions. \end{corollary} \section{Generation of Commitment Keys} \label{sec:key.generation.coin} Here, we briefly describe the initial generation of a common reference string for the proposed lattice-based instantiation of the generic compiler, introduced in Chapter~\ref{chap:hybrid.security}, according to the specific requirements of its underlying mixed commitment scheme, discussed in Section~\ref{sec:mixed.commit}. \subsection{Motivation} \label{sec:generation.motivation} The compiler is constructed in the CRS-model to achieve high efficiency. We now aim at circumventing the CRS-assumption, which opens up the possibility of implementing \emph{complete} protocols in the quantum world without any set-up assumptions. More specifically, we integrate the generation of a common reference string from scratch based on our quantum-secure coin-flipping, which will then be used during compilation as commitment key.
We want to stress, however, that implementing the entire process comes at the cost of a non-constant round construction, added to otherwise very efficient protocols under the CRS-assumption. \subsection{The Generation} \label{sec:generation.generation} \index{commitment!key generation} Recall that the argument for computational security in Section~\ref{sec:compiler} proceeds along the following lines. After the preparation phase, ${\sf B}$ commits to all his measurement bases and outcomes. The keyed dual-mode commitment scheme has the special property that the key can be generated by one of two possible key-generation algorithms ${\cal G}_{\tt H}$ or ${\cal G}_{\tt B}$. Depending on the key in use, the scheme provides one of two flavors of security: with key ${\tt pkH}$ generated by ${\cal G}_{\tt H}$, the commitment scheme is unconditionally hiding, while with ${\tt pkB}$ produced by ${\cal G}_{\tt B}$, it is unconditionally binding. Furthermore, the commitment is secure against a quantum adversary and it holds that ${\tt pkH} \approxq {\tt pkB}$. In the real-world protocol, ${\sf B}$ uses the unconditionally hiding key ${\tt pkH}$ to maintain unconditional security against any unbounded ${\sf A}'$. To argue security against a computationally bounded ${\sf B}'$, an information-theoretic argument involving the simulator $\hat{\sf B}'$ is given (in the proof of Theorem~\ref{thm:compiler}) to prove that ${\sf B}'$ cannot cheat with the unconditionally binding key ${\tt pkB}$. Security in real life then follows from the quantum-computational indistinguishability of ${\tt pkH}$ and ${\tt pkB}$. We want to repeat that we can even weaken the assumption on the hiding key in that we do in fact not require an actual unconditionally hiding key, provided that the public-key encryption scheme guarantees that a regular public key is quantum-computationally indistinguishable from a uniformly random string. As discussed in Section~\ref{sec:mixed.commit}, the lattice-based crypto-system of Regev~\cite{Regev05}, which is considered to withstand quantum attacks, is a good candidate for constructing such a dual-mode commitment scheme. The public key of a regular key pair can be used as the unconditionally binding key ${\tt pkB}'$ in our commitment scheme for the ideal-world simulation, and for the real protocol, an unconditionally hiding commitment key ${\tt pkH}'$ can simply be constructed by uniformly choosing numbers in the same domain. The idea is now the following. Let $k$ denote the length of a regular key ${\tt pkB}'$. We add (at least) $k$ executions of our protocol $\mathtt{COIN}$ as a first step to the compiler-construction to generate a uniformly random sequence $coin_1 \ldots coin_k$. These $k$ random bits produce a ${\tt pkH}'$ as sampled by $\mathcal{G}_\mathtt{H}$, except with negligible probability. Hence, in the real world, Bob can use key $coin_1 \ldots coin_k = {\tt pkH}'$ for committing with $c_i = \commitk{(\hat{\theta}_i,\hat{x}_i)}{r_i}{{\tt pkH}'}$ on all positions $i$. Since an ideal-world adversary $\hat{\sf B}'$ is free to choose any key, it can generate $({\tt pkB}', {\tt sk}')$, i.e., a regular public key together with a secret key according to Regev's crypto-system. For the security proof, write ${\tt pkB}' = coin_1 \ldots coin_k$. In the simulation, $\hat{\sf B}'_{\tt compile}$ (as described in the proof of Theorem~\ref{thm:compiler}) first invokes $\hat{\sf B}'_{\tt coin}$ (Figure~\ref{fig:simulationB}) for each $coin_j$ to simulate one coin-flip with $coin_j$ as result.
Whenever $\hat{\sf B}'_{\tt coin}$ asks $\cF$ to output a bit, it instead receives this $coin_j$. Then $\hat{\sf B}'_{\tt compile}$ has the possibility to decrypt dishonest ${\sf B}'$'s commitments $c_i = \commitk{(\hat{\theta}_i,\hat{x}_i)}{r_i}{{\tt pkB}'}$ during simulation, which binds ${\sf B}'$ unconditionally to his committed measurement bases and outcomes. Finally, since we proved in the analysis of protocol $\mathtt{COIN}$ that ${\tt pkH}'$ is a uniformly random string, Regev's proof of semantic security applies, namely that a random public key, chosen independently from a secret key, is indistinguishable from a regular key and that encryptions under such a key carry essentially no information about the message. Thus, we obtain ${\tt pkH}' \approxq {\tt pkB}'$ and quantum-computational security in real life follows.
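To make the key-generation step concrete, the following is a minimal Python sketch of how the $k$ coin-flips are assembled into a hiding key ${\tt pkH}'$ and then used for committing. It is an illustration only, not an implementation used in this work: \texttt{flip\_coin} stands in for one execution of protocol $\mathtt{COIN}$, and \texttt{commit} is a hash-based placeholder for $\mathtt{COMMIT}$ (a real instantiation would use the Regev-based dual-mode scheme).
\begin{verbatim}
import hashlib
import secrets

def flip_coin() -> int:
    # Stand-in for one run of protocol COIN; here simply a fair random bit.
    return secrets.randbits(1)

def generate_hiding_key(k: int) -> str:
    # Concatenate k coin-flips into a k-bit key pkH'. A uniformly random
    # string is, except with negligible probability, distributed like a
    # key sampled by G_H.
    return "".join(str(flip_coin()) for _ in range(k))

def commit(message: bytes, randomness: bytes, pk: str) -> bytes:
    # Toy placeholder for COMMIT under key pk.
    return hashlib.sha256(pk.encode() + randomness + message).digest()

# Bob commits to his basis/outcome pairs under the jointly generated key.
pkH = generate_hiding_key(k=256)
r = secrets.token_bytes(32)
c = commit(b"theta_1,x_1", r, pkH)
\end{verbatim}
The point of the construction is visible here: the honest key is just the public transcript of the coin-flips, whereas a simulator that can force the coin-flip outcomes may substitute a regular key ${\tt pkB}'$ for which it knows the extraction key.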
\subsection{Settings of Pre-training}\label{sec:4_1} \noindent \textbf{Datasets.} We pre-trained our model on three public datasets: \textbf{1)} HowTo100M~\cite{miech2019howto100m}. It consists of more than 1.2M videos accompanied by ASR-generated speech transcriptions. The provided transcriptions are used to create video-sentence pairs split at each timestamp. \textbf{2)} WebVid-2M~\cite{bain2021frozen}. It contains about 2.5M well-aligned web video-text pairs. \textbf{3)} Google Conceptual Captions~\cite{sharma2018conceptual}. It contains 3.3M image and description pairs harvested from the web. \noindent \textbf{Encoders.} Following \cite{bain2021frozen,wang2021object,yan2021video}, we adopted ViT-B/16~\cite{dosovitskiy2020image} with space-time attention~\cite{bertasius2021space} as the video encoder. The spatial attention weights in the transformer were initialized with ImageNet-21k pre-trained weights while the temporal attention weights were set to zero. We chose a lightweight DistilBERT~\cite{sanh2019distilbert} as the language encoder. Following~\cite{bain2021frozen,patrick2020support,tang2021decembert,amrani2020noise}, the language encoder was initialized with the weights pre-trained on English Wikipedia and the Toronto Book Corpus. \noindent \textbf{Implementation Details.} For the video in each video-sentence pair, we sampled 8 clips of 16 frames equidistantly and fed them to the video encoder to obtain clip-level features. All frames were resized to $224 \times 224$. For downstream transfer, we extracted video features with the well-trained model in a dense manner, \textit{i}.\textit{e}., every 16 consecutive frames were grouped to compute one clip feature. Experiments were conducted on 64 V100 GPUs with a batch size of 256 and lasted for 200 epochs. We used Adam~\cite{loshchilov2017decoupled} with the initial learning rate $10^{-4}$ as the optimizer. The learning rate was decayed by a factor of $0.1$ at the $100^{th}$ and $160^{th}$ epochs. Random flipping, random cropping, and color jittering were used for video data augmentation. The loss balance factors $\lambda_{c}$, $\lambda_{f}$, and $\lambda_{t}$ were set to 0.5, 1, 1, respectively. The temperature factor $\tau$ used in contrastive learning was set to 0.07 following \cite{wu2018unsupervised,radford2021learning} and $K$ in Eq.\eqref{equ:2} was set to 3. Features in all three contrastive losses were $\ell_{2}$-normalized before computation. \subsection{Transfer Results on Action Step Localization} \label{sec:4_3} \noindent \textbf{Settings.} In action step localization, each video belongs to a task and is annotated with multiple action steps described with short natural language phrases. The goal is to align each frame with the correct step given in text form. Following \cite{miech2019howto100m,zhu2020actbert,luo2020univl,yang2021taco}, we take \cite{zhukov2019cross} as the downstream localization method. Specifically, we compute the similarity between each frame and the action step descriptions in feature space to find the optimal frame-wise order of action steps for a video; a sketch of this alignment is given below. \input{tables_figs/tabSOTAstepLocActSeg} \noindent \textbf{Datasets and Metrics.} We experiment on the instructional video dataset CrossTask~\cite{zhukov2019cross}, which includes 83 tasks and 4.7K videos. Each task is described with an ordered list of steps with manual natural language descriptions. We perform the same evaluation protocol as in \cite{zhukov2019cross} by reporting the average recall (CTR).
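The following Python sketch illustrates our reading of this downstream alignment (it is not the released evaluation code of \cite{zhukov2019cross}): cosine similarities between frame and step features are computed, and one frame is assigned to each step by dynamic programming under the constraint that the predicted steps follow the given order.
\begin{verbatim}
import numpy as np

def align_steps(frame_feats, step_feats):
    # frame_feats: (T, D) frame features; step_feats: (K, D) ordered steps.
    # Returns one frame index per step, monotonically increasing.
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    s = step_feats / np.linalg.norm(step_feats, axis=1, keepdims=True)
    sim = f @ s.T                       # (T, K) cosine similarities
    T, K = sim.shape
    dp = np.full((T, K), -np.inf)       # best score with step k at frame t
    dp[:, 0] = sim[:, 0]
    back = np.zeros((T, K), dtype=int)
    for k in range(1, K):
        for t in range(k, T):
            prev = dp[:t, k - 1]        # step k-1 must occur strictly earlier
            j = int(np.argmax(prev))
            dp[t, k] = prev[j] + sim[t, k]
            back[t, k] = j
    t = int(np.argmax(dp[:, K - 1]))    # backtrack from the last step
    path = [t]
    for k in range(K - 1, 0, -1):
        t = back[t, k]
        path.append(t)
    return path[::-1]
\end{verbatim}
Recall is then the fraction of steps whose predicted frame falls inside the ground-truth temporal window of that step.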
\noindent\textbf{Results.} Table~\ref{tab:sotaStepLocActSeg} reports the action step localization performance on the CrossTask dataset. Our LocVTP pre-trained feature achieves state-of-the-art performance with CTR reaching 51.7\%, surpassing the previous method VideoCLIP by 4.4\%. Our competitive performance demonstrates that LocVTP features can effectively perceive detailed action steps. \subsection{Transfer Results on Action Segmentation} \label{sec:4_4} \noindent\textbf{Settings.} We assess our LocVTP on action segmentation, which aims to predict the action label for each video frame. It is a pure vision task without the use of the text encoder. Following \cite{yang2021taco,luo2020univl,zhu2020actbert}, we encode the input video frames with the well-trained video encoder and apply a linear classifier upon the features to predict action labels. \noindent \textbf{Datasets and Metrics.} We conduct experiments on the widely used COIN dataset~\cite{tang2019coin} and the frame-wise accuracy (FA) is taken as the evaluation metric. \noindent\textbf{Results.} As shown in Table~\ref{tab:sotaStepLocActSeg}, our LocVTP achieves state-of-the-art performance with FA reaching 72.9\%. This further demonstrates the superiority of our feature in localization tasks even in the absence of language guidance. \subsection{Transfer Results on Temporal Grounding} \label{sec:4_2} \noindent \textbf{Settings.} We validate the performance of pre-trained representations on temporal grounding, which aims to localize the actions corresponding to a sentence from an untrimmed video. Specifically, we re-train the mainstream temporal grounding method 2D-TAN~\cite{zhang2020learning}\footnote{We choose 2D-TAN since it is relatively simple without too many dataset-specific parameters, which can fairly verify the effectiveness of pre-training features. Results on more advanced baselines are available in the supplementary material.} by only replacing the original input features with pre-trained ones. For ease of feature extraction, we choose representative VTP methods with publicly available code for comparisons. \noindent \textbf{Datasets and Metrics.} \textbf{1)} ActivityNet Captions (ANet)~\cite{krishna2017dense}. It contains 20K untrimmed videos with 100K descriptions. By convention, we use 37,417 video-query pairs for training, 17,505 pairs for validation, and 17,031 pairs for testing. \textbf{2)} Charades-STA~\cite{gao2017tall}. Following the official split, 12,408 video-query pairs are used for training, and 3,720 pairs for testing. \textbf{3)} TACoS~\cite{regneri2013grounding}. It has 10,146 video-query pairs for training, 4,589 pairs for validation, and 4,083 pairs for testing. Following prior works, we adopt “R@n, IoU@m” (abbreviated as $R^m_n$) as the metric. Specifically, $R^m_n$ is defined as the percentage of samples for which at least one of the top-$n$ retrieved moments has an IoU larger than $m$ with the ground-truth moment. \noindent\textbf{Results.} \textbf{1)} As shown in Table~\ref{tab:sotaVG}, even trained with a much larger dataset, the current popular video-text pre-training frameworks achieve inferior performance compared to the separately pre-trained one. For example, Frozen~\cite{bain2021frozen} reaches 43.3\% at $R_1^{0.5}$ on ANet Captions, which is 1.1\% (absolute) lower than the separately pre-trained counterpart. \textbf{2)} Whether pre-trained on HowTo100M or CC + WV, our LocVTP outperforms both video-text pre-training methods by a large margin on all three datasets.
For example, pre-trained on HowTo100M, LocVTP surpasses the separately pre-trained method by 3.8\% on $R_1^{0.5}$ of ANet Captions. \textbf{3)} For a fairer comparison, we sample a subset of HowTo100M by selecting the same number of training samples as Kinetics~\cite{kay2017kinetics} (300K training pairs), denoted as HT$^\ddag$ in Table~\ref{tab:sotaVG}. Although trained with noisy ASR captions, LocVTP still performs better than the separately pre-trained method under the same training data volume. This indicates that our performance improvement stems from the sound architecture design rather than merely from the use of a large-scale dataset. \subsection{Ablation Study on Training Objective\protect\footnote{If not specified, all ablation studies are conducted on the downstream temporal grounding task on the ActivityNet Captions dataset. We use LocVTP pre-trained on HowTo100M with ImageNet initialization.\label{ablations}}}\label{subsec:4_6_1} \noindent \textbf{Training Strategy}. The coarse-grained contrastive alignment loss $\mathcal{L}_{c}$ provides a basic cross-modal matching prior, and we introduce three potential ways to use it: \textbf{1)} \textit{multi-stage training}: first perform coarse-grained training and then use the trained model to initialize the other stages. \textbf{2)} \textit{warm-up training}: decrease $\lambda_{c}$ exponentially from 1 to 0 throughout the training process. \textbf{3)} \textit{weighted training}: set $\lambda_{c}$ to a constant value. Here we set $\lambda_{c} = 0.5$. As shown in Table~\ref{tab:trainStrategy}, we find that the weighted training strategy achieves the best performance, with warm-up training slightly behind. Multi-stage training is the least effective one. \noindent \textbf{Loss Component}. We present the loss component ablations in Table~\ref{tab:lossComp}. As shown, both the fine-grained loss $\mathcal{L}_{f}$ and the temporal aware loss $\mathcal{L}_{t}$ are crucial. For example, compared to the full version (exp.\#1), removing $\mathcal{L}_{f}$ or $\mathcal{L}_{t}$ brings about a 1.4\% and 1.5\% performance degradation on the $R_1^{0.5}$ metric, respectively. \noindent \textbf{More downstream temporal grounding baselines.} We take another temporal grounding method, CSMGAN~\cite{liu2020jointly}, as the downstream baseline. As shown in Table~\ref{tab:baseline}, our LocVTP pre-trained feature consistently benefits this more advanced baseline. \input{tables_figs/tabAbla1to3} \subsection{Ablations on Fine-grained Contrastive Loss}\label{subsec:4_6_2} \noindent \textbf{Correspondence Discovery Strategies}. We experiment with four potential strategies to extract cross-modal correspondences: \textbf{1)} \emph{random}: randomly select $K$ words for each clip; \textbf{2)} \emph{2d-topk}: select the most similar $K \times T$ clip-word pairs; \textbf{3)} \emph{word-topk}: select the most similar $K$ clips for each word; \textbf{4)} \emph{clip-topk}: select the most similar $K$ words for each clip, namely the method illustrated in Section~\ref{sec:3.3}. As indicated in Table~\ref{tab:MatchStrategy}, the \textit{random} and \textit{2d-topk} matching strategies are the two worst options. The \textit{word-topk} matching is also sub-optimal, which can be attributed to the possibility of introducing words without concrete meanings (\textit{e}.\textit{g}., articles or pronouns) into matched pairs. \noindent \textbf{Number of Selected Pairs $K$}. We further ablate the hyper-parameter $K$ used in the \emph{clip-topk} strategy.
Table~\ref{tab:AblaK} shows that the performance saturates at $K=3$ and slightly decreases for $K=4$. We conjecture that this is because too few selected words may convey vague meanings, while too large a $K$ value prevents establishing accurate correspondences. \subsection{Ablations on Temporal aware Contrastive Loss} \input{tables_figs/tabAblation} \noindent \textbf{Context Projection Head Components.} In Eq.~\eqref{equ:4}, the warped feature is generated based on both the direction $\operatorname{sgn}(\delta)$ and the distance $\lvert\delta\rvert$. Here we investigate eliminating either of them to see the difference. We observe in Table~\ref{tab:projAbla} that removing either component decreases the performance, which indicates that both the direction and the distance of the bias $\delta$ are crucial for feature warping. \noindent \textbf{Maximum Bias Distance $\delta_{max}$.} Here we ablate different values for $\delta_{max}$. From Table~\ref{tab:shiftRatio}, we can see that $\delta_{max} = 4$ achieves the best performance. This may be because a small bias makes the model unable to perceive enough context, while a large bias makes contextual reasoning too difficult. \noindent \textbf{Intra-modal \emph{vs.} Cross-modal Constraint.} In Section~\ref{sec:3.4}, given the matched clip-word pair $\{\boldsymbol{v}^{t}, \boldsymbol{q}^{t}_{+}\}$ and the warped feature $\boldsymbol{z}^{t}$, we enforce the \emph{cross-modal} supervision, \textit{i}.\textit{e}., $\boldsymbol{z}^{t} \leftrightarrow \boldsymbol{q}^{t}_{+}$. Here, we apply the temporal aware contrastive loss $\mathcal{L}_{t}$ in an \emph{intra-modal} manner which regards $\boldsymbol{z}^{t}$ and $\boldsymbol{v}^{t}$ as positive pairs, \textit{i}.\textit{e}., $\boldsymbol{z}^{t} \leftrightarrow \boldsymbol{v}^{t}$. The results in Table~\ref{tab:intraVScross} show that our adopted cross-modal mode outperforms the intra-modal one. \noindent\textbf{Temporal Sensitivity Analysis.} As a sanity check, we devise two proxy tasks to evaluate the temporal sensitivity of pre-trained video features. As shown in Fig.~\ref{fig:loclizableEval}, $n$ equidistantly sampled clips from one video are fed into the frozen video backbone to extract their corresponding features. Two linear classifiers are trained to perform two tasks: \textit{order prediction} and \textit{distance estimation}. The first task predicts the temporal index of each clip while the second one estimates the temporal distance between two clips. The results in Table~\ref{tab:locEval} show that our LocVTP with the temporal aware loss $\mathcal{L}_{t}$ outperforms the variant without it as well as two typical VTP methods (\textit{i}.\textit{e}., UniVL and MIL-NCE), which shows that $\mathcal{L}_{t}$ clearly contributes to the localization ability. \subsection{Transfer Results on Video Retrieval} \label{sec:4_5} \input{tables_figs/tabSOTAVR1} \noindent \textbf{Datasets.} We evaluate our LocVTP on the widely-used benchmark \textbf{MSR-VTT} dataset~\cite{xu2016msr}. It is composed of 10K YouTube videos (9K for training and 1K for test). We report results on the train/test splits introduced in \cite{yu2018joint}. \noindent\textbf{Results.} \textbf{1)} As can be seen, we achieve state-of-the-art performance under both sets of pre-training data, \textit{i}.\textit{e}., HowTo100M and CC3M+WV2M. Specifically, when pre-trained on CC3M+WV2M, LocVTP outperforms Frozen~\cite{bain2021frozen} by an absolute lift of 4.8\% on R@5.
\textbf{2)} It should be pointed out that although it uses RGB data only, our LocVTP achieves better performance than the methods using multi-modal expert features including motion, face, and speech, \textit{e}.\textit{g}., MMT~\cite{gabeur2020multi}. \textbf{3)} The recent work CLIP~\cite{radford2021learning} provides a stronger vision encoder and we also evaluate the performance based on it. It is shown that CLIP's weights greatly improve the performance of LocVTP with R@5 reaching 72.8\%, surpassing top-performing CLIP-based methods. \textbf{4)} Our LocVTP also outperforms previous methods under the zero-shot setting, showing its generalization ability. \subsection{Visualization\protect \footnote{More visualizations are left in the supplementary materials.\label{supple}}} \label{sec:4_7} \noindent\textbf{Cross-modal Correspondence Visualizations.} Fig.~\ref{fig:fineMatch} shows two frames\footnote{Here we use “frame” to indicate the center frame of a video snippet. \label{frame_path}} and their corresponding similarity scores with caption words. The top \emph{K} highest scored words are marked in red ($K=3$). Frame \#1 and frame \#2 have similar visual appearance yet correspond to different action processes. Our method pinpoints the subtle differences and accurately finds the most relevant words. \input{tables_figs/figFineMatch} \input{tables_figs/figUmap} \noindent\textbf{UMAP Visualizations.} As shown in Fig.~\ref{fig:umap}, we provide UMAP~\cite{mcinnes2018umap} visualizations for \emph{fused} multi-modal features, which are generated by multiplying the extracted video feature by one query feature. With the temporal aware loss $\mathcal{L}_{t}$, our LocVTP shows more separable distributions compared with LocVTP \emph{w/o} $\mathcal{L}_{t}$, manifesting that $\mathcal{L}_{t}$ helps distinguish the action-of-interest from the background. \noindent\textbf{Similarity Distribution Visualizations.} In Eq.\eqref{equ:4}, the context projection head warps the contextual clip $\boldsymbol{v}^{t+\delta}$ to the reference one $\boldsymbol{v}^{t}$. Here we collect 10K paired training samples and compute three sets of cosine similarities: the reference similarity $\small{(\boldsymbol{v}^{t}, \boldsymbol{q}_{+}^{t})}$, the bias similarity $\small{(\boldsymbol{v}^{t+\delta}, \boldsymbol{q}_{+}^{t})}$, and the projection similarity $\small{(\boldsymbol{z}^{t}, \boldsymbol{q}_{+}^{t})}$. Fig.~\ref{fig:distribute} plots the histogram of these similarities. We can see that the distribution of the projection similarity is close to that of the reference similarity while far away from that of the bias similarity. This demonstrates that our context projection head can effectively warp contextual features conditioned on the temporal information. \section{Experiments}\label{expes} \input{exps/4_1_setting} \input{exps/4_5_sota_vr} \input{exps/4_2_sota_vg} \input{exps/4_3_sota_stepLoc} \input{exps/4_4_sota_actSeg} \input{exps/4_6_ablation} \input{exps/4_7_vis} \section{Conclusions} \vspace{-2mm} In this paper, we propose LocVTP, the first video-text pre-training framework for temporal localization tasks. Specifically, we apply cross-modal contrastive learning at both the coarse-grained video-sentence and the fine-grained clip-word levels. Besides, we propose a context warping pretext task and a temporal aware contrastive loss to enhance the temporal awareness of video features. Experimental results show that LocVTP achieves state-of-the-art performance when transferred to both retrieval-based and localization-based downstream tasks.
\noindent \textbf{Acknowledgements.} This paper was partially supported by NSFC (No: 62176008) and Shenzhen Science \& Technology Research Program (No: GXWD20201231165807007-20200814115301001). \section{Introduction} \input{tables_figs/fig1} Video-Text Pre-training (VTP)~\cite{sun2019videobert,miech2019howto100m,liu2019use,liu2021hit,lei2021less,bain2021frozen,yan2021video,wang2021object} has attracted increasing attention with the aim of learning generic and transferable \emph{joint} video-language (VL) representations. Compared to the conventional \emph{separate} pre-training on each single modality, \textit{e}.\textit{g}., video features pre-trained on action recognition datasets (Kinetics~\cite{kay2017kinetics}, Sport1M~\cite{KarpathyCVPR14}), VTP has several advantages: 1) It leverages large-scale unlabeled narrated video data with automatically generated corresponding text data for video-text correspondence pre-training. 2) It maps features of different modalities into a shared latent space, which reduces the difficulty of cross-modal feature interaction. Thanks to these advantages, VTP has significantly improved the performance of many downstream VL tasks. For example, as illustrated in \cite{ging2020coot}, the video retrieval performance using features pre-trained with the VTP method \texttt{MIL-NCE}~\cite{miech2020end} is much higher than that using separately pre-trained features (cf. Fig.~\ref{fig:teaserBar} (left)). Despite their encouraging performance, we find that most current VTP methods are applicable to only a limited set of downstream tasks, \textit{i}.\textit{e}., they focus on \emph{retrieval-based} tasks which require video-level predictions, \textit{e}.\textit{g}., video retrieval~\cite{xu2016msr}, video captioning~\cite{rohrbach2015dataset}, and video question answering~\cite{jang2017tgif}. In contrast, there exists another mainstream class of \emph{localization-based} tasks which expect more fine-grained clip-level or frame-level predictions, \textit{e}.\textit{g}., temporal grounding~\cite{gao2017tall}, action segmentation~\cite{tang2019coin}, and action step localization~\cite{zhukov2019cross} (cf. Fig.~\ref{fig:retrievalVSground}). Unfortunately, through experiments, we find that they generalize poorly to this type of downstream task. For example, on temporal grounding, even pre-trained with the much larger dataset HowTo100M~\cite{miech2019howto100m}, the VTP method \texttt{MIL-NCE} still performs worse than the separately pre-trained counterpart (cf. Fig.~\ref{fig:teaserBar} (right)). In this paper, we argue that this poor transferability to localization-based tasks is due to the absence of two indispensable characteristics: \textbf{1) \emph{Fine-grained alignment}}: We contend that the alignment should be conducted at the more \emph{fine-grained} clip-word level instead of the \emph{coarse-grained} video-sentence\footnote{Here we use ``sentence'' to represent the whole paired text for each video, such as the ASR in HowTo100M~\cite{miech2019howto100m} or the query language in ActivityNet Caption~\cite{krishna2017dense}.} level. As shown by the temporal grounding example in Fig.~\ref{fig:TeaserMotivate}, a given query sentence may contain multiple actions (\textit{e}.\textit{g}., ``\texttt{hit the golf ball}'' ($q^{s1}$) and ``\texttt{bend down to pick up the ball}'' ($q^{s2}$)). Thus, aligning each action (or word) to the corresponding clips (\textit{i}.\textit{e}., $v^{t1}$ and $v^{t2}$) will help to obtain more detailed and accurate feature representations.
\textbf{2) \emph{Temporal relation reasoning}}: We hope the clip features of a certain action can also perceive other actions in the same video. For example, for a typical \texttt{golf} video, action $q^{s2}$ (``\texttt{bend down to pick up the ball}'') always occurs shortly after action $q^{s1}$ (``\texttt{hit the golf ball}''). Thus, incorporating such temporal relationships into VTP can help to improve the temporal awareness of video features. \input{tables_figs/fig2} Based on these observations, we propose a novel video-text pre-training framework for localization tasks, dubbed LocVTP. By incorporating both of the above-mentioned characteristics, LocVTP achieves state-of-the-art performance not only on the widely studied retrieval-based tasks, but also on the less-focused localization-based tasks. Specifically, \textbf{for fine-grained alignment}, we extend the coarse-grained contrastive training with video-sentence alignment to a fine-grained one with clip-word alignment. Since there are no clip-word correspondence annotations in existing large-scale datasets, we utilize the latent space established by the coarse-grained contrastive learning to estimate the clip-word similarity, and then select the clip-word pairs with high similarities as positive samples. To further illustrate this, as shown in Fig.~\ref{fig:TeaserMotivate} (right), suppose $\{\boldsymbol{v}^{t1}, \boldsymbol{q}^{s1}\}$ and $\{\boldsymbol{v}^{t2}, \boldsymbol{q}^{s2}\}$ are two matched clip-word feature pairs. Semantic embeddings in each pair are mapped to be close to each other, \textit{i}.\textit{e}., $\boldsymbol{v}^{t1} \leftrightarrow \boldsymbol{q}^{s1}$, $\boldsymbol{v}^{t2} \leftrightarrow \boldsymbol{q}^{s2}$. \textbf{For temporal relation reasoning}, we propose a new pretext task called \emph{context warping}. Here we use Fig.~\ref{fig:TeaserMotivate} (right) for illustration. Context warping is designed to generate a new temporally relevant clip feature $\boldsymbol{z}^{t1}$, which imitates $\boldsymbol{v}^{t1}$, conditioned on another clip $\boldsymbol{v}^{t2}$ and the relative distance $t2 - t1$ in time, \textit{i}.\textit{e}., $\boldsymbol{z}^{t1}=\operatorname{warp}(\boldsymbol{v}^{t2}, t2 - t1)$. The predicted relevant clip feature $\boldsymbol{z}^{t1}$ is enforced to keep the originally established cross-modal correspondence unchanged, \textit{i}.\textit{e}., $\boldsymbol{z}^{t1} \leftrightarrow \boldsymbol{q}^{s1}$. In this manner, we simulate the contextual reasoning process and enhance the temporal awareness of video features. We conduct extensive experiments on four downstream tasks (\textit{i}.\textit{e}., video retrieval, temporal grounding, action step localization, and action segmentation) across six datasets. The results on both retrieval-based and localization-based tasks demonstrate the superiority and the generalization ability of our LocVTP. In summary, we make three contributions in this paper: \begin{itemize}[topsep=0pt, partopsep=0pt, leftmargin=13pt, parsep=0pt, itemsep=3pt] \item We propose a localization-oriented video-text pre-training framework, LocVTP, which benefits both retrieval-based and the less-explored localization-based downstream tasks. \item We pinpoint two crucial designs in LocVTP, \textit{i}.\textit{e}., fine-grained video-text alignment and temporal relation reasoning. \item Experimental results show that our LocVTP significantly outperforms previous state-of-the-art methods when transferred to various downstream tasks.
\end{itemize} \section{Related Work}\label{relatedwork} \noindent \textbf{Video-Text Pre-training (VTP).} With the release of the large-scale instructional dataset HowTo100M, VTP has attracted significant interest in the community. Overall, the mainstream methods can be broadly classified into two classes: 1) Generative methods: Several methods~\cite{li2019visualbert,lu2019vilbert,tan2019lxmert,chen2020uniter,hu2021transformer,wang2021t2vlad,liu2019use,wang2021dig} try to extend BERT~\cite{vaswani2017attention} to the cross-modal domain, \textit{i}.\textit{e}., they accept both visual and textual tokens as input and perform the masked-token prediction task. 2) Discriminative methods: These methods~\cite{lei2021less,bain2021frozen,patrick2020support,liu2021hit} learn representations by differentiating input samples using objectives such as the metric loss~\cite{hoffer2015deep,wu2017sampling} or the contrastive loss~\cite{he2020momentum,chen2020simple}. ClipBert~\cite{lei2021less} enables affordable pre-training from sparsely sampled frames. Frozen~\cite{bain2021frozen} adapts the recent ViT~\cite{dosovitskiy2020image} as the visual encoder and can be flexibly trained on both image and video datasets. T2VLAD~\cite{wang2021t2vlad} and FCA~\cite{han2021fine} also perform fine-grained interactions between video clips and phrases. However, both of them resort to additional overhead, \textit{e}.\textit{g}., k-means clustering or a graph auto-encoder. In contrast, our LocVTP explicitly models the clip-word matching in a more lightweight similarity-comparison manner. \noindent \textbf{Pre-training for localization tasks.} Compared to the retrieval tasks~\cite{xu2016msr,rohrbach2015dataset,jang2017tgif} which require only video-level predictions, localization tasks~\cite{gao2017tall,tang2019coin,zhukov2019cross} are essentially different since they need dense clip-level or frame-level predictions, and thus the pre-training for these tasks is more challenging. In the pure video domain, this gap has been noticed and several pre-training works~\cite{xu2021boundary,alwassel2021tsp,xu2021low,zhang2022unsupervised} tailored for action localization have been proposed. BSP~\cite{xu2021boundary} synthesizes temporal boundaries using existing action recognition datasets and conducts boundary type classification to generate localization-friendly features. TSP~\cite{alwassel2021tsp} trains video encoders to be temporally sensitive by predicting the foreground clip label and classifying whether a clip is inside or outside the action. As for the video-language domain, our LocVTP is the first pre-training framework designed for localization tasks. Besides, compared to TSP and BSP which require label information for supervised pre-training, our LocVTP can directly learn from narrated videos. \section{Approach} \input{tables_figs/fig3} \subsection{Overview of LocVTP}\label{sec:3.1} An overview of LocVTP is illustrated in Fig.~\ref{fig:pipeline}. We firstly feed the video and language modalities to their respective encoders $f_{v}(\cdot)$ and $f_{q}(\cdot)$ to obtain embedded features. We follow the sparse sampling spirit of \cite{lei2021less} and sample $T$ clips for each video, yielding the encoded video $\boldsymbol{v}= \{\boldsymbol{v}^{t}\}_{t=1}^{T}$, where $\boldsymbol{v}^{t}\in \mathbb{R}^{D}$ is the $t^{th}$ clip feature and $D$ is the feature dimension.
The text embedding is represented as $\boldsymbol{q}= \{\boldsymbol{q}^{s}\}_{s=1}^{S_{q}}$, where $\boldsymbol{q}^{s}\in \mathbb{R}^{D}$ is the $s^{th}$ word embedding and $S_q$ is the word length of $\boldsymbol{q}$. Three types of contrastive losses are then applied to learn cross-modal features: 1) The coarse-grained contrastive loss builds the video-sentence level alignment; 2) A correspondence discovery strategy is proposed to build clip-word relations, based on which the fine-grained contrastive loss is applied; 3) The temporal aware contrastive loss with the context warping pretext task is proposed to encode temporal information into video representations. \subsection{Coarse-grained Contrastive Learning}\label{sec:3.2} We firstly conduct contrastive alignment at the global video-sentence level. Specifically, to obtain the video and sentence level features, we average pool $\boldsymbol{v}$ and $\boldsymbol{q}$ along the temporal and word index dimension, respectively. The global features are represented as $\overline{\boldsymbol{v}}$, $\overline{\boldsymbol{q}}\in \mathbb{R}^{D}$. Then we formulate this video-sentence alignment into the contrastive framework~\cite{he2020momentum} as follows: \begin{equation} \mathcal{L}_{c}=-\log \frac{\exp \left(\overline{\boldsymbol{v}} {\cdot} \overline{\boldsymbol{q}} / \tau\right)}{\sum_{i=1}^{N} \exp \left(\overline{\boldsymbol{v}} {\cdot} \overline{\boldsymbol{q}}_{i} / \tau\right)}, \label{equ:1} \end{equation} \noindent where $\overline{\boldsymbol{q}}_{i}, i \in [1, N]$, is the sentence feature of the $i^{th}$ sample within the batch. $N$ denotes the batch size and $\tau$ is the temperature parameter. The coarse-grained contrastive loss $\mathcal{L}_{c}$ serves as a base loss to conduct the video-sentence level constraint and induces a basic latent space in which the detailed cross-modal matching is achieved. Though usually coarse and noisy, this latent space encodes a prior for fine-grained clip-word correspondence discovery. In Section~\ref{subsec:4_6_1}, we design and analyze three potential ways to use this cross-modal matching prior. \subsection{Fine-grained Contrastive Learning}\label{sec:3.3} Beyond the coarse-grained video-sentence alignment, we propose to conduct contrastive learning in a fine-grained manner, \emph{i.e.}, clip-word matching. We contend that introducing such alignment learning into the pre-training stage could narrow down its gap with downstream localization tasks and calibrate the pre-trained feature to be more temporally aware. \noindent \textbf{Clip-word correspondence discovery.} Before performing fine-grained contrastive learning, we firstly need to estimate the clip-word correspondences from video-sentence pairs. Thanks to the priors well established by the coarse-grained contrastive learning, we compute the cosine similarities between the video clips and their corresponding caption words in the pre-built latent space and choose the most similar $K$ words as the correspondence for each video clip. Note that we select multiple positive words rather than simply picking the one with the highest similarity because individual words may have vague meanings while a sense-group\footnote{A group or sequence of words conveying a particular meaning or idea in linguistics.\label{sensegroup}} conveys more precise information (cf. Section~\ref{subsec:4_6_2}).
Given the video-sentence pair $\left\{\boldsymbol{v}, \boldsymbol{q}\right\}$, for the encoded $t^{th}$ video clip $\boldsymbol{v}^{t}$, we compute its cosine similarities with the $s^{th}$ word embedding $\boldsymbol{q}^{s}$ and apply the $\operatorname{topk}$ operation to select the most matched $K$ ones. Following~\cite{wang2021dense}, these $K$ selected items are average pooled to form the final positive sample: \begin{equation} \boldsymbol{q}_{+}^{t}=\operatorname{avgpool} \Big(\underset{s\in [1, S_{q}]}{\arg \operatorname{topk} } \left(\boldsymbol{v}^{t} {\cdot} \boldsymbol{q}^{s}\right)\Big), \label{equ:2} \end{equation} \noindent where $\boldsymbol{q}_{+}^{t}$ is the final positive sample for $\boldsymbol{v}^{t}$. $(\boldsymbol{u} {\cdot} \boldsymbol{v})=\boldsymbol{u}^{\top} \boldsymbol{v} /\|\boldsymbol{u}\|\|\boldsymbol{v}\|$ represents the cosine similarity between the $\ell_{2}$-normalized $\boldsymbol{u}$ and $\boldsymbol{v}$. This process can be efficiently performed for all the video clips using matrix operations. \noindent \textbf{Fine-grained contrastive loss.} With the selected clip-word correspondences as positive pairs, we perform fine-grained representation learning following the cross-modal InfoNCE~\cite{he2020momentum} loss (cf. Figure~\ref{fig:posLossA}). The negative samples are taken from the other words within the batch. Therefore, the fine-grained contrastive loss is defined as follows. \begin{equation} \mathcal{L}_{f}=\frac{1}{T}\sum_{t=1}^{T}-\log \frac{ \exp (\boldsymbol{v}^{t} {\cdot} \boldsymbol{q}_{+}^{t} / \tau)}{\sum_{i=1}^{N} \sum_{s=1}^{S_{q_{i}}} \exp \left(\boldsymbol{v}^{t} {\cdot} \boldsymbol{q}_{i}^{s} / \tau\right)}, \label{equ:3} \end{equation} \noindent where $\boldsymbol{q}_{i}^{s}$ is the $s^{th}$ word feature of the $i^{th}$ sentence $\boldsymbol{q}_{i}$. \subsection{Temporal aware Contrastive Learning}\label{sec:3.4} \input{tables_figs/figPosLoss} Compared with the video-level retrieval task, which favors temporally invariant features~\cite{pan2021videomoco,qian2021spatiotemporal}, the clip-level localization task~\cite{chen2020rethinking,lu2019debug,xiao2021boundary,cao2021pursuit,xiao2021natural,yuan2021closer,zhang2021cola,cao2021deep,zhang2021synergic} prefers temporally aware video embeddings. Specifically, correlated actions in the same video should perceive each other. This characteristic is, however, not embodied in the aforementioned contrastive learning. \noindent \textbf{Context warping head.} To alleviate this, we set up a \emph{context-warping} operation to enforce the video clip to perceive its context. For the video clip $\boldsymbol{v}^{t}$ in a matched clip-word pair $\{\boldsymbol{v}^{t}, \boldsymbol{q}^{t}_{+}\}$ (cf. Section \ref{sec:3.3}), we warp its contextual video clip at temporal distance $\delta$, \textit{i}.\textit{e}., $\boldsymbol{v}^{t+\delta}$, to ``reconstruct'' itself. To supervise this warping process, we set up a temporal aware contrastive loss to maintain the established correspondence. Specifically, we propose a \textit{context warping head} $g(\cdot)$ to instantiate this warping process, taking the context clip feature $\boldsymbol{v}^{t+\delta}$ and the temporal distance $\delta$ as input. \begin{equation} \begin{aligned} \boldsymbol{z}^{t} &=g(\boldsymbol{v}^{t+\delta}; \delta) \\ &=\operatorname{ReLU}\left(W[\boldsymbol{v}^{t+\delta}, \operatorname{sgn}(\delta),|\delta|]\right), \label{equ:4} \end{aligned} \end{equation} \noindent where $\boldsymbol{z}^{t}$ is the warped feature.
$W \in \mathbb{R}^{(D+2) \times D}$ are the trainable weights. $\delta$ is randomly sampled within the range $[-\delta_{max}, \delta_{max}]$. $\operatorname{sgn}(\cdot)$ is the sign function, which returns $1$ for positive values and $-1$ for negative ones. Here $\operatorname{sgn}(\delta)$ and $\lvert\delta\rvert$ indicate the direction and distance of the temporal difference $\delta$, respectively. \noindent \textbf{Temporal aware contrastive loss.} Through the context warping head, the warped feature $\boldsymbol{z}^{t}$ should mimic the reference feature $\boldsymbol{v}^{t}$. Since $\boldsymbol{v}^{t}$ has the clip-word alignment with $\boldsymbol{q}^{t}_{+}$, such correspondence should be preserved between the warped feature $\boldsymbol{z}^{t}$ and $\boldsymbol{q}^{t}_{+}$ (cf. Fig.~\ref{fig:posLossB}). \begin{equation} \mathcal{L}_{t}=\frac{1}{T}\sum_{t=1}^{T}-\log \frac{ \exp (\boldsymbol{z}^{t} {\cdot} \boldsymbol{q}_{+}^{t} / \tau)}{\sum_{i=1}^{N} \sum_{s=1}^{S_{q_{i}}} \exp \left(\boldsymbol{z}^{t} {\cdot} \boldsymbol{q}_{i}^{s} / \tau\right)}. \label{equ:5} \end{equation} This process enforces video features to learn the ability of temporal reasoning, thus leading to more localization-friendly video features. Integrating the above constraints, our final loss function is as follows. \begin{equation} \mathcal{L} = \lambda_{c}\mathcal{L}_{c} + \lambda_{f}\mathcal{L}_{f} + \lambda_{t}\mathcal{L}_{t}, \label{equ:lossAll} \end{equation} \noindent where $\lambda_{c}$, $\lambda_{f}$, and $\lambda_{t}$ balance the focus on different constraints during training.
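To make the training objective concrete, the following is a condensed PyTorch-style sketch of Eqs.~\eqref{equ:1}--\eqref{equ:5}, written directly from the formulas above rather than from any released implementation; for simplicity it assumes one caption per video and a common caption length $S$ across the batch.
\begin{verbatim}
import torch
import torch.nn.functional as F

def coarse_loss(v, q, tau=0.07):
    # Eq.(1): video-sentence InfoNCE. v: (N,T,D) clips, q: (N,S,D) words.
    vbar = F.normalize(v.mean(dim=1), dim=-1)
    qbar = F.normalize(q.mean(dim=1), dim=-1)
    logits = vbar @ qbar.t() / tau                    # (N, N)
    return F.cross_entropy(logits, torch.arange(v.size(0)))

def topk_positives(v, q, K=3):
    # Eq.(2): average the K most similar words of its own caption per clip.
    sim = torch.einsum("ntd,nsd->nts",
                       F.normalize(v, dim=-1), F.normalize(q, dim=-1))
    idx = sim.topk(K, dim=-1).indices                 # (N, T, K)
    words = q.unsqueeze(1).expand(-1, v.size(1), -1, -1)   # (N, T, S, D)
    sel = torch.gather(words, 2,
                       idx.unsqueeze(-1).expand(-1, -1, -1, q.size(-1)))
    return sel.mean(dim=2)                            # q_plus: (N, T, D)

def clipword_infonce(z, q, q_plus, tau=0.07):
    # Eqs.(3)/(5): clip (or warped clip) against all words in the batch.
    z, qp = F.normalize(z, dim=-1), F.normalize(q_plus, dim=-1)
    allw = F.normalize(q, dim=-1).flatten(0, 1)       # (N*S, D)
    pos = (z * qp).sum(-1) / tau                      # (N, T)
    neg = z @ allw.t() / tau                          # (N, T, N*S)
    return (torch.logsumexp(neg, dim=-1) - pos).mean()

class ContextWarpingHead(torch.nn.Module):
    # Eq.(4): z^t = ReLU(W [v^{t+delta}, sgn(delta), |delta|]).
    def __init__(self, dim):
        super().__init__()
        self.proj = torch.nn.Linear(dim + 2, dim, bias=False)

    def forward(self, v_ctx, delta):
        extra = torch.stack([torch.sign(delta), delta.abs()], dim=-1)
        return F.relu(self.proj(torch.cat([v_ctx, extra], dim=-1)))
\end{verbatim}
The total objective of Eq.~\eqref{equ:lossAll} is then the weighted sum of \texttt{coarse\_loss}, \texttt{clipword\_infonce} applied to the clips, and the same loss applied to the output of \texttt{ContextWarpingHead}.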
\section{Introduction} In recent years, the discovery that the electronic properties of twisted stacked graphene multilayers can be controlled by the twist angle, which modulates the interlayer tunneling between the graphene layers, has led to the burgeoning field of ``twistronics''~\cite{Santos_2007,Morrell_2010,Andrei_2020, Carr_2017, Ren_2020}. A remarkable discovery~\cite{BM} in the single-particle physics of these systems is that they host flat bands at certain ``magic'' angles, as first shown for the simplest case of twisted bilayer graphene (TBG). The flatness of these bands suggests that when interactions are included, they should host correlated electron states. Indeed, with improving sample preparation techniques, such states have been observed, most prominently Mott insulating states and superconductivity~\cite{Cao_2018a,Cao_2018b}. Such exotic correlated electron states are not unique to TBG~\cite{Yankowitz_2019,Wong_2020}, but are also present in other twistronic systems including those involving hexagonal boron nitride~\cite{hBN1, hBN2, hBN3, Yang_2020, Andelkovic2020, Wang2019}, twisted tungsten selenide and other transition metal dichalcogenides~\cite{Wang_2020, Zhang_2019, Li_2021, Naik_2018, Naik_2020, Zhan_2020}, twisted double bilayer graphene~\cite{Haddadi, Cheroblu, Burg2019, Koshino2019, He_2021,Zhang_2021, Cao_2020, Fang_2016, Culchac_2020, Choi_2019,Liu_2020,Lee_2019, Adak_2020}, twisted trilayer graphene~\cite{Kaxiras,Park_2021,Hao_2021, Suarez_2013, Chen_2016, Zuo_2018, Ma_2021, Xu_2021, Li_2019}, as well as other systems of stacked twisted graphene multilayers~\cite{Liu_2020, Hierarchy, Denner2020, Tritsaris_2020, Gupta2020}. They are even present in systems that do not possess a moir\'e potential~\cite{Kerelskye_2021, 2021, zhou2021isospin, delabarrera2021cascade, seiler2021quantum} and may also arise in other twisted graphene structures without flat bands due to low-energy van Hove singularities and Lifshitz transitions~\cite{Li_2022}. Theoretical understanding of this system has greatly benefited from the introduction of the Bistritzer-MacDonald (BM) model~\cite{BM}, in which the graphene sheets are individually treated in the long wavelength limit as Dirac point Hamiltonians, while the interlayer tunneling is treated in a spatially periodic model, represented in a small moir\'e Brillouin zone (mBZ). Intriguingly, although there has been important progress~\cite{Tarnopolsky_2019, Balents, Morrell_2010, Nam_2017, Zou_2018, Yuan_2018, Lin_2018, Zhang_2019, Kang_2018, Koshino2019, Rademaker_2018, Po_2019, Qiao_2018, Carr_2019, Guinea_2019}, a full explanation for the band flatness within the mBZ at magic twist angles remains elusive. One question that this naturally raises is the role of symmetry in producing flat bands in such models. In addition to translational and discrete rotational symmetries, a mirror symmetry operation maps the $K_M$ and $K_M'$ points of the mBZ onto one another~\cite{Po_2018}. Indeed the eigenstates of the BM model at the $K_M$ and $K'_M$ points largely reside in one layer or the other. Moreover, the energy dispersions in their vicinities are essentially identical, i.e., they have the same Fermi velocities. The symmetry of these Dirac points can be broken with a perpendicular electric field~\cite{Po_2018}, in which case the flatness of the low-energy bands at the magic angles is not expected to survive.
However, the symmetry of the Dirac points in a mBZ may be broken in more subtle ways, and whether the flat band phenomenon survives the lifting of this symmetry in general is, to our knowledge, not known. In this work we explore this question by investigating models in which the symmetry between the Dirac points of the layers that are tunnel-coupled has been broken, in effect through {\it different} Fermi velocities at the two coupled Dirac points. We consider two concrete situations where this can occur. The first involves a dielectric screening substrate applied on only one side of the TBG system. In general, Coulomb interactions renormalize the Fermi velocity at the Dirac points of a graphene layer~\cite{Gonzalez_1994}, through the effects of high momentum states on those at low momentum. Because the two graphene layers are at different distances from the substrate, screening sets in at different length scales for each of them and leads to different Fermi velocities at low energies for the coupled Dirac points. We estimate this effect and show that it can be considerable for substrates with a high dielectric constant, such as SrTiO$_3$~\cite{Veyrat_2020}. A second such model involves twisted tetralayer graphene with three independent twist angles $\theta_{12}$ (top pair), $\theta_{23}$ (middle pair), and $\theta_{34}$ (bottom pair). By considering situations where $\theta_{12}$ and $\theta_{34}$ are not too small, we approximate the four-layer system as two coupled systems comprised of the top and bottom pairs of layers supporting Dirac points, which are themselves in turn tunnel-coupled with effective twist angle $\theta_{23}$ between them. Having three independent twist angles is useful because it allows engineering of the relevant properties of the system. In our treatment, one finds that in addition to renormalized Fermi velocities at the Dirac points of the top and bottom pairs of layers, there are also changes in the precise form of the tunneling between the two coupled systems. Our main result is that magic angles at which flat bands arise do indeed survive symmetry breaking between the Dirac points, even when it is relatively strong. Figure \ref{fig:lambda vs theta} illustrates a typical result for the asymmetric bilayer system, in which one sees that engineering the Fermi velocity ratio allows for controlling the value of the magic angle. The location of the magic angle can be predicted to quite a good approximation by perturbation theory~\cite{BM}, which results in the condition ${\hbar k_{\theta} \sqrt{v_{1} v_{2}}} / {w} = \sqrt{3}$, where $v_{1}$ and $v_{2}$ are the Fermi velocities associated with the Dirac points in the two coupled layers, $w$ is the tunneling strength between the layers, and $k_{\theta} = 2k_D\sin(\theta/2)$ is the separation between twisted Dirac points as determined by the twist angle $\theta$ and $k_D$, the separation between the $K$ and $K'$ points of a single graphene sheet. Qualitatively similar results are obtained for the tetralayer system when the twist angles for the outer layers are not too small, and again the values of the magic angles can be accounted for by a perturbation theory analysis. While our basic approach does not include the effects of incommensuration arising at most sets of twist angles in this system, an estimate of these using degenerate perturbation theory suggests that they do not eliminate the basic flat band phenomenon. The rest of this article is organized as follows.
In Sec.~\ref{sec:asymTBG} we provide an analysis of twisted bilayer graphene with unequal Fermi velocities in the layers and describe how such asymmetry can emerge for a bilayer system with different dielectric screening in each layer. In Sec.~\ref{sec:4TBG} we focus on an effective realization of this model in a graphene tetralayer in which the outer twist angles are unequal and not too small. We model this system by treating the effects of twisting in the outer layers via ${\it k \cdot p}$ perturbation theory, which essentially renormalizes the Dirac point velocities, and then numerically solve for the spectrum in an effective bilayer BM model. We then provide a perturbative analysis for magic angles in this system and compare them with numerical results for representative sets of angles. We conclude in Sec.~\ref{sec:sum} with a summary and discussion. We present a study of the effects of incommensuration between outer and inner twist angles in the Appendices. Appendix~\ref{app:wave} provides some results that motivate our treatment of the tetralayer system as an effective bilayer system, in particular showing conditions under which incommensuration effects should be very small. Appendix~\ref{app:incomm} provides a degenerate perturbation theory estimate of the effects of scattering by incommensurate wavevectors from the outer two twisted layers in our idealization of the tetralayer as an effective bilayer system. \begin{figure}[t] \includegraphics[width=\linewidth]{fig1} \caption{The bandwidth of the asymmetric TBG at the $\Gamma_M$ point of the moir\'e Brillouin zone as a function of both the twist angle $\theta$ and the Fermi velocity asymmetry $v_{1}/v_{2}$. Locations where the bandwidth is less than 5 meV are shown in white. Here the tunneling amplitude $w = 110$ meV and $v_{2} = 0.88 \times 10^6$ m/s is the Fermi velocity of bare monolayer graphene.} \label{fig:lambda vs theta} \end{figure} \section{Asymmetric Twisted Bilayer Graphene}\label{sec:asymTBG} We consider an asymmetric twisted bilayer system with unequal Fermi velocities, described by the continuum Hamiltonian \begin{equation} H_\text{ATBG} = \begin{bmatrix} h_1 & T \\ T^\dagger & h_2 \end{bmatrix}, \label{HATBG} \end{equation} where $h_l = \hbar v_{l} {\boldsymbol \sigma} \cdot \left[-i\boldsymbol{\nabla} + (-1)^l {\bf q}_0/2\right]$ is the Hamiltonian of layer $l=1,2$, with $\boldsymbol{\sigma} = (\sigma_x,\sigma_y)$ the vector of Pauli matrices and $\boldsymbol{\nabla} = (\partial_x, \partial_y)$, and the tunneling $T = w\sum_{j = 0}^2 \exp(-i {\bf Q}_j \cdot {\bf r}) T_j$, with ${\bf Q}_j = {\bf q}_j - {\bf q}_0$, ${\bf q}_0 = k_\theta(0,-1)$, ${\bf q}_1=k_{\theta}(-\frac{\sqrt{3}}{2},\frac{1}{2})$, and ${\bf q}_2=k_{\theta}(\frac{\sqrt{3}}{2},\frac{1}{2})$~\cite{Balents}. Note that we have ignored the effect of the small rotation angle $\theta$ on the Pauli matrices in each layer and assumed that the Dirac points that are tunnel-coupled by $T$ reside in the same valley of their host graphene sheets; we are only describing the low-energy bands from those valleys. The tunneling matrices $T_j$ are given by~\cite{Balents} \begin{equation} \label{eq:Tj} T_0 = \begin{bmatrix} u & 1 \\ 1 & u \\ \end{bmatrix}, \quad T_1 = \begin{bmatrix} u & e^{2\pi i/3} \\ e^{-2\pi i/3} & u \\ \end{bmatrix}, \quad T_2 = T_1^*.
\end{equation} In the situations of interest to us, $v_{1}\neq v_{2}$, and $u\neq 1$ allows for different tunneling amplitudes between atoms on the same sublattice and those on different sublattices, which represents a simple model of lattice relaxation in the layers~\cite{Nam_2017,Koshino_2018,Carr_2019}. Except where otherwise indicated, in our numerical results we take the tunneling amplitude to be $w = 110$ meV, and the effective ratio of tunneling between sites on the same sublattice and on opposite sublattices to be $u = 0.8$. \subsection{Perturbative Estimate of Magic Angle} \label{sec:per} The effect of the tunneling $T$ on the low-energy dispersion in each layer can be computed perturbatively as corrections to poles of the resolvent operator, $G(E) = (E - H_\text{ATBG})^{-1}$. It is useful to employ the projector $P_l$ onto the space of each layer to define an energy-dependent effective Hamiltonian $h^\text{eff}_l(E)$ in layer $l$, \begin{equation} E-h^\text{eff}_l(E) \equiv \left[P_l {G}(E) P_l \right]^{-1}, \end{equation} which then yields \begin{subequations} \begin{align} h^\text{eff}_1(E) &= h_1 + T(E - h_2)^{-1}T^\dagger,\\ h^\text{eff}_2(E) &= h_2 + T^\dagger(E - h_1)^{-1}T. \end{align} \end{subequations} We next evaluate the matrix elements of $h_l^\text{eff}$ in the plane-wave basis $|{\bf k}_l\rangle$, where ${\bf k}_l$ is measured from the Dirac point of layer $l$. Then $T_j$ only connects states with wavevectors that differ by ${\bf q}_j$, so that \begin{equation}\label{eq:heff1} \langle {\bf k}_1| h_1^\text{eff} - h_1 | {\bf k}_1\rangle = w^2 \sum_j \frac{T_j \left[ E + \hbar v_{2} \boldsymbol\sigma \cdot ({\bf k}_1 - {\bf q}_j) \right] T_j}{E^2 - (\hbar v_{2}|{\bf k}_1 - {\bf q}_j|)^2}. \end{equation} A similar expression for $\langle {\bf k}_2| h_2^\text{eff} - h_2 | {\bf k}_2\rangle$ is obtained by replacing $v_{2}$ with $v_{1}$ and ${\bf q}_j$ with $-{\bf q}_j$. Since we are interested in solutions $E \sim |{\bf k}_1|$, we expand \begin{equation} \frac1{E^2 - (\hbar v_{2}|{\bf k}_1 - {\bf q}_j|)^2} = -\frac{1 + 2{\bf k}_1\cdot{\bf q}_j/k_\theta^2}{(\hbar v_{2} k_\theta)^2} + \mathcal{O}(E^2,|{\bf k}_1|^2). \end{equation} Finally, using the identities \begin{subequations} \begin{align} &\sum_j T_j^2 = 3(1+u^2), \\ &\sum_j T_j \boldsymbol\sigma T_j = 3u^2 \boldsymbol\sigma,\\ &\sum_j T_j (\boldsymbol\sigma \cdot {\bf q}_j) T_j = 0, \\ &\sum_j T_j (\boldsymbol\sigma \cdot {\bf q}_j) {\bf q}_j T_j = \frac32 k_\theta^2 (u^2-1) \boldsymbol\sigma, \end{align} \end{subequations} the matrix elements up to $\mathcal{O}(|{\bf k}|^2,w^4)$ simplify to \begin{equation} \langle {\bf k}_l| h_l^\text{eff} - h_l | {\bf k}_l\rangle \approx - 3 \alpha_{\bar l}^2\left[ (1+u^2) E + \hbar v_{\bar l} \boldsymbol\sigma\cdot{\bf k}_l \right], \end{equation} where $\alpha_l = w/(\hbar k_{\theta} v_{l})$ and we have denoted opposite layers by $l \neq \bar l$. \begin{figure} \includegraphics[width=\linewidth]{fig2} \caption{(a) Band spectrum for asymmetric TBG with Fermi velocity asymmetry $v_1/v_2 = 1.33$ for twist angle $\theta=1.3^\circ$, which yields nearly flat bands. (b) Detail of low energy band spectrum.} \end{figure} Solving for the eigenvalue $E$ of $h_l^\text{eff}(E)$ self-consistently, we find $E = \pm \hbar v'_{l} |{\bf k}_l|$ with a renormalized Fermi velocity, \begin{equation} v'_{l} = \frac{ v_{l} - 3 \alpha_{\bar l}^2 v_{\bar l}}{1+ 3(1+u^2) \alpha_{\bar l}^2}. 
\end{equation} Thus the renormalized Fermi velocities both vanish when \begin{equation} \overline\alpha = \frac1{\sqrt 3}, \quad \overline\alpha \equiv \frac{w}{\hbar k_\theta \sqrt{v_{1}v_{2}}}. \end{equation} Hence, within this perturbative analysis, the ``magic'' angle persists in the presence of Fermi velocity asymmetry between the twisted layers and is set by their geometric mean. We note here that by defining $\beta_l = 1+3(1+u^2)\alpha_l^2$, we can write (in the plane-wave basis) \begin{equation} h_l^\text{eff} \approx (1-\beta_{\bar l}) E + \beta_{\bar l} \hbar v'_{l} \boldsymbol\sigma \cdot {\bf k}, \end{equation} and the projected resolvent operator takes the form \begin{equation} (E-h_l^\text{eff})^{-1} \approx \frac{1}{\beta_{\bar l}} \left[E - \hbar v'_{l} \boldsymbol\sigma \cdot {\bf k}\right]^{-1}. \end{equation} The poles of this operator occur at $E = \pm \hbar v'_l |{\bf k}_l|$ with a residue $1/\beta_{\bar l}$. The square root of this residue, $1/\sqrt{\beta_{\bar l}}$, signifies the renormalization of the wavefunction amplitude due to projection to layer $l$. \begin{figure}[t] \includegraphics[width=\linewidth]{fig3} \caption{Sketch of the asymmetric TBG system composed of two layers of twisted graphene with a dielectric applied beneath the bottom layer. Dirac cones are shown representing the different Fermi velocities in the two layers.} \label{fig:ATBGsketch} \end{figure} \subsection{Realization by Asymmetric Dielectric Screening} We now briefly discuss a mechanism through which different Fermi velocities could be generated for the two layers of a TBG system by exploiting the renormalization of the Fermi velocity via Coulomb interactions~\cite{Gonzalez_1994}. In particular we focus on a situation in which a dielectric layer is present only on one side of the TBG system, as sketched in Fig.~\ref{fig:ATBGsketch}, with $d_1$ and $d_2$ denoting the distances between the dielectric and each of the graphene sheets. For concreteness we take $d_1 < d_2$. We expect for such geometries $d_2 \approx 2d_1$. For wavevectors ${\bf k}$ with $|{\bf k}| \gg 2\pi/d_1 \equiv \Lambda_1$ the dielectric will have little effect, while for $|{\bf k}| \ll 2\pi/d_2 \equiv \Lambda_2$, dielectric screening is essentially the same for both layers. We model the difference in dielectric screening between the layers by an effective potential that applies only to the layer closer to the dielectric, of the form \begin{equation} \delta V(|{\bf k}|) = \begin{cases} \left(\frac1\kappa - \frac1{\kappa_0} \right)\frac{2\pi e^2}{ |{\bf k}|}, & \Lambda_2 < |{\bf k}| < \Lambda_1, \\ 0, &\text{otherwise}, \end{cases} \end{equation} where $\kappa_0$ is a dielectric constant due to the intrinsic screening of graphene applying to both layers, and $\kappa$ is the dielectric constant applied to the layer closer to the dielectric. \begin{figure}[t] \includegraphics[width=\linewidth]{fig4.pdf} \caption{Feynman diagram for the Asymmetric TBG self-energy. Because of the presence of the dielectric, an effective potential difference $\delta V({\bf k})$ contributes to corrections to the propagator in one of the layers. 
Here the solid line with the arrow is the bare propagator $G_0({\bf k}+{\bf q},i\omega)$.} \label{fig:feynman} \end{figure} Because $\delta V$ has a cutoff on the low momentum side, we can estimate its effect perturbatively through an exchange self-energy correction, $\Sigma$, to the Matsubara Green's function, \begin{equation} G^{-1}({\bf k},i\omega) = G_0^{-1}({\bf k},i\omega) - \Sigma({\bf k}), \label{eq:dyson} \end{equation} which to lowest order in $\delta V$ and in the zero-temperature limit (see Fig.~\ref{fig:feynman}) has the form~\cite{Tang_2018} \begin{equation}\label{eq:SigmaForm} \Sigma({\bf q}) = \int_{-\infty}^{\infty} \frac{d\hbar\omega}{2\pi}\int\frac{d^2 {\bf k}}{(2\pi)^2} \delta V(|{\bf k}|) G_0({\bf k}+{\bf q},i\omega). \end{equation} Here, \begin{equation} G_0(\mathbf{k}, i\omega) = -\frac1\hbar\frac{i\omega+ v_F{\bf k} \cdot {\boldsymbol\sigma}}{\omega^2 + v_F^2|\mathbf{k}|^2}, \end{equation} is the unperturbed Green's function, where we have set the chemical potential to zero so that we work near the charge-neutrality point, $v_F$ is the bare Fermi velocity, and $\boldsymbol{\sigma}=(\sigma_x,\sigma_y)$ are Pauli matrices. Since we are interested in the renormalization of the Fermi velocity, we consider small values of $|{\bf q}|$ in Eq.~\eqref{eq:SigmaForm} while the form of $\delta V$ guarantees that $|{\bf q}| \ll |{\bf k}|$ for non-vanishing values of the integrand. Then, integrating over $\omega$ first and expanding, up to $\mathcal{O}(q^2)$, $1/|{\bf k}+{\bf q}| \approx 1/|{\bf k}| - {\bf k}\cdot{\bf q}/|{\bf k}|^2$, we have \begin{subequations} \begin{align} \Sigma(\mathbf{q}) &= -\frac12\int\frac{d^2 {\bf k}}{(2\pi)^2} \delta V(|{\bf k}|)\frac{({\bf k}+{\bf q}) \cdot {\boldsymbol\sigma}}{|{\bf k}+{\bf q}|} \\ &\approx -\frac{{\bf q}\cdot\boldsymbol{\sigma}}{8\pi} \int \delta V(k) dk \\ &= -\left[\frac{e^2}4\left(\frac{1}{\kappa} - \frac{1}{\kappa_0} \right) \ln\frac{\Lambda_1}{\Lambda_2}\right] {\bf q}\cdot\boldsymbol{\sigma}. \end{align} \end{subequations} Using Eq.~\eqref{eq:dyson} one sees that the Green's function retains its non-interacting form, albeit with a renormalized velocity. Since this renormalization applies only to the layer closer to the dielectric substrate, the ratio of effective Fermi velocities for the two layers becomes \begin{equation} \frac{v_{1}}{v_{2}}=1 + \frac{(\kappa_0-\kappa)e^2}{4\kappa\kappa_0 \hbar v_F}\ln \frac{\Lambda_1}{\Lambda_2}. \end{equation} With $\Lambda_1/\Lambda_2 \approx 2$, a very large value of $\kappa$ (as would be appropriate, for example, for SrTiO$_3$~\cite{Veyrat_2020}), and a background dielectric constant of $\kappa_0=4$, one finds $v_{2}/v_{1} \sim 1.1$. The Fermi velocities of the two layers of TBG can thus be made different by $\sim 10\%$ due to such one-sided dielectric screening. Finally, note that in these perturbative corrections we are including a contribution that makes the two Fermi velocities different, but do not include higher order logarithmic corrections due to Coulomb interactions, which cause the Fermi velocities to acquire some momentum dependence~\cite{Kotov_12}. \begin{figure}[t] \includegraphics[width=\linewidth]{fig5} \caption{Sketch of the tetralayer graphene system. 
The angles between the two layers in the top and bottom bilayer are $\theta_{12}$ and $\theta_{34}$, respectively, and the angle between the two bilayers is $\theta_{23}$.} \label{fig:4TTGsketch} \end{figure} \section{Asymmetric Twisted Tetralayer}\label{sec:4TBG} A second platform which approximately realizes the asymmetric Dirac point models we consider is a graphene tetralayer with three independent twist angles in which the outer two are not too small, as sketched in Fig.~\ref{fig:4TTGsketch}. The idea is that at such relatively large twist angles, the main effect of the outer twists is to renormalize the Fermi velocities of the inner layers, which in turn would implement the asymmetric twisted bilayer discussed above at smaller inner twist angles. The resulting renormalized Fermi velocity asymmetry increases when $\theta_{12}$ and $\theta_{34}$ are significantly different (while neither one is too small). For example, taking $\theta_{12} = 2.5^\circ$ and $\theta_{34} = 10^\circ$, we find $v'_{3}/v'_{2} \approx 1.57$. We shall model this system systematically below and provide perturbative estimates as well as numerical results for its spectra. \subsection{Hamiltonian}\label{sec:4LHamiltonian} The Hamiltonian for the system as a whole may be written as \begin{equation} \label{fullH} H = \begin{bmatrix} H_\text{TBG}(\theta_{12}) & T_{23}({\bf r}) \\ T_{23}^{\dagger}({\bf r}) & H_\text{TBG}(\theta_{34}) \\ \end{bmatrix}, \end{equation} where $H_\text{TBG}(\theta_{ij})$ is the Hamiltonian~\cite{BM} for the bilayer $ij$ with twist angle $\theta_{ij}$, \begin{equation} H_\text{TBG}(\theta) = \begin{bmatrix} h_+ & T({\bf r}) \\ T^\dagger({\bf r}) & h_- \\ \end{bmatrix}, \end{equation} with $h_\pm = \hbar v \boldsymbol{\sigma}_{\pm\theta/2} \cdot \left(-i\boldsymbol{\nabla}\mp{\bf q}_0/2 \right)$ the Hamiltonian in each layer, $\boldsymbol{\sigma}_{\theta/2} = e^{-i\theta\sigma_z/4} \boldsymbol{\sigma} e^{i\theta\sigma_z/4}$, $T({\bf r}) = w\sum_{j = 0}^2 \exp(-i {\bf Q}_j \cdot {\bf r}) T_j$ as before, and $T_{23}$ implements tunneling between the two bilayers by coupling layers 2 and 3. Solving for the spectrum of the full Hamiltonian $H$ in general is very challenging, in particular because for an arbitrary set of twist angles the system is not spatially periodic. For our purposes we are interested in parameter regimes in which there is approximate spatial periodicity, and in which the twist angles $\theta_{12}$ and $\theta_{34}$ are exploited to create Dirac points with different velocities, which can be coupled together to form an approximate moir{\'e} lattice. We note that in principle there are deviations from perfect discrete translational symmetry because, for general twist angles, tunneling may be accompanied by scattering by many different discrete wavevectors. Ref.~\onlinecite{BM} demonstrated that for a single twisted graphene bilayer, the scattering involved is dominated by just two wavevectors and their linear combinations, so that the resulting bands fall in a two-dimensional Brillouin zone. In the four-layer systems we consider, the outer two layers have relatively large twist angles compared to their neighbors, so that their single particle states near zero energy are well-approximated by a single plane wave. This allows us to adopt the BM strategy for tunneling between the two middle layers. We discuss in more detail below the justification for this, and in Appendix~\ref{app:incomm} estimate the effect of retaining plane wave states not included in our basic approach. 
Indeed, we find their effect to be quite small provided the outer twist angles are not too small. In general, the Hamiltonians $H_\text{TBG}(\theta_{12})$ and $H_\text{TBG}(\theta_{34})$ in Eq.~\eqref{fullH} host Dirac points associated with each of their valleys, and the two degenerate states of those Dirac points reside mostly in one of the two members of the bilayer. Out of the four Dirac points hosted by a single valley of the two bilayers, we focus on those with the most weight in layers 2 and 3, respectively, and model the diagonal components of Eq.~\eqref{fullH} using a ${\it k \cdot p}$ approximation. Note that the remaining two Dirac points are remote in wavevector from low energy states in the opposite bilayer, and so are largely decoupled from states of the two Dirac points we retain. This yields a simple linearly dispersing mode near each Dirac point with some Fermi velocity, as well as the wavefunctions associated with its eigenstates. We can then use these dispersive states to create a model for tunneling between the bilayers, as we now explain. \subsection{Interbilayer Tunneling} \label{subsection: Interbilayer Tunneling} To formulate the interbilayer tunneling, in analogy with Ref. \onlinecite{BM} we begin by calculating the matrix element $\langle {\bf k} \mu | H | {\bf k}' \mu' \rangle$ where ${\bf k}$ is the wavevector for an electron state and $\mu$ and $\mu'$ are indices labeling positive and negative energy states of a Dirac cone in bilayer 12 and 34, respectively. To compute these matrix elements we need wavefunctions for the states in the uncoupled bilayers, which in the BM model take the approximate form \begin{equation} \psi^{(12),\mu}_{\bf k} ({\bf r}) \propto \sum_{{\bf g}} e^{i{\bf g} \cdot {\bf r}} \begin{bmatrix} a^\mu_1({\bf g}) \\ b^\mu_1({\bf g})e^{i({\bf g}+{\bf k})\cdot{\boldsymbol \tau}_1} \\ a^\mu_2({\bf g})e^{-i({\bf g}+{\bf k})\cdot{\boldsymbol \tau}_2} \\ b^\mu_2({\bf g}) \\ \end{bmatrix} e^{i {\bf k} \cdot {\bf r}}, \label{eq:wavefunctions} \end{equation} for the 12 bilayer, and similarly for the 34 bilayer. In this expression, $a_j^{\mu}$ denotes an amplitude on the $A$ sublattice of sheet $j = 1,2$, and $b_j^{\mu}$ is the corresponding amplitude on the $B$ sublattice. The wavevectors ${\bf g}$ depend on the twist angle and are in general different for the 12 and 34 bilayers. The values of $a_j^{\mu}({\bf g})$ and $b_j^{\mu}({\bf g})$ are determined by numerically solving the BM model for the isolated twisted bilayer. Finally, the vectors ${\boldsymbol \tau}_{1}$ and ${\boldsymbol \tau}_{2}$ denote the separations of the different sublattice atoms within the unit cells of sheets 1 and 2 of the bilayer, respectively. Note that for the $12$ and $34$ bilayers, our tunneling matrices are those in Ref.~\onlinecite{Balents}, which correspond to choices of ${\boldsymbol \tau}_{1}$ and ${\boldsymbol \tau}_{2}$ that lead to AA stacking in the zero twist angle limit. Within this model, the tunneling matrix element takes the form \begin{align} \langle {\bf k} \mu | H | {\bf k}' \mu' \rangle &= \frac{\Omega_c}{\Omega}\sum_{{\bf R},{\bf R}'} t({\bf R} - {\bf R}') \sum_{{\bf g}, {\bf g}'} f^{\mu\mu'} ({\bf g}, {\bf g}',{\bf k},{\bf k}') \nonumber \\ &\quad\times e^{-i({\bf k}+{\bf g}) \cdot {\bf R}} e^{i({\bf k}'+{\bf g}') \cdot {\bf R}'} \label{matrix element} \end{align} where ${\bf R},{\bf R}'$ are Bravais lattice sites for sheets 2 and 3, respectively, $\Omega_c$ is a primitive unit cell area for the graphene Bravais lattice, and $\Omega$ is the system area. 
A tunneling amplitude $t({\bf R}-{\bf R}')$ has been introduced, which is assumed to depend only on the lateral separation ${\bf R}-{\bf R}'$ between points in different sheets~\cite{BM}, and \begin{widetext} \begin{align} f^{\mu\mu'} ({\bf g}, {\bf g}',{\bf k},{\bf k}') = \left[a^\mu_1({\bf g}), b^\mu_1({\bf g})e^{i({\bf g}+{\bf k})\cdot{\boldsymbol \tau}_1}, a^\mu_2({\bf g})e^{-i({\bf g}+{\bf k})\cdot{\boldsymbol \tau}_2} , b^\mu_2({\bf g}) \right]^* M \begin{bmatrix} a^{\mu'}_3({\bf g}') \\ b^{\mu'}_3({\bf g}')e^{i({\bf g}'+{\bf k}')\cdot{\boldsymbol \tau}_3} \\ a^{\mu'}_4({\bf g}')e^{-i({\bf g}'+{\bf k}') \cdot{\boldsymbol \tau}_4} \\ b^{\mu'}_4({\bf g}') \end{bmatrix}. \end{align} \end{widetext} Here ${\bf g}$ and ${\bf g}'$ are reciprocal lattice vectors for the 12 and 34 bilayers, respectively, and $M$ is a $4\times4$ matrix that describes the tunneling between bilayers 12 and 34. Following Ref. \onlinecite{BM}, we express the amplitude $t$ in terms of its Fourier transform, and after summing over the lattice sites one finds \begin{align} \langle {\bf k} \mu | H | {\bf k}' \mu' \rangle = \sum_{{\bf G}, {\bf G}'}\sum_{{\bf g}, {\bf g}'}\ & t({\bf k}+{\bf g}+{\bf G}) f^{\mu\mu'} ({\bf g}, {\bf g}',{\bf k},{\bf k}') \nonumber \\ &\quad \times \delta_{{\bf k}+{\bf g}+{\bf G}, {\bf k}' + {\bf g}'+{\bf G}'}. \end{align} Here the vectors ${\bf G}$ and ${\bf G}'$ correspond to the reciprocal lattice vectors of the two inner graphene sheets, 2 and 3, respectively. We next adopt two simplifications which limit the values of twist angles for which our analysis gives a reasonable approximation. Firstly, we note that when the angles $\theta_{12}$ and $\theta_{34}$ are not too small, the overlaps $f^{\mu \mu'} ({\bf g}, {\bf g}',{\bf k},{\bf k}')$ are sharply peaked at ${\bf g}={\bf g}'=0$. This is discussed in more detail in Appendix~\ref{app:wave}. Exploiting this feature allows one to set ${\bf g}={\bf g}'=0$. This is a crucial simplification because retaining further values of ${\bf g},{\bf g}'$ spoils the spatial periodicity of the system, rendering it a quasicrystal~\cite{Kaxiras}. The second simplification is commonly made for the BM model. In the expected situation where the distance between layers is larger than the graphene lattice constant, $t({\bf q})$ vanishes very rapidly for $|{\bf q}|$ larger than the inverse of the spacing between the two sheets. Moreover, we are interested in the bands near zero energy, for which the values of ${\bf k}$, ${\bf k}'$ lie in the vicinity of Dirac points of the 2 and 3 layers. We thus focus on values of ${\bf k}+{\bf G} = {\bf k}'+{\bf G}'$ in the vicinity of ${\bf K}_2$, the $K$ point of layer 2, which (assuming small $\theta_{23}$) are also near ${\bf K}_3$, a $K$ point of layer 3. On the scale of the Brillouin zone of a single graphene sheet, the wavevectors coupled together by $\langle {\bf k} \mu | H | {\bf k}' \mu' \rangle$ in the low energy bands are very close together, so we ignore the small wavevector variations in $t({\bf k}+{\bf G})$, and retain only values of ${\bf G}$ such that ${\bf k}+{\bf G}$ is near a Dirac point for the two inner layers, and for which $|{\bf K}_2+{\bf G}|$ has the smallest possible value. There are three such choices for ${\bf G}$, and for all of them $t({\bf K}_2+{\bf G})$ has the same value $t$; other choices of ${\bf G}$ yield values for $t({\bf k}+{\bf G})$ which are negligibly small. 
Thus in our reciprocal lattice sum we retain only ${\bf G}={\bf G}_{0,1,2}$, with ${\bf G}_0 = 0, {\bf G}_1 = k_D(-\frac32,\frac{\sqrt3}2), {\bf G}_2= k_D(\frac32,\frac{\sqrt3}2) $. In other words, we take $t({\bf k}+{\bf G}) \approx t({\bf G})$. Furthermore, because the reciprocal lattice vectors of a single sheet are very large compared to the scale of a small-angle twisted bilayer mBZ, for each ${\bf G}_j, j=0,1,2$, we retain only a single ${\bf G}'_j={\bf G}_j+{\bf Q}_j$: the other combinations couple together states with very large single particle energy differences, which will have little effect on the bands near zero energy. A sketch of the geometry with the relevant wavevectors is shown in Fig.~\ref{fig:geometry}. \begin{figure}[t] \includegraphics[width=\linewidth]{fig6} \caption{Geometry of reciprocal lattice vectors relevant to tunneling matrix elements in the tetralayer graphene system. The angle between the two bilayers ($\theta_{23}$) is enlarged for clarity; for the small angles $\theta_{23}$ we consider in this work, $|\mathbf{G}_i| \gg |\mathbf{Q}_j|$ for all $i,j$. } \label{fig:geometry} \end{figure} With this reasoning, the tunneling matrix element we adopt takes the form \begin{equation} \langle{\bf k}\mu|H|{\bf k}'\mu'\rangle = \frac{t}{\Omega} \sum_{j} f^{\mu\mu'}(0,0,{\bf K}_2+{\bf G}_j,{\bf K}_3+{\bf G}_j^{\prime}) \delta_{{\bf k}-{\bf k}',{\bf Q}_j}. \end{equation} The resulting system is now formally very similar to the BM model. Finally we must choose a concrete form for the matrix $M$ entering the $f^{\mu\mu'}$ factors. To do this, we first note that tunneling between remote sheets is much smaller in amplitude than that between neighboring sheets, so we retain non-zero matrix elements only for the $2 \times 2$ block that connects sheets 2 and 3. A natural choice is then $M_{23} = \mathbb{1}+\sigma_x$, since there is no distinction between atoms of the two sublattices in graphene beyond their locations in the unit cell, which are explicitly taken into account in the wavefunctions, Eq.~\eqref{eq:wavefunctions}. With this choice, we arrive at our model for tunneling between the bilayers, \begin{equation} \langle {\bf k} \mu | H | {\bf k}' \mu' \rangle = \dfrac{t}{\Omega} \sum_j f^{\mu\mu'}_j \delta_{{\bf k}-{\bf k}', {\bf Q}_j}, \label{MatrixElement} \end{equation} where \begin{subequations}\label{eqn: fj} \begin{align} f^{\mu\mu'}_{0}&=[a_2^{\mu}(0)+b_2^{\mu}(0)]^*[a_3^{\mu'}(0)+b_3^{\mu'}(0)],\\ f^{\mu\mu'}_{1}&=[a_2^{\mu}(0)e^{-i\phi}+b_2^{\mu}(0)]^*[a_3^{\mu'}(0)+b_3^{\mu'}(0)e^{i\phi}],\\ f^{\mu\mu'}_{2}&=[a_2^{\mu}(0)e^{i\phi}+b_2^{\mu}(0)]^*[a_3^{\mu'}(0)+b_3^{\mu'}(0)e^{-i\phi}], \end{align}\end{subequations} with $\phi=2\pi/3$. The constants $a_2^{\mu}(0), b_2^{\mu}(0), \cdots$ are found by numerically obtaining the bilayer wavefunction by diagonalizing the Bistritzer-MacDonald model Hamiltonian~\cite{BM} for the individual 12 and 34 bilayers at their Dirac points. Thus, in terms of the matrices $f_j$ defined in Eqs.~\eqref{eqn: fj}, we have \begin{equation} T_{23}(\mathbf{r}) = w \sum_{j = 0}^2 f_j \exp(-i\mathbf{Q}_j \cdot \mathbf{r}). \label{Tmumu} \end{equation} Note that in the limit where layers 2 and 3 are coupled to one another but not to layers 1 and 4, the matrices $f_{j}$ become precisely the same as the tunneling matrices $T_{j+1}$ in Ref. \onlinecite{BM}, which differ slightly from what was used for the (12) and (34) bilayers as described above \cite{Balents}. 
This corresponds to adopting values of ${\boldsymbol \tau}_{i,j}$, the displacements of the two atoms in sheets $i$ and $j$ that are tunnel coupled, which differ in the two cases: in the zero twist angle limit, the 12 and 34 displacements correspond to AA stacking, while in the 23 case they correspond to AB stacking. However for non-zero twist angles, the local alignment varies among all possibilities, so that other possible choices for untwisted layer alignment should not qualitatively change our results. \subsection{Perturbation Theory}\label{section:PT} In this section we use a low-energy perturbation theory in the interlayer tunneling to estimate the Fermi velocity at a Dirac point of the mBZ in our tetralayer model, and look for situations in which it vanishes, as an indicator for the flat bands~\cite{BM}. For the tetralayer system, our starting Hamiltonian has the form \begin{equation} \label{Full Hamiltonian} H = \begin{bmatrix} h_{1} & T_{12} & 0 & 0 \\ T_{12}^\dagger & h_2 & T_{23} & 0 \\ 0 & T_{23}^\dagger & h_3 & T_{34} \\ 0 & 0 & T_{34}^\dagger & h_4 \\ \end{bmatrix} \end{equation} where $h_l$ are the Hamiltonians for layer $l$ and $T_{ll'}$ are the tunneling matrices between layers $l$ and $l'$. Projecting the resolvent operator into the subspace of layers 2 and 3, we can write energy-dependent effective Hamiltonians in the vicinity of the Dirac points at ${\bf K}_2$ and ${\bf K}_3$ respectively in the forms \begin{align} h_2^\text{eff}(E) &= \tilde h_2(E) + T_{23}\left[ E - \tilde h_3(E) \right]^{-1} T_{23}^\dagger, \\ h_3^\text{eff}(E) &= \tilde h_3(E) + T_{23}^\dagger \left[ E - \tilde h_2(E) \right]^{-1} T_{23}, \end{align} where $\tilde h_2 = h_2 + T_{12}^\dagger g_1 T_{12}$ and $\tilde h_3 = h_3 + T_{34}g_4T_{34}^\dagger$, with $g_1(E) = (E - h_1)^{-1}$ and $g_4(E) = (E - h_4)^{-1}$ the resolvents of the outer layers. Following the derivation in Sec.~\ref{sec:per} we may write \begin{align} \tilde h_{2}(E) &\approx (1-\beta_1)E + \beta_1 h'_{2} \\ \tilde h_{3}(E) &\approx (1-\beta_4) E + \beta_4 h'_{3}, \end{align} where $h'_{2} = \hbar v'_{2} \boldsymbol\sigma \cdot {\bf k}_2$ for ${\bf k}_2$ measured from ${\bf K}_2$ and $h'_{3} = \hbar v'_{3} \boldsymbol\sigma \cdot {\bf k}_3$ for ${\bf k}_3$ measured from ${\bf K}_3$. Here, $\beta_l \equiv [1+3(1+u^2)\alpha_l^2]$ and the renormalized Fermi velocities in layers 2 and 3 are $v'_{2} = (1-3\alpha_1^2) v / \beta_1$, $v'_{3} = (1-3\alpha_4^2) v / \beta_4$. This yields \begin{align} h_2^\text{eff}(E) &\approx (1-\beta_1) E + \beta_1 h_2' + \beta_4^{-1} T_{23} g'_3(E) T_{23}^\dagger, \\ h_3^\text{eff}(E) &\approx (1-\beta_4) E + \beta_4 h_3' + \beta_1^{-1} T_{23}^\dagger g'_2(E) T_{23}, \end{align} where $g'_2(E) = \big(E - h'_2 \big)^{-1}$ and $g'_3(E) = \big(E - h'_3 \big)^{-1}$. The analysis may be straightforwardly generalized to examine situations in which the tunneling amplitude between layers 2 and 3 is different from that between the other layers. Assuming $T_{23}$ has the same form as the tunneling in the BM model with a multiplicative factor $z$ and solving for the eigenvalue $E$ self-consistently, we find \begin{align} v^\text{eff}_2 &= \frac{\beta_1\beta_4 v'_2 - 3z^2 (\alpha'_3)^2 v'_3}{z\beta_1\beta_4 + 3(1+u^2)(\alpha'_3)^2}, \\ v^\text{eff}_3 &= \frac{\beta_1\beta_4 v'_3 - 3z^2 (\alpha'_2)^2 v'_2}{z\beta_1\beta_4 + 3(1+u^2)(\alpha'_2)^2}, \end{align} with $\alpha'_2 = w/(\hbar k_{\theta_{23}} v'_{2})$ and $\alpha'_3 = w/(\hbar k_{\theta_{23}} v'_{3})$. 
Both of these effective Fermi velocities vanish when \begin{equation} \frac{zw/\sqrt{\beta_1\beta_4}}{\hbar k_{\theta_{23}} \sqrt{v'_2 v'_3} } = \frac1{\sqrt{3}}. \label{eq: PT_z} \end{equation} The structure of this condition can be understood intuitively as follows. The factor $z$ is the ratio of the \emph{bare} tunneling amplitude between layers 2 and 3 to $w$, which is the tunneling amplitude in the bilayers 12 and 34. The effective tunneling between layers 2 and 3 is modified by the wavefunction renormalization factors $1/\sqrt{\beta_1}$ and $1/\sqrt{\beta_4}$, which generically reduce it due to the projection of the wavefunctions to layers 2 and 3, respectively. Because of the renormalizations, the final magic angle is dependent on all three twist angles. The dependence on $\theta_{23}$ is explicit, and by varying $\theta_{12}$ or $\theta_{34}$ one will change $v'_{2}$ and $v'_{3}$, respectively. We note that Eq.~\eqref{eq: PT_z} may be rewritten as \begin{equation} \sqrt{\alpha^2_1 + \alpha^2_4 + z^2\alpha^2_{23}} = \dfrac{1}{\sqrt{3}}, \label{eq: PT_noz} \end{equation} where $\alpha_{23}=w/(\hbar k_{\theta_{23}}v_F)$, with $v_F$ the Fermi velocity of a single graphene sheet. This magic-angle condition holds for both positive and negative twist angles. We observe that the Fermi velocity drops to zero within the perturbative analysis for both Dirac points simultaneously, so that one does not end up with two closely spaced angles with approximately flat bands. Given that the Fermi velocities of the two uncoupled Dirac points are different, it is not obvious that this should happen, and as discussed in Appendix~\ref{app:incomm}, inclusion of incommensuration effects may change this result. \begin{figure} \includegraphics[scale=0.315]{fig7} \caption{Band spectra for twisted tetralayer graphene for combinations of angles that (a) do not yield flat bands; (b), (c), and (d) support flat bands. For sufficiently large $\theta_{12}$ and $\theta_{34}$ the magic $\theta_{23}$ approaches $1.08^\circ$, the magic angle of TBG.} \label{fig:spectra} \end{figure} \subsection{Numerical Results}\label{sec:numerics} We begin by showing numerical band structure results for a representative triplet of twist angles $\theta_{12}$, $\theta_{23}$ and $\theta_{34}$ in Fig. \ref{fig:spectra}(a). The calculations are performed by expanding Eq.~\eqref{HATBG} in plane waves, with $h_1$ and $h_2$ taken as the ${k}\cdot {p}$ approximations to the Hamiltonians near the relevant Dirac points of the 12 bilayer and 34 bilayer, respectively (obtained by numerically solving the BM model for each of these bilayers individually), and the off-diagonal tunneling operator is given by Eq.~\eqref{Tmumu}. In all these calculations, the tunneling parameter $w$ is taken to be 110 meV between each pair of neighboring layers, which is equivalent to $z=1$ in the perturbative analysis above. Notice that because $\theta_{12} \not = \theta_{34}$ there is asymmetry between the two valleys. Nevertheless, magic angles still occur in our model of the twisted tetralayer graphene system, and they manifest themselves in a qualitatively similar way to TBG, see Fig. \ref{fig:spectra}(b) and \ref{fig:spectra}(c). \begin{figure} \includegraphics[width=\linewidth]{fig8} \caption{Locations of the magic angles for twisted tetralayer graphene at fixed $\theta_{12} = 10^\circ$ and $\theta_{34}\in(3^\circ, 10^\circ), \theta_{23}\in(1^\circ, 2^\circ)$. Locations where the bandwidth is less than 10 meV are shown in white. 
The pink line shows the theoretical prediction given by Eq.~\eqref{eq: PT_noz}.} \label{fig:bandwidth10} \end{figure} An interesting feature of this model is that, analogously to the unequal Fermi velocity system discussed above, the system hosts flat bands for $\theta_{23}$ at different ``magic'' values, depending on the angles $\theta_{12}$ and $\theta_{34}$. This is in contrast to TBG, for which the twist angle for the primary magic angle is fixed at $\theta \approx 1.08^\circ$. Figures \ref{fig:spectra}(b) and \ref{fig:spectra}(c) show examples of this: the combinations of the twist angles are different for the pairs of figures, yet both sets of parameters produce flat bands. In general, magic angles will occur when $\theta_{23}$ is somewhat larger than the TBG magic angle, but for large $\theta_{12}$ and $\theta_{34}$ the first magic angle for $\theta_{23}$ converges to the TBG magic angle $1.08^{\circ}$. A bandstructure corresponding to this situation is shown in Fig.~\ref{fig:spectra}(d). Fig. \ref{fig:bandwidth10} shows a plot of the bandwidth of the lowest energy bands for the special case where $\theta_{12} = 10^\circ$. Here we define the bandwidth as half the gap between the states of positive and negative energy closest to zero at the $\Gamma_M$ point ($\Gamma$ point of the mBZ), which typically has the widest separation between the two flat bands. As can be seen from the plot, the bandwidth is minimized for a continuum of twist angles. An important feature of this system in general, and in this example in particular, is the perfect symmetry under the swap $\theta_{12} \leftrightarrow \theta_{34}$: Fig. \ref{fig:bandwidth10} appears identical when $\theta_{34}$ is fixed at $10^\circ$ and $\theta_{12}$ is varied over the same region of the parameter space. More generally, we find that when $\theta_{12}$ and $\theta_{34}$ are not too small, the values of the angles at which we find flat bands adhere to Eq.~\eqref{eq: PT_noz} relatively well. An interesting observation about this behavior is that it is rather similar to that found in twisted trilayer systems, for example in Ref.~\onlinecite{Kaxiras}. With a relatively large twist angle $\theta_{12}$, the Dirac point coming from this bilayer has little renormalization, so that it can be viewed as coming from an isolated graphene sheet. The two relevant twist angles are then $\theta_{23}$ and $\theta_{34}$. One can see in Fig.~\ref{fig:bandwidth10} that for large $\theta_{34}$ the flat band occurs when $\theta_{23}$ approaches the magic angle of a single twisted bilayer, while for smaller values of $\theta_{34}$, we find the flat band condition moves to larger values of $\theta_{23}$, precisely as found in Ref.~\onlinecite{Kaxiras}. Moreover, in the trilayer one loses the flat band behavior when both angles are smaller than $\sim 3^{\circ}$, which is precisely the situation in which we find the results of our own approach become unreliable. Fig.~\ref{fig:bandwidth3} shows corresponding results for a situation in which the twist angle which is being held constant is much smaller than in Fig.~\ref{fig:bandwidth10}. As a result, the perturbative prediction is less faithful in matching the numerics. 
This is unsurprising since we expect our method to become increasingly unreliable as the two outer twist angles are made smaller and smaller. \begin{figure} \includegraphics[width=\linewidth]{fig9} \caption{Locations of the magic angles for twisted tetralayer graphene at fixed $\theta_{12} = 3^\circ$ and $\theta_{34}\in(3^\circ, 10^\circ), \theta_{23}\in(1^\circ, 2^\circ)$. Locations where the bandwidth is less than 10 meV are shown in white. The pink line shows the theoretical prediction given by Eq.~\eqref{eq: PT_noz}.} \label{fig:bandwidth3} \end{figure} \section{Summary and Discussion}\label{sec:sum} In conclusion, we have introduced a model of twisted bilayer graphene in which the Fermi velocities of the Dirac points of each layer may be different. We have demonstrated that generically this asymmetry does not spoil the ``magic'' flat band phenomenon. We argued that such models are relevant for systems with asymmetric screening, for which there are unequal interaction renormalizations of the Fermi velocities, and for tetralayer systems, when the main effect of the outermost layers is a slowing of the Fermi velocities of the Dirac points associated with the two inner layers. This situation is realized when the outermost twist angles, $\theta_{12}$ and $\theta_{34}$, are not too small. A perturbative analysis for the Fermi velocity of Dirac points of the fully coupled systems explains the locations of the flat bands under certain conditions, and interestingly shows that for both Dirac points this vanishes at the same twist angle ($\theta_{23}$ for the tetralayer). Our numerical results also support the existence of a single minimum bandwidth as a function of twist angle for this system. For the tetralayer system, open questions remain on the impact of the formal incommensuration between the moir{\'e} lattices of the outer pairs of layers relative to the moir{\'e} lattice associated with the inner pair. In Appendix~\ref{app:incomm} we study the impact of retaining a subset of the incommensurate reciprocal lattice vectors $\bf{g}$ and $\bf{g}'$ that define the outer moir{\'e} lattices. Specifically we use degenerate perturbation theory to calculate the correction to the energy (accurate to first-order in the tunneling amplitude) at the $\Gamma_M$ and $M_M$ points of the lowest energy bands to obtain an estimate of their bandwidth. The analysis indicates that the change in bandwidth is very small for most twist angles, but can become significant at the magic angles, which is perhaps not surprising, as the degeneracy is very nearly exact without the extra plane wave states coupled in. Interestingly, we find within our estimation procedure that the magic angle breaks up into {\it two} closely spaced angles of maximal flatness, suggesting that the single magic angle we observe even with differing Dirac point Fermi velocities may not survive precisely in the tetralayer realization of this system. Beyond this, we find that when the outer twist angles ($\theta_{12}$, $\theta_{34}$) are small enough, the change in bandwidth becomes sufficiently large as to indicate that $\mathbf{g}$ and $\mathbf{g}'$ with larger magnitudes should not be ignored (see Fig.~\ref{fig:f} in Appendix~\ref{app:wave} and related discussion). For larger outer angles we believe our simpler treatment (in which incommensuration is ignored) correctly predicts that this system still hosts magic angles, and gives a good estimate of what these angles are. 
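To illustrate how the perturbative condition, Eq.~\eqref{eq: PT_noz}, can be used in practice, the short script below solves it for the magic $\theta_{23}$ given the two outer angles. It is a minimal sketch rather than a substitute for the full numerics: the graphene lattice constant $a = 0.246$ nm, the monolayer Fermi velocity, and the relation $k_D = 4\pi/3a$ are assumed inputs here, so the absolute angles it returns depend on those choices.
\begin{verbatim}
import numpy as np

hbar = 6.582e-16           # eV*s
v_F = 0.88e6               # m/s, assumed monolayer Fermi velocity
w = 0.110                  # eV, interlayer tunneling amplitude
a = 0.246e-9               # m, graphene lattice constant (assumed)
k_D = 4 * np.pi / (3 * a)  # magnitude of the Dirac wavevector

def alpha(theta_deg):
    # alpha = w / (hbar * v_F * k_theta), with k_theta = 2 k_D sin(theta/2)
    k_theta = 2 * k_D * np.sin(np.radians(theta_deg) / 2)
    return w / (hbar * v_F * k_theta)

def magic_theta23(theta12, theta34, z=1.0):
    # Solve alpha_1^2 + alpha_4^2 + z^2 alpha_23^2 = 1/3 for theta_23.
    a23_sq = (1.0 / 3.0 - alpha(theta12)**2 - alpha(theta34)**2) / z**2
    if a23_sq <= 0:
        return None        # outer angles too small: no solution
    k_theta23 = w / (hbar * v_F * np.sqrt(a23_sq))
    return np.degrees(2 * np.arcsin(k_theta23 / (2 * k_D)))

for t12, t34 in [(10.0, 10.0), (10.0, 3.0), (3.0, 3.0)]:
    print(t12, t34, magic_theta23(t12, t34))
\end{verbatim}
Consistent with the discussion above, the predicted magic $\theta_{23}$ approaches the isolated-TBG value for large outer angles and moves upward as $\theta_{12}$ or $\theta_{34}$ is reduced.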
One possible direction for future work is to treat the systems discussed in this work using a tight-binding model in order to investigate how well the continuum model approximation holds. For the ATBG system, this can be accomplished with a twisted bilayer graphene system where the nearest neighbor tunneling is different in the two layers. An application to the tetralayer system is less obvious, because one needs commensuration of all four lattices to define a unit cell. Finding sets of such commensurate angles represents an interesting challenge. Because of the change in Fermi velocities, the magic angles of the system acquire a certain level of tunability. In principle this broadens the set of circumstances under which interaction effects can lead to collective phases such as Mott insulators and superconductivity, and possibly others with broken spin or valley symmetries. In this sense the system we have studied in this work adds to the possible richness of physics in twisted graphene systems. \section{Acknowledgements} This work is supported in part by NSF Grant Nos. DMR-1350663, DMR-1914451, and ECCS-1936406, by the US-Israel Binational Foundation, and the Research Corporation for Science Advancement through a Cottrell SEED grant. The authors thank the Aspen Center for Physics (NSF Grant No. PHY-1607611) where part of this work was done. \nocite{*}
\section{Introduction} Coronary artery disease (CAD), as a common cardiovascular disease, has been a leading threat to human health around the world\cite{luo2020dynamically,luo2020commensal}. It is caused by atherosclerotic plaques in the main blood supply branches of coronary artery trees, inducing the stenosis or blockage of blood vessels and resulting in the symptoms of heart disease, such as myocardial ischemia, angina pectoris and heart failure\cite{mendis2015organizational}. Coronary CT angiography (CCTA) is a practical non-invasive vascular imaging technique, which plays an important role in the perioperative period of interventional treatment of CAD. Analyzing grades of stenosis accurately through the pathological information in CCTA scans is essential for clinical applications related to CAD\cite{dewey2007coronary}. \begin{figure} \includegraphics[width=\textwidth]{1.png} \caption{The HU values of non-calcified plaques are similar to those of adjacent tissues (a), and the types of plaques are complicated while their shapes vary from plaque to plaque (b).} \label{fig_CAD} \end{figure} In recent years, computer vision technology has been used to detect coronary artery stenosis through CCTA scans of patients to assist clinicians in the diagnosis of CAD. However, it is challenging to realize the automatic diagnosis of CAD, because the contrast of the HU values among adjacent tissues and structures is low. Besides, the types of plaques that cause coronary artery stenosis are complicated, and there is no shape feature that can be used to strictly describe plaques, as shown in Fig. \ref{fig_CAD}. Originally, most of the proposed methods were semi-automatic detection methods that required a lot of manual interaction\cite{kiricsli2013standardized}. Later, several machine-learning-based methods were proposed to describe changes in coronary lumens to quantify stenosis\cite{sankaran2016hale}. To a certain extent, these methods demonstrated that the geometric information of the coronary lumen is of considerable value in clinical medicine. Several deep-learning-based methods have been reported for the automatic detection of coronary artery stenosis in recent related literature\cite{shen2017deep,shin2016deep}. These works mainly employed models combining a convolutional neural network (CNN) and a recurrent neural network (RNN) to complete the task. Zreik et al. achieved the detection of coronary artery plaque and stenosis with a recurrent convolutional neural network (RCNN). In particular, they first reconstructed multiplanar reformatted (MPR) images based on the centerlines of coronary arteries. Next, they employed a 3D-CNN to extract features from small volumes and performed the classification for the two tasks using an RNN\cite{zreik2018recurrent}. Denzinger et al. improved the network structure of the RCNN and predicted significant stenosis (i.e. luminal narrowing $> 50\%$) with a combination of the deep learning approach and radiomic features\cite{denzinger2019coronary}. Also, Tejero-de-Pablos et al. extracted features from five views of coronary arteries and employed a Fisher vector to predict the classification probability of significant stenosis according to the features of the different views\cite{tejero2019texture}. Although an RNN can, to a certain extent, capture the dependencies between semantic features in a single direction, the global context of entire coronary artery branches is hardly considered in related work on detecting coronary artery stenosis. 
To ensure that the model can learn the semantic features of entire coronary artery branches before local coronary artery stenoses are detected, we introduce Transformer into our method to analyze MPR images from a different perspective than previous work. Transformer is a type of deep neural network based on self-attention modules\cite{vaswani2017attention}, which was invented to solve related tasks in the natural language processing (NLP) field\cite{han2020survey}. Transformer employs an attention mechanism to capture global context information and establish long-distance dependencies on the target, thereby extracting more valuable features. In recent years, researchers have continuously tapped its application potential in computer vision tasks\cite{carion2020end,wang2018non}. In this work, we propose a novel Transformer Network (TR-Net) to detect significant stenosis in MPR images. The proposed TR-Net combines CNN and Transformer. As shown in Fig. \ref{fig_CNN_Transformer}, the former has a relatively large advantage in extracting local semantic information, while the latter can more naturally associate global semantic information. We employ a shallow 3D-CNN to extract local semantic features of coronary arteries. The shallow CNN enables the model to obtain the semantic information of each position in an MPR image while ensuring the efficiency of our model. Then, Transformer encoders are used to analyze the feature sequences, which can mine the underlying dependence of local stenosis on each position of a coronary artery. Our main contributions can be summarized as follows: (1) To achieve a more accurate diagnosis of coronary artery stenosis, we introduce Transformer to solve this challenging problem. To the best of our knowledge, this is the first attempt to employ a Transformer structure for the task of detecting coronary artery stenosis. (2) The proposed TR-Net can effectively integrate local and global information to detect significant stenosis. Experimental results illustrate that the proposed method has higher accuracy than state-of-the-art methods. \begin{figure} \includegraphics[width=\textwidth]{2.png} \caption{CNN for obtaining local semantic information and Transformer for obtaining global semantic information.} \label{fig_CNN_Transformer} \end{figure} \section{Method} In this section, we detail the proposed TR-Net for significant stenosis detection. Fig. \ref{fig_TR-Net} illustrates the architecture of TR-Net. TR-Net mainly consists of two components. One part is the 3D-CNN used to extract local semantic features at different positions of a coronary artery. The other part is the Transformer structure used to associate the local feature maps of each position, analyze the dependence between different positions, and classify significant stenosis at each position, as shown in Fig. \ref{fig_CNN_Transformer}. \begin{figure} \includegraphics[width=\textwidth]{3.png} \caption{Transformer network (TR-Net)} \label{fig_TR-Net} \end{figure} \subsection{Semantic feature extraction for local cubic volumes} For any given locality of a coronary artery, the local image detail is an indispensable reference for doctors when diagnosing CAD. This is also a prerequisite for our method to detect coronary artery stenosis. To efficiently extract local semantic features of coronary arteries, we design a shallow 3D-CNN as the first part of our method. 
The shallow CNN not only helps prevent overfitting but also improves the efficiency of the model. The input of our method is a coronary artery MPR image. We employ the voxels on the centerline of the coronary artery as center points to select cubic volumes from the MPR image, and the side length of the cubic volumes is $N$ voxels. Then, we arrange these cubic volumes into a sequence of length $L$ according to the topological relationship of the coronary artery centerline. The semantic features of the cubic volumes extracted by the 3D-CNN are treated as the input of the Transformer structure. The structure of the 3D-CNN to extract semantic features of cubic volumes in volume sequences is inspired by \cite{zreik2018recurrent}, as shown in Fig. \ref{fig_3DCNN_encoder}. The 3D-CNN consists of four sequentially connected substructures. Each substructure includes a convolutional layer with a convolution kernel size of $3\times3\times3$, a rectified linear unit (ReLU) and a $2\times2\times2$ max-pooling layer. The convolutional layer has 16 filters in the first substructure; in each of the remaining substructures, the number of filters is twice that of the previous one. The feature maps obtained by the 3D-CNN are defined as $x\in \mathbb{R}^{C\times H\times H\times H}$, where $C$ and $H$ respectively indicate the number of filters and the size of the feature maps. Since Transformer originates from NLP and needs to take a sequence of 1D vectors as input, we flatten the feature maps into 1D vectors and arrange them into a sequence as the feature embeddings of Transformer. \begin{figure} \includegraphics[width=\textwidth]{4.png} \caption{(a) The shallow 3D-CNN. (b) Transformer encoder.} \label{fig_3DCNN_encoder} \end{figure} \subsection{Transformer structure for global sequence analysis} According to clinical experience, a coronary artery branch may have multiple plaques, and each plaque affects the blood flow velocity in the patient's coronary lumen. Therefore, analyzing the potential relationship between plaques at different locations is valuable for clinical diagnosis. In this work, for a voxel on the centerline, there is image information in both directions (toward the ascending aorta and toward the coronary end) that can affect the detection result. To treat the feature maps of each coronary artery segment as the basis for judgment when detecting local coronary artery stenosis, we design the Transformer structure to analyze feature sequences bidirectionally. To introduce the order information of each cubic volume into our model, we add learnable order embeddings~\cite{dosovitskiy2020image} of the same dimension to the feature embeddings before inputting the embeddings into the Transformer structure. The input embedding for the Transformer structure $Z_{0}$ can be obtained by adding the feature embeddings and order embeddings, expressed as follows: \begin{equation}\label{eqn1} Z_{0}=[x_{1}+o_{1},x_{2}+o_{2},\dots,x_{L}+o_{L}]\in \mathbb{R}^{L\times ( C\cdot H^3)} \end{equation} where $x_{i}$ and $o_{i}$ respectively indicate the feature embedding and order embedding for the $i^{th}$ cubic volume. The Transformer structure of TR-Net contains $T$ Transformer encoders, where $T = 12$ in this work. Each Transformer encoder consists of two sub-blocks connected in sequence, multiheaded self-attention (MSA) and a feed-forward network (FFN), where the FFN consists of two linear layers with a ReLU activation. 
Layer normalization (LN) and residual connections are respectively employed before and after both sub-blocks\cite{baevski2018adaptive,wang2019learning}, as shown in Fig. \ref{fig_3DCNN_encoder}. For each Transformer encoder, the size of the input is the same as that of the output to ensure the consistency of the Transformer encoders. The output of the previous Transformer encoder is treated as the input of the next one; the output of the $t^{th}$ Transformer encoder, $Z_{t}$, can be defined as: \begin{equation} \label{eqn2} \begin{split} Z_{t}'&= {\rm MSA}({\rm LN}(Z_{t-1}))\in \mathbb{R}^{L\times ( C\cdot H^3)}\\ Z_{t} &= {\rm FFN}({\rm LN}(Z_{t}' + Z_{t-1})) + Z_{t}' + Z_{t-1}\in \mathbb{R}^{L\times ( C\cdot H^3)} \end{split} \end{equation} where $Z_{t-1}$ indicates the output of the $(t-1)^{th}$ Transformer encoder. For the output of the last Transformer encoder $Z_{T}\in \mathbb{R}^{L\times(C\cdot H^3)}$, we split it into $L$ embeddings, where the $i^{th}$ embedding is denoted as $Z_{T}^{i}\in \mathbb{R}^{1\times(C\cdot H^3)}$. The order of these embeddings corresponds to the order of the cubic volumes that are input to the model. These embeddings are fed into softmax classifiers to detect significant stenosis of the corresponding cubic volumes. \section{Experiment} \subsection{Dataset} We conducted experiments on a dataset consisting of 76 CCTA scans from different patients and evaluated our method. These scans contain a total of 158 significant stenoses. We extracted the MPR images of the main coronary artery branches in each CCTA scan. For the entire dataset, we extracted the MPR images of 609 coronary artery branches in total. For these MPR images, 42425 voxels belonging to the centerlines of coronary artery branches could be selected as volume center points. The dataset was annotated by experienced radiologists, and each voxel on the centerline was marked with non-significant stenosis (i.e. luminal narrowing $\le 50\%$) or significant stenosis (i.e. luminal narrowing $> 50\%$). We selected voxels in the MPR image at intervals of 5 voxels along the centerlines of coronary arteries and employed these voxels as volume center points to construct volume sequences. To extract local information properly, the side length $N$ of the cubic volumes was set to 29 and the length $L$ of the volume sequences was at most 30. Considering that in most coronary artery branches the proportion of significant stenosis in the entire branch is low, we appropriately cropped the non-significant-stenosis parts of the MPR images when constructing volume sequences to make the training samples as balanced as possible. To make the model more robust, we randomly shifted the volume center points by up to three voxels along the 6-neighborhood directions and rotated the cubic volumes by random angles perpendicular to the centerline. Finally, we obtained 849 volume sequences, of which 3326 center points corresponded to significant stenosis. 
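To make the architecture described above concrete, the following is a minimal PyTorch sketch of TR-Net. The layer sizes follow the text (four $3\times3\times3$ convolution blocks with 16 to 128 filters, learnable order embeddings, $T=12$ Transformer encoders), but the padding scheme, the number of attention heads and the feed-forward width are not specified in the text and are assumptions here; the built-in pre-norm encoder layer also only approximates the residual arrangement of Eq.~(\ref{eqn2}).
\begin{verbatim}
import torch
import torch.nn as nn

class TRNetSketch(nn.Module):
    """Shallow 3D-CNN per cubic volume + order embeddings +
    Transformer encoders + per-position stenosis classifier."""

    def __init__(self, seq_len=30, n_encoders=12, n_heads=8):
        super().__init__()
        blocks, c_in = [], 1
        for c_out in (16, 32, 64, 128):
            # padding=1 is an assumption so that four pooling stages
            # reduce a 29^3 volume to 1^3 (i.e. H = 1, C = 128).
            blocks += [nn.Conv3d(c_in, c_out, 3, padding=1),
                       nn.ReLU(), nn.MaxPool3d(2)]
            c_in = c_out
        self.cnn = nn.Sequential(*blocks)
        d_model = 128  # C * H^3
        self.order_emb = nn.Parameter(torch.zeros(seq_len, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            norm_first=True, batch_first=True)  # pre-LN encoder block
        self.encoder = nn.TransformerEncoder(layer, n_encoders)
        self.classifier = nn.Linear(d_model, 2)  # softmax logits

    def forward(self, x):  # x: (B, L, 1, 29, 29, 29)
        B, L = x.shape[:2]
        z = self.cnn(x.flatten(0, 1)).view(B, L, -1)  # feature embeddings
        z = z + self.order_emb[:L]                    # add order embeddings
        return self.classifier(self.encoder(z))      # (B, L, 2)
\end{verbatim}
A batch of volume sequences of shape $(B, L, 1, 29, 29, 29)$ then yields per-position logits of shape $(B, L, 2)$, i.e. one binary stenosis prediction per centerline point.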
\begin{table} \Large \caption{Evaluation results for significant stenosis detection.}\label{tab1} \centering \setlength{\tabcolsep}{3.5mm} \resizebox{\textwidth}{!}{ \begin{tabular}{l|l|l|l|l|l|l|l|l} \hline Method & Metric & ACC & Sens & Spec & PPV & NPV & F1 & MCC\\ \hline Texture CLS\cite{tejero2019texture}& Orig data & 0.81 & 0.90 & 0.80 & \textbf{--} & \textbf{--} & \textbf{--} & \textbf{--} \\ \hline 3D-RCNN\cite{zreik2018recurrent} & Orig data & 0.94 & 0.63 & 0.97 & 0.65 & 0.97 & 0.64 & 0.60\\ \hline 2D-RCNN+PT\cite{denzinger2019coronary} & Orig data & 0.87 & 0.60 & 0.93 & 0.68 & 0.91 & 0.64 & 0.56\\ \hline 3D-RCNN\cite{zreik2018recurrent} & Our data & 0.87 & 0.66 & 0.91 & 0.56 & 0.93 & 0.60 & 0.53\\ \hline 2D-RCNN+PT\cite{denzinger2019coronary} & Our data & 0.89 & \textbf{0.82} & 0.89 & 0.50 & \textbf{0.97} & 0.62 & 0.58\\ \hline TR-Net & Our data & \textbf{0.92} & 0.74 & \textbf{0.96} & \textbf{0.84} & 0.93 & \textbf{0.79} & \textbf{0.74}\\ \hline \end{tabular} } \end{table} \subsection{Experimental Results} Quantitative evaluation of experimental results can demonstrate the reliability of methods in clinical application, so we compared our proposed TR-Net with several state-of-the-art methods. To demonstrate the effectiveness of our TR-Net scientifically, we evaluated accuracy (ACC), sensitivity (Sens), specificity (Spec), positive predictive value (PPV), negative predictive value (NPV), F1-score and the Matthews correlation coefficient (MCC) based on the classification results of the volume center points. For all model experiments, we performed ten-fold cross-validation at the centerline level, where the validation set accounted for 10\% of the training data. Models were trained for 200 epochs, and the model with the best performance on the validation set was saved to make predictions on the test set. As shown in Table~\ref{tab1}, compared with state-of-the-art methods, TR-Net achieved the best performance on the ACC, Spec, PPV, F1 and MCC indicators on our dataset. The dataset we employed was obtained by marking, for each voxel on the centerlines of coronary arteries, whether it corresponded to significant stenosis. However, the model evaluated only one of every 5 voxels along the centerlines for significant stenosis detection. Therefore, there was a tolerable error between correct detection results and annotations at significant stenosis boundaries. If the error was less than 5 voxels, the detection results obtained by the model were considered correct. As the representative examples of significant stenosis detection in Fig.~\ref{fig_examples} show, the annotations and our detection results were highly consistent. Experimental results demonstrated that TR-Net could effectively detect significant stenosis caused by various types of plaques, including non-calcified plaques that are difficult to detect. The detection results of our method had outstanding continuity, and there was almost no interruption when dealing with long-length significant stenosis. \begin{figure} \includegraphics[width=\textwidth]{5.png} \caption{Examples of significant stenosis detection. Volume center points are denoted by orange $\times$.} \label{fig_examples} \end{figure} \section{Conclusion} In this work, we have proposed TR-Net to solve the challenging task of automatically detecting significant stenosis in MPR images. Our TR-Net can well combine the information of local areas adjacent to stenoses and the global information of coronary artery branches when detecting significant stenosis. 
Experimental results have demonstrated that TR-Net has better performance on multiple indicators compared with state-of-the-art methods. By analyzing the information of coronary arteries in MPR images more comprehensively, our method can serve the purpose of computer-assisted diagnosis of CAD. ~\\ \textbf{Acknowledgments.} This work was supported by the National Natural Science Foundation of China under Grant 62001144 and Grant 62001141, and by China Postdoctoral Science Foundation under Grant 2021T140162 and Grant 2020M670911, and by Heilongjiang Postdoctoral Fund under Grant LBH-Z20066, and by Shandong Provincial Natural Science Foundation (ZR2020MF050). \bibliographystyle{splncs}
\section{Introduction} The notion of Lipschitz constant for a function, in general, bounds the rate of change of outputs with respect to the inputs. For neural networks, the Lipschitz constant of the network is a useful metric to measure sensitivity, robustness and many other properties. It has several applications in the context of deep learning \cite{deepmind}. It can be used as a regularization constraint while training or to provide certified robustness bounds against adversarial perturbations \cite{lipschitzmargin}. It also helps in providing guaranteed generalization bounds \cite{generalizationBounds}. Other use cases involve estimating Wasserstein distance \cite{estWassDist}, stabilising training of GANs \cite{spectralNorm}, and acting as building blocks for formulating invertible neural networks and flow-based generative models \cite{invres}, \cite{flowres}. Hence, a provably correct and accurate Lipschitz constant estimation technique is important. There have been several recent works related to the estimation of Lipschitz constants \cite{lipmilp},\cite{fastlip}, \cite{NIPS2018-seqlip}, etc. Section~\ref{sec:related-work} provides a brief study on these works, including a comparison with our algorithm. It has been shown that computing the Lipschitz constant exactly, or even approximating it within any desired accuracy, is computationally hard \cite{NIPS2018-seqlip}, \cite{lipmilp}. Hence computing the exact Lipschitz constant for large networks is almost infeasible. However, any technique which provides iteratively refined bounds with convergence over time is desirable. Techniques used for the verification of neural networks, including robustness certification, output bounds estimation and calculation of the Lipschitz constant, are deeply interlinked and share many similarities among them. Our work also uses extensions and combinations of related techniques, such as symbolic propagation, interval arithmetic and linear programming (for feasibility checking), within our proposed branch and bound framework. Our Branch and Bound (BaB) algorithm is based on iterative space partitioning and upper bounding the local Lipschitz constant for each such partition. Each node in the BaB tree has associated with it an input partition defined by some half-space constraints and an activation pattern of the network, on which the Lipschitz upper bounds are calculated. At each iteration our algorithm provides a refined upper bound of the exact Lipschitz constant until convergence. Preliminaries and notations used are given in Section~\ref{sec:preliminaries}. Section~\ref{sec:overview} gives an overview of the overall approach. Sections~\ref{sec:algorithm} and \ref{sec:final-algorithm} provide details of our algorithm. Section~\ref{sec:experiments} provides the implementation and experimental results demonstrating the performance on different parameters. \section{Related Work} \label{sec:related-work} We provide a brief comparison of some of the related works and the specific settings they apply to in the table below \footnote{We have used the names of the techniques as given in the respective papers. The column titled \textbf{LipBaB} is our algorithm}. To the best of our knowledge LipBaB is the first work which is able to calculate the \textbf{exact} local Lipschitz constant for \textbf{any} $p$-norm. Our algorithm can also be used to compute the \textbf{global} Lipschitz constant. 
\begin{table}[] \begin{tabular}{c|ccccccc} \textbf{} & \textbf{LipBaB} & \textbf{LipMIP} & \textbf{LipSDP} & \textbf{SeqLip} & \textbf{CLEVER} & \textbf{LipOpt} & \textbf{FastLip} \\ \hline global/local & \multicolumn{1}{c|}{local, global} & \multicolumn{1}{c|}{local} & \multicolumn{1}{c|}{global} & \multicolumn{1}{c|}{global} & \multicolumn{1}{c|}{local} & \multicolumn{1}{c|}{local} & local \\ guarantee & \multicolumn{1}{c|}{exact} & \multicolumn{1}{c|}{exact} & \multicolumn{1}{c|}{upper} & \multicolumn{1}{c|}{heuristic} & \multicolumn{1}{c|}{heuristic} & \multicolumn{1}{c|}{upper} & upper \\ p-norms & \multicolumn{1}{c|}{p} & \multicolumn{1}{c|}{1,inf} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{p} & \multicolumn{1}{c|}{p} & \multicolumn{1}{c|}{p} & p \\ activations & \multicolumn{1}{c|}{ReLU} & \multicolumn{1}{c|}{ReLU} & \multicolumn{1}{c|}{ReLU,Diff} & \multicolumn{1}{c|}{ReLU} & \multicolumn{1}{c|}{ReLU,Diff} & \multicolumn{1}{c|}{Diff} & ReLU \end{tabular} \end{table} As shown in the above table, each of the approaches to estimating Lipschitz constants meets different requirements, depending on the kind of norms that can be evaluated, the global or local nature of the Lipschitz constant, and whether the results are upper bounds, lower bounds or heuristic estimates. Differences also arise in the kinds of activation functions that are supported. FastLip provides an efficient way of upper-bounding the local Lipschitz constant of ReLU-networks using interval-bound propagation \cite{fastlip}. LipSDP uses semi-definite programming to provide upper bounds of the $l_2$-Lipschitz constant \cite{lipsdp}. CLEVER is a heuristic approach which uses extreme value theory and sampling techniques to find an approximation of the Lipschitz constant \cite{clever}. SeqLip transforms this problem into a combinatorial optimization problem and uses greedy strategies to provide estimates \cite{NIPS2018-seqlip}. LipOpt uses polynomial optimization to find global or local upper bounds for continuously differentiable networks \cite{lipopt}. LipMIP is able to calculate the exact Lipschitz constant under the $l_1$ and $l_{\infty}$ norms by formulating the problem as a Mixed Integer Program \cite{lipmilp}; it can also be extended to other linear norms. \section{Preliminaries} \label{sec:preliminaries} This section provides preliminaries, notations and background definitions as required by the problem. \subsection{Vectors, Matrices and Intervals} \begin{table}[h!] 
\centering \begin{tabular}{cl} $\underline{a},\overline{a}$ & \makecell[tl]{lower and upper bounds of $a$}\\ \addlinespace[1mm] $[a],[a]_i$ & \makecell[tl]{interval vector and its $i^{th}$ component $[\underline{a}_i,\overline{a}_i]$}\\ \addlinespace[1mm] $[A],[A]_{ij}$ & \makecell[tl]{interval matrix and its $(i,j)^{th}$ element $[\underline{A}_{ij},\overline{A}_{ij}]$}\\ \end{tabular} \end{table} Addition and multiplication operations on intervals, where $p,q$ are intervals and $c$ is a scalar, are defined as follows: \begin{align*} cp &\equiv [\min(c\underline{p},c\overline{p}),\max(c\underline{p},c\overline{p})]\\ c+p &\equiv [c+\underline{p},c+\overline{p}]\\ p+q &\equiv [\underline{p}+\underline{q},\overline{p}+\overline{q}]\\ pq &\equiv [\min(\underline{p}\underline{q},\underline{p}\overline{q},\overline{p}\underline{q},\overline{p}\overline{q}), \max(\underline{p}\underline{q},\underline{p}\overline{q},\overline{p}\underline{q},\overline{p}\overline{q})]\\ \end{align*} \subsection{Operator norms} The $p$-norm, $\|\cdot\|_p$, of a vector $x$ is defined as: \[ \|x\|_p=\left(\sum_{i=1}^n |x_i|^p \right)^{\frac{1}{p}}\] Let $A$ be a matrix of size $m\times n$. Then the operator norm of this matrix, induced by the vector norm $\|\cdot\|_p$, is defined as: \[ \|A\|_p =\sup_{x \in \mathbb{R}^n,\, \|x\|_p=1}\|Ax\|_p \] The following lists the values of this operator norm for some commonly used values of $p$: \[ \|A\|_p= \begin{cases} \max_{1\leq j\leq n}\sum_{i=1}^m |A_{ij}|, & \text{if } p=1\\ \sigma_{max}, & \text{if } p=2\\ \max_{1\leq i\leq m}\sum_{j=1}^n |A_{ij}|, & \text{if } p=\infty \end{cases} \] where $\sigma_{max}$ is the largest singular value of the matrix $A$. Note that $\|A\|_2 \leq \|A\|_F=\left( \sum_{i=1}^m \sum_{j=1}^n |A_{ij}|^2\right)^{1/2}$, where $\|A\|_F$ is the Frobenius norm of $A$. \subsection{Generalized Jacobian} The Jacobian of a function $f:\mathbb{R}^n \rightarrow \mathbb{R}^m $ at a differentiable point $x$ is given as: \[ J_f(x)= \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \dots & \frac{\partial f_1}{\partial x_n}\\ \frac{\partial f_2}{\partial x_1} & \frac{\partial f_2}{\partial x_2} & \dots & \frac{\partial f_2}{\partial x_n}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial f_m}{\partial x_1} & \frac{\partial f_m}{\partial x_2} & \dots & \frac{\partial f_m}{\partial x_n} \end{pmatrix} \] For functions which are not continuously differentiable, we have the notion of Clarke's generalized Jacobian. The generalized Jacobian of such a function $f$ at a point $x$ is defined as: \[ \delta_f(x)=\textit{co}\{\lim_{x_i\rightarrow x} J_f(x_i): f \text{ is differentiable at } x_i\}\] In other words, $\delta_f(x)$ is the convex hull of the set of Jacobians of nearby differentiable points. For a differentiable point $x$, $\delta_f(x)$ is the singleton set $\{J_f(x)\}$. \subsection{Lipschitz Constant and norms of Jacobians} For a locally Lipschitz continuous function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ defined over an open domain $\mathcal{X} \subseteq \mathbb{R}^n$, the local Lipschitz constant $\mathcal{L}_{p}(f,\mathcal{X})$ is defined as the smallest value such that \[ \forall x,y\in \mathcal{X}: \|f(y)-f(x)\|_p \leq \mathcal{L}_{p}(f,\mathcal{X}) \cdot \| y-x\|_p \] For a differentiable and locally Lipschitz continuous function, the Lipschitz constant is given as \cite{deepmind}: \[\mathcal{L}_p(f,\mathcal{X}) = \sup_{x\in \mathcal{X}}\|J_f(x)\|_p \quad \text{(Federer, 1969)} \] where $\|J_f(x)\|_p$ is the induced operator norm of the matrix $J_f(x)$. 
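To make these operator norms concrete, the following small Python/NumPy check (an illustrative sketch; the helper name \texttt{operator\_norm} is ours and not part of any library, while \texttt{numpy.linalg.norm} computes the same induced norms for $p\in\{1,2,\infty\}$) evaluates $\|A\|_p$ and verifies the Frobenius bound:
\begin{verbatim}
import numpy as np

def operator_norm(A, p):
    # induced matrix p-norm for p in {1, 2, inf}
    if p == 1:
        return np.abs(A).sum(axis=0).max()            # max abs column sum
    if p == 2:
        return np.linalg.svd(A, compute_uv=False)[0]  # sigma_max
    if p == np.inf:
        return np.abs(A).sum(axis=1).max()            # max abs row sum
    raise ValueError("only p = 1, 2, inf are handled here")

A = np.array([[1.0, -2.0], [3.0, 4.0]])
for p in (1, 2, np.inf):
    assert np.isclose(operator_norm(A, p), np.linalg.norm(A, ord=p))
assert operator_norm(A, 2) <= np.linalg.norm(A, "fro")  # ||A||_2 <= ||A||_F
\end{verbatim}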
However, in the case of ReLU-networks the function is piece-wise linear in nature and is not differentiable everywhere. For such functions, the above definition of the Lipschitz constant can be extended with the help of Clarke's generalized Jacobian \cite{lipmilp}: \[\mathcal{L}_p(f,\mathcal{X}) = \sup_{M\in \delta_f(x), x \in \mathcal{X}}\|M\|_p = \sup_{x_d \in \mathcal{X}}\|J_f(x_d)\|_p\] where $x_d$ is a differentiable point in $\mathcal{X}$. It is natural that $\sup_{M\in \delta_f(x), x \in \mathcal{X}}\|M\|_p$ is attained at a differentiable point $x_d$ in $\mathcal{X}$: by definition, a norm $\|\cdot\|$ is convex, so the maximal value of the norm over the elements of $\delta_f(x)$, which is itself a convex set, is attained at one of the extreme points, and these extreme points are the Jacobians of nearby differentiable points. This result shows that for computing the Lipschitz constant we do not necessarily need to account for the non-differentiable points. \subsection{Upper bounds on Jacobian norms} \begin{lemma} If $A$ and $B$ are both matrices of size $m\times n$, and if for each $i,j$, $|A_{ij}| \leq B_{ij}$, then $\|A\|_p \leq \|B\|_p$. \end{lemma} \begin{proof} Let $x^*= \arg\max_{x\in \mathbb{R}^n, \|x\|_p=1}\|Ax\|_p$, and let $x'$ be the vector with $x'_i=|x^*_i|$, so that $\|x'\|_p=1$. Also, we note from the definition of the $p$-norm of a vector that for any two vectors $u,v$, if $|u_i| \leq v_i$ for each $i$, then $\|u\|_p \leq \|v\|_p$. The following chain of inequalities establishes the lemma: \begin{align*} |(Ax^*)_i| &= |\sum_{j=1}^n A_{ij}x^*_j| \leq \sum_{j=1}^n|A_{ij}x^*_j| =\sum_{j=1}^n|A_{ij}| \cdot |x^*_j| \\&\leq \sum_{j=1}^n B_{ij}|x^*_j| = \sum_{j=1}^n B_{ij}x'_j = (Bx')_i \end{align*} for each $i$, where $1 \leq i \leq m$. Therefore, \[ \|A\|_p= \|Ax^*\|_p \leq \|Bx'\|_p \leq \sup_{\|x\|_p=1}\|Bx\|_p=\|B\|_p \] \end{proof} The above result can be used to upper bound the Lipschitz constant by upper bounding the absolute values of the partial derivatives \cite{fastlip}. \begin{lemma}\label{thm:ub} Let $U$ be a matrix of the same size as the Jacobian $J_f(x)$ of a function $f(x)$. If $U$ is such that $\sup_{x_d\in\mathcal{X}}|J_f(x_d)_{ij}| \leq U_{ij}$ for all $i,j$, then $\mathcal{L}_p(f,\mathcal{X})\leq \|U\|_p$, in the open domain $\mathcal{X}$. \end{lemma} \begin{proof} We know that $\mathcal{L}_p(f,\mathcal{X})=\sup_{x_d\in\mathcal{X}}\|J_f(x_d)\|_p$. Now, \[ \mathcal{L}_p(f,\mathcal{X})=\|J_f(x_d^*)\|_p \leq \|[\sup_{x_d\in\mathcal{X}}|J_f(x_d)_{ij}|]\|_p \leq \|U\|_p \] where $x_d^*$ is $\arg\max_{x_d\in\mathcal{X}}\|J_f(x_d)\|_p$ and both inequalities follow from the previous lemma. \end{proof} \subsection{Feed forward ReLU networks and their Jacobians} Deep feed-forward ReLU networks (MLPs) are stacked layers of perceptrons with ReLU activation functions. The following notations are used for the networks. \begin{table}[h!] \centering \begin{tabular}{cl} $L$ & number of layers in the network\\ \addlinespace[1mm] $n^{(l)}$ & number of neurons in the $l^{th}$ layer\\ \addlinespace[1mm] $x^{(l)}$ & vector of outputs from the neurons at layer $l$\\ \addlinespace[1mm] $z^{(l)}$ & vector of inputs to the neurons at layer $l$\\ \addlinespace[1mm] $W^{(l)}, b^{(l)}$ & weights and biases of the $l^{th}$ layer of a network\\ \end{tabular} \end{table} Given an input vector $x$ and a list of parameters $\theta\equiv\{W^{(l)}, b^{(l)}: l=1,\ldots,L\}$ describing the network architecture, the function $f:\mathbb{R}^n \rightarrow \mathbb{R}^m$ represented by the ReLU network is defined as follows. 
\begin{multline*} f(x,\theta) = W^{(L)}\phi(W^{(L-1)}(\dots\phi(W^{(1)}x+b^{(1)})\dots)+ b^{(L-1)})+b^{(L)} \end{multline*} where $\phi$ denotes the ReLU activation function with $\phi(x)=\max(0,x)$. ReLU networks are piece-wise linear in nature. The concept of Jacobians for a network (with respect to the outputs and inputs of $f$) gives us an idea about how the network outputs vary with changes in the inputs near a point. The Jacobian at a point $x$ is calculated by the chain rule of derivatives and is computed using back propagation. It is important to note that this Jacobian is defined only if the derivative at every ReLU node is defined. This happens only if the input to each ReLU node is strictly positive or negative. If it is equal to zero, then a sub-gradient exists which lies in $[0,1]$. The Jacobian at a point $x$, if defined, can be compactly represented as: \[J_f(x) = W^{(L)} \Lambda^{(L-1)}W^{(L-1)}\ldots \Lambda^{(1)}W^{(1)}\] where $\Lambda^{(l)}$ encodes the activation pattern of a layer $l$ caused by the input $x$. It is a diagonal matrix with diagonal entries equal to $1$ for active neurons and $0$ for inactive neurons. The Jacobian is the same for all the points strictly inside a linear region with the same activation pattern. Since ReLU networks are piece-wise linear in nature, the Lipschitz constant is exactly equal to the $p$-norm of the Jacobian at one such linear region in the input domain. \section{Approach overview} \label{sec:overview} The proposed algorithm is composed of several components, such as the initial estimation of the activation pattern, the calculation of Lipschitz bounds and the partitioning of sub-problems, which are unified within the branch and bound framework. This section provides an overview of the algorithm and some of its components. The next section describes each of these components in detail, including the representation of the sub-problems, in a relevant order. \begin{algorithm} \caption{Overview} \label{alg:branchandbound} \begin{algorithmic}[1] \Procedure{Algo}{network $N$, input-domain $\mathcal{X}$} \State Set initial half-space constraints and initial activation pattern considering the input domain $\mathcal{X}$ \State Create new sub-problem and get initial upper-bound estimation of $\mathcal{L}_p(f,\mathcal{X})$ \State Initialize the set of sub-problems with the sub-problem \While{\textbf{not} Terminate} \State Select sub-problem to branch \State Branch into new sub-problems and get refined upper-bound estimations \State Remove the selected sub-problem and add the new sub-problems to the problem set \EndWhile \State\Return {Final estimate of the Lipschitz constant} \EndProcedure \end{algorithmic} \end{algorithm} A short overview of some of the components is given below. \begin{itemize} \item \texttt{Symbolic Propagation}: Uses symbolic expressions and intervals to compute the initial estimation of the activation pattern. It also provides us with output bounds of the network for the given input region. We also use symbolic propagation in a slightly different way to generate half-space constraints. \item \texttt{Lipschitz Bounds}: Computes an upper bound to $\mathcal{L}_p(f,\psi)$ for a partition $\psi \subseteq \mathcal{X}$ using interval arithmetic. \item \texttt{Branching}: Branches a given sub-problem $\rho$ into new sub-problems and computes their Lipschitz upper-bounds. Each sub-problem has a partition of the input space associated with it. 
\end{itemize} \section{Components of the algorithm} \label{sec:algorithm} \subsection{Input Domain Abstraction and Interval bound propagation} We consider the input region $\mathcal{X}$ as a hyper-rectangle of $n$ dimensions. It can be represented as the Cartesian product of the following intervals: \[\mathcal{X}=[\underline{x_1},\overline{x_1}]\times [\underline{x_2},\overline{x_2}]\times\dots\times [\underline{x_n},\overline{x_n}]\] where $x_i$ denotes the $i$th dimension of an input $x$. The main purpose of this section is to get an initial estimation of the activation states of the neurons with respect to all inputs in the input region. We do this by calculating the pre-activation bounds of each neuron. Additionally, we obtain the output bounds of the network over the whole input region as well. We mark the state of a neuron as active/inactive based on whether its pre-activation bounds are positive/negative. If the pre-activation bounds contain both negative and positive values, the neuron is marked as $\ast$, an undecided neuron. These neurons may be active/inactive depending on the inputs. The pre-activation bounds of each neuron can be calculated using interval bound propagation. For all $l=1,\ldots,L$, taking $[x^{(0)}]=[z^{(0)}]$, \[[z^{(l)}] = W^{(l)}[x^{(l-1)}] + b^{(l)}\] \[[x^{(l)}_i] = [\max(0,\underline{z^{(l)}_i}),\max(0,\overline{{z}^{(l)}_i} )]\] where $[z^{(0)}]$ denotes the input interval vector. However, these bounds are over-approximations of the actual range of values, and the over-approximations accumulate across the layers because of the dependency problem in interval arithmetic. The dependency problem occurs because multiple occurrences of the same variables are treated independently. This dependency problem can be reduced by using symbolic expressions. The symbolic expressions make use of the fact that the input to any neuron depends on the same set of input variables, instead of considering only the outputs from the previous layer independently as done in naive interval bound propagation. A similar approach is presented in \cite{reluval}, but a crucial difference is that we maintain a single expression per neuron instead of two. Also, we create new symbols for $\ast$-neurons instead of concretizing the bounds completely, to reduce the dependency errors caused by such neurons in deeper layers. In practice, the symbolic expressions are represented by coefficient vectors and constants. \subsubsection{Symbolic propagation} We denote the symbolic counterparts of the input vector $z^l$ and output vector $x^l$ of the neurons in layer $l$ as $ze^l$ and $xe^l$, respectively. These expressions are symbolic linear expressions over the outputs of neurons from previous layers. To get the pre-activation bounds we calculate the lower and upper bounds of these expressions at each node. If we encounter a $\ast$-neuron, the linear relation breaks down and we can no longer propagate the expression from this node. Hence, we introduce an independent variable for this node and aim to preserve the resultant expressions in deeper layers. $[\Lambda]^l$ is a diagonal interval matrix denoting the activation pattern of layer $l$, whose diagonal elements are intervals of the form $[1,1], [0,0]$ or $[0,1]$ corresponding to active, inactive or $\ast$-neurons (undecided), respectively. All other elements of this interval matrix are $[0,0]$. This interval matrix will later be used in calculating the Lipschitz upper bounds. 
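As a minimal illustration of the interval bound propagation described above (the naive variant, without the symbolic expressions; the weights, biases and input box in the usage lines are placeholders of our choosing, not our trained networks), the following Python sketch propagates pre-activation bounds through the layers and marks each neuron as active (\texttt{'1'}), inactive (\texttt{'0'}) or undecided (\texttt{'*'}):
\begin{verbatim}
import numpy as np

def interval_affine(W, b, lo, hi):
    # interval image of x -> W x + b over the box [lo, hi]
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def naive_ibp(weights, biases, lo, hi):
    # returns per-layer neuron states ('1', '0', '*') and output bounds
    states = []
    for W, b in zip(weights[:-1], biases[:-1]):
        zl, zu = interval_affine(W, b, lo, hi)
        states.append(np.where(zl > 0, "1", np.where(zu < 0, "0", "*")))
        lo, hi = np.maximum(zl, 0.0), np.maximum(zu, 0.0)  # ReLU on bounds
    return states, interval_affine(weights[-1], biases[-1], lo, hi)

rng = np.random.default_rng(0)   # toy two-layer network
Ws = [rng.standard_normal((5, 3)), rng.standard_normal((2, 5))]
bs = [rng.standard_normal(5), rng.standard_normal(2)]
states, out_bounds = naive_ibp(Ws, bs, np.zeros(3), 0.1 * np.ones(3))
\end{verbatim}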
Note that the number of $\ast$-neurons marked can only be exact or an overestimation. It is these $\ast$-neurons which we aim to remove successively by creating partitions. \subsection{Sub-Problem Representation} This section explains how a sub-problem, or equivalently a node in the Branch and Bound (BaB) tree, is represented. We denote a sub-problem as $\rho$ and a property $p$ of that sub-problem as $(p,\rho)$. Each sub-problem has a corresponding subset of the input space $(\psi,\rho)$ associated with it, defined by the set of half-space constraints $(H,\rho)$. Also, for every sub-problem there is an associated activation pattern of the network, $(A,\rho)$, where \[A^l_i = \begin{cases} 1 & \text{if it is known that}~ z^{(l)}_i> 0 ~\text{for all inputs in} ~\psi\\ 0 & \text{if it is known that}~ z^{(l)}_i< 0 ~\text{for all inputs in} ~\psi\\ \ast & \text{otherwise (not known)} \end{cases}\] Any sub-problem can be equivalently represented by a pair consisting of its set of half-space constraints and its activation pattern $\{H, A\}$. The half-space constraints of a sub-problem are of the form $zle^l_i<0$ or $zle^l_i>0$, where $zle^l_i$ is a symbolic linear expression of only the input variables, corresponding to the $i^{th}$ neuron at layer $l$. $zle^0$ is the symbolic input vector. \begin{algorithm} \caption{Symbolic Propagation} \label{symprop} \begin{algorithmic}[1] \Procedure{SymProp}{network $N$, input-domain $\mathcal{X}$} \State //$ze^l_i$ and $xe^l_i$ are the input and output expressions, respectively, for the $i$th neuron at layer $l$ \State Initialize $xe^0=ze^0=$ vector of input variables \For{l=1,...,L-1} \State $ze^l=W^{(l)}xe^{l-1}+b^l$ \For {$i=1,...,n_l$} \If {$\underline{ze^l_i} > 0$} \State //Keep dependency \State $xe^l_i=ze^l_i$ \State $A^l_i=1, [\Lambda]^l_{ii}=[1,1]$ \ElsIf {$ \overline{ze^l_i} < 0$} \State //Update to 0 \State $xe^l_i=0$ \State $A^l_i=0,[\Lambda]^l_{ii}=[0,0]$ \Else \State //Introduce new variable $v^l_i$ \State $\overline{v^l_i}= \overline{ze^l_i}$, $\underline{v^l_i}=0$ \State $xe^l_i=v^l_i$ \State $A^l_i=\ast, [\Lambda]^l_{ii}=[0,1]$ \EndIf \EndFor \EndFor \State $ze^L=W^{(L)}xe^{L-1}+b^{(L)}$ \State output-bounds=$[\underline{ze}^L,\overline{ze}^L]$ \EndProcedure \end{algorithmic} \end{algorithm} The initial sub-problem (root node of the BaB tree) is denoted as $\rho_I$, whose corresponding $\psi$ is $\mathcal{X}$ itself and whose activation pattern $A$ is as determined by Algorithm SymProp. The constraints corresponding to the initial hyper-rectangular input space $\mathcal{X}$ are given as $H=\bigcap\{zle^0_i< \overline{z}_i^{(0)}, i=1,\ldots,n^{(0)}\} \cap \bigcap\{zle^0_i> \underline{z}_i^{(0)}, i=1,\ldots,n^{(0)}\}$, which forms a bounded convex polytope. For any sub-problem, if the set of constraints $H$ forms a bounded convex polytope and a hyperplane $zle^l_i=0$ cuts through this polytope, we get two partitions, which are also convex polytopes, given by the constraints $\{H\cap zle^l_i< 0\}$ and $\{H\cap zle^l_i > 0\}$. It follows by induction that any feasible set of constraints generated after any number of partitioning steps as stated above forms a bounded convex polytope. The use of open half-spaces, or strict inequalities, makes sure that when we have a sub-problem with no $\ast$-neurons, any feasible point in that region is actually a differentiable point. 
The reason is that the strict inequalities imply that the corresponding neurons take non-zero values as inputs for all points in the feasible region, which in turn implies that the Jacobian is well defined. \subsection{Propagating linear relations} In order to generate the half-space constraints corresponding to neurons at a layer $l$, we need to have the expressions $zle^l$ for that layer, which is possible only if there are no $\ast$-neurons present in previous layers. Therefore, for any sub-problem, we simply propagate the linear relations $zle$ across the layers until we reach the last layer of the network or encounter a layer which contains a $\ast$-neuron (since by moving further we cannot preserve the linear relationship with the inputs anymore). $xle^l_i$ is the output expression of a neuron corresponding to the input $zle^l_i$. It is computed only if there are no $\ast$-neurons in a layer. \[ zle^l=W^{(l)}xle^{l-1}+b^l\\\] \[ xle^l_i= \begin{cases} 0 & \text{if}~ A^l_i=0\\ zle^l_i & \text{if}~ A^l_i=1 \end{cases} \] where $xle^0=zle^0$ is the symbolic input vector. This process, called LinProp in the subsequent algorithms, is similar to SymProp except that we do not need to evaluate any bounds or introduce any new variables. \subsection{Lipschitz bounds} In this section we describe the procedure to calculate valid upper bounds of the Lipschitz constant of a sub-problem (similar to \cite{fastlip}). We use interval matrix multiplication to upper bound the Lipschitz constant of a sub-problem. Similar to the Jacobian $J_f(x)$ for a single point $x$ as described before, we can represent the notion of a Jacobian matrix for a set of points $X$ by an interval matrix where each element is an interval which bounds the corresponding partial derivative over all the points. We have \[[J(X)] = W^{(L)} [\Lambda]^{(L-1)}W^{(L-1)}\ldots [\Lambda]^{(1)}W^{(1)}\] where $[\Lambda]^l$ is an interval matrix used to denote the activation pattern for layer $l$, as described in SymProp. The intervals $[0,1]$ used to represent $\ast$-neurons take into account both possible activation states to calculate the extreme cases of the lower and upper bounds of the partial derivatives. Once we obtain this $[J]$ matrix, calculated using interval matrix multiplication, we construct an ordinary matrix $U$ of the same size as that of $[J]$, where each element upper bounds the absolute values of the corresponding interval in $[J]$. It is to be noted that the interval bounds are over-approximations of the actual values. The $p$-norm of the constructed $U$ matrix gives us an upper bound of the local Lipschitz constant for the corresponding sub-problem. In case we have no $\ast$-neurons (which corresponds to a piece-wise linear region), we simply return the $p$-norm of the Jacobian of that region. 
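The algorithm below formalizes this procedure; as a minimal Python sketch of the same interval computation (reusing the state convention of the earlier naive-propagation snippet; this is a simplified rendering under our own naming, not a verbatim transcription of our implementation), one may write:
\begin{verbatim}
import numpy as np

def interval_matmul(W, Jl, Ju):
    # real matrix W times interval matrix [Jl, Ju]
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ Jl + Wn @ Ju, Wp @ Ju + Wn @ Jl

def lipschitz_upper_bound(weights, states, p):
    # states[l-1][i] in {'1','0','*'} for neuron i of layer l
    Jl = Ju = weights[0]                    # first factor is exact
    for W, s in zip(weights[1:], states):
        d_lo = (s == "1").astype(float)     # diagonal intervals [d_lo,d_hi]
        d_hi = ((s == "1") | (s == "*")).astype(float)  # [1,1]/[0,0]/[0,1]
        cands = np.stack([d_lo[:, None] * Jl, d_lo[:, None] * Ju,
                          d_hi[:, None] * Jl, d_hi[:, None] * Ju])
        Jl, Ju = interval_matmul(W, cands.min(axis=0), cands.max(axis=0))
    U = np.maximum(np.abs(Jl), np.abs(Ju))  # elementwise bound on |[J]_ij|
    return np.linalg.norm(U, ord=p)         # p in {1, 2, np.inf}
\end{verbatim}
When there are no $\ast$-neurons, $d_{lo}=d_{hi}$ for every layer and the two endpoint matrices coincide, so the sketch returns the exact $p$-norm of the Jacobian of the linear region.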
\begin{algorithm} \caption{Calculating Lipschitz Bounds} \begin{algorithmic}[1] \Procedure{LipschitzBounds}{sub-problem $\rho$} \If{$\rho$ has no $\ast$-neurons} \State //$\psi$ is a linear region, hence the Jacobian is defined \State\Return {$\|J_f\|_p$} \Else \State Initialize $[J]_{ij}=[W_{ij}^{(1)}, W_{ij}^{(1)}], \forall i,j$ \For{$l=2, \ldots,L$} \State //interval matrix multiplication \State $[J]=W^{(l)}[\Lambda]^{(l-1)}[J]$ \EndFor \State Define $U$ with $U_{ij}=\max(|\underline{[J]_{ij}}|, |\overline{[J]_{ij}}|), \forall i,j$ \State \Return $\|U\|_p$ \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Branching} \label{sec:split} The main idea behind the branching step is to create a partition in the polytope associated with a sub-problem and compute the upper bound of the local Lipschitz constant of each partition to get tighter estimates. The partitions are made in such a way that inputs from one part activate a specific neuron, while for inputs from the other part the neuron is inactive. A new set of constraints is created by adding a new constraint ($zle^l_i<0$ in one case, $zle^l_i>0$ in the other) corresponding to a $\ast$-neuron at layer $l$, to the existing set of half-space constraints $H$. A related idea of solving linear programs to get refined values after adding constraints similar to this is discussed in \cite{neurify}. If the constraint set is feasible, we create a sub-problem with the new set of constraints and the corresponding activation pattern (based on the new half-space constraint added). The next steps include propagating the linear expressions, $zle$, and also removing some of the $\ast$-neurons whose states can be determined to be active/inactive with respect to the new constraint set. Finally, we calculate the Lipschitz bound for the newly created sub-problem and add it to the set of sub-problems. Note that the Lipschitz bounds of a branched problem can only decrease, since we reduce uncertainties. \begin{algorithm} \caption{Branching} \begin{algorithmic}[1] \Procedure{Branch}{Sub-problem $\rho$} \State $t\leftarrow$ first layer with $\ast$-neurons \State select a $\ast$-neuron ($i$th neuron at layer $t$) \State $H_0, H_1 \leftarrow \{(H,\rho)\cap zle^t_i< 0\}, \{(H,\rho)\cap zle^t_i> 0\}$ \State $S\leftarrow S\backslash \{\rho\}$ \For{$r\in\{0,1\}$ } \If {$Feasible(H_r)$} \State Create sub-problem $\rho_r \equiv \{H_r, (A,\rho)\}$ \State $(A^t_i, \rho_r)\leftarrow r, ([\Lambda]^t_{ii}, \rho_r)\leftarrow [r,r]$ \State LinProp($\rho_r$)//propagating linear relations \State FFilter($\rho_r$)//feasibility filter \State $(L_{ub},\rho_r)=LipschitzBounds(\rho_r)$ \State $S\leftarrow S\cup \rho_r$ \If{$\rho_r$ has no $\ast$-neurons} \State //update lower bound \State $glb \leftarrow max(glb, (L_{ub},\rho_r))$ \EndIf \EndIf \EndFor \EndProcedure \end{algorithmic} \end{algorithm} \subsection{Feasibility Filter} When a new sub-problem is created by partitioning a polytope, the smaller partitions may allow us to decide the states (active/inactive) of more than one neuron that was marked as a $\ast$-neuron before the partitioning. It is easy to see that identifying such a neuron early on in the BaB tree is better, as it prevents repeating the process (feasibility checks) for several (in the worst case, exponentially many) sub-problems which are generated later. We use a simple heuristic to do this. 
We keep fixing the states of $\ast$-neurons which are decidable by the new constraints until we encounter a neuron which still maintains both active and inactive states depending on the inputs. We check the feasibility of both $(H\cap zle^l_i<0)$ and $(H\cap zle^l_i>0)$ for a $\ast$-neuron at layer $l$. Based on which of these two is feasible, we decide the activation state of the neuron. If both of them are feasible, then this is a $\ast$-neuron with respect to $H$, and we terminate this step. Later, if we need to branch on this sub-problem, we shall choose to branch on the $\ast$-neuron which was already found to maintain both active and inactive states. This strategy will always create two feasible sub-problems. We use this process, called FFilter, in combination with LinProp (for generating half-space constraints) to reduce $\ast$-neurons across layers. \section{Final Algorithm} \label{sec:final-algorithm} This section puts together all the individual components discussed before and provides the main branch and bound framework, as given in Algorithm~\ref{alg:final}. Given the network parameters and the input region of interest, the first step is to run symbolic propagation to mark the state of each neuron as active, inactive or $\ast$. This gives us the initial activation pattern. The first sub-problem is created with this activation pattern and the half-space constraints given by the bounding constraints of the input region. The corresponding Lipschitz upper bound is also calculated. The set of sub-problems is initialized with this sub-problem. We use a max-heap data structure to store the sub-problems, ordered by their Lipschitz upper bounds. $glb$ and $gub$ are used to keep track of the lower and upper bounds of $\mathcal{L}_p(f,\mathcal{X})$, respectively. The algorithm iteratively selects the sub-problem from the set with the highest Lipschitz upper bound and branches on it. Each newly created sub-problem, with its own set of half-space constraints, activation pattern and corresponding Lipschitz upper bound, is pushed into the heap. Also, if a sub-problem has no $\ast$-neurons, its bound is a valid lower bound on $\mathcal{L}_p(f,\mathcal{X})$, and we use it to compare and update $glb$. \begin{algorithm} \caption{Final Algorithm} \label{alg:final} \begin{algorithmic}[1] \Procedure{LipBaB}{network $N$, input domain $\mathcal{X}$, approximation factor $k$} \State Initial constraints $H \leftarrow$ Bounding constraints of $\mathcal{X}$ \State Initial activation $A \leftarrow$ SymProp($N, \mathcal{X}$) \State Create initial sub-problem $\rho_I \equiv \{H,A\}$ \State LinProp($\rho_I$) //propagating linear relations \State $(L_{ub},\rho_I)\leftarrow LipschitzBounds(\rho_I)$ \State $S\leftarrow S\cup \rho_I$ \State $gub,glb\leftarrow (L_{ub},\rho_I),0$ // initialize lower and upper bounds of $\mathcal{L}_p(f,\mathcal{X})$ \If {$\rho_I$ has no $\ast$-neurons} \State $glb \leftarrow (L_{ub},\rho_I)$ \EndIf \While{\textit{True}} \State $\rho' \leftarrow \arg\max_{\rho \in S}(L_{ub},\rho)$ \State $gub\leftarrow (L_{ub},\rho')$ \If{$gub\leq k\cdot glb$} \State //$k=1$ implies $gub=glb=\mathcal{L}_p$ \State break \Else \State $Branch(\rho')$ \EndIf \EndWhile \State\Return{ $gub$} \EndProcedure \end{algorithmic} \end{algorithm} For an approximation factor $k$, we can terminate when $gub \leq k\cdot glb$, since we know that $\mathcal{L}_p(f,\mathcal{X})$ lies between $glb$ and $gub$. 
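A compressed Python sketch of this main loop is the following; \texttt{initial\_subproblem}, \texttt{branch} and \texttt{lipschitz\_bound} stand for the procedures described in the previous section (these names and the \texttt{star\_neurons} attribute are ours, chosen for illustration), and since the standard-library \texttt{heapq} module provides a min-heap, the upper bounds are negated to obtain max-heap behaviour:
\begin{verbatim}
import heapq, itertools

def lipbab(network, region, k=1.0):
    tie = itertools.count()                     # tie-breaker for equal keys
    root = initial_subproblem(network, region)  # SymProp + box constraints
    glb, gub = 0.0, lipschitz_bound(root)
    if not root.star_neurons:                   # already a linear region
        glb = gub
    heap = [(-gub, next(tie), root)]
    while heap:
        neg_ub, _, prob = heapq.heappop(heap)
        gub = -neg_ub                           # largest remaining upper bound
        if gub <= k * glb:                      # k = 1 gives the exact value
            return gub
        for child in branch(prob):              # feasible children only
            ub = lipschitz_bound(child)
            if not child.star_neurons:          # linear region: lower bound
                glb = max(glb, ub)
            heapq.heappush(heap, (-ub, next(tie), child))
    return gub
\end{verbatim}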
While calculating the exact local Lipschitz constant, the algorithm is terminated when $gub=glb$ or, equivalently, when the sub-problem at the top of the heap has no $\ast$-neurons. This means that the input region corresponding to the sub-problem is actually a piece-wise linear region, and therefore the local Lipschitz bound of that region is exact rather than an over-approximation. Also, since this is the highest bound among all the sub-problems, by definition, it is the exact local Lipschitz constant of the entire input region $\mathcal{X}$. The algorithm returns tighter estimates of the Lipschitz upper bound iteratively until convergence; hence, terminating early because of constraints like time or memory will always provide us with a valid upper bound. To compute the global Lipschitz constant we simply need to mark every neuron as a $\ast$-neuron. This takes care of all the possible activation patterns throughout $\mathbb{R}^n$, and there is no need for any initial input constraints. The rest of the procedure is the same. Note that in this case the feasible regions of the sub-problems are not necessarily bounded. \subsection{Analysis} To prove both the correctness of the Lipschitz upper-bound calculation and the convergence of the BaB algorithm, we use some simple properties of interval arithmetic. \subsubsection{Correctness of Lipschitz upper bounds} The reason for using an interval $[0,1]$ for a $\ast$-neuron is to consider both activation states of the neuron when calculating the extreme cases of the lower and upper bounds. Since any basic arithmetic operation on intervals bounds the resultant interval considering both end points of every interval (which in our case are the activation states $0$ and $1$), the resultant interval bounds on the partial derivatives obtained after interval matrix multiplication are exact or over-approximated bounds which already cover all possible activation patterns. \subsubsection{Convergence of the algorithm} First, we note that if the activation state of any neuron is decided in a node of the BaB tree, then it stays decided for any node branched from it. This is simply because if a neuron is active/inactive for a set of inputs, then it is also active/inactive accordingly for any of its subsets. Proving the convergence of the Lipschitz bounds depends on the property that if any interval which is used in some basic arithmetic operations is replaced by a sub-interval, then the resulting interval bounds can only tighten. Every time we branch, we reduce the number of $\ast$-neurons by at least one in each branched problem (changing the interval $[0,1]$ to the sub-intervals $[0,0]$/$[1,1]$), which implies that the Lipschitz bounds of the branched sub-problems can only decrease. This, along with the fact that the number of $\ast$-neurons is finite and strictly decreases along any path from the root of the BaB tree, proves the convergence and termination of the algorithm. \subsubsection{Number of sub-problems} Since we have a method which guarantees that every branching will create two feasible sub-problems, i.e., the BaB tree will be a full binary tree, we can show that the number of sub-problems generated can be bounded by at most $2p-1$, where $p$ is the number of piece-wise linear regions in the input space $\mathcal{X}$. We know that a sub-problem corresponding to a piece-wise linear region has no $\ast$-neurons and vice versa, and it is therefore a terminal sub-problem. 
Hence, if we keep generating feasible sub-problems until we have no more to split, we will have at most $p$ leaf nodes (terminal sub-problems), each of which corresponds to a piece-wise linear region. Therefore the total number of sub-problems generated will be at most $2p-1$, since the BaB tree is a full binary tree. However, in practice, the total number of sub-problems generated is usually much smaller, because of the bounding step of branch and bound. \section{Implementation and Experiments} \label{sec:experiments} In this section we provide experimental results to illustrate the working of our algorithm for various parameters. All the experiments were done on Google Colab notebooks. The implementation is done using Python (available on \href{https://github.com/pyrobits/LipBaB}{GitHub}). For feasibility checking, we used the `GLPK' Linear Programming solver with a default tolerance limit of 1e-7. The MLPClassifier module from scikit-learn was used to train the networks. The local Lipschitz computation was done on the networks considering the input region $[0,1]^4$ for the Iris data-set and $[0,0.1]^{10}$ for the synthetic data-sets. The synthetic data-set consisted of 2000 data points of 10 dimensions and 3 classes, generated using scikit-learn. Note that the choice of input region is arbitrary. \begin{table}[] \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Network} & \multirow{2}{*}{p-norm} & \multicolumn{2}{c|}{First estimation} & \multicolumn{2}{c|}{2-approximation} & \multicolumn{2}{c|}{1.5-approximation} & \multicolumn{2}{c|}{Exact} \\ \cline{3-10} & & Time & Value & Time & Value & Time & Value & Time & Value \\ \hline \begin{tabular}[c]{@{}c@{}}Iris\_Network\\ (4,5,5,3)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1\\ 2\\ inf\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.02s\\ 0.02s\\ 0.02s\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.776\\ 8.810\\ 14.663\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.06s\\ 0.07s\\ 0.06s\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.874\\ 7.098\\ 12.606\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.06s\\ 0.06s\\ 0.06s\end{tabular} & \begin{tabular}[c]{@{}c@{}}6.874\\ 7.098\\ 12.606\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.11s\\ 0.09s\\ 0.06s\end{tabular} & \begin{tabular}[c]{@{}c@{}}5.959\\ 6.772\\ 12.606\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}SD\_Network1\\ (10,15,10,3)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1\\ 2\\ inf\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.04s\\ 0.05s\\ 0.04s\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.105\\ 13.019\\ 25.243\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.17s\\ 0.16s\\ 0.18s\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.704\\ 10.658\\ 19.543\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.17s\\ 0.16s\\ 0.18s\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.704\\ 10.658\\ 19.543\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.25s\\ 0.28s\\ 0.27s\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.413\\ 9.531\\ 16.275\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}SD\_Network2\\ (10,20,15,10,3)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1\\ 2\\ inf\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.06s\\ 0.07s\\ 0.06s\end{tabular} & \begin{tabular}[c]{@{}c@{}}101.705\\ 101.940\\ 182.988\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.01s\\ 1.79s\\ 2.26s\end{tabular} & \begin{tabular}[c]{@{}c@{}}70.928\\ 55.969\\ 82.938\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.01s\\ 1.79s\\ 2.26s\end{tabular} & \begin{tabular}[c]{@{}c@{}}70.928\\ 55.969\\ 82.938\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.08s\\ 3.35s\\ 2.97s\end{tabular} & 
\begin{tabular}[c]{@{}c@{}}48.049\\ 40.057\\ 72.286\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}SD\_Network3\\ (10,30,30,30,3)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1\\ 2\\ inf\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.20s\\ 0.20s\\ 0.20s\end{tabular} & \begin{tabular}[c]{@{}c@{}}131.727\\ 139.808\\ 272.416\end{tabular} & \begin{tabular}[c]{@{}c@{}}9.43s\\ 19.11s\\ 18.90s\end{tabular} & \begin{tabular}[c]{@{}c@{}}33.890\\ 30.684\\ 63.035\end{tabular} & \begin{tabular}[c]{@{}c@{}}22.24s\\ 25.94s\\ 23.88s\end{tabular} & \begin{tabular}[c]{@{}c@{}}25.871\\ 28.474\\ 55.311\end{tabular} & \begin{tabular}[c]{@{}c@{}}56.00s\\ 78.88s\\ 59.98s\end{tabular} & \begin{tabular}[c]{@{}c@{}}19.370\\ 19.463\\ 39.111\end{tabular} \\ \hline \end{tabular} \caption{\label{tab:experiments} Lipschitz computation for different approximation factors} \end{table} \begin{figure}[h] \centering \begin{tabular}{ll} \includegraphics[scale=0.4]{convergence_itr_2.png} & \includegraphics[scale=0.4]{convergence_time_2.png} \end{tabular} \caption{Convergence of the algorithm for a network with layer sizes (10,30,30,30,3)} \label{fig:convergence} \end{figure} It was observed that the algorithm achieves a good approximation factor (sufficient for most practical cases) within a reasonable time, but converges more slowly thereafter as the number of sub-problems grows exponentially. Also, the output bounds calculated for a network using SymProp were found to be much tighter than those calculated using naive interval propagation. As expected, the optimization strategy FFilter was found to improve the performance significantly. Achieving arbitrary precision with regard to the feasibility tolerance of constraints is not possible for solvers with finite tolerance limits. For this reason, in very rare cases, the solver may falsely report a set of constraints as feasible if the required precision exceeds its tolerance limit. This might validate false activation patterns, which can (but does not necessarily) cause the algorithm to report a value larger than the true exact Lipschitz constant. \section{Conclusion} We provide techniques to calculate exact bounds of the Lipschitz constant of neural networks, which has several important applications in deep learning. Our main contribution is that this technique can be applied for exact/approximate computation of the local/global Lipschitz constant for any $p$-norm. This work discusses several ideas within a unified branch and bound framework, which can be extended to related problem areas of neural network verification such as output bounds computation and robustness estimation. The algorithm also provides local information about the sensitivity of the neural network corresponding to different parts of the input space. The branch and bound strategy discussed here, based on the partitioning of the input space, can potentially be extended to iteratively refine the estimation of other important properties of neural networks which vary across the input space. The exact computation of the Lipschitz constant for different norms may prove to be useful for other theoretical studies of neural networks as well as for real-life safety-critical scenarios requiring formal guarantees about the behaviour of neural networks. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} A natural source of randomness is biometric identifiers such as fingerprints that are generally transformed into a frequency domain and quantized to obtain bit sequences that are unique to an individual \cite{Campisi}. Similarly, physical identifiers such as fine variations of ring oscillator (RO) outputs or random start-up values of static random access memories (SRAMs), which are caused by uncontrollable manufacturing variations, are safer and cheaper alternatives to key storage in a non-volatile memory \cite{GassendThesis}. Physical identifiers for digital devices such as Internet-of-Things (IoT) devices can be implemented using physical unclonable functions (PUFs) \cite{GassendThesis}. One can use PUFs in various coding schemes as a source of local randomness \cite[Chapter 1]{benimdissertation}, e.g., in the randomized encoder of the wiretap channel \cite{WTC} and of the strong coordination problem \cite{CuffStrongCoord,GiuliaStrongCoord}. We use the basic source model for key agreement from \cite{AhlswedeCsiz,Maurer} to find achievable rate regions for key agreement with PUFs and biometric identifiers. In this classic model, an encoder observes a source output to generate a secret key and sends public side information, i.e., \textit{helper data}, to a decoder, so the decoder can reliably reconstruct the same secret key by observing another source output and the helper data. The main constraints are that the information leaked about the secret key, i.e., \emph{secrecy leakage}, is negligible and the information leaked about the identifier output, i.e., \emph{privacy leakage}, is small \cite{IgnaTrans,LaiTrans}. Furthermore, the amount of public storage should also be minimized to limit the hardware cost \cite{csiszarnarayan}. Suppose the encoder generates a key from a noisy measurement of a hidden (or remote) source output, and a decoder has access to another noisy measurement of the same source and the helper data to reconstruct the same key. We call this model the \emph{generated-secret} (GS) model with a hidden source. This model is introduced in \cite{bizimMMMMTIFS} as an extension of the visible (noiseless) source outputs observed by the encoder, considered in \cite{IgnaTrans,LaiTrans}. Similarly, for the \emph{chosen-secret} (CS) model, an embedded (or chosen) key and noisy identifier measurements are combined by the encoder to generate the public helper data. We consider both models to address different applications. \subsection{Related Work and Motivation} The same identifier is used by multiple encoder and decoder pairs in \cite{LifengTransMultipleUse}, where the identifier outputs observed by different encoders are the same because the encoder measurements are assumed to be noiseless. Therefore, the multiple use of the same noiseless source output allows all encoders to know the secret key of the other encoders. This model does not fit practical key agreement scenarios with identifiers well, because every identifier measurement is noisy. Multiple enrollments of a hidden source using noisy measurements are considered in \cite{LienekeTIFS2019}, where weakly secure secret keys are generated without privacy leakage and storage constraints. Furthermore, there is a causality assumption in \cite{LienekeTIFS2019} on the availability of the helper data, i.e., any decoder has access to all previously-generated helper data. 
This assumption is not necessarily realistic as a decoder of, e.g., an IoT device that embodies a PUF should have low complexity, and the amount of data to process increases linearly with the number of enrollments. In addition, any manipulation in any of the helper data can cause the complete multi-enrollment system to fail. A classic method used for key agreement, i.e., the fuzzy commitment scheme (FCS) \cite{FuzzyCommitment}, is used in \cite{Lieneke} in combination with an SRAM PUF to enroll the noisy outputs of the same SRAM multiple times. The symmetry condition in \cite[Eq. (16)]{Lieneke} conditioned on a fixed SRAM cell state is entirely similar to the symmetry satisfied by binary-input symmetric output (BISO) channels; see e.g., \cite[p. 613]{infcombining}, \cite[Eq. (14)]{bizimMMMMTIFS}. For SRAM outputs that satisfy this symmetry, the normalized (weak) secrecy leakage about each separate secret key is shown to be zero. It is discussed in \cite[Section 3.4]{bizimBenelux} that any uniformly-distributed hidden identifier output with BISO measurement channels satisfies the results in \cite{Lieneke}. In \cite[Theorem 1]{bizimBenelux} the secret-key capacity of the two-enrollment key agreement problem is established for measurement channels with the same channel transition matrix. However, these multi-enrollment models do not consider the privacy leakage and storage constraints, there is no constraint on the independence of the secret keys of different enrollments, and the secrecy leakage constraint is weak and is not applied jointly on all secret keys. Furthermore, optimal random linear code constructions that achieve the boundaries of the key-leakage-storage regions are given in \cite{bizimWZ}, where the classic code constructions FCS and code-offset fuzzy extractors \cite{Dodis2008fuzzy} are shown to be strictly suboptimal. Therefore, the multi-enrollment models and constructions in the literature are strictly suboptimal and not necessarily realistic. We therefore list stronger secrecy constraints jointly on all entities which, in combination with storage-rate and joint privacy-leakage rate constraints, approximate reality better. These constraints define the \textit{multi-entity key agreement} problem, where the entities that use the same identifier do not have to trust other entities after key agreement. Therefore, the multi-entity key agreement problem is a proper multi-user extension of single-enrollment models. We first consider the multi-entity key agreement problem and then analyze a special case of the multi-enrollment key agreement problem to illustrate scenarios for which a single enrollment can be more useful than multiple enrollments and vice versa. Every measurement of an identifier is considered to be noisy due to, e.g., local temperature and voltage changes in the hardware of the PUF circuit or a cut on the finger. Noise components at the encoder and decoder measurements of a hidden source can also be correlated due to, e.g., the surrounding logic in the hardware \cite{MerliROCorrelated} or constant fingertip moisture. This correlation between the noise sequences is modeled in \cite{bizimITW} as a broadcast channel (BC) \cite{CoverandThomas} with an input that is the hidden source output and with outputs that are the noisy encoder and decoder measurements. We use this model for multi-entity key agreement with identifiers, where each entity (i.e., each encoder and decoder pair) observes noisy identifier outputs of the same hidden source through different BCs. 
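To make this correlated-noise BC model concrete, a toy Python simulation is given below (purely illustrative; the Bern$(1/2)$ source and the crossover probabilities are placeholder values of our choosing, not parameters from our analysis). A shared noise component makes the encoder and decoder measurement errors correlated even conditioned on the hidden source:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 100000
x = rng.integers(0, 2, n)                    # hidden source, Bern(1/2)
shared = (rng.random(n) < 0.05).astype(int)  # common noise component
e_enc = (rng.random(n) < 0.05).astype(int)   # encoder-only noise
e_dec = (rng.random(n) < 0.05).astype(int)   # decoder-only noise
x_tilde = x ^ (shared | e_enc)               # encoder measurement
y = x ^ (shared | e_dec)                     # decoder measurement
n_enc, n_dec = x_tilde ^ x, y ^ x            # effective noise sequences
print(np.corrcoef(n_enc, n_dec)[0, 1])       # positive: correlated noise
\end{verbatim}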
For the multi-entity key agreement problem, we allow the BCs to be different as honest entities generally use different hardware implementations of the encoder and decoder pairs, which results in different correlations between noise components. We also consider physically-degraded (PD) and less-noisy (LN) BCs to give finer inner and outer bounds to the key-leakage-storage regions for the GS and CS models of the multi-entity key agreement problem. For the considered PD and LN BCs, we prove that strong privacy can be achieved. In \cite{IgnaTrans,LaiTrans,MatthieuPolar}, an extra common randomness that is available to the encoder and decoder and that is hidden from the eavesdropper is required to obtain strong privacy. This assumption is not realistic since such a common randomness requires hardware protection against invasive attacks, and if such a protection is feasible, then it is not necessary to use an identifier for key agreement. \subsection{Models for Identifier Outputs} We study physical and biometric identifier outputs that are independent and identically distributed (i.i.d.) according to a given probability distribution. These models are reasonable if one uses transform-coding algorithms from \cite{bizimMDPI} that occupy a small hardware area to extract almost i.i.d. bits from PUFs under varying environmental conditions. Similar transform-coding based algorithms have been applied to biometric identifiers to obtain independent output symbols \cite{Transformbio}. These transform-coding algorithms provide almost i.i.d. identifier outputs and noise sequences; however, the correlation between the noise components at the encoder and the decoder is not removed by these methods. Furthermore, PUFs are used for on-demand key reconstruction and physical attacks on PUFs permanently change the identifier outputs \cite{PappuThesis}, so we assume that the eavesdropper cannot obtain information correlated with the PUF outputs, unlike biometric identifiers. \subsection{Summary of Contributions} We extend the key-leakage-storage rate tuple analysis of the single-enrollment model for hidden identifier outputs measured through general BCs in \cite{bizimITW} to consider multi-entity and multi-enrollment key agreement with a set of stringent secrecy constraints. A summary of the main contributions is as follows. \begin{itemize} \item We derive achievable key-leakage-storage rate tuples for the GS model with strong secrecy for any finite number of entities using the same identifier's measurements through different BCs for key agreement. Separate identifier measurements considered in \cite{bizimKittipongTIFS,bizimMMMMTIFS} correspond to a PD BC and the visible source model in \cite{IgnaTrans,LaiTrans} corresponds to a semi-deterministic BC. \item For a set of PD and LN BCs, the privacy-leakage rates for the two-entity key agreement problem are calculated. These PD and LN BCs are shown to provide strong privacy without the need of a common randomness. An outer bound is given for the considered PD and LN BCs. \item We next consider a special case of the multi-enrollment key agreement problem, where all measurement channels are separate (i.e., PD BCs) and they have the same transition matrix. This is a common model used for SRAM PUFs. 
Using a less stringent secrecy leakage constraint that bounds the information leakage for each secret key separately and without the mutual independence constraint on the secret keys, we establish inner and outer bounds for the strong-secrecy key-leakage-storage region for this two-enrollment key agreement problem. The bounds differ only in the Markov chains imposed. This result is a significant improvement over the two-enrollment secret-key rate region (without storage and privacy-leakage rate constraints) established in \cite{bizimBenelux} for weak secrecy, which is recovered by eliminating auxiliary random variables in the proposed rate regions. \item All inner and outer bounds for the GS model are extended to the CS model, which comprises secret-key binding methods that embed a chosen secret key into the encoder. \item We give two scenarios to compare single-enrollment and two-enrollment models and illustrate that for different assumptions on measurement channels, either of the two models can perform better in terms of the privacy-leakage vs. secret-key rate boundary tuples. \end{itemize} \subsection{Organization} This paper is organized as follows. In Section~\ref{sec:problem_setting}, we describe the multi-entity key agreement problem with BC measurements. We give achievable key-leakage-storage regions for the GS and CS models with strong secrecy and BC measurements for any finite number of entities in Section~\ref{sec:achievablescheme}, in addition to inner and outer bounds for PD and LN BCs that satisfy strong privacy. The proposed inner bounds for the two-enrollment key agreement problem in Section~\ref{sec:tightregions} are shown to differ from the outer bounds only in the Markov chains imposed for a special case with less stringent secrecy constraints. In Sections~\ref{sec:achinnergeneral} and~\ref{sec:Twoenrollmentproofs}, proofs of the given rate regions for the general multi-entity key agreement problem and for the two-enrollment key agreement problem, respectively, are given. Section~\ref{sec:conclusion} concludes the paper. \subsection{Notation} Upper case letters represent random variables and lower case letters their realizations. A superscript denotes a string of variables, e.g., $\displaystyle X^n\!=\!X_1,X_2,\ldots, X_i,\ldots, X_n$, and a subscript $i$ denotes the position of a variable in a string. A random variable $\displaystyle X$ has probability distribution $\displaystyle P_X$. Calligraphic letters such as $\displaystyle \mathcal{X}$ denote sets, set sizes are written as $\displaystyle |\mathcal{X}|$ and their complements as $\displaystyle \mathcal{X}^c$. $[1:J]$ denotes the set $\{1,2,\ldots,J\}$ for an integer $J\geq1$ and $[1:J]\setminus\{j\}$ denotes the set $\{1,2,\ldots,j-1,j+1,\ldots,J\}$ for any $j\in[1:J]$. $H_b(x)=-x\log x- (1-x)\log (1-x)$ is the binary entropy function, where we take logarithms to the base $2$, and $H_b^{-1}(\cdot)$ denotes its inverse with range $[0, 0.5]$. $X\sim\text{Bern}(\alpha)$ is a binary random variable with $\Pr[X=1]=\alpha$. A binary symmetric channel (BSC) with crossover probability $p$ is denoted by BSC($p$). $Q(\cdot)$ is the $Q$-function that gives the tail probability for the standard normal distribution. \section{Multi-Entity Key Agreement Model}\label{sec:problem_setting} Consider hidden identifier outputs $X^n$ that are i.i.d. according to a probability distribution $P_X$. 
The hidden (or remote) source with outputs $X^n$ is common to all honest entities that enroll the same identifier, but they observe different noisy measurements of the same hidden source. If there are a finite number $J$ of honest entities that use the same identifier, the $j$-th encoder and decoder pair observes noisy source measurements that are outputs of a BC $P_{\widetilde{X}_jY_j|X}$, with abuse of notation, for all $j\in[1:J]$, where $\mathcal{\widetilde{X}}_j$, $\mathcal{Y}_j$, and $\mathcal{X}$ are finite sets. \begin{figure} \centering \resizebox{1\linewidth}{!}{ \begin{tikzpicture} \node (so) at (-2.5,-4.5) [draw,rounded corners = 5pt, minimum width=0.4cm,minimum height=0.6cm, align=left] {$P_X$}; \node (a) at (1.5,-1.0) [draw,rounded corners = 6pt, minimum width=2.10cm,minimum height=1.0cm, align=left] {$ (S_1,W_1) \overset{(a)}{=} f_{\text{GS},1}(\widetilde{X}^n_1)$\\ $W_1\! \overset{(b)}{=}\! f_{\text{CS},1}(\widetilde{X}^n_1,S_1)$}; \node (kdb) at (1.5,2.6) [draw,rounded corners = 6pt, minimum width=1.0cm,minimum height=1.0cm, align=left] {$\;\;$ Key\\ Database}; \node (hddb) at (3.9,1.9) [draw,rounded corners = 6pt, minimum width=1.0cm,minimum height=1.0cm, align=left] {$\;\;$ Public\\ Database}; \node (comp) at (6.2,1.65) [draw,rounded corners = 6pt, minimum width=0.6cm,minimum height=0.6cm, align=left] {$=$}; \node (quest) [right of = comp, node distance = 1.1cm] {?}; \node (f) at (1.5,-3.5) [draw,rounded corners = 5pt, minimum width=1.2cm,minimum height=0.9cm, align=left] {$P_{\widetilde{X}_1Y_1|X}$}; \node (b) at (6,-1.0) [draw,rounded corners = 6pt, minimum width=3.2cm,minimum height=1.1cm, align=left] {$\hat{S}_1 = g_1\left(Y^n_1,W_1\right)$}; \node (a1) [right of = so, node distance = 1.35cm] {$X^n$}; \node (b1) [below of = b, node distance = 2.5cm] {$Y^n_1$}; \node (a2) [above of = a, node distance = 1.6cm] {$S_1$}; \node (svhat) [right of = kdb, node distance = 4.3cm] {$S_1$}; \node (w5) [right of = a2, node distance = 2.40cm] {$W_1$}; \node (shat) [right of = w5, node distance = 2.0cm] {$\hat{S}_1$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (so.east) -- (a1.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a1.east) -- ($(a1.east)+(0.5,0.0)$)-- ($(f.west)-(1.04,0.01)$)|- (f.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (f.north) -- (a.south) node [midway, right] {$\widetilde{X}^n_1$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (f.east) -- (b1.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (b1.north) -- (b.south); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a.east) -- ($(b.west)-(0.87,0.01)$) -|($(w5.south)-(0.2,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(w5.south)+(0.18,0)$) -- ($(b.west)-(0.32,0.01)$)|- (b.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(w5.north)-(0.2,0)$) -- ($(hddb.south)-(0.2,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, 
postaction={decorate}, thick, shorten >=1.4pt] ($(hddb.south)+(0.2,0)$) -- ($(w5.north)+(0.2,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a2.south)+(0.2,0)$)-- ($(a.north)+(0.2,0)$) node [midway, right] {$(b)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a.north)-(0.2,0)$)-- ($(a2.south)-(0.2,0)$) node [midway, left] {$(a)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a2.north)-(0.2,0)$)-- ($(kdb.south)-(0.2,0)$) node [midway, left] {$(a)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(kdb.south)+(0.2,0)$)-- ($(a2.north)+(0.2,0)$) node [midway, right] {$(b)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}} , postaction={decorate}, thick, shorten >=1.4pt] ($(kdb.east)+(0,0)$)-- ($(svhat.west)+(0,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(b.north)-(0.1,0)$)-- ($(shat.south)+(0,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(shat.north)+(0,0)$)-- (comp.south); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(svhat.south)-(0,0)$)-- (comp.north); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (comp.east) -- (quest.west); \node (a5) at (1.5,-8) [draw,rounded corners = 6pt, minimum width=2.10cm,minimum height=1.0cm, align=left] {$ (S_2,W_2) \overset{(a)}{=} f_{\text{GS},2}(\widetilde{X}^n_2)$\\ $W_2\! \overset{(b)}{=}\! 
f_{\text{CS},2}(\widetilde{X}^n_2,S_2)$}; \node (kdb5) at (1.5,-11.6) [draw,rounded corners = 6pt, minimum width=1.0cm,minimum height=1.0cm, align=left] {$\;\;$ Key\\ Database}; \node (hddb5) at (3.9,-10.9) [draw,rounded corners = 6pt, minimum width=1.0cm,minimum height=1.0cm, align=left] {$\;\;$ Public\\ Database}; \node (comp5) at (6.2,-10.65) [draw,rounded corners = 6pt, minimum width=0.6cm,minimum height=0.6cm, align=left] {$=$}; \node (quest5) [right of = comp5, node distance = 1.1cm] {?}; \node (f5) at (1.5,-5.5) [draw,rounded corners = 5pt, minimum width=1.2cm,minimum height=0.9cm, align=left] {$P_{\widetilde{X}_2Y_2|X}$}; \node (b5) at (6,-8) [draw,rounded corners = 6pt, minimum width=3.2cm,minimum height=1.1cm, align=left] {$\hat{S}_2 = g_2\left(Y^n_2,W_2\right)$}; \node (b15) [above of = b5, node distance = 2.5cm] {$Y^n_2$}; \node (a25) [below of = a5, node distance = 1.6cm] {$S_2$}; \node (svhat5) [right of = kdb5, node distance = 4.3cm] {$S_2$}; \node (w55) [right of = a25, node distance = 2.40cm] {$W_2$}; \node (shat5) [right of = w55, node distance = 2.0cm] {$\hat{S}_2$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a1.east) -- ($(a1.east)+(0.5,0.0)$)-- ($(f5.west)-(1.04,0.01)$)|- (f5.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (f5.south) -- (a5.north) node [midway, right] {$\widetilde{X}^n_2$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (f5.east) -- (b15.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (b15.south) -- (b5.north); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (a5.east) -- ($(b5.west)-(0.87,0.01)$) -|($(w55.north)-(0.2,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(w55.north)+(0.18,0)$) -- ($(b5.west)-(0.32,0.01)$)|- (b5.west); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(w55.south)-(0.2,0)$) -- ($(hddb5.north)-(0.2,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(hddb5.north)+(0.2,0)$) -- ($(w55.south)+(0.2,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a25.north)+(0.2,0)$)-- ($(a5.south)+(0.2,0)$) node [midway, right] {$(b)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a5.south)-(0.2,0)$)-- ($(a25.north)-(0.2,0)$) node [midway, left] {$(a)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(a25.south)-(0.2,0)$)-- ($(kdb5.north)-(0.2,0)$) node [midway, left] {$(a)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(kdb5.north)+(0.2,0)$)-- ($(a25.south)+(0.2,0)$) node [midway, right] {$(b)$}; \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}} , postaction={decorate}, thick, shorten >=1.4pt] ($(kdb5.east)+(0,0)$)-- 
($(svhat5.west)+(0,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(b5.south)-(0.1,0)$)-- ($(shat5.north)+(0,0)$); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(shat5.south)+(0,0)$)-- (comp5.north); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] ($(svhat5.north)-(0,0)$)-- (comp5.south); \draw[decoration={markings,mark=at position 1 with {\arrow[scale=1.5]{latex}}}, postaction={decorate}, thick, shorten >=1.4pt] (comp5.east) -- (quest5.west); \end{tikzpicture} } \caption{Illustration of the multi-entity key agremeent problem for $J=2$ entities with encoder and decoder measurements through BCs for $(a)$ the GS model and $(b)$ the CS model.}\label{fig:ProblemDefinitionfortwo} \end{figure} For the GS model illustrated in Fig.~\ref{fig:ProblemDefinitionfortwo}$(a)$ for $J=2$ honest entities, the $j$-th encoder $f_{\text{GS},j}(\cdot)$ generates helper data $W_j$ and a secret key $S_j$ from its observed sequence $\widetilde{X}^n_j$. All secret keys are stored in a secure database, whereas helper data are stored in a public database so that an eavesdropper has access only to the helper data. Using the helper data $W_j$ and its observed sequence $Y^n_j$, the $j$-th decoder $g_j(\cdot,\cdot)$ generates the key estimate $\hat{S}_j$. Similar steps are applied for the CS model in Fig.~\ref{fig:ProblemDefinitionfortwo}$(b)$ also for $J=2$ honest entities, except that each $S_j$ should be embedded into the $j$-th encoder $f_{\text{CS},j}(\cdot,\cdot)$. Denote a set of secret keys as \begin{align} \mathcal{S}_{\mathcal{K}} = \{S_j:j\in\mathcal{K}\} \end{align} and a set of helper data as \begin{align} \mathcal{W}_{\mathcal{K}} = \{W_j:j\in\mathcal{K}\} \end{align} for any $\mathcal{K} \subseteq[1:J]$. A (secret-key, privacy-leakage, storage), or key-leakage-storage, rate tuple is denoted as $(R_{\text{s}}, R_{\ell},R_{\text{w}})$. Similarly, we denote a set of secret-key rates, for any $\mathcal{K} \subseteq[1:J]$, as \begin{align} \mathcal{R}_{\text{s},\mathcal{K}} = \{R_{\text{s},j}:j\in\mathcal{K}\} \end{align} and a set of storage rates as \begin{align} \mathcal{R}_{\text{w},\mathcal{K}} = \{R_{\text{w},j}:j\in\mathcal{K}\}. \end{align} We next define the multi-entity key-leakage-storage regions. \begin{definition} \normalfont A key-leakage-storage rate tuple $(\mathcal{R}_{\text{s},[1:J]} , R_{\ell},\mathcal{R}_{\text{w},[1:J]} )$ is achievable for the multi-entity GS and CS models with $j$-th encoder and decoder measurements through a BC $P_{\widetilde{X}_jY_j|X}$ if, given any $\delta\!>\!0$, there is some $n\!\geq\!1$, and $J$ encoder and decoder pairs for which $\displaystyle R_{\text{s},j}=\frac{\log|\mathcal{S}_j|}{n}$ for all $ j\in[1:J]$ and \begin{alignat}{2} &\Pr\left[\underset{j\in[1:J]}{\bigcup}\{S_j\ne\hat{S}_j\}\right] \leq \delta &&\quad \text{(reliability)}\label{eq:reliabilityconst}\\ &\frac{1}{n}H(S_j) \geq R_{\text{s},j}-\delta,\quad\;\;\, \forall j\!\in\![1\!:\!J]&&\quad \text{(key uniformity)}\label{eq:uniformityconst}\\[2pt] &I\left(\mathcal{S}_{\mathcal{K}};\mathcal{S}_{\mathcal{K}^c}\right)\leq \delta,\qquad\;\;\;\;\; \forall \mathcal{K}\!\subseteq\![1\!:\!J]&&\quad \text{(strong key ind.)}\label{eq:keyindependence}\\[2pt] &\frac{1}{n}I(X^n;\mathcal{W}_{[1:J]})\! \leq\! 
R_{\ell}\!+\!\delta &&\quad \text{(privacy)} \label{eq:privacyconst}\\[2pt] &I\left(\mathcal{S}_{[1:J]};\mathcal{W}_{[1:J]}\right) \leq \delta &&\quad\text{(strong secrecy)} \label{eq:secrecyconst}\\[2pt] &\frac{1}{n} \log|\mathcal{W}_j| \leq R_{\text{w},j}\!+\!\delta,\quad \forall j\!\in\![1\!:\!J] &&\quad \text{(storage)}\label{eq:storageconst}. \end{alignat} The \emph{multi-entity key-leakage-storage} regions $\mathcal{C}_{\text{gs}}$ for the GS model and $\mathcal{C}_{\text{cs}}$ for the CS model are the closures of the set of all achievable rate tuples $(\mathcal{R}_{\text{s},[1:J]} , R_{\ell},\mathcal{R}_{\text{w},[1:J]})$. \end{definition} Both secret-key uniformity (\ref{eq:uniformityconst}) and storage rate (\ref{eq:storageconst}) constraints correspond to $J$ separate constraints. However, reliability (\ref{eq:reliabilityconst}), strong and mutual key independence (\ref{eq:keyindependence}), privacy-leakage rate (\ref{eq:privacyconst}), and secrecy leakage (\ref{eq:secrecyconst}) constraints are joint constraints for all $J$ honest entities. Suppose that, after key generation, each honest entity has access only to its own secret key; it does not have access to other entities' keys or sequences, nor even to the sequence it observed to generate its secret key. This assumption motivates imposing (\ref{eq:keyindependence}). The mutual key independence constraint in (\ref{eq:keyindependence}) is not imposed in the multi-enrollment key agreement problem considered in \cite{Lieneke}. Furthermore, a normalized (weak) version of this constraint is imposed in the multi-enrollment key agreement problem considered in \cite{LienekeTIFS2019}, where the $j$-th decoder $g_j(\cdot,\cdot)$ is assumed to have access to the set of helper data $\mathcal{W}_{[1:j]}$ for all $j\in[1:J]$. The lack of the mutual key independence constraint and the assumption of availability of all previous helper data require that different encoder and decoder pairs should trust each other after key agreement. This can be the case, e.g., if all enrollments are made by the same entity. Therefore, the multi-entity key agreement problem imposes strictly more stringent constraints than the multi-enrollment key agreement problem. The unnormalized secrecy leakage constraint (\ref{eq:secrecyconst}) provides strong secrecy, which is a stronger notion than the weak secrecy considered in \cite{IgnaTrans, LaiTrans, bizimMMMMTIFS,bizimKittipongTIFS,Lieneke,LienekeTIFS2019}. Furthermore, (\ref{eq:secrecyconst}) is more stringent than the set of individual secrecy leakage constraints $I(S_j;\mathcal{W}_{[1:J]})\leq\delta$ imposed for all $j\in[1:J]$, considered in \cite{Lieneke} for symmetric SRAM PUF outputs in combination with the suboptimal FCS. The unnormalized privacy leakage $I(X^n;\mathcal{W}_{[1:J]})$ cannot be bounded by a finite number in general. We illustrate special strong privacy cases in the next section. \section{Inner Bounds}\label{sec:achievablescheme} For the multi-entity key agreement problem, we are interested in characterizing the optimal trade-off among the secret-key, privacy-leakage, and storage rates with strong secrecy when the encoders and decoders of any finite number $J$ of entities observe BC measurements of the same hidden identifier outputs. We give achievable rate regions for the GS and CS models in Theorem~\ref{theo:GeneralInnergscs}. The proofs are given in Section~\ref{sec:achinnergeneral}.
Denote \begin{align} \mathcal{U}_{\mathcal{K}} = \{U_j:j\in\mathcal{K}\} \end{align} and let $\max\{\cdot,\cdot\}$ denote the maximum of its arguments. \begin{theorem}[Inner Bounds for Multi-entity GS and CS Models]\label{theo:GeneralInnergscs} An achievable rate region $\mathcal{R}_{\text{gs}}$ for the multi-entity GS model with $J$ entities is the union over all $P_{U_{j}|\widetilde{X}_j}$ for all $j\in [1:J]$ of the rate tuples such that $R_{\text{s},j} \geq 0$ for all $j \in [1:J]$ and \begin{alignat}{2} &R_{\text{s},j}\leq I(U_j;Y_j)-I(U_j;\mathcal{U}_{[1:J]\setminus\{j\}}), &&\;\; \forall j\in [1:J]\label{eq:corrkeyrate}\\ &R_\ell \geq \sum_{j=1}^{J}\max\{0, I(U_j;X)\!-\!I(U_j;Y_j)\}, \label{eq:corrleakagerate}\\ &R_{\text{w},j}\! \geq\! I(U_j;\widetilde{X}_j)-I(U_j;Y_j),&& \;\;\forall j \in [1:J] \label{eq:corrstoragerate}\\ &R_{\text{s},j}+R_{\text{w},j}\leq H(U_j|\,\mathcal{U}_{[1:J]\setminus\{j\}}),&& \;\;\forall j \in [1:J]. \label{eq:corrstorageandkeysumrate} \end{alignat} An achievable rate region $\mathcal{R}_{\text{cs}}$ for the multi-entity CS model with $J$ entities is the union over all $P_{U_{j}|\widetilde{X}_j}$ for all $j\in [1:J]$ of the rate tuples such that $R_{\text{s},j} \geq 0$ for all $j \in [1:J]$, (\ref{eq:corrkeyrate}), (\ref{eq:corrleakagerate}), and \begin{alignat}{2} &R_{\text{w},j}\! \geq\! I(U_j;\widetilde{X}_j)-I(U_j;\mathcal{U}_{[1:J]\setminus\{j\}}),&&\quad \forall j \in [1:J] \label{eq:cscorrstoragerate}\\ &R_{\text{w},j}\leq H(U_j|\,\mathcal{U}_{[1:J]\setminus\{j\}}),&& \quad\forall j \in [1:J] \label{eq:cscorrstorageandkeysumrate}. \end{alignat} For the achievable rate regions $\mathcal{R}_{\text{gs}}$ and $\mathcal{R}_{\text{cs}}$, we have \begin{align} \displaystyle P_{\mathcal{U}_{[1:J]}\mathcal{\widetilde{X}}_{[1:J]}X\mathcal{Y}_{[1:J]}}=P_{X}\prod_{j=1}^JP_{U_j|\widetilde{X}_j}P_{\widetilde{X}_jY_j|X} \label{eq:fixedprobdistr}. \end{align} \end{theorem} \begin{corollary}\label{cor:strongprivacy} Suppose for all $j\in[1:J]$ that \begin{itemize} \item $\widetilde{X}_j-Y_j-X$ form a Markov chain, i.e., $X$ is a PD version of $Y_j$ with respect to $\widetilde{X}_j$, or \item $P_{XY_j|\widetilde{X}_j}$ is a LN BC with $I(U_j;Y_j)\geq I(U_j;X)$ for all $P_{U_j|\widetilde{X}_j}$. \end{itemize} For these cases, strong privacy, i.e., \begin{align} R_\ell\geq 0 \label{eq:privacyleakzero} \end{align} can be achieved for the multi-entity GS and CS models in combination with the other corresponding bounds given in Theorem~\ref{theo:GeneralInnergscs}. \end{corollary} The proof of Corollary~\ref{cor:strongprivacy} follows from Theorem~\ref{theo:GeneralInnergscs} because $I(U_j;X)-I(U_j;Y_j)\leq 0$ for all $j\in[1:J]$ for the BCs considered in Corollary~\ref{cor:strongprivacy}. Corollary~\ref{cor:strongprivacy} illustrates that it is possible to obtain strong privacy, i.e., negligible unnormalized privacy leakage, without the common randomness, hidden from an eavesdropper, that is assumed in \cite{IgnaTrans,LaiTrans,MatthieuPolar}. This is the case because the observation $Y^n_j$ of each decoder is ``better'' than the observation $\widetilde{X}^n_j$ of the corresponding encoder with respect to the hidden source $X^n$ for all entities. \begin{remark} \normalfont The rate regions for our problem depend on the joint conditional probability distributions $P_{XY_j|\widetilde{X}_j}$ rather than only the marginal conditional distributions.
Thus, the key-leakage-storage regions for the stochastically-degraded BCs are not necessarily equal to the regions for the corresponding PD BCs, unlike in the classic BC problem. Furthermore, since $P_{\mathcal{\widetilde{X}}_{[1:J]}X\mathcal{Y}_{[1:J]}}$ is fixed, the distinction between the LN BCs and essentially-less noisy BCs \cite{NairEssentially} is not necessary. \end{remark} We next give simple outer bounds for the multi-entity key-leakage-storage regions $\mathcal{C}_{\text{gs}}$ for the GS model and $\mathcal{C}_{\text{cs}}$ for the CS model when the BCs $P_{XY_j|\widetilde{X}_j}$ for all $j\in[1:J]$ are PD BCs or LN BCs, as defined in Corollary~\ref{cor:strongprivacy}. These simple outer bounds give insights into the reason for different bounds on the secret-key rates. Based on these insights, we show a special multi-enrollment case in the next section with a less stringent secrecy constraint, for which the inner and outer bounds differ only in the Markov chains imposed, and we illustrate that they match for simpler models. \begin{lemma}\label{lem:zeroprivacyouterbound} Suppose one of the cases given in Corollary~\ref{cor:strongprivacy} is satisfied by the BCs $P_{XY_j|\widetilde{X}_j}$ for all $j\in[1:J]$. An outer bound on the multi-entity key-leakage-storage region $\mathcal{C}_{\text{gs}}$ is the union over all $P_{U_{j}|\widetilde{X}_{j}}$, where $U_j-\widetilde{X}_{j}-(X,Y_j)$ form a Markov chain, for all $j\in[1:J]$ of the rate tuples such that $R_{\text{s},j} \geq 0$ for all $j \in [1:J]$, (\ref{eq:corrstoragerate}), (\ref{eq:privacyleakzero}), and \begin{align} &R_{\text{s},j} \leq I(U_j;Y_j),\qquad\qquad\forall j\in[1:J]\label{eq:OuterSKRatetwoentityrall}. \end{align} An outer bound to the multi-entity key-leakage-storage region $\mathcal{C}_{\text{cs}}$ for the same BCs $P_{XY_j|\widetilde{X}_j}$ is the union over all $P_{U_{j}|\widetilde{X}_{j}}$, where $U_j-\widetilde{X}_{j}-(X,Y_j)$ form a Markov chain, for all $j\in[1:J]$ of the rate tuples such that $R_{\text{s},j} \geq 0$ for all $j \in [1:J]$, (\ref{eq:privacyleakzero}), (\ref{eq:OuterSKRatetwoentityrall}), and \begin{alignat}{2} &R_{\text{w},j}\geq I(U_j;\widetilde{X}_j),\qquad\qquad\forall j\in[1:J]\label{eq:Outercsstortwoentityrall}. \end{alignat} \end{lemma} The proof of Lemma~\ref{lem:zeroprivacyouterbound} follows straightforwardly from the steps in \cite[Section VI]{bizimMMMMTIFS}, defining the auxiliary random variables $U_{j,i}= (S_j,W_j,Y_j^{i-1})$ for all $j\in[1:J]$ and $i\in[1:n]$, and by bounding $I(X^n;\mathcal{W}_{[1:J]})\geq 0$; therefore, we omit the proof. The outer bounds do not include the inequalities in (\ref{eq:corrstorageandkeysumrate}) and (\ref{eq:cscorrstorageandkeysumrate}). Furthermore, the secret-key rate achieved by the inner bound in (\ref{eq:corrkeyrate}) is smaller than the outer bound given in (\ref{eq:OuterSKRatetwoentityrall}), where the difference is the term $-I(U_j;\mathcal{U}_{[1:J]\setminus\{j\}})$. This term is a result of the constraint in (\ref{eq:independenceofindices}) that is imposed to satisfy the strong and mutual key independence constraint given in (\ref{eq:keyindependence}). Therefore, we next consider a model without the constraint in (\ref{eq:keyindependence}) and use a secrecy-leakage constraint that is less stringent than the one in (\ref{eq:secrecyconst}), i.e., replace (\ref{eq:secrecyconst}) by \begin{align} I(S_j;\mathcal{W}_{[1:J]})\leq\delta,\qquad\qquad \forall j\in[1:J] \end{align} which is also a strong secrecy metric.
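The data-processing argument behind the strong-privacy cases of Corollary~\ref{cor:strongprivacy} and Lemma~\ref{lem:zeroprivacyouterbound} can also be checked numerically. The following is our own sketch in Python (binary alphabets, BSCs, and randomly drawn test channels $P_{U|\widetilde{X}}$; an empirical sanity check under these assumptions, not a proof) that $I(U;X)\leq I(U;Y)$ whenever $\widetilde{X}-Y-X$ form a Markov chain.

\begin{verbatim}
# Sanity check (ours): if Xtilde - Y - X is a Markov chain, then
# I(U;X) <= I(U;Y) for every U - Xtilde - (Y,X); binary alphabets.
import itertools, math, random

def bsc(p):                      # transition matrix of a BSC(p)
    return [[1 - p, p], [p, 1 - p]]

P_Xt   = [0.5, 0.5]              # Bern(1/2) encoder observation
P_Y_Xt = bsc(0.06)               # Xtilde -> Y
P_X_Y  = bsc(0.10)               # Y -> X, so Xtilde - Y - X holds

def mutual_infos(P_U_Xt):
    """Return (I(U;Y), I(U;X)) for the chain U - Xtilde - Y - X."""
    P_uy, P_ux, P_u = {}, {}, {}
    for u, xt, y, x in itertools.product(range(2), repeat=4):
        p = P_U_Xt[xt][u] * P_Xt[xt] * P_Y_Xt[xt][y] * P_X_Y[y][x]
        P_uy[u, y] = P_uy.get((u, y), 0.0) + p
        P_ux[u, x] = P_ux.get((u, x), 0.0) + p
        P_u[u] = P_u.get(u, 0.0) + p
    P_y = [sum(P_uy[u, y] for u in range(2)) for y in range(2)]
    P_x = [sum(P_ux[u, x] for u in range(2)) for x in range(2)]
    I_UY = sum(p * math.log2(p / (P_u[u] * P_y[y]))
               for (u, y), p in P_uy.items() if p > 0)
    I_UX = sum(p * math.log2(p / (P_u[u] * P_x[x]))
               for (u, x), p in P_ux.items() if p > 0)
    return I_UY, I_UX

random.seed(1)
for _ in range(1000):            # random test channels P_{U|Xtilde}
    a, b = random.random(), random.random()
    I_UY, I_UX = mutual_infos([[a, 1 - a], [b, 1 - b]])
    assert I_UX <= I_UY + 1e-9
\end{verbatim}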
Due to the lack of a mutual key independence constraint, the model in the next section is not a multi-entity model but rather a multi-enrollment model. For a special case of this multi-enrollment key agreement problem, we establish inner and outer bounds for the key-leakage-storage regions that comprise the same bounds but for different Markov chains. \section{Bounds for a Multi-Enrollment Model}\label{sec:tightregions} Consider next the multi-enrollment model, where the strong and mutual key independence constraint (\ref{eq:keyindependence}) of the multi-entity model is not imposed. Assume further $J=2$ entities that measure noisy outputs of the same hidden source $X^n$ through separate channels that have the same channel transition matrices, i.e., for all $j\in[1:2]$, $\tilde{x}_j\in\mathcal{\widetilde{X}}$, and $y_j\in\mathcal{\widetilde{X}}$ we have \begin{align} P_{\widetilde{X}_jY_j|X}(\tilde{x}_j,y_j|x)=P_{\widetilde{X}|X}(\tilde{x}_j|x)P_{\widetilde{X}|X}(y_j|x). \label{eq:SRAMchanneldef} \end{align} This model is common for SRAM PUFs, for which each measurement channel is modeled as a BSC with the same crossover probability corresponding to a worst-case scenario~\cite{RoelSRAMPUF}. Using (\ref{eq:SRAMchanneldef}), we define a multi-enrollment model. \begin{definition}\label{def:twoenrollment} \normalfont A key-leakage-storage rate tuple $(\widebar{R}_{\text{s},1}, \widebar{R}_{\text{s},2}, \widebar{R}_{\ell}, \widebar{R}_{\text{w},1}, \widebar{R}_{\text{w},2})$ is achievable for the multi-enrollment GS and CS models with measurements through a BC $\displaystyle P_{\widetilde{X}Y|X}(\tilde{x},y|x)$ as in (\ref{eq:SRAMchanneldef}) if, given any $\delta\!>\!0$, there is some $n\!\geq\!1$, and two encoder and decoder pairs for which $\displaystyle \widebar{R}_{\text{s},1}=\frac{\log|\mathcal{S}_1|}{n}$, $\displaystyle \widebar{R}_{\text{s},2}=\frac{\log|\mathcal{S}_2|}{n}$, $\displaystyle\widebar{R}_{\text{w},1}=\frac{H(W_1)}{n}$, $\displaystyle\widebar{R}_{\text{w},2}=\frac{H(W_2)}{n}$, and \begin{alignat}{2} &\Pr\left[\{S_1\ne\hat{S}_1\}\bigcup\{S_2\ne\hat{S}_2\}\right] \leq \delta &&\quad \text{(reliability)}\label{eq:reliabilityconstJ2}\\[2pt] &\frac{1}{n}H(S_j) = \widebar{R}_{\text{s},j}-\delta,\qquad\;\;\, j=1,2&&\quad \text{(key uniformity)}\label{eq:uniformityconstJ2}\\[2pt] &\frac{1}{n}I(X^n;W_1,W_2)\! =\! \widebar{R}_{\ell}\!+\!\delta &&\quad \text{(privacy)} \label{eq:privacyconstJ2}\\[2pt] &I\left(S_j;W_1,W_2\right) \leq \delta, \qquad\quad j=1,2&&\quad\text{(strong secrecy)} \label{eq:secrecyconstJ2}\\[2pt] &\frac{1}{n} \log|\mathcal{W}_j|= \widebar{R}_{\text{w},j}\!+\!\delta,\qquad j=1,2 &&\quad \text{(storage)}\label{eq:storageconstJ2}\\ &I(W_1;W_2)\leq \delta&&\quad \text{(storage ind.)}.\label{eq:storageindependence} \end{alignat} The \emph{multi-enrollment key-leakage-storage} regions $\mathcal{\widebar{C}}_{\text{gs},J=2}$ for the GS model and $\mathcal{\widebar{C}}_{\text{cs},J=2}$ for the CS model are the closures of the set of all achievable rate tuples $(\widebar{R}_{\text{s},1} , \widebar{R}_{\text{s},2} ,\widebar{R}_{\ell},\widebar{R}_{\text{w},1},\widebar{R}_{\text{w},2})$. \end{definition} We characterize in Theorem~\ref{theo:Twoenrollmentgscs} inner and outer bounds for $\mathcal{\widebar{C}}_{\text{gs},J=2}$ and $\mathcal{\widebar{C}}_{\text{cs},J=2}$.
The proofs of Theorem~\ref{theo:Twoenrollmentgscs} are given in Section~\ref{sec:Twoenrollmentproofs}, where the reason for the necessity of the secrecy-leakage constraint in (\ref{eq:secrecyconstJ2}) that is less stringent than the joint secrecy-leakage constraint in (\ref{eq:secrecyconst}) is given in Remark~\ref{rem:whylessstringent}. Similarly, the reason for the necessity of the strong helper data (storage) independence constraint in (\ref{eq:storageindependence}) is discussed in Remark~\ref{rem:helperdataindependence}. We remark that the equalities in (\ref{eq:uniformityconstJ2}), (\ref{eq:privacyconstJ2}), and (\ref{eq:storageconstJ2}) are required in the outer bounds in Theorem~\ref{theo:Twoenrollmentgscs} to provide both upper and lower bounds on $\widebar{R}_{\ell}$ and $\widebar{R}_{\text{w},j}$ in terms of Shannon entropy terms. Denote \begin{align} j'= 3-j,\qquad\qquad\qquad j=1,2.\label{jprimedef} \end{align} \begin{theorem}\label{theo:Twoenrollmentgscs} {\normalfont (Inner Bounds for Multi-enrollment GS and CS Models)}: An achievable multi-enrollment key-leakage-storage region $\mathcal{\widebar{R}}_{\text{gs},J=2}$ is the union over all $P_{U_{1}|\widetilde{X}_1}$ and $P_{U_{2}|\widetilde{X}_2}$ of the rate tuples such that $\widebar{R}_{\text{s},j} \geq 0$ for $j=1,2$ and \begin{alignat}{2} &\widebar{R}_{\text{s},j}\leq I(U_j;Y_j), &&\quad j=1,2\label{eq:corrkeyrateTwoenroll}\\ &\widebar{R}_\ell \geq \sum_{j=1}^2\big(I(U_j;X)\!-\!I(U_j;Y_j)\big), \label{eq:corrleakagerateTwoenroll}\\ &\widebar{R}_\ell \leq \sum_{j=1}^2\big(I(U_j;X)\!-\!I(U_j;\widetilde{X}_j)\!+\!\widebar{R}_{\text{w},j}\big), \label{eq:corrleakagerateTwoenrollupperbound}\\ &\widebar{R}_{\text{w},j}\! \geq\! I(U_j;\widetilde{X}_j)-I(U_j;Y_j),&& \quad j=1,2 \label{eq:corrstoragerateTwoenroll}\\ &\widebar{R}_{\text{s},j}+\widebar{R}_{\text{w},j}\leq H(U_j),&&\quad j=1,2\label{eq:sumonsamesandwpart1Twoenroll}\\ &\widebar{R}_{\text{s},j}+\widebar{R}_{\text{w},j}+\widebar{R}_{\text{w},j'}\leq H(U_{j},U_{j'}),&&\quad j=1,2\label{eq:sumonsamesjandwjandjprimeTwoenroll}. \end{alignat} An achievable multi-enrollment key-leakage-storage region $\mathcal{\widebar{R}}_{\text{cs},J=2}$ is the union over all $P_{U_{1}|\widetilde{X}_1}$ and $P_{U_{2}|\widetilde{X}_2}$ of the rate tuples such that $\widebar{R}_{\text{s},j} \geq 0$ for $j=1,2$, (\ref{eq:corrkeyrateTwoenroll})-(\ref{eq:corrleakagerateTwoenrollupperbound}), and \begin{alignat}{2} &\widebar{R}_{\text{w},j}\! \geq\! I(U_j;\widetilde{X}_j),&&\quad j=1,2 \label{eq:cscorrstoragerateTwoenroll}\\ &\widebar{R}_{\text{w},j}\leq H(U_j),&&\quad j=1,2\label{eq:cssumonsamesandwpart1Twoenroll}\\ &\widebar{R}_{\text{w},j}\!+\!\widebar{R}_{\text{w},j'}\!\leq\! H(U_{j},U_{j'})\!+\!\widebar{R}_{\text{s},j'},&&\quad j=1,2\label{eq:cssumonsamesjandwjandjprimeTwoenroll}. \end{alignat} For both achievable rate regions $\mathcal{\widebar{R}}_{\text{gs},J=2}$ and $\mathcal{\widebar{R}}_{\text{cs},J=2}$, we have \begin{align} \displaystyle &P_{U_{1}U_2\widetilde{X}_{1}\widetilde{X}_2XY_{1}Y_2}({u_{1},u_2,\widetilde{x}_{1},\widetilde{x}_2,x,y_{1},y_2})\nonumber\\ &\quad=P_{U_1|\widetilde{X}_1}(u_1|\tilde{x}_1)P_{U_2|\widetilde{X}_2}(u_2|\tilde{x}_2)P_{\widetilde{X}|X}(\tilde{x}_1|x)P_{\widetilde{X}|X}(\tilde{x}_2|x)\nonumber\\ &\quad\qquad \times P_{\widetilde{X}|X}(y_1|x)P_{\widetilde{X}|X}(y_2|x)P_{X}(x) \label{eq:fixedprobdistrTwoenroll}. 
\end{align} {\normalfont (Outer Bounds for Multi-enrollment GS and CS Models)} An outer bound for $\mathcal{\widebar{C}}_{\text{gs},J=2}$ is the union over all $P_{U_{1}|\widetilde{X}_{1}}$ and $P_{U_{2}|\widetilde{X}_2}$ of the rate tuples such that $\widebar{R}_{\text{s},j} \geq 0$, (\ref{eq:corrkeyrateTwoenroll}) - (\ref{eq:sumonsamesjandwjandjprimeTwoenroll}), and $U_j-\widetilde{X}_{j}-X-Y_j$ form a Markov chain for $j=1,2$. An outer bound for $\mathcal{\widebar{C}}_{\text{cs},J=2}$ is the union over all $P_{U_{1}|\widetilde{X}_{1}}$ and $P_{U_{2}|\widetilde{X}_2}$ of the rate tuples such that $\widebar{R}_{\text{s},j} \geq 0$, (\ref{eq:corrkeyrateTwoenroll}) - (\ref{eq:corrleakagerateTwoenrollupperbound}), (\ref{eq:cscorrstoragerateTwoenroll}) - (\ref{eq:cssumonsamesjandwjandjprimeTwoenroll}), and $U_j-\widetilde{X}_{j}-X-Y_j$ form a Markov chain for $j=1,2$. \end{theorem} The inner and outer bounds differ because the outer bounds define rate regions for the Markov chains $U_1-\widetilde{X}_{1}-X-Y_1$ and $U_2-\widetilde{X}_{2}-X-Y_2$, which are larger than the rate regions defined by the inner bounds that satisfy (\ref{eq:fixedprobdistrTwoenroll}). For instance, in the achievability proof of Theorem~\ref{theo:Twoenrollmentgscs}, we apply the properties of the Markov chain $U_2-\widetilde{X}_2-U_1$ in step $(b)$ of (\ref{eq:conversJ2thatshowcombinationboundisalreadysatisfied}), which does not form a Markov chain for the choice of $U_{1}$ and $U_2$ in the outer bounds. Therefore, inner and outer bounds do not match in general. \begin{corollary} Choosing $U_1=\widetilde{X}_1$ and $U_2=\widetilde{X}_2$, it is straightforward to show that inner and outer bounds in Theorem~\ref{theo:Twoenrollmentgscs} match if we do not impose any storage or privacy constraints, i.e., impose only (\ref{eq:reliabilityconstJ2}), (\ref{eq:uniformityconstJ2}), and (\ref{eq:secrecyconstJ2}). This result improves on the secret-key capacity region given in \cite[Theorem 1]{bizimBenelux} for a weak secrecy constraint. \end{corollary} \begin{example}\label{ex:example1} Consider the RO PUF model from \cite[Section 4.1]{bizimMDPI} where a transform-coding method is applied to conservatively model the measurement channels $P_{Y|X}=P_{\widetilde{X}|X}$ as independent BSCs with the same crossover probability of $p_\text{A}$ and where the hidden source output is $\text{Bern}(\frac{1}{2})$. We can therefore apply the achievability results from Theorem~\ref{theo:Twoenrollmentgscs} to this RO PUF model. Using \cite[Theorem~3]{bizimMMMMTIFS} to evaluate the boundary tuples of $\mathcal{\widebar{R}}_{\text{gs},J=2}$, it suffices to consider probability distributions $P_{U_j|\widetilde{X}_j}$ for $j=1,2$ such that $P_{\widetilde{X}_j|U_j}$ are BSCs with crossover probabilities \begin{align} \tilde{x}_j = \frac{H_b^{-1}(H(X|U_j))-p_\text{A}}{1-2p_\text{A}}. \label{eq:optimalcrossoverUtoXtilde} \end{align} Consider the projection of the boundary tuples of $\mathcal{\widebar{R}}_{\text{gs},J=2}$ onto the key-leakage plane, i.e., (\ref{eq:corrkeyrateTwoenroll}) and (\ref{eq:corrleakagerateTwoenroll}). We plot in Fig.~\ref{fig:example1plot} single-enrollment results where the privacy-leakage rate is measured with respect to the single helper data and two-enrollment results for the sum rate of the two keys, both for $p_\text{A}=0.06$ \cite{bizimMDPI}.
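These boundary tuples can be traced numerically; the following is our own minimal sketch of such an evaluation (it assumes the $H_b$ and $H_b^{-1}$ helpers from the notation sketch in Section~\ref{sec:problem_setting} and writes $a\star b=a(1-b)+b(1-a)$ for the crossover probability of two cascaded BSCs; it is not the code used to generate the figure).

\begin{verbatim}
# Sketch (ours) of the boundary evaluation; reuses Hb and Hb_inv above.
def star(a, b):                 # crossover probability of BSC(a) o BSC(b)
    return a * (1 - b) + b * (1 - a)

pA = 0.06
for HXgU in [0.4, 0.6, 0.8, 0.95]:          # sweep H(X|U_j) > Hb(pA)
    q = (Hb_inv(HXgU) - pA) / (1 - 2 * pA)  # crossover of P_{Xtilde_j|U_j}
    Rs_j = 1 - Hb(star(q, star(pA, pA)))    # I(U_j;Y_j) per enrollment
    Rl_j = Hb(star(q, star(pA, pA))) - Hb(star(q, pA))  # I(U_j;X)-I(U_j;Y_j)
    print(f"H(X|U)={HXgU:.2f}: R_s,j={Rs_j:.4f}, sum leakage={2 * Rl_j:.4f}")
\end{verbatim}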
To achieve a total secret-key rate of $I(\widetilde{X}_1;Y_1)=I(\widetilde{X}_2;Y_2)$, the privacy-leakage rate for the two-enrollment model is approximately $13.5\%$ less than the privacy-leakage rate for the single-enrollment model for RO PUFs. The reason for this gain is the information bottleneck problem that arises from (\ref{eq:corrkeyrateTwoenroll}) and (\ref{eq:corrleakagerateTwoenroll}) to find the boundary tuples. \end{example} \begin{figure}[t] \centering \input{PrivacyLeakSKRateTwoEnrollment.tikz} \caption{Privacy-leakage vs. secret-key rate projection of the boundary tuples of the single- and two-enrollment RO PUF models with BSCs$(p_\text{A}=0.06)$.} \label{fig:example1plot} \end{figure} \begin{example} Consider uniform binary antipodal measurements over an additive white Gaussian noise (AWGN) channel. Define the signal power as $P_{\text{S}}$ and the noise power as $P_{\text{N}}$, so we have a signal-to-noise ratio (SNR) of $\displaystyle SNR = \frac{P_{\text{S}}}{P_{\text{N}}}$. If a matched filter, which maximizes the $SNR$ at the sampling instant for the AWGN channel, is applied at the encoder and decoder, the bit error probability $P_{\text{b}}$ is given by \cite[p. 96]{bizimSingaporeLNs} \begin{align} P_{\text{b}} = Q\left(\sqrt{SNR}\right). \end{align} The channel between input binary symbols and outputs of the matched filter is a binary-input symmetric-output (BISO) channel. Using \cite[Theorem~3]{bizimMMMMTIFS}, the boundary tuples of $\mathcal{\widebar{R}}_{\text{gs},J=2}$ are obtained by considering $P_{\widetilde{X}_j|U_j}$ for $j=1,2$ that are BSCs with crossover probabilities given by (\ref{eq:optimalcrossoverUtoXtilde}) with $p_\text{A}$ replaced by $P_{\text{b}}$. We remark that $p_{\text{A}}=0.06$ used in Example~\ref{ex:example1} corresponds to an SNR of approximately $3.83$~dB. In Fig.~\ref{fig:Gaussianexample2}, the privacy-leakage rate vs. secret-key rate boundary tuples are depicted for two cases. First, a two-enrollment model at $SNR=3.83$~dB with a sum rate for two secret keys is depicted, where each enrollment has a signal power of $P_{\text{S}}$. For comparison, we plot a single-enrollment model with a signal power of $2P_{\text{S}}$, i.e., an SNR of approximately $6.84$~dB. Fig.~\ref{fig:Gaussianexample2} shows that, for the two cases with the same total signal power of $2P_{\text{S}}$ and unlike in Example~\ref{ex:example1}, the single-enrollment boundary tuple can result in a gain of approximately $228.55\%$ at its top-left corner point in terms of the secret-key rates achieved for a given privacy-leakage rate. Therefore, for such an AWGN channel with a fixed total signal power, the single-enrollment model can result in significant gains in terms of achieved secret-key rates as compared to the two-enrollment model for small $\widebar{R}_{\ell}$ values. \end{example} \begin{figure}[t] \centering \input{PrivacyLeakSKRateTwoEnrollment_Example2.tikz} \caption{Privacy-leakage vs. secret-key rate projection of the boundary tuples of the single- and two-enrollment models with binary antipodal measurements over AWGN channels with different SNRs.} \label{fig:Gaussianexample2} \end{figure} \section{Proof of Theorem~\ref{theo:GeneralInnergscs}}\label{sec:achinnergeneral} We provide a proof that uses the output statistics of random binning (OSRB) method, proposed in \cite{OSRBAmin} and further extended in \cite{YenerOSRB}, by applying the steps in \cite[Section 1.6]{BlochLectureNotes2018}.
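Before the formal proofs, the binning mechanics can be illustrated with a small brute-force toy (our own sketch, not the proof construction): it replaces the auxiliary-sequence generation by a uniform source, uses a minimum-distance decoder in place of the SW decoder, and only checks empirically that the key bin is recovered reliably when the storage and code rates exceed $H(U|Y)$.

\begin{verbatim}
# Toy illustration (ours): every u^n gets three independent, uniform
# bin indices (S, W, C); the decoder recovers u^n from (W, C, Y^n) by
# minimum Hamming distance, then re-reads its key bin. Brute force,
# so it runs for a few seconds at this blocklength.
import random
random.seed(0)

n, p = 14, 0.1                   # blocklength and BSC(p) from U to Y
Rs, Rw, Rc = 0.25, 0.45, 0.25    # Rw+Rc > H(U|Y)~0.469; Rs+Rw+Rc < H(U)=1
sizes = [max(2, round(2 ** (n * R))) for R in (Rs, Rw, Rc)]
bins = {}                        # u^n -> (S, W, C), drawn independently
def bin_of(u):
    if u not in bins:
        bins[u] = tuple(random.randrange(s) for s in sizes)
    return bins[u]

def decode(w, c, y):
    best, best_d = None, n + 1   # scan the (W, C)-bin for the closest u^n
    for u in range(2 ** n):
        if bin_of(u)[1:] == (w, c):
            d = bin(u ^ y).count("1")
            if d < best_d:
                best, best_d = u, d
    return bin_of(best)[0]

errors = 0
for _ in range(200):
    u = random.getrandbits(n)                       # encoder observation
    noise = sum((random.random() < p) << i for i in range(n))
    y = u ^ noise                                   # decoder observation
    s, w, c = bin_of(u)
    errors += decode(w, c, y) != s
print("empirical key error rate:", errors / 200)
\end{verbatim}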
\subsection{Proof for the GS Model}\label{subsec:Theorem1proofGS} \begin{IEEEproof}[Proof Sketch] Fix $\displaystyle P_{U_1|\widetilde{X}_1}, \displaystyle P_{U_2|\widetilde{X}_2},\ldots,P_{U_J|\widetilde{X}_J}$. Let $(\mathcal{U}_{[1:J]}^n,\mathcal{\widetilde{X}}_{[1:J]}^n,X^n,\mathcal{Y}_{[1:J]}^n)$ be i.i.d. according to (\ref{eq:fixedprobdistr}). Assign three random bin indices $(S_j,W_j,C_j)$ to each realization $u_j^n$ for all $j\in[1:J]$, where $S_j$ represents the secret key, $W_j$ the helper data, and $C_j$ a public index referring to a random encoder-decoder pair fixed below. Assume $S_j\in[1:2^{nR_{\text{s},j}}]$, $W_j\in[1:2^{nR_{\text{w},j}}]$, and $C_j\in[1:2^{nR_{\text{c},j}}]$ such that $R_{\text{s},j},R_{\text{w},j},R_{\text{c},j}\geq0$ for all $j\in[1:J]$. Apply the union bound to the reliability constraint in (\ref{eq:reliabilityconst}) to obtain the sum of $J$ error probabilities. This sum vanishes for any finite number $J$ when $n\rightarrow\infty$ by using a Slepian-Wolf (SW) \cite{SW} decoder to estimate $U_j^n$ from $(C_j,W_j,Y_j^n)$ if \cite[Lemma 1]{OSRBAmin} \begin{align} R_{\text{c},j}+R_{\text{w},j}> H(U_j|Y_j),\qquad\forall j\in[1:J]. \label{eq:slepianwolfdecoder} \end{align} The key uniformity (\ref{eq:uniformityconst}), mutual and strong key independence (\ref{eq:keyindependence}), and strong secrecy (\ref{eq:secrecyconst}) constraints are satisfied if \cite[Theorem 1]{OSRBAmin} \begin{align} R_{\text{s},j}\!+\!R_{\text{w},j}\!+\!R_{\text{c},j}< H(U_j|\,\mathcal{U}_{[1:J]\setminus \{j\}}), \;\;\;\forall j\in[1:J]\label{eq:independenceofindices} \end{align} since (\ref{eq:independenceofindices}) ensures that the three random indices $(S_j,W_j,C_j)$ are almost mutually independent and uniformly distributed, and they are almost independent of $\mathcal{U}_{[1:J]\setminus\{j\}}$. Therefore, $(S_j,W_j,C_j)$ are almost independent of $\left(\mathcal{S}_{[1:J]\setminus\{j\}}, \mathcal{W}_{[1:J]\setminus\{j\}}, \mathcal{C}_{[1:J]\setminus\{j\}}\right)$ because $U_k^n$ determines $(S_k,W_k,C_k)$ for all $k\in[1:J]$. Similarly, the public randomness $C_j$ is almost independent of $\widetilde{X}^n_j$, so it is almost independent of $(\mathcal{\widetilde{X}}_{[1:J]}^n,X^n,\mathcal{Y}_{[1:J]}^n)$, if we have \cite[Theorem 1]{OSRBAmin} \begin{align} R_{\text{c},j}<H(U_j|\widetilde{X}_j),\qquad \qquad \forall j\in[1:J].\label{eq:independenceofcode} \end{align} Thus, the public indices $\mathcal{C}_{[1:J]}$ can be fixed and shared with all parties by generating them uniformly at random. The $j$-th encoder can generate $U_j^n$ according to $P_{U_j^n|\widetilde{X}_j^nC_{j}}$ obtained from the binning scheme above to compute the bins $(S_j,W_{j})$ from $U_j^n$ for all $j\in[1:J]$. This procedure induces a joint probability distribution that is almost equal to $P_{\mathcal{U}_{[1:J]}\mathcal{\widetilde{X}}_{[1:J]}X\mathcal{Y}_{[1:J]}}$ fixed in (\ref{eq:fixedprobdistr}) \cite[Section 1.6]{BlochLectureNotes2018}. Applying the Fourier-Motzkin elimination \cite{FMEbook} using the software available in \cite{FMEZiv} to (\ref{eq:slepianwolfdecoder})-(\ref{eq:independenceofcode}) for each $j\in[1:J]$ separately, we obtain the inequalities \begin{align} &R_{\text{w},j}>I(U_j;\widetilde{X}_j)-I(U_j;Y_j)\label{eq:FMEgeneralinnerrw}\\ &R_{\text{s},j}<I(U_j;Y_j)-I(U_j;\mathcal{U}_{[1:J]\setminus\{j\}})\label{eq:FMEgeneralinnerrs}\\ &R_{\text{w},j}+R_{\text{s},j}< H(U_j|\mathcal{U}_{[1:J]\setminus\{j\}})\label{eq:FMEgeneralinnerrwplusrs} \end{align} for all $j\in[1:J]$.
To satisfy the constraints (\ref{eq:FMEgeneralinnerrw})-(\ref{eq:FMEgeneralinnerrwplusrs}), we can fix the rates to \begin{alignat}{2} &R_{\text{s},j} = I(U_j;Y_j)\!-\!I(U_j;\mathcal{U}_{[1:J]\setminus\{j\}})\!-\!2\epsilon,&& \quad\forall j\in[1:J]\label{eq:chooseR_s}\\ &R_{\text{w},j} = I(U_j;\widetilde{X}_j)-I(U_j;Y_j)+2\epsilon,&&\quad\forall j\in[1:J]\label{eq:chooseR_w}\\ &R_{\text{c},j} = H(U_j|\widetilde{X}_j)-\epsilon,&&\quad\forall j\in[1:J]\label{eq:chooseR_c} \end{alignat} for some $\epsilon>0$ such that $\epsilon\rightarrow0$ when $n\rightarrow\infty$. Consider the privacy leakage. Since $\mathcal{C}_{[1:J]}$ are public, we can bound it as follows. \begin{align} & I(X^n;\mathcal{W}_{[1:J]},\mathcal{C}_{[1:J]})\nonumber\\ &\quad\leq H(\mathcal{W}_{[1:J]})-H(\mathcal{W}_{[1:J]},\mathcal{C}_{[1:J]}|X^n)+H(\mathcal{C}_{[1:J]})\nonumber\\ &\quad\overset{(a)}{=}H(\mathcal{W}_{[1:J]})-\sum_{j=1}^JH(W_j,C_j|X^n)+H(\mathcal{C}_{[1:J]})\nonumber\\ &\quad\leq\sum_{j=1}^{J}\Big(H(W_j)\!+\!H(C_j)\!-\!H(W_j,C_j|X^n)\Big)\label{eq:privacyleakagefirstpartach} \end{align} where $(a)$ follows because $(W_j,C_j)-X^n-(\mathcal{W}_{[1:j-1]},\mathcal{C}_{[1:j-1]})$ form a Markov chain for all $j\in[2:J]$. Consider two cases for the privacy leakage analysis. \textbf{Case 1:} Suppose for any $j\in[1:J]$ that we have \begin{align} R_{\text{c},j}+R_{\text{w},j}< H(U_j|X) \end{align} i.e., $H(U_j|X)>H(U_j|Y_j)$, so $(W_j,C_j,X^n)$ are almost mutually independent \cite[Theorem 1]{OSRBAmin}. Therefore, we have \begin{align} &H(W_j)\!+\!H(C_j)\!-\!H(W_j,C_j|X^n)\nonumber\\ &\quad\leq H(W_j)\!+\!H(C_j)\!-\!(H(W_j)+H(C_j)-\epsilon_n')=\epsilon_n'\label{eq:privleakCase1} \end{align} for some $\epsilon_n'>0$ such that $\epsilon_n'\rightarrow 0$ when $n\rightarrow\infty$. Combining (\ref{eq:privacyleakagefirstpartach}) and (\ref{eq:privleakCase1}) proves strong privacy. \textbf{Case 2:} Suppose for any $j\in[1:J]$ that we have \begin{align} R_{\text{c},j}+R_{\text{w},j}\geq H(U_j|X) \label{eq:degradedXtildeXY case2} \end{align} i.e., $H(U_j|X)\leq H(U_j|Y_j)$, so $U_j^n$ can be reliably estimated from $(W_j,C_j,X^n)$ \cite[Lemma 1]{OSRBAmin}. Therefore, we have \begin{align} &H(W_j)\!+\!H(C_j)\!-\!H(W_j,C_j|X^n)\nonumber\\ &\quad\overset{(a)}{\leq}H(W_j)\!+\!H(C_j)\!-\!nH(U_j|X)+n\epsilon_n''\nonumber\\ &\quad\overset{(b)}{\leq}n(I(U_j;X)-I(U_j;Y_j)+\epsilon+\epsilon_n'')\label{eq:privleakCase2} \end{align} where $(a)$ follows because $U_j^n$ determines $(W_j,C_j)$, because $U_j^n$ can be reliably estimated from $(W_j,C_j,X^n)$ for some $\epsilon_n''>0$ such that $\epsilon_n''\rightarrow 0$ when $n\rightarrow\infty$, and because $(U_j^n,X^n)$ are i.i.d., and $(b)$ follows by (\ref{eq:chooseR_w}) and (\ref{eq:chooseR_c}). Combining (\ref{eq:privacyleakagefirstpartach}) and (\ref{eq:privleakCase2}), we obtain \begin{align} &I(X^n;\mathcal{W}_{[1:J]},\mathcal{C}_{[1:J]})\nonumber\\ &\quad\!\leq \!\sum_{\substack{j=1\\\; j:\\H(U_j|X)\leq H(U_j|Y_j)}}^J\! n(I(U_j;X)\!-\!I(U_j;Y_j)\!+\!\epsilon\!+\!\epsilon_n''). \end{align} Using the selection lemma \cite[Lemma 2.2]{Blochbook}, these prove the achievability of the rate region $\mathcal{R}_{\text{gs}}$. \end{IEEEproof} \subsection{Proof for the CS Model}\label{subsec:Theorem1proofCS} We use the achievability proof for the GS model. Suppose the key $S'_j$, generated as in the GS model together with the helper data $W'_j$ and public index $C'_j$, has the same cardinality as the corresponding embedded secret key $S_j$, i.e., $|\mathcal{S}'_j|=|\mathcal{S}_j|$ for all $j\in [1:J]$.
The chosen key $S_j$ is uniformly distributed and independent of $(X^n,\mathcal{\widetilde{X}}_{[1:J]}^n,\mathcal{Y}_{[1:J]}^n, \mathcal{S}_{[1:J]\setminus\{j\}})$ for all $j\in[1:J]$. Consider the $j$-th encoder $f_{\text{CS},j}(\cdot,\cdot)$ with inputs $(\widetilde{X}^n_j,S_j)$ and output $W_j=(S'_j+S_j,W'_j)$, and the $j$-th decoder $g_j(\cdot,\cdot)$ with inputs $(Y_j^n,W_j)$ and output $\hat{S}_j=S'_j+S_j-\hat{S}'_j$. All addition and subtraction operations are modulo-$|\mathcal{S}_j|$ for all $j\in[1:J]$. The $j$-th decoder of the GS model is used to obtain $\hat{S}'_j$ for all $j\in[1:J]$. We have the error probability \begin{align} &\Pr\left[\underset{j\in[1:J]}{\bigcup}\{S_j\ne\hat{S}_j\}\right]=\Pr\left[\underset{j\in[1:J]}{\bigcup}\{S'_j\ne\hat{S}'_j\}\right]\label{eq:errorprobabilityachtheo2} \end{align} which is small due to the proof of achievability for the GS model. Using (\ref{eq:chooseR_s}) and (\ref{eq:chooseR_w}), and from the one-time padding operation applied above, we can achieve a storage rate of \begin{align} &R_{\text{w},j} \geq I(U_j;\widetilde{X}_j) - I(U_j;\mathcal{U}_{[1:J]\setminus\{j\}}),\quad\; \forall j\in[1:J]\label{eq:csstorageach} \end{align} for the CS model. We have the secrecy leakage of \begin{align} &I(\mathcal{S}_{[1:J]};\mathcal{W}_{[1:J]},\mathcal{C}_{[1:J]}')\overset{(a)}{=}I(\mathcal{S}_{[1:J]};\mathcal{W}_{[1:J]}|\mathcal{C}_{[1:J]}')\nonumber\\ &\!=\! I(\mathcal{S}_{[1:J]};\mathcal{W}_{[1:J]}'|\mathcal{C}_{[1:J]}'\!)\!+\!I(\mathcal{S}_{[1:J]};{(\mathcal{S}'\!+\!\mathcal{S})}_{[1:J]}\!|\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}'\!)\nonumber\\ &\!\overset{(b)}{=} H({(\mathcal{S}'\!+\!\mathcal{S})}_{[1:J]}|\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}') - H(\mathcal{S}_{[1:J]}'|\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}') \nonumber\\ &\!\overset{(c)}{\leq} n\Big(\sum_{j=1}^JR_{\text{s},j}\Big)-H(\mathcal{S}_{[1:J]}'|\mathcal{C}_{[1:J]}')+I(\mathcal{S}_{[1:J]}';\mathcal{W}_{[1:J]}'|\mathcal{C}_{[1:J]}')\nonumber\\ &\!\overset{(d)}{\leq} n\Big(\sum_{j=1}^JR_{\text{s},j}\Big)-\Big(n\Big(\sum_{j=1}^JR_{\text{s},j}\Big)-\epsilon_n'''\Big)\nonumber\\ &\qquad+I(\mathcal{S}_{[1:J]}';\mathcal{W}_{[1:J]}'|\mathcal{C}_{[1:J]}')\nonumber\\ &\!\overset{(e)}{\leq}\epsilon_n'''+\epsilon_n^{(4)} \end{align} where $(a)$ follows since $\mathcal{S}_{[1:J]}$ are chosen independently of the public indices $\mathcal{C}_{[1:J]}'$, $(b)$ follows because $\mathcal{S}_{[1:J]}$ are chosen independently of $(\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}',\mathcal{S}_{[1:J]}')$, $(c)$ follows because $|\mathcal{S}'_j|=|\mathcal{S}_j|$ for all $j\in[1:J]$, $(d)$ follows because $\mathcal{S}_{[1:J]}'$ and $\mathcal{C}_{[1:J]}'$ are almost mutually independent and each $S_j'$ is almost uniformly distributed due to (\ref{eq:independenceofindices}) for some $\epsilon_n'''>0$ such that $\epsilon_n'''\rightarrow0$ when $n\rightarrow\infty$, and $(e)$ follows because the GS model satisfies the strong secrecy constraint (\ref{eq:secrecyconst}) due to (\ref{eq:independenceofindices}) for some $\epsilon_n^{(4)}>0$ such that $\epsilon_n^{(4)}\rightarrow0$ when $n\rightarrow\infty$.
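The modulo one-time padding used in this construction can be illustrated with a short numeric sketch (our own toy example; the sizes and seed are arbitrary, and reliable recovery of $S'_j$ is simply assumed here, as guaranteed above with high probability).

\begin{verbatim}
# Toy sketch (ours) of the modulo one-time padding in the CS encoder:
# the chosen key S is masked with the generated key S', and the
# decoder unmasks with its estimate of S'.
import random
random.seed(2)

S_size = 16                       # |S_j| = |S'_j|
S  = random.randrange(S_size)     # chosen (embedded) secret key
Sp = random.randrange(S_size)     # generated key S'_j from the GS scheme
pad = (Sp + S) % S_size           # first component of the helper data W_j
Sp_hat = Sp                       # decoder's estimate; correct w.h.p.
S_hat = (pad - Sp_hat) % S_size
assert S_hat == S                 # key recovered whenever S'_j is
\end{verbatim}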
Consider the privacy leakage: \begin{align} &I(X^n;\mathcal{W}_{[1:J]},\mathcal{C}_{[1:J]}')\nonumber\\ &\leq I(X^n;\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}') +H((\mathcal{S}+\mathcal{S}')_{[1:J]}|\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}')\nonumber\\ &\qquad -H((\mathcal{S}+\mathcal{S}')_{[1:J]}|X^n,\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}',\mathcal{S}_{[1:J]}')\nonumber\\ &\overset{(a)}{\leq} I(X^n;\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}')\!+\!\Big(\sum_{j=1}^J\log (|\mathcal{S}_j|)\Big)\!-\!H(\mathcal{S}_{[1:J]})\nonumber\\ &\overset{(b)}{=} I(X^n;\mathcal{W}_{[1:J]}',\mathcal{C}_{[1:J]}') \label{eq:ach2privleaktemp} \end{align} where $(a)$ follows because $\mathcal{S}_{[1:J]}$ are chosen independently of $(X^n,\mathcal{W}_{[1:J]}',\mathcal{S}_{[1:J]}',\mathcal{C}_{[1:J]}')$ and $|\mathcal{S}'_j|=|\mathcal{S}_j|$ for all $j\in[1:J]$ and $(b)$ follows from the uniformity and mutual independence of $\mathcal{S}_{[1:J]}$. Using the selection lemma, these prove the achievability of the rate region $\mathcal{R}_{\text{cs}}$. \section{Proof of Theorem~\ref{theo:Twoenrollmentgscs}}\label{sec:Twoenrollmentproofs} We use the OSRB method steps in \cite[Section 1.6]{BlochLectureNotes2018}. \subsection{Achievability Proof for the GS Model} Fix \begin{align} \displaystyle P_{U_1|\widetilde{X}_1}=P_{U_2|\widetilde{X}_2}=P_{U|\widetilde{X}}.\label{eq:twoenrolleaulencoder} \end{align} Let $(U_{1}^n,U_2^n,\widetilde{X}_{1}^n,\widetilde{X}_{2}^n,X^n,Y_{1}^n,Y_{2}^n)$ be i.i.d. according to (\ref{eq:fixedprobdistrTwoenroll}). Assign three random bin indices $(S_j,W_j,C_j)$ to each realization $u_j^n$ for all $j=1,2$. Assume $S_j\in[1:2^{n\widebar{R}_{\text{s},j}}]$, $W_j\in[1:2^{n\widebar{R}_{\text{w},j}}]$, and $C_j\in[1:2^{n\widebar{R}_{\text{c},j}}]$ such that $\widebar{R}_{\text{s},j},\widebar{R}_{\text{w},j},\widebar{R}_{\text{c},j}\geq0$ for $j=1,2$. Apply the union bound to the reliability constraint in (\ref{eq:reliabilityconstJ2}) to obtain the sum of two error probabilities, which vanishes when $n\rightarrow\infty$ by using an SW decoder to estimate $U_j^n$ from $(C_j,W_j,Y_j^n)$ if \cite[Lemma 1]{OSRBAmin} \begin{align} \widebar{R}_{\text{c},j}+\widebar{R}_{\text{w},j}> H(U_j|Y_j),\qquad\;\;\;\; j=1,2. \label{eq:slepianwolfdecoderTwoenroll} \end{align} The key uniformity (\ref{eq:uniformityconstJ2}) constraint is satisfied if \cite[Theorem 1]{OSRBAmin} \begin{align} \widebar{R}_{\text{s},j}\!+\!\widebar{R}_{\text{w},j}\!+\!\widebar{R}_{\text{c},j}< H(U_j), \qquad j=1,2\label{eq:keyuniformitytwoenroll} \end{align} since (\ref{eq:keyuniformitytwoenroll}) ensures that the three random indices $(S_j,W_j,C_j)$ are almost mutually independent and uniformly distributed. Suppose a virtual joint encoder assigns six indices $(S_1,W_1,C_1,S_2,W_2,C_2)$ to each realization pair $(u_1^n,u_2^n)$. This virtual encoder is an operational dual of the virtual decoder used in the proof of \cite[Theorem 1]{bizimBenelux}.
Using the virtual joint encoder, the strong secrecy constraint in (\ref{eq:secrecyconstJ2}) and the strong helper data independence constraint in (\ref{eq:storageindependence}) are satisfied if \cite[Theorem 1]{OSRBAmin} \begin{align} \widebar{R}_{\text{s},1}\!+\!\widebar{R}_{\text{w},1}\!+\!\widebar{R}_{\text{c},1}\!+\!\widebar{R}_{\text{w},2}\!+\!\widebar{R}_{\text{c},2}< H(U_1,U_2)\label{eq:keyleakageperkeytwoenroll_part1} \end{align} and \begin{align} \widebar{R}_{\text{s},2}\!+\!\widebar{R}_{\text{w},2}\!+\!\widebar{R}_{\text{c},2}\!+\!\widebar{R}_{\text{w},1}\!+\!\widebar{R}_{\text{c},1}< H(U_1,U_2)\label{eq:keyleakageperkeytwoenroll_part2} \end{align} because (\ref{eq:keyleakageperkeytwoenroll_part1}) ensures that $(S_1,W_1,C_1,W_2,C_2)$ are almost mutually independent, whereas (\ref{eq:keyleakageperkeytwoenroll_part2}) ensures that $(S_2,W_2,C_2,W_1,C_1)$ are almost mutually independent. \begin{remark}\label{rem:whylessstringent} \normalfont The set of inequalities in (\ref{eq:keyuniformitytwoenroll})-(\ref{eq:keyleakageperkeytwoenroll_part2}) cannot be imposed for the joint secrecy-leakage constraint in (\ref{eq:secrecyconst}) for general probability distributions $P_{\widetilde{X}_1\widetilde{X}_2XY_1Y_2}$, since to impose (\ref{eq:secrecyconst}) one would replace (\ref{eq:keyleakageperkeytwoenroll_part1}) and (\ref{eq:keyleakageperkeytwoenroll_part2}) with \begin{align} \widebar{R}_{\text{s},1}\!+\!\widebar{R}_{\text{w},1}\!+\!\widebar{R}_{\text{c},1}\!+\!\widebar{R}_{\text{s},2}\!+\!\widebar{R}_{\text{w},2}\!+\!\widebar{R}_{\text{c},2}< H(U_1,U_2)\label{eq:SStringentkeyleakageperkeytwoenroll_part1} \end{align} which would also imply the mutual independence of secret keys in (\ref{eq:keyindependence}). However, the inequalities in (\ref{eq:keyuniformitytwoenroll}) and (\ref{eq:SStringentkeyleakageperkeytwoenroll_part1}) cannot be satisfied simultaneously in general as $H(U_1)+H(U_2)\geq H(U_1,U_2)$. This problem is avoided in the proof of Theorem~\ref{theo:GeneralInnergscs} by imposing the inequality in (\ref{eq:independenceofindices}) rather than (\ref{eq:keyuniformitytwoenroll}). \end{remark} The public randomness $C_j$ is almost independent of $\widetilde{X}^n_j$, so it is almost independent of $(\widetilde{X}_1^n,\widetilde{X}_2^n,X^n,Y_1^n,Y_2^n)$, if we have \cite[Theorem 1]{OSRBAmin} \begin{align} \widebar{R}_{\text{c},j}<H(U_j|\widetilde{X}_j),\qquad \qquad j=1,2.\label{eq:independenceofcodetwoenroll} \end{align} Thus, the public indices $(C_1,C_2)$ can be fixed and shared publicly by generating them uniformly at random. $U_j^n$ can be generated according to $P_{U_j^n|\widetilde{X}_j^nC_j}$ for $j=1,2$ obtained from the binning scheme above to compute the bins $(S_j,W_{j})$ from $U_j^n$ for $j=1,2$. This procedure induces a joint probability distribution that is almost equal to the distribution $P_{U_{1}U_2\widetilde{X}_{1}\widetilde{X}_2XY_{1}Y_2}$ fixed in (\ref{eq:fixedprobdistrTwoenroll}) \cite[Section 1.6]{BlochLectureNotes2018}.
Applying the Fourier-Motzkin elimination to (\ref{eq:slepianwolfdecoderTwoenroll})-(\ref{eq:keyleakageperkeytwoenroll_part2}) and (\ref{eq:independenceofcodetwoenroll}), we obtain the inequalities \begin{align} &\widebar{R}_{\text{w},1}> H(U_1|Y_1)-H(U_1|\widetilde{X}_1)\label{eq:J2constraints1}\\ &\widebar{R}_{\text{w},2}> H(U_2|Y_2)-H(U_2|\widetilde{X}_2)\\ &\widebar{R}_{\text{s},1}<I(U_1;Y_1)\label{eq:J2R1ineq}\\ &\widebar{R}_{\text{s},2}<I(U_2;Y_2)\label{eq:J2R2ineq}\\ &\widebar{R}_{\text{s},1}<-H(U_1|Y_1)-H(U_2|Y_2)+H(U_1,U_2)\label{eq:J2unnecessaryRs1}\\ &\widebar{R}_{\text{s},2}<-H(U_1|Y_1)-H(U_2|Y_2)+H(U_1,U_2)\label{eq:J2unnecessaryRs2}\\ & \widebar{R}_{\text{s},1} + \widebar{R}_{\text{w},2}<-H(U_1|Y_1)+H(U_1,U_2)\label{eq:J2S1W2}\\ & \widebar{R}_{\text{s},1} + \widebar{R}_{\text{w},1}<H(U_1)\label{eq:HU1}\\ & \widebar{R}_{\text{s},1} + \widebar{R}_{\text{w},1}<-H(U_2|Y_2)+H(U_1,U_2)\label{eq:HU1icingereksiz}\\ & \widebar{R}_{\text{s},1} + \widebar{R}_{\text{w},1}+\widebar{R}_{\text{w},2}<H(U_1,U_2)\label{eq:J2Rs1Rw1Rw2}\\ & \widebar{R}_{\text{s},2} + \widebar{R}_{\text{w},2}<-H(U_1|Y_1)+H(U_1,U_2)\label{eq:HU2icingereksiz}\\ & \widebar{R}_{\text{s},2} + \widebar{R}_{\text{w},2}<H(U_2)\label{eq:HU2}\\ & \widebar{R}_{\text{s},2} + \widebar{R}_{\text{w},1}<-H(U_2|Y_2)+H(U_1,U_2)\label{eq:J2S2W1}\\ & \widebar{R}_{\text{s},2} + \widebar{R}_{\text{w},2}+\widebar{R}_{\text{w},1}<H(U_1,U_2).\label{eq:J2constraintslast} \end{align} Observe that we have \begin{align} &H(U_1|\widetilde{X}_2) = H(U_1|Y_1) = H(U_2|\widetilde{X}_1)=H(U_2|Y_2)\label{eq:uconditionalequalitiesJ2}\\ & H(U_1|\widetilde{X}_1) = H(U_2|\widetilde{X}_2)\label{eq:u1xtilde1u2xtilde2ineqJ2}\\ & H(U_1) = H(U_2)\label{eq:u1equaltou2J2} \end{align} due to (\ref{eq:SRAMchanneldef}) and (\ref{eq:twoenrolleaulencoder}). We therefore obtain \begin{align} &H(U_1,U_2)-H(U_1|Y_1)\overset{(a)}{=}H(U_2)\!+\!H(U_1|U_2)\!-\!H(U_1|\widetilde{X}_2)\nonumber\\ &\quad\overset{(b)}{\geq} H(U_2)\label{eq:conversJ2thatshowcombinationboundisalreadysatisfied} \end{align} where $(a)$ follows by (\ref{eq:uconditionalequalitiesJ2}) and $(b)$ follows from the Markov chain $U_2-\widetilde{X}_2-U_1$. A similar result can be shown by swapping the indices. Therefore, the constraints in (\ref{eq:HU1icingereksiz}) and (\ref{eq:HU2icingereksiz}) are inactive due to the constraints, respectively, in (\ref{eq:HU1}) and (\ref{eq:HU2}). Similarly, the constraints in (\ref{eq:J2unnecessaryRs1}) and (\ref{eq:J2unnecessaryRs2}) are inactive due to the constraints, respectively, in (\ref{eq:J2R1ineq}) and (\ref{eq:J2R2ineq}). Replace the inequalities in (\ref{eq:J2S1W2}) and (\ref{eq:J2S2W1}), respectively, with \begin{align} &2\widebar{R}_{\text{s},1}+\widebar{R}_{\text{w},1}+\widebar{R}_{\text{w},2}< I(U_1;Y_1)+H(U_1,U_2)\label{eq:intermediateineq3}\\ &2\widebar{R}_{\text{s},2}+\widebar{R}_{\text{w},2}+\widebar{R}_{\text{w},1}< I(U_2;Y_2)+H(U_1,U_2)\label{eq:intermediateineq3forj2}. \end{align} Then, (\ref{eq:intermediateineq3}) is inactive because (\ref{eq:J2R1ineq}) and (\ref{eq:J2Rs1Rw1Rw2}) imply (\ref{eq:intermediateineq3}), and (\ref{eq:intermediateineq3forj2}) is inactive because (\ref{eq:J2R2ineq}) and (\ref{eq:J2constraintslast}) imply (\ref{eq:intermediateineq3forj2}).
We remark that the rate region represented by (\ref{eq:J2constraints1})-(\ref{eq:J2constraintslast}) is the same as the region represented by replacing (\ref{eq:J2S1W2}) and (\ref{eq:J2S2W1}) with (\ref{eq:intermediateineq3}) and (\ref{eq:intermediateineq3forj2}) because the corner points (i.e., the points that asymptotically achieve equalities in the given inequalities for fixed $P_{U_1|\widetilde{X}_1}=P_{U_2|\widetilde{X}_2}$) of the two rate regions are the same. Therefore, the inequalities in (\ref{eq:J2S1W2}) and (\ref{eq:J2S2W1}) are inactive. To satisfy the constraints (\ref{eq:J2constraints1})-(\ref{eq:J2constraintslast}), we can fix the rates to \begin{alignat}{2} &\widebar{R}_{\text{s},j} = I(U_j;Y_j)-\!5\epsilon,&& \qquad j=1,2\label{eq:chooseR_stwoenroll}\\ &\widebar{R}_{\text{w},j} = I(U_j;\widetilde{X}_j)-I(U_j;Y_j)+2\epsilon,&&\qquad j=1,2\label{eq:chooseR_wtwoenroll}\\ &\widebar{R}_{\text{c},j} = H(U_j|\widetilde{X}_j)-\epsilon,&&\qquad j=1,2\label{eq:chooseR_ctwoenroll} \end{alignat} for some $\epsilon>0$ such that $\epsilon\rightarrow0$ when $n\rightarrow\infty$ due to (\ref{eq:uconditionalequalitiesJ2})-(\ref{eq:conversJ2thatshowcombinationboundisalreadysatisfied}). Since $C_1$ and $C_2$ are public, we can bound the privacy leakage as follows. \begin{align} & I(X^n;W_1,W_2,C_1,C_2)\nonumber\\ &\overset{(a)}{\leq}H(W_1,W_2) - H(W_1,C_1|X^n)- H(W_2,C_2|X^n)\nonumber\\ &\qquad + H(C_1,C_2)\nonumber\\ &\overset{(b)}{\leq} H(W_1)+H(W_2)-H(U_1^n|X^n)-H(U_2^n|X^n)+2n\epsilon_n''\nonumber\\ &\qquad +H(C_1)+H(C_2)\label{eq:privacyleakachforfixedratesintermediatestep}\\ &\overset{(c)}{\leq} n(I(U_1;X)-I(U_1;Y_1)+I(U_2;X)-I(U_2;Y_2))\nonumber\\ &\qquad +2n\epsilon_n''+2n\epsilon \label{eq:privacyleakachforfixedrates} \end{align} where $(a)$ follows because $(W_1,C_1)-X^n-(W_2,C_2)$ form a Markov chain, $(b)$ follows for some $\epsilon_n''>0$ such that $\epsilon_n''\rightarrow 0$ when $n\rightarrow\infty$ because for the two-enrollment model considered, (\ref{eq:degradedXtildeXY case2}) is satisfied due to the Markov chain $U_j-X-Y_j$ for $j=1,2$, and $(c)$ follows by (\ref{eq:chooseR_wtwoenroll}) and (\ref{eq:chooseR_ctwoenroll}), and because $(U_1^n,U_2^n,X^n)$ are i.i.d. Using (\ref{eq:privacyleakachforfixedratesintermediatestep}) for general rate tuples that satisfy the constraints (\ref{eq:J2constraints1})-(\ref{eq:J2constraintslast}), i.e., not only (\ref{eq:chooseR_stwoenroll})-(\ref{eq:chooseR_ctwoenroll}), we can bound the privacy leakage alternatively as \begin{align} & I(X^n;W_1,W_2,C_1,C_2)\nonumber\\ &\overset{(a)}{\leq} n\widebar{R}_{\text{w},1}+n\widebar{R}_{\text{w},2}+nI(U_1;X)-nI(U_1;\widetilde{X}_1)\nonumber\\ &\qquad+nI(U_2;X)-nI(U_2;\widetilde{X}_2)+2n\epsilon_n'' \label{eq:privacyleakachforGeneralTuples} \end{align} where $(a)$ follows by (\ref{eq:chooseR_ctwoenroll}) and because $(U_1^n,U_2^n,X^n)$ are i.i.d. Using the selection lemma, these prove the achievability of the key-leakage-storage region $\mathcal{\widebar{R}}_{\text{gs},J=2}$. \subsection{Achievability Proof for the CS Model} The achievability proof for the CS model follows by applying the one-time padding step used in Section~\ref{subsec:Theorem1proofCS}. 
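Before the outer-bound proofs, the entropy identities (\ref{eq:uconditionalequalitiesJ2})-(\ref{eq:u1equaltou2J2}) and the key inequality (\ref{eq:conversJ2thatshowcombinationboundisalreadysatisfied}) used above can be checked numerically. The following is our own sketch for binary alphabets with $P_{U_j|\widetilde{X}_j}$ chosen as BSCs, assuming Python; it verifies the relations for one parameter choice rather than proving them.

\begin{verbatim}
# Numeric check (ours) of the entropy identities for the two-enrollment
# model: X ~ Bern(1/2); Xt1, Xt2, Y1, Y2 are independent BSC(pA)
# measurements of X; U_j is a BSC(q) output of Xt_j.
import itertools, math

pA, q = 0.06, 0.12

def bsc(p, a, b):               # transition probability P(b | a) of BSC(p)
    return 1 - p if a == b else p

joint = {}                      # distribution of (U1,U2,Xt1,Xt2,Y1,Y2)
for u1, u2, x1, x2, x, y1, y2 in itertools.product(range(2), repeat=7):
    p = 0.5 * bsc(pA, x, x1) * bsc(pA, x, x2) * bsc(pA, x, y1) \
            * bsc(pA, x, y2) * bsc(q, x1, u1) * bsc(q, x2, u2)
    key = (u1, u2, x1, x2, y1, y2)
    joint[key] = joint.get(key, 0.0) + p   # marginalize over x

def H(coords):                  # joint entropy (bits) of chosen coordinates
    marg = {}
    for k, p in joint.items():
        sub = tuple(k[c] for c in coords)
        marg[sub] = marg.get(sub, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

# coordinates: 0=U1, 1=U2, 2=Xt1, 3=Xt2, 4=Y1, 5=Y2
H_U1gY1  = H([0, 4]) - H([4])   # H(U1 | Y1)
H_U1gXt2 = H([0, 3]) - H([3])   # H(U1 | Xtilde2)
H_U2gY2  = H([1, 5]) - H([5])   # H(U2 | Y2)
assert abs(H_U1gY1 - H_U1gXt2) < 1e-9
assert abs(H_U1gY1 - H_U2gY2) < 1e-9
assert H([0, 1]) - H_U1gY1 >= H([1]) - 1e-9   # H(U1,U2)-H(U1|Y1) >= H(U2)
\end{verbatim}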
\subsection{Outer Bound Proofs for the Multi-enrollment Models} Suppose that, for some $\delta_n\!>\!0$ and $n$, there is a pair of encoders and decoders such that (\ref{eq:reliabilityconstJ2})-(\ref{eq:storageindependence}) are satisfied by some key-leakage-storage tuple $(\widebar{R}_{\text{s},1}, \widebar{R}_{\text{s},2}, \widebar{R}_{\ell}, \widebar{R}_{\text{w},1}, \widebar{R}_{\text{w},2})$. Using (\ref{eq:reliabilityconstJ2}) and Fano's inequality, we obtain \begin{align} H(S_j|W_j,Y_j^n)\!\overset{(a)}{\leq}\!H(S_j|\hat{S}_j)\!\leq\!n\epsilon_n,\qquad j=1,2 \label{eq:fanoappJ2} \end{align} where $(a)$ permits randomized decoding, $\epsilon_n\!=\!\delta_n \max\{\widebar{R}_{\text{s},1},\widebar{R}_{\text{s},2}\} \!+\!H_b(\delta_n)/n$ such that $\epsilon_n\!\rightarrow\!0$ if $\delta_n\!\rightarrow\!0$. Let $U_{j,i}\triangleq (S_j,W_j,X^{i-1})$, which satisfies the Markov chain $U_{j,i}-\widetilde{X}_{j,i}-X_i-Y_{j,i}$ for all $i\in[1:n]$ and $j=1,2$. \begin{remark}\label{rem:Markovchainsdiff} \normalfont For the choice of $U_{j,i}= (S_j,W_j,X^{i-1})$ (and similarly for $U_{j,i}= (S_j,W_j,Y_j^{i-1})$) for $j\!=\!1,2$, $U_{1,i}-\widetilde{X}_{1,i}-U_{2,i}$ do not form a Markov chain for all $i\in[1:n]$, although the inner bound uses this Markov chain. This is the reason why inner and outer bounds do not match in general. \end{remark} \emph{Proof for (\ref{eq:corrkeyrateTwoenroll})}: We obtain for the multi-enrollment GS and CS models for $j=1,2$ that \begin{align} &n(\widebar{R}_{\text{s},j}-\delta_n)\overset{(a)}{\leq} H(S_j)-H(S_j|W_j,Y_j^n)+n\epsilon_n\nonumber\\ &\overset{(b)}{\leq} I(S_j;Y_j^n|W_j)+n\epsilon_n+\delta_n\nonumber\\ &\leq \sum_{i=1}^n \Big[I(S_j,W_j,Y_j^{i-1};Y_{j,i})+\epsilon_n+\frac{\delta_n}{n}\Big]\nonumber\\ &\overset{(c)}{\leq}\sum_{i=1}^n \Big[I(S_j,W_j,X^{i-1};Y_{j,i})+\epsilon_n+\frac{\delta_n}{n}\Big]\nonumber\\ &\overset{(d)}{=}\sum_{i=1}^n \Big[I(U_{j,i};Y_{j,i})+\epsilon_n+\frac{\delta_n}{n}\Big]\label{eq:secretkeyconv1} \end{align} where $(a)$ follows by (\ref{eq:uniformityconstJ2}) and (\ref{eq:fanoappJ2}), $(b)$ follows by (\ref{eq:secrecyconstJ2}), $(c)$ follows by applying the data-processing inequality to the Markov chain \begin{align} Y_j^{i-1}-(W_j,S_j,X^{i-1})-Y_{j,i},\quad j=1,2,\;\;\forall i\in[1\!:\!n]\label{eq:MarkovchainYiYiminus1} \end{align} and $(d)$ follows from the definition of $U_{j,i}$.
\emph{Proof for (\ref{eq:corrleakagerateTwoenroll})}: Observe for the multi-enrollment models that \begin{align} &n(\widebar{R}_\ell+\delta_n)\overset{(a)}{=} H(W_1,W_2)-H(W_1|X^n)-H(W_2|X^n)\nonumber\\ &\overset{(b)}{=}H(W_1|Y_1^n)-H(W_1|X^n)+H(W_2|Y_2^n)-H(W_2|X^n)\nonumber\\ &\quad +I(W_1;\widetilde{X}_2^n)+I(W_2;Y_2^n)-I(W_1;W_2)\nonumber\\ &\overset{(c)}{\geq}\sum_{j=1}^2\Big[H(W_j|Y_j^n)-H(W_j|X^n)\Big]\nonumber\\ &\geq \sum_{j=1}^2 \Big[H(S_j,W_j,Y_j^n)-H(S_j|W_j,Y_j^n)-H(Y_j^n)\nonumber\\ &\qquad\qquad-H(S_j,W_j|X^n)\Big]\nonumber\\ &\overset{(d)}{\geq} \sum_{j=1}^2\Big[I(S_j,W_j;X^n)-I(S_j,W_j;Y_j^n)-n\epsilon_n\Big]\nonumber\\ &\overset{(e)}{\geq}\!\sum_{j=1}^2\!\sum_{i=1}^{n}\!\Big[I(S_j,W_j,X^{i-1};X_i) \!-\!I(S_j,W_j,X^{i-1};Y_{j,i}) \!-\!\epsilon_n\Big]\nonumber\\ &\overset{(f)}{=}\!\sum_{j=1}^2\sum_{i=1}^{n}\Big[I(U_{j,i};X_i) -I(U_{j,i};Y_{j,i}) -\epsilon_n\Big]\label{eq:convprivacylowerbound} \end{align} where $(a)$ follows by (\ref{eq:privacyconstJ2}) and from the Markov chain $W_1-X^n-W_2$, $(b)$ follows because $I(W_1;Y_1^n)=I(W_1;\widetilde{X}_2^n)$ due to (\ref{eq:SRAMchanneldef}), $(c)$ follows from the Markov chain $W_1-\widetilde{X}_2^n-W_2$, $(d)$ follows by (\ref{eq:fanoappJ2}), $(e)$ follows because the channel and source are memoryless and from the Markov chain in (\ref{eq:MarkovchainYiYiminus1}), and $(f)$ follows from the definition of $U_{j,i}$. \emph{Proof for (\ref{eq:corrleakagerateTwoenrollupperbound})}: Observe for the multi-enrollment models that \begin{align} &n(\widebar{R}_\ell+\delta_n)\!\overset{(a)}{\leq}\! H(W_1)\!+\!H(W_2)\!-\!H(W_1|X^n)\!-\!H(W_2|X^n)\nonumber\\ &\overset{(b)}{\leq} \sum_{j=1}^2\Big[n\widebar{R}_{\text{w},j}+H(S_j,W_j|\widetilde{X}_j^n)-H(S_j,W_j|X^n)+n\epsilon_n\Big]\nonumber \end{align} \begin{align} &\overset{(c)}{=}\sum_{j=1}^2\Big[n\widebar{R}_{\text{w},j}+\sum_{i=1}^nI(S_j,W_j,X^{i-1};X_i)\nonumber\\ &\qquad\qquad-\sum_{i=1}^nI(S_j,W_j,\widetilde{X}_{j}^{i-1};\widetilde{X}_{j,i})+n\epsilon_n\Big]\nonumber\\ &\overset{(d)}{\leq}\sum_{j=1}^2\Big[n\widebar{R}_{\text{w},j}+\sum_{i=1}^nI(S_j,W_j,X^{i-1};X_i)\nonumber\\ &\qquad\qquad-\sum_{i=1}^nI(S_j,W_j,X^{i-1};\widetilde{X}_{j,i})+n\epsilon_n\Big]\nonumber\\ &\overset{(e)}{\leq}\sum_{j=1}^2\Big[n\widebar{R}_{\text{w},j}+\sum_{i=1}^n(I(U_{j,i};X_i)-I(U_{j,i};\widetilde{X}_{j,i}))\nonumber\\ &\qquad\qquad+n\epsilon_n\Big]\label{eq:converseprivleakupperbound} \end{align} where $(a)$ follows by (\ref{eq:privacyconstJ2}) and from the Markov chain $W_1-X^n-W_2$, $(b)$ follows by (\ref{eq:fanoappJ2}) and from the Markov chain $S_j-(W_j,X^n)-Y^n$ for $j=1,2$, $(c)$ follows because the channel and source are memoryless, $(d)$ follows from the Markov chain \begin{align} X^{i-1}\!-\!(W_j, S_j,\widetilde{X}_j^{i-1})\!-\!\widetilde{X}_{j,i},\;\;\; j=1,2,\;\;\forall i\in[1\!:\!n]\label{eq:MarkovchainXtildeiXiminus1} \end{align} and $(e)$ follows from the definition of $U_{j,i}$. 
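For the reader's convenience, step $(b)$ of (\ref{eq:converseprivleakupperbound}) can be unpacked as follows: for $j=1,2$ we have $H(W_j)\leq n\widebar{R}_{\text{w},j}$ and
\begin{align*}
-H(W_j|X^n)&=-H(S_j,W_j|X^n)+H(S_j|W_j,X^n)\\
&=-H(S_j,W_j|X^n)+H(S_j|W_j,X^n,Y_j^n)\\
&\leq-H(S_j,W_j|X^n)+H(S_j|W_j,Y_j^n)\\
&\leq-H(S_j,W_j|X^n)+n\epsilon_n
\end{align*}
where the second equality uses the Markov chain $S_j-(W_j,X^n)-Y_j^n$ and the last inequality uses (\ref{eq:fanoappJ2}); adding the nonnegative term $H(S_j,W_j|\widetilde{X}_j^n)$ then gives $(b)$.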
\emph{Proof for (\ref{eq:corrstoragerateTwoenroll})}: Observe for the multi-enrollment GS model for $j=1,2$ that \begin{align} &n(\widebar{R}_{\text{w},j}+\delta_n)\overset{(a)}{\geq} H(W_j|Y_j^n)+I(W_j;Y_j^n)\nonumber\\ &\overset{(b)}{\geq} H(S_j,W_j,Y_j^n)-H(Y_j^n)-H(S_j|W_j,Y_j^n)\nonumber\\ &\qquad -H(S_j,W_j|\widetilde{X}_j^n)+I(W_j;Y_j^n)\nonumber\\ &\overset{(c)}{\geq} I(S_j,W_j;\widetilde{X}_j^n)-I(S_j,W_j;Y_j^n)-n\epsilon_{n}\nonumber\\ &\overset{(d)}{=}\!\sum_{i=1}^{n}[I(S_j,W_j,\widetilde{X}_j^{i-1};\widetilde{X}_{j,i})\!-\!I(S_j,W_j,Y_j^{i-1};Y_{j,i})\!-\!\epsilon_{n}]\nonumber\\ &\overset{(e)}{\geq}\sum_{i=1}^{n}[I(S_j,W_j,X^{i-1};\widetilde{X}_{j,i})\!-\!I(S_j,W_j,X^{i-1};Y_{j,i})\!-\!\epsilon_{n}]\nonumber\\ &\overset{(f)}{=}\sum_{i=1}^{n}[I(U_{j,i};\widetilde{X}_{j,i})\!-\!I(U_{j,i};Y_{j,i})\!-\!\epsilon_{n}]\label{eq:storageconv1} \end{align} where $(a)$ follows by (\ref{eq:storageconstJ2}), $(b)$ follows from the encoding steps, $(c)$ follows by (\ref{eq:fanoappJ2}), $(d)$ follows because the source and channel are memoryless, $(e)$ follows from the data-processing inequality applied to the Markov chains in (\ref{eq:MarkovchainYiYiminus1}) and (\ref{eq:MarkovchainXtildeiXiminus1}), and $(f)$ follows from the definition of $U_{j,i}$. \emph{Proof for (\ref{eq:cscorrstoragerateTwoenroll})}: Observe for the multi-enrollment CS model for $j=1,2$ that \begin{align} &n(\widebar{R}_{\text{w},j}\!+\!\delta_n)\overset{(a)}{\geq}\! I(S_j,W_j;\widetilde{X}_j^n)\!-\!H(S_j|W_j)\!+\!H(S_j,W_j|\widetilde{X}_j^n)\nonumber\\ &\overset{(b)}{\geq} I(S_j,W_j;\widetilde{X}_j^n)+I(S_j;W_j)\overset{(c)}{\geq}\!\sum_{i=1}^{n}I(S_j,W_j,\widetilde{X}_j^{i-1};\widetilde{X}_{j,i})\nonumber\\ &\overset{(d)}{\geq}\sum_{i=1}^{n}I(S_j,W_j,X^{i-1};\widetilde{X}_{j,i})\overset{(e)}{=}\sum_{i=1}^{n}I(U_{j,i};\widetilde{X}_{j,i})\label{eq:storageconv2} \end{align} where $(a)$ follows by (\ref{eq:storageconstJ2}), $(b)$ follows because $\widetilde{X}^n$ is independent of $S_j$ and from the encoding step, $(c)$ follows because the source and channel are memoryless, $(d)$ follows by applying the data-processing inequality to the Markov chain in (\ref{eq:MarkovchainXtildeiXiminus1}), and $(e)$ follows from the definition of $U_{j,i}$. \emph{Proof for (\ref{eq:sumonsamesandwpart1Twoenroll})}: We have for the multi-enrollment GS model for $j=1,2$ that \begin{align} &n(\widebar{R}_{\text{s},j}+\widebar{R}_{\text{w},j})\overset{(a)}{=}H(S_j,W_j)+I(S_j;W_j)+n\delta_n\nonumber\\ &\overset{(b)}{\leq} \sum_{i=1}^n\big[H(S_j,W_j,X^{i-1})+\frac{\delta_n}{n}+\delta_n\big]\nonumber\\ &\overset{(c)}{=} \sum_{i=1}^n\big[H(U_{j,i})+\frac{\delta_n}{n}+\delta_n\big]\label{eq: RsjRwjsumconversegs} \end{align} where $(a)$ follows by (\ref{eq:uniformityconstJ2}), $(b)$ follows by (\ref{eq:secrecyconstJ2}), and $(c)$ follows from the definition of $U_{j,i}$. \emph{Proof for (\ref{eq:cssumonsamesandwpart1Twoenroll})}: Similarly, we have for the multi-enrollment CS model for $j=1,2$ that \begin{align} &n\widebar{R}_{\text{w},j}\leq \sum_{i=1}^nH(S_j,W_j,X^{i-1})\overset{(a)}{=} \sum_{i=1}^nH(U_{j,i})\label{eq:Rwjupperboundcs} \end{align} where $(a)$ follows from the definition of $U_{j,i}$.
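We also note that step $(b)$ of (\ref{eq:storageconv2}) can be verified directly: in the CS model the embedded key $S_j$ is chosen independently of $\widetilde{X}_j^n$, and the helper data $W_j$ is generated from $(S_j,\widetilde{X}_j^n)$, so
\begin{align*}
H(S_j,W_j|\widetilde{X}_j^n)-H(S_j|W_j)&=H(S_j|\widetilde{X}_j^n)+H(W_j|S_j,\widetilde{X}_j^n)-H(S_j|W_j)\\
&\geq H(S_j)-H(S_j|W_j)=I(S_j;W_j)
\end{align*}
with equality when the encoder is deterministic.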
\emph{Proof for (\ref{eq:sumonsamesjandwjandjprimeTwoenroll})}: We obtain for the multi-enrollment GS model for $j=1,2$ and $j'$ as defined in (\ref{jprimedef}) that \begin{align} &n(\widebar{R}_{\text{s},j}+\widebar{R}_{\text{w},j}+\widebar{R}_{\text{w},j'})\nonumber\\ &\overset{(a)}{=} H(S_j,W_j,W_{j'})+I(S_j;W_j,W_{j'})+I(W_j;W_{j'})+n\delta_n\nonumber\\ &\overset{(b)}{\leq}\sum_{i=1}^n \Big[H(S_j,W_j,W_{j'},S_{j'},X^{i-1}) + \frac{2\delta_n}{n}+\delta_n\Big]\label{eq:whereweneedindhelperdata}\\ &\overset{(c)}{=}\sum_{i=1}^n \Big[H(U_{j,i},U_{j',i}) + \frac{2\delta_n}{n}+\delta_n\Big]\label{eq:highestsumrateconverse} \end{align} where $(a)$ follows by (\ref{eq:uniformityconstJ2}), $(b)$ follows by (\ref{eq:secrecyconstJ2}) and (\ref{eq:storageindependence}), and $(c)$ follows from the definitions of $U_{j,i}$ and $U_{j',i}$. \emph{Proof for (\ref{eq:cssumonsamesjandwjandjprimeTwoenroll})}: We have for the multi-enrollment CS model for $j=1,2$ and $j'$ as defined in (\ref{jprimedef}) that \begin{align} &n(\widebar{R}_{\text{w},j}\!+\!\widebar{R}_{\text{w},j'})\nonumber\\ &\leq\sum_{i=1}^nH(W_j,W_{j'},S_j,S_{j'},X^{i-1})+I(W_j;W_{j'})+n\widebar{R}_{\text{s},j'}\nonumber\\ &\overset{(a)}{\leq} \sum_{i=1}^n\Big[H(W_j,W_{j'},S_j,S_{j'},X^{i-1})+\frac{\delta_n}{n}+\widebar{R}_{\text{s},j'}\Big]\label{eq:whereweneedindhelperdata2}\\ &\overset{(b)}{=} \sum_{i=1}^n\Big[H(U_{j,i},U_{j',i})+\frac{\delta_n}{n}+\widebar{R}_{\text{s},j'}\Big]\label{eq:highestsumrateconversecs} \end{align} where $(a)$ follows by (\ref{eq:storageindependence}) and $(b)$ follows from the definitions of $U_{j,i}$ and $U_{j',i}$. \begin{remark}\label{rem:helperdataindependence} \normalfont (\ref{eq:whereweneedindhelperdata}) and (\ref{eq:whereweneedindhelperdata2}) are the only places we use the constraint in (\ref{eq:storageindependence}) and it does not seem straightforward to obtain the inequalities in (\ref{eq:whereweneedindhelperdata}) and (\ref{eq:whereweneedindhelperdata2}) without (\ref{eq:storageindependence}). \end{remark} Introduce a uniformly distributed time-sharing random variable $\displaystyle Q\!\sim\! \text{Unif}[1\!:\!n]$ independent of other random variables. Define $X\!=\!X_Q$, $\displaystyle \widetilde{X}_j\!=\!\widetilde{X}_{j,Q}$, $\displaystyle Y_j\!=\!Y_{j,Q}$, and $U_j\!=\!(U_{j,Q},\!Q)$ so that $\displaystyle U_j\!-\!\widetilde{X}_{j}\!-\!X\!-\!Y_j$ form a Markov chain for $j=1,2$. The outer bound for the GS model follows by using the introduced random variables in (\ref{eq:secretkeyconv1}), (\ref{eq:convprivacylowerbound}), (\ref{eq:converseprivleakupperbound}), (\ref{eq:storageconv1}), (\ref{eq: RsjRwjsumconversegs}), and (\ref{eq:highestsumrateconverse}), and letting $\delta_n\rightarrow0$. Similarly, the outer bound for the CS model follows by using the introduced random variables in (\ref{eq:secretkeyconv1}), (\ref{eq:convprivacylowerbound}), (\ref{eq:converseprivleakupperbound}), (\ref{eq:storageconv2}), (\ref{eq:Rwjupperboundcs}), and (\ref{eq:highestsumrateconversecs}), and letting $\delta_n\rightarrow0$. \section{Conclusion}\label{sec:conclusion} We derived inner bounds for the multi-entity key-leakage-storage regions for GS and CS models with strong secrecy, a hidden identifier source, and correlated noise components at the encoder and decoder measurements that are modeled as BCs. The inner bounds are valid for any finite number of entities that use the same hidden source to agree on a secret key. 
We argued that the mutual key independence constraint we impose makes the proposed multi-entity key agreement problem a proper multi-user extension of the classic single-enrollment key agreement problem, unlike the multi-enrollment key agreement problem considered in the literature. A set of degraded and less-noisy BCs was shown to provide strong privacy without the need for common randomness. We also established inner and outer bounds for the key-leakage-storage regions for a two-enrollment model with measurement channels that are valid for SRAM and RO PUFs. Inner and outer bounds were shown to differ only in the Markov chains imposed, and they match if the storage and privacy-leakage rate constraints are removed. Two examples illustrated that, depending on the constraints of the practical scenario, a single or multiple enrollments might perform better in terms of the secret-key vs. privacy-leakage rate ratio. In future work, we will seek a set of symmetric probability distributions for which the strong helper data independence constraint in the two-enrollment model can be eliminated. \section*{Acknowledgment} O. G\"unl\"u thanks Rafael F. Schaefer for fruitful discussions. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:Introduction} Across multiple disciplines, studies of social media text and metadata have yielded valuable insights into population-level dynamics (e.g., consumer habits \cite{saura2019black}, voting patterns \cite{beauchamp2017predicting}). In several cases, the outcomes have enabled policy makers to more effectively anticipate and respond to concerns amongst their constituents \cite{myslin2013using,burnap2015cyber}. Now, as the world is presented with new and evolving global crises -- e.g., COVID-19, climate change, and racial inequity -- researchers look to build upon the utility of these past analyses to inform decision-making that is almost certain to have enduring ramifications \cite{van2021artificial}. Methods based on machine learning (ML), natural language processing (NLP), and web mining build the foundation of these efforts, offering an opportunity to answer questions that cannot be easily addressed using traditional mechanisms alone \cite{paul2016social,nobles2018std}. At the same time, researchers in these computational communities are aware of how brittle these methods can be. The challenges of transfer learning and domain adaptation are well known \cite{blitzer2007biographies,ruder2019transfer}, with various algorithmic techniques having since been developed to enhance model robustness and improve generalization within novel data distributions \cite{jiang2007instance,huang2019neural}. Yet, how these problems and their proposed solutions affect conclusions within longitudinal studies remains largely absent from applied analyses. Indeed, longitudinal studies almost ubiquitously follow the same formulaic approach. First, acquire ground truth for a target concept within a small sample of data (e.g., regular expressions to identify medical diagnosis disclosures \cite{coppersmith2014quantifying}, follower networks indicating political leaning \cite{al2012homophily}). Next, train a statistical classifier on this data with the objective of re-identifying language associated with the target concept. Finally, apply the trained classifier to a new population of individuals across multiple time steps (e.g., annually, weekly). The first two stages of this modeling procedure have been explored extensively \cite{volkova2015inferring}, but studies validating the final step have been sparse, due largely to the difficulties of obtaining temporally granular ground truth for many high-level concepts \cite{zafarani2015evaluation,choi2020development}. A lack of analyses of temporal robustness of these models belies the seriousness of the problem: language shifts over time -- especially on social media \cite{brigadir2015analyzing,loureiro2022timelms} -- and statistical classifiers degrade in the presence of distributional changes \cite{daume2006domain,huang2019neural}. Three types of distributional change are of particular concern for classifiers applied over time: 1) new terminology is used to convey existing concepts; 2) existing terminology is used to convey new concepts; and 3) semantic relationships remain fixed, but the overall language distribution changes. The latter challenge frequently manifests when major social events cause large-scale shifts in the topic of online conversation (e.g., discussion of healthcare increases during a pandemic, discussion of a political leader increases near an election). Unfortunately, these are often the types of events we seek to study. 
To better understand this gap in the literature, we conduct a case study on estimating changes in depression prevalence during the COVID-19 pandemic, a timely analysis of value to the medical and public health communities that has thus far produced incongruous results across studies \cite{galea2020mental,bray2021racial}. We draw inspiration from research on detecting distributional shifts in language over time \cite{dredze2010we,huang2018examining}, focusing our attention on a recently-introduced method that leverages word embedding neighborhoods to identify semantic shift between multiple domains \cite{gonen2020simple}. We find that semantically-informed feature selection can improve classifier generalization when semantic noise and predictive power are interwoven. More importantly, we provide evidence that semantic shift can introduce undesirable variance in downstream longitudinal monitoring applications, despite having an indistinguishable effect on historical predictive performance. Altogether, our study serves as a cautionary tale to practitioners interested in using social media data and statistical algorithms to derive sensitive population insights. \section{Motivation} \label{sec:Motivation} When the COVID-19 pandemic began in March 2020, healthcare professionals warned of an impending mental health crisis, with economic uncertainty \cite{godinic2020effects}, loss of access to care \cite{yao2020patients}, and physical distancing \cite{galea2020mental} expected to reduce mental wellness. Given the inherent difficulties of measuring mental health at scale using traditional monitoring mechanisms, the healthcare community called upon computational scientists to leverage web data to provide evidence for optimizing crisis mitigation strategies \cite{torous2020digital}. Computational researchers responded by analyzing search queries regarding anxiety and suicidal ideation \cite{ayers2020internet,ayers2021suicide}, developing novel topic models to gather an understanding of the population's concerns \cite{koh2020loneliness}, and applying language-based classifiers to streams of social media text \cite{wolohan2020estimating}. Unfortunately, these inquiries failed to provide unanimous insights that could be used with any confidence to manage the ongoing situation \cite{zelner2021accounting}. For instance, application of a neural language classifier to the general Reddit population estimated an increase of over 50\% in depression after the start of the pandemic \cite{wolohan2020estimating}, despite an analysis of topic distributions within three mental health support subreddits finding evidence to suggest the opposite \cite{biester2020quantifying}. Similarly, multiple keyword-based analyses using Google Trends data suggested anxiety increased relative to expected levels \cite{stijelja2020covid,ayers2020internet}, while others suggested anxiety levels actually remained stable \cite{knipe2020mapping}. Two years later, our understanding of COVID-19's effect on mental health is still evolving. For example, in a survey conducted early in the pandemic by the Centers for Disease Control and Prevention (CDC), 10.7\% of respondents reported having thoughts of suicide in the previous 30 days \citep{czeisler2020mental} (a $2\times$ increase over the expected rate). Later in 2020, data suggested suicide rates remained stable or even fell after the start of the COVID-19 pandemic \citep{ahmad2021quarterly}.
Some argued this drop was the result of a ``pulling together'' effect \citep{ayers2021suicide}, an outcome that had been observed previously during times of crisis \cite{claassen2010effect,gordon2011impact}. However, upon closer inspection, it became clear that this trend was subject to Simpson's paradox \citep{julious1994confounding}. Reductions in suicide rate were observed amongst white individuals, while a significant increase was observed amongst ethnic and racial minorities \citep{mcknight2021racial}, amongst whom stress-inducing factors such as financial instability and food insecurity are more common. These nuances illustrate one dimension of difficulty associated with monitoring mental health -- heterogeneity. In this paper, we will highlight an understudied dimension that affects analyses of web data -- semantic shift. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/keyword_timeseries.pdf} \caption{Proportion of posts containing a subset of depression-indicative $n$-grams.} \label{fig:KeywordTimeseries} \end{figure} \subsection{Understanding the Uncertainty} \label{sec:understand} In general, it is common for web-based research utilizing different datasets and analysis techniques to arrive at varying measurements of a specific phenomenon \cite{hutton2015toward,roy2018clean}. However, failure to understand \emph{why} these discrepancies emerge critically constrains our ability to instill confidence in future use of computational monitoring methods. In the case of the aforementioned computational studies, we argue the primary distinction is the manner in which linguistic units are aggregated and transformed into downstream insights. Indeed, a review of mental-health related $n$-gram usage\footnote{Pointwise Mutual Information of each term within historical samples of depressed individuals was used to determine mental health relevance.} over time (and inspection of the posts in which they are found) highlights how inclusion of semantically unstable terms could confound results (see Figure \ref{fig:KeywordTimeseries}). For example, spikes in usage of the term ``suicide'' in August 2019 are actually a response to Jeffrey Epstein's death. Meanwhile, increased usage of ``panic,'' ``eviction,'' ``vulnerable,'' and ``isolated'' in March 2020 primarily corresponds to discussion of pandemic-specific circumstances (e.g., toilet paper \emph{panic}, \emph{eviction} moratorium, medically \emph{vulnerable} populations). Keyword-based monitoring methods such as those from \citet{stijelja2020covid}, which aggregate counts from a pool of several lexical units together, are vulnerable to measurement error if the underlying semantic distribution of any subset of terms changed during the course of the given analysis period. Statistical language models have a higher capacity to disambiguate contextual usage by leveraging thousands of lexical units simultaneously to arrive at a final inference \cite{biester2020quantifying,wolohan2020estimating}. However, they provide no guarantee that this disambiguation is done correctly in the presence of dramatic distributional shifts \cite{dhingra2021time}. These challenges raise two important questions: \begin{enumerate} \item To what extent has semantic shift affected results regarding mental health in the published literature? \item Is it possible to obtain more reliable longitudinal estimates by explicitly invoking knowledge of semantic shift when training statistical algorithms?
\end{enumerate} \section{Related Work} \label{sec:related} In the machine learning literature, semantic shift often induces what is known as \emph{concept shift} \citep{widmer1996learning,lu2018learning}. That is, we experience a mismatch between the training and deployment conditional distributions $p(y \mid x)$. These shifts occur naturally in a variety of situations where time is involved \citep{ruano2018concept,hu2020no}. For example, consider the word ``lonely'' becoming a weaker indicator of depression around Valentine's Day, when loneliness is expressed more frequently in a romantic context. Concept shift is often addressed in a two-stage ``detect, then react'' manner \citep{yang2005combining}. During the detection phase, the goal is to measure similarity between source and target distributions, raising a flag when the divergence surpasses a predetermined threshold \citep{vallim2014proposal,webb2018analyzing}. When representative labels are available for the target distribution, we can simply select a mixture of source and target data to train a new, more appropriate model \citep{vorburger2006entropy}. In the much more common case when labels for the target distribution are not available, methods attempt to identify subsets of source data that contain patterns found within the target data \citep{hulten2001mining} or build an ensemble of experts weighted by their appropriateness to the target distribution \citep{last2002online,kolter2007dynamic}. Alternative methods operate on the target predictions instead, applying post-hoc corrections to re-calibrate the estimated probabilities \citep{tian2021exploring}. Our study is more similar to the former; we use unlabeled deployment data to inform feature selection. \section{Measuring Semantic Shift} \label{sec:Measuring} The type of manual lexical analysis discussed in \S \ref{sec:understand} is not feasible to perform at scale for statistical language models that often have vocabularies with thousands of terms. Fortunately, a substantial pool of prior work has proposed methods for algorithmically quantifying semantic shift between language domains \cite{dredze2010we,kutuzov2018diachronic}. We choose to leverage a method introduced recently by \citet{gonen2020simple}, which has not only outperformed several state-of-the-art alternatives in preliminary studies \cite{hamilton2016diachronic}, but also shown promise for use by applied practitioners. Core advantages of this methodology include its interpretability, robustness to stochasticity, ease of implementation, and low computational overhead. \citeauthor{gonen2020simple}'s method \cite{gonen2020simple} assumes that semantically stable language has similar sets of neighboring lexical units within word embedding spaces of different domains. More formally, for two domains $\mathcal{P}$ and $\mathcal{Q}$, the semantic stability $S$ of a lexical unit (e.g., word, $n$-gram) $w$ can be measured as: \begin{align*} S(w; \mathcal{P}, \mathcal{Q}) = \frac{\big|\text{nb}_{\mathcal{P}}^{(k)}(w) \cap \text{nb}_{\mathcal{Q}}^{(k)}(w)\big|}{k}, \end{align*} where $\text{nb}_{\mathcal{X}}^{(k)}(w)$ (i.e., the neighborhood of $w$ in $\mathcal{X}$) denotes the top-$k$ set of lexical units nearest to lexical unit $w$ in the word-embedding vector space of domain $\mathcal{X}$, based on a vector distance metric of the modeler's choosing, and $|\cdot|$ denotes set cardinality.
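To make the computation concrete, we include a minimal sketch below (ours, not the method's reference implementation), written in Python using the \texttt{gensim} library; \texttt{corpus\_p} and \texttt{corpus\_q} stand for tokenized posts from the two domains, cosine similarity is assumed as the similarity measure, and the brute-force neighborhood search is kept deliberately simple for clarity.
\begin{verbatim}
from gensim.models import Word2Vec

def neighborhood(model, word, vocab, k):
    # Top-k nearest lexical units to `word` within `vocab`, by cosine similarity.
    sims = [(v, model.wv.similarity(word, v)) for v in vocab if v != word]
    sims.sort(key=lambda pair: -pair[1])
    return {v for v, _ in sims[:k]}

def stability(model_p, model_q, word, vocab, k=100):
    # S(w; P, Q): overlap of the word's embedding neighborhoods across domains.
    nb_p = neighborhood(model_p, word, vocab, k)
    nb_q = neighborhood(model_q, word, vocab, k)
    return len(nb_p & nb_q) / k

# One embedding model per domain (e.g., per time period); min_count plays the
# role of the minimum-frequency hyperparameter discussed below.
model_p = Word2Vec(corpus_p, vector_size=100, min_count=50)
model_q = Word2Vec(corpus_q, vector_size=100, min_count=50)
shared = sorted(set(model_p.wv.key_to_index) & set(model_q.wv.key_to_index))
scores = {w: stability(model_p, model_q, w, shared, k=100) for w in shared}
\end{verbatim}
Ranking terms by these scores and keeping the top $p\%$ yields the semantically stable vocabularies used for feature selection later in this paper.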
Hyperparameters include the neighborhood size $k$, the minimum frequency of $n$-grams used for building each neighborhood $\text{cf}_{\text{nb}}$, the minimum frequency of $n$-grams input to the semantic shift calculation $\text{cf}_{\text{shift}}$, the distance function used for measuring a word's neighborhood, and the embedding model architecture. For the purpose of measuring semantic shift longitudinally, we can think of independent, discrete time periods as the domains $\mathcal{P, Q}$. We include our parameterization choices in Appendix \ref{apx:predparam}. \begin{table}[t!] \centering \begin{tabular}{cccc} \textbf{Dataset} & \textbf{Platform} & \textbf{Dates} & \textbf{\# Users} \\ \toprule CLPsych \cite{coppersmith2015clpsych} & Twitter & 2012 - 2014 & \begin{tabular}{@{}c@{}}C: 477\\D: 477\end{tabular} \\ \begin{tabular}{@{}c@{}}Multi-Task\\Learning \cite{benton2017multitask}\end{tabular} & Twitter & 2012 - 2016 & \begin{tabular}{@{}c@{}}C: 1,400\\D: 1,400\end{tabular} \\ SMHD \cite{cohan2018smhd} & Reddit & 2013 - 2018 & \begin{tabular}{@{}c@{}}C: 127,251\\D: 14,139\end{tabular}\\ \begin{tabular}{@{}c@{}}Topic-Restricted\\Text \cite{wolohan2018detecting}\end{tabular} & Reddit & 2016 - 2020 & \begin{tabular}{@{}c@{}}C: 107,274\\D: 9,210\end{tabular} \\ \midrule 1\% Stream & Twitter & 1/2019 - 7/2020 & All: 25,379 \\ Pushshift.io \cite{baumgartner2020pushshift} & Reddit & 1/2019 - 7/2020 & All: 40,671 \\ \bottomrule \end{tabular} \caption{Summary statistics for labeled (top) and unlabeled (bottom) datasets. Labeled dataset statistics are further broken out as a function of (C)ontrol and (D)epression groups.} \label{tab:DataTable} \end{table} \section{Data} \label{sec:Data} To comprehensively understand how semantic shift may influence downstream longitudinal analyses, we leverage datasets which come from multiple social media platforms, span a wide range of time periods, and rely on different annotation/sampling mechanisms. As mentioned before (\S \ref{sec:Motivation}), we focus on the task of estimating depression prevalence, an important undertaking within the public health community \cite{gelenberg2010prevalence} due to the substantial burden on individuals, communities, and society \cite{dressler1991stress, lepine2011increasing,chang2012economic}. While we consider this specific use case, our analyses are generally applicable to longitudinal monitoring of social media. \textbf{Institutional Oversight} This research was deemed exempt from review by our Institutional Review Board (IRB) under 45 CFR \S 46.104. All datasets in this study are either publicly accessible (via authenticated application programming interfaces) or available through secure distribution mechanisms (i.e., non-commercial data usage agreements). Given the sensitive nature of mental health data, we abide by additional protocols enumerated in \citet{benton2017ethical} to govern all data collection, storage, and analysis. We discuss the ethics of conducting this type of research (i.e., sensitive attribute tracking) in \S \ref{sec:Ethics}. \subsection{Labeled Data} To numerically quantify the effect semantic shift has on predictive generalization, we consider four widely adopted datasets containing ground truth annotations of individual-level depression status.
To diversify our data sample and understand platform-specific differences, we consider two Twitter datasets -- 2015 CLPsych Shared Task \cite{coppersmith2015clpsych}, Multi-Task Learning \cite{benton2017multitask} -- and two Reddit datasets -- Topic-restricted Text \cite{wolohan2018detecting}, Self-reported Mental Health Diagnoses (SMHD) \cite{cohan2018smhd}. Each dataset relies on a form of distant supervision; the Topic-restricted Text dataset assumes original posts made in the r/depression subreddit serve as a proxy for a depression diagnosis, while the remaining three datasets use regular expressions to identify self-disclosures of a depression diagnosis. This annotation procedure remains widely used to train classifiers for monitoring population-level trends due to challenges inherent in acquiring sufficient samples of annotated data \cite{de2013social,chancellor2020methods}, but it is prone to introducing label noise and other sampling artifacts \cite{ernala2019methodological}. \subsection{Unlabeled Data} Our primary interest is understanding the practical effects of semantic shift in longitudinal monitoring applications. Accordingly, we collect large samples of text data from both Twitter and Reddit to use for extrinsic model evaluation. Our sampling procedures are inspired by those used in prior COVID-19 related work \citep{saha2020psychosocial,wolohan2020estimating}. We acquire raw Twitter data from the platform's streaming API, a 1\% sample of all public tweets available for non-commercial research use. We isolate all original tweets (i.e., no retweets) that include an `en' language metadata attribute and are further classified as being written in English based on automatic language identification \cite{lui2012langid}. To facilitate application of our statistical classifiers, which require multiple documents from each individual to make accurate inferences, we further isolate individuals with at least 400 posts across the entire study time period (January 1, 2019 through July 1, 2020). We sample Reddit data within the same time period using the Pushshift.io archive \cite{baumgartner2020pushshift}, which, unlike the Twitter streaming API, provides access to nearly all historical Reddit data \cite{gaffney2018caveat}. We begin data collection by identifying all users who posted a comment in one of the 50 most popular subreddits\footnote{Based on total number of subscribers as of 6/01/2020. Statistics sourced from https://subredditstats.com} between May 25, 2020 and June 1, 2020. Of the 1.2 million unique users identified by this query, roughly 200k were identified to have posted at least once per week during January 2019 and to not exhibit clear indicators of bot activity (e.g., repeated comments, username indicators, abnormal activity volume). We collect the entire public comment history from January 1, 2019 through July 1, 2020 for a random sample of 50k users in this cohort and perform additional filtering to isolate English data and users who have at least 200 posts across the study time period. Summary statistics for all datasets are provided in Table \ref{tab:DataTable}.
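As a rough illustration of the user-level inclusion criteria described above, consider the following sketch (our own simplification; the majority-English rule and the duplicate-comment bot heuristic are assumptions, not the exact filters used in this study):
\begin{verbatim}
import langid  # automatic language identification (Lui and Baldwin, 2012)

MIN_POSTS = {"twitter": 400, "reddit": 200}  # thresholds from the text

def looks_like_bot(posts):
    # Crude stand-in heuristic: flag accounts whose comments are mostly duplicates.
    texts = [p["text"] for p in posts]
    return len(set(texts)) < 0.5 * len(texts)

def keep_user(platform, posts):
    # posts: list of dicts with a "text" field, one entry per post by the account.
    if len(posts) < MIN_POSTS[platform]:
        return False
    n_english = sum(langid.classify(p["text"])[0] == "en" for p in posts)
    if n_english < 0.5 * len(posts):  # assumption: keep majority-English accounts
        return False
    return not looks_like_bot(posts)
\end{verbatim}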
\begin{table*}[t] \centering \begin{tabular}{c c c c c c c c c c c} & & & & \multicolumn{3}{c}{\textbf{Na\"{i}ve}} & \multicolumn{2}{c}{\textbf{Statistical}} & \multicolumn{2}{c}{\textbf{Semantic}} \\ \cmidrule(lr){5-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} & \textbf{Dataset} & \textbf{Train} & \textbf{Test} & \textbf{Cumulative} & \textbf{Intersection} & \textbf{Frequency} & \textbf{Chi-Squared} & \textbf{Coefficient} & \textbf{Overlap} & \textbf{Weighted} \\ \toprule \multirow{7}{*}{\rotatebox[origin=c]{90}{\textbf{Twitter}}} & CLPsych & 2012-2013 & 2013-2014 & 0.656\hphantom{*} & 0.677\hphantom{*} & 0.676\hphantom{*} & 0.687\hphantom{*} & 0.677\hphantom{*} & \textbf{0.715*} & 0.696\hphantom{*} \\ \cmidrule{2-11} & \multirow{2}{*}{\begin{tabular}{@{}c@{}}Multi-Task\\Learning\end{tabular}} & 2012-2013 & 2013-2014 & 0.746\hphantom{*} & 0.759\hphantom{*} & 0.759\hphantom{*} & 0.761\hphantom{*} & 0.757\hphantom{*} & \textbf{0.779*} & 0.772\hphantom{*}\\ & & & 2014-2015 & 0.703\hphantom{*} & 0.760\hphantom{*} & 0.760\hphantom{*} & 0.762\hphantom{*} & 0.758\hphantom{*} & \textbf{0.778*} & 0.765\hphantom{*} \\ & & & 2015-2016 & 0.699\hphantom{*} & 0.775\hphantom{*} & 0.773\hphantom{*} & 0.777\hphantom{*} & 0.772\hphantom{*} & \textbf{0.783*} & 0.772\hphantom{*} \\ & & 2012-2014 & 2014-2015 & 0.778\hphantom{*} & 0.779\hphantom{*} & 0.778\hphantom{*} & 0.781\hphantom{*} & 0.783\hphantom{*} & 0.781\hphantom{*} & \textbf{0.786}\hphantom{*} \\ & & & 2015-2016 & 0.788\hphantom{*} & 0.787\hphantom{*} & 0.787\hphantom{*} & 0.789\hphantom{*} & \textbf{0.792}\hphantom{*} & 0.789\hphantom{*} & 0.791\hphantom{*} \\ & & 2012-2015 & 2015-2016 & 0.799\hphantom{*} & 0.800\hphantom{*} & 0.800\hphantom{*} & 0.800\hphantom{*} & \textbf{0.806}\hphantom{*} & 0.802\hphantom{*} & \textbf{0.806}\hphantom{*} \\ \midrule \multirow{16}{*}{\rotatebox[origin=c]{90}{\textbf{Reddit}}} & \multirow{3}{*}{\begin{tabular}{@{}c@{}}Topic\\Restricted\\Text\end{tabular}} & 2016-2017 & 2017-2018 & 0.659\hphantom{*} & \textbf{0.662}\hphantom{*} & 0.661\hphantom{*} & 0.660\hphantom{*} & 0.660\hphantom{*} & 0.661\hphantom{*} & 0.661\hphantom{*} \\ & & & 2018-2019 & \textbf{0.670}\hphantom{*} & 0.669\hphantom{*} & 0.668\hphantom{*} & 0.668\hphantom{*} & 0.668\hphantom{*} & 0.666\hphantom{*} & 0.668\hphantom{*} \\ & & & 2019-2020 & \textbf{0.670*} & 0.665\hphantom{*} & 0.663\hphantom{*} & 0.665\hphantom{*} & 0.667\hphantom{*} & 0.666\hphantom{*} & 0.667\hphantom{*} \\ & & 2016-2018 & 2018-2019 & 0.667\hphantom{*} & 0.672\hphantom{*} & 0.671\hphantom{*} & 0.671\hphantom{*} & 0.669\hphantom{*} & \textbf{0.674}\hphantom{*} & 0.669\hphantom{*} \\ & & & 2019-2020 & 0.672\hphantom{*} & 0.674\hphantom{*} & 0.674\hphantom{*} & 0.674\hphantom{*} & 0.674\hphantom{*} & \textbf{0.675}\hphantom{*} & 0.674\hphantom{*} \\ & & 2016-2019 & 2019-2020 & 0.667\hphantom{*} & 0.668\hphantom{*} & 0.669\hphantom{*} & 0.668\hphantom{*} & 0.668\hphantom{*} & \textbf{0.674*} & 0.670\hphantom{*} \\ \cmidrule{2-11} & \multirow{1}{*}{\begin{tabular}{@{}c@{}}SMHD\end{tabular}} & 2013-2014 & 2014-2015 & 0.799\hphantom{*} & 0.798\hphantom{*} & \textbf{0.803}\hphantom{*} & 0.799\hphantom{*} & 0.799\hphantom{*} & 0.799\hphantom{*} & 0.799\hphantom{*} \\ & & & 2015-2016 & 0.801\hphantom{*} & 0.800\hphantom{*} & 0.800\hphantom{*} & \textbf{0.805}\hphantom{*} & 0.801\hphantom{*} & 0.802\hphantom{*} & 0.802\hphantom{*} \\ & & & 2016-2017 & 0.792\hphantom{*} & 0.792\hphantom{*} & 0.793\hphantom{*} & 0.798\hphantom{*} & 0.797\hphantom{*} & 0.792\hphantom{*} & \textbf{0.799}\hphantom{*} \\
& & & 2017-2018 & 0.799\hphantom{*} & 0.800\hphantom{*} & 0.800\hphantom{*} & 0.803\hphantom{*} & 0.804\hphantom{*} & 0.804\hphantom{*} & \textbf{0.808}\hphantom{*} \\ & & 2013-2015 & 2015-2016 & 0.797\hphantom{*} & 0.795\hphantom{*} & 0.798\hphantom{*} & 0.799\hphantom{*} & 0.798\hphantom{*} & \textbf{0.801}\hphantom{*} & 0.799\hphantom{*} \\ & & & 2016-2017 & 0.786\hphantom{*} & 0.785\hphantom{*} & 0.787\hphantom{*} & 0.790\hphantom{*} & 0.790\hphantom{*} & 0.788\hphantom{*} & \textbf{0.791}\hphantom{*} \\ & & & 2017-2018 & 0.796\hphantom{*} & 0.796\hphantom{*} & 0.802\hphantom{*} & 0.799\hphantom{*} & 0.804\hphantom{*} & 0.804\hphantom{*} & \textbf{0.807}\hphantom{*} \\ & & 2013-2016 & 2016-2017 & 0.790\hphantom{*} & 0.790\hphantom{*} & 0.791\hphantom{*} & 0.792\hphantom{*} & 0.793\hphantom{*} & 0.792\hphantom{*} & \textbf{0.794}\hphantom{*} \\ & & & 2017-2018 & 0.798\hphantom{*} & 0.796\hphantom{*} & 0.804\hphantom{*} & 0.798\hphantom{*} & 0.804\hphantom{*} & 0.806\hphantom{*} & \textbf{0.808}\hphantom{*} \\ & & 2013-2017 & 2017-2018 & 0.799\hphantom{*} & 0.797\hphantom{*} & 0.804\hphantom{*} & 0.800\hphantom{*} & 0.803\hphantom{*} & 0.808\hphantom{*} & \textbf{0.810}\hphantom{*} \\ \bottomrule \end{tabular} \caption{Average F1 score for the best performing vocabulary size of each feature selection method. Bolded values indicate top performers within each test set, while asterisks (*) indicate significant improvement over alternative classes of feature selection (i.e., Naive vs. Statistical vs. Semantic). Semantically-informed vocabulary selection matches or outperforms alternatives in nearly all instances, despite lacking knowledge of target outcome.} \label{tab:performance} \end{table*} \section{Predictive Generalization} \label{sec:Generalization} Our ultimate goal is to understand how the presence of semantic shift affects downstream outcomes obtained from longitudinal analyses of social media data. Critical to the success of this goal is a methodology for controlling a statistical classifier's access to semantically unstable features when making inferences on unseen data. In this initial experiment, we demonstrate that \citeauthor{gonen2020simple}'s method for measuring semantic shift \cite{gonen2020simple} can be adapted with minimal effort to curate vocabularies with constrained levels of semantic stability. Further, we demonstrate that these vocabularies often improve predictive generalization and outperform alternative feature selection methods despite lacking an explicit awareness for the target classification outcome. \subsection{Methods} We design our experiment with the intention of replicating a standard deployment paradigm seen within longitudinal analyses. Language classifiers are fit on historical accumulations of annotated data and evaluated iteratively within future one-year-long time windows (see Table \ref{tab:performance}). The influence of semantic shift on generalization is measured by comparing predictive performance (F1-score) of classifiers trained using a subset of semantically-stable terms to performance of classifiers trained using alternative feature selection methods which lack awareness of semantic shift altogether. We outline the full experimental design in Appendix \ref{apx:generalization}. \textbf{Feature Selection} Semantic stability scores $S$ are computed for each source (training) and target (evaluation) time period combination using \citeauthor{gonen2020simple}'s method \cite{gonen2020simple}. 
We vary vocabulary size in linear, 10-percentile intervals until all available tokens are used for training the language classifier. All vocabulary selection methods are enumerated below, chosen to encompass a variety of common strategies (na\"{i}ve and statistical) for reducing dimensionality and enhancing model performance. \begin{itemize} \item \textbf{Cumulative}: Frequency $>$ 50 in the source time period \item \textbf{Intersection}: Frequency $>$ 50 in the source \& target time periods \item \textbf{Frequency}: Top $p\%$ of $n$-grams with highest frequency \item \textbf{Random}: Randomly selected terms, $p\%$ of the total available vocabulary \item \textbf{Chi-Squared}: Top $p\%$ of $n$-grams with highest chi-squared test statistic \cite{pedregosa2011scikit} \item \textbf{Coefficient}: Top $p\%$ of $n$-grams with highest absolute logistic regression weight within the training data \item \textbf{Overlap}: Top $p\%$ of $n$-grams with the highest semantic stability score $S$ \item \textbf{Weighted (Overlap)}: Top $p\%$ of $n$-grams with the highest 50/50 weighted combination of Coefficient and Overlap scores \end{itemize} All feature selection methods below \emph{Frequency} (inclusive) are a subset of the \emph{Intersection} method. We introduce \emph{Weighted (Overlap)} to balance the predictive value of a given feature and its semantic stability, theorizing that a vocabulary based solely on semantic stability may come at the cost of significant predictive power, while a vocabulary solely based on within-domain predictive power will be vulnerable to generalization issues. \subsection{Results} \subsubsection{Stability Analysis} To validate our implementation of \citeauthor{gonen2020simple}'s method \cite{gonen2020simple} and build context for our classification results, we first manually inspect a sample of learned semantic stability scores for each dataset. On a distribution level, we see that stability scores tend to decrease as the gap between training and evaluation time periods increases -- evidence of increased semantic shift over time. Additionally, we note that semantic stability scores within the Twitter datasets are generally lower than scores within the Reddit datasets. These platform-specific differences align with our prior understanding of each platform's design, with Twitter tending to foster conversations motivated by current events (i.e., personal and global conflict) and Reddit offering individuals an opportunity to connect through shared interests that evolve over longer time periods \cite{noble2021semantic}. For all datasets, common nouns and verbs make up the majority of terms with the highest semantic stability scores (e.g., eat, bring, give, city, room, pain). These types of tokens arise only infrequently within the lower tier of semantic stability scores, typically a result of isolated conflation with current events/pop culture -- names of video games (e.g., blackout, warzone), television characters and celebrities (e.g., sandy, gore, rose), and athletic organizations (e.g., twins, braves, cal). Hashtags are frequently found in the lower semantic stability tier for the Twitter datasets, a reflection of the diversity of conversations in which they are used. Broadly, most of the observed semantic shift can be described as changes in the popularity of different word senses \cite{haase2021scot}.
Although this suggests that contextual language models \cite{devlin2018bert} would be well-suited for mitigating the effect of semantic shift in longitudinal analyses, emerging research suggests this is not necessarily true in the absence of additional tuning \cite{dhingra2021time,loureiro2022timelms}. \subsubsection{Generalization} We find that classifiers trained using vocabularies derived with a knowledge of semantic stability achieve equal or better predictive performance than alternative feature selection techniques in the majority of classification settings (Table \ref{tab:performance})\footnote{We exclude the Random method to save space, but note it performed significantly worse across all settings as expected.}. Semantic stability tends to be more useful for generalization within the Twitter datasets than the Reddit datasets, likely due to the aforementioned platform-specific distributions. In all cases, joint use of semantic stability and coefficient weights to derive feature selection scores (i.e., \emph{Weighted (Overlap)}) matches or moderately improves performance over use of coefficient weights in isolation. Finally, we note that the semantically-informed vocabulary selection methods not only offer reasonably wide operating windows (usually 20 to 50\% of the total vocabulary size), but also tend to correlate with performance within source time periods. This latter detail suggests that semantically-stable vocabulary selection can be adequately performed in the absence of validation samples from a target time period, a necessity for most longitudinal analyses. We leave hyperparameter optimization for this methodology as an area for future exploration. \section{Practical Effects of Semantic Shift} \label{sec:ApplyExp} \begin{figure*}[t!] \centering \includegraphics[width=0.9\linewidth]{figures/performance-prevalence.pdf} \caption{Horizontal bars denote each dataset's estimate under the na\"{i}ve, \emph{Intersection} baseline. Curves denote performance over varying sizes of vocabulary selected based on semantic stability $S$. (Left) Average F1 score within held-out samples drawn from each dataset's complete time period. Performance is largely indistinguishable for several of the vocabulary sizes. (Right) Estimated change in depression prevalence as a function of vocabulary.}% \label{fig:OutcomesSemantics} \end{figure*} Having demonstrated that semantically-aware vocabulary selection methods achieve comparable performance to alternative techniques using a fraction of features and can even improve predictive generalization outright, we turn our attention to understanding the practical effects semantic shift has in longitudinal modeling applications. Specifically, we leverage our ability to systematically constrain a language classifier's access to semantically volatile terms to evaluate how estimates of depression prevalence vary in the presence of semantic shift. Ultimately, we find that small changes in the vocabulary of a language classifier can promote large deviations in downstream outcomes, despite offering little to no indication of concern within historical data samples. \subsection{Methods} We leverage a similar experimental design to that from \S \ref{sec:Generalization}, making small methodological changes to acutely focus on understanding the practical effect of semantic shift in a deployment scenario. For example, we now model the entire time span of each annotated dataset without additional temporal splitting. 
Furthermore, semantic vocabulary selection is performed using embeddings learned from pairs of labeled and unlabeled datasets (e.g., CLPsych and the 1\% Twitter Stream) instead of discrete time window pairs within each of the labeled datasets. The full list of modifications is provided in Appendix \ref{apx:practical}; we release our code to support future research and allow others to reproduce our analysis.\footnote{\url{https://github.com/kharrigian/semantic-shift-websci-2022}} To control for seasonal effects, we focus on estimating the year-over-year change in the prevalence of depression-indicative language amongst individuals in each of the unlabeled social media samples. Each unlabeled data sample is split into two distinct time periods -- March 1, 2019 to July 1, 2019 (Pre-Pandemic) and March 1, 2020 to July 1, 2020 (During-Pandemic). Classifiers are applied to each individual in the unlabeled temporal samples who meets a minimum post-volume criterion -- 200 posts for Twitter and 100 for Reddit. We compute the prevalence of depression as the proportion of users in the unlabeled sample who have a predicted probability of depression greater than 0.5. We then measure the difference in estimated prevalence between the two time periods as a function of the underlying model vocabulary. \subsection{Results} \subsubsection{Language Analysis} Semantic stability scores for each of the raw data samples (pre/during COVID-19) align with intuition regarding sociopolitical events of the era. Many of the $n$-grams with the lowest semantic stability are related to the pandemic: ``viral,'' ``masks,'' ``transmission,'' ``isolation,'' ``zoom,'' ``lockdown.'' Of terms that over-index in historical usage amongst individuals with depression, the least semantically stable include: ``panic,'' ``cuts,'' ``isolated,'' ``strain,'' ``vulnerable,'' and ``doctors.'' Each of these terms becomes closely aligned with pandemic-related phenomena that are not explicitly linked to mental health. We provide an overview of the changes in Table \ref{tab:contextshift}; additional examples are included in Table \ref{tab:NeighborExamples} (see appendix). \begin{table}[b] \centering \begin{tabular}{l l l} \textbf{Term} & \textbf{2019 Context} & \textbf{2020 Context} \\ \toprule Panic & Emotion (i.e., Fear) & Panic Buying, Misinformation \\ \midrule Cuts & Physical & Economic \\ \midrule Isolated & Feeling Detached & Quarantine \\ \midrule Strain & Discomfort/Pressure & Virus \\ \midrule Vulnerable & Emotion & At-risk Populations \\ \midrule Doctors & Personal Experience & Frontline Workers, PPE \\ \bottomrule \end{tabular} \caption{Change in the most prevalent context from 2019 to 2020 for a handful of terms which historically over-indexed in usage amongst individuals living with depression.} \label{tab:contextshift} \end{table} \subsubsection{Prevalence Estimates} Turning our attention to the statistical classifiers, we observe that predictive performance as a function of the underlying vocabulary is nearly indistinguishable for vocabularies of size 40\% and higher. However, as shown in Figure \ref{fig:OutcomesSemantics}, we identify significant differences in the estimated change in population-level depression prevalence as a function of the model's underlying vocabulary. In some cases, these differences are relatively minor and lead to the same general conclusions.
In other cases, we arrive at entirely different statements regarding the directional change of depression prevalence (i.e., increase instead of decrease) and absolute change (i.e., nearly 10\% in the case of the minimum and maximum Multi-Task Learning estimates). \section{Ethical Considerations} \label{sec:Ethics} Population monitoring at scale warrants an ethical discussion. We discuss some trade-offs between risk and reward, specifically when studying sensitive characteristics (e.g., mental health status) using social media data. We direct the reader to work from \citet{conway2016social} and \citet{golder2017attitudes} for an expanded review. \subsubsection{Risks} Two serious risks arise from measuring personal characteristics using social media data: 1) discrimination, and 2) measurement error. The former is a challenge associated with any approach used to acquire information about human characteristics or behavior, whether inferred by an algorithm or not. Knowledge of personal attributes could be used by educational institutions to make biased admissions decisions, by law enforcement to track individuals without cause, or by political/government entities to target vulnerable individuals. These concerns are particularly poignant with regard to stigmatized characteristics, such as mental illness. Discriminatory actions based on these characteristics could have long-lasting financial and social consequences -- e.g., difficulty obtaining loans, increased insurance premiums, and exclusion from certain communities. While statistical models are not the only method for gathering this information, they can be used in some situations where other approaches are infeasible \cite{paul2016social}. With respect to the second challenge, we draw the reader's attention to substantial evidence that demonstrates language models trained on social media datasets perform unevenly across different demographic groups \cite{aguirre2021genderandracial} and maintain historical social biases \cite{brunet2019understanding}. These systematic errors in models of mental health may further exacerbate social stratification in an opaque and elusive manner \cite{bender2021dangers}. \subsubsection{Rewards} We must also take care not to ignore the tremendous need for these methods and the benefits they bring. The same technology used to ostracize vulnerable individuals could also be used to provide those individuals with social services. Likewise, access to reasonably accurate classifiers with well-defined bounds of uncertainty could help small organizations acquire sufficient data to optimize resource allocation without needing to invest in the cost-prohibitive infrastructure necessary to execute traditional monitoring at scale (e.g., random digit dialing, online surveys) \cite{vaske2011advantages,shaver2019using}. These opportunities come with a variety of additional advantages over traditional population monitoring mechanisms -- social media monitoring preempts the need to use downstream outcomes that are not useful in situations that require immediate decisions (e.g., latent changes in suicide rate), addresses certain forms of sample bias (e.g., selection bias introduced when individuals opt into a survey, disclosure bias that emerges when individuals are hesitant to discuss stigmatized topics with an interviewer), and provides the opportunity to make comparisons against retrospective baselines.
Moreover, a significant amount of work focuses on methods to mitigate the risk of discrimination \cite{zhao2018gender} and adequately correct for sampling biases specific to social media data \cite{giorgi2021correcting}. The large body of literature on social media monitoring in public health, for example, evidences the tremendous need for these technologies \cite{paul2017social}. It is our responsibility to develop and deploy them in an ethically responsible manner. \subsubsection{Discussion} Practitioners must weigh these trade-offs in the context of their particular use case. In our use case, we note the goal of this study is \emph{not} to make claims about a particular longitudinal trend or even demonstrate the prowess of a statistical modeling approach. Rather, our intention is to understand whether existing models can be trusted for measuring longitudinal trends at all in the presence of semantic shift, and if not, identify potential opportunities for practitioners to improve reliability of their models. The utility of such an exploration would be questionable if these types of models had not already been deployed in academia and beyond. However, one need only look at research published within the last year regarding COVID-19 to see that machine learning classifiers are actively being used to understand a variety of social dynamics, ranging from mental health outcomes \cite{fine2020assessing,tabak2020temporal} to transportation usage \cite{morshed2021impact}. These analyses will form a foundation for public policy in the coming post-pandemic years. It is critical that we answer: are these results reliable?
For example, we note that our current study focuses solely on discrete time windows, an abstraction that is useful for simple monitoring applications, but too constraining for others. It would be of significant value to the longitudinal monitoring community to evaluate whether continuous time and diachronic embeddings offer advantages over their discretized counterparts \cite{hamilton2016diachronic,huang2019neural}. We also recognize that our implementation operates in two distinct stages (i.e., feature selection, model training), a setup which may inhibit performance. A better approach may involve leveraging knowledge of semantic shift to explicitly regularize coefficients at training time. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} We discuss the application of the BFKL method~\cite{BFKL} to the description of semihard processes, i.e. hard processes in the kinematic region where the energy variable $s$ is substantially larger than the hard scale $Q^2$, $s\gg Q^2\gg \Lambda_{\rm QCD}^2$, with $Q$ the typical transverse momentum and $\Lambda_{\rm QCD}$ the QCD scale. This approach makes it possible to resum systematically, to all orders of the perturbation series, the terms enhanced by the leading $\alpha_s^n \ln^n(s/Q^2)$ (LLA) and first subleading $\alpha_s^{n+1}\ln^n(s/Q^2)$ (NLA) logarithms of the energy. In the BFKL approach, relevant physical observables are expressed as a convolution of two impact factors with the Green's function of the BFKL equation. The Green's function is determined through the BFKL equation and is process-inde\-pen\-dent. The next-to-leading order (NLO) kernel of the BFKL equation for the singlet color representation in the $t$-channel and forward scattering, relevant for the determination of the NLA total cross section, was obtained in Refs.~\cite{NLA-kernel}, after the long program of calculation of the NLO corrections~\cite{NLA-corrections} (for a review, see Ref.~\cite{news}). The other essential ingredients are the impact factors, which are not universal and must be calculated process by process. Indeed, only a few of them are known with NLO accuracy. Both the impact factors and the BFKL kernel receive large NLO corrections in the $\overline{\rm{MS}}$ renormalization scheme. The practical application of this approach to physical processes therefore encounters serious difficulties, due not only to the large NLO corrections, but also to large renormalization scale setting uncertainties, thus calling for some optimization procedure of the QCD perturbative series. In this paper we focus on the widely-used Brodsky-Lepage-Mackenzie (BLM) approach~\cite{BLM} to deal with this problem, which relies on the removal of the renormalization scale ambiguity by absorbing the non-conformal $\beta_0$-terms into the running coupling. It is known that after BLM scale setting, the QCD perturbative convergence can be greatly improved due to the elimination of renormalon terms in the perturbative QCD series. Moreover, with the BLM scale setting, the BFKL Pomeron intercept has a weak dependence on the virtuality of the Reggeized gluon~\cite{Brodsky:1998kn,Brodsky:2002ka}. We apply the BLM scale setting procedure directly to the amplitudes (cross sections) of several semihard processes. It is shown that, due to the presence of $\beta_0$-terms in the NLO expressions for the impact factors, the resulting optimal renormalization scale is not universal and depends both on the energy and on the type of process in question. We illustrate this general conclusion by considering in detail the following semihard processes: \begin{itemize} \item the forward amplitude of production of two light vector mesons in the collision of two virtual photons, $\gamma^*\gamma^*\to V_1V_2$; \item the high-energy behavior of the total cross section for highly virtual photons, $\gamma^*\gamma^*\to X$; \item the inclusive production of two forward, high-$p_T$ jets separated by a large interval in rapidity $\Delta y$ (Mueller-Navelet jets), $p+p\to {\rm jet}+{\rm jet} +X$. \end{itemize} At present we do not have a model-independent method to resum the BFKL series beyond the NLA logarithms of the energy.
Therefore we strictly adhere here to the original formulation of the BLM procedure and do not consider its higher-order extensions, such as the \emph{sequential extended BLM}~\cite{Mikhailov:2004iq} and the \emph{Principle of Maximum Conformality}~\cite{pmc} (see~\cite{Wu:2013ei} for a review on the latter method; see also~\cite{Wu:2014iba,Kataev:2014jba,Kataev:2014zwa,Ma:2015dxa} for some recent comparisons between different optimization methods). The paper is organized as follows: in section~2 we rederive the general expression for the NLA BFKL amplitude in the $(\nu,n)$-representation; in section~3 we discuss in detail the implementation of the BLM scale setting method, both in an exact way and in some approximate forms; in section~4 we present the applications of the procedure to the three different processes mentioned above; finally, in section~5 we draw our conclusions and discuss previous studies of semihard processes with the BLM method. \section{The BFKL amplitude} The cross section and many other physical observables are directly related to the forward amplitude, which in the BFKL approach can be expressed as follows: \beq{ampl} {\rm Im}_s \left( {\cal A} \right)=\frac{s}{(2\pi)^{2}}\int\frac{d^{2}\vec q_1}{\vec q_1^{\,\, 2}}\Phi_1(\vec q_1,s_0)\int \frac{d^{2}\vec q_2}{\vec q_2^{\,\,2}} \Phi_2(-\vec q_2,s_0) \int\limits^{\delta +i\infty}_{\delta -i\infty}\frac{d\omega}{2\pi i}\left(\frac{s}{s_0}\right)^\omega G_\omega (\vec q_1, \vec q_2)\, . \end{equation} This expression holds with NLA accuracy. Here, $s$ is the squared center-of-mass energy, whereas $s_0$ is an artificial scale, introduced to perform the Mellin transform from the $s$-space to the complex angular momentum plane, which cancels in the full expression, up to terms beyond the NLA. All momenta entering this expression are defined on the transverse plane and are therefore two-dimensional. $\Phi_{1,2}$ are the NLO impact factors specific to the process; we will see three different examples of them later on. The Green's function $G_\omega$ takes care of the universal, energy-dependent part of the amplitude. It obeys the BFKL equation \begin{equation} \omega \, G_\omega (\vec q_1,\vec q_2) =\delta^{(2)} (\vec q_1-\vec q_2) +\int d^{2}\vec q \, K(\vec q_1,\vec q) \,G_\omega (\vec q, \vec q_2) \;, \end{equation} where $K(\vec q_1,\vec q_2)$ is the BFKL kernel. In this section we will derive a general form for the amplitude in the so-called $(\nu,n)$-representation, which will provide us with the starting point of our further analysis. We will proceed along the same lines as Refs.~\cite{mesons}. First of all, it is convenient to work in the transverse momentum representation, defined by \beq{transv} \hat{\vec q}\: |\vec q_i\rangle = \vec q_i|\vec q_i\rangle\;, \end{equation} \beq{norm} \langle\vec q_1|\vec q_2\rangle =\delta^{(2)}(\vec q_1 - \vec q_2) \;, \hspace{2cm} \langle A|B\rangle = \langle A|\vec k\rangle\langle\vec k|B\rangle = \int d^2k A(\vec k)B(\vec k)\;. \end{equation} In this representation, the forward amplitude~(\ref{ampl}) takes the very simple form \beq{ampl-transv} {\rm Im}_s\left({\cal A}\right)=\frac{s}{(2\pi)^2} \int_{\delta-i\infty}^{\delta+i\infty}\frac{d\omega}{2\pi i} \, \left(\frac{s}{s_0}\right)^\omega \langle\frac{\Phi_1}{\vec q_1^{\,\,2}}|\hat G_\omega|\frac{\Phi_2}{\vec q_2^{\,\,2}} \rangle \ .
\end{equation} The kernel of the operator $\hat K$ becomes \beq{kernel-op} K(\vec q_2, \vec q_1) = \langle\vec q_2| \hat K |\vec q_1\rangle \end{equation} and the equation for the Green's function reads \beq{Groper} \hat 1=(\omega-\hat K)\hat G_\omega\;, \end{equation} its solution being \beq{Groper1} \hat G_\omega=(\omega-\hat K)^{-1} \, . \end{equation} The kernel is given as an expansion in the strong coupling, \beq{kern} \hat K=\bar \alpha_s \hat K^0 + \bar \alpha_s^2 \hat K^1\;, \end{equation} where \beq{baral} {\bar \alpha_s}=\frac{\alpha_s N_c}{\pi} \end{equation} and $N_c$ is the number of colors. In Eq.~(\ref{kern}) $\hat K^0$ is the BFKL kernel in the LO, while $\hat K^1$ represents the NLO correction. To determine the cross section with NLA accuracy we need an approximate solution of Eq.~(\ref{Groper1}). With the required accuracy this solution is \beq{exp} \hat G_\omega=(\omega-\bar \alpha_s\hat K^0)^{-1}+ (\omega-\bar \alpha_s\hat K^0)^{-1}\left(\bar \alpha_s^2 \hat K^1\right) (\omega-\bar \alpha_s \hat K^0)^{-1}+ {\cal O}\left[\left(\bar \alpha_s^2 \hat K^1\right)^2\right] \, . \end{equation} The basis of eigenfunctions of the LO kernel, \beq{KLLA} \hat K^0 |n,\nu\rangle = \chi(n,\nu)|n,\nu\rangle \, , \;\;\;\;\;\;\;\;\;\; \chi (n,\nu)=2\psi(1)-\psi\left(\frac{n}{2}+\frac{1}{2}+i\nu\right) -\psi\left(\frac{n}{2}+\frac{1}{2}-i\nu\right)\, , \end{equation} is given by the following set of functions: \beq{nuLLA} \langle\vec q\, |n,\nu\rangle =\frac{1}{\pi\sqrt{2}} \left(\vec q^{\,\, 2}\right)^{i\nu-\frac{1}{2}}e^{in\phi} \;, \end{equation} where $\phi$ is the azimuthal angle of the vector $\vec q$ counted from some fixed direction in the transverse space, $\cos\phi \equiv q_x/|\vec q\,|$. Then, the orthonormality and completeness conditions take the form \beq{ort} \langle n',\nu^\prime | n,\nu\rangle =\int \frac{d^2 \vec q} {2 \pi^2 }\left(\vec q^{\,\, 2}\right)^{i\nu-i\nu^\prime -1} e^{i(n-n')\phi}=\delta(\nu-\nu^\prime)\, \delta_{nn'} \end{equation} and \beq{comp} \hat 1 =\sum^{\infty}_{n=-\infty}\int\limits^{\infty}_{-\infty}d\nu \, | n,\nu\rangle\langle n,\nu |\ . \end{equation} The action of the full NLO BFKL kernel on these functions may be expressed as follows: \bea{Konnu} \hat K|n,\nu\rangle &=& \bar \alpha_s(\mu_R) \chi(n,\nu)|n,\nu\rangle +\bar \alpha_s^2(\mu_R)\left(\chi^{(1)}(n,\nu) +\frac{\beta_0}{4N_c}\chi(n,\nu)\ln(\mu^2_R)\right)|n,\nu\rangle \nonumber \\ &+& \bar \alpha_s^2(\mu_R)\frac{\beta_0}{4N_c}\chi(n,\nu) \left(i\frac{\partial}{\partial \nu}\right)|n,\nu\rangle \;, \end{eqnarray} where $\mu_R$ is the renormalization scale of the QCD coupling; the first term represents the action of the LO kernel, while the second and the third ones stand for the diagonal and the non-diagonal parts of the NLO kernel, respectively. We have used \beq{beta00} \beta_0=\frac{11 N_c}{3}-\frac{2 n_f}{3}\;, \end{equation} where $n_f$ is the number of active quark flavors. The function $\chi^{(1)}(n,\nu)$, calculated in~\cite{Kotikov:2000pm} (see also~\cite{Kotikov:2000pm2}), is conveniently represented in the form \beq{ch11} \chi^{(1)}(n,\nu)=-\frac{\beta_0}{8\, N_c}\left(\chi^2(n,\nu)-\frac{10}{3} \chi(n,\nu)-i\chi^\prime(n,\nu)\right) + {\bar \chi}(n,\nu)\, , \end{equation} where \beq{chibar} \bar \chi(n,\nu)\,=\,-\frac{1}{4}\left[\frac{\pi^2-4}{3}\chi(n,\nu) -6\zeta(3)-\chi^{\prime\prime}(n,\nu) +\,2\,\phi(n,\nu)+\,2\,\phi(n,-\nu) \right. \end{equation} \[ + \left.
\frac{\pi^2\sinh(\pi\nu)}{2\,\nu\, \cosh^2(\pi\nu)} \left( \left(3+\left(1+\frac{n_f}{N_c^3}\right)\frac{11+12\nu^2}{16(1+\nu^2)}\right) \delta_{n0} -\left(1+\frac{n_f}{N_c^3}\right)\frac{1+4\nu^2}{32(1+\nu^2)}\delta_{n2} \right)\right] \, , \] \beq{phi} \phi(n,\nu)\,=\,-\int\limits_0^1dx\,\frac{x^{-1/2+i\nu+n/2}}{1+x} \left[\frac{1}{2}\left(\psi'\left(\frac{n+1}{2}\right)-\zeta(2)\right) +\mbox{Li}_2(x)+\mbox{Li}_2(-x) \right. \end{equation} \[ \left. +\ln x \left(\psi(n+1)-\psi(1)+\ln(1+x)+\sum_{k=1}^\infty\frac{(-x)^k} {k+n}\right)+\sum_{k=1}^\infty\frac{x^k}{(k+n)^2}(1-(-1)^k)\right] \] \[ =\sum_{k=0}^\infty\frac{(-1)^{k+1}}{k+(n+1)/2+i\nu}\left[\psi'(k+n+1) -\psi'(k+1)+(-1)^{k+1}(\beta'(k+n+1)+\beta'(k+1))\right. \] \[ \left. -\frac{1}{k+(n+1)/2+i\nu}(\psi(k+n+1)-\psi(k+1))\right] \, , \] \[ \beta'(z)=\frac{1}{4}\left[\psi'\left(\frac{z+1}{2}\right) -\psi'\left(\frac{z}{2}\right)\right]\;, \;\;\;\;\; \mbox{Li}_2(x)=-\int\limits_0^xdt\,\frac{\ln(1-t)}{t} \, . \] Here and below $\chi^\prime(n,\nu)=d\chi(n,\nu)/d\nu$ and $\chi^{\prime\prime}(n,\nu)=d^2\chi(n,\nu)/d\nu^2$. The projection of the impact factors onto the eigenfunctions of the LO BFKL kernel, {\it i.e.} the transfer to the $(\nu,n)$-representation, is done as follows: \[ \frac{\Phi_1(\vec q_1)}{\vec q_1^{\,\, 2}}=\sum^{+\infty}_{n=-\infty} \int\limits^{+\infty}_{-\infty} d\nu \, \Phi_1(\nu,n)\langle n,\nu| \vec q_1\rangle\, , \quad \frac{\Phi_2(-\vec q_2)}{\vec q_2^{\,\, 2}}=\sum^{+\infty}_{n=-\infty} \int\limits^{+\infty}_{-\infty} d\nu \, \Phi_2(\nu,n) \langle \vec q_2 |n,\nu \rangle \, , \] \beq{nu_rep} \Phi_1(\nu,n)=\int d^2 q_1 \,\frac{\Phi_1(\vec q_1)}{\vec q_1^{\,\, 2}} \frac{1}{\pi \sqrt{2}} \left(\vec q_1^{\,\, 2}\right)^{i\nu-\frac{1}{2}} e^{i n \phi_1}\;, \end{equation} \[ \Phi_2(\nu,n)=\int d^2 q_2 \,\frac{\Phi_2(-\vec q_2)}{\vec q_2^{\,\, 2}} \frac{1}{\pi \sqrt{2}} \left(\vec q_2^{\,\, 2}\right)^{-i\nu-\frac{1}{2}} e^{-i n \phi_2}\;. \] The impact factors can be represented as an expansion in $\alpha_s$, \beq{if} \Phi_{1,2}(\vec q\,)=\alpha_s(\mu_R)\left[ v_{1,2}(\vec q\, )+ \bar \alpha_s(\mu_R) v_{1,2}^{(1)}(\vec q\, )\right] \end{equation} and \beq{vertex-exp} \Phi_{1,2}(n,\nu)=\alpha_s(\mu_R)\left[ c_{1,2}(n,\nu)+ \bar \alpha_s(\mu_R) c_{1,2}^{(1)}(n,\nu) \right]\, . \end{equation} To obtain our representation of the forward amplitude, we need the matrix element of the BFKL Green's function. According to~(\ref{exp}), we have \[ \langle n,\nu|\hat G_\omega|n^\prime,\nu^\prime\rangle = \delta_{n,n^\prime}\left[ \delta(\nu-\nu^\prime)\left( \frac{1}{\omega-\bar \alpha_s (\mu_R)\chi(n,\nu)} \right.\right. \] \beq{Greens} \left. +\frac{\bar \alpha_s^2(\mu_R)(\bar \chi(n,\nu) +\frac{\beta_0}{8 N_c}(-\chi^2(n,\nu)+\frac{10}{3}\chi(n,\nu)+2\chi(n,\nu) \ln \mu_R^2+i\frac{d}{d\nu}\chi(n,\nu)))}{(\omega-\bar \alpha_s(\mu_R) \chi(n,\nu))^2}\right) \end{equation} \[ \left. +\frac{\frac{\beta_0}{4 N_c}\bar \alpha_s^2(\mu_R)\chi(n,\nu^\prime)} {(\omega-\bar \alpha_s(\mu_R) \chi(n,\nu))(\omega-\bar \alpha_s(\mu_R) \chi(n,\nu^\prime))}\left(i\frac{d}{d\nu^\prime}\delta(\nu-\nu^\prime)\right) \right]\ .
\] Inserting twice the unity operator, written according to the completeness condition~(\ref{comp}), into~(\ref{ampl-transv}), we get \[ {\rm Im}_s \left({\cal A}\right)=\frac{s}{(2\pi)^2}\sum^{\infty}_{n=-\infty} \int\limits^{\infty}_{-\infty} d\nu\sum^{\infty}_{n^\prime =-\infty} \int\limits^{\infty}_{-\infty} d\nu^\prime \int_{\delta-i\infty}^{\delta+i\infty} \frac{d\omega}{2\pi i} \left(\frac{s}{s_0}\right)^\omega \] \beq{ampl-f} \times\langle\frac{\Phi_1}{\vec q_1^{\,\,2}}|n,\nu\rangle\langle n,\nu|\hat G_\omega| n^\prime,\nu^\prime\rangle\langle n^\prime,\nu^\prime | \frac{\Phi_2}{\vec q_2^{\,\,2}}\rangle \ , \end{equation} and, after some algebra and integration by parts, finally \[ {\rm Im}_s \left({\cal A}\right)=\frac{s}{(2\pi)^2}\sum^{\infty}_{n=-\infty} \int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0}\right)^{\bar \alpha_s(\mu_R) \chi(n,\nu)} \alpha_s^2(\mu_R) c_1(n,\nu)c_2(n,\nu) \] \beq{ampl-ff} \times\left[1+\bar \alpha_s(\mu_R)\left(\frac{c^{(1)}_1(n,\nu)}{c_1(n,\nu)} +\frac{c^{(1)}_2(n,\nu)}{c_2(n,\nu)}\right) \right. \end{equation} \[ \left. +\bar \alpha^2_s(\mu_R)\ln\frac{s}{s_0}\left\{\bar \chi(n,\nu) +\frac{\beta_0}{8 N_c}\chi(n,\nu)\left( -\chi(n,\nu)+\frac{10}{3}+2\ln \mu_R^2 +i\frac{d}{d\nu}\ln\frac{c_1(n,\nu)} {c_2(n,\nu)}\right)\right\}\right] \, . \] This is our {\it master} representation of the NLA BFKL forward amplitude. In the next section we will implement on it the BLM scale setting. \section{BLM scale setting} The cross section of a process is related, via the optical theorem, to the imaginary part of the forward scattering amplitude, \begin{equation} \sigma =\frac{{\rm Im}_s {\cal A}}{s} \ . \end{equation} Here we want to discuss the BLM scale setting for the separate contributions to the cross section, specified in~(\ref{ampl-ff}) by different values of $n$ and denoted in the following by ${\cal C}_n$. Note that the $n=0$ case is relevant, {\it e.g.}, for the total cross sections of $\gamma^*\gamma^*$ interactions, Mueller-Navelet jet production and the forward differential cross section of the $\gamma^*\gamma^*\to V_1V_2$ process. Azimuthal angle correlations of the produced jets in the Mueller-Navelet process are instead associated with non-zero values of $n$. The starting point of our considerations is the expression for ${\cal C}_n$ in the $\overline{\rm MS}$ scheme (see Eq.~(\ref{ampl-ff})), \[ {\cal C}_n =\frac{1}{(2\pi)^2}\int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0}\right)^{\bar \alpha_s(\mu_R)\chi(n,\nu)} \alpha_s^2(\mu_R) c_1(n,\nu)c_2(n,\nu) \] \beq{c_n} \times\left[1+\bar \alpha_s(\mu_R)\left(\frac{c^{(1)}_1(n,\nu)}{c_1(n,\nu)} +\frac{c^{(1)}_2(n,\nu)}{c_2(n,\nu)}\right) \right. \end{equation} \[ \left. +\bar \alpha^2_s(\mu_R)\ln\frac{s}{s_0}\left\{\bar \chi(n,\nu) +\frac{\beta_0}{8 N_c}\chi(n,\nu)\left( -\chi(n,\nu)+\frac{10}{3}+2\ln \mu_R^2 +i\frac{d}{d\nu} \ln\frac{c_1(n,\nu)}{c_2(n,\nu)}\right)\right\}\right] \, . \] In the r.h.s. of this expression we have terms $\sim \alpha_s$ originating from the NLO corrections to the impact factors, and terms $\sim \alpha^2_s\ln(s/s_0)$ coming from the NLO corrections to the BFKL kernel. In the latter case, the terms proportional to the QCD $\beta$-function are explicitly shown. For our further consideration of the BLM scale setting, similar contributions have to be separated also from the NLO impact factors.
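As a side remark, the building blocks entering~(\ref{c_n}) are easy to evaluate numerically. A minimal Python sketch (ours, not code from this work; the function name \texttt{chi} is an arbitrary choice) computes the LO eigenvalue of Eq.~(\ref{KLLA}) and checks the well-known value $\chi(0,0)=4\ln 2$, which controls the LLA energy growth $(s/s_0)^{\bar\alpha_s\chi}$:
\begin{verbatim}
# Illustrative sketch (not the authors' code): LO BFKL eigenvalue
# chi(n,nu) = 2 psi(1) - psi(n/2+1/2+i nu) - psi(n/2+1/2-i nu), Eq. (KLLA).
import numpy as np
from scipy.special import digamma   # psi(z), accepts complex argument

def chi(n, nu):
    z = n / 2.0 + 0.5 + 1j * nu
    # the two psi terms are complex conjugates, so the sum is real
    return (2.0 * digamma(1.0) - digamma(z) - digamma(np.conj(z))).real

assert abs(chi(0, 0.0) - 4.0 * np.log(2.0)) < 1e-10
print(chi(0, 0.0))                # 2.7725887... = 4 ln 2
print(chi(0, 0.5), chi(2, 0.0))   # sample values entering the nu-integration
\end{verbatim}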
Coming back to the impact factors, the contribution to an NLO impact factor that is proportional to $\beta_0$ is universally expressed through the LO impact factor, \beq{beta-if} v^{(1)}(\vec q\,)=v(\vec q\,)\frac{\beta_0}{4 N_c}\left(\ln\left(\frac{\mu_R^2} {\vec q\,^2}\right)+\frac{5}{3}\right)+\dots \ , \end{equation} where the dots stand for the other terms, not proportional to $\beta_0$. This statement becomes evident if one considers the part of the strong coupling renormalization proportional to $n_f$ and related to the contributions of the light quark flavors. Such a contribution to the NLO impact factor originates only from diagrams with a light quark loop insertion in the Reggeized gluon propagator. The results for such contributions can be found, for instance, in Eq.~(5.1) of~\cite{Fadin:2001ap}. Tracing there the terms $\sim n_f$ and performing the QCD charge renormalization, one can indeed confirm~(\ref{beta-if}). Transforming~(\ref{beta-if}) to the $\nu$-representation according to~(\ref{nu_rep}), we obtain \bea{if2} {\tilde{c}}_1^{\left(1\right)}(\nu, n)&=& \frac{\beta_0}{4 N_c} \left[+i\frac{d}{d\nu} c_1(\nu,n)+\left(\ln \mu_R^2+\frac{5}{3}\right) c_1(\nu, n)\right]\ , \nonumber \\ {\tilde{c}}_2^{\left(1\right)}(\nu, n)&=&\frac{\beta_0}{4 N_c} \left[-i\frac{d}{d\nu} c_2(\nu,n)+\left(\ln \mu_R^2+\frac{5}{3}\right) c_2(\nu, n)\right] \ , \end{eqnarray} and \beq{} \frac{{\tilde{c}}_1^{\left(1\right)}}{c_1}+\frac{{\tilde{c}}_2^{\left(1\right)}}{c_2} =\frac{\beta_0}{4 N_c}\left[i\frac{d}{d\nu}\ln\left(\frac{c_1}{c_2}\right) +2\left(\ln \mu_R^2+\frac{5}{3}\right)\right] \ . \end{equation} It is convenient to introduce the function $f\left(\nu\right)$, defined through \beq{} i\frac{d}{d\nu}\ln\left(\frac{c_1}{c_2}\right)\equiv 2 \left[f(\nu) -\ln\left(Q_1 Q_2\right)\right]\ , \end{equation} where $Q_{1,2}$ denote the hard scales which enter the impact factors $c_{1,2}$~\footnote{Here we consider processes whose impact factors are characterized by only one hard scale. This is the virtuality of the photon, $Q$, for the $\gamma^*\to\gamma^*$ and $\gamma^*\to V$ impact factors, and the jet transverse momentum, $|\vec k|$, for the impact factor describing the Mueller-Navelet jet production.}. The specific form of the function $f(\nu)$ depends on the particular process. According to the properties of the corresponding LO impact factors ($\gamma^*\to V$, $\gamma^*\to\gamma^*$ and the Mueller-Navelet jet vertex), one can easily check that \begin{equation} f_{\gamma^*\gamma^*\to X}(\nu)=f_{pp\to {\rm jet}_1+X+{\rm jet}_2}(\nu)=0 \ , \label{fgammajet} \end{equation} for the processes $\gamma^*\gamma^*\to X$ and Mueller-Navelet jet production, whereas for the process $\gamma^*\gamma^*\to V_1V_2$ (forward electroproduction of two light vector mesons) this function is not equal to zero, \beq{fVV} f_{\gamma^*\gamma^*\to V_1V_2}\left(\nu\right)=\psi\left(3+2 i\nu\right) + \psi\left(3-2 i\nu\right) - \psi\left(\frac{3}{2}+ i\nu\right) - \psi\left(\frac{3}{2}- i\nu\right)\, .
\end{equation} Now we present again our result for the generic observable ${\cal C}_n$, showing explicitly all contributions proportional to the QCD $\beta$-function, {\it i.e.} also those originating from the impact factors: \[ {\cal C}_n =\frac{1}{(2\pi)^2}\int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0} \right)^{\bar \alpha_s(\mu_R)\chi(n,\nu)} \alpha_s^2(\mu_R) c_1(n,\nu)c_2(n,\nu) \] \beq{c_nn} \times\left[1+\bar \alpha_s(\mu_R)\left(\frac{\bar c^{(1)}_1(n,\nu)}{c_1(n,\nu)} +\frac{\bar c^{(1)}_2(n,\nu)}{c_2(n,\nu)} +\frac{\beta_0}{2 N_c}\left(\frac{5}{3}+\ln \frac{\mu_R^2}{Q_1 Q_2} +f(\nu) \right)\right)\right. \end{equation} \[ \left. +\bar \alpha^2_s(\mu_R)\ln\frac{s}{s_0}\left\{\bar \chi(n,\nu) +\frac{\beta_0}{4 N_c}\chi(n,\nu)\left( -\frac{\chi(n,\nu)}{2}+\frac{5}{3}+\ln \frac{\mu_R^2}{Q_1 Q_2} +f(\nu)\right) \right\}\right] \, , \] where $\bar c^{(1)}_{1,2} \equiv c^{(1)}_{1,2}- \tilde c^{(1)}_{1,2}$. We note that the dependence of~(\ref{c_nn}) on the scale $\mu_R$ is subleading: performing in~(\ref{c_nn}) the replacement \beq{alphaSrun} \alpha_s(\mu_R)=\alpha_s(\mu^\prime_R)\left(1-\bar\alpha_s(\mu^\prime_R) \frac{\beta_0}{2N_c}\ln\frac{\mu_R}{\mu^\prime_R}\right) \, , \end{equation} one indeed obtains the same expression as before with the new scale $\mu_R^\prime$ in place of the old one $\mu_R$, plus some additional contributions which are beyond the NLA accuracy. As the next step, we perform a finite renormalization from the $\overline{\rm MS}$ to the physical MOM scheme, which means: \beq{scheme} \alpha_s^{\overline{\rm MS}}=\alpha_s^{\rm MOM}\left(1+\frac{\alpha_s^{\rm MOM}}{\pi}T \right)\;, \end{equation} with $T=T^{\beta}+T^{\rm conf}$, \beq{} T^{\beta}=-\frac{\beta_0}{2}\left( 1+\frac{2}{3}I \right)\, , \end{equation} \[ T^{\rm conf}= \frac{C_A}{8}\left[ \frac{17}{2}I +\frac{3}{2}\left(I-1\right)\xi +\left( 1-\frac{1}{3}I\right)\xi^2-\frac{1}{6}\xi^3 \right] \;, \] where $I=-2\int_0^1dx\frac{\ln\left(x\right)}{x^2-x+1}\simeq2.3439$ and $\xi$ is a gauge parameter, fixed at zero in the following. Inserting~(\ref{scheme}) into~(\ref{c_nn}) and expanding the result, we obtain, within NLA accuracy, \[ {\cal C}^{\rm MOM}_n =\frac{1}{(2\pi)^2}\int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0}\right)^{\bar \alpha^{\rm MOM}_s(\mu_R)\chi(n,\nu)} \left(\alpha^{\rm MOM}_s (\mu_R)\right)^2 c_1(n,\nu)c_2(n,\nu) \] \[ \times\left[1+\bar \alpha^{\rm MOM}_s(\mu_R)\left\{\frac{\bar c^{(1)}_1(n,\nu)} {c_1(n,\nu)}+\frac{\bar c^{(1)}_2(n,\nu)}{c_2(n,\nu)}+\frac{2T^{\rm conf}}{N_c} \right.\right. \] \[ \left. +\frac{\beta_0}{2 N_c}\left(\frac{5}{3}+\ln \frac{\mu_R^2}{Q_1 Q_2} +f(\nu) -2\left( 1+\frac{2}{3}I \right)\right) \right\} \] \beq{c_nnn} +\left(\bar \alpha^{\rm MOM}_s(\mu_R)\right)^2\ln\frac{s}{s_0} \left\{\bar \chi(n,\nu) +\frac{T^{\rm conf}}{N_c}\chi(n,\nu)\right. \end{equation} \[ \left.\left. +\frac{\beta_0}{4 N_c}\chi(n,\nu)\left( -\frac{\chi(n,\nu)}{2}+\frac{5}{3}+\ln \frac{\mu_R^2}{Q_1 Q_2} +f(\nu) -2\left(1+\frac{2}{3}I \right)\right)\right\}\right] \, . \] The optimal scale $\mu_R^{\rm BLM}$ is the value of $\mu_R$ that makes the expression proportional to $\beta_0$ vanish.
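Since, for $\xi=0$ and fixed $n_f$, the constants entering the finite renormalization~(\ref{scheme}) are pure numbers, they are easily cross-checked numerically; the short sketch below (ours, assuming $n_f=5$ as in the numerics of section~4) reproduces $I\simeq 2.3439$ and evaluates $T^{\beta}$ and $T^{\rm conf}$:
\begin{verbatim}
# Sketch (ours): numerical check of the MOM-scheme constants of Eq. (scheme),
# for gauge parameter xi = 0 and n_f = 5.
import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda x: -2.0 * np.log(x) / (x**2 - x + 1.0), 0.0, 1.0)
print(I)                                      # ~2.3439, as quoted in the text

Nc, nf = 3.0, 5.0                             # C_A = N_c = 3
beta0 = 11.0 * Nc / 3.0 - 2.0 * nf / 3.0      # Eq. (beta00): 23/3 for n_f = 5
T_beta = -beta0 / 2.0 * (1.0 + 2.0 / 3.0 * I)
T_conf = Nc / 8.0 * (17.0 / 2.0) * I          # at xi = 0 only the I-term survives
print(T_beta, T_conf, T_beta + T_conf)        # T = T^beta + T^conf
\end{verbatim}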
Imposing this condition, we have \[ {\cal C}^{\beta}_n =\frac{1}{(2\pi)^2}\int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0}\right)^{\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_R)\chi(n,\nu)} \left(\alpha^{\rm MOM}_s (\mu^{\rm BLM}_R)\right)^3 \] \beq{c_nnnbeta} \times c_1(n,\nu)c_2(n,\nu) \frac{\beta_0}{2 N_c} \left[\frac{5}{3} +\ln \frac{(\mu^{\rm BLM}_R)^2}{Q_1 Q_2} +f(\nu)-2\left( 1+\frac{2}{3}I \right) \right. \end{equation} \[ \left. +\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_R)\ln\frac{s}{s_0} \: \frac{\chi(n,\nu)}{2} \left(-\frac{\chi(n,\nu)}{2}+\frac{5}{3}+\ln \frac{(\mu^{\rm BLM}_R)^2}{Q_1 Q_2} +f(\nu)-2\left( 1+\frac{2}{3}I \right)\right)\right]=0 \, . \] In the r.h.s. of~(\ref{c_nnnbeta}) we have two groups of contributions. The first one originates from the $\beta_0$-dependent part of the NLO impact factors~(\ref{beta-if}) and also from the expansion of the common $\alpha^2_s$ pre-factor in~(\ref{c_nn}) after expressing it in terms of $\alpha_s^{\rm MOM}$. The other group consists of the terms proportional to $\bar \alpha_s^{\rm MOM}\ln s/s_0$. These contributions are the $\beta_0$-dependent terms proportional to $\ln s/s_0$ in~(\ref{c_nn}) and also the one coming from the expansion of the $(s/s_0)^{\bar \alpha_s \chi(n,\nu)}$ factor in~(\ref{c_nn}) after expressing it in terms of $\alpha_s^{\rm MOM}$. The solution of Eq.~(\ref{c_nnnbeta}) gives us the value of the BLM scale. Note that this solution depends on the energy (on the ratio $s/s_0$). Such a scale setting procedure is a direct application of the original BLM approach to semihard processes. Finally, our expression for the observable reads \beq{c_BLMmain} {\cal C}^{\rm BLM}_n =\frac{1}{(2\pi)^2}\int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0} \right)^{\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_R)\left[\chi(n,\nu) +\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_R)\left(\bar \chi(n,\nu) +\frac{T^{\rm conf}} {N_c}\chi(n,\nu)\right)\right]} \end{equation} \[ \times \left(\alpha^{\rm MOM}_s (\mu^{\rm BLM}_R)\right)^2 c_1(n,\nu)c_2(n,\nu) \left[1+\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_R)\left\{\frac{\bar c^{(1)}_1(n,\nu)} {c_1(n,\nu)}+\frac{\bar c^{(1)}_2(n,\nu)}{c_2(n,\nu)}+\frac{2T^{\rm conf}}{N_c} \right\} \right] \, , \] where we have moved to the exponent the terms $\sim \bar \alpha_s^{\rm MOM}\ln s/s_0$, which is allowed within the NLA accuracy. Unfortunately, Eq.~(\ref{c_nnnbeta}) can be solved only numerically, thus making the scale setting somewhat impractical. For this reason, we will also work out some approximate analytic approaches to the BLM scale setting, which have the merit of a straightforward and simple application. We consider the BLM scale as a function of $\nu$ and choose it so as to make either the first or the second ($\sim \bar \alpha_s^{\rm MOM}\ln s/s_0$) group of terms in Eq.~(\ref{c_nnnbeta}) vanish.
We thus have two cases: \begin{itemize} \item case $(a)$ \beq{casea} \left(\mu_{R, a}^{\rm BLM}\right)^2=Q_1Q_2\ \exp\left[2\left(1+\frac{2}{3}I\right) -f\left(\nu\right)-\frac{5}{3}\right]\ , \end{equation} \[ {\cal C}^{\rm BLM, a}_n =\frac{1}{(2\pi)^2}\int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0} \right)^{\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_{R, a})\left[\chi(n,\nu)+\bar \alpha^{\rm MOM}_s (\mu^{\rm BLM}_{R, a})\left(\bar \chi(n,\nu) +\frac{T^{\rm conf}}{N_c}\chi(n,\nu) -\frac{\beta_0}{8 N_c}\chi^2(n,\nu)\right)\right]} \] \beq{c_BLMa} \times \left(\alpha^{\rm MOM}_s (\mu^{\rm BLM}_{R, a})\right)^2 c_1(n,\nu)c_2(n,\nu) \end{equation} \[ \times \left[1+\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_{R, a}) \left\{\frac{\bar c^{(1)}_1(n,\nu)}{c_1(n,\nu)}+\frac{\bar c^{(1)}_2(n,\nu)} {c_2(n,\nu)}+\frac{2T^{\rm conf}}{N_c} \right\} \right] \, , \] \item case $(b)$ \beq{caseb} \left(\mu_{R, b}^{\rm BLM}\right)^2=Q_1Q_2\ \exp\left[2\left(1+\frac{2}{3}I\right) -f\left(\nu\right)-\frac{5}{3}+\frac{1}{2}\chi\left(n,\nu\right)\right]\ , \end{equation} \[ {\cal C}^{\rm BLM, b}_n =\frac{1}{(2\pi)^2}\int\limits^{\infty}_{-\infty} d\nu \left(\frac{s}{s_0} \right)^{\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_{R, b})\left[\chi(n,\nu)+\bar \alpha^{\rm MOM}_s (\mu^{\rm BLM}_{R, b})\left(\bar \chi(n,\nu) +\frac{T^{\rm conf}}{N_c}\chi(n,\nu) \right)\right]} \] \beq{c_BLMb} \times \left(\alpha^{\rm MOM}_s (\mu^{\rm BLM}_{R, b})\right)^2 c_1(n,\nu)c_2(n,\nu) \end{equation} \[ \times\left[1+\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_{R, b})\left\{\frac{\bar c^{(1)}_1 (n,\nu)}{c_1(n,\nu)}+\frac{\bar c^{(1)}_2(n,\nu)}{c_2(n,\nu)} +\frac{2T^{\rm conf}}{N_c}+\frac{\beta_0}{4 N_c}\chi(n,\nu) \right\}\right]\, . \] \end{itemize} The other possible option for the BLM scale setting is related to the requirement that the entire expression in the integrand of~(\ref{c_nnnbeta}) vanishes, which leads to the following condition: \begin{itemize} \item case $(c)$ \beq{casec} \frac{5}{3}+\ln \frac{(\mu^{\rm BLM}_{R, c})^2}{Q_1 Q_2} +f(\nu) -2\left( 1+\frac{2}{3}I \right)= \frac{\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_{R, c})\ln\frac{s}{s_0} \: \frac{\chi^2(n,\nu)}{4}}{1+\bar \alpha^{\rm MOM}_s(\mu^{\rm BLM}_{R, c}) \ln\frac{s}{s_0} \: \frac{\chi(n,\nu)}{2}} \, . \end{equation} \end{itemize} One should mention, however, that such an approach to the BLM scale setting has a limited applicability, since the denominator in the r.h.s. of~(\ref{casec}) vanishes at some value $\nu=\bar \nu$, given by \beq{barnu} 1+\bar \alpha^{\rm MOM}_s\ln\frac{s}{s_0} \frac{\chi(n,\bar \nu)}{2}=0 \, , \end{equation} which prevents us from defining $\mu^{\rm BLM}_{R, c}(\nu)$ in the entire $\nu$ range. Nevertheless, one can try to use such a method in those cases when the product of the two LO impact factors $c_1(n,\nu) c_2(n,\nu)$ decreases rapidly enough to guarantee the convergence of the $\nu$-integration in~(\ref{c_BLMmain}) within the $\nu$-region where the solution of Eq.~(\ref{casec}) poses no problem. Note also that all three approaches to the BLM scale fixing discussed above, given in Eqs.~(\ref{casea}), (\ref{caseb}) and~(\ref{casec}), are applicable only to processes characterized by a real-valued function $f(\nu)$. For some processes this is not the case. In particular, the inclusive production of two identified hadrons separated by a large interval of rapidity in proton-proton collisions, $p+p\to h_1+h_2 +X$, is described by a complex-valued function, $f^*(\nu)=f(-\nu)$.
This can be easily seen by calculating $f(\nu)$ from Eq.~(77) of~\cite{Ivanov:2012iv} for the identified hadron production impact factor. In such cases one can use only the BLM scale fixing method which relies on the numerical solution of Eq.~(\ref{c_nnnbeta}). \section{Applications} In this section we apply the BLM approach to a selection of semihard processes. For the energy variables we will use the notations \beq{YY0} Y=\ln\frac{s}{Q^2} \, , \quad\quad\quad Y_0=\ln\frac{s_0}{Q^2} \, . \end{equation} In our numerics we use the following settings: $n_f=5$ and $\alpha_s(M_Z)=0.11707$ for the number of active flavors and the value of the strong coupling. \subsection{Electroproduction of two vector mesons} We start with the description of the forward amplitude for the production of a pair of light vector mesons in the collision of two virtual photons, $\gamma^*\gamma^*\to V_1V_2$. Such processes could be studied in experiments at future high-energy $e^+e^-$ colliders, see~\cite{Pire:2005ic,Segond:2007fj,Goncalves:2006wy} for estimates of the cross section in the Born approximation. The BFKL resummation for these processes was considered in~\cite{Enberg:2005eq}, where the inclusion of NLO effects was limited to the corrections to the BFKL kernel. In the papers~\cite{mesons}, some of us performed a complete NLA BFKL analysis for the forward amplitude of these processes, including the NLO corrections also to the $\gamma^*\to V$ impact factors~\cite{Ivanov:2004pp}. Very large NLA corrections to the forward amplitude were found; therefore, in~\cite{mesons} the \emph{Principle of Minimal Sensitivity} (PMS)~\cite{PMS} approach was used to optimize the perturbative series. \begin{figure}[t] \centering \begin{minipage}{0.50\textwidth} \phantom{.}\vspace{0.2cm} \includegraphics[scale=0.47]{scales_mesons.pdf}\hspace{0.3cm} \end{minipage} \begin{minipage}{0.48\textwidth} \includegraphics[scale=0.47]{mesons.pdf} \end{minipage} \vspace{-0.2cm} \caption[]{Left: BLM scales for the process $\gamma^*\gamma^*\to V_1V_2$ (see the text for details). Right: Forward amplitude for $\gamma^*\gamma^*\to V_1V_2$ at $Y_0=0$.} \label{fig:mesons} \end{figure} Here we present numerical results for the forward amplitude obtained with the BLM optimization method described above. We consider the case of equal photon virtualities, $Q_1=Q_2=Q$, and, following the first of Refs.~\cite{mesons}, present our numerical predictions for the forward amplitude multiplied by some kinematic factors, ${\rm Im}_s \left({\cal A}\right)Q^2/(s D_1 D_2)$, calculated at $Q=50$~GeV, where the expressions for $D_{1,2}$ are given in Eq.~(14) of the first of Refs.~\cite{mesons}. For the considered process only the $n=0$ term contributes and the $f(\nu)$ function is given in~(\ref{fVV}). We will try all the approaches to the BLM scale setting described in the previous section. In particular, for this process the product of the two LO impact factors $c_1(n,\nu)c_2(n,\nu)$ vanishes very fast for $|\nu|>1$; therefore, in the relevant integration range, $|\nu|<1$, we can find the solution of Eq.~(\ref{casec}) and determine the BLM scale $\mu_{R,c}^{\rm BLM}$ as a function of $\nu$ and energy. In Fig.~\ref{fig:mesons}(left) we show the values of the BLM to kinematic scale ratios, $\mu_R^{\rm BLM}/Q$, as functions of $Y-Y_0$, obtained in four different cases. By the ``exact'' case we denote the scale obtained by solving numerically Eq.~(\ref{c_nnnbeta}) for each value of $\ln\left(s/s_0\right)\equiv Y-Y_0$.
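Schematically, this ``exact'' determination is a one-dimensional root search: for each $Y-Y_0$ one varies $m_R\equiv\mu_R/Q$ until the $\nu$-integral of Eq.~(\ref{c_nnnbeta}) changes sign. The sketch below (ours) illustrates the logic only: the Gaussian weight standing in for $c_1c_2$ and the simple one-loop coupling are placeholders, not the actual NLO ingredients behind the figures.
\begin{verbatim}
# Schematic root search for the "exact" BLM scale, Eq. (c_nnnbeta), at n = 0.
# The weight w(nu) ~ c1*c2 and the one-loop alpha_s are toy placeholders.
import numpy as np
from scipy.special import digamma
from scipy.integrate import quad
from scipy.optimize import brentq

Nc, nf, Lam, Q = 3.0, 5.0, 0.2, 50.0          # GeV units; Q1 = Q2 = Q
beta0 = 11.0 * Nc / 3.0 - 2.0 * nf / 3.0
I = 2.3439                                    # cf. section 3

alpha_s = lambda mu: 4.0 * np.pi / (beta0 * np.log(mu**2 / Lam**2))
chi = lambda nu: 2.0 * digamma(1.0) - 2.0 * digamma(0.5 + 1j * nu).real
f_VV = lambda nu: 2.0 * (digamma(3 + 2j*nu) - digamma(1.5 + 1j*nu)).real

def blm_integral(mR, Y):                      # Y stands for ln(s/s0)
    ab = alpha_s(mR * Q) * Nc / np.pi
    def integrand(nu):
        B = 5.0/3.0 + np.log(mR**2) + f_VV(nu) - 2.0 * (1.0 + 2.0/3.0 * I)
        w = np.exp(-4.0 * nu**2)              # placeholder for c1*c2
        return w * np.exp(ab * chi(nu) * Y) * (
            B + ab * Y * chi(nu) / 2.0 * (-chi(nu) / 2.0 + B))
    return quad(integrand, -2.0, 2.0)[0]      # c1*c2 cuts off |nu| > ~1

for Y in (2.0, 4.0, 6.0):                     # the scale grows with the energy
    print(Y, brentq(lambda m: blm_integral(m, Y), 1.0, 50.0))
\end{verbatim}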
In the other three approaches, the BLM scales depend on $\nu$: the scales for cases~$(a)$ and $(b)$ are given by Eqs.~(\ref{casea}) and~(\ref{caseb}), respectively; case~$(c)$ corresponds to the numerical solution of Eq.~(\ref{casec}) for each value of $\nu$ and $Y-Y_0$. The $\nu$-dependent scales of cases~$(a)$, $(b)$ and~$(c)$ are shown in Fig.~\ref{fig:mesons}(left) for the particular value $\nu=0$. The approximate approaches to the scale setting give energy-independent BLM scales (see cases~$(a)$ and $(b)$ in Fig.~\ref{fig:mesons}(left)), whereas an exact implementation of the BLM rule leads in general to scales which depend on the energy of the process (see cases~$(c)$ and ``exact'' in Fig.~\ref{fig:mesons}(left)). In fact, the approaches $(a)$ and $(b)$ can be considered as a low- and a high-energy approximation to case~$(c)$, where the BLM scale setting prescription is implemented precisely. Nevertheless, as we already mentioned above, the condition~(\ref{casec}) cannot be solved for all processes. Therefore we also defined a method which can be universally applied and which we call here ``exact''. It gives a $\nu$-independent BLM scale and is based on the requirement that the {\em integral} in Eq.~(\ref{c_nnnbeta}) vanishes, contrary to approach~$(c)$, where we require the {\em integrand} of the same equation to vanish for each separate value of $\nu$. In Fig.~\ref{fig:mesons}(right) we show our predictions as functions of the energy for the forward amplitude calculated with all four different methods described above: cases~$(a)$ and $(b)$ were calculated using Eqs.~(\ref{c_BLMa}) and~(\ref{c_BLMb}), cases~$(c)$ and ``exact'' using Eq.~(\ref{c_BLMmain}) with the corresponding choices of the scales. The result of the BFKL resummation depends not only on the renormalization scale $\mu_R$, which is fixed here with the BLM method, but also on the energy scale $s_0$ or $Y_0$. In Fig.~\ref{fig:mesons}(right) we present the results obtained with the choice of this scale dictated by the kinematics of the process, $s_0=Q^2$ or $Y_0=0$. A more reliable estimate could result from fixing the value of $Y_0$ according to some optimization method, such as PMS, but this goes beyond the scope of the present paper. As we can see in Fig.~\ref{fig:mesons}(right), our predictions obtained with the precise implementations of the BLM method lie in between those derived with the use of the two approximate realizations. Note that the difference between the two explicit methods, cases~$(c)$ and ``exact'', is sizeable and increases with the energy. This is related to the fact that these two approaches are not equivalent, and the scales in case~$(c)$ are larger than those in the ``exact'' one. Note also that, with the growth of energy, the value $\nu=\bar \nu$ where the solution of Eq.~(\ref{casec}) has a singularity decreases, see Eq.~(\ref{barnu}), and approaches the $\nu$-range important for the determination of our observable. \begin{figure}[t] \centering \includegraphics[scale=0.47]{scales_photons.pdf} \caption[]{BLM scales for the process $\gamma^*\gamma^*\to X$ (see the text for details).} \label{fig:photons} \end{figure} \subsection{$\gamma^*\gamma^*$ total cross section} In~\cite{Ivanov:2014hpa} some of us studied the $\gamma^*\gamma^*$ total cross section in the NLA BFKL approach, considering two different optimization methods of the perturbative series.
One of them was the BLM method, cases~$(a)$ and $(b)$ described above, where Eqs.~(\ref{c_BLMa}) and~(\ref{c_BLMb}) were transformed back to the $\overline{\rm MS}$ scheme. In that paper we fixed the photon virtualities and, correspondingly, the number of active flavors $n_f$ in order to make a comparison with LEP2 experimental data. Here, we are interested in the general features of the BLM scale setting procedure; therefore, we prefer to fix the photon virtualities as in the two-meson production: $Q_1=Q_2\equiv Q=50$ GeV with $n_f=5$. In Fig.~\ref{fig:photons}, as in the case of the vector mesons, we show the four different ratios $\mu_R^{\rm BLM}/Q$ versus $Y-Y_0$. The four cases~$(a)$, $(b)$, $(c)$ and ``exact'' are defined exactly as in the previous subsection. As we mentioned before, cases~$(a)$ and~$(b)$ are independent of the energy of the process, but depend on the kind of process through the $f(\nu)$ function. In particular, for the production of a pair of light vector mesons the function is given by Eq.~(\ref{fVV}), while for this process it is $f(\nu)=0$ (see Eq.~(\ref{fgammajet})). For this process we only discuss here the BLM scale setting and do not present its cross section. The $\gamma^*\gamma^*$ cross section was already considered in~\cite{Ivanov:2014hpa}, where serious problems were found, related to the very large NLO corrections~\cite{Balitsky:2012bs} to the virtual photon impact factor. For details and an extended discussion of this issue, we refer the reader to~\cite{Ivanov:2014hpa}. \subsection{Mueller-Navelet jets} The last semihard process that we consider is the production of two forward high-$p_T$ jets with a large separation in rapidity $\Delta y$ (Mueller-Navelet jets~\cite{Mueller:1986ey}). Such a process has been studied at the Large Hadron Collider (LHC): the CMS collaboration provided data for azimuthal decorrelations~\cite{CMS} that can be expressed, from a theoretical point of view, by the ratios ${\cal {C}}_m/{\cal{C}}_n$, where the ${\cal{C}}_n$ are to be averaged over $k_{J_i}$ (jet transverse momentum) and $y_{J_i}$ (jet rapidity). Then, in order to match the kinematic cuts used by the CMS collaboration, we have \begin{eqnarray} C_n=\int_{y_{1,\rm min}}^{y_{1,\rm max}}dy_1 \int_{y_{2,\rm min}}^{y_{2,\rm max}}dy_2\int_{k_{J_1,\rm min}}^{\infty}dk_{J_1} \int_{k_{J_2,\rm min}}^{\infty}dk_{J_2} \delta\left(y_1-y_2-Y\right){\cal C}_n \left(y_1,y_2,k_{J_1},k_{J_2} \right)\;, \end{eqnarray} with $y_{1,\rm min}=y_{2,\rm min}=-4.7$, $y_{1,\rm max}=y_{2,\rm max}=4.7$~\footnote{In~\cite{Caporale2014} it was mistakenly written $y_{i,\rm min}=0$, although all numerical results presented there were obtained using the correct value $y_{i,\rm min}=-4.7$.} and $k_{J_1,\rm min}=k_{J_2,\rm min}=35$ GeV. The comparison between the experimental results for jets with cone radius $R=0.5$ produced at a center-of-mass energy of $\sqrt s=7$ TeV and the theoretical calculations was done in~\cite{Ducloue2014}, where the exact NLO impact factors calculated in~\cite{IFjet} were used, and in~\cite{Caporale2014}, where the NLO impact factors were taken in the small-cone approximation as calculated in~\cite{SCA}~\footnote{ For a critical comparison of the different expressions for the forward jet vertex, we refer to~\cite{Colferai:2015zfa}. }.
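Note that, in the definition of $C_n$ above, the $\delta$-function removes one rapidity integration, so each coefficient reduces to a three-fold integral in which $y_2=y_1-Y$ must itself lie within the rapidity cuts. A schematic reduction (our sketch, with a toy spectrum in place of the actual BFKL coefficient ${\cal C}_n$ and a finite upper cut on the jet momenta to keep the example self-contained) reads:
\begin{verbatim}
# Sketch (ours): delta(y1 - y2 - Y) fixes y2 = y1 - Y, so the integrated C_n
# becomes a 3-fold integral over y1, k_J1, k_J2 with CMS-like cuts.
import numpy as np
from scipy.integrate import tplquad

y_min, y_max, kJ_min, kJ_max = -4.7, 4.7, 35.0, 60.0   # finite upper cut

def cn_toy(y1, y2, k1, k2):
    return (k1 * k2) ** -3.0          # steeply falling toy jet spectrum

def C_integrated(Y):
    lo = max(y_min, y_min + Y)        # y1 range keeping y2 = y1 - Y in cuts
    hi = min(y_max, y_max + Y)
    if lo >= hi:
        return 0.0
    return tplquad(lambda k2, k1, y1: cn_toy(y1, y1 - Y, k1, k2),
                   lo, hi,
                   lambda y1: kJ_min, lambda y1: kJ_max,
                   lambda y1, k1: kJ_min, lambda y1, k1: kJ_max)[0]

print(C_integrated(6.0))
\end{verbatim}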
\begin{figure}[t] \centering \includegraphics[scale=0.45]{scales_jets_n0.pdf} \includegraphics[scale=0.45]{scales_jets_n1.pdf} \includegraphics[scale=0.45]{scales_jets_n2.pdf} \includegraphics[scale=0.45]{scales_jets_n3.pdf} \caption[]{BLM scales for Mueller-Navelet jets (see the text for details).} \label{scalejet} \end{figure} \begin{figure}[t] \centering \includegraphics[scale=0.44]{C1C00.pdf} \includegraphics[scale=0.44]{C2C00.pdf} \includegraphics[scale=0.44]{C3C00.pdf} \includegraphics[scale=0.44]{C2C10.pdf} \includegraphics[scale=0.44]{C3C20.pdf} \caption[]{Azimuthal decorrelations for Mueller-Navelet jets (see the text for details).} \label{figratiosY00} \end{figure} In this section we use the same kinematic settings as in~\cite{Caporale2014} and present the BLM scale setting for Mueller-Navelet jet production. In particular, we consider the ratios $\mu_R^{\rm BLM}/\sqrt{k_{J_1}k_{J_2}}$ as functions of $Y-Y_0$ for $n$=0, 1, 2 and~3 and recall that, for this process, the function $f(\nu)$ vanishes. The results are shown in Fig.~\ref{scalejet}, where the three lines, violet, green and blue, denote the cases~$(a)$, $(b)$ and ``exact'', respectively. For this process it is not possible to consider case~$(c)$, because the product of the LO impact factors $c_1(n,\nu)c_2(n,\nu)$ does not decrease fast enough, so that the $\nu$-interval needed for the integration includes the value $\overline\nu$, defined by Eq.~(\ref{barnu}), where the method is not applicable. Due to the integration over the jet variables $k_{J_{1,2}}$ and $y_{J_{1,2}}$, the derivation of the ``exact'' curve here is slightly different from that of the other two processes. In this case, in order to get the ratios $\mu_R^{\rm BLM}/\sqrt{k_{J_1}k_{J_2}}$, we write $\mu_R=m_R\sqrt{k_{J_1}k_{J_2}}$ and look for the $m_R$ such that Eq.~(\ref{c_nnnbeta}) is satisfied. On the contrary, cases~$(a)$ and $(b)$ are independent of the energy of the process, so that for $n=0$ the two curves ``BLM$_a$'' and ``BLM$_b$'' are equivalent to those in Fig.~\ref{fig:photons}, since also in the present case $f(\nu)=0$. Moreover, note that, for $n=1$, $\chi(n=1, \nu=0)=0$ and therefore in Fig.~\ref{scalejet} the curve ``BLM$_b$'' overlaps exactly with the curve ``BLM$_a$''. In Fig.~\ref{figratiosY00} we present some ratios $C_m/C_n$ {\it versus} $Y$, where we make use of the scales shown in Fig.~\ref{scalejet}. In all cases shown in Fig.~\ref{figratiosY00} the factorization scale $\mu_F$ entering the MSTW2008nlo~\cite{PDF} parton distribution functions was chosen equal to the renormalization scale $\mu_R$ and the BFKL energy scale $Y_0$ was fixed at zero. One could look for optimal choices of the scale $Y_0$, based on the PMS method, for instance, but this goes beyond the scope of the present paper. The results shown in~\cite{Caporale2014} are slightly different from those shown here, because there all formulas were transformed back to the $\overline{\rm MS}$ scheme. Moreover, note that now we have an extra curve (BLM$_{\rm exact}$), for which $\mu_R$ was obtained by solving Eq.~(\ref{c_nnnbeta}). \section{Summary} In this paper we have focused on the BLM method to set the renormalization scale in a ge\-ne\-ric se\-mi\-hard pro\-cess, as described in the NLA BFKL approach in the $(\nu,n)$-represen\-ta\-tion. We found that the BLM scale setting procedure is well defined in the context of semihard processes described by the BFKL approach within NLA accuracy.
The straightforward application of the BLM procedure leads to a condition to be fulfilled, Eq.~(\ref{c_nnnbeta}), which defines the optimal renormalization scale depending on the specific process and on its energy. Our main observation here is that, due to the presence of $\beta_0$-terms in the next-to-leading expressions for the process-dependent impact factors, the optimal renormalization scale is not universal, but turns out to depend both on the energy and on the type of process in question. The non-universality of the BLM scale setting in exclusive processes was observed already in~\cite{Anikin:2004jb}. Note that the above-mentioned $\sim \beta_0$-contributions to the NLA impact factors are universally expressed in terms of the LO impact factors of the considered process, see our Eqs.~(\ref{beta-if}) and~(\ref{if2}). Thus, they can be easily calculated for all processes, even in the case when the full expressions for the NLO corrections to the impact factors are not known. Such contributions must be taken into account in the implementation of the BLM method for the description of cross sections of semihard processes, because {\it all} contributions to the cross section that are $\sim \beta_0$ must vanish at the BLM scale. Such an ``exact'' implementation of the BLM method can be difficult, since it calls for the solution of an integral equation, Eq.~(\ref{c_nnnbeta}), for each value of the energy of the process. This equation can be solved, in general, only numerically. Therefore, we considered several approximate approaches to the BLM scale setting. One of them, the closest to the ``exact'' one and labeled $(c)$, consists in imposing the vanishing of the integrand appearing in the above-mentioned general condition and leads to an optimal BLM scale depending also on the $\nu$-variable. This approximate method has a validity domain in the $\nu$-space and can be applied only if the relevant range of the $\nu$-integration giving a physical observable falls inside this validity domain. The other approximate approaches, labeled $(a)$ and $(b)$, can be viewed as a sort of low- and high-energy approximation of case~$(c)$ and of the ``exact'' determination. We have compared these different approaches in the study of the total cross section and of other physical observables related to the forward amplitude in processes such as the electroproduction of two light vector mesons, the total cross section of two virtual photons and the production of Mueller-Navelet jets~\footnote{For all these cases the expression of the amplitude is known within the NLA as a convolution of NLO impact factors with the NLA BFKL Green's function.}. Note that the formulas for the approximate cases~$(a)$ and $(b)$ were already used by us, without derivation, in our recent papers~\cite{Ivanov:2014hpa,Caporale2014}. Here we presented in full detail the implementation of the BLM method for arbitrary semihard processes, considering both its exact and approximate forms. We observed that, in general, the BLM scales obtained in cases~$(a)$ and $(b)$ provide a range inside which the ``exact'' and case~$(c)$ determinations lie. This is not the case for Mueller-Navelet jet production where, as discussed in the text, due to some peculiarities in the definition of the observables imposed by the experimental cuts, the natural ordering between the optimal scales in cases~$(a)$, $(b)$ and ``exact'' is sometimes lost.
It turns out, however, that azimuthal correlations and ratios between them in the Mueller-Navelet case are less sensitive to the different approaches to the BLM scale setting than in the other two processes considered in this work. Note that previous applications of the BLM method to the description of $\gamma^*\gamma^*$ total cross sections~\cite{Brodsky:1996sg,Brodsky:1997sd,Brodsky:2002ka,Zheng:2013uja} relied on the use of LO expressions for the photon impact factors. In~\cite{Brodsky:1996sg,Brodsky:1997sd} the $\gamma^*\gamma^*$ total cross section was considered in LLA BFKL, since the NLO corrections to the BFKL kernel were not yet known. However, in~\cite{Brodsky:1996sg,Brodsky:1997sd} the $\beta_0$-part of the first correction to the Born amplitude ({\it i.e.} the $t$-channel two-gluon exchange) was considered in order to establish the renormalization scale. Such an approach to the scale setting is closely related to our case~$(a)$ (scale fixed from the correction to the impact factor). Indeed, considering the expansion of the BFKL amplitude~(\ref{ampl-ff}), one can see that the first, $\sim \alpha_s$, correction to the Born amplitude originates entirely from the NLO parts of the impact factors. Comparing Eq.~(5.5) in~\cite{Brodsky:1997sd} with our Eq.~(\ref{casea}) for $f(\nu)=0$, as appropriate for the $\gamma^*\gamma^*$ process, one can see that they agree except for the term that, in our approach, derives from the change to the MOM scheme. One can therefore refer to~\cite{Brodsky:1996sg,Brodsky:1997sd} as the first (approximate) application of the BLM scale setting to a BFKL calculation. In~\cite{Brodsky:2002ka,Zheng:2013uja} the $\gamma^*\gamma^*$ total cross section was considered using the full NLA BFKL kernel, but with the LO approximation for the photon impact factors. With respect to the BLM scale setting, such an approach is equivalent to our approximate case~$(b)$. In~\cite{Ducloue2014} the BLM method was applied to Mueller-Navelet jet production: although the full NLO expression for the jet impact factor was used, the above-discussed effect of the $\beta_0$-contributions to the NLO jet impact factors on the choice of the BLM scale was overlooked. Therefore the value of the BLM scale obtained in~\cite{Ducloue2014} is similar to the one used in~\cite{Brodsky:2002ka,Zheng:2013uja} and, as such, coincides with our approximate case~$(b)$. Our results presented in Fig.~\ref{figratiosY00} allow one to assess the inaccuracy of BLM predictions for different Mueller-Navelet jet observables when approximate approaches to the BLM scale setting are used. In conclusion, the BLM method for scale setting, which was proposed more than three decades ago on a strong physical basis, remains a fundamental tool for perturbative calculations and has led to many successful comparisons between theoretical predictions and experimental data. In this paper we have provided the general paradigm for its systematic application to an important class of processes, {\it i.e.} semihard processes within the NLA BFKL approach, thus filling some gaps left open by previous approximate or incomplete approaches. We believe that this will increase the future significance of the method. \section*{Acknowledgements} D.I. thanks the Dipartimento di Fisica dell'U\-ni\-ver\-si\-t\`a della Calabria and the Istituto Nazio\-na\-le di Fisica Nucleare (INFN), Gruppo collegato di Cosenza, for the warm hospitality and the financial support. The work of D.I.
was also supported in part by the Russian Foundation for Basic Research via grant RFBR-13-02-00695-a. The work of B.M. was supported by the European Commission, European Social Fund and Calabria Region, which disclaim any liability for the use that may be made of the information provided in this paper. B.M. thanks the Sobolev Institute of Mathematics of Novosibirsk for the warm hospitality during the preparation of this work.
\section{\label{intro}Introduction} There has recently been a renewed interest in the interplay between Charge Density Wave (CDW) and superconducting states in layered dichalcogenides $MX_2$ ($M$=transition metal; $X$=S,Se), such as TiSe$_2$, TaSe$_2$ or TaS$_2$. The CDW mechanism remains controversial and various scenarios beyond the classic mechanism of Peierls instability \cite{pei55}, such as phonon softening \cite{cal11}, Coulomb repulsion \cite{faz80,ros06,sip08} or exciton condensation \cite{cer07}, have been invoked. Clarifying this controversy may help to explore the possibility of unconventional, \textit{e.g.} exciton-mediated, superconductivity in this system, as suggested by Ginzburg for metal-dielectric bilayers \cite{gin70}. In order to address this issue, we carried out a systematic study of the structural, magnetic and transport properties of pure VS$_2$, a model $d^1$ system with quasi two-dimensional properties. Its 1T (or CdI$_2$-type) structure (see Fig.~\ref{fig:XRD}) is made of layers of VS$_6$ octahedra separated by a van der Waals gap and is described by the $P\overline 3m$1 symmetry. As compared to the isoelectronic and isostructural compound 1T-TaS$_2$, which has been extensively studied, VS$_2$ is simpler owing to the absence of superconductivity and to the lower $Z$-value of V, which leads to a small spin-orbit coupling. Hence, VS$_2$ is a model system to study the stability conditions of the CDW phase. Despite these favorable characteristics, the literature on VS$_2$ is limited because of its metastability \cite{mur77}, and V-rich V$_{1+x}$S$_2$ phases, where interstitial V atoms are located between the layers, are obtained at ambient pressure \cite{kat79,pod02}. The pure ($x$=0) phase has hitherto been synthesized only by de-intercalating Li from LiVS$_2$ \cite{mur77,mul10}. Early magnetic \cite{mur77} and NMR \cite{tsu83} studies on such de-intercalated samples point to a CDW phase below 305 K; however, this phase was observed only recently by transmission electron microscopy \cite{mul10}, which shows an incommensurate in-plane propagation vector, ${\bf q}_{CDW}$=(0.21,0.21,0). Similarly to 1T-TaS$_2$ at high temperature or to 2H-TaS$_2$ at low temperature, the incommensurate CDW phase of VS$_2$ is concomitant with metallic properties. Indeed, only commensurate CDW phases are insulating, as in TiSe$_2$ and TaS$_2$ at low temperature. Our main result is that no CDW metallic phase is found in high-purity VS$_2$ samples synthesized under high pressure. The paper is organized as follows. In section II, we report on the sample preparation under high pressure and on the experimental methods. In section III, we report on the main experimental results, the salient ones being the evidence of the absence of the CDW phase concomitant with the observation of electron localization. Section IV is devoted to the discussion of the results and to the conclusions. \section{\label{exp}Experimental} \subsection{\label{HP}High-pressure synthesis} We reproducibly synthesized pure VS$_2$ powders under high pressure using a multi-anvil apparatus. We first prepared a 1:2.05 mixture of vanadium metal and sulfur powders within a high-pressure capsule made of thin Pt foil. A 5 \% sulfur excess was used to ensure the full oxidation of V. The capsule was subsequently kept at 5 GPa and 700 $^{\circ}$C for two hours.
Preliminary measurements using a commercial x-ray diffractometer equipped with a standard Cu K$_{\alpha}$ source indicate that the as-prepared powders are single-phase within the sensitivity limit of the apparatus. The external part of the capsule in contact with the Pt foil exhibits the presence of PtS$_2$ and was removed. The single-phase character and the stoichiometry of the powders were confirmed by a subsequent synchrotron x-ray diffraction study described below. Our strategy of using high-pressure synthesis was motivated by two early studies showing that the excess of vanadium is reduced to $x$=0.18 at 0.2 GPa \cite{yok85} and to $x$=0.11 at 2 GPa \cite{nak76}. An extrapolation of these results to higher pressures suggests that the $x$=0 phase should be stabilized at 5 GPa, which is confirmed by our present finding. \subsection{\label{struct}Structural study} The as-prepared powders have been studied by means of synchrotron x-ray powder diffraction using a wavelength $\lambda=0.49575$ \AA~ at the Materials Science beamline of the Swiss Light Source at the Paul Scherrer Institut. The characteristics of the beamline are described in detail elsewhere \cite{pat05}. A salient feature is the use of the high-resolution and fast Mythen microstrip detector, enabling the simultaneous detection of the diffracted intensity over $120^{\circ}$ in 2$\vartheta$ in the Debye-Scherrer (transmission) geometry. The substantially reduced data acquisition time and the parallel detection allowed us to perform an accurate structural study as a function of temperature in the wide 5-270 K range. In order to avoid preferential orientation effects, which are expected considering the layered structure of VS$_2$, the powders were placed in a spinning glass capillary mounted on the sample holder. The data were collected during both the cooling-down and the heating-up of the samples in order to exclude thermal hysteresis effects. The structural properties were further investigated using a JEOL 2100F high-resolution transmission electron microscope (HRTEM) reaching 1.9 \AA~ resolution and equipped with a 200 keV field-emission electron gun. The measurements were carried out on individual $\sim$ 1 $\mu$m size crystallites at room and liquid nitrogen temperatures. In the latter case, the temperature of the sample holder was $\approx$94 K. No alteration of the structure induced by the electron beam was detected. \subsection{Magnetic and transport properties} The dc magnetization of the as-prepared powders was measured in the 5-300 K range using a Quantum Design SQUID magnetometer equipped with a 5 T NbTi superconducting magnet. Both zero-field-cooling and field-cooling curves were taken at 10 oersted. Magnetization curves were taken also as a function of field up to 5 T at fixed temperatures in the 5-300 K range. In order to probe selectively the magnetic behavior of the V ions, $^{51}$V NMR experiments were performed by employing a home-built spectrometer \cite{all05} and a variable-field superconducting magnet. Spectra were recorded in field-sweep mode at a fixed frequency of 78.3 MHz by employing a standard $90^{\circ} - \tau - 90^{\circ}$ spin-echo pulse sequence with a delay, $\tau$, of 10 $\mu$s and a pulse duration of 2.4 $\mu$s. Spin-lattice relaxation times were measured by the saturation recovery method.
The frequency-dependent optical absorption, $A_{\omega}$, was measured at the AILES beamline of the SOLEIL synchrotron facility on a pellet obtained by intimately mixing 0.5 \% in weight of VS$_2$ powders with CsI powders. The characteristics of this beamline are described in detail elsewhere \cite{roy06}; in summary, we used a pulse-tube closed-cycle cryostat by Cryomech to vary the temperature within the 4-300 K range and a commercial Bruker IFS125/HR spectrometer to measure the transmission in the far-infrared 150-650 cm$^{-1}$ range. A spectral resolution of 1 cm$^{-1}$ was achieved using a 6 $\mu$m beamsplitter combined with a commercial Infrared Lab bolometer detector. The absorption of VS$_2$ was obtained by subtracting the absorption of a pure CsI pellet from the absorption of the VS$_2$-CsI pellet; the absorbance was then determined using the usual relation. The study of the transport properties of VS$_2$ was complemented by a dc electrical resistivity measurement down to 2 K on an as-prepared sintered sample using a commercial Quantum Design Physical Property Measurement System. \subsection{Band structure calculations} First-principles calculations were performed using density functional theory (DFT) in the linear-response approach \cite{RevModPhys.73.515} using the Quantum-Espresso code \cite{gia09} within the local density approximation (LDA) \cite{per81}. We used norm-conserving \cite{PhysRevB.43.1993} and ultrasoft \cite{PhysRevB.41.7892} pseudopotentials for S and V, respectively, and included semicore states in the V pseudopotential. Energy cut-off values were $60$ Rydberg for the kinetic energy expansion and $600$ Rydberg for the charge density. The Brillouin zone integration was performed over a $24\times 24\times 12$ electron-momentum grid and a $4\times 4\times 2$ phonon-momentum grid. The Fermi-surface Hermite-Gaussian smearing used in the simulation was $0.01$ Rydberg. \section{\label{res}Results} \subsection{\label{res_struct}Structural properties} Rietveld refinements of the x-ray diffractograms measured on the as-prepared VS$_2$ powders confirm the previously reported 1T structure in the whole 5-270 K range and indicate that the samples are single-phase, with the exception of $\lesssim 0.3 \%$ vol. of PtS$_2$ impurity formed by the reaction of S with the Pt capsule. For all temperatures, good refinements with reliability factors $R_p \approx$2.5 were obtained in the $P\overline 3m$1 symmetry of the 1T structure by assuming no interstitial vanadium atoms ($x$=0). The result of the refinement is reported in Fig.~\ref{fig:XRD} and Table I for the 5 K data. The small thermal parameters indicate a good degree of crystallinity and limited disorder. In the above symmetry, all the V-S bond distances in the VS$_6$ octahedron are equal and the only structural distortion allowed is a tilt of the octahedron axis with respect to the octahedron plane. We find a small tilt of 3.90(4)$^{\circ}$, which indicates that the octahedra are almost regular. We also verified the possibility of interstitial V atoms at the 1b site (0,0,1/2). For the 5 K data, this refinement yielded a site occupancy factor $x$=0.05, though the reliability factor was found to improve only slightly, from $R_p$=2.53 to 2.20. We thus conclude that the excess $x$, if any, is less than 0.05.
\begin{figure}[b] \includegraphics[width=120mm]{VS2_XRD.eps} \caption{\label{fig:XRD} (Color online) Synchrotron x-ray diffractogram measured on VS$_2$ powders at 5 K at the Materials Science beamline of the Swiss Light Source (SLS) using a wavelength $\lambda$=0.49575 \AA. Red and black lines represent the observed and calculated profile after Rietveld refinement, respectively, whilst the blue line is their difference. Green ticks represent the calculated positions of the Bragg peaks (see also Table I). Inset: The 1T structure of VS$_2$ described by the $P\overline 3m$1 space group.} \end{figure} Fig.~\ref{fig:latpar} summarizes the temperature dependence of the structural parameters. Whilst the behavior of the $a$-axis is normal, the $c$-axis exhibits an anomalous upturn at $T_{II} \approx 120$ K, which leads to a broad maximum at $T_{III} \approx 50$ K. This anomaly is associated with a V-shaped behavior of the V-S distance, $d$, which shows a sudden linear increase with decreasing temperature at $T_{II} \approx 120$ K (see Fig.~\ref{fig:latpar}b). By extrapolating the high-temperature behavior of $d$ down to low temperatures, it is found that the anomaly corresponds to an expansion of $d$ by $\sim 0.01$ \AA. Notable is the fact that the $a$- ($c$-) axis parameter of the present samples is $\sim$1 \% longer (shorter) than that of the Li de-intercalated samples \cite{mur77}. Further evidence of a structural difference between the two types of samples is given by TEM. Contrary to a previous report on de-intercalated crystals \cite{mul10}, the present TEM diffraction patterns exhibit only the main spots of the hexagonal lattice, and no trace of satellite peaks is found either at room temperature or at 94 K (see Fig.~\ref{fig:TEM}). In conclusion, no CDW phase or any other long-range structural modulation is detected, although the anomalous behavior of the V-S distance points at an incipient structural instability. \begin{table} \label{tab:structure} \caption{Refined structure of VS$_2$ in the trigonal $P$-3$m$1 space group at 5 K. Refined lattice parameters are $a$=$b$=3.23055(1) \AA, $c$=5.70915(2) \AA. Numbers in parentheses indicate the statistical uncertainty. Atomic coordinates $x$, $y$ and $z$ are in reduced lattice units.} \begin{ruledtabular} \begin{tabular}{ccccccc} Atom & Wyckoff pos. & Site symmetry & $x$ & $y$ & $z$ & $B_{iso}$ ($\times 10^{-4}$ \AA$^2$)\\ \hline V & 1a & $m$ & 0 & 0 & 0 & (*) \\ S & 2d & 1 & 1/3 & 2/3 & 0.25503(12) & 0.326(11) \\ \hline \multicolumn{7}{l} {(*)$B_{eq}$=0.920(18). Anisotropic $\beta_{ij}$ parameters ($\times 10^4$) for the V atom:}\\ \multicolumn{7}{l}{$\beta_{11}$=$\beta_{22}$=236.4(4.3); $\beta_{33}$=22.4(2.3); $\beta_{12}$=-118.2(2.1); $\beta_{13}$=0; $\beta_{23}$=0.}\\ \hline \multicolumn{7}{l}{Reliability factors with all non-excluded points (not corrected for background):}\\ \multicolumn{7}{l}{$R_p$=2.53; $R_{wp}$=4.52; $R_{exp}$=3.40; $\chi^2$=1.76}\\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[b] \includegraphics[width=110mm]{VS2_TEM.eps} \caption{\label{fig:TEM} Transmission Electron Microscopy diffraction pattern taken at 94 K on a grain of VS$_2$ marked by a circle in the inset. The ($hk$0) Bragg peaks and the in-plane $P\overline 3m$1 unit cell are shown.
The absence of satellite peaks rules out the presence of long-range structural modulations.} \end{figure} \begin{figure}[b] \includegraphics[width=88mm]{VS2_lat_par.eps} \caption{\label{fig:latpar} Temperature dependence of the lattice parameters, $a$ and $c$ (top panel), and of the unit cell volume, $V$, and the V-S distance, $d$ (bottom), of VS$_2$ obtained from the Rietveld refinement of the powder diffraction data described in the text. The anomalies of the $c$-axis and of the V-S distance at $T_{II}$ and $T_{III}$ are discussed in the text. $T_{I}$ indicates the temperature at which the electrical resistivity of Fig.~\ref{fig:rho} exhibits a minimum.} \end{figure} \subsection{\label{res_mag}Magnetic properties} \begin{figure}[b] \includegraphics[width=88mm]{VS2_chi.eps} \caption{\label{fig:susc} Top panel (a): temperature dependence of the magnetic susceptibility, $\chi(T)$, of a representative VS$_2$ sample measured at 10 oersted in both zero-field- and field-cooling (ZFC, FC) modes. The two ZFC and FC curves are identical within the experimental error. The continuous line represents a Curie-Weiss fit of the ZFC data, as described in the text. Inset: the same data plotted as $(\chi-\chi_0)T$ vs. $T$ in order to highlight the deviation of the data from the ideal Curie-Weiss dependence. The anomalies at $T_{II}$, $T_{III}$ and $T_{IV}$ are concomitant with the anomalies of Figs.~\ref{fig:latpar} and ~\ref{fig:rho} and are discussed in the text. Bottom panel (b): temperature dependence of the inverse magnetic susceptibility, from which the constant term $\chi_0$ has been subtracted, which gives evidence of the overall Curie-Weiss behavior of the data with negligible Weiss constant, $\vartheta \approx 0$.} \end{figure} \subsubsection{Magnetic susceptibility measurements} Fig.~\ref{fig:susc} shows the temperature dependence of the dc magnetic susceptibility, $\chi(T)$, measured at 10 oersted. The ZFC and FC curves are smooth and display no hysteresis; thus, no magnetic ordering or other phase transition occurs, in agreement with the x-ray diffraction results. In Li de-intercalated VS$_2$ \cite{mul10} and in the isostructural and isoelectronic compounds VSe$_2$ \cite{dis81} and TaS$_2$ \cite{dis80}, the CDW transition manifests itself as an abrupt drop of $\chi$. The absence of this feature in our samples confirms the TEM evidence of no structural modulation. The $\chi(T)$ curve is well described by a conventional Curie-like term, $\chi(T) = C / (T + \vartheta)$, in addition to a constant paramagnetic term, $\chi_0$, with the following fitting parameters: $\chi_0 = (2.62\pm 0.03) \times 10^{-4}$ emu mol$^{-1}$, a Curie constant $C = (6.29\pm 0.02) \times 10^{-2}$ emu mol$^{-1}$ K, and a vanishing Weiss constant, $\vartheta = -0.46\pm 0.01$ K. These values are similar to those found in the isostructural and isoelectronic compound VSe$_2$ in the CDW phase \cite{dis81}. Assuming a band picture and that the above $\chi_0$ value arises entirely from the Pauli contribution, we estimate a sizable density of states at the Fermi level, $g(E_F)=\chi_0/\mu_B^2 \approx$8.1 states eV$^{-1}$ cell$^{-1}$. However, the NMR data reported below indicate a significant orbital contribution, hence we believe that the above $g(E_F)$ value is overestimated; a more realistic estimate is given in the section devoted to the NMR results.
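The numerical estimates of this paragraph, together with the fraction of localized moments derived from the Curie constant in the following paragraph, are easily reproduced. A short Python sketch in CGS units (the Pauli conversion assumes $\chi_0 = N_A \mu_B^2\, g(E_F)$ per mole of formula units):

\begin{verbatim}
NA  = 6.022e23     # Avogadro number (1/mol)
muB = 9.274e-21    # Bohr magneton (emu)
eV  = 1.602e-12    # erg

chi0 = 2.62e-4     # fitted constant term (emu/mol)
# Density of states assuming chi0 is entirely Pauli-like:
g_EF = chi0/(NA*muB**2)*eV
print(f"g(E_F) ~ {g_EF:.1f} states/eV/cell")   # ~8.1

# Fraction of V sites carrying a localized S=1/2 moment,
# from mu = 0.79 muB vs. the ionic value of 1.73 muB
# (anticipating the discussion below):
print(f"localized fraction ~ {100*(0.79/1.73)**2:.0f} %")  # ~21 %
\end{verbatim}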
Li de-intercalated VS$_2$ samples exhibit about two times larger $\chi_0$ values, $\approx 6.0 \times 10^{-4}$ emu/mol \cite{mur77,mul10}, so our samples are expected to be less metallic than the latter ones, in agreement with the optical absorption data presented below. The measured Curie constant, $C$, corresponds to an effective moment $\mu$=0.79 $\mu_B$ per V ion, much smaller than the value $\mu = 1.73 \mu_B$ expected for $S=1/2$ V$^{4+}$ ions within an ionic picture. This reduction suggests a two-band scenario, where the majority of the electrons are delocalized, whilst a minority, $\approx$21\%, form localized moments. A departure from the ideal Curie behavior is noted in the $(\chi -\chi_0)T$ vs. $T$ plot of Fig.~\ref{fig:susc}; three anomalies are seen at $T_{II}$, $T_{III}$ and $T_{IV}$. The first two are concomitant with the structural anomalies mentioned above and shown in Fig.~\ref{fig:latpar}. The V-shaped feature at $T_{II} \approx 120$ K mimics a similar feature in the temperature dependence of the V-S bond distance, whilst the maximum at $T_{III} \approx 50$ K follows the maximum of the $c$-axis lattice parameter in Fig.~\ref{fig:latpar}. These anomalies are also found in the NMR response and in the electrical resistivity (see below), so they should reflect subtle changes of the electronic structure. \subsubsection{NMR measurements} In the whole 1.6-300 K range studied, we measured a strong $^{51}$V NMR signal. Note that this signal does not arise from the localized moments of the V ions, because the nuclei of paramagnetic V$^{4+}$ ions probe a large fluctuating hyperfine field, $B_{hf} \approx$ 10 T \cite{freeman_watson,oha99,kik94}. This field causes exceedingly fast nuclear relaxations that cannot be detected, unless a strong exchange interaction narrows the resonance line. This is not the case here, considering the negligible exchange energy measured, $J/k_B \sim \vartheta \lesssim$ 1 K. We estimated the amplitude and correlation time of the magnetic fluctuations at the nucleus to be $\delta \omega={^{51}\gamma} B_{hf} \approx 7\times 10^8$~s$^{-1}$ and $\tau=\hbar/J \approx 10^{-11}$ s, respectively, where $^{51}\gamma/2\pi = 11.19$ MHz/T is the nuclear gyromagnetic ratio of $^{51}$V. From these values, one obtains a transverse relaxation time $T_2\approx (\delta\omega^2\,\tau)^{-1}\approx$ 0.2 $\mu$s, much shorter than the instrumental dead time of $\sim$10 $\mu$s \cite{abragam}. Thus, the nuclear resonance of the V$^{4+}$ ions is not detectable and the observed signal originates from non-magnetic V. In Fig.~\ref{fig:NMR} we show two representative $^{51}$V field-sweep spectra at 150 and 1.6 K. They are characterized by two sharp, inhomogeneously broadened peaks and by a broad shoulder. Since the sample is single-phase, this indicates that the non-magnetic V ions probe two distinct electronic environments, which supports a picture of electronic phase separation on the local scale. Both peaks exhibit a sizable positive Knight shift of the resonance line with respect to the reference field $B_{ref}$=6.9959 T. Upon cooling, the two-peak structure is better resolved due to a transfer of spectral weight from the main peak at higher field to the minor peak. The spectra do not exhibit the characteristic quadrupolar pattern of $I = 7/2$ nuclear spins expected for powder samples of $^{51}$V.
This pattern would display a sharp central line and $2I-1$ satellite lines shifted by the quadrupolar field and broadened by the random orientation of the electric field gradient (EFG), which gives rise to peculiar powder singularities. We believe that the quadrupolar structure is not resolved because the high fields used cause a large magnetic broadening of a spatially inhomogeneous distribution of Knight shifts. This explanation is supported by an earlier estimate of the quadrupolar frequency, $\nu_Q$. Specifically, for VS$_2$ \cite{tsu83} and VSe$_2$ \cite{tsu81}, a value $2\pi \nu_Q/^{51}\gamma \approx$ 0.033 T (in field units) was found. Since $\nu_Q$ is proportional to the EFG at the nucleus, the broad shoulders are attributed to the quadrupole satellite transitions in the presence of magnetic and possibly EFG inhomogeneities, which would smear the powder singularities. The two sharp peaks are fitted well in the whole 1.6-300 K range studied, which enables us to plot the Knight shifts $^{51}K$ of the two peaks and the shift $^{51}\overline{K}$ of the center of gravity of the spectrum as a function of temperature (see Fig.~\ref{fig:Knight}a). For both peaks, $^{51}K$ increases upon cooling down to $T_{III} \approx$ 50 K, where an anomaly in the susceptibility data is also observed, and then it levels off. Below $T_{II}$, the average shift $^{51}\overline{K}$ exhibits a more marked temperature dependence than that of the single peaks, owing to the aforementioned transfer of spectral weight. The plot also shows the temperature dependence of the NMR signal amplitude, $^{51}A$, integrated over the spectrum and corrected for the nuclear susceptibility $\propto 1/k_BT$. Note that $^{51}A$ is nearly $T$-independent at high temperature, whilst it drops by $\approx$ 50\% below $T_{IV} \approx$ 20 K, where the susceptibility also decreases. Considering that $^{51}A$ is directly proportional to the number of $^{51}$V nuclei detected in the resonance, this loss of signal indicates that, for a sizable fraction of the nuclei, the spin-spin relaxation time, $T_2$, is significantly reduced, preventing detection whenever $T_2$ becomes shorter than the dead time of the NMR receiver. It follows that the correlation time of the fluctuations probed by these nuclei increases dramatically below $T_{IV}$. The nature of such slow fluctuations remains to be established; they may be either magnetic or electric, \textit{e.g.} due to slow charge motion, which would produce random EFG modulations. The $^{51}K$ data provide further evidence that the NMR signal does not arise from localized V moments. Namely, the orbital contribution to $K$ and to the susceptibility $\chi$ is usually negligible for a magnetic ion; thus $K$ would be proportional to $\chi$ via the hyperfine coupling constant and should follow the Curie law, which is not observed here. Moreover, for a magnetic ion, a \textit{negative} hyperfine field at the nucleus is expected \cite{freeman_watson}, in contrast with our observation of \textit{positive} $K$.
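As a numerical sanity check of the detectability argument given above, the order-of-magnitude estimate of the transverse relaxation time on the paramagnetic V$^{4+}$ sites can be reproduced in a few lines of Python (all input values are those quoted in the text):

\begin{verbatim}
import math

hbar = 1.055e-34            # J s
kB   = 1.381e-23            # J/K
gamma = 2*math.pi*11.19e6   # 51V gyromagnetic ratio (rad/s/T)
B_hf  = 10.0                # fluctuating hyperfine field (T)
J_K   = 1.0                 # exchange energy J/kB (K), upper bound

domega = gamma*B_hf         # fluctuation amplitude, ~7e8 1/s
tau    = hbar/(kB*J_K)      # correlation time, ~1e-11 s
T2     = 1.0/(domega**2*tau)

print(f"T2 ~ {T2*1e6:.2f} microseconds")
# -> ~0.3 us, consistent with the ~0.2 us quoted above and far
#    shorter than the ~10 us dead time: V4+ nuclei are unobservable.
\end{verbatim}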
\begin{figure}[b] \includegraphics[width=88mm]{VS2_NMR.eps} \caption{\label{fig:NMR} (Color online) $^{51}$V field-sweep NMR spectra of VS$_2$ at 1.6 and 150 K, recorded at 78.3 MHz. The reference field, $B_{ref}$, is indicated by the vertical red line. For clarity, the 1.6 K spectrum has been scaled by a factor of 2 after correction for the nuclear susceptibility $\propto 1/k_BT$. The continuous line is a fit of the two sharp peaks indicated by the broken line. The small peak at 6.927 T produced by the $^{63}$Cu signal of the copper coil has been excluded from the fit.} \end{figure} \begin{figure}[b] \includegraphics[width=88mm]{VS2_Knight.eps} \caption{\label{fig:Knight} (Color online) a) Temperature dependence of the Knight shifts $^{51}K$ of the two peaks (open triangles), of the center of gravity of the spectra $^{51}\overline K$ (filled triangles), and of the integrated spectral amplitude multiplied by temperature, $^{51}A$ (squares). b) Temperature dependence of the spin-lattice relaxation rate $T_1^{-1}$ and best fit to the Korringa law $T_1^{-1} \propto T$ (dashed line). Inset: the same as before for the Korringa product $(T\,T_1)^{-1}$.} \end{figure} The temperature dependence of the spin-lattice relaxation was measured for the main peak. The recovery of the nuclear magnetization $M(t)$ to thermal equilibrium was found to obey the following multiexponential law, which is appropriate for the magnetic relaxation of the quadrupole-resolved central line of an $I=7/2$ nucleus following its selective saturation: \cite{narath} \begin{equation} \label{eq:SR7_2} \frac{M(t)}{M(t=\infty)} = 1 - \frac{4}{21}e^{-2Wt} - \frac{2}{11}e^{-12Wt} - \frac{20}{91}e^{-30Wt} - \frac{175}{429}e^{-56Wt} \end{equation} where $W$ is the transition probability between two adjacent nuclear Zeeman levels. In Fig.~\ref{fig:Knight}b, the spin-lattice relaxation rate $T_1^{-1}\equiv 2W$, determined by the analysis of $M(t)$ using Eq.~\ref{eq:SR7_2}, is plotted as a function of temperature. In the 10-150 K range, the data follow the Korringa law $T^{-1}_1 \propto T$, characteristic of a non-magnetic metal with a temperature-independent spin susceptibility. The validity of this law in the above range is also apparent in the $(T\,T_1)^{-1}$ vs.\ $T$ plot, which shows a nearly constant value $\approx 0.17$~K$^{-1}$s$^{-1}$. At lower temperature, below $T_{IV}$, a large deviation from this simple behavior is found. In a metal, $K$ and $T_1^{-1}$ probe the static and dynamic electronic spin susceptibilities, respectively. The two quantities are then predicted to scale with each other according to the universal Korringa relation: \begin{equation} \label{eq:korringa} K^2 = R\, (T\,T_1)^{-1}, \end{equation} where $R$ is the Korringa ratio. For a simple $s$-band metal with negligible electronic correlations, $R$ is expected to be equal to the universal value $S_0 = (\hbar/4\pi k_B) (\gamma_e/\gamma_n)^2$, where $\gamma_e$ and $\gamma_n$ are the electronic and nuclear gyromagnetic ratios \cite{abragam}. For a $d$-band metal, a similar result holds, with $R=\kappa S_0$ and $2 \le \kappa \le 5$ \cite{yafet_jaccarino}. From Fig.~\ref{fig:Knight}, it is clear that the above relation cannot be satisfied by our temperature-dependent $^{51}K$ and temperature-independent $(T\,T_1)^{-1}$ data. Even restricting ourselves to the 10 K $\le T \le$ 50 K range, where both quantities are approximately constant, anomalously high values $R/S_0=10-100$ are obtained. This is explained by comparing our data with those by Tsuda {\it et al.} \cite{tsu83}, who found Knight shifts one order of magnitude smaller, but a comparable $(T\,T_1)^{-1}\approx 0.4$ K$^{-1}$s$^{-1}$ at $T<$100 K, implying a spin susceptibility in our sample that is smaller by only a factor of $\approx$1.5.
The difference between the $T_1$ values reported by Tsuda \textit{et al.} and ours is not significant, since the recovery law of a quadrupolar nucleus depends upon the experimental conditions, \textit{e.g.} the width of the irradiated band and the length of the saturation pulse train \cite{and61,reg91}. This may lead to deviations from Eq.~\ref{eq:SR7_2}, hence to an uncertainty in $T_1$. Tsuda \textit{et al.} showed that $^{51}K$ contains two contributions of opposite sign which nearly cancel each other: a positive orbital one, $^{51}K_{orb}$, and a negative one, $^{51}K_d$, proportional to the spin susceptibility of the $d$-band conduction electrons. The discrepancy between Tsuda's $^{51}K$ values and ours is then reconciled by assuming a similar value of the $K_d$ term (and a similar Korringa product) for the two samples, but a much larger orbital contribution in our sample, which explains the larger Korringa ratios $R$. A large orbital shift $^{51}K_{orb}$ in $d$-band metals originates from the presence of low-lying excited multiplets via the van Vleck mechanism \cite{clo64, wal08} and thus depends upon the crystal field splitting. Since the latter is sensitive to the local structural parameters, we believe that the anomaly of $^{51}K$ and $^{51}A$ at $T_{III}$ reflects the corresponding anomaly of the $c/a$ ratio and of the V-S bond distance. To conclude this section, we endeavor to estimate the Pauli contribution $\chi_d$ to the magnetic susceptibility from the $T_1^{-1}$ data and from the values of the Korringa and hyperfine constants determined experimentally in previous reports. According to band calculations \cite{myron80} and ARPES data \cite{mul10}, we should assume a full $d$ character of the conduction band. The Korringa ratio was determined to be $R=5.0\times 10^{-6}$~sK in both VS$_2$ and VSe$_2$ \cite{tsu83,tsu81}. According to Eq.~\ref{eq:korringa}, we obtain $K_d=-0.09$~\% for our sample. Next, we consider the proportionality relation $^{51}K_d={\cal A}_{iso}\chi_d /(N_A \mu_B)$, where ${\cal A}_{iso}$ is the isotropic core-polarization term in the hyperfine coupling of vanadium and $N_A$ is the Avogadro number. From the electron spin resonance of the V$^{3+}$ ion \cite{epr96,epr94}, we take ${\cal A}_{iso}=-85$~kOe/$\mu_B$ to obtain $\chi_d = 6\times 10^{-5}$ emu/mol, which corresponds to a density of states $g(E_F) \approx$ 2.0 states eV$^{-1}$ cell$^{-1}$. We finally obtain $\chi_{orb}=\chi_0-\chi_d\approx2\times 10^{-4}$~emu/mol, comparable with the values reported in the related compound LiVO$_2$ \cite{Vorbital}.
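The chain of estimates in this paragraph can be condensed into a short numerical check. A Python sketch in CGS units (the hyperfine relation is used in the dimensionally consistent form $^{51}K_d={\cal A}_{iso}\chi_d/(N_A\mu_B)$, with ${\cal A}_{iso}$ in Oe/$\mu_B$):

\begin{verbatim}
NA, muB, eV = 6.022e23, 9.274e-21, 1.602e-12

R_k    = 5.0e-6    # Korringa ratio (s K), from the VS2/VSe2 reports
invTT1 = 0.17      # (T T1)^-1 (1/(K s)), this work
A_iso  = -85e3     # core-polarization hyperfine coupling (Oe/muB)
chi0   = 2.62e-4   # total constant susceptibility (emu/mol)

K_d   = -(R_k*invTT1)**0.5       # negative spin (d) shift
chi_d = K_d*NA*muB/A_iso         # spin susceptibility (emu/mol)
g_EF  = chi_d/(NA*muB**2)*eV     # states / eV / cell

print(f"K_d ~ {100*K_d:.2f} %")                  # ~ -0.09 %
print(f"chi_d ~ {chi_d:.1e} emu/mol")            # ~ 6e-5
print(f"g(E_F) ~ {g_EF:.1f} states/eV/cell")     # ~ 2
print(f"chi_orb ~ {chi0 - chi_d:.1e} emu/mol")   # ~ 2e-4
\end{verbatim}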
\begin{figure}[b] \includegraphics[width=96mm]{VS2_opt_cond.eps} \caption{\label{fig:opt_cond} Temperature dependence of the optical absorption of pure VS$_2$ powders as a function of frequency in the 4-300 K range. The absence of a Drude peak and a weak increase of the absorption with temperature indicate a nonmetallic behavior. Note the absence of phonon peaks, except for a weak absorption structure at $\approx$390 cm$^{-1}$ ($\approx$ 49 meV), suggestive of a highly screened phonon mode, in agreement with the prediction of an $E_u$ infrared-active mode at 405.8 cm$^{-1}$ (see text).} \end{figure} \subsection{Transport properties} \subsubsection{Infrared optical conductivity} The optical absorption spectra, $A_{\omega}$, measured in the infrared 140-650 cm$^{-1}$ range are shown for two representative temperatures, 5 and 300 K, in Fig.~\ref{fig:opt_cond}. The salient features are: (i) the absence of a Drude peak; (ii) a sizable and monotonic increase of $A_{\omega}$ with $\omega$; (iii) the absence of phonon features at high temperature and one broad phonon feature at $\omega_{ph} \approx 390$ cm$^{-1}$ at low temperature; (iv) the spectra change little with temperature in the whole 5-300 K range. Specifically, besides the appearance of the above phonon feature at low temperature, the only noticeable change is a small increase of the absorption with temperature at low frequency. Considering that, under the present experimental conditions of low absorption, $A_{\omega}$ is simply proportional to the real part of the optical conductivity, $\sigma_{\omega}$, features (i) and (ii) give evidence of non-metallic transport, whilst (iv) suggests a modest charge localization. According to a symmetry analysis of the 1T structure, feature (iii) is consistent with the expectation of two infrared-active phonon modes with $A_{2u}$ and $E_u$ symmetry. It is then plausible that only one of these modes is detected at low temperature; indeed, the measured frequency, $\omega_{ph}$, is in good agreement with the calculated value of 405.8 cm$^{-1}$ for the $E_u$ mode (see the section on phonon calculations). The fact that the phonon feature at $\omega_{ph}$ is visible only at low temperature and is broadened indicates a partial screening of the infrared-active dipoles, in agreement with the observation of a Korringa behavior in the NMR data and with the significant density of states extracted from the measured $\chi_0$ value and calculated \textit{ab initio} (see below). On the other hand, the absence of a Drude peak indicates that the electron density responsible for the screening is not sufficient to ensure metallic transport, possibly because of disorder or electron-electron correlations leading to charge localization. Conversely, a picture of a conventional metal or semimetal accounts for the infrared response of other transition metal dichalcogenides, such as 2H-TaS$_2$ \cite{hu07} and 1T-TiSe$_2$ \cite{li07}, respectively, which both exhibit well-developed Drude peaks. A picture of charge localization has previously been proposed also for the isostructural and isoelectronic $d^1$ compound 1T-TaS$_2$ \cite{faz80,sip08}, which indeed displays a qualitatively similar optical response in the so-called nearly commensurate CDW (NCCDW) phase at room temperature \cite{gas02}. On the other hand, in the commensurate CDW phase at low temperature, the optical spectra of 1T-TaS$_2$ \cite{gas02} and also of 1T-TiSe$_2$ \cite{li07} display a large number of sharp phonon features, consistent with the large number of infrared phonon modes expected for the CDW superstructure. The absence of such additional phonon features in our samples is a further strong argument against the existence of a CDW phase. To the best of our knowledge, the present optical data are the first ones reported for VS$_2$, so no direct comparison can be made with Li de-intercalated samples; it would be interesting to see whether those samples display additional infrared phonon features in the CDW phase. \subsubsection{Electrical resistivity} In Fig.~\ref{fig:rho}, we report the electrical resistivity curve $\varrho(T)$ measured in the 2-325 K range on one representative VS$_2$ sample. Consistently with the optical conductivity results, the overall temperature dependence of $\varrho$ exhibits a modest variation within the 5.5-7.2 m$\Omega$ cm range.
Specifically, at low temperature, one notes a weak increase of $\varrho$, which confirms the above evidence for charge localization. At high temperature, the weakly negative resistivity coefficient above $T_{I} \approx$ 250 K is explained by a small increase of thermally activated carriers, which is characteristic of a small energy gap at $E_F$. A comparison between the present resistivity data and those obtained on Li de-intercalated samples \cite{mul10} further confirms that the latter samples are different from the present ones. First, we find no anomaly of $\varrho(T)$ at $T_{CDW}$=305 K, which supports the conclusion on the absence of a CDW transition. Second, the marked metallic behavior of the above samples below $\approx$260 K, with a sizable residual resistivity ratio, $RRR \sim 10$, is completely different from the nonmetallic behavior reported here. On the other hand, the present data display two anomalies at $T_{II} \approx 120$ K and $T_{IV}\approx 20$ K, concomitant with the anomalies in the temperature dependence of the V-S distance and of the magnetic susceptibility shown in Fig.~\ref{fig:latpar} and Fig.~\ref{fig:susc}, respectively. In Fig.~\ref{fig:rho}, these anomalies appear as inflexion points of the $\varrho (T)$ curve and are better seen in its derivative. The inflexion point at $T_{II}$ corresponds to a slowing down of the resistivity increase upon cooling, which is interpreted as a weakening of the localization below $T_{II}$. The opposite seems to occur at $T_{IV}$. In order to provide a full picture of the transport properties of the present high-pressure VS$_2$ phase and to confirm the proposed scenario of charge localization, a single-crystal study would be desirable. \begin{figure}[b] \includegraphics[width=96mm]{VS2_rho.eps} \caption{\label{fig:rho} Temperature dependence of the dc electrical resistivity, $\varrho$, and of its derivative. Note the semimetallic behavior, characterized by a crossover from a positive to a negative slope of $\varrho$ at $T_I$, and the two abrupt changes of slope at $T_{II}$ and $T_{IV}$, concomitant with the anomalies of the lattice parameters and of the magnetic susceptibility (see Figs.~\ref{fig:latpar},\ref{fig:susc}).} \end{figure} \subsection{Electronic structure and lattice stability} We first performed a full structural optimization of the crystal structure by varying both the internal coordinates and the unit cell parameters. We found that the LDA result substantially underestimates the experimental volume. Namely, we obtained $a_{\rm LDA}=3.102$ ~\AA, $c_{\rm LDA}/a_{\rm LDA} = 1.786$ and a $z$ coordinate of $0.262$ (in reduced coordinates) for the S atom. Since a compressed volume tends to weaken the tendency towards CDW formation (it is recalled that the CDW phase disappears under hydrostatic pressure in all metallic transition metal dichalcogenides), here we used the experimental lattice parameters and optimized only the internal coordinate of the S atom in order to investigate the stability of the structure towards CDW formation. \subsubsection{Electronic band structure} The electronic structure of VS$_2$ shown in Fig.~\ref{fig:bands} turns out to be very weakly dependent on the volume used in the calculations. The overall shape of the band structure closely recalls that of TiSe$_2$ \cite{cal11}, but with a different position of the Fermi level and a smaller hybridization between the chalcogen $p$ states and the transition metal $3d$ states.
According to the band structure, VS$_2$ should be a metal with a density of states at the Fermi level $g(E_F)$ = 3.2 states eV$^{-1}$ cell$^{-1}$. This value is in agreement with the $\chi_0$ value obtained from the susceptibility measurements, considering that only about one third of $\chi_0$ arises from the Pauli term, as suggested by the analysis of the NMR data. As to the band characteristics at the Fermi level, the $d$-electrons form electron pockets at the $M$ and $L$ symmetry points, and an additional small electron pocket is present along the $K-A$ direction. \subsubsection{Phonon dispersion} Insight into the occurrence of a CDW transition is obtained from the harmonic phonon dispersion calculation reported in Fig.~\ref{fig:bands}. Within the harmonic approximation, an imaginary phonon frequency (here plotted as negative) is the signature of a second-order structural instability, such as a CDW. In VS$_2$, we do indeed find that the harmonic phonon dispersion shows an instability of a transverse acoustic phonon at the same wave vector ${\bf q}_{CDW}$=(0.21,0.21,0) as that of the CDW phase found experimentally in Ref. \cite{mul10}. The \textit{caveat} is that the calculation turns out to be extremely sensitive to the values of the lattice parameters (much more than in the case of other dichalcogenides) and the unstable phonon mode is only slightly imaginary. This suggests that the structure can be stabilized by anharmonic effects. This is indeed what occurs in the parent compound 2H-NbS$_2$ \cite{PhysRevB.86.155125}, where harmonic calculations overestimate the tendency towards CDW formation and anharmonic effects stabilize the lattice without a CDW. The lack of anharmonic effects in the present calculations and the high sensitivity of the DFT phonon dispersion to the lattice parameters do not allow us to draw more definitive conclusions. It is nevertheless safe to conclude that VS$_2$ is on the verge of a CDW instability, which accounts for the contrasting results about the occurrence of a CDW phase in Li de-intercalated samples and in the present ones. Finally, the calculations predict two infrared-active $E_u$ modes at 200.9 cm$^{-1}$ and 405.8 cm$^{-1}$ and two Raman-active modes at 246.3 cm$^{-1}$ and 355.3 cm$^{-1}$ with $E_g$ and $A_{1g}$ symmetry, respectively. The prediction of the 405.8 cm$^{-1}$ mode is consistent with the observation of a highly screened phonon mode at a similar frequency in the optical absorption, as discussed above. \begin{figure}[b] \includegraphics[width=0.4\columnwidth]{VS2_electron_bands.eps}\hspace{1.0cm}\includegraphics[width=0.4\columnwidth]{VS2_phonon_bands.eps} \caption{\label{fig:bands}Left: Electronic band structure and density of states of VS$_2$ in the local density approximation. The size of the circles on a given band is proportional to the vanadium 3$d$ component of the band. Right: Phonon dispersion along the $(1,1,0)$ reciprocal space direction. The label $q_{\rm CDW}$ indicates the propagation vector of the CDW instability reported in Ref. \onlinecite{mul10}.} \end{figure} \section{\label{concl}Discussion and conclusions} In conclusion, we have systematically studied the structural, magnetic and transport properties of single-phase 1T-VS$_2$ samples prepared under high pressure. Contrary to previous reports on Li de-intercalated samples, we found that 1T-VS$_2$ is nonmetallic and displays no long-range structural modulations, such as a CDW. This difference is attributed to the different synthesis route employed here, which does not rely on chemical methods.
Specifically, we envisage that residual Li atoms and iodine or acetonitrile molecules introduced by the Li de-intercalation method may alter the doping level, which would be sufficient to stabilize the metallic and CDW phase owing to the semi-metallic character of the band structure. To the best of our knowledge, the doping effect inherent in the de-intercalation process has not been considered in previous studies, but it is supported by the observation of structural modulations in MoS$_2$ and WS$_2$ chemically exfoliated by means of Li intercalation \cite{eda12}. This scenario would also explain the different lattice parameters of the present VS$_2$ samples with respect to those of the Li de-intercalated ones. Further support for this scenario is given by the enhancement of metallic properties in the related layered compounds AuVS$_2$ \cite{gau02} and Ag$_{0.75}$VS$_2$ \cite{ali13}, where the nominal valence of V is close to $3+$ instead of $4+$. Within a rigid band picture, the evidence of nonmetallic conductivity would be at odds with the Korringa-like behavior of the spin-lattice relaxation time, the sizable Pauli susceptibility, $\chi_0 \sim 10^{-4}$ emu mol$^{-1}$, and the prediction of metallic properties by the band structure calculations. This discrepancy indicates that such a picture is not appropriate and that localization effects due to disorder or electronic correlations should be invoked. An enhancement of the Pauli susceptibility induced by these effects in quasi-two dimensions would also account for the comparable $\chi_0$ values reported in other 1T-VS$_2$-related compounds, regardless of their metallic or insulating properties. Examples are the misfit layer system La$_{1.17-x}$Sr$_x$VS$_{3.17}$ \cite{yas95,car99}, which displays no appreciable change of $\chi_0$ across the doping-induced metal-insulator transition, the $M$V$_2$S$_4$ family ($M$=Ti, Cr, Fe, Ni or Cu) \cite{pow99,kle11}, where both metallic and insulating properties are found, and the insulator Sr$_3$V$_5$S$_{11}$ \cite{kle13}. In support of this scenario, it is recalled that a Mott state has been proposed by several groups for 1T-TaS$_2$, which displays similar magnetic and transport properties in the aforementioned NCCDW phase \cite{gee72,faz80,ros06,sip08}. In this phase, the direct STM observation of a nm-size domain structure \cite{bur91}, in excellent agreement with calculations \cite{nak84}, in conjunction with high-pressure resistivity measurements, suggests a picture of insulating and commensurate CDW regions separated by metallic and weakly CDW-distorted ones \cite{sip08}. A phase separation scenario may then apply to the present case as well, for our magnetic susceptibility data unveil the existence of both localized moments and metallic carriers, whilst the NMR data show that the metallic V sites are inhomogeneous. The existence of nm-size domains would indeed explain the absence of satellite diffraction peaks in our TEM pictures, whilst a slightly different doping level in Li de-intercalated samples would be sufficient to stabilize a uniform metallic/CDW phase, in agreement with the prediction of a latent CDW instability by the phonon calculations. The confirmation of a phase separation scenario for the present 1T-VS$_2$ samples awaits further studies by means of local probes, such as Scanning Tunneling Microscopy/Spectroscopy. The authors acknowledge L. Cario and M. Marezio for stimulating discussions and R. Lobo and B. L\'eridon for making available their PPMS system.
MC gratefully acknowledges financial support from the Graphene Flagship and from the French National ANR funds under references ANR-11-IDEX-0004-02, ANR-11-BS04-0019 and ANR-13-IS10-0003-01. Computer facilities were provided by CINES, CCRT and IDRIS (project no. x2014091202).
\section*{Introduction} Recent developments in the field of metamaterials and metasurfaces have provided useful platforms for manipulating and tailoring light-matter interaction, with numerous applications ranging from cloaking \cite{alu2005achieving, schurig2006metamaterial}, enhanced spontaneous emission \cite{alu2009boosting}, sensing \cite{alu2009cloaking}, and signal processing and information handling \cite{silva2014performing, estakhri2019inverse}, to nonreciprocity \cite{coulais2017static}, just to name a few. Among the various classes of metamaterials, epsilon-near-zero (ENZ) and near-zero-index (NZI) structures have attracted increasing attention due to their unique features in light-matter interaction \cite{silveirinha2006tunneling, liberal2017near, reshef2019nonlinear, kinsey2019near}. In such structures, the relative permittivity and/or relative permeability attain values near zero, thus making the effective refractive index of the structure near zero. Consequently, at the operating frequency the wavelength in these media is ``stretched'', making the phase of the signal approximately uniform across the structure \cite{engheta2013pursuing}. As a result, the waves exhibit ``static-like'' spatial distributions, while temporally they are dynamic. This has led to numerous exciting wave phenomena, with several potential applications \cite{silveirinha2006tunneling, liberal2017near, reshef2019nonlinear, kinsey2019near, liberal2017rise}. One such feature is the possibility of levitation of electrically-polarized nanoparticles in the vicinity of ENZ substrates \cite{Rodriguez2014}. In our earlier work, we theoretically showed that an infinitesimally small nanoparticle, when electrically polarized at a given frequency, could be levitated when placed near an ENZ substrate. This phenomenon, which was inspired as a classical analogue of the Meissner effect (levitated magnets in proximity of superconductors), can provide a new approach in optomechanics when manipulation of electrically polarizable particles in the presence of optical fields is desired. Careful manipulation of particles with light, which has a long history dating back to the pioneering work of Ashkin in the 1970s \cite{Ashkin1970,AshkinAPL71}, has played important roles in various areas, from biology \cite{Fazal2011} to nanoscience and nanotechnology \cite{Marago2013}. At the nanoscale, various methodologies have been used for such optical manipulation, including trapping \cite{Spesyvtseva2016,JuanNP11}, pushing \cite{GargiuloNL16,Donato2018}, and binding \cite{DemergisNL12,Donato_NL_2019}, with different materials such as dielectric, semiconductor, plasmonic, and biological ones \cite{Jones2015}. The surrounding medium can be vacuum, air, or liquid. Optical tweezing \cite{Ashkin1986,Jones2015} is usually achieved using optical beam shaping to generate the desired potential traps \cite{Dholakia2011}. Recently, new approaches to the optical manipulation of objects without beam shaping were proposed. Soljacic and co-workers \cite{ilic2017topologically} proposed that the motion of a Janus particle with spatially asymmetric absorption can be controlled by changing the incident wavelength. Ilic and Atwater \cite{ilic2019self} proposed self-stabilizing optical manipulation of macroscopic objects by controlling the anisotropy of the light scattered from the structured object's surface. Both approaches, however, rely on structuring the object in lieu of the incident light.
In the present work, we merge the two fields of ENZ metamaterials and optical trapping, providing a new platform, which we name ENZ-based optomechanics, for manipulating and controlling the mechanical motion of particles in the vicinity of ENZ structures. We explore, numerically and analytically, how various parameters, such as the size, shape and composition of the particle and its distance from the ENZ substrate, affect the optomechanical forces on such particles. We consider both homogeneous and layered structures as our ENZ substrates. In recent years, researchers have been able to tailor the effective permeability and permittivity of composite media by engineering the electric and magnetic resonances of nanostructures. Together with related developments in nanophotonics, metamaterials provide unprecedented freedom to define and sculpt electromagnetic modes. Metamaterials make it possible to alter the topology of photonic isofrequency surfaces - which govern the momentum and energy of optical modes inside a medium - in contrast to the conventional bounded spherical and ellipsoidal isofrequency surfaces in natural dielectrics \cite{krishnamoorthy2012topological}. Among many other extreme optical features, unbounded isofrequency surfaces in hyperbolic-dispersion metamaterials \cite{poddubny2013hyperbolic} and point-like vanishing surfaces in epsilon-near-zero (ENZ) media \cite{mahmoud2014wave} constitute two examples of advanced modal engineering. In particular, epsilon-near-zero metamaterials provide extended modes with uniform phase over micrometer length scales, inducing profound effects on nanoscopic light-matter interactions. These deeply subwavelength structured surfaces support unique electromagnetic modes that can be used in sub-diffraction imaging \cite{jacob2006optical}, waveguiding \cite{silveirinha2006tunneling}, spontaneous emission engineering \cite{cortes2012quantum}, and biosensing \cite{sreekanth2016extreme}. In the following, we introduce the geometry of the problem and discuss the electromagnetic modeling of the structure, along with the dipole approximation. We present extensive numerical results, based on the finite-element method (using the commercial software COMSOL Multiphysics\textsuperscript{\tiny\textregistered}) and on the T-matrix method, together with a series of results for the various parameters involved in the problem. Physical insights into the results are presented and future directions are discussed. \begin{figure} \includegraphics[width=\textwidth]{Fig1.pdf} \caption{Geometry and optical force parameter space maps. a) Sketch of the geometry. We consider a generic particle, in principle of any shape and composition, in front of a metamaterial surface at an edge-to-edge distance, $h$, immersed in an external medium (e.g., water) of refractive index $n_{\rm m}$. The origin of the coordinate system is placed on the surface so that the $z$ axis is positive in the semi-infinite space where the particle resides. A monochromatic optical wave illuminates the particle and surface at normal incidence, so that the total field is the superposition of the incident ($E_{\text{I}}$), reflected ($E_{\text{R}}$) and scattered fields ($E_{\text{S}}$, $E_{\text{SR}}$). The resulting optical force can be either attractive (negative) or repulsive (positive). b) Near-field force $(R,\phi)$ map. We explore the range of reflectivity, $R$, and phase, $\phi$, related to the reflection of the incident wave on a generic surface.
The ideal ENZ surface ($R = 1$, $\phi = 0$) is in the top right corner of the map and shows a repulsive force. Here the near-field force component is calculated in the dipole approximation for a polystyrene (dielectric constant $\varepsilon_{p}$=2.543 at 560 nm) particle of radius $a=20$ nm at a fixed edge-to-edge distance of $h=10$ nm in water. c) Curves of $(R,\phi)$ for different substrates consisting of alternating metal and dielectric layers, in the thin-layer limit where effective medium theory is valid. The color of each curve indicates the metal filling fraction. The zero-force lines from panels d and e are superimposed as dashed lines. d-f) Total force $(R,\phi)$ maps as calculated with T-matrix methods for a dielectric particle of radius $a=20$ nm (d), $a=220$ nm (e), and $a=1000$ nm (f), respectively, at a fixed $h=10$ nm. The total force maps have a structure that is strongly dependent on the particle size. This is due to the increase of the scattering force component which, for large particles, overcomes the gradient-like force component that is dominant for nanoparticles.} \label{Fig1} \end{figure} \paragraph{Geometry of the problem.} Figure \ref{Fig1}a presents the geometry of our problem. A polarizable particle, made of a single nonmagnetic material (or of multilayered materials) and surrounded by an external medium (e.g., water) of refractive index $n_{\rm m}$, is located at an edge-to-edge distance $h$ above a metamaterial substrate. The particle can be spherical (or of other shapes, as will be discussed later in the manuscript), and it is made of a (dielectric or metallic) material with relative permittivity $\varepsilon_{\rm p}$. The substrate can be considered as a homogenized nonmagnetic medium with relative permittivity near zero at the frequency of operation, or a layered structure engineered to function as ENZ. A monochromatic optical wave illuminates this structure at normal incidence. The goal is to evaluate the optical force on the particle and to investigate how various parameters (the radius $a$, the edge-to-edge distance $h$, the particle's permittivity, and the signal frequency) affect the optical force's magnitude and direction, \textit{i.e.}, whether it is a repulsive (positive) or an attractive (negative) force. In the next section, in order to gain some physical insight, we start by assuming the polarizable particle to be represented by an infinitesimally small electric dipole, and we discuss the analytical approach for evaluating the force acting on this particle. In the subsequent sections, we expand our approach to include full-wave numerical simulations of the problem, allowing us to consider realistic sizes and shapes for this particle. \paragraph{Dipole approximation.} We first consider a particle size much smaller than the light wavelength ($a \ll \lambda$), so that optical forces can be calculated analytically within the dipole approximation (DA) \cite{Chaumet2000, AriasGonzalezJOSAA03, Jones2015}. Due to its simplicity, the dipole approximation can provide useful results that can be compared with more complex light scattering approaches (T-matrix, finite-element methods) at the nanoscale \cite{Polimeno2018}. \begin{figure} \begin{center} \includegraphics[scale=0.5]{Fig2.pdf} \caption{Total optical force as a function of edge-to-edge distance, $h$, for polystyrene particles of different size: a) $a=20$ nm, b) $a=220$ nm, and c) $a=1$ $\mu$m.
Different approaches for the calculation of the force are compared: dipole approximation (short dots), COMSOL (circles) and T-matrix (continuous lines). Different surfaces, glass (red), silver (magenta) and ENZ (blue), yield very different optomechanical interactions in terms of force amplitude, modulation with respect to $h$, and phase shifts. Arrows indicate self-binding points, where particles are stably trapped in front of the surface. (d,e) Size dependence of the total optical force for a fixed edge-to-edge distance, $h$=10 nm. For small particles the gradient force component of the partially reflected plane wave dominates, resulting in a modulated dependence on the particle size, while for large particles radiation pressure has a major contribution, resulting in a negative force pushing the particle towards the surface.} \label{Fig2} \end{center} \end{figure} We start our analysis from the near-field force component. It has been shown \cite{Rodriguez2014} that in front of an ENZ surface an emitting point-dipole source is subjected to a near-field repulsive force, reminiscent of the Meissner effect in superconductors \cite{Rodriguez2014}. This portion of the force, which we refer to as the ``near-field'' force, is due to the interaction of the emitting dipole with the substrate (excluding the force due to the presence of the incident and reflected waves). When all forces are considered (including those caused by the incident and reflected waves), we speak of the ``total force''. We can extend the result of Ref. \cite{Rodriguez2014} to a finite-sized polarizable particle illuminated by an incident field by considering the radiated power upon scattering, $P_{\rm rad}=\sigma_{\rm scat} I(z)$, in terms of the scattering cross section, $\sigma_{\rm scat}$, and the light intensity, $I(z)$. Thus, the near-field force component is (see Supp. Info.): \begin{equation}\label{Fenz_pw1} \mathrm{F_{enz}}(z)\approx - \frac{9}{512 \pi^4 c} \mathrm{Re}\left( \frac{\varepsilon_{\rm s}-\varepsilon_{\rm m}}{\varepsilon_{\rm s}+\varepsilon_{\rm m}} \right) \left( \frac{\lambda}{n_{\rm m} z} \right)^4 \sigma_{\rm scat} I(z) \end{equation} \noindent where $z$ is the axial coordinate ($z=h+a$, $a$ is the radius of the particle, $h$ is the edge-to-edge distance of the particle from the surface), $c$ is the vacuum speed of light, $\varepsilon_{\rm m}=\varepsilon_0 n_{\rm m}^2$ is the permittivity of the surrounding medium, $n_{\rm m}$ is the refractive index of the medium, and $\varepsilon_{\rm s}$ is the complex dielectric permittivity of the ENZ surface. In Fig. \ref{Fig_bead_all}, a panel summarizing the results of the calculation of the near-field force (d-f) on a 20 nm dielectric bead in water is shown. Three different surfaces are considered: lossless (Im$(\varepsilon_s)$=0), with medium loss (Im$(\varepsilon_s)$=0.5), and with high loss (Im$(\varepsilon_s)$=0.8). The comparison with the results obtained for a point dipole in vacuum \cite{Rodriguez2014} shows that here the presence of a medium (water) broadens the repulsive near-field force region from $-1<\varepsilon_s<1$ to $-1.77<\varepsilon_s<1.77$; moreover, as already observed \cite{Rodriguez2014}, even for surfaces with high loss there is still a repulsive near-field force. In order to explore how the ENZ surface can influence the near-field and total forces on the particle, we evaluate such effects in terms of the amplitude $\rho$ and phase $\phi$ of the complex reflection coefficient of an incident wave from this surface.
In Fig.~\ref{Fig1}b, the near-field force on an $a$=20 nm radius dielectric bead at $h$=10 nm from the surface has been calculated as a function of the surface reflectivity $R$=$\left|\rho e^{i\phi}\right|^2=\rho^2$ and phase angle $\phi$, which are connected to the surface complex refractive index $\tilde{n}= n_{\rm s}+i k_{\rm s}$ by \cite{Born1980_6th_ed}: \begin{align} \rho=\sqrt{\frac{(n_{\rm m} - n_{\rm s})^2 + k_{\rm s}^2}{(n_{\rm m} + n_{\rm s})^2 + k_{\rm s}^2}} && \phi=\arctan \left[ \frac{-2n_{\rm m} k_{\rm s}}{n_{\rm m}^2 - n_{\rm s}^2 - k_{\rm s}^2 }\right] \label{R and phi} \end{align} Here we use $e^{- i\omega t}$ as our time-harmonic convention. The calculated near-field force can reach a fraction of a femtonewton for an incident intensity of approximately 5.6$\cdot 10^8$ W/m$^2$ (corresponding to a typical experimental configuration, see Sect. 1.2 of the Suppl. Info.) and changes character from attractive to repulsive when the reflection phase angle changes from $\phi$=$-\pi$ to $\phi=0$. Metals such as Au or Ag, having a certain amount of absorption ($k_s$ in Eq. \ref{R and phi}), are in the attractive region of the near-field force (compare Figs. \ref{Fig1}b and c). On the contrary, in front of an ideal ENZ surface, having $R$=1 and $\phi$=0, the near-field force is repulsive. Substrates of alternating metal and dielectric layers can span a broader range of $\phi$ and $R$ values. In the limit of layers much thinner than the incident wavelength, where effective medium theory (EMT) is valid, we show the $(R,\phi)$ results for four different metal/dielectric mixtures in Fig. \ref{Fig1}c (see Sect. 4 of the Supp. Info. for more details on the EMT calculation). The metal filling fraction is indicated by the color of the curve. Depending on the fraction, we can switch the sign of the force from attractive to repulsive and vice versa. If we go beyond EMT and take into account the finite thickness of the layers in real structures, as described in the discussion of Fig.~\ref{Fig3} below, we can achieve an even wider range of $\phi$ and $R$ values. We now consider the total optical force exerted by an incident field on a nanoparticle, calculated in the DA. This is the sum of two main components: a gradient force, $\mathrm{F}_{\rm grad}$, and a scattering force, $\mathrm{F}_{\rm scat}$ \cite{Jones2015}. For plane wave illumination (for Gaussian beams see Sect. 1.2 of the Supp. Info.) impinging normally on the ENZ surface, the force components are influenced by the incident and reflected fields. Thus, considering only the axial direction $z$, they are written as (see Supp. Info.): \begin{equation}\label{Fgrad_pw1} \mathrm{F_{grad}}=\frac{1}{2}\frac{n_{\rm m}}{c \varepsilon_{\rm m}} \mathrm{Re}(\alpha) \frac{d I(z)}{dz} \end{equation} \begin{equation}\label{Fscatt_pw1} \mathrm{F_{scat}}=\frac{n_{\rm m}}{c}\sigma_{\rm ext}I_0\left[ \rho^2 - 1\right] \end{equation} \noindent where $\alpha$ is the particle complex polarizability \cite{DraineAJ93}, \begin{equation}\label{alpharad} \alpha=\frac{\alpha_{0}}{1-i\frac{k^3\alpha_0}{6 \pi \varepsilon_{\rm m}}} \end{equation} \noindent $\alpha_0$ is the Clausius-Mossotti polarizability, and $\sigma_{\rm ext}=\frac{k}{\varepsilon_{\rm m}}\mathrm{Im}({\alpha})$ is the extinction cross-section, related to the particle absorption and scattering \cite{AriasGonzalezJOSAA03,Jones2015}, with $k=2\pi n_{\rm m}/\lambda$ the wave number and $\lambda$ the wavelength.
The gradient force, $\mathrm{F}_{\rm grad}$, drives the particle towards the maximum (minimum) of the modulated light intensity profile for positive (negative) real part of the polarizability. On the other hand, the scattering force, $\mathrm{F}_{\rm scat}$, is constant with respect to $z$, and it always pushes the particle along the beam propagation direction.\\ In Fig. \ref{Fig1}d, the $(R,\phi)$ map of the calculated total axial force on a polystyrene $a$=20 nm bead (dielectric constant $\varepsilon_{p}$=2.543 at 560 nm) at $h$=10 nm from the surface is shown. The total force is one order of magnitude larger than the near-field force (Fig. \ref{Fig1}b) and changes from attractive to repulsive character as the phase angle changes from $-\pi$ to 0. This is due to the gradient force (see also Fig. \ref{Fig2}a, short dots), which dominates the optomechanical response and drives the particle towards the high field intensity regions. The change of the phase of the reflection coefficient shifts the intensity modulation resulting from the interference between the incident and reflected waves. Thus, the high-intensity points shift accordingly, and the sign of the force changes around $\phi\sim -\pi/4$. We now calculate the total optical force on the dielectric bead in front of glass, Ag and ENZ surfaces as a function of the distance, $h$. The ENZ material is chosen so that $n_{s}\approx 0.476$ and $k_{s}\approx 0.511$, in order to obtain a real part of the complex permittivity close to zero and an imaginary part close to 0.5, to include the unavoidable losses of realistic systems. This choice leads to values of $R$ and $\phi$ similar to those of the experimentally fabricated layered substrates described below, corresponding to the point marked with a red star in Fig.~\ref{Fig3}. The strong modulation resulting from the standing wave is clearly visible. The points with zero force and negative slope are trapping points that correspond to equilibrium positions for the particle dynamics (arrows in Fig. \ref{Fig2}a). For the case of the ENZ surface the equilibrium point closest to the surface occurs at $h\sim$10 nm, while for the glass and Ag surfaces they occur at $h\sim$ 85 nm and $h\sim$ 60 nm, respectively. By linearizing the force around an equilibrium point $z_{\rm eq}$, $F(z) \approx -\kappa (z-z_{\rm eq})$, a trap spring constant $\kappa$ can be calculated. The trap spring constants $\kappa_{\mathrm{ENZ}}$ and $\kappa_{\mathrm{Ag}}$ calculated in front of the ENZ and Ag surfaces can be compared to the spring constant $\kappa_S$ calculated, in a standard single-beam optical tweezers setup, for the same particle and at the same light intensity (see Supp. Info.). The spring constants are $\kappa_{\mathrm{ENZ}}=15$ fN/$\mu$m and $\kappa_{\mathrm{Ag}}=27$ fN/$\mu$m, while the trap spring constant in a standard optical tweezers setup is two orders of magnitude lower, $\kappa_S=0.24$ fN/$\mu$m. The beneficial effect of the ENZ and Ag reflective surfaces on the trapping is evident. An increasing particle size corresponds to larger optical forces and different trapping points (see Fig. \ref{Fig_dielbeads_vsradius} in the Supp. Info. for DA calculations on larger nanoparticles with radii of 50 and 100 nm).
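The DA results of this section are straightforward to reproduce numerically. The following Python sketch evaluates Eqs.~\ref{Fgrad_pw1}-\ref{alpharad} for the $a=20$ nm polystyrene bead in water in front of the lossy ENZ surface, assuming the standard standing-wave intensity profile $I(z)=I_0\left[1+\rho^2+2\rho\cos(2kz+\phi)\right]$ produced by the interference of the incident and reflected plane waves (this profile is implied, but not written out, in the discussion above; all numerical inputs are those quoted in the text):

\begin{verbatim}
import numpy as np

eps0, c = 8.854e-12, 2.998e8      # SI units throughout
lam, n_m = 560e-9, 1.33           # wavelength, water index
eps_p, a = 2.543, 20e-9           # polystyrene bead
I0 = 5.6e8                        # incident intensity (W/m^2)
n_s, k_s = 0.476, 0.511           # lossy ENZ surface

k = 2*np.pi*n_m/lam
eps_m = eps0*n_m**2

# Complex Fresnel coefficient at normal incidence -> rho, phi:
r = (n_m - (n_s + 1j*k_s))/(n_m + (n_s + 1j*k_s))
rho, phi = abs(r), np.angle(r)

# Clausius-Mossotti polarizability with radiative correction:
alpha0 = 4*np.pi*eps_m*a**3*(eps_p - n_m**2)/(eps_p + 2*n_m**2)
alpha = alpha0/(1 - 1j*k**3*alpha0/(6*np.pi*eps_m))
sigma_ext = k/eps_m*alpha.imag

def F_total(h):
    z = h + a                                 # bead center height
    dIdz = -4*k*rho*I0*np.sin(2*k*z + phi)    # dI/dz, standing wave
    F_grad = 0.5*n_m/(c*eps_m)*alpha.real*dIdz
    F_scat = n_m/c*sigma_ext*I0*(rho**2 - 1)
    return F_grad + F_scat

for h in (5e-9, 10e-9, 50e-9):
    print(f"h = {h*1e9:3.0f} nm: F = {F_total(h)*1e15:+.3f} fN")
# The sign change between h = 5 and 10 nm brackets the near-surface
# trapping point of the ENZ curve in Fig. 2a (arrow at h ~ 10 nm).
\end{verbatim}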
\paragraph{Full-wave simulations.} In order to calculate the optical forces on larger particles, we use two different full-wave modeling approaches, based on the transition (T-)matrix formalism \cite{waterman1971symmetry,Borghese2007book} and on finite-element methods using the commercial software COMSOL Multiphysics\textsuperscript{\tiny\textregistered}, respectively. In particular, the T-matrix treatment of electromagnetic scattering from particles close to, or deposited on, a plane surface that separates two homogeneous media of different optical properties \cite{Borghese2007book,JOSA95,JOSA99,AO99} can account for the role of the different multipoles in the particle-surface interaction (see Supp. Info. for details). Indeed, the presence of the surface can have a striking effect on the scattering pattern of the particles, because the field that illuminates the particle is partly or totally reflected by the surface, and the reflected fields contribute both to the exciting and to the observed field. Moreover, the field scattered by the particle is reflected by the interface and thus contributes to the exciting field. In other words, there are multiple scattering processes between the particle and the interface. As a result, the field in the accessible half-space includes the incident field $\mathbf{E}_{\rm I}$, the field reflected by the interface $\mathbf{E}_{\rm R}$ (as we would have if no particle were present), the field scattered by the particle $\mathbf{E}_{\rm S}$ and, finally, the field that, after scattering by the particle, is reflected by the surface, $\mathbf{E}_{\rm SR}$, related to $\mathbf{E}_{\rm S}$ by the reflection condition (see Fig. \ref{Fig1}a). Thus, the observed field, the superposition of $\mathbf{E}_{\rm S}$ and $\mathbf{E}_{\rm SR}$, includes all the scattered and scattered-reflected multipole contributions (see Supp. Info. for more details). It is possible to define the T-matrix for particles in the presence of the interface, which is the starting point for calculating optical forces and torques, either by direct integration of the Maxwell stress tensor (MST) over a closed surface containing the particle \cite{Jones2015} or by exploiting the general expressions of the optical force and torque in terms of a multipole expansion \cite{Saija2005,Borghese2006,Borghese2007}. In COMSOL, the optical force is obtained by direct integration of the MST, which is calculated from the total electric and magnetic fields, including the incident fields, the fields scattered by the particle, and all the fields reflected by the surface (see Supplementary Information for details on the full-wave methods). The results obtained in the DA for the different surfaces are compared in Fig. \ref{Fig2}a with those obtained by using full electromagnetic calculations based on COMSOL (circles) and T-matrix methods (continuous lines). A very good agreement is clearly observed. In all approaches, the total optical force on small particles is modulated by the sinusoidal term of the gradient force. Its magnitude is larger (in the fN range) on more reflective surfaces, and its phase changes sign going from an Ag to an ENZ substrate, leading to the formation of optical trapping points at different distances (arrows in Fig. \ref{Fig2}a). In brief, the gradient force dominates the ENZ optomechanics for small particles even in proximity of the surface. The T-matrix and COMSOL approaches allow the calculation of optical forces for particles larger than those tractable in the DA.
\ref{Fig1}e and f, the $(R,\phi)$ maps of the total axial force calculated with the T-matrix approach on an $a$=220 nm bead (Fig. \ref{Fig1}e) and an $a$=1 $\mu$m bead (Fig. \ref{Fig1}f), at $h$=10 nm from the surface, are shown. The comparison with Fig. \ref{Fig1}d highlights the strong dependence of the total optical force on the bead size. The repulsive-attractive behaviour is driven by the competition between gradient force and scattering force, which may give repulsive behaviour for intermediate-size beads in front of surfaces having large reflectivity (see Fig. \ref{Fig1}e); however, at large bead size (Fig. \ref{Fig1}f), the scattering force overcomes the gradient force, and the total optical force is attractive in front of every type of surface. \begin{figure} \begin{center} \includegraphics[scale=1.15]{Fig3.pdf} \caption{Accessing the full range of reflectance ($R$) and reflected phase ($\phi$) via layered metamaterials. $R$ versus $\phi$ is illustrated for light at normal incidence with wavelength $\lambda = 560$ nm reflected from the surface of a thin-film stack. The thick curves of varying color are transfer matrix numerical calculations labeled $n \times d$, where $n$ refers to the number of bilayers in the stack and $d$ to the thickness of each bilayer. The total thickness $n d = 500$ nm is kept constant. The bilayers consist of individual Ag and Al$_2$O$_3$ layers, with the fraction of Ag in the bilayer indicated by the metal filling fraction color. The red $n \times d$ labels correspond to systems where the dielectric is the upper layer in each bilayer (the one closest to the surface), while the blue labels are the ones where the metal is on top. The curve labeled EMT is the effective medium approximation to the system, which corresponds to $n \to \infty$, $d \to 0$ with $n d = 500$ nm. In all the above cases the superstrate is water and the substrate is glass. For comparison we show points indicating the $R$ and $\phi$ values for a simple interface between a water superstrate and a pure material substrate (Ag, Au, Ge, TiO$_2$, Al$_2$O$_3$, and an ideal ENZ). We also show experimental results (green stars, details in the Supp. Info.) involving a water superstrate and 5 trilayers (Al$_2$O$_3$/Ag/Ge from top to bottom, where Ge is present as a thin wetting layer to ensure fabrication quality). The dotted green trend line corresponds to keeping the Ag and Ge layer thicknesses fixed at 15 nm and 2.5 nm, respectively, while varying the Al$_2$O$_3$ thickness from 80 nm to 20 nm (left to right). In order to compare the EMT calculation with full-wave analysis, COMSOL is used (black diamonds) to calculate $R$ for different layered structures with various metal filling fractions (0.4 and 0.6) and layer thicknesses (50 nm and 100 nm), while keeping the total thickness of the layered structure unchanged (500 nm), and with different ordering of the materials in the stack (metal on top, blue labels, and dielectric on top, red labels).} \label{Fig3} \end{center} \end{figure} In Fig. \ref{Fig2}b and c the T-matrix calculations of the total optical force on larger particles are shown as a function of the edge-to-edge distance from the ENZ, Ag and glass surfaces. The larger size of these particles with respect to the nanosized bead in Fig. \ref{Fig2}a highlights the increased contribution of the scattering force relative to the gradient force. The scattering force is detrimental to stable equilibrium positions in front of the glass surface for the 220 nm radius bead, and in front of both the ENZ and glass surfaces for the 1 $\mu$m radius bead.
The lower reflectivity of these surfaces as compared to the Ag surface does not allow an efficient balance between the scattering forces from the incoming and reflected beams, increasing the scattering force contribution with respect to the gradient force and hindering the trapping. In Fig. \ref{Fig2}d and \ref{Fig2}e the results are reported for increasing bead size at fixed distance, $h=$10 nm, from the ENZ, Ag or glass surfaces. It is shown that at small bead size (below approximately 300 nm radius) the gradient force modulates the total force. At increasing bead size, the particle extinction cross section increases; consequently the scattering force dominates over the gradient force, suppressing the equilibrium points and inducing an effective attractive force directed towards the surfaces. \paragraph{Epsilon-Near-Zero Metamaterials.} Regarding layered ENZ materials, we have demonstrated experimentally that it is possible to control the optical topology and to induce the ENZ behavior by designing and fabricating subwavelength layered lattice structures interlocking noble-metal and dielectric thin films \cite{sreekanth2013experimental}. Upon selecting the metal-dielectric bilayers, the thickness of each layer, the filling fraction and the number of bilayers, the frequency of the optical topological transition in the iso-frequency surface leading to the epsilon-near-zero behavior can be tailored. The lattice structure is fabricated as a five tri-layer system using $\mathrm{Al_2O_3}$, Ag, and Ge from top to bottom. The Ag layer thicknesses were in the range of 10-25 nm, with a thin Ge layer (1-3 nm) underneath to ensure surface wetting. The $\mathrm{Al_2O_3}$ layer thicknesses were systematically varied between roughly 20 nm and 80 nm across different material systems (Fig. \ref{Fig3}), thereby tuning the frequency of the topological transition. In previous studies we used effective medium theory to calculate the dielectric permittivity of the entire structure, as opposed to more recent inverse design approaches that account for a wider material parameter space. We perform spectroscopic ellipsometry measurements to evaluate the dielectric tensor components and the dispersive behavior of the layered structure. By fitting the measured angular reflectance and the ellipsometry parameters $\psi$ and $\Delta$, we can directly obtain the effective optical constants of the multilayer slab. Using the transfer matrix method, we can then predict the magnitude and phase of reflection at normal incidence with a water superstrate. The green stars in Fig.~\ref{Fig3} represent these predicted values from 6 samples consisting of a 5 bilayer Al$_2$O$_3$/Ag thin-film stack with a Ge seed layer to ensure the uniformity of the Ag films. By varying the thickness of the $\mathrm{Al_2O_3}$ layers, we covered a phase range of $\Delta\Phi$ $\approx$ 180$^\circ$ and a reflectance range of $\Delta R$ $\approx$ 0.5. The full range of accessible $R$ and $\phi$ values is even larger if we expand the design space of the substrate to include different numbers of bilayers and metal filling fractions. The thick curves in Fig.~\ref{Fig3} show transfer matrix calculations of $(R,\phi)$ for Al$_2$O$_3$/Ag stacks with different structural parameters indicated by the labels. In all cases the total thickness of the stack was kept fixed at 500 nm. The color at each point along the curves corresponds to the metal filling fraction.
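A minimal transfer-matrix sketch of such an $(R,\phi)$ calculation at normal incidence is given below (Python; the refractive index values are illustrative, non-dispersive placeholders and should be replaced by tabulated optical constants for quantitative work).
\begin{verbatim}
import numpy as np

# Transfer-matrix sketch: R and phi at normal incidence for a layered
# stack in water on glass. Illustrative, non-dispersive indices at 560 nm.
lam   = 560e-9            # vacuum wavelength [m]
n_sup = 1.33              # water superstrate
n_sub = 1.52              # glass substrate
n_diel  = 1.66            # assumed Al2O3 index
n_metal = 0.12 + 3.45j    # assumed Ag index (e^{-i w t} convention)

def layer_matrix(n, d):
    """Characteristic 2x2 matrix of one homogeneous layer."""
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_R_phi(layers):
    """layers: list of (index, thickness) from superstrate to substrate."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d)
    B, C = M @ np.array([1.0, n_sub])
    r = (n_sup * B - C) / (n_sup * B + C)  # complex reflection coefficient
    return abs(r)**2, np.angle(r)

# 10 bilayers, 50 nm each (dielectric on top), metal filling fraction 0.4
f, d_bl, n_bl = 0.4, 50e-9, 10
layers = [(n_diel, (1 - f) * d_bl), (n_metal, f * d_bl)] * n_bl
R, phi = stack_R_phi(layers)
print(f"R = {R:.3f}, phi = {phi:.3f} rad")
\end{verbatim}
With an empty layer list the routine reduces to the bare water-glass interface, $r=(n_{\rm sup}-n_{\rm sub})/(n_{\rm sup}+n_{\rm sub})$, which provides a quick consistency check.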
In the limit of many thin bilayers we approach the EMT result of Fig.~\ref{Fig1}c, which is also reproduced here for comparison. Note that actual layered materials can achieve positive values of $\phi$, while homogeneous materials (for example those described by EMT) are confined to the $\phi < 0$ subspace. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Fig4.pdf} \caption{Role of polarizability and shape on optical forces. a) COMSOL calculation of the total optical force on core-shell particles based on SiO$_2$ and Ag as a function of the distance from the metamaterial surface. Different ratios between the core radius $a_1$ and the total particle radius $a_{\rm tot}$=20 nm have been considered. Moreover, both materials have been considered as a core. b) Maximum force found for each core-shell structure considered, as pointed out by the dashed blue line in a). c) Extinction spectrum of the SiO$_2$-Ag core-shell particle (total radius $a_{\rm tot}$=20 nm and core radius $a_{1}$=16.1 nm) in water. d) $(R,\phi)$ contour plot of the total optical force for the core-shell particle at $h$=10 nm from the surface. The ENZ and Ag surfaces used for the calculation of the optical forces in the DA are shown as circles. e) Extinction spectra of an Ag prolate ellipsoid in water oriented with the long axis parallel to the field (black solid line) and oriented with the short axis parallel to the field (red dashed line). The resonances relative to the long and short axes are indicated. f) $(R,\phi)$ contour plot of the total optical force on the Ag ellipsoid at $h$=10 nm distance from the surface. In the calculation, the spheroid is aligned with the long axis in the direction of the wave polarization. The ENZ and Ag surfaces used for the calculation of the optical forces in the DA are shown as circles. The optical force is on the order of tens of pN in front of ENZ (repulsive) and Ag (attractive). The force can be close to 200 pN if the spheroid is in front of an ideal ENZ surface, having $R$=1 and $\phi$=0.} \label{Fig4} \end{center} \end{figure} \paragraph{Complex particles (core-shell, ellipsoids, ENZ).} In addition to spherical beads, we evaluate the optical forces on different types of particles in front of dielectric, metallic or ENZ surfaces. We consider spherical core-shell particles based on $\mathrm{SiO_2}$ and Ag, an Ag prolate spheroid and a spherical particle made of an ENZ material. We first used COMSOL simulations to calculate the forces on core-shell structures in front of the layered ENZ material under 560 nm illumination. The total particle radius $a_{\rm tot}$ is fixed at 20 nm. The particles had alternatively $\mathrm{SiO_2}$ or Ag as the core, with the other material as the shell. In Fig. \ref{Fig4}a the total force on the core-shell particles as a function of the distance from the ENZ surface is shown. It is clearly observed that the presence of Ag in the outer shell enhances the total force with respect to the inverse structure having $\mathrm{SiO_2}$ as the shell, but also with respect to a pure Ag sphere. The highest value of the force is found (red curve in Fig. \ref{Fig4}a) for a $\mathrm{SiO_2}$-Ag core-shell structure having a core radius of $a_1=16.1$ nm and an Ag shell 3.9 nm thick which, as shown in Fig. \ref{Fig4}c, is at the resonance condition at the ENZ wavelength. As shown in Figures \ref{Fig4}d, \ref{Fig_cs}d and \ref{Fig_cs}e, the particle resonance at 560 nm enhances the optical force to the piconewton range (Fig.
\ref{Fig4}d) but only at very short distances from the surfaces, being repulsive in the ENZ case (Fig. \ref{Fig_cs}d) and attractive in the Ag case (Fig. \ref{Fig_cs}e). Otherwise, the total optical force is in the fN range. Specifically, at the resonance $\mathrm{F_{enz}}$ is in the piconewton range close to the ENZ surface (from $h$=0 nm to roughly 10 nm). The gradient force, $\mathrm{F_{grad}}$, has an oscillating character, but its amplitude is smaller ($\approx$ 1 fN) than $\mathrm{F_{enz}}$, due to the small real part of the polarizability at resonance ($\mathrm{Re(\alpha)=0.04 \cdot 10^{-32} \ F m^2}$). On the contrary, $\mathrm{F_{scatt}}$ is large (tens of femtonewtons), because of the large extinction cross section at resonance. Thus, at 560 nm (black curve in Fig. \ref{Fig_cs_vs_wl}a), the total force is repulsive and in the piconewton range close to the surface, but becomes attractive and approximately constant as the $\mathrm{F_{enz}}$ contribution fades off with distance. The behaviour of the forces on the core-shell particle can also be studied for wavelengths smaller and larger than the particle plasmon resonance (Suppl. Info.). The calculation has been made at 552 nm, on the blue side of the plasmon resonance, and at 566 nm, on its red side. At these wavelengths, the scattering force is slightly lower than at resonance, while $\mathrm{F_{grad}}$ increases by at least one order of magnitude. For this reason, its oscillating character shows up in the total force (Fig. \ref{Fig_cs_vs_wl}a, blue and red curves). Moreover, as the polarizability changes sign from one side of the resonance to the other, the gradient force also inverts its phase going from the blue to the red side of the resonance. Similar considerations hold for the optical forces in front of the Ag surface (Figure \ref{Fig_cs_vs_wl}b); however, in this case, $\mathrm{F_{enz}}$ is attractive close to the surface. We now consider an Ag prolate spheroid as a prototypical non-spherical particle. This is chosen with a long axis $a_1=56.8$ nm and short axes $a_2=a_3=20$ nm. As shown in Figure \ref{Fig4}e, the particle has, in water, a long-axis resonance at 560 nm and a short-axis resonance at 360 nm. For the calculation of the total optical forces we considered the case in which the spheroid has the long axis aligned with the wave polarization, and the short semiaxis as the size parameter in Eq. \ref{Fenz_pw}. We obtain a further enhancement of the total optical force (tens of piconewtons, Fig. \ref{Fig4}f) which, as in the core-shell structure, is repulsive in front of the ENZ surface and attractive in front of the Ag surface. In Figure \ref{Fig4}f a contour plot of the total optical force, calculated as a function of the surface reflectivity $R$ and phase shift $\phi$, namely in front of all possible surfaces, is shown. We clearly see that the repulsive force can be close to 200 pN in front of an ``ideal" ENZ surface, having the maximum reflectivity and a vanishing phase shift. In the case of ENZ particles, we used the same $n$ and $k$ values used for the ENZ surface. We calculated the optical forces in front of glass, Ag or ENZ surfaces. The calculation has been made for ENZ beads having radii $a$=20, 50 and 100 nm. As shown in Fig. \ref{Fig_ENZbeads_vsradius}, the forces are about five times larger than the ones observed for their dielectric bead counterparts. The larger scattering force of the ENZ particle hinders its trapping in front of the glass surface, for all radii.
Moreover, the 100 nm radius ENZ particle cannot be trapped in front of the ENZ surface either (Fig. \ref{Fig_ENZbeads_vsradius}c). Results are shown in the Supplementary Information. Finally, we have studied the total optical force in the case of a focused (NA=1.3) Gaussian beam, typical of optical tweezers experiments (Section S1.2). The calculations, made for a 20 nm radius polystyrene bead, show that, both in front of the ENZ (Fig. \ref{Fig_Gauss}a) and Ag (Fig. \ref{Fig_Gauss}b) surfaces, the beam focusing induces a fading of the total force with the distance $h$. The extension of the calculations to beads with larger radius (contour plots of the total optical force in front of the ENZ, Fig. \ref{Fig_Gauss}c, and Ag, Fig. \ref{Fig_Gauss}d, surfaces) shows that the total force increases with increasing bead radius, reaching the range of tens of femtonewtons in front of the ENZ surface and hundreds of femtonewtons in front of the Ag surface. The modulation induced by the gradient force is clearly visible. It is worth noting that when Gaussian beams are used, for a direct comparison, the beam power is reduced with respect to the plane wave case in order to keep the intensity at the beam focus similar to the plane wave intensity. \paragraph{Conclusions.} In conclusion, ENZ-based optomechanics represents a novel way to manipulate and tailor the mechanical effects of light exploiting flat surfaces. We focused our study on the repulsive-attractive optomechanics, in the axial direction, for particles in front of an ENZ surface in realistic conditions and for a wide range of parameters (particle size and shape, ENZ surface structure, etc.). Combining the unique optical properties of ENZ metamaterials with patterning capabilities will also enable further manipulation and control in the transverse direction, towards a full dynamical engineering of ENZ-based optical forces. Various potential applications for future study include particle sorting, due to the strong dependence of ENZ-based optical forces on the size and material composition of particles, biomolecular trapping and sensing, wavelength multiplexing of optical forces, and chiral optical sorting, just to name a few. \section*{Data Availability} Data that support the findings of this study are available from the corresponding authors upon reasonable request. \section*{Acknowledgements} M.G.D., R.S., and O.M.M. acknowledge financial support from the agreement ASI-INAF n.2018-16-HH.0, Project ``SPACE Tweezers''. N.E. acknowledges partial support from the Vannevar Bush Faculty Fellowship program sponsored by the Basic Research Office of the Assistant Secretary of Defense for Research and Engineering, funded by the Office of Naval Research through Grant No. N00014-16-1-2029. G.S. acknowledges financial support from the Ohio Third Frontier Program and the National Science Foundation - DMR Grant No. 1708742. \section*{Competing interests} The authors declare no competing interests. N.E. is a strategic scientific advisor/consultant to Meta Materials, Inc. \newpage \centerline{\huge{Supplementary Information}} \section*{S1 Optical forces in the dipole approximation in front of epsilon-near-zero materials} The dipole approximation (DA) is a simple and fast method to calculate optical forces on a nanoparticle.
It is valid when the size of the particle is very small compared to the wavelength of the field \cite{Chaumet2000,AriasGonzalezJOSAA03,Jones2015}, and due to its simplicity it provides useful results that can be compared with more complex light scattering approaches (T-matrix, DDA) in the limit of small particles \cite{Polimeno2018}. In the DA the total optical force on a particle is usually split into a gradient force $\mathrm{F}_{\rm grad}$ and a scattering force $\mathrm{F}_{\rm scat}$ \cite{Jones2015}: \begin{equation}\label{Fgrad_gen} \vec{\mathrm{{F}}}_{\rm grad}(r,z)=\frac{1}{2}\frac{n_{\rm m}}{c \varepsilon_{\rm m}} \mathrm{Re}(\alpha) \vec{\nabla}I(r,z) \end{equation} \begin{equation}\label{Fscatt_gen} \vec{\mathrm{{F}}}_{\rm scat}(r,z)=\frac{n_{\rm m} \sigma_{\rm ext}}{c}I(r,z) \hat{k} \end{equation} Here, $\hat{k}$ is the wave propagation direction, which for an axially directed plane wave coincides with the axial coordinate $\hat{z}$, $r$ is the radial coordinate, $c$ is the speed of light, $\varepsilon_{\rm m}=\varepsilon_{0} n_{\rm m}^2$ is the medium permittivity, $\varepsilon_{0}$ is the vacuum permittivity, $n_{\rm m}$ is the refractive index of the medium, $I(r,z)$ is the wave intensity, and $\alpha$ is the particle complex polarizability, \begin{equation}\label{alpharad} \alpha=\frac{\alpha_{0}}{1-i\frac{k^3\alpha_0}{6 \pi \varepsilon_{\rm m}}} \end{equation} \noindent where $\alpha_0$ is the polarizability in the static field limit (Clausius-Mossotti), and $\sigma_{\rm ext}$ is the extinction cross-section, related to the particle absorption and scattering \cite{AriasGonzalezJOSAA03,Jones2015}: \begin{equation}\label{sigmaext} \sigma_{\rm ext}=\frac{k}{\varepsilon_{\rm m}}\mathrm{Im}({\alpha})= \sigma_{\rm abs}+\sigma_{\rm scat} \end{equation} \noindent with $k=\frac{2\pi n_{\rm m}}{\lambda}$ the wave number and $\lambda$ the wavelength. $\mathrm{{F}}_{\rm grad}$ drives the particles towards the maximum of the light intensity if the real part of their polarizability is positive; otherwise, the particles are repelled from it. On the contrary, $\mathrm{{F}}_{\rm scat}$ always pushes the particles along the field propagation direction, $\hat{k}$. Another contribution to the total force may come from the spin-curl force \cite{Jones2015}, but only when beams having spatial polarization gradients are used \cite{Marago2013}, which is not the case in this work. Recently, it has been proposed that in front of an $\varepsilon$-near-zero (ENZ) surface a point dipole source is subjected to a near-field repulsive force, reminiscent of the Meissner effect in superconductors \cite{Rodriguez2014}. In the quasistatic approximation the near-field force is \cite{Rodriguez2014}: \begin{equation}\label{Fenz} \mathrm{F}_{\rm enz}(z)\approx - \sigma \frac{9}{512 \pi^4 c} \mathrm{Re}\left( \frac{\varepsilon_{\rm s}-\varepsilon_{\rm m}}{\varepsilon_{\rm s}+\varepsilon_{\rm m}} \right) \left( \frac{\lambda}{n_{\rm m} z} \right)^4 P_{\rm rad} \end{equation} \noindent where $\sigma$ is a prefactor accounting for the orientation of the dipole ($\sigma$=1, horizontal dipole; $\sigma$=2, vertical dipole), $\varepsilon_{\rm s}$ is the complex dielectric permittivity of the surface, $z$ is the height of the dipole above the surface and $P_{\rm rad}$ is the power radiated by the dipole in free space. \subsection*{S1.1 Plane wave illumination} Here, we calculate the total optical force on a finite-size particle in front of an arbitrary reflective surface in the dipole approximation.
In this case, the exciting field $E_{\rm E}$ is the superposition of the incident, $E_I$, and reflected, $E_R$, electromagnetic waves, which produces a standing wave that, in the simplest case of plane waves travelling in the $z$ direction, can be written as: \begin{equation}\label{Iplanewave_one} I(z)=\frac{n_{\rm m} \varepsilon_{0}c}{2} \left| E_{\rm E}(z)\right|^2=\frac{n_{\rm m} \varepsilon_{0}c}{2} \left| \mathrm{E}_{0} e^{-ikz}+ \rho \mathrm{E}_{0} e^{+ikz+i \phi}\right| ^2=I_0+2\rho I_0 \cos (-2kz-\phi)+\rho^2 I_0 \end{equation} \noindent with $I_0=n_{\rm m} \varepsilon_{0}c E_0^2/2$. Note that $z$ is taken positive in the direction of the reflected beam and $\rho$ and $\phi$ are the amplitude and phase, respectively, of the complex reflection coefficient of the surface $r_m=\rho e^{i \phi}$, which is connected to the surface complex refractive index $\tilde{n}= n_{\rm s}+i k_{\rm s}$ by \cite{Born1980_6th_ed} \begin{align} \rho=\sqrt{\frac{(n_{\rm m} - n_{\rm s})^2 + k_{\rm s}^2}{(n_{\rm m} + n_{\rm s})^2 + k_{\rm s}^2}} && \phi=\arctan \left[ \frac{-2n_{\rm m} k_{\rm s}}{n_{\rm m}^2 - n_{\rm s}^2 - k_{\rm s}^2 }\right] \end{align} Thus, the gradient force along the axial direction on a finite-size particle in front of a reflective surface can be written as: \begin{equation}\label{Fgrad_pw} \mathrm{F_{\rm grad}}(z)=\frac{1}{2}\frac{n_{\rm m}}{c \varepsilon_{\rm m}} \mathrm{Re}(\alpha) \frac{d I(z)}{dz} \end{equation} \noindent where $z=h+a$ is the axial coordinate, $h$ is the edge-to-edge distance of the particle from the surface and $a$ is the particle radius. The scattering force is the sum of the opposite contributions due to the incident and reflected plane waves \cite{ZemanekOPTCOMM98b, Born1980_6th_ed, Hansen1968}: \begin{equation}\label{Fscatt_pw} \mathrm{F_{\rm scat}}(z)=\frac{n_{\rm m}}{c}\sigma_{\rm ext}I_0\left( \rho^2 -1 \right) \end{equation} \noindent where $\rho$ is related to the surface reflectance through $\left|r_m\right|^2=\left|\rho e^{i\phi}\right|^2=\rho^2$. Finally, the near-field force on the particle is: \begin{equation}\label{Fenz_pw} \mathrm{F_{\rm enz}}(z)\approx - \sigma \frac{9}{512 \pi^4 c} \mathrm{Re}\left( \frac{\varepsilon_{\rm s}-\varepsilon_{\rm m}}{\varepsilon_{\rm s}+\varepsilon_{\rm m}} \right) \left( \frac{\lambda}{n_{\rm m} z} \right)^4 \sigma_{\rm scat} I(z) \end{equation} \noindent where the radiated power $P_{\rm rad}=\sigma_{\rm scat} I(z)$ is associated with the light scattering process. We can now add Eqs. S\ref{Fgrad_pw}-S\ref{Fenz_pw} to calculate the total optical axial force on different types of particles in front of dielectric, metallic or ENZ surfaces. As the dipole is induced by a linearly polarized wave travelling orthogonally to the surface, a horizontal dipole ($\sigma$=1 in Eq. S\ref{Fenz_pw}) is used. We used an incident light intensity of about 5.6$\cdot 10^8$ W/m$^2$, corresponding to a beam power of 10 mW and a beam waist of approx. 3.5 $\mu$m, both of which can be realized in a typical experimental configuration in our laboratories. Stable equilibrium points for the particle dynamics are found at $z$ values where the total optical force vanishes with a negative slope. For small displacements from these points, the particles are subjected to a restoring force that can be linearized as $F_{z} \approx -\kappa_z z$, with $\kappa_z$ the trap spring constant.
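A minimal numerical sketch of this recipe is given below (Python, using the parameter values quoted above for the 20 nm polystyrene bead in front of the ENZ-like surface; the variable names and the simple zero-crossing search are illustrative choices). Under these assumptions it reproduces the order of magnitude of the trapping distance and of the spring constant $\kappa_{\mathrm{ENZ}}$ quoted in the main text.
\begin{verbatim}
import numpy as np

# DA axial force of Eqs. S(Fgrad_pw), S(Fscatt_pw) and S(Fenz_pw) for a
# polystyrene bead in water, plane wave at 560 nm, ENZ-like surface.
lam, n_m = 560e-9, 1.33
eps0, c = 8.854e-12, 2.998e8
eps_m = eps0 * n_m**2
k = 2 * np.pi * n_m / lam
I0 = 5.6e8                      # incident intensity [W/m^2]
a, eps_p = 20e-9, 2.54          # bead radius and relative permittivity
n_s, k_s = 0.476, 0.511         # ENZ-like surface optical constants

# reflection coefficient r_m = rho*exp(i*phi) of the surface
rho = np.sqrt(((n_m - n_s)**2 + k_s**2) / ((n_m + n_s)**2 + k_s**2))
phi = np.arctan2(-2 * n_m * k_s, n_m**2 - n_s**2 - k_s**2)

# Clausius-Mossotti polarizability with radiative correction
alpha0 = 4 * np.pi * eps_m * a**3 * (eps_p - n_m**2) / (eps_p + 2 * n_m**2)
alpha = alpha0 / (1 - 1j * k**3 * alpha0 / (6 * np.pi * eps_m))
sigma_ext = k / eps_m * alpha.imag
sigma_scat = k**4 * abs(alpha)**2 / (6 * np.pi * eps_m**2)
eps_s = (n_s + 1j * k_s)**2     # relative surface permittivity

def F_total(h):
    z = h + a                   # coordinate of the bead center
    I = I0 * (1 + rho**2 + 2 * rho * np.cos(-2 * k * z - phi))
    dIdz = 4 * rho * k * I0 * np.sin(-2 * k * z - phi)
    F_grad = 0.5 * n_m / (c * eps_m) * alpha.real * dIdz
    F_scat = n_m / c * sigma_ext * I0 * (rho**2 - 1)
    F_enz = -(9 / (512 * np.pi**4 * c)) \
        * np.real((eps_s - n_m**2) / (eps_s + n_m**2)) \
        * (lam / (n_m * z))**4 * sigma_scat * I    # sigma = 1
    return F_grad + F_scat + F_enz

h = np.linspace(1e-9, 400e-9, 4000)
F = F_total(h)
i = np.where((F[:-1] > 0) & (F[1:] <= 0))[0][0]   # zero crossing, F' < 0
kappa = -(F[i + 1] - F[i]) / (h[i + 1] - h[i])
print(f"trap at h = {h[i]*1e9:.1f} nm, kappa = {kappa*1e9:.1f} fN/um")
\end{verbatim}
Replacing $n_{\rm s}$ and $k_{\rm s}$ with the glass or Ag values moves the first stable point outwards, following the shift of the intensity antinodes discussed in the main text.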
We consider four types of model particles: a homogeneous dielectric (polystyrene) spherical bead, a spherical particle with parameters equivalent to an ENZ material, a spherical core-shell particle ($\mathrm{SiO_2}$ core, Ag shell), and an Ag prolate spheroid. The different surfaces have been considered in the calculation by means of their complex refractive index values at 560 nm; in the ENZ case, we have chosen $n_{\rm s}\approx 0.476$ and $k_{\rm s}\approx 0.511$ in order to obtain a real part of the complex permittivity close to zero and an imaginary part close to 0.5. The same values have been used for the complex permittivity of the ENZ particle. \paragraph{Dielectric bead.} We calculated the optical forces at $\lambda$=560 nm in water ($n_m$=1.33) on spherical polystyrene (relative permittivity 2.54) beads having radii $a$=20, 50 and 100 nm. In this case, the Clausius-Mossotti polarizability is \cite{Bohren1998}: \begin{equation}\label{alpha} \alpha_0=4\pi \varepsilon_{\rm m} a^3 \left( \frac{\varepsilon_{\rm p} - \varepsilon_{\rm m}}{\varepsilon_{\rm p}+2\varepsilon_{\rm m}} \right) \end{equation} In Fig. 2a of the main text, the results (short dots) obtained in the DA for the 20 nm dielectric bead as a function of its distance from the different surfaces are compared with those obtained by using more sophisticated approaches (COMSOL, circles, and T-matrix, continuous lines). A very good agreement is clearly observed. In all approaches, the total optical force on small particles is modulated by the sinusoidal term in the gradient force. It is larger (in the fN range) on more reflective surfaces and, going from Ag to ENZ, it changes phase, leading to stable traps at different distances (arrows in Fig. 2). The axial trap spring constants $\kappa_{\mathrm{ENZ}}$ and $\kappa_{\mathrm{Ag}}$ in front of the ENZ and Ag surfaces have been calculated by a linear fit of the total force at the equilibrium points. They are $\kappa_{\mathrm{ENZ}}=15$ fN/$\mu$m and $\kappa_{\mathrm{Ag}}=27$ fN/$\mu$m, which can be compared to the trap spring constant in the axial direction obtained in a standard optical tweezers setup, based on a single Gaussian beam. In this case, \begin{equation}\label{Igaussian} I(x,y,z)=I_0 \frac{w_{0}^{2}}{w(z)^{2}}\mathrm{exp}\left[-2\frac{x^2+y^2}{w(z)^2}\right] \end{equation} \noindent where $w_0$ is the beam waist, $w(z)=w_0\sqrt{1+\frac{(z-z_0)^2}{z_{R}^{2}}}$ is the beam width at $z$, $z_R=\frac{n_{m}\pi w_{0}^{2}}{\lambda}$ is the Rayleigh range, $z_0$ is the position of the beam waist and $I_0=2P/\pi w_{0}^2$ is the on-axis intensity at the waist of a beam having total power $P$. To evaluate the beam waist, we used the Abbe criterion, $w_0=\frac{0.5 \lambda}{NA}$, with NA=1.3 the numerical aperture, as in typical single-beam optical tweezers. Eqs. S\ref{Fgrad_gen} and S\ref{Fscatt_gen} can be used to calculate the axial component of the total force, and the corresponding $\kappa_S$ at the equilibrium point is obtained by a linear fit. We consider as before the particle in water and illuminated at $\lambda$=560 nm. We find, at the same light intensity used in front of the ENZ and Ag surfaces, a two orders of magnitude lower axial spring constant, $\kappa_S=0.24$ fN/$\mu$m. \begin{figure} \includegraphics[width=\textwidth]{FigS1.pdf} \caption{Optical forces on a) 20 nm, b) 50 nm and c) 100 nm radius dielectric beads. The beads are in front of ENZ (blue curve), Ag (magenta curve) or glass (red curve) surfaces.
Note how for smaller particles the dominant contribution to the optical force comes from the gradient force, while for larger particles the greater scattering force shifts downwards the force modulation resulting from the interference between incident and reflected field. In d) and e), 3D plots of the total force are shown as a function of the particle radius and of the distance from the ENZ (d) and Ag (e) surfaces.} \label{Fig_dielbeads_vsradius} \end{figure} In Fig. \ref{Fig_dielbeads_vsradius} the optical forces in the DA on dielectric beads of increasing radius (20, 50 and 100 nm) are shown. The increasing size of the particles corresponds to larger optical forces and different trapping points. However, the 100 nm bead is not trapped in front of the glass surface, whereas it is trapped in front of the ENZ and Ag surfaces, whose higher reflectivity with respect to the glass surface better counteracts the scattering force due to the incoming beam. In Fig. \ref{Fig_bead_all}, a panel summarizing the results of the calculation of the total optical force (a-c) and of the near-field force (d-f) on a 20 nm dielectric bead in water is shown. Three different surfaces are considered: lossless, with medium loss (Im$(\varepsilon_s)$=0.5) and with high loss (Im$(\varepsilon_s)$=0.8). The comparison with the results obtained for a point dipole in vacuum \cite{Rodriguez2014} shows that in this work the presence of a medium (water) broadens the repulsive near-field force region from $-1<\varepsilon_{\rm s}<1$ to $-1.77<\varepsilon_{\rm s}<1.77$; moreover, as already observed \cite{Rodriguez2014}, even for surfaces with high loss there is still a repulsive near-field force. However, the calculation of the total optical force gives values not higher than 1 fN, which is reached only in front of lossless surfaces. \paragraph{ENZ particle.} Optical forces on spherical beads made of ENZ material have been calculated in front of glass, Ag or ENZ surfaces. ENZ beads having radii $a$=20, 50 and 100 nm have been considered. As shown in Fig. \ref{Fig_ENZbeads_vsradius}, the forces are always larger than the ones observed for their dielectric bead counterparts. The larger scattering force of the ENZ particle hinders its trapping in front of the glass surface, for all radii. Moreover, the 100 nm radius ENZ particle cannot be trapped in front of the ENZ surface either (Figure \ref{Fig_ENZbeads_vsradius}c). \begin{figure} \includegraphics[width=\textwidth]{FigS2.pdf} \caption{Contour plots of the total optical force (a-c) and of the near-field force (d-f) on a 20 nm dielectric bead in front of (a,d) lossless, (b,e) medium loss, $\mathrm{Im}(\varepsilon_{\rm s})$=0.5, and (c,f) very high loss, $\mathrm{Im}(\varepsilon_{\rm s})$=0.8, surfaces, as a function of the real part of the surface permittivity and of the particle normalized height $h/\lambda$ above the surface ($\lambda$=560 nm). The maximum optical force is in the fN range.} \label{Fig_bead_all} \end{figure} \begin{figure} \includegraphics[width=\textwidth]{FigS3.pdf} \caption{(a-c) Total force on spherical beads made of ENZ material. The forces are shown for 20 nm radius (a), 50 nm radius (b) and 100 nm radius (c) beads. ENZ (blue curve), Ag (magenta) and glass (red) surfaces are considered for the calculation.
(d,e) 3D plots of the total force as a function of the ENZ particle radius and of the distance from the ENZ (d) and Ag (e) surfaces.} \label{Fig_ENZbeads_vsradius} \end{figure} \paragraph{Core-shell particle.} To enhance the optical force, we used a $\mathrm{SiO_2}$-Ag core-shell particle designed to be resonant at approximately 560 nm and having a total radius $a_{tot}$=20 nm. The calculation of the extinction cross-section (Fig. \ref{Fig_cs}a) shows that the resonance condition is fulfilled if the core-shell structure has a core radius $a_1=16.1$ nm and an Ag shell thickness of 3.9 nm. The particle polarizability is \cite{Bohren1998}: \begin{equation}\label{alpha_cs} \alpha_{cs}=4\pi a_{tot}^3 \varepsilon_{\rm m} \frac{(\varepsilon_{2}-\varepsilon_{\rm m}) (\varepsilon_{1}+2\varepsilon_{2})+f^3 (\varepsilon_{1}-\varepsilon_{2})(\varepsilon_{\rm m}+2\varepsilon_{2})}{(\varepsilon_{2}+2\varepsilon_{\rm m})(\varepsilon_{1}+2\varepsilon_{2})+f^3(2\varepsilon_{2}-2\varepsilon_{\rm m})(\varepsilon_{1}-\varepsilon_{2})} \end{equation} In this equation, $a_{tot}$ is the core-shell total radius, $\varepsilon_{1}$ and $\varepsilon_{2}$ are the core and shell complex permittivities, respectively, and $f=\frac{a_{1}}{a_{tot}}$ is the ratio between the core radius $a_1$ and the total particle radius $a_{tot}$. As shown in Figure \ref{Fig_cs}, the resonance at 560 nm enhances the optical force to the pN range, but only at very short distances from the surfaces, being repulsive in the ENZ case and attractive in the Ag case. Otherwise, the total optical force is in the fN range. More specifically, at the resonance $\mathrm{F_{enz}}$ is in the pN range close to the ENZ surface (from $h$=0 nm to roughly 10 nm). $\mathrm{F_{grad}}$ has an oscillating character, but its amplitude is smaller ($\approx$ 1 fN) than $\mathrm{F_{enz}}$, due to the small real part of the polarizability at resonance, $\mathrm{Re(\alpha)=0.04 \cdot 10^{-32} \ F m^2}$. On the contrary, $\mathrm{F_{scatt}}$ is large (tens of fN), because of the large extinction cross-section at resonance. Thus, at 560 nm (black curve in Fig. \ref{Fig_cs_vs_wl}a), the total force is repulsive and in the pN range close to the surface, but becomes attractive and approximately constant as the $\mathrm{F_{enz}}$ contribution fades off. The behaviour of the forces on the core-shell particle can also be studied for wavelengths smaller and larger than the particle plasmon resonance. The calculation has been made at 552 nm, on the blue side of the plasmon resonance, and at 566 nm, on its red side. At these wavelengths, the scattering force is slightly lower than at resonance, while $\mathrm{F_{grad}}$ increases by at least one order of magnitude. For this reason, its oscillating character can now be better noticed in the total force (Fig. \ref{Fig_cs_vs_wl}a, blue and red curves). Moreover, as the polarizability changes sign from one side of the resonance to the other, the gradient force is also ``out of phase'' going from the blue to the red side of the resonance. Similar considerations hold also for the calculation of the forces in front of the Ag surface (Figure \ref{Fig_cs_vs_wl}b); however, in this case, $\mathrm{F_{enz}}$ is attractive close to the surface. \begin{figure} \includegraphics[width=\textwidth]{FigS4.pdf} \caption{(a) Extinction spectrum of the $\mathrm{SiO_2}$-Ag core-shell particle (total radius $a_{tot}$=20 nm and core radius $a_1$=16.1 nm) in water.
(b,c) Total optical force on the core-shell particle at fixed distance $h$=10 nm from ENZ (b) and Ag (c) surfaces as a function of the $a_1$ to $a_{tot}$ ratio. (d,e) Contour plots of the optical force with respect to the $a_1$ to $a_{tot}$ ratio and the distance $h$ from the surface. The force on the core-shell particle is in the pN range only at short distances from the surfaces, and it is repulsive in front of ENZ (d) while attractive in front of Ag (e).} \label{Fig_cs} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{FigS5.pdf} \end{center} \caption{Total force on a $\mathrm{SiO_2}$-Ag core-shell particle at three different wavelengths: at resonance (560 nm, black curve), at 552 nm (blue-shifted with respect to resonance, blue curve) and at 566 nm (red-shifted with respect to resonance, red curve). The total force is in the pN range close to the surface, due to the $\mathrm{F_{enz}}$ contribution. The sinusoidal behaviour of the gradient force is visible only out of resonance (blue and red curves), while it is negligible at resonance, where, far from the surface, only the scattering force drives the total force. Close to the surface, the total force is repulsive in front of ENZ and attractive in front of Ag.} \label{Fig_cs_vs_wl} \end{figure} \paragraph{Ag prolate spheroid.} We choose an Ag prolate spheroid having long axis $a_1=56.8$ nm and short axes $a_2=a_3=20$ nm. The particle polarizability is \begin{equation}\label{alpha1} \alpha_{i}=\frac{4}{3}\pi a_1 a_2 a_3 \varepsilon_{\rm m} \frac{\varepsilon_{\rm p}-\varepsilon_{\rm m}}{\varepsilon_{\rm m}+L_i (\varepsilon_{\rm p}-\varepsilon_{\rm m})} \end{equation} \noindent In this equation, $\varepsilon_{\rm p}$ is the particle permittivity and $L_i$ is a geometric factor relative to the spheroid axis $a_i$. In the case of a prolate spheroid, $L_1$ is \begin{align}\label{L1} L_1=\frac{1-e^2}{e^2}\left( -1+\frac{1}{2e}\mathrm{ln}\frac{1+e}{1-e} \right) && e^2=1-\frac{a_{2}^{2}}{a_{1}^{2}} \end{align} \noindent and $L_2=L_3=\frac{1}{2}(1-L_1)$. As shown in Figure \ref{Fig_Ag_ell}a, the particle has, in water, a long-axis resonance at 560 nm and a short-axis resonance at 360 nm. For the calculation of the total optical force we considered the case in which the spheroid has the long axis aligned with the wave polarization, so as to use $\alpha_1$ for the polarizability in Eqs. S\ref{Fgrad_pw} and S\ref{sigmaext}, and the short semiaxis as the size parameter in Eq. S\ref{Fenz_pw}. We obtain a further enhancement of the total optical force (tens of pN) which, as in the core-shell structure, is repulsive in front of the ENZ surface and attractive in front of the Ag surface. In Figure \ref{Fig_Ag_ell}b a contour plot of the total optical force, calculated as a function of the surface reflectivity $R$ and phase shift $\phi$, namely in front of all possible surfaces, is shown. We clearly see that the repulsive force can be close to 200 pN in front of an ``ideal" ENZ surface, having the maximum reflectivity and a vanishing phase shift. \begin{figure} \includegraphics[width=\textwidth]{FigS6.pdf} \caption{(a) Extinction spectra of an Ag prolate ellipsoid in water oriented with the long axis parallel to the field (black solid line) and oriented with the short axis parallel to the field (red dashed line). The resonances relative to the long and short axes are indicated. (b) Contour plot of the total optical force on the Ag ellipsoid at $h$=10 nm distance from a surface as a function of the surface reflectivity $R$ and phase shift $\phi$.
In the calculation, the spheroid is aligned with the long axis in the direction of the wave polarization. The ENZ and Ag surfaces used for the calculation of the optical forces in the DA are shown. The optical force is on the order of tens of pN in front of ENZ (repulsive) and Ag (attractive). The force can be close to 200 pN if the spheroid is in front of an ideal ENZ surface, having $R$=1 and $\phi$=0.} \label{Fig_Ag_ell} \end{figure} \subsection*{S1.2 Gaussian beams} In optical tweezers, light beams are tightly focused in order to increase $\mathrm{F_{\rm grad}}$ with respect to $\mathrm{F_{\rm scat}}$. We can introduce this condition in our calculations by using Gaussian beams \cite{ZemanekOPTCOMM98b} instead of plane waves: \begin{equation}\label{Einc} \mathrm{E}_{\rm I}(z,r)=\mathrm{E}_{0}\frac{w_0}{w_i(z)}\mathrm{exp}\left(-\frac{r^2}{w_{i}^2(z)}\right)\mathrm{exp}\left[ -ik(z+z_0)+\frac{i}{2}\frac{kr^2}{R_i}+i\arctan \left(\frac{z+z_0}{z_R}\right)\right] \end{equation} \begin{equation}\label{Erefl} \mathrm{E}_{\rm R}(z,r)=\mathrm{E}_{0}\rho\frac{w_0}{w_r(z)}\mathrm{exp}\left(-\frac{r^2}{w_{r}^2(z)}\right)\mathrm{exp}\left[ +ik(z-z_0)+\frac{i}{2}\frac{kr^2}{R_r}-i\arctan \left(\frac{z-z_0}{z_R}\right)+i\phi \right] \end{equation} Here, $w_0$ is the beam waist, $z_R=\frac{n_{m}\pi w_{0}^{2}}{\lambda}$ is the Rayleigh range, $z_0$ is the position of the beam waist, $R_i$ and $R_r$ are the wavefront curvature radii of the incident and reflected wave, respectively, and $w_i(z)$ and $w_r(z)$ are the beam widths at distance $z$: \begin{align} w_i (z)=w_0\sqrt{1+\frac{(z+z_0)^2}{z_{R}^2}} && w_r(z)=w_0\sqrt{1+\frac{(z-z_0)^2}{z_{R}^2}} \end{align} For the sake of simplicity, we restrict ourselves to the calculation of the optical force along the beam propagation axis. The light intensity distribution $I(z)$ is \cite{ZemanekOPTCOMM98b}: \begin{equation}\label{Intensity} I(z)=I_0 \frac{w_{0}^{2}}{w_{i}^{2}(z)}+2\rho I_0\frac{w_{0}^{2}}{w_{i}(z)w_{r}(z)}\cos (\psi(z))+\rho^2 I_0 \frac{w_{0}^{2}}{w_{r}^{2}(z)} \end{equation} Here, $I_0=2P/\pi w_{0}^2$ is the on-axis intensity at the waist of a beam having total power $P$ and \begin{equation}\label{totalphase} \psi(z)=-2kz + \arctan (\frac{z+z_0}{z_R})+\arctan (\frac{z-z_0}{z_R})-\phi \end{equation} \noindent is the total phase, including the Gouy phase terms and the phase shift of the beam on reflection from the surface. The optical forces are then calculated from Eqs. S\ref{Fgrad_gen}, S\ref{Fscatt_gen} and S\ref{Fenz}. As above, our calculations consider a particle in water ($n_m$=1.33) under illumination at $\lambda$=560 nm; moreover, to evaluate the beam waist, we use the Abbe criterion $w_0=0.5\frac{\lambda}{NA}$, where the numerical aperture (NA) of the beam is NA=1.3, as in typical optical trapping experiments. The comparison between the results obtained with the plane wave and with the Gaussian beam on a small dielectric bead (radius 20 nm) is shown in Fig. \ref{Fig_Gauss}. It is worth noting that when Gaussian beams are used, the beam power is reduced with respect to the plane wave case in order to keep the intensity at the beam focus fixed. \begin{figure} \includegraphics[width=\textwidth]{FigS7.pdf} \caption{(a,b) Total optical force on a 20 nm dielectric bead under plane wave (blue curves) and focused Gaussian beam (red curves) illumination, in front of ENZ (a) and Ag (b) surfaces, as a function of the distance $h$ from the surface. The focusing induces a fading of the total force with $h$.
Note that the Gaussian beam power is reduced with respect to the plane wave case in order to keep the intensity at the beam focus fixed. (c,d) Contour plots of the total optical force on a dielectric bead under focused Gaussian beam illumination in front of ENZ (c) and Ag (d) surfaces, as a function of the distance $h$ from the surface and of the bead radius. The modulation due to the sinusoidal term in the gradient force is clearly visible. The total force increases with increasing bead radius, reaching the range of tens of fN in front of the ENZ surface and hundreds of fN in front of the Ag surface.} \label{Fig_Gauss} \end{figure} \section*{S2 Electromagnetic scattering theory and T-matrix formalism in front of epsilon-near-zero materials} We use two different modeling approaches based on the T-matrix formalism and on finite-element methods (COMSOL), respectively. In particular, treating the electromagnetic scattering from particles near to or deposited on a plane surface that separates two homogeneous media of different optical properties in the T-matrix formalism \cite{Borghese2007book,JOSA95,JOSA99,AO99} can account for the role of the different multipoles in the particle-surface interaction. Indeed, the presence of the surface can have a striking effect on the scattering pattern from the particles, since the exciting field does not coincide with the incident plane wave and the observed field does not coincide with the field scattered by the particle. The field that illuminates the particles is partly or totally reflected by the surface and the reflected field contributes both to the exciting and to the observed field. Moreover, the field scattered by the particles is reflected by the interface and thus contributes to the exciting field. In other words, there are multiple scattering processes between the particles and the interface. As a result, the field in the accessible half-space includes the incident field $\mathbf{E}_{\rm I}$, the reflected field $\mathbf{E}_{\rm R}$, the scattered field $\mathbf{E}_{\rm S}$ and, finally, the field $\mathbf{E}_{\rm SR}$ that after scattering by the particles is reflected by the surface. The mathematical difficulties that are met in calculating the scattering pattern are due to the requirement that the field in the accessible half-space satisfy the boundary conditions both across the (closed) surface of the particles and across the (infinite) interface. In other words, even assuming that we are able to impose the boundary conditions across the surface of the particle, the problem still remains of imposing the boundary conditions across the interface \cite{JOSA95,JOSA99,AO99}. It is possible to define the transition matrix for particles in the presence of the interface, which is the starting point to calculate optical forces and torques either by direct integration of the Maxwell stress tensor or by exploiting the general expressions of optical force and torque in terms of multipole expansion \cite{Saija2005,Borghese2006,Borghese2007}. \paragraph{Incident and Reflected Fields.} The reflection of a plane wave on a plane surface can be dealt with in general terms, \textit{i.e.}, without specifying whether the medium that fills the non-accessible half-space is a dielectric or a metal. This information can, indeed, be supplied at the end of the algebraic manipulations.
Let us thus assume that the interface is the plane $z=0$ of a Cartesian frame of reference and that the half-space $z>0$, which we take as the accessible half-space, is filled by a homogeneous medium of (real) refractive index $n_m$. The half-space $z<0$ is assumed to be filled by a homogeneous medium with (possibly complex) refractive index $\tilde{n}$. Figure \ref{Fig_geom} shows the adopted geometry. The plane wave field \begin{equation} \mathbf{E}_{\rm I}=E_{0}\hat{\mathbf{e}}_{\rm I} \exp(i \mathbf{k}_{\rm I}\cdot \mathbf{r})\;, \end{equation} which propagates within the accessible half-space, is reflected by the interface into the plane wave \begin{equation} \mathbf{E}_{\rm R}=E'_{0}\mathbf{\hat{e}}_{\rm R}\exp(i \mathbf{ k}_{\rm R} \cdot \mathbf{ r})\;, \end{equation} where $\mathbf{ k}_{\rm I}=k'\mathbf{\hat{k}}_{\rm I}$ and $\mathbf{ k}_{\rm R}=k'\mathbf{\hat{k}}_{\rm R}$ are the propagation vectors of the incident and of the reflected wave, respectively, $k'=n_m k$ and $\mathbf{\hat{e}}_{\rm I}$ and $\mathbf{\hat{e}}_{\rm R}$ are the respective unit polarization vectors. The polarization is analyzed with respect to the two pairs of unit vectors $\mathbf{\hat{u}}_{\rm I\eta}$ and $\mathbf{\hat{u}}_{\rm R\eta}$ that are parallel ($\eta=1$) and perpendicular ($\eta=2$) to the plane of incidence that, as usual, is defined as the plane that contains $\mathbf{ k}_{\rm I}$, $\mathbf{ k}_{\rm R}$ and the $z$ axis. Our choice of the orientation is defined by the equations \begin{equation} \mathbf{\hat{u}}_{\rm I1}\times\mathbf{\hat{u}}_{\rm I2}=\mathbf{\hat{k}}_{\rm I}\;,\qquad \mathbf{\hat{u}}_{\rm R1}\times\mathbf{\hat{u}}_{\rm R2}=\mathbf{\hat{k}}_{\rm R}\;, \end{equation} with $\mathbf{\hat{u}}_{\rm I2}\equiv\mathbf{\hat{u}}_{\rm R2}$. In terms of the projections on the polarization basis, the incident and the reflected field can be written \begin{equation} \mathbf{E}_{\rm I}=E_{0}\sum_{\eta}(\mathbf{\hat{e}}_{\rm I}\cdot \mathbf{\hat{u}}_{\rm I\eta})\mathbf{\hat{u}}_{\rm I\eta}\exp(i \mathbf{ k}_{\rm I} \cdot \mathbf{ r})\;, \end{equation} and \begin{equation} \mathbf{E}_{\rm R}=E'_{0}\sum_{\eta}(\mathbf{\hat{e}}_{\rm R}\cdot \mathbf{\hat{u}}_{\rm R\eta})\mathbf{\hat{u}}_{\rm R\eta}\exp(i \mathbf{ k}_{\rm R} \cdot \mathbf{ r})\;. \end{equation} In the preceding equations the incident field $\mathbf{E}_{\rm I}$ and the reflected field $\mathbf{E}_{\rm R}$ are decomposed into their components parallel and orthogonal to the plane of incidence and can be referred to each other by means of the Fresnel coefficients $F_{\eta}$ for the reflection of a plane wave with polarization along $\mathbf{\hat{u}}_{\eta}$. 
Requiring the continuity of the normal and tangential components of the fields, the reflection condition \cite{Jackson_new} yields the equation \begin{equation}\label{6.1} E'_{0}(\mathbf{\hat{e}}_{\rm R}\cdot\mathbf{\hat{u}}_{\rm R\eta})=E_{0}F_{\eta}(\vartheta_{\rm I})(\mathbf{\hat{e}}_{\rm I}\cdot\mathbf{\hat{u}}_{\rm I\eta})\;, \end{equation} where the Fresnel coefficients are defined as \begin{equation} F_{1}(\vartheta_{\rm I})=\frac{\bar{n}^{2}\cos\vartheta_{\rm I}-\bigl[(\bar{n}^{2}-1)+\cos^{2}\vartheta_{\rm I}\bigr]^{1/2}} {\bar{n}^{2}\cos\vartheta_{\rm I}+\bigl[(\bar{n}^{2}-1)+\cos^{2}\vartheta_{\rm I}\bigr]^{1/2}}\;, \quad F_{2}(\vartheta_{\rm I})= \frac{\cos\vartheta_{\rm I}-\bigl[(\bar{n}^{2}-1)+\cos^{2}\vartheta_{\rm I}\bigr]^{1/2}}{\cos\vartheta_{\rm I}+\bigl[(\bar{n}^{2}-1)+\cos^{2}\vartheta_{\rm I}\bigr]^{1/2}}\;, \end{equation} in which $\vartheta_{\rm I}$ is the angle between $\mathbf{\hat{k}}_{\rm I}$ and the $z$ axis and $\bar{n}=\tilde{n}/n_m$. The reflected wave can be rewritten as \begin{equation} \mathbf{E}_{\rm R}=E_{0}\sum_{\eta}F_{\eta}(\vartheta_{\rm I}) (\mathbf{\hat{e}}_{\rm I}\cdot\mathbf{\hat{u}}_{\rm I\eta})\mathbf{\hat{u}}_{\rm R\eta}\exp(i \mathbf{ k}_{\rm R}\cdot \mathbf{ r})\;. \end{equation} The incident and the reflected field, solutions of the Helmholtz equation in the accessible free space, can be expanded in a series of spherical vector multipole fields centered on a suitable common origin, $O$. To ensure the regularity of the fields at the origin, we choose J-multipole fields defined in terms of the spherical Bessel functions $j_{l}(k'r)$ \cite{Jackson_new,Borghese2007book}. The result is \begin{align*} \mathbf{E}_{\rm I}=&\sum_{\eta}E_{0\eta}\sum_{plm}\mathbf{ J}^{(p)}_{lm}(\mathbf{ r},k')W^{(p)}_{{\rm I}\eta lm}\;,\\ \mathbf{E}_{\rm R}=&\sum_{\eta}E_{0\eta}F_{\eta}(\vartheta_{\rm I}) \sum_{plm}\mathbf{ J}^{(p)}_{lm}(\mathbf{ r},k')W^{(p)}_{{\rm R}\eta lm}\;, \end{align*} where the incident and reflected amplitudes are, respectively, \begin{equation}\label{6.2'} W^{(p)}_{{\rm I}\eta lm}= W^{(p)}_{lm}(\mathbf{\hat{u}}_{\rm I\eta},\mathbf{\hat{k}}_{\rm I})\;, \end{equation} \begin{equation}\label{6.2''} W^{(p)}_{{\rm R}\eta lm}= W^{(p)}_{lm}(\mathbf{\hat{u}}_{\rm R\eta},\mathbf{\hat{k}}_{\rm R})\;. \end{equation} Because of the reflection condition due to the presence of the surface, the incident and reflected amplitudes are not mutually independent. In fact, as the polar angles of $\mathbf{\hat{u}}_{\rm R1}$ and $\mathbf{\hat{u}}_{\rm R2}$ are \begin{equation} \vartheta_{\rm R1}=\vartheta_{\rm I}+\frac{\pi}{2},\quad\varphi_{\rm R1}=\varphi_{\rm I}+\pi\;,\quad\text{and}\quad \vartheta_{\rm R2}=\frac{\pi}{2},\quad\varphi_{\rm R2}=\varphi_{\rm I}+\frac{\pi}{2}\;, \end{equation} we get \begin{equation}\label{6.3} W^{(p)}_{{\rm R}\eta lm}=(-)^{\eta+p+l+m}W^{(p)}_{{\rm I}\eta lm}\;. \end{equation} In this way the amplitudes of the reflected field never need to be considered explicitly, and we can conveniently define the exciting field as the superposition of the incident and reflected fields \begin{equation}\label{6.4} \mathbf{E}_{\rm E}=\mathbf{E}_{\rm I}+\mathbf{E}_{\rm R}\;.
\end{equation} As a consequence, the multipole expansion of $\mathbf{E}_{\rm E}$ can be written in the more compact form \begin{equation}\label{6.5} \mathbf{E}_{\rm E\eta}=E_{0}\sum_{plm}\mathbf{J}^{(p)}_{lm}(\mathbf{r},k') W^{(p)}_{{\rm E}\eta lm} \end{equation} with \begin{equation}\label{6.6} W^{(p)}_{{\rm E}\eta lm}=[1+F_{\eta}(\vartheta_{\rm I})(-)^{\eta+p+l+m}] W^{(p)}_{{\rm I}\eta lm}\;. \end{equation} \paragraph{Scattering from a Sphere on a Plane Surface.} We assume that a spherical scatterer lies entirely within the accessible half-space and is illuminated by a plane wave. Outside the scatterer the total field is \begin{equation}\label{6.27} \mathbf{ E}_{\rm Ext}=\mathbf{ E}_{\rm E}+\mathbf{ E}_{\rm S}+\mathbf{ E}_{\rm SR}\;, \end{equation} where $\mathbf{ E}_{\rm E}=\mathbf{ E}_{\rm I}+\mathbf{E}_{\rm R}$ is the same as we would have if no particle were present. $\mathbf{E}_{\rm S}$, the field scattered by the sphere, and $\mathbf{E}_{\rm SR}$, the field that after scattering by the particle is reflected by the surface, are related to each other by the reflection condition. Their superposition represents the observed scattered field, which we indicate with $\mathbf{E}_{\rm Obs}$. \begin{figure} \centerline{\includegraphics[width=0.7\textwidth]{FigS8.pdf}} \caption{Geometry adopted for electromagnetic scattering from a sphere in the vicinity of a surface.}\label{Fig_geom} \end{figure} The field that is scattered by a sphere that lies entirely in the accessible half-space can be expanded in a series of vector H-multipole fields that satisfy the radiation condition at infinity. These multipole fields are defined in terms of the spherical Hankel functions $h_{l}(k'r)$ \cite{Borghese2007book}. Choosing for the scattered field the origin $O'$ within the particle, we obtain \begin{equation}\label{6.28} \mathbf{E}_{\rm S\eta}=E_{0\eta}\sum_{plm}\mathbf{ H}^{(p)}_{S,lm}(\mathbf{ r}',k')\mathcal{A}^{(p)}_{\eta lm}\;, \end{equation} where the unknown amplitudes $\mathcal{A}$ can be determined by applying the boundary conditions at the particle's surface. The asymptotic expression of the multipole fields entering $\mathbf{E}_{\rm S\eta}$ can be written as \begin{equation}\label{6.28b} \mathbf{H}^{(p)}_{{\rm F}lm}=-\frac{\rm i}{4\pi k'}\frac{{\rm e}^{{\rm i} k'r'}}{r}\sum_{\eta'}\mathbf{\hat{u}}_{{\rm S}\eta'}W^{(p)\ast}_{{\rm S}\eta'lm}\;. \end{equation} These are the multipole fields that enter in the definition of the scattering amplitude of the system. The scattered field $\mathbf{E}_{\rm S\eta}$ impinges on the plane surface and, by reflection, yields a reflected-scattered field in the vicinity of the surface of the particle. Thanks to the reflection rule of $\mathbf{H}$-vector multipole fields \cite{fucile1997general}, which shows that the reflected fields are given by a superposition of J-multipole vector fields with origin at $O'$, we get \begin{equation}\label{6.29} \mathbf{ E}_{\rm SR\eta}=E_{0\eta}\sum_{plm}\sum_{p'l'} \mathbf{ J}^{(p)}_{lm}(\mathbf{ r}',k') \mathcal{F}^{(pp')}_{ll';m}\mathcal{A}^{(p')}_{\eta l'm}\;. \end{equation} The quantities $\mathcal{F}^{(pp')}_{ll';m}$ can be understood as the elements of a matrix $\mathcal{F}$, diagonal in $m$, that effects the reflection of the H-multipole fields on the plane interface, giving the formal solution to the problem.
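For concreteness, a short numerical sketch of the Fresnel factors defined above is given below (Python; the silver index is an illustrative placeholder and the principal branch of the complex square root is assumed). At normal incidence $F_{2}$ reduces to $(n_{\rm m}-\tilde{n})/(n_{\rm m}+\tilde{n})$, i.e. the reflection coefficient $r_m=\rho e^{i\phi}$ used in Section S1.1.
\begin{verbatim}
import numpy as np

# Fresnel factors F1 (parallel) and F2 (perpendicular) for a complex
# relative index nbar = n_tilde / n_m, as defined in the text.
def fresnel(nbar, theta_I):
    ct = np.cos(theta_I)
    root = np.sqrt((nbar**2 - 1) + ct**2)  # principal branch assumed
    F1 = (nbar**2 * ct - root) / (nbar**2 * ct + root)
    F2 = (ct - root) / (ct + root)
    return F1, F2

# Example: Ag seen from water at 560 nm (illustrative index values);
# at normal incidence F1 = -F2 by the polarization sign convention.
nbar = (0.12 + 3.45j) / 1.33
F1, F2 = fresnel(nbar, 0.0)
print(f"R = {abs(F2)**2:.3f}, phi = {np.angle(F2):.3f} rad")
\end{verbatim}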
Assuming that the scattering particle is a homogeneous sphere with (possibly complex) refractive index $n_{\rm p}$ and radius $a$, the field regular at $O'$ within the sphere can also be expanded in the form \begin{equation} \mathbf{ E}_{\rm T\eta}=E_{0\eta}\sum_{plm} \mathbf{ J}^{(p)}_{lm}(\mathbf{ r}',k_{\rm p})\mathcal{C}^{(p)}_{\eta lm}\;. \end{equation} The boundary conditions at the surface of the sphere between the external total field, $\mathbf{E}_{\rm E}+\mathbf{ E}_{\rm S}+\mathbf{ E}_{\rm SR}$, and the field within the scatterer, $\mathbf{ E}_{\rm T}$, can be applied provided that the exciting field $\mathbf{ E}_{\rm E}$ is referred to the center of the sphere, $O'$. This can be done by resorting to the appropriate phase factors, $\exp(i \mathbf{ k}_{\rm I}\cdot \mathbf{ R}')$ and $\exp(i \mathbf{ k}_{\rm R}\cdot \mathbf{ R}')$. For each $p$, $l$, and $m$, we obtain four equations, from which the amplitudes $\mathcal{C}$ of the internal field can be easily eliminated. As a result, we get, for each $m$, a system of linear nonhomogeneous equations for the amplitudes $\mathcal{A}_{\eta lm}^{(p)}$, namely \begin{equation}\label{6.30} \sum_{p'l'}\mathcal{M}^{(pp')}_{ll';m}\mathcal{A}_{\eta l'm}^{(p')}= -\mathcal{W}^{(p)}_{\eta lm}\;, \end{equation} where \begin{equation}\label{6.31} \mathcal{M}^{(pp')}_{ll';m}=\bigl({R}^{(p)}_{l}\bigr)^{-1} \delta_{pp'}\delta_{ll'}+\mathcal{F}^{(pp')}_{ll';m}\;, \end{equation} and \begin{equation}\label{6.32} \mathcal{W}^{(p)}_{\eta lm}=\exp(i \mathbf{ k}_{\rm I}\cdot \mathbf{ R}') W^{(p)}_{{\rm I}\eta lm} +\exp(i \mathbf{ k}_{\rm R}\cdot \mathbf{ R}')F_{\eta} W^{(p)}_{{\rm R}\eta lm}\;. \end{equation} The quantities ${R}^{(1)}_l$ and ${R}^{(2)}_l$ coincide with the Mie coefficients $b_l$ and $a_l$, respectively, for a homogeneous sphere of refractive index $n_{\rm p}$ embedded in a homogeneous medium of refractive index $n_{\rm m}$. We remark that our theory can easily deal also with spheres sustaining longitudinal waves (plasmonic particles) or with radially nonhomogeneous spheres \cite{Borghese2007book}. Once the amplitudes $\mathcal{A}^{(p)}_{\eta lm}$ of $\mathbf{ E}_{\rm S\eta}$ have been calculated by solving (\ref{6.30}), the reflected-scattered field $\mathbf{ E}_{\rm SR\eta}$ is also determined through (\ref{6.29}). A brief comment on the expression of the reflected-scattered field is in order. $\mathbf{E}_{\rm SR\eta}$ is valid only in the vicinity of the surface of the sphere, as it includes multipole fields that do not satisfy the radiation condition at infinity. For this reason, to get the reflected-scattered field that would be observed by an optical instrument in the far zone, it is necessary to cast $\mathbf{E}_{\rm SR\eta}$ in its asymptotic form. At any point of the accessible half-space, $\mathbf{ E}_{\rm FSR\eta}$ is given by the equation \cite{wriedt1998light} \begin{equation}\label{6.33} \mathbf{ E}_{\rm FSR\eta}=E_{0\eta}\sum_{plm}\mathbf{ H}^{(p)}_{FR,lm} \mathcal{A}^{(p)}_{\eta lm}, \end{equation} where \begin{equation}\label{6.34} \mathbf{H}^{(p)}_{{\rm FR},lm}=-\frac{{\rm i}}{4\pi k'}\frac{{\rm e}^{{\rm i} k'r''}}{r}\sum_{\eta'}\mathbf{\hat{u}}_{{\rm S}\eta'}W^{(p)\ast}_{{\rm S}\eta'lm}(-)^{\eta'+p+l+m}F_{\eta'}(\pi-\vartheta_{\rm S})\;. \end{equation} For a sphere on or near the surface, these $\mathbf{ H}$-vector multipole fields with origin at $O''$ can be considered as the mirror image of the source of the original $\mathbf{ H}$ fields.
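As a self-contained illustration of the single-sphere input ${R}^{(p)}_{l}$, the sketch below evaluates the Mie coefficients $a_l$ and $b_l$ in their standard Riccati-Bessel form (restricted here to a non-absorbing sphere, so that all Bessel-function arguments stay real; absorbing particles require complex-argument routines).
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn, spherical_yn

# Mie coefficients a_l, b_l for a homogeneous, non-absorbing sphere.
def mie_ab(l, n_p, n_m, a, lam):
    x = 2 * np.pi * n_m * a / lam   # size parameter in the medium
    m = n_p / n_m                   # relative refractive index
    mx = m * x
    # Riccati-Bessel functions: psi(z) = z*j_l(z), xi(z) = z*h_l^(1)(z)
    psi  = lambda z: z * spherical_jn(l, z)
    dpsi = lambda z: spherical_jn(l, z) \
                     + z * spherical_jn(l, z, derivative=True)
    h    = lambda z: spherical_jn(l, z) + 1j * spherical_yn(l, z)
    dh   = lambda z: spherical_jn(l, z, derivative=True) \
                     + 1j * spherical_yn(l, z, derivative=True)
    xi   = lambda z: z * h(z)
    dxi  = lambda z: h(z) + z * dh(z)
    a_l = (m * psi(mx) * dpsi(x) - psi(x) * dpsi(mx)) \
        / (m * psi(mx) * dxi(x) - xi(x) * dpsi(mx))
    b_l = (psi(mx) * dpsi(x) - m * psi(x) * dpsi(mx)) \
        / (psi(mx) * dxi(x) - m * xi(x) * dpsi(mx))
    return a_l, b_l

# polystyrene sphere (n_p = 1.59) in water at 560 nm, radius 220 nm
for l in (1, 2, 3):
    print(l, mie_ab(l, 1.59, 1.33, 220e-9, 560e-9))
\end{verbatim}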
From the superposition of the scattered and reflected-scattered fields, all referred to a common origin, eqs.~(\ref{6.28})--(\ref{6.28b}) and (\ref{6.33})--(\ref{6.34}), we get the field \begin{align}\label{6.47} \mathbf{ E}_{\rm Obs\,\eta}=E_{0\eta}\sum_{plm}\mathbf{ H}^{(p)}_{{\rm {Obs}},lm}(\mathbf{ r}',k') \mathcal{A}^{(p)}_{\eta lm}\;, \end{align} with \begin{equation}\label{6.48} \mathbf{H}^{(p)}_{{\rm {Obs}},lm}=-\frac{{\rm i}}{4\pi k'}\frac{{\rm e}^{{\rm i} k'r'}}{r'}\sum_{\eta'}\mathbf{\hat{u}}_{{\rm S}\eta'}W^{(p)\ast}_{{\rm S}\eta'lm}[1+(-)^{\eta'+p+l+m}F_{\eta'}(\pi-\vartheta_{\rm S})]\;. \end{equation} Eqs.~(\ref{6.47})--(\ref{6.48}) lead us to the definition and derivation of the transition matrix for a scatterer in the presence of a plane interface\cite{JOSA95,JOSA99,AO99}. The advantages yielded by the use of the transition matrix become evident when one has to deal with the problem of a random dispersion of nonspherical particles deposited on a plane surface. Moreover, the amplitudes of the observed field are the key quantities for calculating the radiation force, which we discuss below. \subsection*{S2.1 Optical force in front of a substrate} In this section we briefly recall our approach to determine the radiation force exerted by a plane wave, with a definite polarization, on a scatterer (of any shape and composition) placed in a homogeneous medium of (real) refractive index $n_{\rm m}$. We refer to the geometry sketched in Fig. \ref{Fig_geom}, in which $\Sigma$ is the customary laboratory frame and $\Sigma'$ is a frame of reference whose axes are parallel to the axes of $\Sigma$ and whose origin $O'$ lies within the particle. The vector position of $O'$ with respect to $\Sigma$ is $\mathbf{R}_{O'}$. The conservation laws applied to the electromagnetic scattering problem lead to the optical force acting on the particle\cite{Jackson_new, Borghese2007book, Jones2015}: \begin{equation}\label{1} \mathbf{F}_{\rm Rad}=r^{\prime2}\int_{\Omega'}\mathbf{\hat{r}}^{\prime}\cdot\langle \mathrm{T}_{\rm M} \rangle\,{\rm d}\Omega'\;, \end{equation} where the integration is over the full solid angle, $r'$ is the radius of a sphere with center at $\mathbf{R}_{O'}$ surrounding the particle, and $\langle \mathrm{T}_{\mathrm M} \rangle$, the time-averaged Maxwell stress tensor (MST), describes the mechanical interaction of light with matter. The general expression of the MST in a medium in the Minkowski form\cite{Jackson_new, Borghese2007book, Jones2015} is \begin{equation} \mathrm{T}_{\rm M} = \mathbf{E}'\otimes\mathbf{D}' + \mathbf{H}'\otimes \mathbf{B}' - \frac{1}{2}\left(\mathbf{E}'\cdot\mathbf{D}' + \mathbf{H}'\cdot\mathbf{B}' \right) \mathrm{I} \; , \end{equation} where $\mathbf{E}'$ is the electric field, $\mathbf{D}'$ is the electric displacement, $\mathbf{H}'$ is the magnetic field, and $\mathbf{B}'$ is the magnetic induction, evaluated in the frame $\Sigma'$ as indicated by the prime; $\otimes$ represents the dyadic product, and $\mathrm{I}$ is the dyadic unit. We assume that all the fields are harmonic, propagating in a homogeneous, linear, and non-dispersive medium, and depend on time through the factor ${\rm e}^{-i \omega t}$, which is omitted.
So, we can simplify the expression for the MST by using the complex amplitudes of the fields, $\textbf{E}' = \textbf{E}'(\textbf{r})$ and $\textbf{B}'=\textbf{B}'(\textbf{r})$, as\cite{Mishchenko2001, Saija2005,Jones2015}: \begin{equation}\label{eq:Maxwell_stress_tensor} \langle \mathrm{T}_{\mathrm M}\rangle = \frac{\varepsilon_\mathrm{m}}{2}{\rm Re} \left[ \textbf{E}'\otimes\textbf{E}'^{\ast} + \frac{c^2}{n_\mathrm{m}^2}\textbf{B}'\otimes\textbf{B}'^{\ast} - \frac{1}{2}\left( |\textbf{E}'|^2 + \frac{c^2}{n_\mathrm{m}^2}|\textbf{B}'|^2 \right) \mathrm{I} \right] , \end{equation} where the fields are the superposition of the incident and of the scattered field. In the presence of a plane surface that separates two homogeneous media with different refractive indices, the role of the incident field is played by the exciting field $\mathbf{E}_{\rm E}=\mathbf{E}_{\rm I}+\mathbf{E}_{\rm R}$, while the superposition of $\mathbf{ E}_{\rm S}$ and $\mathbf{ E}_{\rm SR}$ plays the role of the observed field due to the presence of the particle. It is possible to simplify\cite{Borghese2007book} equation \eqref{1}, since the dyadic products in the expression of $\langle\mathrm{T}_{\rm M}\rangle$ give a vanishing contribution to the radiation force \cite{Mishchenko2001, Saija2005}. For this reason, the component of the radiation force along the direction characterized by the unit vector $\mathbf{\hat{v}}_{\zeta}$ turns out to be \begin{equation}\label{3} F_{\rm Rad\,\zeta}=-\frac{1}{4} \varepsilon_{\rm m} r^{\prime2} {\rm Re} \int_{\Omega'}(\mathbf{\hat{r}}'\cdot\mathbf{\hat{v}}_{\zeta})\bigl[(|\mathbf{E}'_{\rm Obs}|^2+2\mathbf{E}^{\prime\ast}_{\rm E}\cdot\mathbf{E}'_{\rm Obs})+\frac{c^2}{n_{\rm m}^2}(|\mathbf{B}'_{\rm Obs}|^2+2\mathbf{B}^{\prime\ast}_{\rm E}\cdot\mathbf{B}'_{\rm Obs})\bigr]\,{\rm d}\Omega'\;, \end{equation} where $\mathbf{E}'_{\rm Obs}$ and $\mathbf{B}'_{\rm Obs}$ are the superposition of the fields scattered by the particle and the reflected-scattered fields. Obviously, since the exciting field is a plane wave, the integral \eqref{3} gets no contribution from the terms $\mathbf{E}'_{\rm E}\cdot\mathbf{E}^{\prime\ast}_{\rm E}$ and $\mathbf{B}'_{\rm E}\cdot\mathbf{B}^{\prime\ast}_{\rm E}$, which, accordingly, have been omitted. At this stage, using the orthogonality properties of the vector spherical harmonics in terms of which we expand the fields, see eqs. \eqref{6.5}-\eqref{6.6} and eqs.~\eqref{6.28}-\eqref{6.29}, we obtain the Borghese equations for the optical force components\cite{Borghese2007}: \begin{align}\label{4} F_{\rm Rad\,\zeta}=-F^{\rm(Sca)}_{\rm Rad\,\zeta}+F^{\rm(Ext)}_{\rm Rad\,\zeta}\;, \end{align} where \begin{subequations}\label{6ab} \begin{align} F^{\rm(Sca)}_{\rm Rad\,\zeta}&=\frac{\varepsilon_{\rm m}|E_0|^2}{2 k'^2}{\rm Re}\sum_{plm}\sum_{p'l'm'}\mathcal{A}^{(p)\ast}_{lm}\mathcal{A}^{(p')}_{l'm'}i^{l-l'}I^{(pp')}_{\zeta\,lml'm'}\;,\label{6a}\\ F^{\rm(Ext)}_{\rm Rad\,\zeta}&=-\frac{\varepsilon_{\rm m}|E_0|^2}{2 k'^2}{\rm Re}\sum_{plm}\sum_{p'l'm'}W^{(p)\ast}_{{\rm E}\,lm}\mathcal{A}^{(p')}_{l'm'}i^{l-l'}I^{(pp')}_{\zeta\,lml'm'}\;,\label{6b} \end{align} \end{subequations} and the matrix elements \begin{align} I^{(pp')}_{\zeta\,lml'm'}=\frac{4\pi}{3}\sum_{\mu}Y^{\ast}_{1\mu}(\mathbf{\hat{v}}_{\zeta})\frac{{\rm i}^{l'-l}}{16\pi^2}\sum_{\eta'}\int Y_{1\mu}(\mathbf{\hat{k}}_{\rm S})W^{(p)}_{{\rm S}{\eta'} lm}W^{(p')\ast}_{{\rm S}{\eta'} l'm'}\,d\Omega_{\rm S}\; \end{align} can be evaluated analytically.
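Once the amplitudes and the integrals $I^{(pp')}_{\zeta\,lml'm'}$ are available as arrays, eqs.~(\ref{4})--(\ref{6b}) reduce to plain multipole sums. A schematic Python sketch follows (names and shapes are illustrative; in an actual code the $(l,m)$ indexing is ragged, with $m=-l,\dots,l$):
\begin{verbatim}
import numpy as np

def force_component(A, W_E, I_zeta, eps_m, E0, kp):
    """Eq. (4) with eqs. (6a)-(6b) for a single direction v_zeta.
    A, W_E : (2, lmax, nm) complex arrays indexed by (p, l, m)
    I_zeta : (2, lmax, nm, 2, lmax, nm) precomputed integrals
    eps_m, E0, kp : medium permittivity, field amplitude, wavenumber"""
    lv = np.arange(1, A.shape[1] + 1)
    phase = 1j ** (lv[:, None] - lv[None, :])        # i^(l - l')
    pref = eps_m * abs(E0)**2 / (2 * kp**2)
    F_sca = pref * np.real(
        np.einsum('plm,qrs,lr,plmqrs->', A.conj(), A, phase, I_zeta))
    F_ext = -pref * np.real(
        np.einsum('plm,qrs,lr,plmqrs->', W_E.conj(), A, phase, I_zeta))
    return -F_sca + F_ext                            # eq. (4)
\end{verbatim}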
We notice that $F^{\rm(Sca)}_{\rm Rad\,\zeta}$ depends on the amplitudes $\mathcal{A}^{(p)}_{lm}$ of the scattered field only, whereas $F^{\rm(Ext)}_{\rm Rad\,\zeta}$ depends jointly on the amplitudes of the scattered field $\mathcal{A}^{(p)}_{lm}$ and on those of the exciting field $W^{(p)}_{{\rm E}\,lm}$. This dependence is analogous to that of the scattering cross section and of the extinction cross section, respectively, so that $F^{\rm(Sca)}_{\rm Rad\,\zeta}$ can be somewhat related to the scattering properties of the particle, whereas $F^{\rm(Ext)}_{\rm Rad\,\zeta}$ can be related to its extinction. Similar considerations hold true also for the radiation torque \cite{Borghese2006, marston1984radiation}. \begin{figure} \includegraphics[width=\textwidth]{FigS9.pdf} \caption{Numerical computation of the optical force: (a) Simulation region and geometry of the problem. A polystyrene particle is placed above a substrate; the distance $h$ is measured from the bottom of the particle to the substrate. The particle is surrounded by water and the incoming plane wave impinges from the top, propagating in the $z$ direction. (b) Calculation of the optical force by integration of the Maxwell stress tensor on the surface of a cylindrical volume surrounding the particle. The Maxwell stress tensor is built from the total electric and magnetic fields, which include the incident field $(E_{\rm I})$, the reflected field $(E_{\rm R})$, the field scattered by the particle due to the exciting field $(E_{\rm S})$, and the field scattered by the particle and then reflected by the surface $(E_{\rm SR})$. As we are interested in the optical force in the $z$ direction, it is enough to integrate $\mathrm{T}_{\mathrm{M},zz}$ on the top and bottom surfaces and $\mathrm{T}_{\mathrm{M},rz}$ on the circumferential surface. (c) [Fig. 2a of the main text] Numerical computation (circles) versus analytical T-matrix calculation (solid line) and dipole approximation calculation (dots) of the optical force for different distances $h$ above different substrates (silver, ENZ, glass).}\label{COMSOL} \end{figure} \section*{S3 Finite element method} To evaluate the accuracy of the analytical results, we have computed the optical force on the particle using numerical simulations (Fig. \ref{COMSOL}). We used the software package COMSOL Multiphysics 5.4, which uses the finite element method (FEM) to solve Maxwell's equations and calculate the optical force. To increase the accuracy of the simulation we used periodic boundary conditions; the particle radius $a$ is, however, kept significantly smaller than the unit cell size $L$ to prevent mutual coupling between adjacent cells ($a/L = 0.02$). In order to reduce the computation time, the simulation is run for a silicon particle with radius $a = 20$ nm, while the wavelength of the incoming plane wave is $\lambda = 560$ nm. The surrounding medium is water. We used different substrates as reflecting surfaces and compared the computed force for all substrates with the analytical results. We used silver, glass, a layered structure (silver and aluminum oxide), and ENZ surfaces. The optical properties used for all surfaces are measured values at $\lambda = 560$ nm. The thicknesses of all substrates are considerably larger than $\lambda$ to mimic a semi-infinite medium.
The simulation region should be meshed finely, especially in three regions: i) the plasmonic layers (Ag) in the layered structure; ii) the near field of the substrate $( 0 - \lambda/10 )$, to capture the near-field effects on the calculated force; and iii) the region surrounding the particle where the force is calculated. To calculate the force, we used the Maxwell stress tensor. It is known that the total time-averaged force acting on any material object can be found by calculating the integral of the Maxwell stress tensor on any surface that defines a volume containing the object, \begin{equation} \mathbf{F}_{\rm Rad}=\langle \mathbf{F}(t)\rangle = \int_S \langle \mathrm{T}_{\rm M} (\mathbf{r},t)\rangle \cdot \mathbf{\hat{n}}\ dS \end{equation} where $\mathrm{T}_{\rm M}$ is the Maxwell stress tensor calculated from the total electric and magnetic fields, $S$ is the surface surrounding the volume containing the object, and $\mathbf{\hat{n}}$ is the unit vector perpendicular to the surface $S$. In the simulation, we chose a cylinder as the surrounding volume (Fig. \ref{COMSOL}b). We are interested in the force in the $z$ direction; consequently, $\langle \mathrm{T}_{\mathrm{M},zz} (\mathbf{r},t)\rangle$ on the top and bottom of the cylinder and $\langle \mathrm{T}_{\mathrm{M},rz} (\mathbf{r},t)\rangle$ on its circumference must be calculated (Fig. \ref{COMSOL}b). As the normal vectors on the top and bottom of the cylinder point in opposite directions, the total time-averaged force is \begin{equation} \langle F_{z}(t) \rangle = \int_{S_{Top}} \langle \mathrm{T}_{\mathrm{M},zz} (\mathbf{r},t) \rangle dS -\int_{S_{Bottom}} \langle \mathrm{T}_{\mathrm{M},zz} (\mathbf{r},t)\rangle dS +\int_{S_{Circum}} \langle \mathrm{T}_{\mathrm{M},rz}(\mathbf{r},t)\rangle dS \label{F2} \end{equation} \\ As the integrals in Eq. S\ref{F2} are calculated over the surface of the cylinder, the cylinder is meshed densely $(\lambda/200)$ to avoid numerical error. Since the field scattered by the particle causes no singularity, the height and radius of the cylinder can be as close as possible to the diameter and radius of the sphere, respectively; however, the smaller the cylinder, the denser the mesh must be to capture the intensity of the electric and magnetic fields. To avoid computational error within a reasonable computation time, the height of the cylinder is chosen to be 10 nm bigger than the sphere's diameter and the radius of the cylinder 5 nm bigger than the radius of the sphere. The numerical computation of the total force is done for various spacings $h$ of the sphere from different reflecting surfaces (Fig. \ref{COMSOL}a); $h$ is the distance from the bottom of the sphere to the surface, as shown in Fig. \ref{COMSOL}a. The numerical calculation is done for several scenarios: various spacings $h$, various radii $a$, different substrates, and particles with different polarizabilities and geometries. The comparison between the analytical solution and the numerical analysis for various spacings above various substrates is shown in Fig. 2a; the agreement between the analytical calculation and the numerical simulation is very good. \section*{S4 Layered metamaterial calculations} \paragraph{Transfer matrix numerical details.} The numerical results in Fig. 3 of the main text for reflectance ($R$) versus reflected phase ($\phi$) from the surface of a thin film stack, as described in the caption, are calculated using the standard transfer matrix approach~\cite{chilwell1984thin}.
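For reference, the normal-incidence version of this textbook algorithm fits in a few lines of Python. The sketch below uses the characteristic-matrix formulation; the layer thicknesses are illustrative placeholders, not the fitted values of our samples:
\begin{verbatim}
import numpy as np

def stack_reflection(n_layers, d_layers, n_in, n_out, lam):
    """Complex reflection coefficient at normal incidence of the stack
    n_in | n_layers (thicknesses d_layers, incidence side first) | n_out."""
    k0 = 2 * np.pi / lam
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d                      # phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_out])           # characteristic matrix
    return (n_in * B - C) / (n_in * B + C)

# Illustrative Al2O3 / Ag / Ge trilayer in water over glass at 560 nm
n = [1.68, 0.146 + 3.27j, 3.02 + 2.90j]
d = [40e-9, 15e-9, 2e-9]
r = stack_reflection(n, d, n_in=1.33, n_out=1.52, lam=560e-9)
print(abs(r)**2, -np.angle(r))                  # reflectance R, phase phi
\end{verbatim}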
The refractive indices at 560 nm for each material are as follows: Ag: $0.146 + 3.27i$~\cite{rakic1998optical}; Al$_2$O$_3$: $1.68$~\cite{boidin2016pulsed}; Au: $0.384 + 2.55i$~\cite{rakic1998optical}; Ge: $3.02 + 2.90i$~\cite{amotchkina2020characterization}; TiO$_2$: 2.43~\cite{siefke2016materials}; glass substrate: 1.52; water superstrate: 1.33~\cite{daimon2007measurement}. \paragraph{Experimental comparison.} The experimental $(\phi,R)$ points shown as green stars in Fig.~\ref{Fig3} are based on six different fabricated trilayer thin film stack systems (5 trilayers of Al$_2$O$_3$ / Ag / Ge from top to bottom). The Ag layer thicknesses were in the range 10--25 nm, with a thin Ge layer (1--3 nm) underneath to ensure surface wetting. The Al$_2$O$_3$ thicknesses were systematically varied between roughly 20 nm and 80 nm across the different systems. These stacks were deposited on a glass substrate (Corning Inc.) using electron-beam evaporation for Ge (0.5 $\text{\AA} / \text{s}$) and Al$_2$O$_3$ (0.3 $\text{\AA} / \text{s}$), and thermal evaporation for Ag (0.5 $\text{\AA} / \text{s}$). All materials were purchased from Kurt J. Lesker. The samples' ellipsometric properties (amplitude $\Psi$ and phase difference $\Delta$) were measured in an air superstrate using a variable-angle, high-resolution spectroscopic ellipsometer (J. A. Woollam Co., Inc., V-VASE) for incident angles $45^\circ$, $50^\circ$, $55^\circ$ and the wavelength range $300 - 1000$ nm. Individual fits to the ellipsometric data for each system yielded best-fit results for the thicknesses and optical constants, which were then used to estimate the values of $R$ and $\phi$ at normal incidence with $\lambda = 560$ nm shown in Fig.~3. \paragraph{Effective medium theory.} The effective medium theory (EMT) used to calculate the $(\phi,R)$ curves in Fig.~1c and the curve labeled EMT in Fig.~3 takes the following form. We consider the interface between a water superstrate with refractive index $n_0$ and an underlying metamaterial which is a mixture of a dielectric with index $n_d$ and a metal with complex index $\tilde{n}_{\rm M} = n_{\rm M} + i k_{\rm M}$. If $f$ is the filling fraction of the metal versus the dielectric, the approximate EMT permittivity of the metamaterial is given by $\epsilon_\text{EMT} = (1-f) n_d^2 + f \tilde{n}_{\rm M}^2$. This allows us to calculate the effective refractive index $n_\text{EMT}$ and extinction coefficient $k_\text{EMT}$ as: \begin{equation}\label{emt1} n_\text{EMT} = \sqrt{\frac{|\epsilon_\text{EMT}|+{\rm Re}\,\epsilon_\text{EMT}}{2}}, \quad k_\text{EMT} = \sqrt{\frac{|\epsilon_\text{EMT}|-{\rm Re}\,\epsilon_\text{EMT}}{2}}. \end{equation} The corresponding complex Fresnel reflection coefficient is given by: \begin{equation}\label{emt2} r_\text{EMT} = \frac{n_0 - n_\text{EMT} - i k_\text{EMT}}{n_0 + n_\text{EMT} + i k_\text{EMT}}. \end{equation} The associated reflectance is $R = |r_\text{EMT}|^2$ and the phase is $\phi = -\arg{r_\text{EMT}}$.
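The model is easily evaluated; the following Python sketch (illustrative only) implements the chain $f \mapsto \epsilon_\text{EMT} \mapsto (\phi, R)$ of eqs.~(\ref{emt1}) and (\ref{emt2}):
\begin{verbatim}
import numpy as np

def emt_phi_R(f, n_d, n_metal, n0=1.33):
    """Reflected phase and reflectance of the effective medium:
    f is the metal filling fraction, n_d the dielectric index,
    n_metal the complex metal index, n0 the superstrate (water)."""
    eps = (1 - f) * n_d**2 + f * n_metal**2
    n_emt = np.sqrt((abs(eps) + eps.real) / 2)
    k_emt = np.sqrt((abs(eps) - eps.real) / 2)
    r = (n0 - n_emt - 1j * k_emt) / (n0 + n_emt + 1j * k_emt)
    return -np.angle(r), abs(r)**2

# Sweep the filling fraction for an Ag / Al2O3 mixture at 560 nm
for f in np.linspace(0.0, 0.5, 6):
    print(f, *emt_phi_R(f, 1.68, 0.146 + 3.27j))
\end{verbatim}
\bibliographystyle{ieeetr}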
\section{INTRODUCTION} Numerical methods are an essential tool to tackle quantum many-body systems, most of which lack analytical solutions. For these problems, though, the dimensionality---and with it the computational complexity---grows exponentially with the system size. This limits the applicability of exact numerical calculations and calls for the development of numerical methods that can efficiently deal with, at least, the most relevant physical questions. Introduced to the field in the nineties, tensor network (TN) techniques aim to cover this need and have become by now, together with exact diagonalization and Quantum Monte Carlo methods, a key instrument in the numerical study of quantum many-body problems. TN have been discovered independently in different disciplines. First uncovered in statistical physics by Baxter~\cite{Baxter1968dimers}, in the field of quantum many-body physics their ancestry can be traced back to the first valence bond solid (VBS) proposed by Affleck, Kennedy, Lieb and Tasaki~\cite{Affleck1987} as the exact ground state of a short-range spin chain---see~\cite{Okunishi2021review} for a review and historical perspective. Kl\"umper et al.~\cite{Kluemper1993} later extended the AKLT proposal to a larger set of models, and also introduced the term \emph{matrix product} to designate these states. The construction was generalized and formalized mathematically by Fannes, Nachtergaele and Werner~\cite{Fannes1992} in the finitely correlated states for infinite spin chains. Around the same time the density matrix renormalization group (DMRG), a new algorithm proposed by White~\cite{White1992}, was revealing an amazing power to capture the ground state of large quantum spin chains with only modest numerical effort. Shortly afterwards, \"Ostlund and Rommer~\cite{Oestlund1995} identified the fixed point of the infinite DMRG algorithm with precisely such matrix product states, and Dukelsky et al.~\cite{Dukelsky1998} pointed out the connection between DMRG and a variational search over these states. Furthermore, Nishino and Okunishi~\cite{NishinoOkunishi1995,NishinoOkunishi1996ctmrg} unified DMRG with Baxter's corner transfer matrix approach for two-dimensional classical models. These insights inspired further generalizations of the original algorithm~\cite{Nishino1998threeD}. DMRG was applied to multiple scenarios and quickly became a method of choice to study the static properties of quantum spin systems in low spatial dimension~\cite{Hallberg2003dmrg,Schollwoeck2005}. Yet a whole new perspective was gained thanks to quantum information concepts. Understanding in terms of entanglement the matrix product ansatz~\cite{Vidal2003} and the DMRG algorithm~\cite{Verstraete2004}, and reformulating the latter fully in terms of matrix product states (MPS)~\cite{McCulloch2007}, opened up the possibilities for improvements and jumpstarted the \emph{tensor network} field. In particular, algorithms for real time evolution~\cite{Vidal2004,Daley2004,White2004real} and finite temperature~\cite{Verstraete2004a,Zwolak2004,Feiguin2005finiteT} with matrix product states, as well as a generalization to higher dimensions~\cite{Verstraete2004b}, were proposed soon afterwards, revealing the potential of the tensor network picture. Nowadays, TN algorithms are among the standard numerical methods for strongly correlated low-dimensional quantum systems. Most commonly used are the original methods from the early 2000s, which continuously find new applications.
But the TN language continues to be exploited to provide, not only deeper mathematical understanding of the ansatz~\cite{Cirac2021rmp}, but also new numerical techniques. The variety of TN applications that have bloomed over the last decade and produced state-of-the-art results is too vast to do justice to it in these pages. Thus, the focus of this article is the general framework of TN algorithms, with a stress on a few selected advances in the field that are important for cutting-edge applications. The details of the algorithms are not explicitly shown; the interested readers are encouraged to refer to the many excellent reviews in the literature, such as~\cite{Verstraete2008,Schollwoeck2011,Orus2014annphys,Bridgeman2017,Ran2020tncontr,Silvi2019tn}, to name only a few. \section{BASIC CONCEPTS} \label{sec:basic} A tensor, the basic object, is simply a multidimensional array. The graphical representation of TN, illustrated in fig~\ref{fig1}, provides a practical language to describe their algorithms and properties. For instance, a $k$-rank tensor, an object with $k$ indices, is depicted as a geometrical shape with $k$ legs (e.g. a matrix would have two legs). A contraction between two tensors---such as a matrix-vector product---is represented by joining the contracted indices. In general, a tensor network is a set of such interconnected tensors, resulting in a rank determined by the number of open legs (see fig~\ref{fig1}a). \onecolumngrid \begin{center} \begin{figure}[t] \includegraphics[width=.8\textwidth]{fig1.pdf} \caption{Graphical representation of tensors: (\textit{a}) Example of a TN formed by four tensors; when contracted, a $4$-rank tensor is obtained; (\textit{b}) graphical representation of a TNS in each of the main families, for a system of 16 sites (note: triangles are commonly used to indicate isometries). } \label{fig1} \end{figure} \end{center} \twocolumngrid \subsection{Tensor network states} In particular, a \emph{tensor network state} (TNS) encodes all coefficients (in a given basis) of a quantum many-body state in such a diagram, with as many open legs as constituents in the system. Each dangling leg corresponds to the (finite) \emph{physical} dimension of one site, while contracted legs correspond to \emph{virtual} or \emph{bond dimensions}. TNS families are defined by graphs with different connectivities. For the families of interest, the number of parameters, proportional to the number of tensors, grows polynomially with the system size. This represents a drastic reduction with respect to the exponentially large dimension of the Hilbert space. But the aim of TNS is to capture physical states, which happen to explore only a small fraction of all possible quantum states, mainly characterized by their low entanglement. In particular, ground and thermal equilibrium states of local Hamiltonians fulfill an entanglement area law~\cite{Eisert2010area}: the entanglement between a certain subsystem and the rest scales with the size of the boundary between both parts (or with small corrections thereof), instead of with the size of the bulk of the subsystems, as is the case for most states in the Hilbert space. A rigorous proof of the area law scaling exists for gapped one-dimensional local Hamiltonians~\cite{Hastings2007} and for thermal equilibrium states in any dimension~\cite{Wolf08arealaw}, whereas critical ground states can display small (logarithmic) corrections~\cite{Calabrese2004,Wolf2005fermions}. 
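Before listing the families, the diagrammatic rules can be made concrete in a few lines of code. The following Python sketch (purely illustrative; dimensions and index names are arbitrary) contracts a network of four tensors of the kind shown in fig.~\ref{fig1}a:
\begin{verbatim}
import numpy as np

d, D = 2, 4                   # physical and virtual (bond) dimensions
A = np.random.rand(d, D, D)   # legs (i, a, b)
B = np.random.rand(d, D, D)   # legs (j, a, c)
C = np.random.rand(d, D, D)   # legs (k, b, e)
G = np.random.rand(d, D, D)   # legs (l, c, e)

# Repeated letters (a, b, c, e) are contracted virtual legs; the open
# legs (i, j, k, l) remain, giving a 4-rank tensor as in fig. 1a.
# optimize=True lets numpy search for a cheap contraction order.
T = np.einsum('iab,jac,kbe,lce->ijkl', A, B, C, G, optimize=True)
print(T.shape)                # (2, 2, 2, 2)
\end{verbatim}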
The following are the most widely used TNS families (see their diagrams in fig.~\ref{fig1}b). \begin{enumerate} \item MPS have a one-dimensional structure, with one tensor per lattice site~\cite{PerezGarcia2007}. Each tensor has one open index corresponding to the physical dimension of the site, and two virtual legs connected to the neighboring sites (for open boundaries, the edge tensors only connect to one neighbor). The tensor for site $k$ has components $A[k]^{i}_{\alpha\, \beta}$, where $i$ takes values over the physical dimension (typically denoted $d_k$), and $\alpha$ and $\beta$ respectively take values over the left and right virtual dimensions of the tensor (denoted $D_{l}$ and $D_{r}$)---equivalently, each $A[k]^{i}$ is a $D_l\times D_r$ matrix. More explicitly, for a system of $N$ sites, all with physical dimension $d$, the state can be written \begin{equation} |\Psi\rangle=\sum_{i_1,i_2,\cdots i_N=1}^{d} \mathrm{tr}\left ( A[1]^{i_1}A[2]^{i_2}\cdots A[N]^{i_N}\right)|i_1 i_2 \cdots i_N\rangle. \label{eq:mps} \end{equation} MPS satisfy an entanglement area law: the half-chain entanglement of an MPS with maximal bond dimension $D$ is upper-bounded by $S=2 \log D$. Furthermore, they hold exponentially decaying correlations, can be prepared and contracted efficiently, and essentially correspond to ground states of local one-dimensional gapped Hamiltonians~\cite{Schuch2010peps}. \item PEPS are the natural generalization of MPS to arbitrary graphs, where they can be defined with one tensor (with a physical leg) per vertex and connections according to the graph edges~\cite{Verstraete2004b}. They can be expressed analogously to Eq.~\ref{eq:mps}, replacing the trace by a contraction over all connections. PEPS fulfill the area law in higher dimensions, and are much more complex objects than MPS. For instance, they cannot---in the general case---be prepared or contracted efficiently~\cite{Schuch2007complexity} and, even with small bond dimension, they can support critical correlations~\cite{Verstraete2006a}. \item TTN correspond to tree graphs. Usually---but not always---they have physical indices on the leaf nodes~\cite{Shi2006}, connected to tensors with only virtual indices at higher levels (see fig.~\ref{fig1}b), which can correspond to a renormalization direction~\cite{Cirac2009rg}. Like MPS, TTN are loop-free and can be contracted efficiently, but they violate the one-dimensional area law, and can hold power-law decaying correlations when averaging over spatial positions~\cite{Silvi2010tree}. TTN can also be used for higher dimensional systems~\cite{Tagliacozzo2009ttn2D}. \item MERA implement a more complex renormalization of the physical degrees of freedom~\cite{Vidal2007,Vidal2008,Evenbly2009}, in which layers of unitary transformations (called disentanglers) that remove short range correlations are alternated with layers of isometries that perform the renormalization step. This results in a TN with cycles in which, thanks to the unitarity properties of the tensors, local expectation values can be computed efficiently. Scale invariant MERA can describe quantum critical ground states in one dimension~\cite{Pfeifer2009qcMERA,Montangero2009qcMERA}, where they support logarithmic corrections to the area law. However, in two dimensions they are proven to be a subset of PEPS~\cite{Barthel2010mera}, and thus satisfy the area law.
However, a generalization called branching MERA~\cite{Evenbly2014branchingMERA} exists that can support up to volume-law entanglement in more than one spatial dimension. \end{enumerate} Any tensor network has a so-called gauge freedom, since inserting the product of a matrix and its inverse $X X^{-1}$ in between any contracted pair of indices (i.e. in a connected leg) leaves the whole TN invariant, but allows redefining pairs of neighboring tensors. For loop-free TNS, in which cutting a bond splits the network in two, it is possible to define a canonical form, in which the basis for the virtual index is chosen to be the Schmidt basis for the bipartition corresponding to the bond, explicitly encoding the corresponding entanglement~\cite{PerezGarcia2007}. Besides being fundamental to characterize the properties of a TNS family, this canonical form gives rise to more stable and efficient numerical algorithms. The families above can be defined for finite-size systems with site-dependent tensors, but it is also possible to consider directly the thermodynamic limit, in which one (or a few) tensors are repeated infinitely many times, to produce a translationally invariant (or periodic in space) structure. In the case of MPS, the translationally invariant ansatz is called uniform MPS (uMPS). In infinite PEPS (iPEPS), a periodic iteration of a finite unit cell is most commonly used in practice, while the translationally invariant version is fundamental for the formal results~\cite{Cirac2021rmp}. This allows targeting bulk properties directly, without finite-size extrapolations, or, in the case of MERA, capturing the scale invariance of critical systems~\cite{Pfeifer2009qcMERA,Montangero2009qcMERA}. These families can also describe mixed states. The simplest approach is to postulate the TNS ansatz in a given tensor product basis of the vector space of operators, with simply doubled physical legs. In particular, in the MPS and PEPS cases the resulting structures are called MPO~\cite{Verstraete2004a,Zwolak2004,Pirvu2010a} and PEPO. But if the ansatz is to describe a physical state, it must be positive semidefinite, a global property that cannot be assessed at the level of the local tensors. An alternative is to consider the TNS describing a purification, i.e. a pure state of the system plus an ancilla, such that tracing out the latter results in the desired mixed state. In the MPS and PEPS case, this yields a locally purified form, a TN with the same structure, where local tensors have double physical indices, and internal structure granting positivity~\cite{Verstraete2004a}. This is more restrictive and potentially less efficient than the generic ansatz~\cite{delasCuevas2013}, but can be used in practice in numerical algorithms. \subsection{Fundamental primitives} \label{subsec:primitives} Virtually all TN algorithms rest on two basic blocks: contracting (part of) the tensor network, and locally updating the tensors. Together with the approximation of (parts of a) TN by tensors with truncated dimensions, they can be considered the fundamental primitives on which more or less sophisticated higher-level algorithms are built. \subsubsection{Contracting TN} A ubiquitous problem in TN algorithms is contracting a tensor network. This means explicitly evaluating the products and sums of tensor components indicated by the connections, to result in a tensor with dimensionality corresponding to the indices that remain open (see fig.\ref{fig1}a). 
For instance, for classical statistical models, partition functions and expectation values of local observables can be written as \emph{closed} TN (without open indices). For TNS representing quantum states, norms and local expectation values are also closed tensor networks, while reduced density matrices appear as smaller tensor networks with operator indices. Two aspects of the contraction affect the implementation and performance of the algorithm. \begin{enumerate} \item{Contraction order.} In general, the computational cost of contracting a series of tensors with each other depends on the order in which operations are applied. For the regular networks that appear in the most common TNS algorithms, the number of possibilities is small, and the optimal sequence (which minimizes the computational cost) is known. But in the general case, finding the optimal contraction order is an NP-complete problem, for which some heuristic algorithms exist \cite{Pfeifer2014optimal,Gray2021hyperoptimized}. \item{Computational cost.} If a contraction order exists whose computational cost grows only polynomially with the size of the network, we say that the TN can be contracted exactly. Such is the case for TN that do not contain loops, for instance the networks corresponding to expectation values of multi-point correlators in MPS and TTN. Also for TNS with some unitary properties there are contractible quantities, for instance the norm or few-point correlators evaluated in MERA. The exact contraction of an arbitrary tensor network is however a $\# P$-complete problem~\cite{Schuch2007complexity}. Thus, most algorithms involving TN in more than one dimension need to approximate the contractions, which is referred to as \emph{truncation} (\ref{subsubsec:truncation}). \end{enumerate} \subsubsection{Tensor update} Many algorithms work by holding a TN description of the quantity of interest and iteratively improving it until some predefined level of convergence is attained. The improvement proceeds by local changes, or updates, in which one (or few) tensors are modified in order to optimize the relevant cost function. Typically, the latter depends on all tensors in the network but only one is allowed to vary in each update step, while keeping the others fixed, hence turning the problem into a local one. A related concept is thus the \emph{environment of a tensor}, the part of a TN that is complementary to the tensor being modified. This appears in the local cost function and needs to be evaluated by a (in many cases approximate) contraction, in order to determine the proper update for the local tensor. \subsubsection{Truncation} \label{subsubsec:truncation} Truncating a TN means reducing (some of) the dimensions of its tensors, ideally in such a way that the global result does not change. A truncation can be part of an approximation strategy (e.g. for a PEPS, see~\ref{subsec:peps}), where it is used to control the dimension of a partial TN contraction. In the context of quantum states represented as TNS, truncating typically means finding tensors to approximate the state within a given family, for instance after acting with an operation that, if applied exactly, would increase the tensor dimensions, as is often the case for time evolution or non-local operators (e.g.~\ref{subsec:tebd}). And, more generally, truncating may refer to approximating a certain state with a TNS of fixed bond dimension. In any truncation step, a decision is made as to which degrees of freedom to keep and which to discard.
In a TNS, the fixed bond dimension upper-bounds the amount of correlations the state can hold, and thus the truncation step in most algorithms can be related to entanglement properties. \subsection{Classic algorithms} \label{subsec:classic} Numerous TN algorithms have been introduced in recent years, yet there are a few well-established methods that are used to obtain state-of-the-art results in quantum many-body problems. Many of them (especially for one-dimensional problems) are available as open-source implementations (see Related Resources at the end), making it possible to benefit from the numerical power of TN methods without the need to dive into implementation details. Furthermore, they are not difficult to implement, and can be easily adapted to solve other problems, beyond the ones they were originally designed for. They constitute the true workhorses of TNS numerical results. \begin{center} \begin{figure*}[t] \includegraphics[width=.8\textwidth]{fig2.pdf} \caption{Graphical expression of the local problems solved by the classic algorithms: (\textit{a}) variational optimization for a single tensor in DMRG; (\textit{b}) update of the local pair of tensors in TEBD (diamond-shaped tensors represent the Schmidt values, explicit in the canonical form); (\textit{c}) local optimization in tMPS. } \label{fig2} \end{figure*} \end{center} \subsubsection{Variational optimization of MPS} \label{subsec:variational} One of the most powerful strategies in the TNS toolbox is the variational optimization of the ansatz with respect to a given cost function, the paradigmatic example being the DMRG algorithm~\cite{White1992}. This can be essentially understood as an application of the variational principle in which an ansatz for the ground state is obtained by minimizing the energy for a quantum many-body Hamiltonian over the set of MPS with fixed bond dimension $D$~\cite{Verstraete2004,Schollwoeck2011}, \begin{equation} | \Phi_{\mathrm{GS}}^{(D)} \rangle=\mathrm{argmin}_{|\Psi_{D}\rangle} \frac{\langle \Psi_D|H|\Psi_D\rangle}{\langle \Psi_D|\Psi_D\rangle}. \label{eq:variational} \end{equation} The problem is tackled in an iterative manner, a single tensor being minimized at each step while the rest are kept constant.~\footnote{This corresponds to the single-site algorithm, most natural in the TN framework. Some modifications can be made to connect to the classic two-site DMRG [see~\cite{Schollwoeck2011,Okunishi2021review} for details on DMRG variants and their historical development].} While not strictly necessary, the implementation of the original method is greatly simplified by writing the Hamiltonian as an MPO~\cite{McCulloch2007}. This can be done exactly for short-range one-dimensional Hamiltonians~\cite{Pirvu2010a}, and approximation schemes exist for long-range interactions [e.g.~\cite{Hubig2017mpo}]. In this form, the local cost function can be written as the ratio of two tensor networks (see fig.~\ref{fig2}a) that can be contracted efficiently with a cost that, for $N$ sites, only scales as $O(N D^3)$. The local problem has thus the form of a Rayleigh-Ritz quotient, and can be solved exactly using a standard eigensolver. The procedure is iterated, sequentially optimizing each tensor in the ansatz, and repeatedly sweeping back and forth over the whole chain until a predetermined convergence criterion (usually convergence of the energy value within a certain precision) has been reached.
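Schematically, the local problem of fig.~\ref{fig2}a amounts to diagonalizing an effective Hamiltonian obtained by contracting the MPO tensor with the left and right environments. A minimal Python sketch follows (index conventions are illustrative; production codes use an iterative eigensolver and apply the effective Hamiltonian as a linear map instead of building the dense matrix):
\begin{verbatim}
import numpy as np

def local_update(Lenv, W, Renv):
    """Single-site update: solve the local eigenproblem of fig. 2a.
    Lenv : (Dl, w, Dl) left environment   (bra, MPO bond, ket)
    W    : (w, w, d, d) local MPO tensor  (left, right, bra, ket)
    Renv : (Dr, w, Dr) right environment"""
    Dl, Dr, d = Lenv.shape[0], Renv.shape[0], W.shape[2]
    Heff = np.einsum('axb,xyst,cyd->ascbtd', Lenv, W, Renv)
    Heff = Heff.reshape(Dl * d * Dr, Dl * d * Dr)
    # Hermitian if the environments are built consistently from a
    # Hermitian MPO; eigh returns eigenvalues in ascending order.
    E, V = np.linalg.eigh(Heff)
    return E[0], V[:, 0].reshape(Dl, d, Dr)   # energy, updated tensor
\end{verbatim}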
Further gain in efficiency is possible if tensors are always kept in canonical form and intermediate calculations are stored in memory. Because the optimum of each local problem can be found exactly, the algorithm is guaranteed to lower the energy monotonically, and thus to converge (even though this might be to a local minimum). The infinite DMRG (iDMRG) algorithm directly targets systems in the thermodynamic limit~\cite{Schollwoeck2011}, and can also be expressed in similar terms. In that case, instead of sweeping back and forth, at each step a unit cell of tensors is inserted and optimized in the middle of the chain, and the procedure is iterated until a fixed point has been reached. The most natural scenario for the algorithm is the search for the ground state of a local one-dimensional Hamiltonian. The method is extremely competitive even for critical systems (for which the MPS can only approximate the correlations), thanks to finite-size and finite-entanglement~\cite{Pollmann2009finiteD,Pirvu2012finiteD} scaling, and has been successfully used for long-range interactions and problems in larger dimensions (see sec.~\ref{subsec:peps}). The efficiency and robustness of the method make it one of the most powerful numerical methods available to solve quantum many-body problems. Additionally, it can be applied to any variational optimization problem in which the cost function is expressed in terms of an effective Hamiltonian with MPO structure [e.g.~\cite{Cui2015open}]. \subsubsection{Evolving MPS: TEBD, tMPS} \label{subsec:tebd} The Time Evolved Block Decimation (TEBD) algorithm~\cite{Vidal2003,Vidal2004} is arguably the simplest to implement, yet one of the most versatile methods in the TN toolbox. The strategy was originally proposed for simulating the evolution of an MPS under a quantum circuit, which can be written as a sequence of two-body, nearest-neighbor unitary gates. Since each gate can increase the entanglement, its exact action on an MPS generally results in a larger bond dimension. Maintaining an efficient description of the state thus requires an approximation step that reduces (truncates) the bond dimension after the application of each gate. The TEBD strategy proceeds via a local update, involving only the directly affected tensors, and corresponds to minimizing the distance between the transformed and updated states under the condition that all the remaining tensors are kept invariant. Exploiting the canonical form of the MPS, this can be achieved by a singular value decomposition of a single tensor, obtained when contracting together the gate and the local MPS tensors, including their environment, which encodes the state of the rest of the system (see fig.~\ref{fig2}b). In the TEBD truncation step, only singular values above a certain threshold are kept, and the discarded weight gives a measure of the error. For a one-dimensional nearest-neighbour Hamiltonian, the time evolution operator can be approximated, using a Trotter-Suzuki expansion, as a sequence of such two-body gates of the form $\exp(-i \delta h_{i})$, where $h_i$ is a two-body term and $\delta$ a short time step. The method can thus be used to simulate the dynamics of an MPS with a cost that scales as $O(D^3)$, for bond dimension $D$. The scheme can be adapted for other (finite-range) Hamiltonians, although the cost increases steeply with the interaction range.
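In code, the local update of fig.~\ref{fig2}b is little more than a contraction followed by a singular value decomposition. A minimal Python sketch (illustrative; for brevity, the Schmidt weights that the canonical form keeps on the bonds are assumed absorbed into the tensors):
\begin{verbatim}
import numpy as np

def tebd_step(A1, A2, gate, Dmax):
    """Apply a two-site gate to neighboring MPS tensors and truncate.
    A1 : (Dl, d, D), A2 : (D, d, Dr) MPS tensors (canonical form is
    assumed, so the local SVD is the optimal truncation);
    gate : (d*d, d*d) acting on the two physical indices."""
    Dl, d, Dr = A1.shape[0], A1.shape[1], A2.shape[2]
    theta = np.einsum('lis,sjr->lijr', A1, A2).reshape(Dl, d * d, Dr)
    theta = np.einsum('ab,lbr->lar', gate, theta)
    U, S, Vh = np.linalg.svd(theta.reshape(Dl * d, d * Dr),
                             full_matrices=False)
    keep = min(Dmax, np.count_nonzero(S > 1e-12))
    S = S[:keep]
    A1 = U[:, :keep].reshape(Dl, d, keep)
    A2 = (np.diag(S) @ Vh[:keep]).reshape(keep, d, Dr)
    # discarded weight (error measure, for a normalized state)
    return A1, A2, 1.0 - np.sum(S**2)
\end{verbatim}
The same routine applies a Trotter gate $\exp(-i\delta h_i)$, once reshaped to a $d^2\times d^2$ matrix, and equally well its non-unitary imaginary-time counterpart discussed below.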
As an alternative to the local truncation, it is also possible to vary all tensors in order to minimize the distance to the exact state after one or more gates~\cite{Verstraete2006a}. In this strategy, called tMPS,\footnote{Notice that the term is used loosely in the literature, sometimes interchanged with tDMRG---see~\cite{Schollwoeck2011} for the details.} tensors are optimized sequentially as in the variational method~\ref{subsec:variational}, by solving a local problem that, in this case, reduces to a system of linear equations, also with cost $O(D^3)$ (see fig.~\ref{fig2}c). In this way one can apply onto the MPS vector any MPO operator, in particular a step of the Trotterized time evolution. The cost of such an MPO representation also increases with the range of the Hamiltonian, but long-range interactions can be treated with the help of an approximation scheme~\cite{Zaletel2015long}. These methods are very efficient and extraordinarily versatile. Starting from an arbitrary state, the ground state can be approached by imaginary (or Euclidean) time evolution, which effectively projects the state onto its lowest energy component, and can be applied with the same algorithm~\cite{Vidal2004}, only using non-unitary terms $\exp(-\delta h_{i})$. Also thermal equilibrium states can be approximated using this technique~\cite{Verstraete2004a,Zwolak2004,Feiguin2005finiteT}, by writing a purification of the Gibbs ensemble (namely the thermofield state) as the evolution of a maximally entangled initial state in imaginary time given by the inverse temperature, $|\Psi\rangle\propto e^{-\beta H/2} \sum_n |n\rangle |n\rangle$ (where the $|n\rangle$ form a basis of the system Hilbert space). And by treating the mixed state as a vector in operator space, the same basic method can be used to simulate real time evolution of open systems under master equations~\cite{Verstraete2004a,Zwolak2004}. Imaginary time evolution of pure states can also be used to produce a sample of minimally entangled typical thermal states (METTS)~\cite{White2009metts} that reproduce thermal properties. These are only a few examples: more generally, the tMPS strategy approximates the action of any linear operator written as an MPO onto an MPS. This allows reformulating most linear algebra algorithms as approximate versions in the framework of MPS [e.g.~\cite{GarciaRipoll2006,Huckle2012subspace}]. TEBD and tMPS algorithms can also be applied to translationally invariant (or periodic) MPS, working directly in the thermodynamic limit~\cite{Vidal2007infinite,Verstraete2008}. Even though the technique to treat all the scenarios named above is almost identical, the entanglement in each of them, and thus the performance of the method, widely differs. While thermal equilibrium states satisfy an area law~\cite{Wolf08arealaw,Dubail2017mpo} and admit efficient TNS approximations for local Hamiltonians~\cite{Hastings2006a,Molnar2015}, real time evolution of a far-from-equilibrium state can give rise to linear growth of entanglement, in which case approximating the resulting state with an MPS would require the bond dimension to grow exponentially with the total time~\cite{Calabrese2005,Osborne2006,Schuch2008}. For this reason, while MPS methods are extremely useful to study dynamics close to equilibrium, or for moderate times~\cite{Paeckel2019tevol}, they suffer a fundamental limitation for genuinely out-of-equilibrium scenarios.
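The quantity controlling this limitation is directly accessible in the simulations: in canonical form, the Schmidt coefficients stored on a bond yield the entanglement entropy of the corresponding bipartition, whose growth signals the required increase of the bond dimension. A minimal sketch of this standard diagnostic:
\begin{verbatim}
import numpy as np

def entanglement_entropy(schmidt):
    """Von Neumann entropy S = -sum_k s_k^2 log s_k^2 of a bipartition,
    from the Schmidt coefficients stored on an MPS bond."""
    p = schmidt**2
    p = p[p > 1e-15] / p.sum()      # normalize, drop numerical zeros
    return -np.sum(p * np.log(p))

# A maximally entangled cut of dimension D saturates S = log D, so
# capturing entropy S requires a bond dimension D >= exp(S).
print(entanglement_entropy(np.ones(16) / 4.0))   # log(16) ~ 2.77
\end{verbatim}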
\section{ADVANCED TNS METHODS} \label{sec:advanced} Even though the algorithms described in the previous section can treat a large number of problems, some more advanced techniques, developed mostly in the last decade and not yet freely available, are necessary to fully exploit the power of TNS. \subsection{Higher dimensions} \label{subsec:peps} Despite the resounding success of the one-dimensional applications of TNS, applications of higher-dimensional ansatzes remain much less common. However, in the last years, the situation has started to change, thanks to a number of algorithmic developments and an intense effort of the community. Treating two- and higher-dimensional problems has always been a coveted target of these numerical methods and, since the early times of DMRG, the possibility was recognized of applying the technique to two-dimensional quantum~\cite{Stoudenmire2012} and three-dimensional classical problems~\cite{Nishino1998threeD}. MPS form a complete family, and can be used as an ansatz for any problem, in particular in larger dimensions. For two-dimensional quantum states, the MPS ansatz can be wrapped around the lattice. This is usually done in a zig-zag or snake form, but other choices are possible~\cite{Cataldi2021hilbert}. The resulting representation of the Hamiltonian as an MPO is more expensive (since some short-range terms get mapped onto longer-range ones), and a larger bond dimension is required to reach the desired convergence: since cutting a single bond partitions the state in two, to accommodate the entanglement of a state that satisfies an area law the bond dimension needs to grow exponentially with one of the dimensions of the system. Highly accurate computations are still obtained from systems of limited size, often exploiting a long-cylinder geometry and careful finite-size extrapolations [e.g.~\cite{Yan2011spinliquid,Depenbrock2012spinliquid}]. With a built-in area law, PEPS are a more suitable TNS ansatz, which supports good approximations for equilibrium states of local Hamiltonians~\cite{Hastings2007,Molnar2015}, and allows variational and evolution strategies as described in section~\ref{subsec:classic}~\cite{Verstraete2004b,Murg2007peps}. They can also be used directly in the thermodynamic limit, in which case they are called iPEPS, and are parametrized by a unit cell with a finite number (which could be as small as one) of tensors~\cite{Jordan2008ipeps}. Nevertheless, numerical algorithms with PEPS are considerably more involved, and have higher computational cost in terms of the tensor dimensions. \begin{center} \begin{figure*}[t] \includegraphics[width=.8\textwidth]{fig3.pdf} \caption{Approximation of the environment tensor in PEPS: (\textit{a}) environment (in solid colors) of a pair of tensors on which a nearest-neighbor gate is applied (group shown in lighter shade); (\textit{b}) in the simple update, the environment tensor is approximated as a product (compare to the 1D case in fig.~\ref{fig2}b); (\textit{c}) a correlated approximation of the environment is required for the full update. } \label{fig3} \end{figure*} \end{center} For starters, contracting PEPS is, in contrast to the efficient contraction of MPS, a ($\# P$-complete) hard computational problem~\cite{Schuch2007complexity}.
Practical algorithms resort to approximate contractions, in which the two-dimensional network is approximated as a sequence of MPO-MPS contractions from the boundary~\cite{Murg2007peps,Jordan2008ipeps,Lubasch2014unifying}, by a coarse-graining, or tensor renormalization~\cite{Jiang2008trg-qstates,Gu2008terg} (see sec.~\ref{subsec:trg}), or, in the case of iPEPS, by a corner transfer matrix contraction~\cite{NishinoOkunishi1996ctmrg,Orus2009ctm}. In these strategies there is a trade-off between the numerical cost and the accuracy of the contraction, important to determine the environment of a tensor or the expectation values of observables. Because of this, methods have been developed that gain efficiency by allowing less precise environment estimations for tensor updates (fig.~\ref{fig3}). The most efficient alternative uses a so-called \emph{simple update}~\cite{Jiang2008trg-qstates} where, in order to update a pair of tensors under the action of a two-site gate, the environment is approximated by a product of diagonal matrices acting on each link surrounding the pair, which play a role analogous to that of the Schmidt values in the TEBD procedure. Discarding the correlations in the environment can prevent the method from reaching the best PEPS with fixed bond dimension in the general case~\cite{Lubasch2014unifying}, but the algorithm is still popular, due to its efficiency and stability. A better update can be found with the more expensive \emph{full update}~\cite{Jordan2008ipeps,Corboz2010fpeps}, which takes into account a more accurate correlated environment approximation. These approaches may improve the efficiency of the updates, which can be particularly useful in the case of ground state search by imaginary time evolution, where the goal is a fixed point of the evolution. However, the evaluation of observables still needs to be as accurate as possible, in order to guarantee a variational result. This yields for most PEPS algorithms a computational cost scaling as $O(D^{10})$. Another difference between PEPS and MPS computations is the absence of a canonical form for the former. As a consequence, the effective norm term appearing, for instance, in the denominator of Eq.~\ref{eq:variational}, cannot be reduced to the identity, and needs to be inverted to solve the local problems, which results in higher computational costs and loss of stability. The problem can be alleviated by making use of the gauge freedom to optimize the condition of this effective matrix~\cite{Lubasch2014peps,Phien2015gauge,Evenbly2018gauge}. Despite the higher computational challenge, and the still ongoing development of more efficient strategies, PEPS already outperform MPS for two-dimensional problems of moderate size, as explicitly shown in~\cite{Osorio2017pepsvsdmrg} for Heisenberg and Hubbard models. All in all, iPEPS has been the preferred ansatz to address ground states of two-dimensional quantum problems in this context, due to the possibility of directly addressing bulk properties.~\footnote{Notice that, although finite-size extrapolation from PEPS is possible, the number of tensors to determine (e.g. $L^2$ for a two-dimensional system) makes the calculations exceedingly long already for relatively small sizes.} Imaginary time evolution has been the method predominantly used until very recently, due to the highly non-linear character of a variational approach in the line of \ref{subsec:classic}.
However, in the last few years, new strategies have been introduced for a stable and efficient variational optimization of iPEPS~\cite{Corboz2016variational,Vanderstraeten2016gradient}, which produces more accurate results. A further step has been the precise solution of critical systems, with the help of extrapolations in the correlation length of finite-$D$ states~\cite{Rader2018prx,Corboz2018prx,Vanhecke2021scaling}. Plenty of impressive numerical results have already been obtained thanks to these advanced methods, among them the most accurate result for the Hubbard model~\cite{Zheng2017hubbard}, and the first studies of three-dimensional problems~\cite{Vlaar2021peps3D}. When the focus is not on ground states, and similarly to the MPS case discussed in section~\ref{subsec:tebd}, the (real or imaginary) time evolution techniques allow addressing multiple problems, such as equilibrium states at finite temperature~\cite{Czarnik2019time}, steady states of open systems~\cite{Kshetrimayum2017open,Kilda2021ipepo,Keever2021open} and real time evolution~\cite{Murg2007peps, Czarnik2019time,Hubig2019tdep}. An alternative direction has been the development and exploitation of restricted subsets of PEPS, with more favourable computational properties, that can be suitable ansatzes for particular problems. This is the case, for instance, of sequentially generated states~\cite{Banuls2008sgs}, of the more general isometric PEPS~\cite{Zaletel2020isopeps}, or of Gaussian fermionic PEPS~\cite{Kraus2010}. And other TNS families that do not have an area law, or only a restricted one, can be used to study higher-dimensional systems of a certain size, as does two-dimensional DMRG; for instance TTN [e.g.~\cite{Tagliacozzo2009ttn2D,Magnifico2021-3d}], or the recently introduced augmented trees~\cite{Felser2021aTTN}. \subsection{Symmetries} \label{subsec:syms} In case the problem under study exhibits some symmetry, taking advantage of it is not only of fundamental interest, but can also boost the performance of a numerical algorithm. For instance, if the Hamiltonian commutes with a certain operator, $[H,O]=0$, its eigenstates will have well-defined eigenvalues of $O$, and the search can be restricted to subspaces labelled by particular quantum numbers. In the case of quantum many-body systems, one is often interested in problems with a global symmetry of the form $U^{\otimes N}|\Psi\rangle=|\Psi\rangle$, where $U$ is a unitary transformation that acts on a single site. Particularly relevant is the case when the operation is a representation of a group $G$, namely $U=U_g$ for some $g\in G$. Abelian symmetries of this kind were soon incorporated into the DMRG method~\cite{McCulloch2007,Schollwoeck2011}, where they became of common use, typically implemented for the conservation of particle number or total magnetization. The formalism for non-Abelian symmetries was also developed~\cite{Dukelsky1998,McCulloch2002nonAbelian}, albeit not so commonly used. A general framework to handle global symmetries in higher dimensional TN was first introduced in~\cite{Singh2010sym}, with explicit formulations for the Abelian~\cite{Singh2011u1,Bauer2011abelian} and non-Abelian~\cite{Singh2012su2,Weichselbaum2012} cases following shortly. The basic idea of these and the original DMRG constructions is to define invariant tensors, which remain unchanged when the symmetry operation acts on all the indices.
This requires well-defined transformation properties for each of the indices and, in particular, choosing bases for the virtual legs with well-defined quantum numbers, for instance $|q \alpha \rangle$, where $q$ labels an irreducible representation of the group,\footnote{In the non-Abelian case, $q$ will actually be a composite index, including not only the label for the irrep, but also additional quantum numbers to account for its inner (and potentially outer) multiplicity~\cite{Bruognolo2021nonAbelian}.} and $\alpha$ labels the states within the same irrep. The bond dimension of such a leg will be the sum of dimensions for each $q$. Assigning a direction to each edge in the TN, outgoing and incoming indices transform respectively with the unitary representation of the group and its inverse, and it follows that a TNS constructed out of such invariant tensors is globally invariant. The invariance of a tensor implies some internal structure. In the case of Abelian symmetries, the tensor can be decomposed in a direct sum of blocks, with the only non-vanishing ones being those for which the sum of quantum numbers of incoming indices equals that of outgoing ones. In the non-Abelian case, blocks corresponding to a suitable combination of irreps have further structure: they decompose as a tensor product of one part dictated merely by the symmetry and another one containing the free parameters of the state. In particular, for three-legged tensors the first factor is a Clebsch-Gordan tensor. For more general tensors, a decomposition of the whole TN in three-legged terms can be used~\cite{Singh2010sym}, or a more efficient precomputation of the corresponding coefficients can be done in the algorithm~\cite{Weichselbaum2012,Hubig2018ipeps,Bruognolo2021nonAbelian}. Notice that generic tensors (i.e. without explicit symmetry) can also be used to describe a TNS with the desired global symmetry, even producing a more compact description~\cite{Singh2013sym-vs-min}. Using the symmetry structure of the tensors involves a more cumbersome implementation of the methods (described in detail in the previous references), but in exchange allows one to work with blocks which have smaller bond dimension, which reduces the computational cost of contractions at the lowest level. Symmetric tensors can be used to raise the global symmetry to a gauge one~\cite{Tagliacozzo2014,Zohar2015b,Haegeman2015}. This is done through the introduction of additional link tensors (analogous to link variables in usual formulations of lattice gauge theories). Finally, it is worth mentioning that, at the theoretical level, a framework has been developed to characterize MPS and PEPS in terms of the tensor symmetries~\cite{Schuch2010peps}, a formal approach that has produced fundamental results and continues to be an active and fruitful area of research~\cite{Cirac2021rmp}. \subsection{Fermions} \label{subsec:fermions} An advantage of the TN framework with respect to other numerical methods for quantum many-body problems is the possibility of treating problems with fermionic degrees of freedom, of fundamental interest for condensed matter and fundamental physics. Whereas in this case Quantum Monte Carlo methods are often obstructed by the sign problem, which causes the cost of convergence to increase exponentially with the system size, TN calculations can indistinctly treat fermionic and spin setups.
In one spatial dimension, fermionic modes do not pose a real problem, as they can be mapped to spins through the Jordan-Wigner transformation, which maps local fermionic models onto local spin Hamiltonians, such that both can be treated with exactly the same algorithms. In higher dimensions, however, a similar transformation does not maintain the locality of the model. An alternative that maps local fermions to local spins and would support a treatment with standard TNS algorithms was introduced in~\cite{Verstraete2005fermions}, but at the cost of introducing additional degrees of freedom that double the size of the system. It is however possible to define TNS directly in terms of fermionic degrees of freedom. The explicit construction was presented in several independent, but essentially equivalent, proposals~\cite{Corboz2009fMERA,Kraus2010,Corboz2010fpeps,Pineda2010}. The fundamental idea is to work in a representation in which all spaces, virtual and physical, are fermionic and have well-defined parity, i.e. the tensors are symmetric with respect to parity transformations, in the sense described in~\ref{subsec:syms}. Then it is possible to encode the statistics of fermionic operators in a local way, such that the scaling of the computational cost with the system size is preserved. The most intuitive formulation~\cite{Corboz2009fMERA,Corboz2010fpeps} can be visualized as an effective linear ordering of the fermionic modes, fixed once a graphical representation of the TNS is chosen (the order would be that obtained when projecting all the sites of the graph onto a line). Each crossing of legs in the diagram has to be accounted for, as it involves commutations of fermionic operators. This can be achieved by substituting the crossing with a swap matrix, which introduces a negative sign when fermionic degrees of freedom with odd parity are exchanged. Thanks to the symmetry of the tensors, the swap matrices can be moved through the network and be absorbed into local tensors, and the contraction can follow the same sequence as in the spin case, thus keeping the leading cost unchanged. This formalism, which can be combined with additional symmetries~\cite{Bruognolo2021nonAbelian}, has already made it possible for iPEPS to beat any other computational method in some parameter regimes of the Hubbard model~\cite{Zheng2017hubbard}. \subsection{Dynamics} \label{subsec:dynamics} Simulating time evolution is a crucial tool for understanding the out-of-equilibrium dynamics of quantum many-body systems, linked to fundamental questions such as thermalization. Together with the applicability to fermionic problems, being able to address real-time evolution is precisely one of the main advantages of TNS as compared to Monte Carlo methods. The TNS toolbox has several different methods to tackle these problems. Many of them produce an approximation to the time-evolved state within the desired family [see~\cite{Paeckel2019tevol} for a recent detailed review]. The standard algorithms described in~\ref{subsec:tebd} proceed by constructing an approximation of the evolution operator $U(\delta)=e^{-i \delta H}$ for a finite time step $\delta$ and applying it onto a TNS wave function. In general, this increases the bond dimension, and it must be followed by a truncation step that reduces the tensors again. A limitation of these methods is that they rely on approximations of the Hamiltonian exponential operator that become exceedingly costly as the range of the interactions increases.
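To make the elementary step of these standard algorithms concrete, the following minimal sketch (Python/NumPy; the function name and conventions are ours, and canonical-form bookkeeping is deliberately simplified) applies a two-site Trotter gate to a pair of neighbouring MPS tensors and truncates the enlarged bond with an SVD:
\begin{verbatim}
import numpy as np

def apply_two_site_gate(A, B, gate, chi_max):
    # A has shape (Dl, d, D), B has shape (D, d, Dr); the gate, e.g. a
    # matrix exponential exp(-i*delta*h) reshaped to (d, d, d, d), acts
    # on the two physical legs
    Dl, d, _ = A.shape
    Dr = B.shape[2]
    theta = np.tensordot(A, B, axes=(2, 0))                   # (Dl,d,d,Dr)
    theta = np.tensordot(gate, theta, axes=([2, 3], [1, 2]))  # (d,d,Dl,Dr)
    theta = theta.transpose(2, 0, 1, 3).reshape(Dl * d, d * Dr)
    # truncated SVD restores the MPS form with a bounded bond dimension
    U, S, Vh = np.linalg.svd(theta, full_matrices=False)
    chi = min(chi_max, len(S))
    S = S[:chi] / np.linalg.norm(S[:chi])   # renormalize after truncation
    A_new = U[:, :chi].reshape(Dl, d, chi)
    B_new = (S[:, None] * Vh[:chi]).reshape(chi, d, Dr)
    return A_new, B_new
\end{verbatim}
Keeping only the $\chi_{\max}$ largest singular values is precisely the truncation step mentioned above; the discarded singular weight provides a convenient error estimate for each time step.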
Krylov-based methods, instead, directly target the result of the evolution step by approximating the application of the operator on the state as a linear combination of Krylov vectors~\cite{GarciaRipoll2006,Wall2012njp}, rather than explicitly approximating the evolution operator in the full space. This in turn requires approximating the Krylov vectors themselves by TNS. A related approach is using Chebyshev expansions of the exponential operators~\cite{Holzner2011cheMPS}. The more recently proposed time-dependent variational principle (TDVP)~\cite{Haegeman2011tdvp,Haegeman2016unifying} adopts a different strategy, in which the MPS tensors are evolved such that the evolution never leaves the MPS manifold. This is achieved by projecting the variation of the wave function, given by the rhs of the Schr\"odinger equation, onto the local tangent plane of the MPS. Despite its different philosophy, TDVP algorithms for finite and infinite systems can be formulated in terms of essentially the same low-level primitives as the traditional ones~\cite{Haegeman2016unifying}. That is, the tensors of the ansatz can be updated according to the solution of a local evolution, in this case given by effective Hamiltonians that result from the tangent plane projection. An advantage of this method is that it preserves conserved quantities of the evolved state, such as the norm and the energy. In the uniform MPS case, the TDVP algorithm is the first representative of a new generation of TNS algorithms, so-called tangent-space methods~\cite{Haegeman2013post}, based on exploiting the geometric structure of the MPS manifold. These increasingly popular methods have multiple applications beyond time evolution, including the variational optimization of uMPS or finding elementary excitations, and have been partly adapted for PEPS [see~\cite{Vanderstraeten2018tangent} for a pedagogical overview]. More recently, generalizations for TTN and other isometric TN have been introduced~\cite{Kloss2020tdvpTree,Bauernfeind2020tdvpTTN,Hauru2021riemannian}. The approaches described above provide powerful algorithms to investigate the evolution of quantum systems for moderate times, or close to equilibrium~\cite{Paeckel2019tevol}. However, they are still subject to the fundamental limitation mentioned in \ref{subsec:tebd}: under time evolution, entanglement can grow fast, so that the bond dimension of the ansatz would need to grow exponentially with the simulated time~\cite{Osborne2006,Schuch2008}, and after short times the simulation becomes unfeasible, a problem that has been termed the~\emph{entanglement barrier}. But for physical problems, the interest is often not in the full description of the state, but in expectation values of local observables, which correspond to experimentally accessible quantities. Here a paradoxical situation takes place: in the long time limit, observables are expected to thermalize or equilibrate to values that are well described by statistical ensembles, which can themselves be efficiently approximated by (mixed) TNS, but in most cases the entanglement barrier makes it impossible to reach this regime by following the evolution of the state. For this reason, an active effort is being dedicated to the investigation of new methods that avoid the entanglement barrier and manage to describe the long time dynamics of local quantities. A first proposal was evolving operators in the Heisenberg picture~\cite{Hartmann2009heisenberg} using a suitably adapted time evolution algorithm.
Despite not completely solving the entanglement problem, such an approach constitutes the basis of many other strategies for dynamical quantities. Another idea was to target the TN that represents the time-dependent local observables, and to approximate its contraction in the transverse direction, after folding~\cite{Banuls2009fold,Mueller-Hermes2012}, which can give access to longer times, especially when exploiting the finite propagation velocity of correlations~\cite{Frias2022lc,Lerose2022lc}. This remains an active area of research, and several new strategies have been proposed in the last years to focus on the local observables~\cite{White2018therm,Surace2019trading,Rakovszky2020dissip}. \subsection{Excitations} \label{subsec:excited} With the variational approach for the ground state (section~\ref{subsec:variational}) it is also possible to target low excited states, by simply orthogonalizing the targeted state with respect to any number of previously computed ones~\cite{McCulloch2007,Schollwoeck2011}, an approach that is most useful in the case of finite systems. A particularly useful ansatz for elementary excitations is to model them as local perturbations acting on the vacuum. In the TNS framework, it is possible to construct well-defined momentum states of this form by suitable superpositions of a locally modified ground state~\cite{Oestlund1995}. Tangent-space methods offer a way to generalize this construction that is especially powerful in the thermodynamic limit~\cite{Haegeman2013post,Vanderstraeten2018tangent}. In this framework, elementary excitations are written as tangent vectors with position-dependent momentum factors, and their energies can be optimized variationally. Also topologically non-trivial excitations (such as domain walls) can be captured in this language. Although low-energy excitations such as the ones above are often observed to fulfill an approximate area law, the same is not true for generic, highly excited states. An exception is the case of many-body localized Hamiltonians. Hence, several specific algorithms have been developed to target eigenstates at some high energy value $E$, for instance using a \emph{shift and invert} strategy~\cite{Yu2017excited}, targeting the state at given energy that maximizes the overlap with a particular product state~\cite{Khemani2016exc}, or searching for the lowest eigenvalue of $(H-E)^2$~\cite{Lim2016exc}. \section{FURTHER TN APPROACHES AND PERSPECTIVES} \label{sec:other} Other aspects of TN technologies, beyond the standard TNS tools discussed in the previous sections, offer additional ways to explore the physics of complex systems. \subsection{Network renormalization approaches} \label{subsec:trg} Some of the earliest works in the TN literature, before the quantum information perspective shaped the language for TN, already pointed out the connection between many-body problems and tensors through the partition functions of classical spin systems~\cite{NishinoOkunishi1996ctmrg,Nishino2001twodim}. In this approach, a TN represents exactly the partition function of a classical model (which might as well correspond to a path integral formulation of a quantum one), and tensor contractions can be used to approximate the result.
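As a concrete illustration, the following sketch (Python/NumPy; the symmetric splitting of the bond weight is one standard convention among several) builds the local rank-4 tensor whose contraction over a square lattice reproduces the partition function of the 2D classical Ising model:
\begin{verbatim}
import numpy as np

def ising_site_tensor(beta):
    # Boltzmann weight on a bond for spins s, s' in {+1, -1}
    B = np.array([[np.exp(beta), np.exp(-beta)],
                  [np.exp(-beta), np.exp(beta)]])
    # split B = M @ M.T symmetrically via its eigendecomposition
    w, v = np.linalg.eigh(B)
    M = v @ np.diag(np.sqrt(w)) @ v.T
    # site tensor T[u,l,d,r]: sum over the spin shared by the four bonds
    return np.einsum('au,al,ad,ar->uldr', M, M, M, M)
\end{verbatim}
The coarse-graining schemes described next operate directly on networks of this kind.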
The tensor renormalization group (TRG) method introduced in~\cite{Levin2007} is based on a block renormalization of a two-dimensional TN: in each coarse-graining step a local group of tensors is replaced by their approximate contraction with truncated bonds, such that the size of the TN is divided by a constant (see fig.~\ref{fig4}). The original truncation is done simply by singular value decompositions of the tensors being contracted. In~\cite{Xie2009} a new strategy was introduced, called the second renormalization group (SRG) method, in which a different truncation is chosen that tries to maintain the fidelity of the contraction of the whole network, by taking into account the environment of the tensor that is being computed. A more efficient contraction and truncation strategy that can be applied to higher dimensional systems was later proposed in~\cite{Xie2012}, using the higher order singular value decomposition. \begin{center} \begin{figure*}[t] \includegraphics[width=.8\textwidth]{fig4.pdf} \caption{Coarse-graining step in the simplest TRG schemes: (\textit{a}) TN representing a partition function of a classical spin model; (\textit{b}) original TRG; (\textit{c}) higher order TRG (HOTRG). } \label{fig4} \end{figure*} \end{center} A shortcoming of the approach, already identified in~\cite{Levin2007}, is that some short-range entanglement structures cannot be removed by the TRG coarse-graining, in particular the corner double line (CDL) tensor. Several modifications have been proposed to solve this issue, such as tensor network renormalization (TNR), which includes disentanglers, in the spirit of MERA, before the renormalization steps~\cite{Evenbly2015tnr}. Other proposals have been the iterative optimization of the tensors around a loop~\cite{Yang2015}, or different local index truncations that take care of internal correlations~\cite{Hauru2018gilt,Evenbly2018gauge}. TRG approaches are also useful to contract the TN corresponding to observables for quantum states in higher dimensions, and can then be used as part of PEPS optimization algorithms~\cite{Jiang2008trg-qstates,Gu2008terg} (see sec.~\ref{subsec:peps}). A related topic is the treatment of fermionic problems in TRG approaches. In \cite{Gu2010grassmann} it was shown that wave functions and expectation values of many-body fermionic (but also bosonic) systems could be expressed and contracted as a Grassmann tensor network, in which tensor components are given in terms of Grassmann variables, and for which a suitable TRG approach can be defined. A compact ansatz of this form, together with algorithms to renormalize the network and to evolve the tensors, was presented in~\cite{Gu2013gtrg}, and has been used, for instance, to study discretized field theories with fermionic degrees of freedom [see references in~\cite{Banuls2020ropp,Meurice2020review}]. \subsection{Connections to other techniques} \label{subsec:connections} Exploring the potential connections between TN methods and other techniques is an exciting possibility that, on the one hand, can result in new or improved algorithms and, on the other, opens the door to treating new problems with TN methods, as the following examples illustrate. \begin{itemize} \item{Monte Carlo algorithms.} Monte Carlo sampling can be used to speed up TN contractions, and to variationally optimize TNS parameters~\cite{Sandvik2007mc,Wang2011mctns}.
With a complementary perspective, TN contractions can be employed to directly sample configurations from the partition function~\cite{Ueda2005snapshot,Ferris2012perfect,Rams2021sampling}, but also to define a Markov chain with collective updates~\cite{Frias2021tnmh}. \item{Machine Learning.} The connections between TN and machine learning drive some of the most recent developments, including the use of TNS models for machine learning tasks~\cite{Stoudenmire2016nips,Han2018unsupervised}, and also the import of numerical tools, such as automatic differentiation, into TN algorithms~\cite{Liao2019differentiable}. \item{Field theory.} The interplay between TNS and quantum field theory is another decidedly active area, which has produced accurate numerical results for lattice gauge theories~\cite{Banuls2020ropp,Meurice2020review}, but also motivates formal developments, such as gauge symmetric (see sec.~\ref{subsec:syms}) and continuous~\cite{Verstraete2010cMPS,Haegeman2013cMERA} formulations of TNS. \end{itemize} \section{OUTLOOK} \label{sec:outlook} The field of tensor networks has grown impressively in the last decade and remains a vibrant research area. Current TN research moves forward in different directions. A rather formal approach explores the mathematical aspects of these ansatzes. With a more applied perspective, significant effort is being devoted to the development of numerical TN methods, a multifaceted enterprise, some facets of which have been highlighted in the previous pages. And the field continues to uncover synergies with seemingly remote topics, and to develop in new and creative ways. All these directions are likely to produce exciting results in the coming years, maybe finding useful TNS subfamilies, improving the efficiency of high-dimensional or dynamical calculations, or bridging the gaps between formal and numerical developments. At the same time, mature TN algorithms are well established as competitive computational methods for the study of many-body problems. These algorithms, reviewed in the first part of this article, make it easy for the newcomer to try TN on an existing problem, and simultaneously serve as a platform for the more specialized researcher to experiment with new algorithms or to draw new connections between TN and other disciplines. \section*{ACKNOWLEDGMENTS} I am deeply grateful to E. Carmona, P. Emonts, M. Fr{\'{\i}}as-P{\'e}rez and T. Nishino for their critical reading and constructive comments on an earlier version of this article. This work was partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2111 -- 390814868. \bibliographystyle{ar-style4}
\section*{Appendix} \setcounter{section}{0} \renewcommand{\thesection}{\Alph{section}} \label{sec:appendix} \section{Network Visualizations} \label{appendix:visualizations} \subsection{Weight Sharing} \begin{figure}[h] \begin{minipage}{1.0\textwidth} \subfigure[Linear Policy - $17$ distinct parameters]{\includegraphics[width=0.37\textwidth, height=3.65cm]{plots/linear.png}} \hspace{0.2cm} \subfigure[One-Hidden-Layer Policy - $17$ distinct parameters]{\includegraphics[width=0.6\textwidth, height =3.7cm]{plots/chrome_toeplitz.png}} \end{minipage} \caption{(a): Partitioning of edges into distinct weight classes obtained for the linear policy for the $\mathrm{HalfCheetah}$ environment from $\mathrm{OpenAI}$ $\mathrm{Gym}$. (b): Partitioning of edges for a policy with one hidden layer encoded by two matrices. State and action dimensionalities are $s=17$ and $a=6$, respectively, and the hidden layer for the architecture in (b) has size $41$. Thus the sizes of the matrices are $17 \times 6$ for the linear policy in (a), and $17 \times 41$, $41 \times 6$ for the nonlinear one in (b).} \label{fig:partitionings} \end{figure} \subsection{Edge Pruning} \begin{figure}[h] \includegraphics[keepaspectratio, width=0.43\textwidth]{plots/swimmer_architectures.png} \includegraphics[keepaspectratio, width=0.56\textwidth]{plots/swimmer_convergence_regularized_evolution.png} \caption{(Left) Final architectures that PG and Reg-Evo converged to on Swimmer with a linear (L) policy, as specified in Subsection \ref{visualizing_convergence}. Note that the controller does not select all edges even if it is allowed to in the boolean search space, but also \textit{ignores some state values.} (Right): Convergence result for Reg-Evo, similar to Fig. \ref{fig:visualizing_swimmer} in Subsection \ref{visualizing_convergence}.} \label{fig:swimmer_arch} \end{figure} \section{Extended Experimental Results} \label{appendix:extended_experiments} As standard in RL, we take the mean and standard deviation of the final rewards across 3 seeds for every setting. ``L", ``H" and ``H, H" stand for: linear policy, policy with one hidden layer, and policy with two such hidden layers, respectively. \subsection{Baseline Method Comparisons} \label{appendix:baseline_method} In terms of the masking baseline, while \citep{Lenc2019NonDifferentiableSL} fixes the sparsity of the mask, we instead initialize the sparsity at $50\%$ and increasingly reward smaller networks (measured by the size of the mask $|m|$) during optimization, to show the effect of pruning. Using this approach on several $\mathrm{OpenAI}$ $\mathrm{Gym}$ tasks, we demonstrate that the masking mechanism is capable of producing compact, effective policies up to a high level of pruning. At the same time, we show a significant decrease in performance at the $80$-$90\%$ compression level, accurately quantifying its limits for RL tasks (see: Fig. \ref{fig:prune_fail}). \begin{figure}[h] \begin{minipage}{1.0\textwidth} \subfigure[]{\includegraphics[keepaspectratio, width=0.245\textwidth]{plots/Pruning_Motivation/Swimmer.png}} \subfigure[]{\includegraphics[keepaspectratio, width=0.23\textwidth]{plots/Pruning_Motivation/HalfCheetah.png}} \subfigure[]{\includegraphics[keepaspectratio, width=0.245\textwidth]{plots/Pruning_Motivation/Striker.png}} \subfigure[]{\includegraphics[keepaspectratio, width=0.265\textwidth]{plots/Pruning_Motivation/walker2d.png}} \end{minipage} \caption{The results from training both a mask $m$ and weights $\theta$ of a neural network with two hidden layers.
`Usage' stands for the number of edges used after the filtering defined by the mask. At the beginning, the mask is initialized such that $|m|$ is equal to $50\%$ of the total number of parameters in the network.} \label{fig:prune_fail} \end{figure} \newpage \subsection{Weight Sharing} \label{appendix:weight_sharing} \setlength{\tabcolsep}{6pt} \begin{table}[h] \small \centering \scalebox{0.7}{ \begin{tabular}{l*{5}{c}r} \toprule \textbf{Env.} & \textbf{Dim.} & \textbf{Arch.} & \textbf{Partitions} & \textbf{(PG, Reg-Evo, RS) Reward} \\ \midrule $\mathrm{Swimmer}$ & (8,2) & L & 8 & $(366 \pm 0, 296 \pm 31, 5 \pm 1)$ \\ $\mathrm{Reacher}$ & (11,2) & L & 11 & $(-10 \pm 4, -157 \pm 62, -135 \pm 10) $ \\ $\mathrm{Hopper}$ & (11,3) & L & 11 & $(2097 \pm 788, 1650 \pm 320, 16 \pm 0)$ \\ $\mathrm{HalfCheetah}$ & (17,6) & L & 17 & $(2958 \pm 73, 3477 \pm 964, 129 \pm 183)$ \\ $\mathrm{Walker2d}$ & (17,6) & L & 17 & $(326 \pm 86, 2079 \pm 1085, 8 \pm 0)$ \\ $\mathrm{Pusher}$ & (23,7) & L & 23 & $(-68 \pm 2, -198 \pm 76, -503 \pm 4)$ \\ $\mathrm{Striker}$ & (23,7) & L & 23 & $(-247 \pm 11, -376 \pm 149, -590 \pm 18)$ \\ $\mathrm{Thrower}$ & (23,7) & L & 23 & $(-819 \pm 8, -1555 \pm 427, -12490 \pm 708)$ \\ \bottomrule \\ \end{tabular} \quad \quad \begin{tabular}{l*{5}{c}r} \toprule \textbf{Env.} & \textbf{Dim.} & \textbf{Arch.} & \textbf{Partitions} & \textbf{(PG, Reg-Evo, RS) Reward} \\ \midrule $\mathrm{Swimmer}$ & (8,2) & H & 8 & $(361 \pm 4, 362 \pm 1, 15 \pm 0)$ \\ $\mathrm{Reacher}$ & (11,2) & H & 11 & $(-6 \pm 0, -23 \pm 11, -157 \pm 2)$ \\ $\mathrm{Hopper}$ & (11,3) & H & 11 & $(3288 \pm 119, 2834 \pm 75, 95 \pm 2)$ \\ $\mathrm{HalfCheetah}$ & (17,6) & H & 17 & $(4258 \pm 1034, 4894 \pm 110, -41 \pm 5)$ \\ $\mathrm{Walker2d}$ & (17,6) & H & 17 & $(1684 \pm 1008, 2026 \pm 46, -5 \pm 1)$ \\ $\mathrm{Pusher}$ & (23,7) & H & 23 & $(-225 \pm 131, -350 \pm 236, -1049 \pm 40)$ \\ $\mathrm{Striker}$ & (23,7) & H & 23 & $(-992 \pm 2, -466 \pm 238, -1009 \pm 1) $ \\ $\mathrm{Thrower}$ & (23,7) & H & 23 & $ (-1873 \pm 690, -818 \pm 363, -12847 \pm 172) $ \\ \bottomrule \\ \end{tabular} } \caption{Results via weight sharing across PG, Reg-Evo, and random search controllers. The number of partitions is always set to be $\max (|\mathcal{S}|, |\mathcal{A}|)$.} \label{weight_sharing_table} \end{table} \subsection{Edge Pruning} \label{appendix:edge_pruning} \setlength{\tabcolsep}{6pt} \begin{table}[h] \small \centering \scalebox{0.7}{ \begin{tabular}{l*{5}{c}r} \toprule \textbf{Env.} & \textbf{Dim.} & \textbf{Arch.} & \textbf{(PG, Reg-Evo, RS) Reward} \\ \midrule $\mathrm{Swimmer}$ & (8,2) & H & $(105 \pm 116, 343 \pm 2, 21 \pm 1)$ \\ $\mathrm{Reacher}$ & (11,2) & H & $(-16 \pm 5, -52 \pm 5, -160 \pm 2)$ \\ $\mathrm{Hopper}$ & (11,3) & H & $(3349 \pm 206, 2589 \pm 106, 66 \pm 0)$ \\ $\mathrm{HalfCheetah}$ & (17,6) & H & $(2372 \pm 820, 4016 \pm 726, -156 \pm 22)$ \\ $\mathrm{Walker2d}$ & (17,6) & H & $(3813 \pm 128, 1847 \pm 710, 0 \pm 2)$ \\ $\mathrm{Pusher}$ & (23,7) & H & $(-133 \pm 31, -156 \pm 17, -503 \pm 15)$ \\ $\mathrm{Striker}$ & (23,7) & H & $(-178 \pm 54, -130 \pm 16, -464 \pm 13)$ \\ $\mathrm{Thrower}$ & (23,7) & H & $(-532 \pm 29, -1107 \pm 158, -7797 \pm 112)$ \\ \bottomrule \\ \end{tabular} } \caption{Results via edge pruning across PG, Reg-Evo, and random search controllers.
The number of edges is always set to 64 in total, or (32, 32) across the two weight matrices when using a single hidden layer.} \label{edge_pruning_table} \end{table} \subsection{Edge Pruning + Nonlinearity Search} \label{appendix:edge_pruning_nonlinearity} \setlength{\tabcolsep}{6pt} \begin{table}[h] \small \centering \scalebox{0.7}{ \begin{tabular}{l*{5}{c}r} \toprule \textbf{Env.} & \textbf{Dim.} & \textbf{Arch.} & \textbf{(PG, Reg-Evo, RS) Reward} \\ \midrule $\mathrm{Swimmer}$ & (8,2) & H & $(247 \pm 110, 359 \pm 5, 11 \pm 3)$ \\ $\mathrm{Hopper}$ & (11,3) & H & $(2270 \pm 1464, , 57 \pm 7)$ \\ $\mathrm{HalfCheetah}$ & (17,6) & H & $(3028 \pm 469, 5436 \pm 978, -268 \pm 29)$ \\ $\mathrm{Walker2d}$ & (17,6) & H & $(1057 \pm 413, 2006 \pm 248, 0 \pm 1)$ \\ \bottomrule \\ \end{tabular} } \caption{Results using the same setup as Table \ref{edge_pruning_table}, but allowing nonlinearity search.} \label{edge_pruning_op_edge_table} \end{table} \newpage \section{Exact Setup and Hyperparameters} \subsection{Controller Setups} \subsubsection{Regularized Evolution} We use uniform sampling, with the tournament size set to $\sqrt{n}$, where $n$ is the number of workers (150 by default; see the "ES Algorithm" subsection), as recommended in \cite{regularized_evolution}. \subsubsection{Policy Gradient} We use a gradient update batch size of 64 for the Pointer Network, while using PPO as the policy gradient algorithm, with its default (recommended) hyperparameters from \cite{pyglove}. \subsection{Policy} \label{hyperparameter_policy} By default, unless specified, we use Tanh non-linearities with 32 units for each hidden layer. We use the following default hyperparameters for each search space: \subsubsection{Weight Sharing} The number of partitions (or "colors") is set to $\max(|\mathcal{S}|, |\mathcal{A}|)$. This both ensures a linear number of trainable parameters, compared to the quadratic number for unstructured networks, and allows sufficient parameterization to deal with the entire set of state/action values. \subsubsection{Edge Pruning} We collect all possible edges from a normal neural network into a pool $E_{max}$ and set $|E| = 64$ as the number of distinct choices, passed to \texttt{pyglove.manyof}. Similar to weight sharing, this choice is based on the value $\max(|\mathcal{S}|, H)$ or $\max(|\mathcal{A}|, H)$, where $H=32$ is the number of hidden units, which is linear in proportion to, respectively, the maximum numbers of weights $|\mathcal{S}| \cdot H$ and $|\mathcal{A}| \cdot H$. Since a hidden layer neural network has two weight matrices, due to the hidden layer connecting to both the state and actions, we thus ideally have a maximum of $32 + 32 = 64$ edges. \subsubsection{Nonlinearity Search} We use the same nonlinearity choices found in \cite{ha}. These are: \{Tanh, ReLU, Exp, Identity, Sin, Sigmoid, Absolute Value, Cosine, Square, Reciprocal, Step Function.\} \subsection{Environment} For all environments, we set the horizon $T=1000$. We also use the reward without alive bonuses for weight training, as is commonly done \cite{mania2018simple}, to avoid local-maximum behaviors (such as an agent simply standing still to collect a total of $1000$ reward), but report the final score as the real reward with the alive bonus. \subsection{ES Algorithm} For all environments, we used reward normalization and state normalization implemented from \citep{horia}. We set the smoothing parameter $\sigma = 0.1$ and the step size $\eta = 0.01$.
Unless specified, we use 150 workers, where 100 are used for perturbations in the antithetic case (and thus 100 / 2 = 50 distinct perturbations are used) and 50 more are used for evaluation at the current parameter settings. \subsection{Baseline Details} \label{baseline_details} We consider Unstructured, Toeplitz, and Circulant networks, as well as a masking mechanism \citep{stockholm,Lenc2019NonDifferentiableSL}. We introduce their details below. Notice that all baseline networks share the same general ($1$-hidden layer, Tanh nonlinearity) architecture from Appendix \ref{hyperparameter_policy}. This implies that we only have two weight matrices $W_1 \in \mathbb{R}^{|\mathcal{S}|\times h },W_2\in \mathbb{R}^{h\times |\mathcal{A}|}$ and two bias vectors $b_1\in\mathbb{R}^{h},b_2\in \mathbb{R}^{|\mathcal{A}|}$, where $|\mathcal{S}|,|\mathcal{A}|$ are the dimensions of the state/action spaces. These networks differ in how they parameterize the weight matrices. We have: \subsubsection{Unstructured.} A fully-connected layer with an unstructured weight matrix $W\in \mathbb{R}^{a\times b}$ has a total of $ab$ independent parameters. \subsubsection{Toeplitz.} A Toeplitz weight matrix $W\in \mathbb{R}^{a\times b}$ has a total of $a+b-1$ independent parameters. This architecture has been shown to be effective in generating good performance on benchmark tasks while compressing parameters \citep{stockholm}. \subsubsection{Circulant.} A circulant weight matrix $W\in \mathbb{R}^{a\times b}$ is defined for square matrices $a=b$. We generalize this definition by considering a square matrix of size $n \times n$, where $n=\max\{a,b\}$, and then truncating appropriately. This produces $n$ independent parameters. \subsubsection{Masking.} One additional technique for reducing the number of independent parameters in a weight matrix is to mask out redundant parameters \citep{Lenc2019NonDifferentiableSL}. This differs slightly from the aforementioned architectures, since those allow for parameter sharing whereas the masking mechanism carries out pruning. To be concrete, we consider a fully-connected matrix $W \in \mathbb{R}^{a\times b}$ with $ab$ independent parameters. We also set up a mask weight matrix $\Gamma \in \mathbb{R}^{a\times b}$. Then the mask is generated via \begin{align} \Gamma^{\prime} = \text{softmax}(\Gamma / \alpha) \nonumber \end{align} where $\text{softmax}$ is applied elementwise and $\alpha$ is a constant. We set $\alpha = 0.01$ so that the $\text{softmax}$ is effectively a thresholding function which outputs near-binary masks. We then treat the entire concatenated parameter $\theta = [W,\Gamma]$ as trainable and optimize both parts using ES methods. Note that this softmax method can also be seen as an instance of the continuous relaxation method from DARTS \cite{darts}. At convergence, the effective number of parameters is $ab \cdot \lambda$, where $\lambda$ is the proportion of $\Gamma^{\prime}$ components that are non-zero. During optimization, we implement a simple heuristic that encourages sparse networks: while maximizing the true environment return $f(\theta)=\sum_{t=1}^{T} r_{t}$, we also maximize the ratio $1-\lambda$ of mask entries that are zero. The ultimate ES objective is: $f^{\prime}(\theta)= \beta \cdot f(\theta) + (1-\beta)\cdot (1-\lambda)$, where $\beta \in [0,1]$ is a combination coefficient which we anneal as training progresses. We also properly normalize $f(\theta)$ and $(1-\lambda)$ before the linear combination to ensure that the procedure is not sensitive to reward scaling.
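A minimal sketch of this masked objective follows (Python/NumPy; the function names are ours, we interpret the elementwise softmax as a two-way choice per entry, i.e. a sharp sigmoid, and we omit the normalization of the two terms):
\begin{verbatim}
import numpy as np

def soft_mask(gamma, alpha=0.01):
    # near-binary mask Gamma' from the trainable logits Gamma; a two-way
    # softmax per entry is equivalent to a sigmoid with temperature alpha
    return 1.0 / (1.0 + np.exp(-gamma / alpha))

def masked_objective(env_return, W, gamma, beta):
    # env_return plays the role of f(theta) = sum_t r_t for one rollout
    mask = soft_mask(gamma)
    f = env_return(W * mask)                # forward passes use W * Gamma'
    sparsity = 1.0 - (mask > 0.5).mean()    # the ratio 1 - lambda of zeros
    return beta * f + (1.0 - beta) * sparsity
\end{verbatim}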
\section{Conclusion \& Future Work} We presented a scalable and flexible algorithm, ES-ENAS, for performing efficient neural architecture search for reinforcement learning policies by combining Evolutionary Strategies with a NAS controller. ES-ENAS is compatible with the latest frameworks (blackbox optimization \cite{nasbench} and decoupling via PyGlove \cite{pyglove}), as well as the latest techniques for NAS (multiobjective optimization \cite{efficientnet, mnasnet} and regularized evolution \cite{regularized_evolution}). While we have shown an example utility of ES-ENAS on learning structured neural network architectures for RL policies via edge pruning and weight sharing, we believe that this work can be the start of a new line of thinking for NAS methods applied to RL. One highly impactful application may be designing convolutional cells for vision-based RL policies to improve generalization \cite{procgen, coinrun, observational}, similar to the search spaces found in standard SL applications \cite{dean,efficientnet, nasbench}. One may use similar search spaces to design EfficientNet variants for vision-based RL as well, an extension of the multiobjective experiments found in this paper for dense layers. We hope that applying NAS techniques in RL may ultimately automate architecture design for policies, as well as change perspectives on designing human-made policies. \section{Experimental Results} The experimental section is organized as follows: \begin{itemize}[noitemsep,topsep=2pt,parsep=0pt,partopsep=0pt] \item In Subsections \ref{search_space_definitions}, \ref{baseline_definitions}, and \ref{environments}, we describe the search space definitions, baseline methods, and environments used, respectively. \item In Subsections \ref{controller_comparison}, \ref{controller_sample_complexity}, and \ref{multiobjective_case}, we discuss the ablation results and extra modifications of the ES-ENAS method via plots. These results include, respectively: comparisons across different controllers, sample complexity with respect to using varying numbers of workers, and multiobjective optimization. \item In Subsection \ref{full_results}, we present exhaustive tabular results displaying the final rewards across different methods. \end{itemize} We provide more experimental results in the Appendix. \subsection{Search Space Definitions} \label{search_space_definitions} In order to allow combinatorial flexibility, our neural network consists of vertices/values $V = \{v_{1},...,v_{k}\}$, where the initial block of $|\mathcal{S}|$ values $\{v_{1},...,v_{|\mathcal{S}|}\}$ corresponds to the environment state, and the last block of $|\mathcal{A}|$ values $\{v_{k-|\mathcal{A}|+1}, ..., v_{k}\}$ corresponds to the action output values. Directed edges $E \subseteq E_{max} = \{e_{i,j} = (i, j) \> \> | \> \> 1 \le i < j \le k, \> |\mathcal{S}| < j \}$ are constructed with corresponding weights $W = \{w_{i,j} \> \> | \> \> (i, j) \in E\}$, and nonlinearities $F = \{f_{|\mathcal{S}|+1},...,f_{k}\}$ for the non-state vertices. Thus a forward propagation consists of looping in order $j \in \{|\mathcal{S}|+1,...,k\}$ and computing the output values $ v_{j} = f_{j} \left(\sum_{(i,j) \in E} v_{i} w_{i,j} \right)$. \begin{figure}[h] \begin{subfigure}{} \includegraphics[keepaspectratio, width=0.3\textwidth]{plots/network_pic_new.pdf} \caption{Example of our neural network setup with selected edges and corresponding weight labels, when $|\mathcal{S}| = 4$ and $|\mathcal{A}| = 3$, with a hidden layer of size $6$.
Solid edges are those learned by the algorithm.} \label{fig:network_pic} \end{subfigure} \begin{subfigure}{} \[\mathbf{T}_{t}= \begin{tikzpicture}[baseline=-\the\dimexpr\fontdimen22\textfont2\relax ] \matrix (m)[matrix of math nodes,left delimiter=(,right delimiter=)] { w_{1,5} & w_{1,6} & w_{1,7} & w_{1,8} & w_{1,9} & w_{1,10}\\ w_{2,5} & w_{2,6} & w_{2,7} & w_{2,8} & w_{2,9} & w_{2,10}\\ w_{3,5} & w_{3,6} & w_{3,7} & w_{3,8} & w_{3,9} & w_{3,10}\\ w_{4,5} & w_{4,6} & w_{4,7} & w_{4,8} & w_{4,9} & w_{4,10}\\ }; \begin{pgfonlayer}{myback} \fhighlight[orange]{m-1-1}{m-1-1} \fhighlight[orange]{m-2-2}{m-2-2} \fhighlight[orange]{m-3-3}{m-3-3} \fhighlight[orange]{m-4-4}{m-4-4} \fhighlight[yellow]{m-1-2}{m-1-2} \fhighlight[yellow]{m-2-3}{m-2-3} \fhighlight[yellow]{m-3-4}{m-3-4} \fhighlight[yellow]{m-4-5}{m-4-5} \fhighlight[purple]{m-1-3}{m-1-3} \fhighlight[purple]{m-2-4}{m-2-4} \fhighlight[purple]{m-3-5}{m-3-5} \fhighlight[purple]{m-4-6}{m-4-6} \fhighlight[pink]{m-1-4}{m-1-4} \fhighlight[pink]{m-2-5}{m-2-5} \fhighlight[pink]{m-3-6}{m-3-6} \fhighlight[magenta]{m-1-5}{m-1-5} \fhighlight[magenta]{m-2-6}{m-2-6} \fhighlight[cyan]{m-1-6}{m-1-6} \fhighlight[green]{m-2-1}{m-2-1} \fhighlight[green]{m-3-2}{m-3-2} \fhighlight[green]{m-4-3}{m-4-3} \fhighlight[gray]{m-3-1}{m-3-1} \fhighlight[gray]{m-4-2}{m-4-2} \fhighlight[brown]{m-4-1}{m-4-1} \end{pgfonlayer} \end{tikzpicture} \] \[\theta_{s}= \begin{tikzpicture}[baseline=-\the\dimexpr\fontdimen22\textfont2\relax ] \matrix (m)[matrix of math nodes,left delimiter=(,right delimiter=)] { w^{(1)} & w^{(2)} & w^{(3)} & w^{(4)} & w^{(5)} & w^{(6)} & w^{(7)} & w^{(8)} & w^{(9)} \\ }; \begin{pgfonlayer}{myback} \fhighlight[orange]{m-1-1}{m-1-1} \fhighlight[yellow]{m-1-2}{m-1-2} \fhighlight[purple]{m-1-3}{m-1-3} \fhighlight[pink]{m-1-4}{m-1-4} \fhighlight[magenta]{m-1-5}{m-1-5} \fhighlight[cyan]{m-1-6}{m-1-6} \fhighlight[green]{m-1-7}{m-1-7} \fhighlight[gray]{m-1-8}{m-1-8} \fhighlight[brown]{m-1-9}{m-1-9} \end{pgfonlayer} \end{tikzpicture} \] \caption{Example of the weight sharing mechanism using a Toeplitz pattern \cite{stockholm}, for the first layer in Fig. \ref{fig:network_pic}. Entries in each of the diagonals are colored the same, thus sharing the same weight value. The trainable weights $\theta_{s} = \left(w^{(1)},...,w^{(9)}\right)$ are denoted at the very bottom in vectorized form. As we see, a weight matrix with $24$ entries is effectively encoded by a $9$-dimensional vector.} \label{fig:toeplitz} \end{subfigure} \end{figure} Thus, for our \textbf{edge pruning} method, we group all possible edges $(i,j)$ of the neural network into a set, and select a fixed number of edges from this set. We can further search across potentially different nonlinearities, e.g. $f_{i} \in \{\text{tanh}, \text{sigmoid}, \text{sin},...\}$. For our \textbf{weight sharing} method, we assign to each edge $(i,j)$ one of many \textit{colors} $c \in \mathcal{C} = \{1,...,|\mathcal{C}|\}$, denoting the partition group the edge is assigned to, which defines the value $w_{i,j} \leftarrow w^{(c)}$. This is shown pictorially in Figs. \ref{fig:network_pic} and \ref{fig:toeplitz}. Note that in both cases, the size $|\mathcal{M}|$ of the search space is exponentially large; for edge pruning this is $\binom{|E_{max}|}{|E|}$ or $2^{|E_{max}|}$ when using a fixed or variable size $|E|$, respectively, and for weight sharing it is $|\mathcal{C}|^{|E|}$.
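To make the above concrete, the following minimal sketch (Python/NumPy; all names are ours) implements the forward propagation $v_{j} = f_{j}(\sum_{(i,j) \in E} v_{i} w_{i,j})$ for a given choice of edges, weights and nonlinearities, covering both parameterizations:
\begin{verbatim}
import numpy as np

def forward(state, num_actions, k, edges, weights, nonlins):
    # vertices are 1-indexed; the first |S| hold the state values and
    # the last |A| hold the action outputs
    v = {i + 1: float(s) for i, s in enumerate(state)}
    for j in range(len(state) + 1, k + 1):   # topological order
        pre = sum(v[i] * weights[(i, jj)] for (i, jj) in edges if jj == j)
        v[j] = nonlins[j](pre)               # e.g. np.tanh
    return [v[j] for j in range(k - num_actions + 1, k + 1)]
\end{verbatim}
For weight sharing, the dictionary \texttt{weights} would simply be filled beforehand as \texttt{weights[(i, j)] = w[color[(i, j)]]}.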
The search space sizes above are calculated to be greater than $10^{68}$ (compare with $10^{49}$, the size of NASBench's search space \cite{nasbench}) when using our specific hyperparameters, whose details can be found in Appendix \ref{hyperparameter_policy}. Thus, a purely random search will clearly be unable to exhaust the entire search space, and will also likely have difficulty attaining high rewards, which we confirm as a sanity check in Subsection \ref{controller_comparison}. In terms of actual code, combinatorial search spaces can be defined in terms of default search space \textit{primitives} in PyGlove \cite{pyglove}, with the most prominent being the "$\texttt{pyglove.oneof}$" and "$\texttt{pyglove.manyof}$" operations, which respectively choose one item, or a combination of multiple objects, from a container. These primitives may be combined in a nested structure via "$\texttt{pyglove.List}$" or "$\texttt{pyglove.Dict}$". Thus, we may simply define the search space as \texttt{pyglove.manyof($E_{max}$,$|E|$)} for the edge pruning method, and concatenate primitives \texttt{pyglove.oneof($e_{i,j}$,$\mathcal{C}$)} over all edges $e_{i,j} \in E_{max}$ for the weight sharing method. The search space may then be sent to a pre-implemented controller (as is the case for Reg-Evo, Policy Gradient, and pure random search), which proposes instances from the space. Thus, due to the simplicity of the PyGlove API, we see that appending a NAS pipeline to preexisting ES code is quite clean and straightforward. \subsection{Baseline Definitions} \label{baseline_definitions} In our experiments, we also include specific baselines, including a DARTS-like \cite{darts} softmax \textit{masking} method \citep{Lenc2019NonDifferentiableSL}, where a trainable boolean matrix mask is drawn from a multinomial distribution and element-wise multiplied with the weights before a forward pass. We also include the results from using Toeplitz and Circulant (a particular class of Toeplitz) matrices as weights from referenced works \citep{stockholm}. Specific details can be found in Appendices \ref{appendix:baseline_method} and \ref{hyperparameter_policy}. \subsection{Environments} \label{environments} We perform experiments on the following $\mathrm{OpenAI}$ $\mathrm{Gym}$ tasks: Swimmer, Reacher, Hopper, HalfCheetah, Walker2d, Pusher, Striker, Thrower, and Ant. These continuous control environments are commonly used for demonstrating the validity of RL methods, including popular algorithms such as PPO \cite{schulman2017proximal}, DDPG \cite{ddpg} and the original ES/ARS \cite{ES, mania2018simple}. For all environments, which are simulated in the Mujoco physics engine \cite{mujoco}, the state consists of a 1-D vector containing data such as the joint/arm positions and velocities of an agent, while the action consists of applying force via its limbs. The reward combines forward movement with a penalty for energy loss from applying actions, which encourages efficient movement. \subsection{Controller Comparisons} \label{controller_comparison} We first compare the random, Policy Gradient (PG), and Reg-Evo controllers in Fig. \ref{fig:controller_comparisons} on the edge pruning task. While we clearly see that the random controller usually does poorly, we find that the PG and Reg-Evo controllers both perform well on various tasks, with one outperforming the other depending on the type of task.
We further find that in most cases, the Reg-Evo controller converges much faster, although the PG controller may sometimes end up with a higher asymptotic performance. PG also generally produces higher variance between runs. Intriguingly, there is no clear winner between PG and Reg-Evo, unlike in supervised learning (SL), where it is usually clear that the now widely adopted Reg-Evo outperforms PG \cite{regularized_evolution}. \begin{figure}[h] \includegraphics[keepaspectratio, width=0.475\textwidth]{plots/controller_comparisons.png} \caption{Comparisons across different environments when using different controllers, on the edge pruning and weight sharing tasks, when using a linear layer (L) or hidden layer of size 32 (H32).} \label{fig:controller_comparisons} \end{figure} On the topic of random search, we find our results somewhat consistent with NAS in SL: random search in SL, sampled from a reasonable search space, can produce $\ge$ 80-90\% accuracy \citep{dean, regularized_evolution}, with the most gains from NAS ultimately being at the tail end, e.g. at the 95\% accuracies. This is also shown to a lesser degree for easier RL environments such as $\mathrm{Striker}$ (shown in Fig. \ref{fig:controller_comparisons}) and $\mathrm{Reacher}$ (shown in Appendices \ref{appendix:weight_sharing}, \ref{appendix:edge_pruning}), although for the majority of RL environments, random search is unable to train at all. \subsection{Controller Sample Complexity} \label{controller_sample_complexity} We further investigate the effect of the number of objective values per batch on the controller by randomly selecting only a subset of the objectives $f(m, \theta)$ for the controller $p_{\phi}$ to use, while keeping the original number of workers for updating $\theta_{s}$ via ES, in order to maintain weight estimation quality and prevent confounding results. We found that this sample reduction can reduce the performance of both controllers for various tasks, especially the PG controller. Thus, we find the use of the already present ES workers crucial for the quality of the controller's architecture search in this setting. \begin{figure}[h] \includegraphics[keepaspectratio, width=0.475\textwidth]{plots/sample_complexity.png} \caption{Regular ES-ENAS experiments using all 150 controller objective values, plotted in darker colors. Experiments with lower controller sample usage (10 random samples, similar to the number of simultaneously training models in \cite{mnasnet}) are plotted in corresponding lighter colors.} \label{fig:sample_complexity} \end{figure} \subsection{Multiobjective Case} \label{multiobjective_case} \cite{efficientnet, mnasnet} introduce the powerful notion of \textit{Multiobjective Optimization}, where the controller may optimize multiple objectives towards a Pareto optimal solution \cite{pareto}. Similar to \cite{mnasnet}'s approach, we can modify the controller's objective to be a hybrid combination $f(m, \theta) \left(\frac{|E_{m}|}{|E_{T}|} \right)^{\omega}$ of both the total reward $f(m, \theta)$ and the compression ratio $\frac{|E_{m}|}{|E_{T}|}$, where $|E_{m}|$ is the number of edges in model $m$ and $|E_{T}|$ is a target number, and allow the controller to decide the number of edges via a Pareto optimal solution, rather than strictly using a human-specified hyperparameter. We thus express our search space as boolean mask mappings $(i,j) \rightarrow \{0,1\}$ over all possible edges.
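A minimal sketch of this hybrid objective is given below (Python; the piecewise choice of $\omega$ anticipates the proof-of-concept specification that follows):
\begin{verbatim}
def hybrid_objective(reward, num_edges, target_edges):
    # f(m, theta) * (|E_m| / |E_T|)^omega: penalize only those models
    # whose edge count exceeds the target (omega = -1, else omega = 0)
    ratio = num_edges / float(target_edges)
    omega = -1.0 if ratio > 1.0 else 0.0
    return reward * ratio ** omega
\end{verbatim}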
For proof-of-concept, we follow the basic specifications in \cite{mnasnet} and set $\omega = -1$ if $ \frac{|E_{m}|}{|E_{T}|} > 1 $, while $\omega = 0$ otherwise, which strongly penalizes the controller if it proposes a model $m$ whose edge number $|E_{m}|$ breaks the threshold $|E_{T}|$. However, as noted in \cite{mnasnet}, $\omega$ can be more finely tuned to make $|E_{T}|$ a softer constraint and the hybrid objective smoother, if the user empirically knows the tradeoff between the number of edges and the reward. In Fig. \ref{fig:multiobjective}, we see that the controller eventually reduces the number of edges below the target threshold set at $|E_{T}| = 64$, while still maintaining competitive training reward, demonstrating the versatility of the multiobjective approach in the RL setting. \begin{figure}[h] \includegraphics[keepaspectratio, width=0.475\textwidth]{plots/multiobjective.png} \caption{Environment reward plotted alongside the average number of edges used for proposed models. \underline{\textbf{Black} horizontal line} corresponds to the target $|E_{T}|=64$.} \label{fig:multiobjective} \end{figure} \subsection{Visualizing Convergence} \label{visualizing_convergence} In order to confirm the convergence of ES-ENAS, we also graphically plot aggregate statistics over the controller samples. To best depict this effect and reduce the visual complexity of observing many edges, we select the smallest environment possible, Swimmer, which has dimensions (8,2). Conveniently, the Swimmer environment is known to be biased towards \textit{linear} policies, where a linear policy can achieve a reward of $\approx 360$, whereas hidden layer policies can usually only achieve $\approx 100$ \cite{mania2018simple, schulman2017proximal}, suggesting that fewer edges may actually help performance. Thus, we also define the edge pruning search space to be a subset of a linear policy, which avoids needing to deal with permutation equivalencies if using a hidden layer, further reducing visual complexity. We also select our edge pruning search space to be boolean, to allow the controller full control over selecting every edge rather than a fixed number of edges, which increases the size ($2^{8 \times 2} = 2^{16}$) of the search space; nevertheless, we remarkably observe that \textit{for all 3 independently seeded runs}, the PG controller converges toward a specific "local maximum" architecture, demonstrated in Fig. \ref{fig:visualizing_swimmer}. We found this to be true for Reg-Evo as well, although its final architecture was different from PG's, suggesting that there may be a few natural architectures optimal for the environment, depending on the controller algorithm. We present PG's final architecture in Appendix \ref{appendix:visualizations}, as well as other plots, including results from weight sharing. \begin{figure}[h] \includegraphics[keepaspectratio, width=0.475\textwidth]{plots/swimmer_convergence.png} \caption{Edge pruning convergence over time, with samples aggregated over 3 seeds from PG runs on Swimmer. Each edge is colored according to a spectrum, with its color value equal to $2 |p -\frac{1}{2}|$, where $p$ is the edge frequency.
We see that initially, each edge has uniform $(p= \frac{1}{2})$ probability of being selected, but as the controller trains, the samples converge toward a single pruning.} \label{fig:visualizing_swimmer} \end{figure} \subsection{Full Results} \label{full_results} The full set of numerical results over all of the mentioned methods can be found in Appendix \ref{appendix:extended_experiments}, which includes the weight sharing (Appendix \ref{appendix:weight_sharing}) and edge pruning (Appendix \ref{appendix:edge_pruning}) results, as well as plots for baseline methods (Fig. \ref{fig:prune_fail}). We present some of the more notable results in Table \ref{main_table}. Intriguingly, we found that appending the extra nonlinearity selection to the edge-pruning search space improved performance across HalfCheetah and Swimmer, but not across all environments (see Appendix \ref{appendix:edge_pruning_nonlinearity}). However, the lack of improvement across the board is consistent with the results found with WANNs \cite{ha}, which also showed that trained WANNs' performances matched those of vanilla policies. From these two observations, we hypothesize that perhaps the nonlinearity choice for simple MLP policies trained via ES is not quite so important to performance as other components, but more ablation studies must be conducted. Furthermore, for weight sharing policies, we see that hidden layer policies near-universally outperform linear policies, even when using the same number of distinct weights. \setlength{\tabcolsep}{6pt} \begin{table}[h] \small \centering \scalebox{0.7}{ \begin{tabular}{l*{5}{c}r} \toprule \textbf{Env.} & \textbf{Dim.} & \textbf{(PG, Reg-Evo) Reward} & \textbf{Method} \\ \midrule $\mathrm{HalfCheetah}$ & (17,6) & (2958, 3477) $\rightarrow$ (4258, 4894) & Weight Sharing (L $\rightarrow$ H) \\ $\mathrm{Hopper}$ & (11,3) & (2097, 1650) $\rightarrow$ (3288, 2834) & Weight Sharing (L $\rightarrow$ H) \\ $\mathrm{HalfCheetah}$ & (17,6) & (2372, 4016) $\rightarrow$ (3028, 5436) & Edge Pruning (H) $\rightarrow$ (+ Nonlinearity Search) \\ $\mathrm{Swimmer}$ & (8,2) & (105, 343) $\rightarrow$ (247, 359) & Edge Pruning (H) $\rightarrow$ (+ Nonlinearity Search) \\ \bottomrule \\ \end{tabular} } \caption{\small{Rewards for selected environments and methods, each result averaged over 3 seeds. The arrow denotes a modification or addition (+).}} \label{main_table} \end{table} In Table \ref{diff_algorithms} we directly compare our methods with the masking approach discussed in Subsection \ref{baseline_definitions} (and in more detail in Appendix \ref{appendix:baseline_method}), as well as other structured policies (Toeplitz from \citep{stockholm} and Circulant) and the unstructured baseline. In all cases we use the same hyper-parameters, and train until convergence for three random seeds. For masking, we report the best achieved reward with $>90\%$ of the network pruned, making the final policy comparable in size to the weight sharing and edge-pruning networks. For each class of policies, we compare the number of weight parameters used (``\# of weight-params" field), since the compactification mechanism does not operate on bias vectors. We also record compression with respect to unstructured networks in terms of the total number of parameters including biases (``\# compression" field).
This number determines the reduction of sampling complexity with respect to unstructured networks (which is a bottleneck of ES training), since generally for ES, the number of blackbox function $f(\theta)$ queries needed to optimize $f$ is proportional to the length of $\theta$, which is equal to the total number of weights and biases of a policy network $\pi_{\theta}$ \cite{query_complexity, differential_evolution,bandit_feedback}. \begin{table}[h] \small \centering \scalebox{0.7}{ \begin{tabular}{l*{6}{c}r} \toprule \textbf{Env.} & \textbf{Arch.} & \textbf{Reward} & \textbf{\# weights} & \textbf{compression} & \textbf{\# bits} \\ \midrule $\mathrm{Striker}$ & Weight Sharing & -247 & 23 & 95\% & 8198 & \\ & Edge Pruning & -130 & 64 & 93\% & 3072 \\ & Masked & -967 & 25 & 95\% & 8262 & \\ & Toeplitz & -129 & 110 & 88\% & 4832 & \\ & Circulant & \textbf{-120} & 82 & 90\% & 3936 & \\ & Unstructured & \textbf{-117} & 1230 & 0\% & 40672 & \\ \midrule $\mathrm{HalfCheetah}$ & Weight Sharing & \textbf{4894} & 17 & 94\% & 6571 & \\ & Edge Pruning & 4016 & 64 & 98\% & 3072 \\ & Masked & \textbf{4806} & 40 & 92\% & 8250 & \\ & Toeplitz & 2525 & 103 & 85\% & 4608 & \\ & Circulant & 1728 & 82 & 88\% & 3936 & \\ & Unstructured & 3614 & 943 & 0\% & 31488 & \\ \midrule $\mathrm{Hopper}$ & Weight Sharing & \textbf{3220} & 11 & 92\% & 3960 & \\ & Edge Pruning & \textbf{3349} & 64 & 84\% & 3072 \\ & Masked & 2196 & 17 & 91\% & 4726 & \\ & Toeplitz & 2749 & 94 & 78\% & 4320 & \\ & Circulant & 2680 & 82 & 80\% & 3936 & \\ & Unstructured & 2691 & 574 & 0\% & 19680 & \\ \midrule $\mathrm{Walker2d}$ & Weight Sharing & 2026 & 17 & 94\% & 6571 & \\ & Edge Pruning & \textbf{3813} & 64 & 90\% & 3072 \\ & Masked & 1781 & 19 & 94\% & 6635 & \\ & Toeplitz & 1 & 103 & 85\% & 4608 & \\ & Circulant & 3 & 82 & 88\% & 3936 & \\ & Unstructured & \textbf{2230} & 943 & 0\% & 31488 & \\ \bottomrule \\ \end{tabular} } \caption{\small{Comparison of the best policies from six distinct classes of RL networks: Weight Sharing (ours), Edge Pruning (ours), Masked, Toeplitz, Circulant, and Unstructured networks trained with the standard ES algorithm \citep{ES}. All results are for feedforward nets with one hidden layer. The top two performing networks for each environment are in \textbf{bold.}}} \label{diff_algorithms} \end{table} Finally, for a working policy we report the total number of bits required to encode it, assuming that real values are stored in the $\mathrm{float}$ format. Note that for weight sharing and masking networks, this includes the bits required to encode a dictionary representing the partitioning. \section{Introduction} Neural network architectures are the basic building blocks of modern deep learning, as they determine the inductive biases that models will use to process information and make predictions. Normally, these building blocks are hand-designed using human intuition and priors, such as convolutional layers, which are based on the principle of visual locality. Unfortunately, finding the right architecture, e.g. the best possible combination of convolutions, can sometimes be a very tedious and laborious task that requires iteratively training multiple models. Neural architecture search (NAS) \cite{le}, however, seeks to automate this process, to both reduce human labor and find optimal architectures that humans may overlook.
NAS has been successfully applied to designing model architectures in fields ranging from image classification \cite{le} to language modeling \cite{evolved_transformer}, and has even been applied to algorithm search \cite{automl_zero}. However, surprisingly, one field in which NAS has \textit{not} seen comparable popularity is \textit{reinforcement learning} (RL). While some variants of NAS specifically mix the optimization process in with the model, for example Differentiable Architecture Search \cite{darts}, in most cases the objective is treated as a blackbox function $f: \mathcal{M} \rightarrow \mathbb{R}$, where $\mathcal{M}$ is a combinatorial, discrete search space representing potential components of a model architecture, and the objective is the accuracy of the model once trained to convergence. \begin{figure}[t] \includegraphics[keepaspectratio, width=0.473\textwidth]{plots/aggregator_worker_new.pdf} \caption{Figure representing our aggregator-worker pipeline, where the aggregator proposes models $m_{i}$ in addition to a perturbed input $\theta_{s} + \sigma \mathbf{g_{i}}$, and the worker then computes the objective $f(m_{i}, \theta_{s} + \sigma \mathbf{g_{i}})$, which is sent back to the aggregator. Both the training of the weights $\theta_{s}$ and of the model-proposing controller $p_{\phi}$ rely on the number of worker samples to improve performance.} \label{fig:aggregator_worker} \end{figure} This blackbox framework is exploited in the original NAS paper \cite{le}, where a central \textit{controller}, first introduced as an RNN-based Pointer Network \cite{vinyals}, parameterizes the search space $\mathcal{M}$ with a distribution $p_{\phi}$ via neural network parameters $\phi$, and proposes candidate architectures, i.e. \textit{child models} $m \in \mathcal{M}$, to obtain objectives from which to train $\phi$ via policy gradient. Further uses of this blackbox optimization framework have been explored in NASBench \cite{nasbench}, which stores a database of pretrained model accuracies allowing fast querying of the objective function, and PyGlove \cite{pyglove}, which programmatically attempts to decouple the search space, the optimizer, and the objective function. This has led to a vast literature on blackbox optimizers for NAS search spaces, such as methods to apply Bayesian Optimization \cite{bayesian_optimization} for NAS \cite{bananas}, as well as the use of classical \textit{true} evolutionary methods such as NEAT \cite{neat} and Regularized Evolution \cite{regularized_evolution}. However, one caveat is the lack of efficiency of such methods, as they summarize an entire model's training run (which could take hours to days) as simply a single scalar objective, which may be problematic when there is a lack of computational resources. Efficient Neural Architecture Search \cite{dean} addresses this issue by applying weight sharing across all child models, and DARTS converts a single model into a differentiably customizable one to reduce the number of completely new model instantiations. Such methods unfortunately can require significant changes in program logic, and are by default not used when there is an abundance of compute resources available, such as in an industrial research lab \cite{efficientnet, mnasnet}. By standardizing the objective function, NASBench \cite{nasbench} further avoids various complications with efficient NAS methods, and encourages the community to instead only focus on blackbox optimization techniques.
Thus, we observe a natural tradeoff in NAS: \textit{simplicity vs efficiency}. Framing NAS in the blackbox optimization setting allows very fast changes to the blackbox algorithm, but requires extensive computational resources for many tasks outside of the NASBench setting. Using techniques which exploit prior knowledge of the task can reduce the computational overhead, but requires extensive modifications to training pipelines; for instance, ENAS requires a very nonstandard computation graph, due to its need to define a global shared weight system.

However, to the best of our knowledge, there is also a lack of work applying (efficient) neural architecture search methods to RL policies. We hypothesize that this is due to a few reasons. One reason is that, unlike in supervised learning (SL), RL policies are usually quite small and thus do not require a search space involving large convolutional modules, as is the case for image classification. However, we show that the search space for RL policies can still be quite combinatorial, by examining search spaces ranging from partitionings to edge prunings in our work. Another reason is that RL possesses significantly different training infrastructure than SL, as large-scale distributed on-policy and off-policy algorithms both require notions of replay buffers and the use of multiple actors and critics on CPUs, with a central worker such as a GPU for training.

One such rather conceptually simple RL algorithm is Evolutionary Strategies (ES) \cite{ES}, also known as Augmented Random Search (ARS) \cite{mania2018simple, horia}, which has been applied successfully to multiple domains such as Atari games, and especially in robotics due to its ability to train stable deterministic policies in continuous control tasks. ES effectively sidesteps notions of replay buffers and the like, and simply treats an agent's total reward as a blackbox (and potentially non-differentiable) objective $f(\theta)$. This allows flexibility in the types of neural network architectures used, especially for RL problems, as certain classes of combinatorial policies, which would require special operations for backward passes, are normally difficult to implement in frameworks such as Tensorflow or Pytorch. The ES algorithm estimates a Gaussian-smoothed gradient of the objective, by using multiple CPU workers to evaluate the objective. How to combine NAS with ES should therefore be intuitive, as both are \emph{blackbox algorithms} and use very similar distributed workflows. Our key observation is that \textbf{ES and ENAS can naturally be combined in a highly scalable but conceptually straightforward way by simply introducing a NAS controller into the central aggregator in ES}, and appending the proposed model string to the preexisting model weights during message passing from the aggregator to the workers. \textbf{In doing so, we are one of the first to apply popular controller-based NAS techniques to the field of RL.} A visual representation of our algorithm can be found in Fig. \ref{fig:aggregator_worker}. In terms of computational costs, to give context, vanilla NAS \cite{le} for the classical SL setting requires a large population of $\sim$450 GPU workers (``child models'') all training one-by-one, which results in many GPU-hours of training.
ENAS \cite{dean} uses weight sharing across multiple workers to avoid training classifiers from scratch, and can reduce the requirement for computational resources at the cost of increasing the complexity of implementation due to the shared weight mechanism. \textbf{Our method solves both issues (training time and controller sample complexity) by leveraging a large population of much cheaper CPU workers, on the order of hundreds to thousands, thus increasing the effective batch size of the controller, while also training the shared weights simultaneously via ES.} This setup is not possible in SL, as a single CPU cannot train a large image-based classifier in practice. We thus introduce the ES-ENAS algorithm, which requires no extra computational resources and is applicable to a wide variety of RL use-cases.

\section{Background and Preliminaries}
\subsection{Network Architecture Search} \label{nas_background}
The subject of this paper can be put in the larger context of Neural Architecture Search (NAS) algorithms, which recently became a prolific area of research with an already voluminous literature (see \cite{hutter} for an excellent survey). Interest in NAS algorithms started to grow rapidly when it was shown that they can design state-of-the-art architectures for image recognition and language modeling \cite{le}. More recently it was shown that NAS network generators can be improved to sample more complicated connectivity patterns based on random graph theory models such as Erdos-Renyi, Barabasi-Albert or Watts-Strogatz, outperforming human-designed networks, e.g. ResNet and ShuffleNet, on image recognition tasks \cite{xie}.

The foundation of our algorithm for learning policy network structure is the class of ENAS methods \cite{dean} and, more broadly, controller-based methods \cite{le}. To present our algorithm, we thus need to first describe this class. ENAS algorithms are designed to efficiently construct neural network architectures, which are usually combinatorial-flavored optimization problems with exponential-size domains, by sharing model weights across child models, thus preventing the need to completely retrain every child model from scratch. This weight sharing technique is justified in \cite{dean} due to the transferability of weights across different architectures seen in previous literature. More formally, a child model can be represented by its model $m$ and its particular weights $\theta$. In ENAS, a single $\theta_{s}$ is \textit{shared} across workers, whereas in the original NAS, the final objective $f(m)$ is defined only after the model converges during training. If $f(m, \theta_{s})$ is the objective value (which is normally accuracy in the supervised learning setting but, as we discuss later, can also be the total reward of a policy in a reinforcement learning environment), then when using an RNN-based controller with parameters $\phi$, the smoothed objective $J(\phi) = \mathbb{E}_{m \sim p_{\phi}}[f(m; \theta_{s})]$ can be optimized by estimating the \textit{policy gradient}:
\begin{equation} \nabla_{\phi} J(\phi) = \nabla_{\phi} \mathbb{E}_{m \sim p_{\phi}}[f(m, \theta_{s})] \approx \frac{1}{n} \sum_{i=1}^{n} f(m_{i}, \theta_{s}) \nabla_{\phi} \log p_{\phi}(m_{i}) \end{equation}
where $\log p_{\phi}(m_{i})$ is the log-probability of the controller selecting $m_{i}$, and thus $\phi \leftarrow \phi + \eta_{pg} \nabla_{\phi} J(\phi)$ is updated with the use of the REINFORCE algorithm \cite{reinforce}, or other policy gradient algorithms such as PPO \cite{schulman2017proximal}.
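To make this controller update concrete, the following is a minimal sketch of one REINFORCE step in Python; the controller interface (a \texttt{sample()} method returning a child model together with its log-probability) and the objective routine are illustrative assumptions of ours, not the implementation of \cite{le} or \cite{dean}.
\begin{verbatim}
# Minimal sketch of the REINFORCE controller update above. The controller
# API (sample() -> (model, log_prob)) is a hypothetical interface.
import torch

def controller_step(controller, optimizer, objective, theta_s, n):
    log_probs, rewards = [], []
    for _ in range(n):
        m, log_p = controller.sample()          # m ~ p_phi
        log_probs.append(log_p)                 # log p_phi(m), a tensor
        rewards.append(objective(m, theta_s))   # f(m, theta_s), a scalar
    # Monte Carlo estimate of -J(phi); minimizing it ascends J(phi).
    loss = -sum(r * lp for r, lp in zip(rewards, log_probs)) / n
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}
In practice a baseline is typically subtracted from the rewards to reduce variance, but the estimator above matches the equation as written.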
Recent work such as with AmoebaNets \cite{regularized_evolution} has also shown that evolutionary controllers similar to NEAT \cite{neat}, termed \textit{Regularized Evolution} (Reg-Evo) algorithms, can be competitive with and even sometimes outperform RNN-based controllers. One main difference between Reg-Evo and NEAT algorithms is that Reg-Evo's model evolution method discards the oldest model in a population, in order to allow better diversity and exploration, while NEAT may discard models according to their performance or not discard them at all, resulting in models that remain alive in the population for a long time. Another significant difference is that unlike NEAT, which mutates both the architecture and the weights in a randomized fashion, Reg-Evo only affects the architecture update method, allowing the user to define their own weight updating scheme (predominantly gradient descent).

While the exact algorithm can be found in \cite{regularized_evolution}, in summary, Reg-Evo initializes with a population, or \textit{queue}, of $n$ models $Q = \{m_{1},...,m_{n}\}$ with corresponding scores (slightly abusing notation) $f(Q) = \{f(m_{1}),...,f(m_{n})\}$ (or $f(Q, \theta_{s})$'s instead in the efficient setting), and then proceeds to construct child models $m'$ by mutating the model $m_{max} = \arg \max_{m \in Q} f(m)$ corresponding to the highest score. These child models are evaluated and appended to $Q$ in a first-in, first-out (FIFO)-like manner, consequently removing the oldest model from $Q$, otherwise known as \textit{tournament selection} \cite{tournament_selection}.

Regardless of the controller used, the shared weights $\theta_{s}$ use an approximate gradient update:
\begin{equation}\theta_{s} \leftarrow \theta_{s} + \eta_{w} \left(\frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta_{s}} f(m_{i}, \theta_{s}) \right)\end{equation}
assuming the existence of such a gradient $\nabla_{\theta_{s}} f(m, \theta_{s})$ that can be computed for any $m$. However, this can be fairly tricky to implement, as it requires operations on the model which still allow backpropagation regardless of potential disconnectivities in the graph.

With the advent of standardized NAS APIs such as PyGlove \cite{pyglove}, as well as objective value databases such as NASBench \cite{nasbench}, the search spaces of NAS problems have become highly similar to ones found in blackbox optimization. Such search spaces are usually constructed via combinations of primitives involving categorical and conditional parameters, such as ones found in hyperparameter optimization infrastructure \cite{vizier}. Thus, a child model $m$ can first simply be programmatically represented via Python dictionaries and strings to be sent over a distributed communication channel to a worker, and then materialized later by the worker into an actual neural network architecture.

\subsection{Evolutionary Strategies/Augmented Random Search}
Consider a fixed Markov Decision Process (MDP), or environment $\mathcal{E}$, with state space $\mathcal{S} \subseteq \mathbb{R}^{|\mathcal{S}|}$ and action space $\mathcal{A}\subseteq \mathbb{R}^{|\mathcal{A}|}$, and an agent aiming to maximize its total expected/discounted reward obtained from interacting in $\mathcal{E}$. In deep reinforcement learning, the standard approach is to construct a neural network policy $\pi_{\theta}(s) = a$ which maps states $s \in \mathcal{S}$ to actions $a \in \mathcal{A}$, and is parameterized by $\theta$, the neural network's weights.
The objective is to maximize the sum of rewards $\sum_{t=1}^{T} r_{t}$ obtained in the environment from a given trajectory of states and actions $(s_{1}, a_{1}, s_{2}, a_{2},...,s_{T}, a_{T})$. ES \cite{ES}/ARS \cite{mania2018simple} interpret the sum of rewards as a blackbox objective $f(\theta) = \sum_{t=1}^{T} r_{t}$, and thus estimate the gradient of the Gaussian-smoothed objective $\widetilde{f}_{\sigma}(\theta) = \mathbb{E}_{\mathbf{g} \sim \mathcal{N}(0, I)}\left[f(\theta + \sigma \mathbf{g} ) \right]$ to be used as an ascent update: $\theta' = \theta + \eta_{w} \nabla_{\theta} \widetilde{f}(\theta)$ for some step size $\eta_{w}$ and precision parameter $\sigma$. Note that the gradient can be expressed as a finite difference:
\begin{equation}\nabla_{\theta} \widetilde{f}_{\sigma}(\theta) = \frac{1}{2\sigma} \mathbb{E}_{\mathbf{g} \sim \mathcal{N}(0, I)}\left[(f(\theta + \sigma \mathbf{g}) - f(\theta - \sigma \mathbf{g}))\mathbf{g} \right] \end{equation}
which can thus be estimated by sampling multiple $\mathbf{g}_{1}, ... ,\mathbf{g}_{n}$:
\begin{equation} \nabla_{\theta} \widetilde{f}_{\sigma}(\theta) \approx \frac{1}{\sigma n} \sum_{i=1}^{n} \frac{f(\theta + \sigma \mathbf{g}_{i}) - f(\theta - \sigma \mathbf{g}_{i})}{2}\, \mathbf{g}_{i} \end{equation}
This algorithm is highly popular in robotics applications \cite{mania2018simple, rapidly, meta_strategies, provably_robust, toaster} due to its ability to use \textit{deterministic} and sometimes \textit{linear} policies, allowing for stable control and fast inference, as opposed to necessarily using stochastic policies employed by policy gradient algorithms, or complex and relatively slow argmax procedures for Q-learning in continuous control. Furthermore, this algorithm can run on CPUs via a distributed pipeline, and does not require differentiable components as it does not use exact backpropagation, a property previously exploited to allow sorting mechanisms \cite{ha_attention} and argmax operations \cite{es_maml}. In our particular setting, this flexibility allows fast modifications between different types of combinatorial policies, as we will see in the experimental section.

A more advanced variant of this approach is CMA-ES \cite{CMAES}, which consists of sampling $\mathbf{g} \sim \mathcal{N}(0, \mathbf{C})$ where $\mathbf{C}$ is a covariance matrix. However, despite its power, the computation of the full covariance matrix is non-trivial, and because of this, CMA-ES is rarely applied to problems in high-dimensional space \cite{rl_es_challenges} where ES is already satisfactory. Nonetheless, since CMA-ES is still based on a blackbox aggregator-worker framework, it is clear that it may also be used in our ES-ENAS approach, which we describe below.

\section{ES-ENAS Method}
\subsection{Weight and Controller Updates}
The optimization problem we are interested in is $\max_{m, \theta} f(m, \theta)$, which represents the sum of rewards over the environment obtained by using a policy whose architecture is $m$ and whose weights are $\theta$. Note that if $m$ is sampled from a controller $p_{\phi}$, where $\phi$ expresses the controller state (e.g.
RNN parameters for Policy Gradient or model population for Regularized Evolution), in addition to the usual Gaussian smoothing of $\theta$, then the smoothed objective becomes: \begin{equation}\widetilde{f}_{\sigma}(\phi, \theta) = \mathbb{E}_{m \sim p_{\phi}, \mathbf{g} \sim \mathcal{N}(0, I)} \left[ f(m, \theta + \sigma \mathbf{g}) \right]\end{equation}

\subsubsection{Updating the Weights} The gradient with respect to $\theta$ becomes: \begin{equation} \nabla_{\theta} \widetilde{f}_{\sigma}(\phi, \theta) = \frac{1}{2\sigma} \mathbb{E}_{m \sim p_{\phi},\mathbf{g} \sim \mathcal{N}(0, I)}\left[(f(m, \theta + \sigma \mathbf{g}) - f(m, \theta - \sigma \mathbf{g}))\mathbf{g} \right] \end{equation} Note that by linearity, we may move the expectation $ \mathbb{E}_{m \sim p_{\phi}}$ inside to the two terms $f(m, \theta + \sigma \mathbf{g})$ and $f(m, \theta - \sigma \mathbf{g})$, which implies that the gradient expression can be estimated by averaging singleton samples of the form: \begin{equation}\frac{1}{2\sigma} (f(m^{+}, \theta + \sigma \mathbf{g}) - f(m^{-}, \theta - \sigma \mathbf{g}))\mathbf{g} \end{equation} where $m^{+}, m^{-}$ are i.i.d. samples from $p_{\phi}$, and $\mathbf{g}$ from $\mathcal{N}(0, I)$.

\subsubsection{Updating the Controller} If the controller $p_{\phi}$ is an RNN parameterized by $\phi$, then in order to optimize $\phi$, we see that this corresponds to sending the reward $R(m) = \mathbb{E}_{\mathbf{g} \sim \mathcal{N}(0, I)} \left[f(m, \theta + \sigma \mathbf{g})\right]$ for a given model $m$ in order to produce the policy gradient sample $R(m) \nabla_{\phi} \log p_{\phi}(m)$. However, practically speaking, this corresponds to actually sending in a stochastic reward $f(m, \theta + \sigma \mathbf{g})$ from a sampled $\mathbf{g}$. Similarly, if the controller $p_{\phi}$ is Reg-Evo, then we may simply pass in $f(m, \theta + \sigma \mathbf{g})$ as the model objective into the population and run the algorithm as-is.

This mechanism shows that updates to the controller $p_{\phi}$ and updates to the weights $\theta$ \textit{both rely} on the samples $f(m, \theta + \sigma \mathbf{g})$. The number of workers $n$ now serves two purposes: reducing the sample complexity of the controller $p_{\phi}$, as well as reducing the variance of the estimated ES gradient $\nabla_{\theta} \widetilde{f}_{\sigma}$. We show the conceptual simplicity of combining the two updates in Algorithm \ref{algo:es_enas}. \begin{figure}[h] \begin{minipage}{1.0\linewidth} \removelatexerror \begin{algorithm}[H] \SetAlgoLined \KwData{Initial weights $\theta_{s}$, weight step size $\eta_{w}$, precision parameter $\sigma$, number of perturbations $n$, \textcolor{blue}{controller $p_{\phi}$}.} \While{\text{not done}}{ Sample i.i.d.
vectors $\mathbf{g}_{1},\ldots,\mathbf{g}_{n} \sim \mathcal{N}(0,\mathbf{I})$\; \ForEach{$\mathbf{g}_{i}$} { \textcolor{blue}{Sample $m_{i}^{+}, m_{i}^{-} \sim p_{\phi}$} \\ $v_{i}^{+} \gets f(m_{i}^{+}, \theta_{s} + \sigma \mathbf{g}_{i})$ \\ $v_{i}^{-} \gets f(m_{i}^{-}, \theta_{s} - \sigma \mathbf{g}_{i})$ \\ $v_i \gets \frac{1}{2} (v_{i}^{+} - v_{i}^{-})$ \\ \textcolor{blue}{$p_{\phi} \gets \{(m_{i}^{+}, v_{i}^{+}), (m_{i}^{-}, v_{i}^{-}) \}$} \\ } Update weights $\theta_{s} \gets \theta_{s} + \eta_{w}\frac{1}{\sigma n} \sum_{i=1}^{n} v_{i} \mathbf{g}_{i}$ \\ \textcolor{blue}{Update controller $p_{\phi}$} } \caption{ES-ENAS Algorithm, with the few additional modifications to allow ENAS shown in \textcolor{blue}{blue}.} \label{algo:es_enas} \end{algorithm} \end{minipage} \end{figure}

In the distributed setting of the aggregator-worker case, the model $m$ can be serialized into a string and included in the message containing a perturbation $\theta + \sigma \mathbf{g}$ sent from the aggregator to one of the workers. The worker then materializes the model and uses forward passes to interact in the environment $\mathcal{E}$. Although the controller needs to output hundreds of model suggestions, it can be parallelized to run quickly by multithreading (for Reg-Evo) or by simply using a GPU (for policy gradient).

\subsection{Extensions} We note a few extensions and modifications of this work as well: \begin{itemize} \item A similar setup can potentially be made when using distributed policy gradient or Q-learning algorithms, where the \textbf{aggregator also receives state-action pairs from various sampled models.} Combining these RL algorithms with NAS may be of interest for exploration in later works. However, we believe that ES is more naturally suited for the efficient setup, as it is relatively simple both conceptually and in terms of implementation, requiring only a network's forward pass rather than its backward pass, which reduces complexity in a weight sharing pipeline. Furthermore, ES usually uses many more workers (on the order of $10^{2}$) than the distributed variants of the other methods (on the order of $10^{0}$ to $10^{1}$ workers), which can be important for the controller's performance, as we will show in Subsection \ref{controller_sample_complexity}. \item The \textbf{controller $p_{\phi}$'s objective can also be defined differently from the weights $\theta_{s}$'s objective.} This is already subtly the case in supervised learning, where the controller's objective is the \textit{nondifferentiable validation accuracy} of the model, while the model weights are explicitly optimizing against the \textit{differentiable cross-entropy training loss} of the model. More significant differences between the controller objective and the weight objective involve cases such as efficiency and runtime of the network, which have led to work on EfficientNets \cite{efficientnet}. We show this can also be applied in the RL setting in Subsection \ref{multiobjective_case}. \item Lastly, the controller does not need to use policy gradients or Reg-Evo, although they are the most popular. Other algorithms may also be used, as the controller simply needs to collect objectives and suggest new models. However, since both controllers are readily packaged in PyGlove along with search space primitives, we opt to use these two methods.
\end{itemize}

\section{Applications} While the aforementioned ES-ENAS algorithm is a general-purpose algorithm which allows a multitude of potential applications and modifications, we provide below example applications that are relevant to the literature in mobile robotics, which can leverage combinatorial search spaces and also use our ES-ENAS algorithm. Due to the \textbf{very extensive} background work regarding pruning and compression techniques, which we discuss in the next subsections, we emphasize to the reader that we do not claim to achieve state of the art compression on RL policies. Rather, we demonstrate how our method can apply diverse NAS techniques from SL on a reasonably challenging search space in RL, especially since, unlike in SL \cite{nasbench}, there are no standardized NAS benchmarks in RL. However, as we will discuss in Subsection \ref{search_space_definitions}, our RL search space is similar in size to, or even larger than, NASBench \cite{nasbench}. Whereas some of the background works' algorithms can only be applied to specific problems, ES-ENAS's versatility lies in its applicability to general search spaces.

One application of network compression is in mobile robotics \cite{gage}, where computational and storage resources are very limited (especially without an on-board GPU), and there is a need for fast inference times to manage high-frequency reaction speeds (on the order of milliseconds), as well as a need for low memory usage. Furthermore, there has been recent interest in simple policies for continuous control. In particular, \cite{mania2018simple} demonstrated that linear policies are often sufficient for the benchmark MuJoCo locomotion tasks, while \cite{atari6neurons} found that smaller policies could work for vision-based tasks by separating feature extraction and control. Finally, recent work found that small, sparse sub-networks can perform better than larger over-parameterized ones \cite{frankle2018the}, inspiring applications in RL \cite{lotteryforrl}.

Two recent papers come to mind: \cite{stockholm} and \cite{ha}. In the former, policies based on Toeplitz matrices were shown to match their unstructured counterparts accuracy-wise, while leading to a substantial reduction in the number of parameters, from thousands \cite{ES} to hundreds \cite{stockholm}. Instead of a quadratic number of parameters (in the sizes of the hidden layers), those policies use only a linear number of parameters. The Toeplitz structure can be thought of as a parameter sharing mechanism, where edge weights along each diagonal of the matrix are the same (see: Fig. \ref{fig:toeplitz}). The latter paper \cite{ha} proposes an extremal approach, where weights are chosen randomly instead of being learned, but the topologies of connections are trained and thus are ultimately strongly biased towards the RL tasks under consideration. It was shown in \cite{ha} that such Weight Agnostic Neural Networks (WANNs) can encode effective policies for several nontrivial RL problems. WANNs replace conceptually simple feedforward networks with general graph topologies using NEAT \cite{neat}, providing topological operators to build the network. However, aligned with the principles of NEAT, WANNs focus heavily on requiring a single (random) weight and cannot be used on tasks with rich sets of weights which need to be trained with gradient descent. Furthermore, WANNs may require relatively complicated connections to be memorized, which may not be suitable for mobile robotics.
Nonetheless, we are inspired by such a technique and thus allow searching for different nonlinearities across the nodes of our neural network. The above two examples lead us to a middle-ground set of methods, which can be more natural and easier to implement, with an emphasis on reducing complexity and parameter count from a broad range of aspects: \textbf{sparsification} and \textbf{weight sharing}, in addition to \textbf{nonlinearity searching}. In the first two settings, the topology is still a feedforward neural network, but in the former case, we simply determine whether or not to mask each edge in the network, while in the latter case, the weights are partitioned into groups, with each group using the same scalar weight value.

\subsection{Sparsification} There is vast literature on compact encodings of NN architectures, with some of the most popular and efficient techniques concerning network sparsification. The sparsity can often be achieved by \textit{pruning} already trained networks. These methods have been around since the 1980s, dating back to Rumelhart \cite{Rumelhart, Chauvin1989, Mozer1989}, followed shortly by Optimal Brain Damage \cite{braindamage}, which used second-order gradient information to remove connections. Since then, a variety of schemes have been proposed, with regularization \cite{Louizos2018LearningSN} and magnitude-based weight pruning methods \cite{NIPS2015_5784, see-etal-2016-compression, sparsernniclr} becoming increasingly popular. The impact of \textit{dropout} \cite{JMLR:v15:srivastava14a} has added an additional perspective, with new works focusing on attempts to learn sparse networks \cite{Gomez2019LearningSN}. Another recent work introduced the Lottery Ticket Hypothesis \cite{frankle2018the}, which captivated the community by showing that there exist sparse sub-networks which can be trained from scratch to achieve competitive results. Indeed, \cite{Lenc2019NonDifferentiableSL} do achieve state-of-the-art results on various supervised feedforward and recurrent models. Interestingly, these works consistently report similar levels of compression, often managing to match the performance of original networks with up to $90\%$ fewer parameters. Those methods are, however, designed for constructing classification networks rather than those encoding RL policies. We ultimately confirm these findings in the RL setting by showing that good rewards can be obtained up to a high level of pruning.

\subsection{Weight Sharing and Quantization} Weight sharing mechanisms can be viewed from the quantization point of view, where pre-trained weights are quantized and thus effectively partitioned. Examples include \cite{Han2016DeepCC}, which achieves $49$x compression for vision networks using both pruning and weight sharing (by quantization) followed by Huffman coding, \cite{chen2015compressing}, which compresses networks via randomized weight sharing, and \cite{kompress}, which uses hashing mechanisms to achieve competitive results against EfficientNets \cite{efficientnet} in image classification. More relevantly, \cite{stockholm} hardcodes RL policies using specific \textit{Toeplitz} matrices and finds that such weight sharing patterns work well in control. However, such partitions are not learned, which is a main topic of this paper. Even more importantly, in RL applications, where policies are often very sensitive to parameter weights \cite{pmtg}, centroid-based quantization may be too crude to preserve accuracy.
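To make the weight sharing search space concrete, the sketch below materializes a dense policy layer from a controller-proposed partition, so that ES only has to optimize the $k$ shared scalars; the names and sizes are illustrative assumptions of ours, and an edge-pruning policy would be analogous, with a binary mask in place of the partition.
\begin{verbatim}
# Illustrative sketch of a weight-sharing policy layer: the controller
# proposes a partition of edges into k groups, and ES trains only the
# k shared scalars theta_s. Sizes and names are hypothetical.
import numpy as np

def materialize(partition, theta_s, shape):
    """Expand k shared scalars into a dense weight matrix."""
    return theta_s[partition].reshape(shape)

rng = np.random.default_rng(0)
in_dim, out_dim, k = 8, 4, 5            # 32 edges, only 5 distinct weights
partition = rng.integers(0, k, size=in_dim * out_dim)  # from the controller
theta_s = rng.normal(size=k)            # the parameters optimized by ES
W = materialize(partition, theta_s, (in_dim, out_dim))
action = np.tanh(rng.normal(size=in_dim) @ W)  # forward pass of the layer
\end{verbatim}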
\titlespacing{\section}{0pt}{10pt plus 0pt minus 0pt}{2pt plus 0pt minus 0pt}
\usepackage{caption}
\captionsetup[figure]{skip=0pt}
\captionsetup[table]{skip=-2pt}
\setlength{\textfloatsep}{4pt plus 2.0pt minus 2.0pt}
\setlength{\intextsep}{4pt plus 2.0pt minus 2.0pt}
\usepackage{paralist}
\usepackage{enumitem}
\usepackage{tikz}
\usetikzlibrary{fit}
\tikzset{%
highlight/.style={rectangle,rounded corners,fill=blue!15,draw,fill opacity=0.5,thick,inner sep=0pt}
}
\newcommand{\tikzmark}[2]{\tikz[overlay,remember picture,baseline=(#1.base)] \node (#1) {#2};}
\newcommand{\Highlight}[1][submatrix]{%
\tikz[overlay,remember picture]{
\node[highlight,fill=yellow!15, fit=(left.north west) (right.south east)] (#1) {};}
}
\usepackage{xparse}
\usetikzlibrary{calc,matrix,backgrounds}
\pgfdeclarelayer{myback}
\pgfsetlayers{myback,background,main}
\tikzset{mycolor/.style = {rounded corners,line width=1bp,color=#1}}%
\tikzset{myfillcolor/.style = {rounded corners,draw,fill=#1}}%
\NewDocumentCommand{\highlight}{O{blue!40} m m}{%
\draw[mycolor=#1] (#2.north west)rectangle (#3.south east);
}
\NewDocumentCommand{\fhighlight}{O{blue!40} m m}{%
\draw[myfillcolor=#1] (#2.north west)rectangle (#3.south east);
}
\makeatletter
\newcommand{\removelatexerror}{\let\@latex@error\@gobble}
\makeatother
\setcopyright{none}
\acmConference[]{}{}{}
\settopmatter{printacmref=false}
\renewcommand\footnotetextcopyrightpermission[1]{}
\begin{document}
\title{ES-ENAS: Combining Evolution Strategies with Neural Architecture Search at No Extra Cost for Reinforcement Learning}
\author{Xingyou Song$^{*1}$, Krzysztof Choromanski$^{1,2}$, Jack Parker-Holder$^{3}$, Yunhao Tang$^{2}$, Daiyi Peng$^{1}$, Deepali Jain$^{1}$, Wenbo Gao$^{4}$, Aldo Pacchiano$^{5}$, Tamas Sarlos$^{1}$, Yuxiang Yang$^{6}$}
\affiliation{ \institution{$^1$Google, $^2$Columbia University, $^3$Oxford University, $^4$Waymo, $^5$UC Berkeley, $^6$University of Washington} }
\thanks{$^{*}$Correspondence to [email protected]. \\ Code: \href{https://github.com/google-research/google-research/tree/master/es_enas}{\textcolor{blue}{github.com/google-research/google-research/tree/master/es\_enas}}}
\renewcommand{\shortauthors}{X. Song et al.}
\renewcommand{\shorttitle}{ES-ENAS: Combining ES with NAS at No Extra Cost for RL}
\begin{abstract}
We introduce ES-ENAS, a simple neural architecture search (NAS) algorithm for the purpose of reinforcement learning (RL) policy design, by combining Evolutionary Strategies (ES) \citep{ES, mania2018simple} and Efficient NAS (ENAS) \citep{vinyals, dean, le} in a highly scalable and intuitive way. Our main insight is noticing that ES is already a distributed blackbox algorithm, and \textbf{thus we may simply insert a model controller from ENAS into the central aggregator in ES and obtain weight sharing properties for free.} By doing so, we bridge the gap from NAS research in supervised learning settings to the reinforcement learning scenario through this relatively simple marriage between two different lines of research, and are \textbf{one of the first to apply controller-based NAS techniques to RL.} We demonstrate the utility of our method by training combinatorial neural network architectures for RL problems in continuous control, via edge pruning and weight sharing. We also incorporate a wide variety of popular techniques from modern NAS literature, including multiobjective optimization and varying controller methods, to showcase their promise in the RL field and discuss possible extensions.
We achieve $>90\%$ network compression for multiple tasks, which may be of special interest in mobile robotics \citep{gage}, where storage and computational resources are limited.
\end{abstract}
\maketitle
\input{introduction}
\input{experiments}
\input{conclusion}
\section{Acknowledgements}
The authors would like to thank David Ha, Yingjie Miao, Aleksandra Faust, Sagi Perel, Daniel Golovin, John D. Co-Reyes, and Vikas Sindhwani for valuable discussions.
\newpage
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
In four-dimensional space-time, the interactions of fermions in a Nielsen-Olesen vortex background have been widely analyzed in the literature, mainly in connection with bound states at threshold\cite{threshold}, zero modes\cite{zeromode} and scattering solutions\cite{scattering}. Recently, Frere, Libanov and Troitsky have shown that a single family of fermions in six dimensions with vector-like couplings to the Standard Model (SM) bosons gives rise to three generations of chiral Standard Model fermions in four dimensions\cite{libanov,libanov3}. In 5+1 dimensions, Frere \emph{et al.} also studied the fermionic zero modes in the background of a vortex-like solution on an extra two-dimensional sphere and related them to the replication of fermion families in the Standard Model\cite{frere}. The topological vortex (especially the Abrikosov-Nielsen-Olesen vortex) coupled to fermions may lead to chiral fermionic zero modes\cite{rossi}. Usually the number of the zero modes coincides with the topological number, that is, with the magnetic flux of the vortex. In Large Extra Dimensions (LED) models, the chiral fermions of the Standard Model are described by the zero modes of multi-dimensional fermions localized in the (four-dimensional) core of a topological defect. Unlike the classical Kaluza-Klein theory, where one assumes that the extra dimensions should be small and cover a compact manifold, the extra dimensions can be large and non-compact\cite{rubakov,daemi}. This freedom can provide new insights for a solution of the gauge hierarchy problem\cite{randal} and the cosmological constant problem. Here we shall study fermionic zero modes coupled with a self-dual vortex background\cite{WangYQ} on a two-dimensional non-compact extra space in 5+1 dimensions.

The paper is organized as follows: In section II, we first present the unified description of the topological and non-topological self-dual vortices in the Abelian Higgs model on the two-dimensional non-compact extra space. In section III, we analyze the effective Lagrangian of the fermions localized on a brane in 5+1 dimensions, in the background of the coupling between the Higgs field and the fermion spinor field. In section IV, two simple cases are discussed to show the role of the vortex background in the fermionic zero modes. In the last section, a brief conclusion is presented.

\section{Self-dual Vortex on a two-dimensional non-compact extra space}
We consider a 5+1 dimensional space-time $M^{4}\times{R^{2}}$, where $M^{4}$ represents our four-dimensional space-time and $R^{2}$ represents the two-dimensional extra Euclidean space. The metric $G_{MN}$ of the manifold $M^{4}\times{R^{2}}$ is
\begin{equation}\label{metric} ds^2 = G_{MN}dx^M dx^N = g_{\mu \nu}dx^\mu{d}x^\nu -\delta_{ij} dx^{i}dx^{j}, \end{equation}
where $g_{\mu \nu}$ is the four-dimensional metric of the manifold $M^4$, capital Latin indices $M,N=0,\cdots,5$, Greek indices $\mu,\nu=0,\cdots,3$, lower Latin indices $i,j=4,5$, and $x^4, x^5$ are the coordinates on $R^2$. To generate the vortex solution, we introduce the Abelian Higgs Lagrangian
\begin{equation}\label{lagrangian} \mathcal{L}_{V}=\sqrt{-G} \left (-\frac{1}{4}F_{MN}F^{MN}+(D^{M}\phi)^{\dag}(D_{M}\phi)-\frac{\lambda}{2}(\|\phi\|^{2}-v^{2})^{2} \right ), \end{equation}
where $G=\det(G_{MN})$, $F_{MN}=\partial_{M}A_{N}-\partial_{N}A_{M}$, and $D_{M}\phi=(\partial_{M}-ieA_{M})\phi$; here $\phi=\phi(x^4, x^5)$ and $A_{M}$ are a complex scalar field on $R^2$ and a U(1) gauge field, respectively, with $\|\phi\|=(\phi\phi^{\ast})^{\frac{1}{2}}$.
The Abrikosov-Nielsen-Olesen vortex solution on $M^{4}\times{R^{2}}$ can be generated from the Higgs field. We first introduce the first-order Bogomol'nyi self-dual equations \cite{Bogo} on the two-dimensional extra Euclidean space
\begin{eqnarray}\label{bogo} D_{\pm}\phi=0,\;\;B&=& \partial _{i} \partial _{i} \ln(\|\phi\|^{2})\nonumber\\ &=&\pm{e}(\|\phi\|^{2}-v^{2}), \;\;(i=4,5) \end{eqnarray}
where the operator $D_{\pm}$ is defined as $D_{\pm}\equiv(D_{4}\pm{i}D_{5})$. We know that the complex Higgs field $\phi$ can be regarded as the complex representation of a two-dimensional vector field $\vec{\phi}=(\phi^{1}, \phi^{2})$ over the base space-time; it is actually a section of a complex line bundle on the base manifold. Considering the self-dual equation $D_{+}\phi=0$ and $\phi=\phi^{1}+i\phi^{2}$, we separate the real part from the imaginary part and obtain two equations
\begin{eqnarray}\label{split} \left.% \begin{array}{c} \partial_{4}\phi^{1}-\partial_{5}\phi^{2} = eA_{4}\phi^{2}+eA_{5}\phi^{1}, \\ \partial_{4}\phi^{2}+\partial_{5}\phi^{1} = eA_{5}\phi^{2}-eA_{4}\phi^{1}. \\ \end{array}% \right. \end{eqnarray}
Substituting Eqs. (\ref{split}) and $\phi=\phi^{1}+i\phi^{2}$ into $\partial_{4}\phi^{\ast}\phi-\partial_{4}\phi\phi^{\ast}$, it is easy to verify
\begin{equation}\label{mu=mu+nv} \partial_{4}\phi^{\ast}\phi-\partial_{4}\phi\phi^{\ast} = 2ieA_{4}\|\phi\|^{2} + i(\partial_{5}\phi^{\ast}\phi+\partial_{5}\phi\phi^{\ast}). \end{equation}
Considering the fundamental identity
\begin{equation}\label{na=phi/phi*} \epsilon_{ab}n^{a}\partial_{i}n^{b}=\frac{1}{2i}\frac{1}{\phi\phi^{\ast}}(\partial_{i}\phi^{\ast}\phi-\partial_{i}\phi\phi^{\ast}) \end{equation}
with the unit vector defined as $n^{a}={\phi^{a}}/{\|\phi\|}, (a,b=1,2)$, we immediately have
\begin{equation}\label{eAmu} eA_{4}=\epsilon_{ab}n^{a}\partial_{4}n^{b}-\frac{1}{2}\partial_{5}\ln(\phi\phi^{\ast}). \end{equation}
Considering $\partial_{5}\phi^{\ast}\phi-\partial_{5}\phi\phi^{\ast}$ and following the same process as above, we obtain
\begin{equation}\label{eAnu} eA_{5}=\epsilon_{ab}n^{a}\partial_{5}n^{b}+\frac{1}{2}\partial_{4}\ln(\phi\phi^{\ast}). \end{equation}
Eq. (\ref{eAmu}) and Eq. (\ref{eAnu}) can be unified into one equation
\begin{equation}\label{eAi-} eA_{i}=\epsilon_{ab}n^{a}\partial_{i}n^{b}-\frac{1}{2}\epsilon_{i j}\partial_{j}\ln(\phi\phi^{\ast}). \end{equation}
Applying the same procedure to $D_{-}\phi=0$, we arrive at
\begin{equation}\label{eAi+} eA_{i}=\epsilon_{ab}n^{a}\partial_{i}n^{b} + \frac{1}{2}\epsilon_{i j}\partial_{j}\ln(\phi\phi^{\ast}). \end{equation}
In fact, since the magnetic field is $B=\epsilon^{ij}\partial_{i}(e{A}_{j})$, according to Eq. (\ref{eAi+}) we have
\begin{equation}\label{B=delta+ln} B=\epsilon^{i j}\epsilon_{ab}\partial_{i}n^{a}\partial_{j}n^{b}+\partial_{i}\partial_{i}\ln(\|\phi\|^{2}). \end{equation}
So the second of the Bogomol'nyi self-dual equations (\ref{bogo}) can be generalized to
\begin{equation}\label{nonlinear01} B=\epsilon^{i j} \epsilon_{ab}\partial_{i}n^{a}\partial_{j}n^{b} +\partial_{i}\partial_{i}\ln(\|\phi\|^{2})=\pm e(\|\phi\|^{2}-v^{2}). \end{equation}
For clarity we denote
\begin{eqnarray}\label{B=Bt+Bnt} B&=&B_{T}+B_{NT},\nonumber\\ B_{T}&=&\epsilon^{i j}\epsilon_{ab}\partial_{i}n^{a}\partial_{j}n^{b},\\ B_{NT}&=&\partial_{i}\partial_{i}\ln(\|\phi\|^{2}).\nonumber \end{eqnarray}
According to Duan's topological current theory, it is easy to see that the first term of Eq. (\ref{nonlinear01}) bears a topological origin.
From Duan's $\phi$-mapping topological current theory \cite{DuanSLAC}, one can see that the topological term of the magnetic field $B_{T}$,
\begin{equation} B_{T}=\epsilon^{i j}\epsilon_{ab}\partial _{i}n^{a}\partial_{j}n^{b},\label{nonsigma} \end{equation}
just describes the non-trivial distribution of $\vec{n}$ at large distances in space \cite{'tHooft}. Noticing $\partial_{i}n^{a}=\frac{\partial_{i}\phi^{a}}{\parallel\phi\parallel}+\phi^{a}\partial_{i}\frac{1}{\parallel\phi\parallel}$ and the Green function relation in $\phi$-space, $\partial_{a}\partial_{a}\ln(\|\phi\|)=2\pi\delta^{2}(\vec{\phi})$, $(\partial_{a}={\frac{\partial}{\partial\phi^{a}}})$, it can be proved that \cite{honseng}
\begin{equation} B_{T}=\delta^{2}({\phi})J(\frac{\phi}{x})\label{BT=del}. \end{equation}
So the second of the Bogomol'nyi self-dual equations (\ref{bogo}) should be
\begin{equation}\label{gnonlinear01} B=\delta^{2}({\phi})J(\frac{\phi}{x})+\partial_{i}\partial_{i}\ln\|\phi\|^{2}=\pm e(\|\phi\|^{2}-v^{2}). \end{equation}
This equation is more exact than the conventional self-dual equation, in which the topological term has always been ignored. Obviously, when the field $\phi\neq0$, the topological term vanishes and we have
\begin{equation}\label{B=ln} B=B_{NT}=\partial_{i}\partial_{i}\ln(\|\phi\|^{2}). \end{equation}
So the self-dual equation (\ref{gnonlinear01}) reduces to a nonlinear elliptic equation for the scalar field density $\|\phi\|^{2}$,
\begin{equation}\label{nonlinear} \partial_{i}\partial_{i}\ln(\|\phi\|^{2})=\pm e(\|\phi\|^{2}-v^{2}). \end{equation}
This is just the conventional self-dual equation. Comparing this equation with Eq. (\ref{gnonlinear01}), one sees that the topological term $\delta^{2}(\vec{\phi})J(\frac{\phi}{x})$ is missing. The exact self-dual equation should be Eq. (\ref{gnonlinear01}). From our previous work, it is obvious that the first term of Eq. (\ref{gnonlinear01}) describes the topological self-dual vortices. As for the conventional self-dual nonlinear equation (\ref{nonlinear}), a great deal of work has been done on it by many physicists, and vortex-like solutions were given by A. Jaffe \cite{jaffe}. But no exact solutions are known. Now we see that there are two classes of vortices, which arise from the symmetric phase and the asymmetric phase of the Higgs field, respectively. These two classes of vortices provide different vortex backgrounds, and we shall study fermionic zero modes coupled with these vortex backgrounds in the following two sections.

\section{Fermionic zero modes in the vortex background}
The Lagrangian of the fermions in the vortex background on $R^{2}$ is
\begin{equation} \mathcal{L}=\sqrt{-G} \left\{ \bar{\Psi} \Gamma^A E^M_A (\partial_M - \Omega_M + A_M)\Psi-g\phi\Psi^{\dag}\Psi \right\}.
\label{Lpsi} \end{equation}
where $E^{M}_{A}$ is the {\sl sechsbein} with
\begin{equation} E^{M}_{A} = (e^{\mu}_{A},\delta^{4}_{A},\delta^{5}_{A}) \end{equation}
and capital Latin indices $A,B=0,\cdots,5$ correspond to the flat tangent six-dimensional Minkowski space, and $\Omega_M=\frac{1}{2} \Omega_M^{AB}I_{AB}$ is the spin connection, with the following representation of the six-dimensional $8 \times 8$ Dirac matrices $\Gamma^A$:
\begin{equation} \Gamma^A=\begin{pmatrix} 0 & \Sigma^A \\ \bar{\Sigma}^A & 0 \\ \end{pmatrix} , \end{equation}
where $\Sigma^0 = \bar{\Sigma}^0 = \gamma^0 \gamma^0$; $\Sigma^k = -\bar{\Sigma}^k = \gamma^0 \gamma^k (k=1,2,3)$; $\Sigma^4 = -\bar{\Sigma}^4 = i\gamma^0 \gamma^5$; $\Sigma^5 = -\bar{\Sigma}^5 = \gamma^0$, and $\gamma^{\mu}$ and $\gamma^5$ are the usual four-dimensional Dirac matrices in the chiral representation:
\begin{equation} \gamma^0=\begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix},\;\;\; \gamma^k=\begin{pmatrix} 0 & \sigma^k \\ -\sigma^k & 0 \\ \end{pmatrix},\;\;\; \gamma^5=i \gamma^0\gamma^1\gamma^2\gamma^3=\begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix}, \end{equation}
where $\sigma^k$ are the Pauli matrices. $\Gamma^A$ satisfies the relation $\Gamma^A\Gamma^B+\Gamma^B\Gamma^A=2\eta^{AB}I$, in which $\eta^{AB}=$ diag$(+,-,\cdots,-)$ is the six-dimensional Minkowski metric. The components of $\Omega_M$ are
\begin{equation}\label{connection} \Omega_{\mu}=\omega_{\mu},\;\;\; \Omega_{4}=0, \;\;\;\ \Omega_{5}=0, \end{equation}
\noindent where $\omega_{\mu}=\frac{1}{2} \omega_{\mu}^{ab}I_{ab}$ is the spin connection derived from the metric $g_{\mu\nu}=e_{\mu}^{a} e_{\nu}^{b} \eta_{ab}$, and lower case Latin indices $a,b=0,\cdots,3$ correspond to the flat tangent four-dimensional Minkowski space. Using Eqs. (\ref{connection}), the Lagrangian (\ref{Lpsi}) of the fermions then becomes
\begin{eqnarray}\label{Lpsi2} \mathcal{L}=\sqrt{-G} \{ \bar{\Psi} \Gamma^a e^{\mu}_{a} (\partial_{\mu} - \omega_{\mu} + A_{\mu})\Psi + \bar{\Psi}\Gamma^4(\partial_{4}+A_{4})\Psi + \bar{\Psi} \Gamma^5(\partial_{5} + A_{5})\Psi - g \phi \Psi ^{\dag} \Psi \}. \end{eqnarray}
We denote the Dirac operator on $R^2$ by $D_R$:
\begin{equation} \label{DiracOperator} D_R =\bar{\Gamma} \left \{ \Gamma^4(\partial_{4} + A_{4}) + \Gamma^5(\partial_{5} + A_{5})-g\phi \right \}, \end{equation}
where $\bar{\Gamma}=\Gamma^0 \Gamma ^1 \Gamma ^2 \Gamma ^3$, and expand any spinor $\Psi(x^{\mu},x^i)$ in a set of eigenvectors $\Theta_m(x^i)$ of this operator $D_R$,
\begin{equation} D_R\Theta_m(x^i)=\lambda_m\Theta_m(x^i) \;\;\; (i=4,5). \end{equation}
There may exist a set of discrete eigenvalues $\lambda_m$ with some separation. All these eigenvalues play the role of the masses of the corresponding four-dimensional excitations \cite{libanov}. We assume that the energy scales probed by a four-dimensional observer are smaller than the separation, and thus even the first non-zero level is not excited. So, we are interested only in the zero modes of $D_R$:
\begin{equation} \label{DiracEqOnS} D_R\Theta(x^i)=0. \end{equation}
This is just the Dirac equation on $R^2$ with gauge and vortex backgrounds. For fermionic zero modes, we can write
\begin{equation} \Psi(x^{\mu},x^i)=\psi(x)\Theta(x^i), \end{equation}
where $\psi$ and $\Theta$ satisfy
\begin{eqnarray} \bar{\Gamma}\Gamma^a e^{\mu}_{a} (\partial_{\mu} - \omega_{\mu} + A_{\mu})\psi(x) &=& 0, \nonumber\\ D_R\Theta(x^i) &=& 0.
\end{eqnarray}
The effective Lagrangian for $\psi$ then becomes
\begin{eqnarray} &&\int dx^4 dx^5 \sqrt{-G} \left \{ \bar{\Psi} \Gamma^A E^M_A (\partial_M - \Omega_M + A_M) \Psi\ -g\phi\Psi^{\dag}\Psi \right\} \nonumber\\ &&= \sqrt{-\det(g_{\mu\nu})} \bar{\psi} \Gamma^a e^{\mu}_a (\partial_{\mu} - \omega_{\mu} + A_{\mu}) \psi \int dx^4 dx^5 \Theta^\dag \Theta. \end{eqnarray}
Thus, to have the localization of gravity and a finite kinetic energy for $\psi$, the above integral must be finite. This may be achieved for some $\Theta(x^i)$ which does not diverge on the whole $R^2$ and converges to zero as $r$ tends to infinity.

\section{SIMPLE SITUATIONS}
In this section, to illustrate how the vortex background affects the fermionic zero modes, we first discuss the simple case in which the Higgs field $\phi$ depends only on $x^4$, and then solve the general Dirac equation for the vacuum Higgs field solution $\|\phi\|^{2}=v^2$.
\subsection{Case I: $\phi$ depends only on $x^4$}
Now we discuss a simple situation for $\phi$, i.e. $\phi=\phi(x^4)=\phi^1(x^4)+i \phi^2(x^4)$. In this case, Eq. (\ref{eAmu}) and Eq. (\ref{eAnu}) can be written as
\begin{equation}\label{Atheta} A_{4}=\frac{1}{e\|\phi\|^2} \left (\epsilon_{ab}\phi^{a}\partial_{4}\phi^{b} -\frac{1}{\|\phi\|^2}\epsilon_{ab}\phi^{a}\phi^{b}\phi^{c}\partial_{4}\phi^{c} \right ), \end{equation}
\begin{equation}\label{Aphi} A_{5}=-\frac{1}{2e}\partial_{4}\ln \|\phi\|^{2}. \end{equation}
Then the Dirac operator $D_R$ becomes
\begin{equation} \label{DiracOperator2} D_R =\bar{\Gamma} \left \{ \Gamma^4 \left[ \partial_{4} + A_{4}(x^4)- \Gamma^4 \Gamma^5 A_{5}(x^4) +\Gamma^4 g \phi(x^4) \right] + \Gamma^5\partial_{5} \right \}, \end{equation}
and the Dirac equation $D_R\Theta(x^4,x^5)=0$ is
\begin{equation} \label{DiracEqI} \left \{\bar{\Gamma} \Gamma^4 \left[ \partial_{4} + A_{4}(x^4)- \Gamma^4 \Gamma^5 A_{5}(x^4) +\Gamma^4 g \phi(x^4) \right ] + \bar{\Gamma} \Gamma^5 \partial_{5}\right \} \Theta(x^4,x^5)=0. \end{equation}
Here $\Theta(x^4,x^5)$ can be written in the following form:
\begin{equation} \Theta(x^4,x^5)=f(x^4)h(x^5), \end{equation}
where $h(x^5)=Const$ and $f(x^4)$ satisfies
\begin{equation} \label{DiracEqf} \left \{ \partial_{4} + A_{4}(x^4)- \Gamma^4 \Gamma^5 A_{5}(x^4) +\Gamma^4 g \phi(x^4) \right \} f(x^4)=0. \end{equation}
Solving this equation, one can easily obtain the formal solution
\begin{eqnarray} \label{fx4_1} f(x^4)=e^{-\int dx^4 \left \{ A_{4}(x^4)- i \gamma^5 A_{5}(x^4) + g \Gamma^4 \phi(x^4) \right\}}. \end{eqnarray}
This solution naturally leads to the Aharonov-Bohm phase. Considering $\phi=\phi(x^4)$ and integrating Eq. (\ref{gnonlinear01}) over the extra dimensions, one gets
\begin{equation} \label{WindingNumber} \partial_{4}\ln\|\phi\|^{2}=-\sum_{l}{W_{l}} \pm \int dx^4 (\|\phi\|^{2}-v^2), \end{equation}
where the $W_l$ are winding numbers. Making use of Eq. (\ref{Aphi}) and substituting Eq. (\ref{WindingNumber}) into Eq. (\ref{fx4_1}), we get the following form
\begin{eqnarray}\label{fx4WindingNumber} f(x^4)=e^{\frac{i\gamma^5}{2 e } \sum_{l}{W_{l}} -\int dx^4 \left \{ A_{4}(x^4) \pm\frac{i\gamma^5}{2 e} \int dx^4 (\|\phi\|^{2} - v^2) + g \Gamma^4 \phi(x^4) \right\}}. \end{eqnarray}
From the first term, $e^{\frac{i}{2e}\gamma^5\sum_{l}{W_{l}}}$, we see that the total topological charge $Q=\sum{W_{l}}$ contributes a phase factor to the zero mode $\Theta(x^i)=Cf(x^4)$. The topological charge is determined by the topological properties of the extra space manifold.
When we add a point at infinity, the non-compact $R^{2}$ can be compactified to a 2-sphere. In this case, the total topological charge is just the Euler characteristic number of the 2-sphere, i.e., $Q=2$. Eq. (\ref{gnonlinear01}) reveals that this topological phase originates from the symmetric phase of the Higgs field, while the non-topological one arises from the asymmetric phase; the latter is contained in $e^{\pm\frac{i}{2e}\gamma^5\int dx^4(\|\phi\|^{2}-v^2)}$. So the topological and the non-topological self-dual vortices both contribute a phase shift to the fermionic zero mode. As is well known, quantum topological and geometrical phases are ubiquitous in modern physics, appearing in cosmology, particle physics, modern string theory and condensed matter. In fact, according to Eq. (\ref{fx4WindingNumber}), we see that this phase shift is actually the quantum mechanical Aharonov-Bohm phase. This discussion can be generalized to the AB phase of non-abelian gauge theories, such as the Wilson and 't Hooft loops. Since the AB phase is fundamental to theories of anyons and to gauge fields, it is an important tool for studying the issues of confinement and spontaneous symmetry breaking.
\subsection{Case II: The vacuum solution}
For the vacuum solution of Eq. (\ref{gnonlinear01}), $\|\phi\|^{2}=\phi\phi^{\ast}=v^2$, which represents a circle $S^{1}$ in the extra space, according to Eq. (\ref{eAi+}) we see that the non-topological part $\frac{1}{2}\epsilon_{i j}\partial_{j}\ln(\phi\phi^{\ast})$ vanishes, and only the topological part is left. When the Higgs field is degenerate on the vacuum manifold, we have $A_{4}=A_{5}=0$, and the Dirac equation $D_R \Theta(x^4,x^5)=0$ reads
\begin{equation} \label{DiracEqII} \left \{\bar{\Gamma} \Gamma^4 \left ( \partial_{4} +g v \Gamma^4 \right ) + \bar{\Gamma}\Gamma^5\partial_{5} \right \} \Theta(x^4,x^5)=0. \end{equation}
Here $\Theta(x^4,x^5)=f(x^4)h(x^5)$, $h(x^5)$ is again a constant, and $f(x^4)$ satisfies the following equation
\begin{equation} \bar{\Gamma} \Gamma^4 \left ( \partial_{4} + g v\Gamma^4 \right ) f(x^4)=0. \end{equation}
Denoting
\begin{equation} f(x^4)=\left( \begin{array}{c} f_1(x^4) \\ f_2(x^4) \\ f_3(x^4) \\ f_4(x^4) \\ \end{array} \right), \end{equation}
one obtains the following two sets of differential equations
\begin{equation} \left \{ \begin{array}{c} \partial_{4} f_1(x^4) -i g v f_4(x^4) = 0, \\ \partial_{4} f_4(x^4) -i g v f_1(x^4) = 0; \\ \end{array} \right. \end{equation}
\begin{equation} \left \{ \begin{array}{c} \partial_{4} f_2(x^4) +i g v f_3(x^4) = 0, \\ \partial_{4} f_3(x^4) +i g v f_2(x^4) = 0. \\ \end{array} \right. \end{equation}
The solutions are
\begin{eqnarray} \left. \begin{array}{l} f_1(x^4) = C_1 e^{i Q x^4} + C_4 e^{- i Q x^4},\\ f_2(x^4) = C_2 e^{ Q x^4} + C_3 e^{- Q x^4},\\ f_3(x^4) = i(C_2 e^{ Q x^4} - C_3 e^{- Q x^4}),\\ f_4(x^4) = C_1 e^{i Q x^4} - C_4 e^{- i Q x^4},\\ \end{array} \right. \end{eqnarray}
where $Q=gv$. Now we see that $f_1(x^4)$ and $f_4(x^4)$ are plane wave functions. It is easy to see that, if the coupling constant $g=0$ or the vacuum expectation value $v=0$, the solution $f(x^{4})$ is simply a constant spinor. As shown in section II, $\phi=v=0$ corresponds to the symmetric phase and $\phi=v\neq0$ corresponds to the asymmetric phase. So different vortex backgrounds result in different zero modes.
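As a quick consistency check of these solutions, differentiating the first equation of the first set once more and substituting the second one decouples the system,
\begin{equation}
\partial_{4}^{2} f_{1}(x^4)=igv\,\partial_{4}f_{4}(x^4)=(igv)^{2}f_{1}(x^4)=-Q^{2}f_{1}(x^4),
\end{equation}
so that $f_{1}$, and likewise $f_{4}$, indeed oscillates with frequency $Q=gv$, in agreement with the plane wave form given above.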
The discussion above can also be generalized to more general cases. Usually the general Dirac equation is hardly solvable, but the two simple cases above provide us with a coarse insight into the fermionic zero modes in the vortex background.

\section{Conclusion}
In 5+1 dimensions, there are two classes of vortex solutions in the Abelian Higgs model: the topological vortex and the non-topological vortex. They can be described by a more accurate Bogomol'nyi self-duality equation, $B=\delta^{2}(\vec{\phi})J(\frac{\phi}{x})+\partial_{i} \partial_{i}\ln(\|\phi\|^{2})=\pm e(\|\phi\|^{2}-v^{2})$. The topological vortex arises from the symmetric phase of the Higgs field, while the non-topological vortex originates from the asymmetric phase. Through a simple case, it is shown that the vortex background contributes a phase shift to the fermionic zero mode in the 5+1 dimensional space-time. The phase is divided into two parts: one is related to the topological number of the extra space, and the other depends on the non-topological vortex solution. We then solve the general Dirac equation for the vacuum case; the symmetric and asymmetric phases of the Higgs field correspond to different fermion solutions.

\section{Acknowledgment}
This work was supported by the National Natural Science Foundation and the Doctor Education Fund of the Educational Department of the People's Republic of China.
\section{Introduction}
Regression discontinuity (RD) design originated with Thistlethwaite and Campbell (1960), who studied the effect of student scholarships on future academic outcomes. In RD design, to evaluate an intervention whose status is determined by whether a covariate exceeds a fixed known threshold, subjects with values just below the threshold are compared with those just above it, where the intervention status is as good as randomly assigned. RD design is widely applied by empirical researchers to estimate the treatment effect in the target population, similarly to other quasi-experimental methods. Applications of RD design are found in various empirical fields in economics such as labor, public, education, and development economics. Detailed literature surveys are found in Imbens and Lemieux (2008) and in Lee and Lemieux (2010). As there have been numerous empirical applications of RD design, methodological and theoretical extensions have been suggested in different directions, such as the case of fuzzy discontinuity (Hahn et al., 2001), the selection of bandwidth (Ludwig and Miller, 2007; Imbens and Kalyanaraman, 2012; Calonico et al., 2014; Arai and Ichimura, 2018), and different tests for estimation (Lee, 2008).

Our goal in this paper is to propose a new method for the estimation of counterfactual functions and heterogeneous causal effects, considering an RD design with multiple groups which have different thresholds. In the standard RD designs, one of the serious limitations is that only the intervention effect at the discontinuity point can be evaluated. Angrist and Rokkanen (2015) proposed a method for identification of the causal effects away from the cutoff. However, their approach assumes that the running variable is ignorable conditional on the other available predictors, which is different from our attempt to estimate the counterfactual functions themselves. Considering what kinds of assumptions and estimation methods are needed is an important task in this research.

In addition, we provide a method for optimization of the threshold as an application of our method. Most past studies have treated thresholds as fixed values and have not dealt with the threshold itself as an object of study. However, the real interest of researchers should lie not only in the evaluation of past interventions under a given threshold but also in an appropriate threshold setting as a support for decision-making for future interventions. Therefore, we develop a method to estimate an optimal threshold in terms of cost effectiveness.

Our methodological development is closely related to RD methods with multiple thresholds. Indeed, the empirical literature utilizes the standard RD design with a single threshold. However, it is not uncommon to have multiple thresholds in actual datasets. We often observe multiple thresholds for assigning one treatment in a target population. For example, it is often the case that local governments determine the cutoff value for running variables such as test scores, poverty indexes, birth weight, geolocation, and income. When different administrative districts each set a unique threshold for the admission test score, multiple thresholds exist in the target population (Lucas and Mbiti, 2014). Similarly, geographical divisions often set their own eligibility cutoff values for social welfare programs (Crost et al., 2014). In Japan, the age limits of local government programs that make medical expenses for children free vary across local governments.
In this way, countless situations involve multiple thresholds, while RD applications are mostly concentrated on the single-threshold method. There is scarce methodological literature that handles multiple-threshold situations. Past literature that deals with multiple thresholds includes Papay et al. (2011). Papay et al. (2011) shows how to incorporate multiple dimensions of running variables in the RD design with a single dataset, which is different from our model setup with multiple datasets. The literature has also moved to situations where thresholds or cutoff points are unknown to researchers (Henderson et al., 2014; Porter and Yu, 2015; Chiou et al., 2018). Our method clearly differs from their work, as we assume the situation where the value of the cutoff is observed in the datasets.

This paper is organized as follows. In Section 2 we describe the standard RD design setting as the basis of the proposed method. In addition, we provide the details of our design, in which a special structure with multiple groups with different thresholds makes it possible to estimate counterfactuals and causal effects. In Section 3 we propose a new AIPW kernel estimator making the best use of the observed data in our design. In Section 4 we investigate the asymptotic properties of the estimator proposed in Section 3 and show its double robustness. In Section 5 we provide a method to estimate an optimal threshold as an application of our method. In Section 6 we report a simulation studying the properties of the proposed estimator in finite samples. In Section 7 we summarize this paper and discuss the future outlook of this research.

\section{Model}
In this section, we briefly summarize the framework and theory of the conventional regression discontinuity design. Then we extend the discussion to the case with multiple thresholds and propose our method to estimate the counterfactuals that are unobservable in the conventional RD designs, and the heterogeneous causal effects obtained by using them. In this paper we consider only the situation where there are just two groups, for simplicity. Our discussion and notation are based on Imbens and Lemieux (2008) and the modern literature using the Rubin Causal Model (RCM) setup with the concept of potential outcomes (Rubin, 1974; Holland, 1986).

\subsection{Regression discontinuity design\label{sec:}}
As is the usual case with the RCM, consider the situation in which there are two types of interventions, a special intervention (i.e. treatment) and a normal intervention (i.e. control), and researchers are interested in the causal effect of the intervention. Corresponding to those two types of interventions, there are two potential outcomes for each unit. Denote by $Y_{ji}$ the potential outcomes of unit $i\in N$, where $N=\{1,...,n\}$ is a set of $n$ units; the potential outcome for treatment is $Y_{1i}$ and the potential outcome for control is $Y_{0i}$. Now let $Z_i\in\{0,1\}$ denote the intervention assignment indicator of unit $i$, which is 1 when unit $i$ is exposed to the treatment and 0 otherwise. The observed outcome variable can be expressed as
\begin{eqnarray} Y_i & = & Z_i Y_{1i}+(1-Z_i) Y_{0i}= \begin{cases} Y_{1i}~~~if~Z_i=1\\ Y_{0i}~~~if~Z_i=0 \end{cases} (i=1,...,n). \end{eqnarray}
In addition, let $\bm{W_i} \in \mathbb{R}^m$ denote a finite-dimensional vector of pretreatment covariates other than $X_i$. In the setting of RD designs, the type of intervention allocated to unit $i$ is determined by whether a running variable $X$ is above a threshold $c$.
RD designs are generally divided into two types, the sharp RD (SRD) design and the fuzzy RD design, depending on how the assignment of intervention is determined. In this study we limit the discussion to the sharp RD design, in which the assignment $Z_i$ is a deterministic function of the running variable $X_i$ defined as \begin{eqnarray} Z_i=1(X_i>c). \label{eq:z_func} \end{eqnarray} Under this rule, all units with $X_i$ above $c$ are exposed to treatment and the others to control. In the sharp RD design, although the running variable $X_i$ does not overlap between the treatment group and the control group, the assignment $Z_i$ depends only on $X_i$; therefore missing at random (MAR) (Rubin, 1976), that is, \begin{eqnarray} Y_{1i},Y_{0i} \mathop{\perp\!\!\!\perp} Z_i|X_i, \label{eq:strig} \end{eqnarray} is satisfied. Under MAR, if the models of $E(Y_1|X)$ and $E(Y_0|X)$ are parametric, $E(Y_1|X)$ can be extrapolated below the threshold and $E(Y_0|X)$ above it; hence $E[Y_1-Y_0|X=a]$ at any point $X=a$, and therefore $E[Y_1-Y_0]$, can be estimated. Nonparametric regression, however, does not permit extrapolation, and only $E(Y_1|X)$ for $X>c$, $E(Y_0|X)$ for $X<c$, and their difference at the discontinuity point, that is, the local average treatment effect (LATE) \begin{align} \begin{split} \tau_{SRD}&=E[Y_{1}-Y_{0}|X=c]=E[Y_{1}|X=c]-E[Y_{0}|X=c]\\ &=E[Y|X=c,Z=1]-E[Y|X=c,Z=0] \end{split} \end{align} can be estimated. This is the main goal of RD designs. Since the assignment switches deterministically at $c$, the conditional expectation of the observed outcome $Y_i$ given $X_i$ is discontinuous at $c$. Thus $\tau_{SRD}$ can be written as \begin{align} \tau_{SRD}=\lim_{x\downarrow c}E[Y|X=x]-\lim_{x\uparrow c}E[Y|X=x], \end{align} and obtained by point estimation of the left and right limits. The RD design is useful in many practical cases; however, one of its major limitations is that only the LATE at the discontinuity point can be estimated, so the result may lack generalizability (Lee and Lemieux, 2010). This problem stems from the structure that there is no overlap in $X_i$ between the treatment and control groups, so the counterfactual cannot be obtained. To solve this problem at least partially, we propose a new method for when different datasets with different thresholds are available.
\subsection{Regression discontinuity design with two groups}
In this paper, to estimate the potential outcomes unobserved in the standard RD design (i.e., the counterfactuals), we consider RD designs with multiple groups that have different thresholds. We assume that the same intervention is provided to several groups (e.g., geographical regions), that those groups have different thresholds on the same running variable, and that the type of intervention each unit receives is determined by the threshold of the group to which it belongs. The other basic settings are the same as in the standard RD design described above: there are two types of intervention, treatment and control, with two corresponding potential outcomes, and we focus only on the sharp RD design. In the following, we consider only the case with two groups; each unit belongs to one of the two.
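As a point of reference before formalizing the two-group design, the LATE of the previous subsection is typically estimated by two one-sided local linear fits at the threshold. The following minimal sketch illustrates this baseline; the triangular kernel and the helper names are our illustrative choices, not part of the procedures proposed below.
\begin{verbatim}
# Baseline sketch: sharp-RD LATE via one-sided local linear fits at c.
# Kernel choice and function names are illustrative assumptions.
import numpy as np

def local_linear_at(x0, X, Y, h):
    """Local linear fit of E[Y|X] evaluated at x0 (triangular kernel)."""
    u = (X - x0) / h
    w = np.maximum(1.0 - np.abs(u), 0.0)            # kernel weights
    G = np.column_stack([np.ones_like(X), X - x0])  # G(X_i - x0) = (1, X_i - x0)
    WG = G * w[:, None]
    alpha = np.linalg.solve(G.T @ WG, WG.T @ Y)     # weighted least squares
    return alpha[0]                                  # intercept = fit at x0

def late_srd(X, Y, c, h):
    """tau_SRD: right limit minus left limit of E[Y|X] at threshold c."""
    above, below = X > c, X <= c
    return (local_linear_at(c, X[above], Y[above], h)
            - local_linear_at(c, X[below], Y[below], h))
\end{verbatim}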
Let the group assignment indicator for unit $i \in N$, where $N=N_0\cup N_1=\{1,...,n_0,n_0+1,...,n_0+n_1\}$ with $N_0=\{1,...,n_0\}$ and $N_1=\{n_0+1,...,n_0+n_1\}$, be denoted by $D_i\in\{0,1\}$, which takes 0 if $i\in N_0$ and 1 if $i\in N_1$. In addition, $c_k~(k=0,1;~c_0<c_1)$ denotes the thresholds of the two groups: $c_0$ is the threshold of group $N_0$ and $c_1$ that of group $N_1$. Using the subscript $d_i \in \{0,1\}$ for the group to which unit $i$ belongs, the intervention assignment function is \begin{eqnarray} Z_i=1(X_i>c_{d_i}). \end{eqnarray} Under this rule, the observable outcome for unit $i$ from $N_0$ is $Y_{0i}$ for $X_i \leq c_0$ and $Y_{1i}$ for $X_i>c_0$, while for unit $i$ from $N_1$ it is $Y_{0i}$ for $X_i\leq c_1$ and $Y_{1i}$ for $X_i>c_1$. Thus, different potential outcomes are observed depending on the group for $c_0<X_i<c_1$, while $Y_{0i}$ for $X_i<c_0$ and $Y_{1i}$ for $X_i>c_1$ are observed in both groups, as shown in Figure~\ref{fig:po}. \begin{figure}[tbhp] \centering \includegraphics[width=90mm]{0Data.pdf} \caption{Observed and unobserved outcomes. The white areas show where outcomes can be observed and the gray areas where they cannot. For $X<c_0$, $Y_0$ can be observed in both data sets and $Y_1$ cannot. For $X>c_1$, on the contrary, only $Y_1$ can be observed and $Y_0$ cannot. For $c_0<X<c_1$, $Y_1$ for $D=0$ and $Y_0$ for $D=1$ are observed, while $Y_0$ for $D=0$ and $Y_1$ for $D=1$ are missing. This study utilizes this symmetric structure for estimation of the counterfactual functions.} \label{fig:po} \end{figure} Now, let the conditional expectation functions given $X_i$ and the group assignment be denoted by \begin{eqnarray} E[Y_j |X=x,D=k]=g_{jk}(x)~~~(j=0,1; k=0,1). \label{eq:func_ind} \end{eqnarray} This expression allows the regression function to differ across groups; however, we are not interested in the individual functions for each group. Our main interest lies in the functions for the common target population: \begin{eqnarray} E[Y_j |X=x]=g_j(x)~~~(j=0,1) \label{eq:cefun} \end{eqnarray} In particular, between the two thresholds both potential outcomes $Y_1$ and $Y_0$ are observed, so it should be possible to estimate $g_1(x)$ and $g_0(x)$ for $c_0<X<c_1$, where they overlap. If we can estimate them, we can also estimate the average treatment effect at an arbitrary point between the two thresholds, defined as \begin{align} \begin{split} \tau(x) &= E[Y_1-Y_0|X=x]\\ &= E[Y_1|X=x]-E[Y_0|X=x] \\ &= E[Y|X=x,Z=1]-E[Y|X=x,Z=0] \\ &= g_1(x)-g_0(x) ~~~ (c_0<x<c_1), \end{split} \label{eq:ce} \end{align} as shown in Figure~\ref{fig:ce}. \begin{figure}[tbhp] \centering \includegraphics[width=0.6\linewidth ]{causal_effect.pdf} \caption{Causal effect at the point $X=x_0$. If we can estimate $g_1(x)$ for $X>c_0$ and $g_0(x)$ for $X<c_1$, the causal effect can be defined as the function of $X$ given by $\tau(x) = g_1(x)-g_0(x)$ for $c_0<X<c_1$.} \label{fig:ce} \end{figure} Nevertheless, what can be estimated from the data is only function (\ref{eq:func_ind}); we cannot estimate function (\ref{eq:cefun}) directly.
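To make the observation pattern of Figure~\ref{fig:po} concrete, the following sketch simulates the assignment mechanism. The group labels are drawn uniformly here purely for illustration (our simulation study in Section 6 instead uses a logit selection model), and the distributional parameters are those of Section 6.
\begin{verbatim}
# Sketch of the two-threshold observation pattern of Figure fig:po.
# Uniform group labels are an illustrative simplification.
import numpy as np

rng = np.random.default_rng(0)
n, c0, c1 = 1000, 2.0, 6.0
X = rng.normal(4.0, 1.7, size=n)       # running variable
D = rng.integers(0, 2, size=n)         # group label in {0, 1}
c_d = np.where(D == 1, c1, c0)         # threshold faced by each unit
Z = (X > c_d).astype(int)              # sharp assignment Z_i = 1(X_i > c_{d_i})

# Between the thresholds, Y_1 is observed in group D=0 and Y_0 in group
# D=1, so both g_1(x) and g_0(x) are identified on (c0, c1).
between = (X > c0) & (X < c1)
print("share of treated units between thresholds:", Z[between].mean())
\end{verbatim}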
The conditional expectation (\ref{eq:cefun}) can be rewritten as \begin{align} \begin{split} E[Y_1|X=x]=&E[Y_1|X=x,D=0]Pr(D=0|X=x)\\ &\qquad+E[Y_1|X=x,D=1]Pr(D=1|X=x)\\ E[Y_0|X=x]=&E[Y_0|X=x,D=0]Pr(D=0|X=x)\\ &\qquad+E[Y_0|X=x,D=1]Pr(D=1|X=x) \end{split} \label{eq:rece} \end{align} and if we knew all the factors on the right-hand side of equation (\ref{eq:rece}), we could estimate function (\ref{eq:cefun}) accordingly. However, since the observed potential outcomes are limited as described above, what can be estimated directly from the data is only \begin{align} \begin{split} E[Y_0|X=x,D=0]=g_{00}(x)~~~(x<c_0),~~~E[Y_1 |X=x,D=0]=g_{10}(x)~~~(x>c_0)\\ E[Y_0|X=x,D=1]=g_{01}(x)~~~(x<c_1),~~~E[Y_1 |X=x,D=1]=g_{11}(x)~~~(x>c_1) \end{split} , \end{align} and the remaining parts cannot in general be estimated directly, as shown in Figure \ref{fig:obs_po}. \begin{figure}[tbhp] \begin{minipage}{0.5\hsize} \centering \includegraphics[width=\linewidth]{c0.pdf} \subcaption{} \end{minipage} \begin{minipage}{0.5\hsize} \centering \includegraphics[width=\linewidth]{c1.pdf} \subcaption{} \end{minipage} \caption{Conditional expectation functions of the potential outcomes. The solid lines indicate the parts that can be estimated from the data and the dashed lines the parts that cannot. The left panel is for the case using $D=0$ and the right panel for $D=1$.} \centering \label{fig:obs_po} \end{figure} Therefore, whereas $g_0(x)$ for $x<c_0$ and $g_1(x)$ for $x>c_1$ can be estimated according to equation (\ref{eq:rece}), $g_{00}(x)$ and $g_{11}(x)$ for $c_0<x<c_1$ cannot, and thus we cannot estimate $g_0(x)$ and $g_1(x)$ between the thresholds, where they are of greatest interest. In what follows, we consider what kind of assumption is necessary for unbiased estimation of the functions $g_j(x)$. First, consider the most optimistic situation, where the groups are randomly assigned to units and the regression functions are independent of the group assignment. In this case $E[Y_j|X=x,D=k]=E[Y_j|X=x]$ holds, and using data from only one group, either $D=0$ or $D=1$, does not generate bias. One situation in which this assumption holds is when a kind of randomized controlled trial (RCT) can be conducted, with units randomly distributed to the two groups with different thresholds. However, in medicine and the social sciences such as economics, random assignment is often infeasible for structural or ethical reasons. In the following, we therefore investigate the case where neither random group assignment nor group-independent conditional expectation functions can be assumed; that is, \begin{align} g_{jk}(x)= E[Y_j|X=x,D=k]\not=E[Y_j|X=x]=g_j(x). \end{align} This means there is selection bias between the two groups, and a standard approach can yield biased estimates. In order to achieve unbiased estimation of the conditional expectation functions, we additionally assume ignorability. \begin{assumption} (Ignorability)\\ The group assignment variable $D$ depends only on the covariates $X$ and $W$, not on the outcome variables $Y_0$ and $Y_1$: \begin{align} Pr(D|Y_0,Y_1,X,W)=Pr(D|X,W). \label{eq:ign} \end{align} In the form of conditional independence given $X$ and $W$, \begin{align} Y_1,Y_0 \mathop{\perp\!\!\!\perp} D|X,W. \end{align} \label{as:ign} \end{assumption} This assumption can also be rewritten in another way using Bayes' theorem.
\begin{align} Pr(Y_0,Y_1|D,X,W)=Pr(Y_0,Y_1|X,W) \end{align} In this form it can be interpreted as saying that, given the covariates $X$ and $W$, the joint distribution of $Y_0$ and $Y_1$ is independent of the group assignment $D$. Under this assumption, the conditional expectations satisfy \begin{align} \begin{split} E(Y_0|X) &= E_{W|X}[E(Y_0|X,W)]\\ &= E_{W|X}[E(Y_0|X,W,D=1)]\\ &= E_{W|X}[E(Y_0|X,W,D=1,Z=0)]\quad(X<c_1)\\ E(Y_1|X) &= E_{W|X}[E(Y_1|X,W)]\\ &= E_{W|X}[E(Y_1|X,W,D=0)]\\ &= E_{W|X}[E(Y_1|X,W,D=0,Z=1)]\quad(X>c_0) \end{split} . \end{align} However, when the covariates $W$ are high-dimensional, as is often the case, correctly identifying a parametric functional form is mostly impracticable; furthermore, nonparametric regression, including local linear kernel regression, faces the problem known as the curse of dimensionality\footnote{The curse of dimensionality is the phenomenon that the amount of data required for estimation increases exponentially with the number of explanatory variables (Hoshino, 2009). More specifically, letting $d$ denote the number of dimensions, the asymptotic mean squared error is proportional to $N^{-4/(d+4)}$ (H\"{a}rdle et al., 2004).}. To avoid these problems, we introduce the propensity score. The propensity score, proposed by Rosenbaum and Rubin (1983), enables covariate adjustment through a single variable that aggregates the information of multiple covariates; it is the coarsest one-dimensional balancing score\footnote{A balancing score $b(x)$ is a function of the observed covariates $x$ such that the conditional distribution of $x$ given $b(x)$ is independent of the assignment $z$; that is, \begin{align} x\mathop{\perp\!\!\!\perp} z|b(x). \end{align} Balancing scores are not uniquely determined; the coarsest balancing score, i.e., the propensity score, is a function of every other balancing score (Rosenbaum and Rubin, 1983).}. In standard propensity score analysis, the selection probability of missingness in the context of missing-data analysis, or of treatment assignment in the context of causal inference, is used as the propensity score. In this study, by contrast, since the group assignment determines which potential outcome $Y_j$ is observed, the selection probability of $D$ given the covariates $X$ and $W$ is regarded as the propensity score. Under the ignorability assumption (\ref{eq:ign}), we can estimate the conditional expectations as \begin{align} \begin{split} &E_{X,W}\left[E_{D|X,W}\left[\frac{D}{E(D|X,W)}E(Y_0|X,D=1)\right]\right]\\ &\quad= E_{X,W}\left[E_{D|X,W}\left[\frac{D}{E(D|X,W)}\right]E(Y_0|X,D=1)\right]\\ &\quad= E[Y_0|X]\quad(X<c_1)\\ &E_{X,W}\left[E_{D|X,W}\left[\frac{D}{E(D|X,W)}E(Y_1|X,D=0)\right]\right]\\ &\quad= E_{X,W}\left[E_{D|X,W}\left[\frac{D}{E(D|X,W)}\right]E(Y_1|X,D=0)\right]\\ &\quad= E[Y_1|X]\quad(X>c_0) \end{split} . \end{align} The specific estimation procedure is described in the next section. \section{Estimation} Estimation in conventional RD designs has been treated as a nonparametric estimation problem, since misspecification of the functional form may bias the estimated causal effect (Hahn et al., 2001; Lee and Lemieux, 2010). We therefore consider nonparametric estimation of $g_j(x)$, in particular with the local linear regression model, taking advantage of the fact that $X$ is a one-dimensional variable.
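As a sketch of the propensity-score step just described, the group-selection probability $Pr(D=1|X,W)$ can be fitted with a logit model. The use of scikit-learn below is an illustrative choice, not part of the proposed procedure.
\begin{verbatim}
# Sketch: estimate the propensity score pi_i = Pr(D_i = 1 | X_i, W_i)
# under a logit model; scikit-learn is an illustrative choice.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_propensity(X, W, D):
    """Return hat-pi_i from a logit model fitted by maximum likelihood."""
    features = np.column_stack([X, W])   # W is an (n, m) covariate matrix
    model = LogisticRegression().fit(features, D)
    return model.predict_proba(features)[:, 1]

# Inverse probability weights: 1/pi_hat for D=1 units and 1/(1-pi_hat)
# for D=0 units, as used in the estimating equations below.
# pi_hat = fit_propensity(X, W, D)
# w_ipw = np.where(D == 1, 1.0 / pi_hat, 1.0 / (1.0 - pi_hat))
\end{verbatim}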
Note that, since the purpose of this research is the estimation of the counterfactuals between the two thresholds and of the causal effects based on them, it would suffice to estimate the regression functions between the thresholds. However, if the estimation target were limited to that interval, the poor boundary behavior of kernel regression would arise in the neighborhood of the thresholds. Since data exist outside the thresholds in this design, we use them to improve the stability of estimation; the estimation target is therefore not limited to the interval between the thresholds. Now consider a nonparametric regression model $Y_i=g(X_i)+\varepsilon_i$, where $g(x)$ is an unknown smooth function. The local linear estimate of $g(x)$ is formed by minimizing \begin{align} \sum_{i=1}^n K_h(X_i-x)[Y_i-\bm{G}(X_i-x)^T\bm{\alpha}]^2 \label{eq:llr} \end{align} where $K_h(X_i-x)=K((X_i-x)/h)/h$ is the kernel weight with bandwidth $h$, $\bm{\alpha} \equiv (\alpha_0(x),\alpha_1(x))^T$ and $\bm{G}(X_i-x)\equiv(1,X_i-x)^T$. The estimated function is $\hat{g}(x)=\hat{\alpha}_0(x)$. With complete data, the $\bm{\alpha}$ minimizing (\ref{eq:llr}) recovers the regression function; in our design, however, the missingness induced by the threshold structure makes it biased in general. Focusing on the estimation of $E(Y_0|X)$, the data with $D=1$ are complete cases for $X<c_1$. Consistent estimation of $E(Y_0|X)$ for $X<c_1$ can be implemented using the $D=1$ data and the inverse probability weighted (IPW) method, or the augmented inverse probability weighted (AIPW) method proposed by Wang et al. (2010), which is more robust than IPW. The same applies to the estimation of $E(Y_1|X)$ for $X>c_0$ with the $D=0$ data. However, these methods ignore the other data ($D=0$ for $g_0(x)$ or $D=1$ for $g_1(x)$) except in the estimation of the selection probability model, although those data are available. In particular, the observed $D=0$ data for $X<c_0$ ((c) in Figure \ref{fig:po}) and $D=1$ data for $X>c_1$ ((d) in Figure \ref{fig:po}), which include both the auxiliary variables and even the outcomes, could be used for estimation in the neighborhood of the thresholds $c_0$ and $c_1$, but their information is totally ignored. These methods are not efficient in this respect. We therefore propose a more efficient method capable of exploiting the information even from (c) and (d) in Figure \ref{fig:po}. \subsection{Proposed doubly robust estimation} We develop a new estimation method for the design of this study based on the AIPW kernel regression proposed by Wang et al. (2010). As mentioned in the previous section, we perform covariate adjustment using the propensity score under the ignorability assumption (\ref{eq:ign}) in order to achieve unbiased estimation. Let $\pi_i=Pr(D_i=1|X_i,W_i)$ denote the data selection probability used as the propensity score, and assume a parametric model: \begin{align} \pi_i=\pi(X_i,W_i;\bm{\gamma}), \label{eq:model_ps} \end{align} where $\bm{\gamma}$ is a finite-dimensional parameter vector. This model can be specified as a logit or probit model, for example, and we estimate $\hat{\pi}_i=\pi(X_i,W_i;\hat{\bm{\gamma}})$ using $\hat{\bm{\gamma}}$, the maximum likelihood estimate of $\bm{\gamma}$. By weighting the units by the inverse of the estimated $\hat{\pi}_i$, or of the true selection probability $\pi_i$ if known, we obtain an inverse probability weighted (IPW) estimator. Denote by $\delta_j(X_i,W_i)$ an arbitrary regression function of $X_i$ and $W_i$.
To estimate $\delta_j(X_i,W_i)$ we postulate a parametric model \begin{align} E(Y_{ji}|X_i,W_i)=\delta_j(X_i,W_i;\eta_j), \label{eq:model_reg} \end{align} where $\eta_j$ is a finite-dimensional parameter vector. We can estimate $\hat{\delta}_j(X_i,W_i;\hat{\eta}_j)$ using $\hat{\eta}_j$, the estimate of $\eta_j$ obtained by a standard method such as OLS from the data satisfying $Z=j$; $\hat{\eta}_0$ is estimated from the parts shown as (a) and (c) in Figure \ref{fig:po} and $\hat{\eta}_1$ from (b) and (d). Now we define the estimating equation for $g_0(\cdot)$ as \begin{align} \sum_{i \in N|X_i<c_1}[U_{IPW,i}^0(\bm{\alpha}^0)-A_i^0(\bm{\alpha}^0)] = 0, \label{eq:pro0} \end{align} where \begin{align} \begin{split} U^0_{IPW,i}(\bm{\alpha^0}) =& D_i\left[(1-Z_i) \frac{D_i}{\hat{\pi}_i}K_{h_0}(X_i-x)V_{0i}^{-1}\bm{G}(X_i-x)\left[Y_i-\bm{G}(X_i-x)\bm{\alpha^0}\right]\right] \\ & + (1-D_i)\left[(1-Z_i)\frac{1-D_i}{1-\hat{\pi}_i}K_{h_0}(X_i-x)V_{0i}^{-1}\bm{G}(X_i-x)\right.\\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. \times\left[Y_i-\bm{G}(X_i-x)\bm{\alpha^0}\right]\right] \\ \label{eq:U_pro} \end{split} \end{align} \begin{align} \begin{split} A^0_i(\bm{\alpha^0}) =& D_i\left[\left((1-Z_i)\frac{D_i}{\hat{\pi}_i}-1\right)K_{h_0}(X_i-x)V_{0i}^{-1}\bm{G}(X_i-x)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. \times\left[\hat{\delta}_0(X_i,W_i;\hat{\eta}_0)-\bm{G}(X_i-x)\bm{\alpha^0}\right]\right]\\ &+ (1-D_i)\left[\left((1-Z_i)\frac{1-D_i}{1-\hat{\pi}_i}-1\right)K_{h_0}(X_i-x)V_{0i}^{-1}\bm{G}(X_i-x)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. \times \left[\hat{\delta}_0(X_i,W_i;\hat{\eta}_0)-\bm{G}(X_i-x)\bm{\alpha^0}\right]\right] \label{eq:A_pro} \end{split} \end{align} and for $g_1(\cdot)$ as \begin{align} \sum_{i \in N|X_i>c_0}\left[U_{IPW,i}^1(\bm{\alpha}^1)-A_i^1(\bm{\alpha}^1)\right] =0 \label{eq:pro1} \end{align} where \begin{align} \begin{split} U^1_{IPW,i}(\bm{\alpha^1}) &=D_i \left[ Z_i\frac{D_i}{\hat{\pi}_i}K_{h_1}(X_i-x)V_{1i}^{-1}\bm{G}(X_i-x)\left[Y_i-\bm{G}(X_i-x)\bm{\alpha^1}\right]\right] \\ &\qquad + (1-D_i)\left[ Z_i\frac{1-D_i}{1-\hat{\pi}_i}K_{h_1}(X_i-x)V_{1i}^{-1}\bm{G}(X_i-x)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left. \times \left[Y_i-\bm{G}(X_i-x)\bm{\alpha^1}\right]\right] \\ A^1_i(\bm{\alpha^1}) & =D_i\left[Z_i \left(\frac{D_i}{\hat{\pi}_i}-1\right)K_{h_1}(X_i-x)V_{1i}^{-1}\bm{G}(X_i-x)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left. \times\left[\hat{\delta}_1(X_i,W_i;\hat{\eta}_1)-\bm{G}(X_i-x)\bm{\alpha^1}\right]\right]\\ &\qquad + (1-D_i)\left[\left(Z_i\frac{1-D_i}{1-\hat{\pi}_i}-1\right)K_{h_1}(X_i-x)V_{1i}^{-1}\bm{G}(X_i-x)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left. \times\left[\hat{\delta}_1(X_i,W_i;\hat{\eta}_1)-\bm{G}(X_i-x)\bm{\alpha^1}\right]\right]\\ \end{split} \end{align} where $V_{ji}=V[\bm{G}(X_i-x)^T \bm{\alpha}^j;\zeta_j]$ with a known working variance function $V(\cdot, \cdot)$ and an unknown finite-dimensional parameter $\zeta_j$. The $\bm{\alpha}^j=(\alpha_0^j(x),\alpha_1^j(x))$ solving equation (\ref{eq:pro0}) or (\ref{eq:pro1}) is the local linear estimator of $g_j(x)$. The consistency of the estimation is guaranteed even if $V_j$ is chosen arbitrarily, under certain conditions (Hoshino, 2009).
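Because the estimating equation (\ref{eq:pro0}) is linear in $\bm{\alpha}^0$ once $\hat{\pi}_i$ and $\hat{\delta}_0$ are plugged in (and a constant working variance $V$ cancels), it can be solved in closed form at each point $x$. The following is a minimal sketch of this computation; the kernel choice and all function names are our illustrative assumptions.
\begin{verbatim}
# Sketch: doubly robust local linear estimate of g_0 at a point x,
# solving sum_i [U_i - A_i] = 0 from eq. (pro0). With constant working
# variance V, the equation reduces to a weighted least squares system.
import numpy as np

def solve_dr_g0(x, X, Y, Z, D, pi_hat, delta0_hat, h, c1):
    keep = X < c1                         # only units with X_i < c_1 enter
    Xs, Ys, Zs, Ds = X[keep], Y[keep], Z[keep], D[keep]
    pi, d0 = pi_hat[keep], delta0_hat[keep]

    u = (Xs - x) / h
    K = np.maximum(1.0 - np.abs(u), 0.0) / h          # triangular kernel
    G = np.column_stack([np.ones_like(Xs), Xs - x])   # G(X_i - x)

    # IPW weight on the observed residual; it is zero whenever Y_0 is
    # missing (Z=1), in which case the augmented term imputes delta_0.
    w = (1 - Zs) * np.where(Ds == 1, 1.0 / pi, 1.0 / (1.0 - pi))
    target = w * (Ys - d0) + d0      # w_i (Y_i - delta_0i) + delta_0i

    KG = G * K[:, None]
    alpha = np.linalg.solve(G.T @ KG, KG.T @ target)
    return alpha[0]                  # hat-g_0(x) = hat-alpha_0(x)
\end{verbatim}
Here, combining $U^0_{IPW,i}-A^0_i$ for both values of $D_i$ yields the single weighted residual $w_i(Y_i-\hat{\delta}_{0i})+\hat{\delta}_{0i}-\bm{G}(X_i-x)^T\bm{\alpha}^0$ used above; the estimator of $g_1$ follows analogously with $Z_i$ in place of $1-Z_i$ and the restriction $X_i>c_0$.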
If we estimate $\zeta_0$ from the data, we can use the inverse probability weighted moment equations $\sum_{l=1}^n D_l\hat{\pi}_l^{-1} V_{0l}^{(1)} \left[\left\{Y_l - \hat{\alpha}^0_{0,l}(\zeta_0)\right\}^2 - V\left\{\hat{\alpha}^0_{0,l}(\zeta_0), \zeta_0\right\}\right] = 0$, where $V_{0l}^{(1)} = \partial V\{\hat{\alpha}^0_{0,l}(\zeta_0); \zeta_0\}/\partial \zeta_0$ and $\hat{\bm{\alpha}}^0_l(\zeta_0) = \{\hat{\alpha}^0_{0,l}(\zeta_0), \hat{\alpha}^0_{1,l}(\zeta_0)\}^T$ solves (\ref{eq:pro0}) with $x = X_l$, $l = 1,\dots,n$. We can estimate $\zeta_1$ in a similar way. The estimated conditional expectation function is $\hat{g}_j(x)=\hat{\alpha}_0^j(x)$. The first term of equations (\ref{eq:pro0}) and (\ref{eq:pro1}) is what constitutes the IPW estimating equation $\sum_i U_{IPW,i}^j(\bm{\alpha}^j)=0$, and the second term $A_i^j(\bm{\alpha}^j)$ is called the augmented term. We now investigate the properties of the estimating equations, focusing on the one for $g_0(\cdot)$. These equations allow us to use the $D=0$ data in addition to the $D=1$ data. When $D_i=1$, only the first terms on the right-hand sides of equations (\ref{eq:U_pro}) and (\ref{eq:A_pro}) remain and the second terms equal 0, so the estimating equation coincides with the one proposed by Wang et al. (2010). When $D_i=0$, the second terms remain and the first terms equal 0. For a unit $i\in N_0$, $Z_i$ differs depending on whether $X_i\leq c_0$ or $X_i>c_0$. If $X_i\leq c_0$, i.e., $Z_i=0$, complete data including the outcome exist, so the outcome and covariates enter the estimation just as in the $D_i=1$ case, with the units weighted by the inverse of $1-\hat{\pi}_i$. On the other hand, if $X_i>c_0$, i.e., $Z_i=1$, the potential outcome $Y_{0i}$ is missing but the covariates are observed. In this case, while $U_{IPW,i}$ equals 0 because $1-Z_i=0$, the augmented term $A_{i}$ remains with weight $-1$, so the covariate information can still be exploited. Moreover, if only the $D=0$ data were used to estimate the parameter $\eta_0$ in $\delta_0(X_i,W_i;\eta_0)$, then, since the potential outcomes $Y_0$ are observed only for $X_i\leq c_0$, applying the estimated parameters to units with $X_i>c_0$ would be an extrapolation, which is undesirable. In equation (\ref{eq:pro0}), however, units with $D_i=1$ and $X_i>c_0$ are weighted by the selection probability and included together with the $D_i=0$ data, so the procedure can be interpreted as an interpolation. Figure \ref{fig:model} shows how units enter the estimation depending on $D_i$ and $Z_i$. \begin{figure}[htbp] \begin{center} \begin{minipage}[c]{1\linewidth} \centering \includegraphics[width=0.8\linewidth]{modelD_0.pdf} \subcaption{} \end{minipage}\\ \begin{minipage}{1\linewidth} \vspace{7mm} \end{minipage} \\ \begin{minipage}[c]{1\linewidth} \centering \includegraphics[width=0.8\linewidth]{modelD_1.pdf} \subcaption{} \end{minipage} \end{center}  \caption{How subjects are included in the proposed estimating equation for $g_0(x)$, depending on $D_i$ and $Z_i$. Here $u^0_{IPW,i}=K_{h_0}(X_i-x)V_{0i}^{-1}\bm{G}(X_i-x)\left[Y_i-\bm{G}(X_i-x)\bm{\alpha^0}\right]$ and $a^0_i=K_{h_0}(X_i-x)V_{0i}^{-1}\bm{G}(X_i-x)\left[\delta_0(X_i,W_i)-\bm{G}(X_i-x)\bm{\alpha^0}\right]$. The top panel is for $D_i=0$ and the bottom panel for $D_i=1$.
} \label{fig:model} \end{figure} The estimators solving equations (\ref{eq:pro0}) and (\ref{eq:pro1}) also possess the double-robustness shared by other AIPW estimators, including the one proposed by Wang et al. (2010): the estimator is consistent when either of the following two conditions is satisfied (but not necessarily both): (i) the selection probability model is correctly specified, or (ii) the regression function on all covariates is correctly specified. The double-robustness is proved in the next section. \subsection{Bandwidth selection} The appropriate choice of bandwidth is an important issue in kernel regression. Least squares cross validation (LSCV) is one of the most widely used bandwidth selection methods (Li and Racine, 2007). Let $\hat{g}_{0,-i}(X_i)$ and $\hat{g}_{1,-i}(X_i)$ denote the leave-one-out local linear estimators of $g_{0}(X_i)$ and $g_{1}(X_i)$. $\hat{g}_{0,-i}(X_i)$ solves \begin{align} \sum_{l\not=i, l \in N|X_l<c_1}[U_{IPW,l}^0(\bm{\alpha}^0)-A_l^0(\bm{\alpha}^0)] =0 \end{align} where \begin{align} \begin{split} U^0_{IPW,l}(\bm{\alpha^0}) &= D_l\left[(1-Z_l) \frac{D_l}{\hat{\pi}_l}K_{h_0}(X_l-X_i)V_{0l}^{-1}\bm{G}(X_l-X_i)\left[Y_l-\bm{G}(X_l-X_i)\bm{\alpha^0}\right]\right] \\ &\qquad + (1-D_l)\left[(1-Z_l)\frac{1-D_l}{1-\hat{\pi}_l}K_{h_0}(X_l-X_i)V_{0l}^{-1}\bm{G}(X_l-X_i)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left. \times\left[Y_l-\bm{G}(X_l-X_i)\bm{\alpha^0}\right]\right] \\ A^0_l(\bm{\alpha^0}) & = D_l\left[\left((1-Z_l)\frac{D_l}{\hat{\pi}_l}-1\right)K_{h_0}(X_l-X_i)V_{0l}^{-1}\bm{G}(X_l-X_i)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left. \times\left[\delta_0(X_l,W_l)-\bm{G}(X_l-X_i)\bm{\alpha^0}\right]\right]\\ & \qquad + (1-D_l)\left[\left((1-Z_l)\frac{1-D_l}{1-\hat{\pi}_l}-1\right)K_{h_0}(X_l-X_i)V_{0l}^{-1}\bm{G}(X_l-X_i) \right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \left. \times \left[\delta_0(X_l,W_l)-\bm{G}(X_l-X_i)\bm{\alpha^0}\right]\right] \end{split} \end{align} and $\hat{g}_{1,-i}(X_i)$ solves an analogous equation. The LSCV method chooses as the optimal bandwidth the one minimizing a function of $h$ denoted $LSCV_j(h)$, where $LSCV_0(h)$ and $LSCV_1(h)$ are respectively defined as \begin{align} LSCV_0(h) = \frac{1}{\sum_{i \in N|X_i<c_1}(1-Z_i)}\sum_{i \in N|X_i<c_1}(1-Z_i)(Y_i-\hat{g}_{0,-i}(X_i))^2 \end{align} and \begin{align} LSCV_1(h) = \frac{1}{\sum_{i \in N|X_i>c_0}Z_i}\sum_{i \in N|X_i>c_0}Z_i(Y_i-\hat{g}_{1,-i}(X_i))^2. \end{align} The optimal bandwidth is therefore defined as \begin{align} h_{j,opt}\equiv\mathop{\rm argmin}\limits_h LSCV_j(h). \end{align} See Li and Racine (2007) for the mathematical details of local linear cross validation. \section{Asymptotic Properties\label{sec:AP}} In this section, we describe the asymptotic properties of the estimator proposed in this paper, which can be investigated in a way similar to Wang et al. (2010). Throughout this section we assume the following: (I) $n\rightarrow \infty$, $h\rightarrow 0$, and $nh\rightarrow \infty$; (II) $x$ is in the interior of the support of $X$; (III) the regularity conditions: (i) $g(\cdot)$ and the density function of $X$, $f_X(\cdot)$, satisfy the smoothness assumptions of Fan et al. (1996); (ii) the right-hand sides of the estimating equations are twice continuously differentiable with respect to $\bm{\alpha}$ at a target point $x$, with uniformly bounded second derivatives.
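Before stating the asymptotic results, we note that the LSCV criterion of the previous subsection admits a direct, if brute-force, implementation. The sketch below reuses the hypothetical \texttt{solve\_dr\_g0} helper from the earlier sketch; all names are illustrative.
\begin{verbatim}
# Sketch: grid search for the LSCV_0-optimal bandwidth, applying the
# hypothetical solve_dr_g0 helper in leave-one-out form. This is a
# brute-force O(n^2) illustration, not an optimized implementation.
import numpy as np

def lscv_h0(h_grid, X, Y, Z, D, pi_hat, delta0_hat, c1):
    scores = []
    for h in h_grid:
        sq_err, m = 0.0, 0
        for i in np.flatnonzero((Z == 0) & (X < c1)):  # Y_0 observed
            mask = np.arange(len(X)) != i              # leave unit i out
            g0_i = solve_dr_g0(X[i], X[mask], Y[mask], Z[mask], D[mask],
                               pi_hat[mask], delta0_hat[mask], h, c1)
            sq_err += (Y[i] - g0_i) ** 2
            m += 1
        scores.append(sq_err / m)
    return h_grid[int(np.argmin(scores))]              # h_{0,opt}
\end{verbatim}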
The proposed doubly robust (DR) local linear estimator of $g_j(x)$ is the $\hat{g}_{j,DR}(x)$ solving equation (\ref{eq:pro0}) or (\ref{eq:pro1}), and its asymptotic limit is denoted by $\tilde{g}_{j,DR}(x)$. The proposed DR kernel estimating equations (\ref{eq:pro0}) and (\ref{eq:pro1}) have a sequence of solutions $(\hat{\alpha}^j_{0,DR}(x),\hat{\alpha}^j_{1,DR}(x))$ at $x$ that, as the sample size $n\rightarrow \infty$, converges in probability to a vector $(\tilde{\alpha}^j_{0,DR}(x), \tilde{\alpha}^j_{1,DR}(x))$, whose first component $\tilde{\alpha}^j_{0,DR}(x)$ is denoted by $\tilde{g}_{j,DR}(x)$. Here $\tilde{g}_{0,DR}(x)$ satisfies \begin{align} \begin{split} &E\left[(1-Z)\frac{D}{\tilde{\pi}}V_0^{-1}\{\tilde{g}_{0,DR}(x); \tilde{\zeta}_0\}\left[Y_0-\tilde{g}_{0,DR}(x)\right]|X=x\right]\\ &\quad+E\left[D\left((1-Z)\frac{D}{\tilde{\pi}}-1\right)V_0^{-1}\{\tilde{g}_{0,DR}(x); \tilde{\zeta}_0\}\left[\tilde{\delta}_0(X,W)-\tilde{g}_{0,DR}(x)\right]|X=x\right]\\ &\quad+E\left[(1-Z)\frac{1-D}{1-\tilde{\pi}}V_0^{-1}\{\tilde{g}_{0,DR}(x); \tilde{\zeta}_0\}\left[Y_0-\tilde{g}_{0,DR}(x)\right]|X=x\right]\\ &\quad+E\left[(1-D)\left((1-Z)\frac{1-D}{1-\tilde{\pi}}-1\right)V_0^{-1}\{\tilde{g}_{0,DR}(x); \tilde{\zeta}_0\}\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. \times\left[\tilde{\delta}_0(X,W)-\tilde{g}_{0,DR}(x)\right]|X=x\right]=0, \label{eq:con_DR0} \end{split} \end{align} and $\tilde{g}_{1,DR}(x)$ satisfies \begin{align} \begin{split} &E\left[Z\frac{D}{\tilde{\pi}}V_1^{-1}\{\tilde{g}_{1,DR}(x); \tilde{\zeta}_1\}\left[Y_1-\tilde{g}_{1,DR}(x)\right]|X=x\right]\\ &\quad+E\left[D\left(Z\frac{D}{\tilde{\pi}}-1\right)V_1^{-1}\{\tilde{g}_{1,DR}(x); \tilde{\zeta}_1\}\left[\tilde{\delta}_1(X,W)-\tilde{g}_{1,DR}(x)\right]|X=x\right]\\ &\quad+E\left[Z\frac{1-D}{1-\tilde{\pi}}V_1^{-1}\{\tilde{g}_{1,DR}(x); \tilde{\zeta}_1\}\left[Y_1-\tilde{g}_{1,DR}(x)\right]|X=x\right]\\ &\quad+E\left[(1-D)\left(Z\frac{1-D}{1-\tilde{\pi}}-1\right)V_1^{-1}\{\tilde{g}_{1,DR}(x); \tilde{\zeta}_1\}\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. \times\left[\tilde{\delta}_1(X,W)-\tilde{g}_{1,DR}(x)\right]|X=x\right]=0, \label{eq:con_DR1} \end{split} \end{align} where $\tilde{\pi}=\pi(X_i,W_i; \tilde{\gamma})$ with $\tilde{\gamma}$ the probability limit of $\hat{\gamma}$, and $\tilde{\delta}_j(X,W)=\delta_j(X,W; \tilde{\eta}_j)$ with $\tilde{\eta}_j$ the probability limit of $\hat{\eta}_j$. Theorem \ref{thm:con} establishes the consistency of the proposed estimator under certain conditions. \begin{thm} Under the ignorability assumption, the probability limit $\tilde{g}_{j,DR}(x)$ defined by equations (\ref{eq:con_DR0}) and (\ref{eq:con_DR1}) satisfies $\tilde{g}_{j,DR}(x)=g_{j}(x)$; that is, $\hat{g}_{j,DR}(x)$ is a consistent estimator of $g_{j}(x)$, when either of the following conditions is satisfied: \begin{enumerate} \renewcommand{\labelenumi}{(\alph{enumi})} \setlength{\leftskip}{0.3cm} \item The selection probability $\hat{\pi}_i$ in the DR estimating equation (\ref{eq:pro0}) is replaced by the true selection probability $\pi_i$, or by the estimate $\hat{\pi}_i=\pi(X_i,W_i; \hat{\gamma})$ with $\hat{\gamma}$ computed under a correctly specified model. \item The regression function $\delta_j(X,W)$ satisfies $\delta_j(X,W)=E(Y_j|X,W)$, or equals $\delta_j(X,W;\hat{\eta}_j)$ with $\hat{\eta}_j$ computed under a correctly specified model.
\end{enumerate} \label{thm:con} \end{thm} Theorem \ref{thm:con} states the double-robustness of the proposed estimator mentioned previously. The proof for $\hat{g}_{0,DR}$ is as follows. \begin{proof} Under the strong ignorability condition (\ref{eq:strig}) and the ignorability assumption (\ref{eq:ign}), equation (\ref{eq:con_DR0}) can be rewritten as \begin{align} \begin{split} &E\left[\left[Y_0-\tilde{g}_{0,DR}(x)\right]|X=x\right] +E\left[\left((1-Z)\frac{D}{\tilde{\pi}}-1\right)\left[Y_0-\tilde{\delta}_0(X,W)\right]|X=x\right]\\ &\quad+E\left[\left[Y_0-\tilde{g}_{0,DR}(x)\right]|X=x\right] +E\left[\left((1-Z)\frac{1-D}{1-\tilde{\pi}}-1\right)\right.\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. \times\left[Y_0-\tilde{\delta}_0(X,W)\right]|X=x\right]=0. \label{eq:con_DR2} \end{split} \end{align} When the true selection probability is known or the selection probability model (\ref{eq:model_ps}) is correctly specified, that is, $\tilde{\pi}=E(D|X,W)$, or the regression function (\ref{eq:model_reg}) is correctly specified, that is, $\tilde{\delta}_0(X,W)=E(Y_0|X,W)$, the second and fourth terms of equation (\ref{eq:con_DR2}) are 0. Hence equation (\ref{eq:con_DR0}) reduces to \begin{align} E[[Y_0-\tilde{g}_{0,DR}(x)]|X=x]=0. \end{align} Therefore $\tilde{g}_{0,DR}(x)=g_0(x)$; that is, $\hat{g}_{0,DR}$ is a consistent estimator of $g_0(x)$. \end{proof} The statement of Theorem \ref{thm:con} for $\hat{g}_{1,DR}$ can be proved in a similar way. Next we investigate the asymptotic distribution of the proposed estimator. Theorem \ref{thm:ad} gives its asymptotic bias and variance. \begin{thm} Assume that \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \setlength{\leftskip}{0.3cm} \item the selection probability $\hat{\pi}_i$ in estimating equation (\ref{eq:pro0}) is computed under model (\ref{eq:model_ps}) or replaced by fixed probabilities $\hat{\pi}^*_i=\hat{\pi}^*(X_i,W_i)$; \item the regression function $\delta_j(X,W)$ in estimating equations (\ref{eq:pro0}) and (\ref{eq:pro1}) is a known function or replaced by the function $\delta_j(X,W;\hat{\eta}_j)$ with $\hat{\eta}_j$ estimated on units with observed outcomes; \item $Pr(D=1|X,W)>a>0$ for some constant $a$ with probability $1$ in a neighborhood of $X=x$; \item the ignorability assumption (\ref{eq:ign}) and assumptions (I)-(III) hold. \end{enumerate} In addition to the above assumptions, consider the two conditions: \begin{enumerate} \renewcommand{\labelenumi}{(\Alph{enumi})} \setlength{\leftskip}{0.3cm} \item The selection probability $\hat{\pi}_i$ in the DR estimating equation (\ref{eq:pro0}) is replaced by the true selection probability $\pi_i$, or by the estimate $\hat{\pi}_i=\pi(X_i,W_i; \hat{\gamma})$ with $\hat{\gamma}$ computed under a correctly specified model. \item The regression function $\delta_j(X,W)$ is a known function, or is replaced by the function $\delta_j(X,W;\hat{\eta}_j)$ with $\hat{\eta}_j$ estimated on units with observed outcomes under a correctly specified model.
\end{enumerate} If at least one of (A) and (B) holds, though not necessarily both, then \begin{align} \sqrt{nh}\left\{\hat{g}_{0,DR}-g_0(x)-\frac{1}{2}h^2\{g_0(x)\}^{\prime\prime}c_2(K)+o(h^2)\right\}\longrightarrow N\left(0,W^0_{DR}(x)\right) \label{eq:ad} \end{align} where \begin{align} \begin{split} W^0_{DR}(x) = b_K(x) &E\left[ \left[ D \left\{ \frac{(1-Z)D}{\tilde{\pi}(X,W)}\left(Y_0-g_0(X)\right)-\left(\frac{(1-Z)D}{\tilde{\pi}(X,W)}-1\right)\right.\right.\right. \\ & \left.\left.\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \times \left(\tilde{\delta}_0(X,W)-g_0(X)\right)\right\} \right.\right. \\ &\quad+ \left.\left.(1-D) \left\{ \frac{(1-Z)(1-D)}{1-\tilde{\pi}(X,W)}\left(Y_0-g_0(X)\right)\right.\right.\right. \\ & \left.\left.\left.\qquad -\left(\frac{(1-Z)(1-D)}{1-\tilde{\pi}(X,W)}-1\right)\left(\tilde{\delta}_0(X,W)-g_0(X)\right) \right\} \right]^2|X=x\right] \end{split} \label{eq:av} \end{align} and \begin{align} \sqrt{nh}\left\{\hat{g}_{1,DR}-g_1(x)-\frac{1}{2}h^2\{g_1(x)\}^{\prime\prime}c_2(K)+o(h^2)\right\}\longrightarrow N\left(0,W^1_{DR}(x)\right) \label{eq:ad1} \end{align} where \begin{align} \begin{split} W^1_{DR}(x) = b_K(x) &E\left[ \left[ D \left\{ \frac{ZD}{\tilde{\pi}(X,W)}\left(Y_1-g_1(X)\right)-\left(\frac{ZD}{\tilde{\pi}(X,W)}-1\right)\right.\right.\right. \\ & \left.\left.\left.\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad \times \left(\tilde{\delta}_1(X,W)-g_1(X)\right)\right\} \right.\right. \\ &\quad+ \left.\left.(1-D) \left\{ \frac{Z(1-D)}{1-\tilde{\pi}(X,W)}\left(Y_1-g_1(X)\right)\right.\right.\right. \\ & \left.\left.\left.\qquad -\left(\frac{Z(1-D)}{1-\tilde{\pi}(X,W)}-1\right)\left(\tilde{\delta}_1(X,W)-g_1(X)\right) \right\} \right]^2|X=x\right] \end{split} \label{eq:av1} \end{align} where $f_X(x)$ is the density function of $X$, $b_K(x)\equiv \int K^2(s)ds/f_X(x)$ and $c_2(K)\equiv \int s^2K(s)ds$. \label{thm:ad} \end{thm} Theorem \ref{thm:ad} shows that the asymptotic bias of the proposed estimator is of order $O(h^2)$ and its variance of order $O(1/nh)$; in addition, these are independent of the working variance $V(\cdot)$ in the proposed DR kernel estimating equations (\ref{eq:pro0}) and (\ref{eq:pro1}). A proof of Theorem \ref{thm:ad} is provided in the Appendix. \section{Estimation of an optimal threshold} The principal aim of this study is to extend the conventional RD design, whose purpose is to evaluate the causal effect at the discontinuity point, so as to estimate the counterfactuals between two thresholds and thereby evaluate the causal effect at arbitrary points between them. In this section, we moreover propose estimating an optimal threshold in terms of cost effectiveness as an application of this study; our aim here is to support policy makers' decisions. In general, it is considered desirable to target as many subjects as possible if the special intervention yields better results. In practice, however, the special intervention costs more than the regular one, and the intervention providers (e.g., governments or companies) must bear the additional costs. For these reasons, they limit the subjects by setting uniform criteria, which is why the RD design is useful in many cases. Given this background, the question of where to set the threshold so as to maximize cost effectiveness is clearly one of the most important issues for practitioners.
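To preview the computation formalized in the equations that follow, once estimates $\hat{g}_0$, $\hat{g}_1$ and the density of $X$ are available, the threshold optimization below reduces to a one-dimensional search over $[c_0,c_1]$. The following is a minimal sketch; \texttt{g0\_hat}, \texttt{g1\_hat}, \texttt{mc} and \texttt{fX} are assumed callables, and the grid search is an illustrative choice.
\begin{verbatim}
# Sketch: grid search for the cost-effectiveness-optimal threshold on
# [c0, c1], implementing the objective defined below. g0_hat, g1_hat,
# mc (per-unit cost) and fX (density of X) are assumed callables.
import numpy as np

def optimal_threshold(g0_hat, g1_hat, mc, fX, c0, c1, n_grid=200):
    grid = np.linspace(c0, c1, n_grid)

    def objective(c):
        lo = grid[grid <= c]          # control region: integral of g0 * f
        hi = grid[grid > c]           # treated region: (g1 - MC) * f
        val = 0.0
        if len(lo) > 1:
            val += np.trapz([g0_hat(x) * fX(x) for x in lo], lo)
        if len(hi) > 1:
            val += np.trapz([(g1_hat(x) - mc(x)) * fX(x) for x in hi], hi)
        return val

    return max(grid, key=objective)   # hat-c_opt
\end{verbatim}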
Consider, for instance, a web marketing setting in which membership privileges or coupons are granted above a threshold: the companies providing such services bear the expense, and it is easy to imagine that they must limit the target customers for budgetary reasons. Setting the criterion so as to maximize the return on investment is then an important management challenge. In what follows, we describe how to optimize the threshold using the estimated counterfactuals. Note that the following discussion presupposes that outcomes and costs are measured in the same unit, typically money. There are indeed cases, especially in policy settings, where outcomes and costs are measured differently; this problem has been dealt with in another research area, namely cost-benefit analysis, and we regard it as beyond the scope of this research. We postulate that the optimal threshold can be estimated by maximizing, subject to the constraint $c_0\le c\le c_1$, the total benefit obtained in the treatment group and the control group minus the additional costs; that is, \begin{align} \begin{split} &\max_{c\in[c_0,c_1]}(E[Y_0|X<c]Pr(X<c)+E[Y_1|c<X]Pr(X>c)-m(c))\\ &\quad=\max_{c\in[c_0,c_1]}\left(\int^c_{-\infty} g_0(x)f_X(x)dx+\int_c^\infty g_1(x)f_X(x)dx-m(c) \right) \end{split} , \end{align} where $m(c)$ is a known function of the threshold $c$ representing the additional cost of treatment and $f_X(x)$ is the probability density function of $X$. The benefits obtained from $X<c_0$ and $X>c_1$ are constant for every $c\in [c_0, c_1]$; hence, practically, we need only maximize the total benefits and costs between the thresholds. The optimal threshold can therefore be defined as \begin{align} \begin{split} c_{opt}&\equiv \mathop{\rm argmax}\limits_{c\in[c_0,c_1]}(E[Y_0|c_0<X<c]Pr(c_0<X<c)+E[Y_1|c<X<c_1]Pr(c<X<c_1)-m(c))\\ & = \mathop{\rm argmax}\limits_{c\in[c_0,c_1]}\left(\int^c_{c_0} g_0(x)f_X(x)dx+\int_c^{c_1} g_1(x)f_X(x)dx-m(c) \right) \end{split} . \label{eq:copt_def} \end{align} Since the same intervention is performed for all treated subjects, it is reasonable to treat the additional cost per unit as not depending on the subject; the cost function can then be defined as \begin{align} m(c)\equiv \int_c^{c_1}MC(x) f_X(x)dx \end{align} where $MC(c)$ is the additional cost per unit when the threshold is set to $c$. Using this definition and equation (\ref{eq:ce}), the objective function of the optimization is \begin{align} \begin{split} E[Y_0&|X<c]Pr(X<c)+E[Y_1|X>c]Pr(X>c)-m(c)\\ &=\int_{c_0}^c g_0(x)f_X(x)dx + \int_c^{c_1}g_1(x)f_X(x)dx - \int_c^{c_1}MC(x)f_X(x)dx + \mathrm{const.}\\ &=\int_{c_0}^{c_1} g_0(x)f_X(x)dx - \int_c^{c_1}g_0(x)f_X(x)dx + \int_c^{c_1}\{g_1(x)-MC(x)\}f_X(x)dx + \mathrm{const.}\\ &=\int_{c_0}^{c_1} g_0(x)f_X(x)dx + \int_c^{c_1}\{\tau(x)-MC(x)\}f_X(x)dx + \mathrm{const.}, \end{split} \end{align} where the constant collects the benefits from $X<c_0$ and $X>c_1$, which do not depend on $c$. When the intervention provider is at the same time the beneficiary, as in the web marketing example above, and $\tau(x)-MC(x)<0$ for all $x\in [c_0,c_1]$, that is, the causal effect falls below the marginal cost throughout the interval, the objective function is monotonically increasing in $c$ and hence the optimal threshold is estimated as $c_{opt}=c_1$. This result means that at every point the additional benefit due to the treatment (i.e., the causal effect) is less than the additional cost, so the intervention does not pay off between the thresholds; it implies that the validity of the intervention itself might have to be reviewed from the viewpoint of cost effectiveness. The practical estimator of the optimal threshold is the $\hat{c}_{opt}$ solving equation (\ref{eq:copt_def}) with $g_j(x)$ replaced by the $\hat{g}_j(x)$ estimated by the method proposed in Section 3, and with either the true probability density function $f_X(x)$, if known as prior information, or an estimate $\hat{f}_X(x)$ obtained, for instance, by kernel density estimation. \section{Simulations} In this section, we describe the simulations conducted to investigate the finite-sample properties of the proposed estimator. We evaluate our proposed estimator by comparing it with the IPW local linear estimator and the naive local linear estimator. The IPW local linear estimator solves the first terms of equations (\ref{eq:pro0}) and (\ref{eq:pro1}), $\sum_i U^j_{IPW,i}=0$, using the data of either $D=0$ or $D=1$. The naive local linear estimator solves the equation obtained by setting $\pi$ to 1 in the IPW estimating equation. We generate 100 data sets and estimate on each under the following conditions, in order to evaluate the estimators from several viewpoints. First, to study the efficiency of the proposed estimator, we perform three types of estimation for each data set: the proposed estimator with all units, the IPW local linear estimator with either one of the groups (except in the estimation of the selection probabilities), and the naive local linear estimator with the complete cases; the model specifications of the IPW and proposed estimators are correct here. Next, to evaluate the robustness of the estimators, we compare the results in the following four cases: (i) the selection probability model in the IPW estimation is incorrect; (ii) the selection probability model in the proposed estimation is incorrect; (iii) the regression model in the proposed estimation is incorrect; (iv) both the models of $\pi$ and $\delta$ are incorrect. Finally, we examine the dependence on the distribution of the running variable by generating it from either a normal or a log-normal distribution. We evaluate the estimation results by the mean integrated squared error (MISE) restricted to the interval between the two thresholds, $\int_{c_0}^{c_1} \{ \hat{g}(x)-g(x)\}^2 f_X(x)dx$, averaged over the replications. We use the LSCV method of Section 3.2 to choose the bandwidth. In what follows, we describe the data generating process. The running variable $X$ is generated from a normal distribution with mean 4 and variance $\sigma^2=1.7^2$. We assume in this simulation that the covariates $\boldsymbol{W}$ other than $X$ are two-dimensional and correlated with $X$, so as to induce selection bias. Thus we generate $\boldsymbol{W}=(w_1,w_2)^T$ according to the model $\boldsymbol{W}=\boldsymbol{\eta_0}+\boldsymbol{\eta_1} X+\bm{\xi}$, where $\boldsymbol{\eta_0}=(-1.5, 2.4)^T$ and $\boldsymbol{\eta_1}=(0.6, 0.4)^T$ are $2 \times 1$ parameter vectors, and $\bm{\xi}=(\xi_1,\xi_2)^T$ is a disturbance term generated independently from a normal distribution with mean $0$ and $\sigma^2=4$.
For the data assignment probability $\pi_i$ we postulate the logit model \begin{align} logit(\pi_i )=\gamma_0+\gamma_1 X_i+ \gamma_2 w_{1i}+\gamma_3 w_{2i}, \label{eq:pi} \end{align} where $\gamma_0=0.8$, $\gamma_1=0.5$, $\gamma_2=2$ and $\gamma_3=-0.8$. The data assignment indicator $D_i$ is then sampled from a Bernoulli distribution with probability $\pi_i$. Given $D_i$, the treatment assignment $Z_i$ is determined by the function $Z_i=1\left(X_i>c_{d_i}\right)$, with lower threshold $c_0=2$ and upper threshold $c_1=6$. Finally, we generate the observed outcomes $Y_i=Z_iY_{1i}+(1-Z_i)Y_{0i}$, where \begin{align} Y_{ji}=\beta^j_0 + \beta^j_1 X_i+ \beta^j_2 X_i^2 + \beta^j_3 w_{1i} + \beta^j_4 w_{2i}+\varepsilon_{ji},~~~ \varepsilon_{ji} \overset{i.i.d.}{\sim} N(0,10^2) \label{eq:delta} \end{align} with $(\beta^0_0, \beta^0_1, \beta^0_2, \beta^0_3, \beta^0_4)=(0, 16, -1, 42, 36)$ and $(\beta^1_0, \beta^1_1, \beta^1_2, \beta^1_3, \beta^1_4)=(80, -2, 2, 40, 48)$. Figure \ref{fig:plot} shows a scatter plot of $(X,Y)$ from one of the generated data sets, with lines indicating $E(Y_j|X)=E_{W|X}(E(Y_j|X,W))$. \begin{figure}[htbp] \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=\linewidth]{0D_0.pdf} \end{center} \subcaption{} \label{fig:one} \end{minipage} \begin{minipage}{0.5\hsize} \begin{center} \includegraphics[width=\linewidth]{0D_1.pdf} \end{center} \subcaption{} \label{fig:two} \end{minipage}\\ \begin{center} \begin{minipage}{0.8\hsize} \begin{center} \includegraphics[width=0.9\linewidth]{0Dsample.pdf} \end{center} \subcaption{} \label{fig:three} \end{minipage} \end{center} \caption{Scatter plots of one of the generated data sets. The top-left panel is for $D=0$, the top-right panel for $D=1$, and the bottom panel shows the $(X, Y)$ scatter on the same plane. In each panel the upper black line indicates $E(Y_1|X)=E_{W|X}(E(Y_1|X,W))$ and the lower one $E(Y_0|X)=E_{W|X}(E(Y_0|X,W))$.} \label{fig:plot} \end{figure} In what follows we report the results for sample size $n=2000$. Figure \ref{fig:line} and Table \ref{tab:res1} show the results of the naive, IPW and DR local linear estimators of $g_0(x)$ and $g_1(x)$. Table \ref{tab:res1} summarizes their MISEs as the performance under correctly specified models. The naive local linear estimates have much larger MISEs than the IPW and AIPW local linear estimates for both $g_0(x)$ and $g_1(x)$, and the DR estimates have smaller MISEs than the IPW estimates. For instance, the DR local linear estimate achieves approximately a 59\% gain in MISE efficiency over the IPW estimate when estimating $E(Y_0|X)$. \begin{figure}[htbp] \begin{center} \includegraphics[width=8cm]{0Res.pdf} \end{center} \caption{Estimated nonparametric functions of $g_0(x)$ and $g_1(x)$ using the naive, IPW and DR estimation methods. The black solid lines are the true $g_0(x)$ and $g_1(x)$, the red dashed lines the DR estimates, the blue dash-dotted lines the IPW estimates, and the orange dotted lines the naive local linear estimates.
} \label{fig:line} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{c} \begin{minipage}[t]{0.45\hsize} \begin{center} \caption{MISEs of the naive, IPW and DR local linear estimates of $g_0(x)$ and $g_1(x)$ under correctly specified models} \begin{tabular}{lcc} \hline\hline \multicolumn{1}{l}{}& \multicolumn{2}{c}{MISE} \\ \multicolumn{1}{l}{}& \multicolumn{1}{c}{$g_0(x)$} & \multicolumn{1}{c}{$g_1(x)$}\\\hline Naive & 829.2 & 1060.5 \\ IPW & 314.8 & 740.2 \\ DR & 130.5 & 683.3 \\ \hline \end{tabular} \label{tab:res1} \end{center} \end{minipage} \begin{minipage}{0.02\hsize} \hspace{0.5mm} \end{minipage} \begin{minipage}[t]{0.5\hsize} \begin{center} \caption{MISEs of the IPW and DR local linear estimates of $g_0(x)$ using $\hat{\pi}$ and/or $\hat{\delta}$ computed under incorrectly specified models} \begin{tabular}{lc} \hline\hline & MISE \\\hline IPW($\pi$ wrong) & 818.3 \\ DR($\pi$ wrong) & 181.4 \\ DR($E(Y_0|X,W)$ wrong) & 546.2 \\ DR(both wrong) & 607.6 \\ \hline \end{tabular} \label{tab:res2} \end{center} \end{minipage} \end{tabular} \end{center} \end{table} Since the theory and the results in Table \ref{tab:res1} suggest that the results for $g_0(x)$ and $g_1(x)$ behave similarly, we focus on the estimation of $g_0(x)$ in the following simulations. Next consider the cases where $\pi$ and/or $\delta$ in the IPW and DR estimators are incorrectly specified, as described above. The incorrect model of $\pi$ is model (\ref{eq:pi}) without the $w_{1}$ term, and the incorrect model of $\delta$ is model (\ref{eq:delta}) without the squared term in $X$. Table \ref{tab:res2} shows the results with incorrectly specified models. The DR estimate with a misspecified $\pi$ has an MISE relatively close to that of the DR estimate with correct models, and it is better than the IPW estimate with a correct $\pi$. The DR estimate with a misspecified $\delta$ is not as good as the DR estimate with a misspecified $\pi$; its MISE is nevertheless better than those of the naive estimate and of the IPW estimate with an incorrect $\pi$, and naturally better than that of the DR estimate with both the models of $\pi$ and $\delta$ misspecified. \section{Discussion} In this paper we proposed a new framework for regression discontinuity designs to estimate the two conditional expectation functions of the potential outcomes between two thresholds, i.e., the counterfactuals, by using multiple groups with different thresholds. We considered how to estimate them in the two cases with and without selection bias: we showed that they can be estimated straightforwardly in the absence of selection bias but, in general, not in its presence using standard methods such as the naive local linear regression. In order to estimate them consistently while making the best use of the available data, we proposed a new estimator based on the AIPW kernel estimator under the ignorability assumption. We showed that the proposed estimator is doubly robust and can exploit the auxiliary covariate information even from subjects with missing outcomes. In finite samples, the proposed estimator is more efficient than the naive local linear estimator and the IPW kernel estimator, and exhibits the double-robustness property. One concern about this study is whether a regression model for the population given by a mixture of two data sets is meaningful even when the ignorability (\ref{eq:ign}) is assumed.
If we wish to make inferences for a more general population, this is possible when covariates including the running variable are obtained from that population and we can assume that the group to which a subject belongs is determined by the covariates. In addition, in this paper we chose nonparametric regression to estimate the conditional expectation functions for several reasons, although parametric regressions are also used in many empirical RD designs. If parametric conditional expectation functions are postulated, the counterfactuals can be estimated at any point of the running variable by extrapolation with the estimated parameters, even beyond the observation range, without using the proposed estimation method. There is room for further development of this research. First of all, the proposed method should be applied to real data to confirm its usefulness in empirical settings. On the theoretical side, this paper has focused on a restricted case in several respects: we considered only the case with two groups, but our method can be extended to three or more groups, and another important topic for future study is the extension from the sharp to the fuzzy RD design. \newpage
\section{Introduction} Prediction of human motion is key for the safe navigation of autonomous robots among humans in cluttered environments. Autonomous robots, such as service robots or autonomous cars, must therefore be capable of reasoning about the intentions of pedestrians in order to accurately forecast their motions. Such abilities allow planning algorithms to generate safe and socially compliant motion plans. Human motions are inherently uncertain and multi-modal \cite{kothari2020human}. The uncertainty is caused by partial observation of the pedestrians' states and their stochastic dynamics. The multimodality is due to interaction effects between the pedestrians and the static environment, and to the non-convexity of the problem. For instance, as Fig. \ref{fig:multiple_path_example} shows, a pedestrian can decide either to avoid a static obstacle or to engage in a non-verbal joint collision-avoidance maneuver with an upcoming pedestrian, passing on the right or on the left. Hence, accurately predicting human motions requires inference models that provide multi-modal predictions. A large number of prediction models have been proposed. Some of these approaches only predict the mean behavior of the agents \cite{Pfeiffer2017}. Others model uncertainty with techniques such as ensemble modeling \cite{lotjens2019safe} or dropout during inference \cite{gal2016dropout}, or learn a generative model and produce several trajectories by sampling randomly from the latent space \cite{Gupta2018}. Recently, Generative Adversarial Networks (GANs) have been employed for multi-modal trajectory prediction by randomly sampling the latent space to generate diverse trajectories \cite{amirian2019social}. Nevertheless, these methods have two main drawbacks. First, GANs are difficult to train and may fail to converge. Second, they require a large number of samples to achieve good prediction performance, which is impracticable for real-time motion planning. Moreover, these approaches assume a prior that is independent across timesteps, ignoring the time dependencies inherent in trajectory prediction problems. \newpage \begin{wrapfigure}{r}{.5\textwidth} \centering \includegraphics[width=.5\textwidth]{img/multiple_path.pdf} \caption{Illustration of a scenario where there are multiple ways in which two pedestrians can avoid a collision. We present a method that, given the same observed past, predicts multiple socially acceptable trajectories in crowded scenes.}\label{fig:multiple_path_example} \end{wrapfigure} The objective of this work is to develop a prediction model suitable for interaction-aware autonomous navigation. We address these limitations with a novel generative model for multi-modal trajectory prediction based on Variational Recurrent Neural Networks (VRNNs) \cite{vrnn}, treating multi-modal trajectory prediction as modeling the joint probability distribution over sequences. This paper's main contribution is a new interaction-aware variational recurrent neural network (Social-VRNN) design for one-shot multi-modal trajectory prediction. By following a variational approach, our model achieves faster convergence in comparison with GAN-based approaches. Moreover, employing a time-dependent prior over the latent space enables our model to achieve state-of-the-art performance and to generate diverse trajectories with a single network query. Furthermore, we propose a training strategy to learn more diverse trajectories in an interpretable fashion.
Finally, we present experimental results demonstrating that our method outperforms state-of-the-art methods on both simulated and real datasets using one-shot predictions. \input{sec/02-relworks.tex} \input{sec/04-method.tex} \input{sec/05-exp.tex} \section{CONCLUSION}\label{sec:conclusion} In this paper, we introduced a Variational Recurrent Neural Network (VRNN) architecture for one-shot multi-modal trajectory prediction that accounts for the pedestrian dynamics, the interactions among pedestrians, and static obstacles. Building on a variational approach and learning a Gaussian mixture output model enables our model to generate distinct trajectories that account for the static obstacles and the surrounding pedestrians. Our approach improves on the state-of-the-art prediction performance in scenarios with a large number of agents (e.g., the Univ dataset) or containing static obstacles (e.g., the Hotel dataset) from a single prediction shot. Furthermore, the proposed approach significantly reduces the number of samples needed to achieve accurate predictions. As future work, we aim to integrate the proposed method with a real-time motion planner on a mobile platform for autonomous navigation among pedestrians. \clearpage \section{Variational Recurrent Neural Network}\label{sec:approach} In this section, we present our Variational Recurrent Neural Network (VRNN) for multi-modal trajectory prediction, depicted in Fig. \ref{fig:network_architecture}. Our model first employs a feature extractor module (Section \ref{sec:inputs}) to create a joint representation of three input channels: the pedestrian dynamics, the static environment and the other interacting pedestrians. The probabilistic inference module (Section \ref{sec:stochectic_module}) then learns the complex multi-modal distribution conditioned on the previous timesteps. Finally, the probability output module (Section \ref{sec:output_model}) applies a Gaussian Mixture Model (GMM) as the output probability model, enabling one-shot multi-modal trajectory predictions. We start with the problem formulation of multi-modal trajectory prediction. In Section \ref{sec:sampling}, we introduce a training strategy that allows the model to generate distinct modal trajectories by incorporating the social and environment context of the \textit{query-agent}. Finally, we define the loss function and explain the training procedure in Section \ref{sec:train}. \subsection{Multi-modal Trajectory Prediction Problem Formulation}\label{sec:problem} Consider a navigation scenario with $n$ interacting agents (pedestrians) navigating on a plane $\pazocal{W} = \mathbb{R}^2$. The dataset $\mathbf{D}$ contains, for each pedestrian $i \in \{1, \dots, n\}$, a trajectory $\tau^i_{1:N}=\{(\mathbf{p}_1^i,\mathbf{v}_1^i),\dots,(\mathbf{p}_N^i,\mathbf{v}_N^i)\}$ with $N$ time steps and the corresponding surrounding static environment $\pazocal{O}^{i}_{\textrm{env}} \subset \pazocal{W}$. $\mathbf{v}^{i}_t = \{v^i_{x,t},v^i_{y,t} \}$ is the velocity and $\textbf{p}^{i}_{t} = \{p^i_{x,t},p^i_{y,t} \}$ the position of the $i$-th pedestrian at time $t$ in the world frame.
Without loss of generality, $t = 0$ indicates the current time and $t = -1$ the previous time step. $\mathbf{v}^{i}_{1:T_H}=(\mathbf{v}^i_{1},\dots, \mathbf{v}^i_{T_H})$ represents the future pedestrian velocities over a prediction horizon $T_H$ and $\mathbf{v}^{i}_{-T_O:0}$ the pedestrian past velocities within an observation time $T_O$. Throughout this paper, the superscript $i$ denotes the \emph{query-agent}, i.e., the agent whose future motion we want to predict, and $-i$ the collection of all the other agents. Bold symbols represent vectors, and the non-bold $x$ and $y$ subscripts refer to the x and y directions in the world frame. $\mathbf{x}^{i}_0=\{\mathbf{v}^{i}_{-T_O:0},\mathbf{p}^{-i}_{0},\pazocal{O}^{i}_{\textrm{env}}\}$ represents the \textit{query-agent} current state information. To account for the uncertainty and multimodality of the $i$-th pedestrian's motion, we seek a probabilistic model $f(\theta)$ with parameters $\theta$ over a set of $M$ different trajectories, $m \in \{1,\dots,M\}$: \begin{equation} p(\mathbf{v}^{i,m}_{1:T_H }| \mathbf{x}^{i}_0) = f( \mathbf{x}^{i}_0,\theta) \end{equation} where $m$ is the trajectory index. The probability is conditioned on the other agents' states and the surrounding environment to model the interaction and environment constraints. \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{img/multimodal_network_vrnn.pdf} \caption{VRNN architecture for multi-modal trajectory prediction, composed of an input feature extraction module, a probabilistic inference module, and an output probability module. The first creates a joint representation of the input data $\mathbf{y}^i_{}=\{\mathbf{y}^{i}_{\mathbf{v}},\mathbf{y}^{i}_{\textrm{env}},\mathbf{y}^{-i}_{}\}$. The probabilistic inference module (Section \ref{sec:stochectic_module}) is based on the VRNN \cite{vrnn} and incorporates: an encoder network to approximate a time-dependent posterior distribution $q(\mathbf{z}_{0}|\mathbf{x}_{\le 0},\mathbf{z}_{<0}) \sim \mathcal{N}(\mathbf{\mu}_{\mathbf{z},0},\textrm{diag}(\mathbf{\sigma}^2_{\mathbf{z},0}))$ with $[\mathbf{\mu}_{\mathbf{z},0},\mathbf{\sigma}_{\mathbf{z},0}]=\psi^{\textrm{enc}}(\psi^{\mathbf{x}}(\mathbf{x}_0),\mathbf{h}_{-1},\theta_{q})$, where $\theta_{q}$ are the approximate posterior model parameters; a decoder network to model the conditional generation distribution $\mathbf{\mathbf{v}}_{k} | \mathbf{x}_{0}, \mathbf{z}_{0} \sim \mathcal{N}(\mathbf{\mu}_{\mathbf{v},k},\textrm{diag}(\mathbf{\sigma}^2_{\mathbf{v},k}))$ with $ [\mathbf{\mu}_{\mathbf{v},1:T_H},\mathbf{\sigma}_{\mathbf{v},1:T_H}]=\psi^{\textrm{dec}}(\psi^{\mathbf{z}}(\mathbf{z}_0),\psi^{\mathbf{x}}_t(\mathbf{x}_0),\mathbf{h}_{-1},\theta_{\mathrm{dec}})$, where $\theta_{\mathrm{dec}}$ are the inference model parameters; and a prior on the latent random variable $\mathbf{z} \sim \mathcal{N}(\mathbf{\mu}_{\textrm{prior},0},\textrm{diag}(\mathbf{\sigma}^2_{\textrm{prior},0}))$ conditioned on the hidden state of the decoder network, $[\mathbf{\mu}_{\textrm{prior},0},\mathbf{\sigma}_{\textrm{prior},0}]=\psi^{\textrm{prior}}(\mathbf{h}_{-1},\theta_{\textrm{prior}})$, with parameters $\theta_{\mathrm{prior}}$. Finally, the output probability module is a GMM (Section \ref{sec:output_model}). }% \label{fig:network_architecture}% \end{figure*} \subsection{Input feature extraction module}\label{sec:inputs} This module creates a joint representation of three sources of information: the query-agent state, the environment context, and the social context.
The first input is a sequence of $T_{O}$ past velocities $\mathbf{v}^i_{-T_O:0}$ of the query-agent. The second input is a local occupancy grid $\mathbf{O}^{i}_{\textrm{env}}$ of width $D_x$ and height $D_y$, centered at the query-agent and containing information about the static obstacles (environment context). Here, we use the global map provided with the publicly available datasets \cite{pellegrini2009you,lerner2007crowds}. In a real scenario, the map information can be obtained by building a map offline \cite{zaman2011ros} or a local map online \cite{online_slam} using onboard sensors such as lidar. Due to its high dimensionality, a convolutional neural network (CNN) is used to obtain a compressed representation of this occupancy map while maintaining the spatial context. The encoder parameters are obtained by pre-training an encoder-decoder structure to minimize $\mathcal{L}_{\textrm{env}}=\sum^{D_x}_{u=1}\sum^{D_y}_{w=1}(\hat{\mathbf{O}}^{i}_{\textrm{env}}(u,w)-\mathbf{O}^{i}_{\textrm{env}}(u,w))^2$, as proposed in \cite{Pfeiffer2017}. In addition, an LSTM layer is added to the first two input channels, modeling the existing time dependencies. The third input provides information about the interaction among the pedestrians, containing their relative dynamics and spatial configuration. More specifically, it is a vector $\mathbf{O}^{-i}_0=[\mathbf{p}_0^{-1}-\mathbf{p}_0^i,\mathbf{v}_0^{-1}-\mathbf{v}_0^i, \dots, \mathbf{p}_0^{-n}-\mathbf{p}_0^i,\mathbf{v}_0^{-n}-\mathbf{v}_0^i]$ with the positions and velocities of the surrounding pedestrians relative to the query-agent. This input vector is then fed into an LSTM, allowing the model to create a fixed-size representation of the query-agent's social context and to handle a variable number of surrounding pedestrians. Finally, the outputs of each channel are concatenated, creating a compressed and time-dependent representation of the input data $\mathbf{y}^i=\{\mathbf{y}^{i}_{\mathbf{v}},\mathbf{y}^{i}_{\textrm{env}},\mathbf{y}^{-i}\}$. Note that we only use past information about the query-agent velocities; for the other inputs, only the current information is used.
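To make the social-context channel concrete, the following minimal \texttt{numpy} sketch shows how the relative-state vector $\mathbf{O}^{-i}_0$ can be assembled before being fed to the LSTM; the function and variable names are illustrative and not part of our implementation.
\begin{verbatim}
import numpy as np

def social_context(p_query, v_query, p_others, v_others):
    """Stack relative positions/velocities of the surrounding pedestrians.

    p_query, v_query: (2,) arrays; p_others, v_others: (n-1, 2) arrays.
    Returns an (n-1, 4) sequence, one row per neighbour, which an LSTM
    can consume to build a fixed-size social-context representation.
    """
    rel_p = p_others - p_query   # positions relative to the query-agent
    rel_v = v_others - v_query   # velocities relative to the query-agent
    return np.concatenate([rel_p, rel_v], axis=1)

# toy usage: three neighbours around a query-agent moving along x
ctx = social_context(np.zeros(2), np.array([1.0, 0.0]),
                     np.random.randn(3, 2), np.random.randn(3, 2))
print(ctx.shape)  # (3, 4)
\end{verbatim}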
\subsection{Probabilistic Inference Module}\label{sec:stochectic_module} The probabilistic inference module is based on the structure of the VRNN, as depicted in Fig. \ref{fig:network_architecture}. It contains three main components: a prior model, an encoder model, and a decoder model. We use a fully connected layer (FCL) with ReLU activation for the encoder model $\psi^{\textrm{enc}}$, the feature extractors of the joint input $\psi^{\mathbf{x}}$ and of the latent random variables $\psi^{\mathbf{z}}$, and the representation of the prior distribution $\psi^{\textrm{prior}}$. $\{\theta_{\textrm{enc}},\theta_{\mathbf{x}},\theta_{\mathbf{z}},\theta_{\textrm{prior}}\}$ are the network parameters of $\{\psi^{\textrm{enc}},\psi^{\mathbf{x}},\psi^{\mathbf{z}},\psi^{\textrm{prior}}\}$, respectively. The output vectors $\{\psi^{\textrm{enc}},\psi^{\textrm{prior}}\}$ are then used to model the approximate posterior and prior distributions. We split each output vector into two parts to model the mean and the variance, as represented in Fig. \ref{fig:network_architecture}, and apply the following transformations to ensure a valid predicted distribution: $[\mu_{\mathrm{prior}},\mu_{\mathbf{z}}] = [\psi^{\textrm{prior}}_{1:w_{\textrm{prior}}} , \psi^{\textrm{enc}}_{1:w_{\mathbf{z}}}]$ and $[\sigma_{\textrm{prior}},\sigma_{\mathbf{z}}] = [\exp(\psi^{\textrm{prior}}_{w_{\textrm{prior}}+1:2w_{\textrm{prior}}}),\exp(\psi^{\textrm{enc}}_{w_{\mathbf{z}}+1:2w_{\mathbf{z}}})]$. $2w_{\textrm{prior}}$ and $2w_{\mathbf{z}}$ are the output vector sizes for the prior and the latent random variable, respectively. The exponential ensures that the standard deviation is always positive. Furthermore, we employ an LSTM layer as the RNN model, propagating the hidden state for the prior model and encoding the time dependencies for the generative model.
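The mean/variance split and the exponential transformation can be summarized by the following minimal \texttt{numpy} sketch (illustrative names, not our implementation):
\begin{verbatim}
import numpy as np

def split_gaussian_params(psi_out):
    """Split a raw network output of size 2w into (mu, sigma).

    The exponential guarantees sigma > 0, matching the transformation
    applied to the outputs of psi^enc and psi^prior in the text.
    """
    w = psi_out.shape[-1] // 2
    mu = psi_out[..., :w]
    sigma = np.exp(psi_out[..., w:])
    return mu, sigma

mu, sigma = split_gaussian_params(np.random.randn(2 * 128))
assert (sigma > 0).all()
\end{verbatim}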
In contrast to \cite{vrnn}, our generation model conditionally depends on the previous inputs: \begin{equation} \begin{split} &\mathbf{\mathbf{v}}_{k} | \mathbf{x}_{0},\mathbf{z}_{0} \sim \mathcal{N}(\mathbf{\mu}_{\mathbf{v},k},\textrm{diag}(\mathbf{\sigma}^2_{\mathbf{v},k})) \\ &[\mathbf{\mu}_{\mathbf{v},k},\mathbf{\sigma}_{\mathbf{v},k}]=\psi^{\textrm{dec}}(\psi^{\mathbf{z}}(\mathbf{z}_0),\psi^{\mathbf{x}}(\mathbf{y}^i_0),\mathbf{h}_{-1}) \\ \end{split} \end{equation} Lastly, the decoder model consists of two FC layers, with ELU \cite{elu} and linear activation, directly connected to the output of the LSTM network. Our model outputs all $T_H$ prediction steps in one shot, considering the compressed and time-dependent input representation $\mathbf{y}^i_0$. \subsection{Multi-modal Trajectory Prediction Distribution}\label{sec:output_model} To predict one-shot multi-modal trajectories, we model the output of our network as a Gaussian Mixture Model (GMM), similar to \cite{bishop1994mixture} and \cite{graves2013generating}, with $M>1$ modes accounting for the multimodality of the pedestrian's motion. For each mode $m \in \{1, \dots, M\}$, we predict a sequence of future pedestrian velocities $\mathbf{v}^{i,m}_{1:T_H}$ represented by a bivariate Gaussian $\mathbf{v}^{i,m}_k \sim \mathcal{N}(\mu^{i,m}_{x,k},\mu^{i,m}_{y,k},\sigma^{i,m}_{x,k},\sigma^{i,m}_{y,k}), k = 1, 2, \dots, T_H$, capturing its motion uncertainty. Consequently, a modal trajectory is defined as a sequence of independent bivariate Gaussians of length $T_H$. The $M$ modes represent a set of $M$ possible trajectories, resulting in the following probabilistic model \begin{equation} p(\mathbf{v}^{i}_{k}|\mathbf{x}_0,\mathbf{z}_{0},\mathbf{h}_{0},\mathbf{\theta}) = \sum_{m=1}^M \pi_m p_G(\mathbf{v}^{i}_{k};\mathbf{\mu}^{i,m}_{k},\mathbf{\sigma}^{i,m}_{k}) \end{equation} where $p_G$ is the probability density function of a multivariate Gaussian distribution, $\mathbf{\theta}=\{\theta_{\textrm{enc}},\theta_{\textrm{dec}},\theta_{\mathbf{x}},\theta_{\mathbf{z}},\theta_{\textrm{prior}}\}$ are the model parameters, and $\mathbf{\mu}^{i,m}_{k}=[\mathbf{\mu}^{i,m}_{x,k},\mathbf{\mu}^{i,m}_{y,k}]$ and $\mathbf{\sigma}^{i,m}_{k}=[\mathbf{\sigma}^{i,m}_{x,k},\mathbf{\sigma}^{i,m}_{y,k}]$ are the mean and standard deviation of the predicted velocity vectors for the $m$-th predicted trajectory with likelihood $\pi_m$ at time step $k$, respectively. The transformations described in Section \ref{sec:stochectic_module} and Fig. \ref{fig:network_architecture} are applied to the network outputs $\psi^{\textrm{dec}}$ to ensure a valid distribution parametrization.
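For illustration, a minimal \texttt{numpy} sketch of the per-time-step GMM likelihood with diagonal covariances follows; it is a simplified stand-in for the network output head, with illustrative names.
\begin{verbatim}
import numpy as np

def gmm_log_likelihood(v, pi, mu, sigma):
    """Log-likelihood of a 2-D velocity v under a diagonal GMM.

    pi: (M,) mixture weights summing to one; mu, sigma: (M, 2) per-mode
    mean/std, i.e. one time step of the M predicted modal trajectories.
    """
    quad = ((v - mu) / sigma) ** 2                 # (M, 2)
    log_comp = (-0.5 * quad.sum(axis=1)
                - np.log(2 * np.pi * sigma[:, 0] * sigma[:, 1]))
    return np.log(np.sum(pi * np.exp(log_comp)))   # log sum_m pi_m N(v;.)

M = 3
pi = np.full(M, 1.0 / M)
mu, sigma = np.random.randn(M, 2), np.ones((M, 2))
print(gmm_log_likelihood(np.zeros(2), pi, mu, sigma))
\end{verbatim}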
\subsection{Improving Diversity}\label{sec:sampling} Generative models have the key advantage of allowing one to perform inference by randomly sampling the latent random variable $\mathbf{z}$ from some prior distribution. Here, we propose a strategy to induce our model to learn a more ``diverse'' distribution of trajectories in an interpretable fashion, similar to \cite{rhinehart2018r2p2}. Our VRNN models a generative distribution conditionally dependent on the input representation vector $\mathbf{y}^i$, which is composed of three sub-vectors $\{\mathbf{y}^{i}_{\mathbf{v}},\mathbf{y}^{i}_{\textrm{env}},\mathbf{y}^{-i}\}$. Let us now assume that each input vector is a random variable with the following distribution: \noindent\begin{minipage}{.3\linewidth} \begin{equation} \mathbf{y}^{i}_{\mathbf{v}} \sim \mathcal{N}(\mathbf{y}^{i}_{0,\mathbf{v}},\sigma_{\mathbf{v}}) \end{equation} \end{minipage} \noindent\begin{minipage}{.3\linewidth} \begin{equation} \mathbf{y}^{i}_{\textrm{env}} \sim \mathcal{N}(\mathbf{y}^{i}_{0,\textrm{env}},\sigma_{\textrm{env}}) \end{equation} \end{minipage} \noindent\begin{minipage}{.3\linewidth} \begin{equation} \mathbf{y}^{-i}_{} \sim \mathcal{N}(\mathbf{y}^{-i}_{0},\sigma_{-i}) \end{equation} \end{minipage} where $\{\mathbf{y}^{i}_{\mathbf{v}},\mathbf{y}^{i}_{\textrm{env}},\mathbf{y}^{-i}_{}\}$ are random variables representing the variability of the agent state, the environment context, and the surrounding-agents context, respectively. $\{\sigma_{\mathbf{v}},\sigma_{\textrm{env}},\sigma_{-i}\}$ are the variances of each input channel and are treated as hyperparameters of our model. Hence, by sampling from these input distributions, we can condition the generative distribution according to the uncertainty on the pedestrian state or the environment context and generate different trajectories $\tilde{\mathbf{v}}^i_{1:T_H}$ by varying the pedestrian conditions. Then, we introduce a loss function which encourages our model to cover the generated trajectories through the following cross-entropy term: \begin{equation} \mathcal{L}_{div}=-\sum_{m=1}^M\sum_{k=1}^{T_H} \mathbb{E}_{}[\log p_G(\mathbf{\tilde{v}}_{m,k}|\mathbf{x}_{0},\mathbf{z}_{0})] \end{equation} where $\mathbf{\tilde{v}}_{m,k}$ is a velocity sample at time step $k$ from the $m$-th sampled trajectory.
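A minimal \texttt{numpy} sketch of this input-perturbation step follows; the channel variances mirror the hyperparameters used later in the experiments (e.g., $\{0.2, 0.2, 0\}$), and all names are illustrative.
\begin{verbatim}
import numpy as np

def perturb_inputs(y_v, y_env, y_soc, sig_v, sig_env, sig_soc, rng):
    """Draw one perturbed input representation ~ N(y_0, sigma).

    Each feature sub-vector is resampled around its nominal value with
    a channel-specific standard deviation; sampling repeatedly yields
    the varied conditions used to generate diverse trajectories.
    """
    return (y_v + sig_v * rng.standard_normal(y_v.shape),
            y_env + sig_env * rng.standard_normal(y_env.shape),
            y_soc + sig_soc * rng.standard_normal(y_soc.shape))

rng = np.random.default_rng(0)
samples = [perturb_inputs(np.zeros(64), np.zeros(64), np.zeros(64),
                          0.2, 0.2, 0.0, rng) for _ in range(3)]
\end{verbatim}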
\subsection{Training Procedure}\label{sec:train} The model is trained end-to-end, except for the CNN, which is pre-trained. We train it using backpropagation through time (BPTT) with a fixed truncation depth $t_{\textrm{trunc}}$. Furthermore, we apply the reparametrization trick \cite{kingma2013auto} to obtain a continuously differentiable sampler and train the network using backpropagation. We learn the data distribution by minimizing a timestep-wise variational lower bound with an annealed KL-divergence as loss function \cite{bowman2015generating}: \begin{subequations} \label{eq:recosntruction} \begin{align} \mathcal{L} = \mathcal{L}_{m} + \lambda \,(\mathcal{L}_{\textit{KL}}+\mathcal{L}_{div}) \\ \mathcal{L}_{m}= -\sum_{m=1}^M\sum_{k=1}^{T_H}\mathbb{E}_{\mathbf{x}_0 \sim \mathbf{D}}[\log \pi_m p_G(\mathbf{v}_{k}|\mathbf{z}_{0},\mathbf{x}_{0})] \\ \mathcal{L}_{\textit{KL}}(\mathbf{z}_0|\mathbf{x}_{\le 0},\mathbf{z}_{<0}) = \text{KL}(q(\mathbf{z}_0|\mathbf{x}_{\le 0},\mathbf{z}_{<0})\,||\,p_G(\mathbf{z}_0|\mathbf{x}_{< 0},\mathbf{z}_{<0})) \end{align} \end{subequations} where $\lambda$ is the annealing coefficient. The first term represents the reconstruction loss (Eq. \ref{eq:recosntruction}b) and the second the KL-divergence between the approximated posterior $q(\textbf{z}_0|\textbf{x}_{\le 0},\textbf{z}_{<0})$ (Eq. \ref{eq:recosntruction}c) and the prior distribution of $\mathbf{z}$. Here, the prior over the latent random variable $\mathbf{z}$ is chosen to be a simple Gaussian distribution with mean and variance $[\mathbf{\mu}_{\textrm{prior},0},\mathbf{\sigma}_{\textrm{prior},0}]=\psi^{\textrm{prior}}(\mathbf{h}_{-1})$ depending on the previous hidden state. During training, we aim to find the model parameters which minimize the loss function presented in Eq. \ref{eq:recosntruction}a. The annealing coefficient allows the model first to learn the parameters that fit the data well and, later in the training phase, to match the prior distribution and improve the diversity of the predicted trajectories.
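The following minimal \texttt{numpy} sketch summarizes the annealed objective and the reparametrization trick; the closed-form KL between diagonal Gaussians and the $\tanh$ schedule from our experimental settings are shown for illustration, with all names being illustrative.
\begin{verbatim}
import numpy as np

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sig_p ** 2)
                  - 0.5)

def annealing(step):
    """KL annealing coefficient lambda = tanh((step - 1e4) / 1e3)."""
    return np.tanh((step - 1e4) / 1e3)

def total_loss(recon_nll, kl, l_div, step):
    """L = L_m + lambda * (L_KL + L_div)."""
    return recon_nll + annealing(step) * (kl + l_div)

# reparametrization trick: z = mu + sigma * eps keeps sampling
# differentiable with respect to (mu, sigma)
mu_z, sig_z = np.zeros(8), np.ones(8)
z = mu_z + sig_z * np.random.standard_normal(8)
print(total_loss(1.3, kl_diag_gaussians(mu_z, sig_z, mu_z, sig_z),
                 0.2, 12000))
\end{verbatim}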
\section{Experiments}\label{sec:experiments} In this section, we present the results of our generative model on simulated and real data, including a qualitative analysis and a quantitative comparison of our method against state-of-the-art baselines. \subsection{Metrics} To evaluate the performance of our model against the proposed baselines, we use the following evaluation metrics: the average displacement error (ADE) and the final displacement error (FDE). Both metrics assess the prediction accuracy. For the models outputting probability distributions, the mean values are used to compute the ADE and FDE metrics. For the multi-modal distributions, we use the trajectory with the minimum error, as in \cite{amirian2019social}.
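For concreteness, a minimal \texttt{numpy} sketch of the two metrics and the minimum-error selection over modes follows (illustrative names only):
\begin{verbatim}
import numpy as np

def ade_fde(pred, gt):
    """ADE/FDE of one predicted trajectory vs. ground truth, both (T, 2)."""
    err = np.linalg.norm(pred - gt, axis=1)
    return err.mean(), err[-1]

def best_of_m(preds, gt):
    """Score a multi-modal prediction by its minimum-ADE mode,
    as done for the stochastic baselines; preds has shape (M, T, 2)."""
    scores = [ade_fde(p, gt) for p in preds]
    return min(scores, key=lambda s: s[0])

gt = np.cumsum(np.full((12, 2), 0.1), axis=0)        # straight-line walk
preds = gt[None] + 0.05 * np.random.randn(3, 12, 2)  # M = 3 modes
print(best_of_m(preds, gt))
\end{verbatim}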
\subsection{Experimental Settings} \vspace{-0.5mm} We trained our model using RMSProp \cite{tieleman2012rmsprop}, which is known to perform well in non-stationary problems, with an initial learning rate $\alpha=10^{-4}$ exponentially decaying at a rate of 0.9 and a mini-batch size of 16. We used a KL annealing coefficient $\lambda=\tanh(\frac{\textrm{step}-10^4}{10^3})$, with $\textrm{step}$ as the training step. We set the diversity weight $\beta$ to 0.2 and $\{\sigma_{\mathbf{v}},\sigma_{\textrm{env}},\sigma_{-i}\}=\{0.2,0.2,0\}$. Additionally, to avoid gradient explosion, we clip the gradients to 1.0. We trained and evaluated our model for different prior, latent random variable, and input feature vector sizes. The configuration achieving the lowest validation error was $\{128,128,512\}$ for the prior, latent random variable, and input feature vector size, respectively. Moreover, we use $M=3$ mixture components for the models using a GMM as the output function. We set $T_H=12$ prediction steps, corresponding to 4.8\,s of prediction horizon, and $T_O=8$, as used in previous methods \cite{amirian2019social,Sadeghian}. The models were implemented using Tensorflow \cite{tensorflow2015} and were trained on an NVIDIA GeForce GTX 980, requiring $2\times10^4$ training steps, or approximately two hours. The simulation datasets were obtained with the open-source ROS implementation of the Social Forces model \cite{Helbing1995}. Our VRNN implementation will be released as open source. \subsection{Performance evaluation} We compared our model with the following state-of-the-art prediction baselines: \begin{itemize} \item \textit{LSTM-D} \cite{Pfeiffer2017}: A deterministic interaction-aware model, incorporating the interaction between the agents and the static obstacles. \item \textit{SoPhie} \cite{Sadeghian}: A GAN model implementing a social and physical attention mechanism. \item \textit{Social-Ways} (S-Ways) \cite{amirian2019social}: The state-of-the-art GAN-based method for multi-modal trajectory prediction. \item \textit{STORN} \cite{bayer2014learning}: Our VRNN model with a time-independent prior, i.e., a Gaussian distribution with zero mean and unit variance. \end{itemize} \begin{table}[t] \caption{Performance results of our proposed method (VRNN) vs. the baselines. The results for Social-Ways with 30 samples ($K=30$) and for the SoPhie method were taken from \cite{alahi2016social} and \cite{Sadeghian}, respectively. The ADE and FDE values are separated by a slash. The average values (AVG) only consider the results on the real datasets. The results for Social-Ways with three samples ($K=3$) were obtained with the open-source implementation provided by \cite{amirian2019social}.} \centering \begin{tabular}{cccllll} \hline \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{Deterministic} & \multicolumn{5}{c|}{Stochastic} \\ \hline \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{Single Sample} & \multicolumn{2}{c|}{Multiple Samples} & \multicolumn{1}{c|}{} &\multicolumn{2}{c|}{Single Sample} \\ \hline \multicolumn{1}{|c|}{Dataset} & \multicolumn{1}{c|}{LSTM} & \multicolumn{1}{c|}{SoPhie} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}S-Ways\\ ($K=30$)\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}S-Ways\\ ($K=3$)\end{tabular}} & \multicolumn{1}{c|}{STORN} & \multicolumn{1}{c|}{VRNN} \\ \hline\hline \multicolumn{1}{|c|}{\textbf{ETH}} & \multicolumn{1}{c|}{0.40 / 0.65} & \multicolumn{1}{c|}{0.70 / 1.43} & \multicolumn{1}{c|}{\textbf{0.39 / 0.64}} & \multicolumn{1}{c|}{0.78 / 1.48} & \multicolumn{1}{c|}{0.73 / 1.49} & \multicolumn{1}{|c|}{\textbf{0.39} / 0.70 } \\ \hline \multicolumn{1}{|c|}{\textbf{Hotel}} & \multicolumn{1}{c|}{0.45 / 0.75} & \multicolumn{1}{c|}{0.76 / 1.67} & \multicolumn{1}{c|}{0.39 / 0.64} & \multicolumn{1}{c|}{0.53 / 0.95} & \multicolumn{1}{c|}{1.33 / 1.45} & \multicolumn{1}{|c|}{\textbf{ 0.35 / 0.47 }} \\ \hline \multicolumn{1}{|c|}{\textbf{Univ}} & \multicolumn{1}{c|}{1.02 / 1.54} & \multicolumn{1}{c|}{0.54 / 1.24} & \multicolumn{1}{c|}{0.55 / 1.31} & \multicolumn{1}{c|}{ 0.81 / 1.53 } & \multicolumn{1}{c|}{0.82 / 1.17} & \multicolumn{1}{c|}{\textbf{0.53 / 0.65}} \\ \hline \multicolumn{1}{|c|}{\textbf{ZARA01}} & \multicolumn{1}{c|}{0.35 / 0.68} & \multicolumn{1}{c|}{\textbf{0.30 / 0.63}} & \multicolumn{1}{c|}{0.44 / 0.64} & \multicolumn{1}{c|}{ 0.87 / 1.30 } & \multicolumn{1}{c|}{0.91 / 1.52} & \multicolumn{1}{|c|}{0.41 / 0.70 } \\ \hline \multicolumn{1}{|c|}{\textbf{ZARA02}} & \multicolumn{1}{c|}{0.54 / 0.92} & \multicolumn{1}{c|}{\textbf{0.38 }/ 0.78} & \multicolumn{1}{c|}{0.51 / 0.92} & \multicolumn{1}{c|}{ 1.27 / 2.13 } & \multicolumn{1}{c|}{0.91 / 1.52} & \multicolumn{1}{c|}{ 0.51 /\textbf{ 0.55}} \\
\hline \hline \multicolumn{1}{|c|}{\textbf{AVG}} & \multicolumn{1}{c|}{0.55 / 0.90} & \multicolumn{1}{c|}{0.54 / 1.15} & \multicolumn{1}{c|}{0.46 / 0.83} & \multicolumn{1}{c|}{0.86 / 1.47} & \multicolumn{1}{c|}{0.94 / 1.43} & \multicolumn{1}{|c|}{\textbf{0.44 / 0.61}} \\ \hline \hline \end{tabular} \label{tab:global_performance} \end{table} We use the open-source implementation of \cite{amirian2019social} to obtain the results for S-Ways with only three samples ($K=3$), matching the number of trajectories predicted by our method, as suggested in \cite{trajnet}. We adopt the same dataset split setting as in \cite{amirian2019social}, using four sets for training and the remaining set for testing. The aggregated results in Table \ref{tab:global_performance} show that our method outperforms the deterministic baselines, STORN, and S-Ways using three samples. Moreover, the results show that our method achieves performance comparable to state-of-the-art methods using a high number of samples on the ZARA01, ZARA02, and ETH datasets, and achieves the best performance on the Hotel and Univ datasets. Finally, the poor performance of the STORN model shows that employing a time-dependent prior improves the prediction performance significantly. \begin{wrapfigure}{r}{.5\textwidth} \centering \includegraphics[width=.5\textwidth]{img/result_25_20.pdf} \caption{Social-VRNN predicted trajectories vs. a multi-modal prediction baseline, Social-Ways \cite{amirian2019social}. The ground-truth trajectory is depicted in blue; the three trajectories predicted by our model in red, green, and yellow; the one-sigma boundary of each predicted trajectory in light blue; and 30 predicted trajectories sampled from the Social-Ways model in magenta. }\label{fig:comparison} \end{wrapfigure} \subsection{Qualitative analysis} \begin{figure*}[!t] \centering \begin{minipage}{\textwidth} \includegraphics[height=3.6cm,width=\textwidth,trim={0cm 0cm 0cm 0cm},clip]{img/multipath.pdf} \caption*{\textit{a)} In this scenario, one agent is moving along a corridor with an obstacle in the middle. The agent is moving from left to right. When she encounters the obstacle in the middle of her path, our model successfully predicts two hypotheses: going left or going right. Once she is already avoiding the obstacle on the left side, the model predicts three hypotheses for the pedestrian to continue her collision-avoidance maneuver, with varying clearance levels. Finally, when she is in free space, all the predicted trajectories collapse to a single mode.} \label{fig:test1} \end{minipage} \hspace{10mm} \begin{minipage}{\textwidth} \includegraphics[height=3.6cm,width=\textwidth,trim={0cm 0cm 0cm 0cm},clip]{img/complex_zoom.pdf} \caption*{\textit{b)} This sub-figure illustrates four sample results obtained in a more complex simulated scenario, with several static obstacles and 15 agents. The two left figures show two situations where the agent can avoid another agent on its left or right, or simply move straight because the other agent will keep moving away. The two right figures show the ability of our model to predict the different trajectories that an agent may follow to avoid a static obstacle.} \label{fig:test2} \end{minipage} \hspace{10mm} \begin{minipage}{\textwidth} \includegraphics[height=3.6cm,width=\textwidth,trim={0cm 0cm 0cm 0cm},clip]{img/real2.pdf} \caption*{\textit{c)} Three examples of multi-modal trajectory prediction using our model in real scenarios.
The ground-truth trajectory is depicted in blue; the three possible predicted trajectories in red, green, and yellow; and the one-sigma boundary of each predicted trajectory in light blue.} \label{fig:real_scenarios} \end{minipage} \caption{ The scenarios depicted in Fig. \ref{fig:toy_experient}(a) and (b) were simulated using the Social Forces model \cite{Helbing1995} for the pedestrians. The real trajectory is shown in magenta; the mean values of each trajectory hypothesis in red, green, and yellow; and the 1-$\sigma$ uncertainty boundaries of each trajectory in blue. The dark blue dots represent the other agents. The plotted trajectories correspond to a single query to our network model. } \label{fig:toy_experient} \end{figure*} In this section, we present prediction results for simulated and real scenarios, as depicted in Fig. \ref{fig:toy_experient}. We created two datasets to demonstrate this multi-modal behavior with static obstacles (Fig. \ref{fig:toy_experient}(a)) and with other pedestrians (Fig. \ref{fig:toy_experient}(b)). Figure \ref{fig:toy_experient}(a) shows the ability of our method to predict different trajectories according to the environment structure. Figure \ref{fig:toy_experient}(b) demonstrates that our method can scale to more complex environments, with several pedestrians and obstacles, and predict different motion hypotheses. Moreover, we evaluate our method on real data using the publicly available datasets \cite{pellegrini2009you,lerner2007crowds}. In Fig. \ref{fig:toy_experient}(c), on the left, our model infers two possible trajectories for the pedestrian to avoid a tree. In addition, in the central and right images of Fig. \ref{fig:toy_experient}(c), our model predicts two possible trajectories to move through the crowd. Finally, Fig. \ref{fig:comparison} shows the predicted trajectories of both the Social-VRNN and the Social-Ways model in a crowded scene. The predicted trajectories from the Social-VRNN model capture two distinct routes through the crowd. In contrast, Social-Ways only captures one mode, even considering 30 samples from the baseline model. The presented results demonstrate that our model can effectively infer different trajectories according to the environment and social constraints from a single query. We refer the reader to the video accompanying this article for more details on the presented results. \subsection{Expert Knowledge} Human navigation behaviour is inherently multimodal. In the same situation, humans may follow different paths to avoid collisions with obstacles or other agents. For instance, Fig. \ref{fig:multiple_path_example} depicts the paths of two pedestrians, in green and purple, constantly switching from one side to the other of a corridor with an obstacle in the middle of the path. As can be observed, over time the same pedestrian may follow a different path under the same circumstances. Existing approaches for multimodal path prediction either sample from a latent random variable to generate different trajectories \cite{Gupta2018} or use ensemble methods (e.g., bagging or bootstrapping) \cite{lakshminarayanan2017simple}. In this work, we propose a supervised approach to train a neural network to predict the different possible paths according to the environment setting. Existing publicly available datasets only provide one of the many possible trajectories that an agent may follow.
To avoid the burden of clustering the different paths followed under the same circumstances in a dataset, we resort to a topological path planner (TPP) as the expert. Typically, a TPP combines a motion planner with a metric to classify the generated paths, e.g., winding angles or homotopy classes. In this work, we resort to a homotopy-based TPP. Homotopy classes characterize how paths avoid obstacles, allowing us to distinguish or group different paths. More specifically, we employ a homotopy-based TPP using A* as the search algorithm, called Homotopy A* (HA*), as presented in \cite{hernandez2015comparison}. Given a start position, a goal position, and a grid map containing the static and dynamic obstacles, the HA* planner can compute different paths belonging to different classes. We refer to \cite{hernandez2015comparison} for more details about the motion planner. However, the paths generated by the HA* planner are discrete and kinodynamically different (not ``human-like'') from a typical human path. Thus, the expert is only used to provide intermediate goal positions from paths belonging to different homotopy classes. Then, the Social Forces model \cite{Helbing1995} is applied to simulate the dynamics of the pedestrian from its current position to an intermediate position. If the obtained path belongs to a different class than the path in the dataset, it is added to the dataset. Thereby, the original dataset is enlarged with other human-like paths. A summary of the proposed approach is presented in Algorithm \ref{alg:expert}; a sketch of the corresponding augmentation loop is also given below. The algorithm is provided with a dataset, a maximum number of homotopy classes $M$ to search for, and a horizon length $T$. The dataset contains the trajectories of a group of $\textbf{n}$ pedestrians. We then iterate over the entire dataset, over each pedestrian $i$ and each time instant $t$ of the recorded trajectory. The function $\textbf{NextTraj}$ returns two results: the local static obstacle map and a trajectory segment of $T$ seconds cropped from time $t$ onward. \begin{algorithm}[t] \caption{Computing different motion hypotheses} \label{alg:expert} \begin{algorithmic} \STATE Given Dataset, $M$ and $T$ \FOR{$i = 1,\hdots,\textbf{n}$} \FOR{$t = 0,\hdots,N^i$} \STATE $[\pazocal{O}_{\textrm{static},t}^i, \boldsymbol{p}_0] \leftarrow \textbf{NextTraj}(i,t)$ \STATE $HC_0 \leftarrow HomotopyClass(\boldsymbol{p}_0)$ \FOR{$m = 1,\hdots,M$} \STATE $Path_m$ $\leftarrow$ \textbf{HA}*($\pazocal{O}_{\textrm{static},t}^i, \boldsymbol{p}_t^i, \boldsymbol{p}_N^i$) \IF{ $HomotopyClass(Path_m) \ne HC_0$} \STATE $\boldsymbol{p}_m \leftarrow\textbf{SocialForces}(Path_m)$ \STATE Dataset.push($\boldsymbol{p}_m$) \ENDIF \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Please note that the other pedestrians are also added to the static grid map.
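The following Python sketch mirrors the augmentation loop of Algorithm \ref{alg:expert} under stated assumptions: \texttt{ha\_star}, \texttt{social\_forces}, and \texttt{homotopy\_class} stand in for the HA* planner, the Social Forces simulator, and the homotopy-class computation, and the dataset interface is hypothetical.
\begin{verbatim}
# Sketch only: the planner (ha_star), the simulator (social_forces)
# and homotopy_class are assumed external components, and the dataset
# accessors samples()/push() are illustrative.
def augment_dataset(dataset, ha_star, social_forces, homotopy_class, M):
    for traj, grid in dataset.samples():       # per pedestrian and time t
        hc0 = homotopy_class(traj)
        # candidate paths from start to goal, up to M homotopy classes
        for path in ha_star(grid, traj[0], traj[-1], max_classes=M):
            if homotopy_class(path) != hc0:    # a genuinely new class
                # re-simulate to obtain a "human-like" trajectory
                dataset.push(social_forces(path))
    return dataset
\end{verbatim}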
\subsection{Adversarial Training} Neural networks sometimes behave in an interesting way, classifying two similar data samples differently with high confidence. Adversarial examples show this case: they are created from the training data by adding the input gradient, i.e., by perturbing the input in the direction of increasing cost. The resulting example is most likely misclassified, and the network is then trained to correct such mistakes \cite{goodfellow2016deep}. \subsection{GAN-based Multipath Prediction} Frequently, the solution of a motion planning task has more than one correct answer. For that reason, models minimizing the MSE between the ground truth and the predicted trajectory that an agent performs in a specific situation cannot learn the multiple options that the agent may have followed. Generative Adversarial Networks (GANs) are a subclass of generative models enabling multi-modal predictions \cite{Goodfellow2016}. \cite{goodfellow2014generative} extend GSNs by removing the need for Markov chains and introduce the adversarial modeling framework, a two-player game (competition mechanism) between two network models: a generative and a discriminative model. In this game, the generator tries to produce samples that resemble the training data, while the discriminator tries to decide whether a sample comes from the model or from the data distribution. The competition drives both networks to achieve better results. A \textit{conditional generative model} $p(x|c)$ additionally conditions the generation on a context variable $c$. \subsubsection{Computing different motion hypotheses} Human motion is inherently multimodal. Given a workspace with obstacles surrounding a query-agent, its start position $s$, and its goal position $g_i$, it is possible to compute different paths that the agent may have followed. In order to learn this multimodal behavior, we must first reason about how to distinguish between different paths. There are several ways to characterize a path, for instance winding angles or homotopy classes. In this work, we use homotopy classes to classify and compute different paths, which can then be grouped into different categories. A homotopy class is the set of all possible trajectories from a start to a goal position that avoid obstacles in the same way; two paths are homotopic if one can be continuously deformed into the other without crossing an obstacle. The surrounding environment is given as a local map $\mathcal{M}_i$, and the goal position $g_i$ can be obtained from the recorded dataset. \subsubsection{Trajectory selection} Previously, we introduced how to generate a set containing trajectories from different homotopy classes according to the topology of the environment. Yet, some of the generated paths are highly unlikely to be followed by the query-agent, so we filter the candidates using simple heuristics: \begin{enumerate} \item the velocity direction; \item only looking forward. \end{enumerate} The output probability model is a bivariate Gaussian over the position variables: \begin{align}\label{eq:biv} f(x,y) =& \frac{1}{2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}}\exp\left(-\frac{z}{2(1-\rho^2)}\right)\\ z =& \frac{(x-\mu_x)^2}{\sigma_x^2}-2\rho\frac{(x-\mu_x)(y-\mu_y)}{\sigma_x \sigma_y}+\frac{(y-\mu_y)^2}{\sigma_y^2} \nonumber \end{align} where the output of the Social LSTM model is composed of five elements, $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\rho$, representing the means and standard deviations of the position variables $x$ and $y$, together with their correlation coefficient $\rho$.
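As a quick numerical check of Eq. \ref{eq:biv}, the following minimal \texttt{numpy} sketch evaluates the bivariate normal density in its standard form (illustrative code, not part of the model):
\begin{verbatim}
import numpy as np

def bivariate_normal_pdf(x, y, mu_x, mu_y, sig_x, sig_y, rho):
    """Bivariate Gaussian density with correlation rho, cf. Eq. (biv)."""
    zx, zy = (x - mu_x) / sig_x, (y - mu_y) / sig_y
    z = zx ** 2 - 2 * rho * zx * zy + zy ** 2
    norm = 2 * np.pi * sig_x * sig_y * np.sqrt(1 - rho ** 2)
    return np.exp(-z / (2 * (1 - rho ** 2))) / norm

# density at the mean of a unit-variance Gaussian with rho = 0.5
print(bivariate_normal_pdf(0, 0, 0, 0, 1, 1, 0.5))  # ~0.1838
\end{verbatim}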
A noise scheduling technique can be used to encode the uncertainty propagation \cite{Fragkiadaki2015}. We already use backpropagation and dropout \cite{hinton2012improving}. In addition, $\Pi^i_{k,t}$ is the likelihood that the $i$-th agent follows the $k$-th trajectory at each time step. The information about the scene is provided as an occupancy map $O_{Static}$. A Convolutional Neural Network (CNN) is pre-trained using a convolutional auto-encoder \cite{cadena2016multi}. The learned encoder network is later used in the final architecture to extract the most relevant features of the map, which are then fed to an LSTM. Our baseline model consists of a three-channel structure incorporating the ego-agent state, the environment information, and the social context. The outputs of each channel are concatenated and fed to a sequence of an LSTM plus a fully connected (FC) network. \subsubsection{Model Training} At each training iteration, a minibatch $y_{Dataset}$ from the dataset and a minibatch $\hat{y}_t$ from the model's predictions are drawn. One gradient step is performed to minimize $J_D$ and another to minimize $J_G$. Some works recommend running more training steps for one player than for the other. \subsection{MSVRNN model} \begin{figure*} \centering \includegraphics[width=\textwidth]{img/multimodal_network_2.jpg} \caption{Multi-space Variational Network architecture}% \label{fig:msvrnn}% \end{figure*} Homotopy classes allow categorizing different trajectories into different classes. Here, we introduce a multi-space variational recurrent neural network (MSVRNN) for multi-hypothesis trajectory prediction and classification. The likelihood of an agent following a trajectory from a specific class can be described by an additional discrete latent variable $\textbf{y}$ representing the unobserved class of a specific trajectory. Thus, the MSVRNN model combines both continuous and discrete latent variables. We use an additional network to approximate $p(\textbf{y}|\textbf{z})$, acting as a discriminative classifier. The architecture of the MSVRNN model is presented in Fig. \ref{fig:msvrnn}. \subsubsection{Generative Model} The combined generative model is described as follows: \begin{eqnarray} p_\theta(\textbf{x}_{t+1:t+T}|\mathbf{z}_{c,t})\sim \sum_{m=1}^M\sum_{k=t+1}^{t+T}\mathcal{N}(\mathbf{\mu}_{z,k},\sigma_{z,k}) \\ p_\pi(\textbf{y}_t|\textbf{z}_{d,t}) = \prod_{m=1}^M\textbf{Cat}(y_m|\varphi_{disc}(h_{t-1}))\\ p_{\theta,\pi,\phi}(\textbf{x}|\textbf{y},\textbf{z}) = p(\textbf{x}|\textbf{z})\,p(\textbf{y}|\textbf{z}) \end{eqnarray} where $\textbf{Cat}(\cdot)$ is a multinomial distribution encompassing the likelihood of an agent following a specific trajectory over a set of possible trajectory classes, with $y_m \in \{1,\dots,M\}$. Furthermore, the continuous latent variable is now assumed to be a bivariate Gaussian per time step over the prediction horizon. Similar to the VRNN model, the convex combination of both models results in a GMM. \subsubsection{Prior} The inference model is the same as defined in Eq. \ref{eq:inf_model}. The key difference between the MSVRNN and the VRNN is the addition of a separate LSTM layer to reason about the trajectory class $y_m$, as depicted in Fig. \ref{fig:network_architecture}. The latent variable is split into a discrete and a continuous part, $\textbf{z}=[\textbf{z}_d,\textbf{z}_c]$. Thereby, two different priors are assumed over the latent space: \begin{eqnarray} \textbf{z}_{c,t}\sim \mathcal{N}(\mathbf{\mu}_{0,t},\sigma_{0,t}) \\ (\mathbf{\mu}_{0,t},\sigma_{0,t})= \varphi_{c,prior}(h^{enc}_t) \\ \textbf{z}_{d,t}\sim \textit{Gumbel}(0,I) \end{eqnarray} To ensure differentiability through the network, we employ the Gumbel-Max trick \cite{Tarlow}. Lastly, the outputs of each channel are combined, resulting in a GMM.
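For illustration, the following minimal \texttt{numpy} sketch shows categorical sampling via the Gumbel-Max construction; in training, the hard $\arg\max$ is typically relaxed by a temperature-controlled softmax to keep the operation differentiable. All names are illustrative.
\begin{verbatim}
import numpy as np

def gumbel_max_sample(logits, rng):
    """Draw a class index from Categorical(softmax(logits)) by adding
    Gumbel(0, 1) noise to the logits and taking the argmax."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return int(np.argmax(logits + g))

rng = np.random.default_rng(0)
draws = [gumbel_max_sample(np.log(np.array([0.7, 0.2, 0.1])), rng)
         for _ in range(1000)]
print(np.bincount(draws, minlength=3) / 1000)  # approx. [0.7 0.2 0.1]
\end{verbatim}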
\subsubsection{Loss function} Note that the introduction of discrete latent variables results in a more complex loss function: \begin{equation} \begin{aligned} L(\textbf{x},\theta,\phi,\pi) = &\mathbb{E}_{\textbf{z}\sim q_{\varphi_{enc}}(\textbf{z}|\textbf{x})}[\log p_\theta(\textbf{x}|\textbf{z}_c)+\log p_\pi(\textbf{z}_d)] \\ & -\textbf{KL}(q_{\varphi_{enc}}(\textbf{z}_c|\textbf{x})||p(\textbf{z}_c)) \end{aligned} \end{equation} \subsection{SRNN - To be completed...} \begin{figure*} \centering \includegraphics[width=\textwidth]{img/multimodal_network_srnn.jpg} \caption{Network architecture}% \label{fig:srnn}% \end{figure*} SRNNs combine a deterministic RNN with a State Space Model (SSM) and achieve state-of-the-art performance in modeling stochastic sequential processes \cite{NIPS2016_6039}. Our SRNN model follows a similar approach, as presented in Fig. \ref{fig:srnn}. We split our network architecture into two main modules: a deterministic and a stochastic one. The first is composed of the feature extractor, responsible for pre-processing the input channels as described in Section \ref{sec:inputs}, and a deterministic RNN. As previously explained, the deterministic RNN, comprising three split LSTM layers followed by a $\textbf{concat}$ operation, creates a latent tensor $\textbf{z}$ fusing and encoding the time patterns of the training data. The second module, encompassing the stochastic RNN, is a SRNN\dots \section{Introduction}\label{sec:intro} Prediction of human motion is key for the safe navigation of autonomous robots among humans in cluttered environments. Therefore, autonomous robots, such as service robots or autonomous cars, should be capable of reasoning about the intentions of pedestrians in order to accurately forecast their motions. Such abilities will allow planning algorithms to generate safe and socially compliant motion plans. However, building an inference model of human motion is extremely difficult due to its uncertainty and multimodality. The uncertainty is caused by the partial observation of the pedestrians' states and their stochastic dynamics. The multimodality is due to the interaction effects between the pedestrians, the static environment, and the non-convexity of the problem. For instance, as Fig. \ref{fig:multiple_path_example} shows, a pedestrian can decide to either avoid a static obstacle or engage in a non-verbal joint collision-avoidance maneuver with an upcoming pedestrian, avoiding on the right or on the left. A large number of high-accuracy prediction models have been proposed. However, some of these approaches only predict the mean behavior of the other agents \cite{Pfeiffer2017}. Others apply different techniques to model uncertainty, such as ensemble modeling \cite{lotjens2019safe}, dropout during inference \cite{gal2016dropout}, or learning a generative model and generating several trajectories by sampling randomly from the latent space \cite{Gupta2018}. Recently, Generative Adversarial Networks (GANs) have achieved state-of-the-art performance in multi-modal trajectory prediction by randomly sampling the latent space to generate diverse trajectories \cite{amirian2019social}. Yet, these methods have two main drawbacks. First, GANs are difficult to train and may fail to converge during training. Second, they require a high number of samples to achieve good prediction performance, which is impracticable for real-time motion planning.
Moreover, these approaches assume an independent prior across different time steps, ignoring the time dependencies inherent in trajectory prediction problems. \begin{figure}[t] \centering \includegraphics[scale=0.45]{img/multiple_path.pdf} \caption{Illustration of a scenario in which there are multiple ways for two pedestrians to avoid a collision. We present a method that, given the same observed past, predicts multiple socially acceptable trajectories in crowded scenes.} \label{fig:multiple_path_example} \end{figure} The objective of this work is to develop a prediction model suitable for interaction-aware autonomous navigation. Hence, we address these limitations with a novel generative model for multi-modal trajectory prediction based on Variational Recurrent Neural Networks (VRNNs) \cite{vrnn}. We treat the multi-modal trajectory prediction problem as modeling the joint probability distribution over sequences. To this end, our model first creates a joint representation of three input channels: the pedestrian dynamics, the static environment, and the other interacting pedestrians. Then, the VRNN structure learns the complex multi-modal distribution conditioned on the previous time steps. Finally, we use a Gaussian Mixture Model (GMM) as the output probability model, enabling one-shot multi-modal trajectory predictions. Our approach makes three key contributions: \begin{itemize} \item A new variational deep generative neural network (VDGNN) design for one-shot multi-modal trajectory prediction; \item A training strategy to learn more diverse trajectories in an interpretable fashion; \item A one-shot prediction model achieving state-of-the-art performance with a reduced number of samples on both simulated and real datasets. \end{itemize} \section{Preliminaries}\label{sec:preliminaries} \subsection{Multimodal Trajectory Prediction Problem}\label{sec:problem_prelim} Consider a navigation scenario with $\textbf{n}$ interacting agents navigating on a plane $\pazocal{W} = \mathbb{R}^2$. The dataset $\mathbf{D}$ contains the $i$-th agent trajectory $X^i_{1:N}=\{(p_1^i,v_1^i),\dots,(p_N^i,v_N^i)\}$ over $N$ time steps and the surrounding static environment $\pazocal{O}_{\textrm{static}}^i \subset \pazocal{W}$, for $i \in \{1, \dots, \textbf{n}\}$. Let $\textbf{x}^i_t = \{p^i_t,v^i_t \}$ denote the state of the $i$-th agent at time $t$, consisting of the agent position $p^i_t = \{x^i_t,y^i_t \}$ and velocity $v^i_t = \{v^i_{x,t},v^i_{y,t} \}$ with respect to the world frame $\pazocal{W}$. $X^{i}_{t+1:t+T}=(p_{t+1},\dots,p_{t+T})$ represents the future agent trajectory over a prediction horizon $T$. Throughout this paper, the superscript $i$ denotes the query-agent and $-i$ the collection of all the other agents' states. To account for the uncertainty and multimodality of agent motion, we seek a probabilistic model $f(\theta)$ with parameters $\theta$ over a set of $M$ different trajectories and a prediction horizon $T$: \begin{equation} \begin{array}{c} p(\mathbf{x}^i_{m,t+1:t+T} | \mathbf{x}^{i}_{t},\mathbf{x}^{-i}_{t},\pazocal{O}_{\textrm{static}}) = f(\theta, X^{i}_{t},X^{-i}_{t},\pazocal{O}_{\textrm{static}}), \\ \forall m \in \{1,\dots,M\} \end{array} \end{equation} where $m$ is the trajectory index and $t$ the query time instant. The probability is conditioned on the other agents' states and the surrounding environment to model the interaction and environment constraints.
\section{Related Works}\label{sec:related_work} Early works on human motion prediction are typically model-based. In \cite{Helbing1995}, a model of human-human interactions was proposed by simulating attractive and repulsive physical forces, denominated ``social forces''. To account for human-robot interaction, a Bayesian model based on agent-based velocity space was proposed in \cite{Kim2015}. However, these approaches do not capture the multi-hypothesis behavior of human motion. To accomplish that, \cite{Trautman2010} proposed a path prediction model based on Gaussian Processes, known as interactive Gaussian Processes (IGP), modeling each individual's path with a Gaussian Process. The main drawbacks of this approach are the use of hand-crafted functions to model interaction, which limits its ability to learn beyond the perceptible effects, and its high computational cost. Recently, Recurrent Neural Networks (RNNs) have been employed in trajectory prediction problems \cite{Becker2018}. Building on RNNs, a hierarchical architecture was proposed in \cite{Xue2018} and \cite{Pfeiffer2017}, which incorporated information about the surrounding environment and other agents and performed better than previous models. Despite the high prediction accuracy demonstrated by these models, they are only able to predict the average behavior of the pedestrians. In contrast, Social LSTM \cite{alahi2016social} models the prediction state as a bivariate Gaussian and thus can incorporate uncertainty. Moreover, interaction is modeled by changing the hidden state of each agent's network according to the distance between the agents, a mechanism known as ``social pooling''. Several approaches extended the latter, either incorporating other sources of information or proposing updates to the model architecture that improve its performance. For instance, head-pose information from the other agents was incorporated in \cite{hasan2018mx}, resulting in a significant increase in prediction accuracy. Context information from visual images was used to encode both human-human and human-space interactions \cite{bartoli2018context}. Social pooling has been extended to generate collision-free predictions \cite{xu2018collision} and to preserve spatial information by employing Grid LSTMs \cite{lerner2007crowds}. However, the previous approaches did not consider the inherent multi-modal nature of human motion. In \cite{Gupta2018}, a generative model based on Generative Adversarial Networks (GANs) was developed to generate multi-modal predictions by randomly sampling from the latent space. The latter approach was extended with two attention mechanisms to incorporate information from the scene context and social interactions \cite{Sadeghian}. However, GANs are very susceptible to mode collapse, causing these models to generate very similar trajectories. To avoid mode collapse, an improved Info-GAN for multi-modal trajectory prediction was recently proposed \cite{amirian2019social}. Nonetheless, GANs are very difficult to train and typically require a large number of iterations until they converge to a stable Nash equilibrium.
Moreover, the environment context was not incorporated in \cite{amirian2019social}, and the predicted trajectories may collide with static obstacles. To overcome the latter issue, \cite{zhao2019multi} proposed to include scene context information provided by a top-view camera of the scene, making use of spatial relationships among agents, and showed good results on the Zara dataset. However, such information is not available in a real autonomous navigation scenario. In contrast to the GAN-based approaches, VRNNs \cite{vrnn} have demonstrated better performance learning probabilistic models of time sequences: their architecture follows a variational approach, which is easier to train, converges faster, and consequently requires fewer training iterations. Hence, in this work, we propose a novel architecture to learn a multi-modal prediction model based on VRNNs. Moreover, our method only uses local information, enabling its application to autonomous navigation. \begin{comment} \subsection{Learning sequence models} In simple terms, a Recurrent Neural Network (RNN) can be described as a neural network specially tailored to learn the underlying model of sequential data. LSTMs and GRUs were introduced to overcome the vanishing and exploding gradient issues of plain RNNs. Output probability models \cite{bishop1994mixture}; RNN-GMM \cite{graves2013generating}. Variational autoencoders (VAEs) have recently been shown to be an effective method to learn complex multi-modal distributions from data. \cite{bayer2014learning} combined VAEs with RNNs by simply employing an RNN for both the recognition (encoder) and the generative (decoder) model. The model, named STORN, computes the next hidden state $h_t$ depending on the previous hidden state $h_{t-1}$, the input $x_{t-1}$, and a sampled latent random variable $z_t$. Finally, the model is trained using the same principle as in VAEs. \cite{bowman2015generating} applied the same approach to natural language processing and proposed two strategies to overcome the training issues: KL cost annealing and decoder dropout. Later, \cite{chung2015recurrent} introduced the variational RNN (VRNN), including latent random variables in the hidden state of an RNN. VRNNs extend VAEs to model sequential data. \end{comment} \section{Preliminaries} \subsection{Mixture Density Network}\label{sec:MDN_pre} A Mixture Density Network (MDN) is a formulation of a neural network to model the conditional probability distribution of a target dataset $\textit{D}$. The idea behind an MDN is to use the outputs of the network to parameterize a mixture distribution: a subset of the outputs is used as the mixture weights, and the remaining outputs parameterize the respective mixture components. Consider a neural network with parameters $\theta$ modelling a mixture model with $M$ components, each a Gaussian, conditioned on the input vector $\textbf{x}$: \begin{equation} p(y|\textbf{x},\theta) = \sum_{m=1}^M\pi_m(\textbf{x}) \cdot \mathcal{N}(y;\mu_m(\textbf{x}),\sigma_m^2(\textbf{x})) \end{equation} with $\pi_m, \mu_m, \sigma_m$ as the $m$-th mixture weight, mean, and standard deviation, respectively. 
Then, the output vector $\hat{y}=[\hat{\pi}_1, \hat{\mu}_1, \hat{\sigma}_1, \dots, \hat{\pi}_M, \hat{\mu}_M, \hat{\sigma}_M]$ is parameterized as follows: \begin{subequations}\label{eq:control_problem} \begin{align*} \pi_m &= \frac{\exp(\hat{\pi}_{m})}{\sum_{m'=1}^M\exp(\hat{\pi}_{m'})} \implies \pi_m \in [0,1] \\ \mu_m(\textbf{x}) &= \hat{\mu}_{m} \\ \sigma_m(\textbf{x}) &= \exp(\hat{\sigma}_{m}) \implies \sigma_m >0 \end{align*} \end{subequations} The goal is to find the set of parameters $\theta$ of the MDN which minimizes the negative log-likelihood over the set of input/output pairs in the training dataset $\textit{D}=\{(x_1,y_1),\dots,(x_N,y_N)\}$ with $N$ samples: \begin{equation} \theta^* = \arg \min_\theta \, -\sum^N_{t=1} \log{p(y_t | x_t,\theta)} \end{equation} \subsection{Uncertainty Propagation} To compute the resulting trajectories we use a linear model with uncertain velocity, resulting in the following dynamical system: \begin{equation} \begin{array}{lc} x_{k+1}=x_k+v_k\,\delta t, & v_k \sim \mathcal{N}(\mu^v_k,\Sigma^v_k) \\ \mu^x_{k+1} =\mu^x_k + \mu^v_k\,\delta t & \\ \Sigma^x_{k+1} = \Sigma^x_k + \Sigma^v_k \,\delta t^2 & \\ x_0 \sim \mathcal{N}(\mu^x_{0},\Sigma^x_{0}) & \text{Prior on agent position} \\ v_0 \sim \mathcal{N}(\mu^v_{0},\Sigma^v_{0}) & \text{Prior on agent velocity} \end{array} \end{equation} \subsection{Trajectory prediction}\label{sec:rnns} Recurrent neural networks have demonstrated high efficiency in learning time-dependent sequences. When predicting sequences, there are three main approaches which can be followed: many-to-many, one-to-many, and recursive. In the first two configurations, the network is requested to predict more than one time-step into the future, the two differing in the number of past inputs fed to the network. The last learns to predict only a single step, using the predicted values to predict the following steps. It has been shown that many-to-many and one-to-many configurations achieve higher prediction performance than recursive approaches. Thus, in this work, we focus our attention on one-to-many models. Consider that the dataset $D$ from subsection \ref{sec:MDN_pre} is divided into batches of input/output sequences of length $T$. Given a sequence of inputs $x_{t:t+T}$ at time $t$, an LSTM network predicts a sequence of outputs $\hat{y}_{t+1:t+T}$ by iteratively computing the following equations: \begin{subequations}\label{eq:lstm} \begin{align} h_t = LSTM(h_{t-1},x_t,\theta) \\ \hat{y}_{t+1:t+T} = \theta_{hy}h_t + b_y \end{align} \end{subequations} where $\theta$ denotes the network weights, $\theta_{hy}$ and $b_{y}$ the weight matrix and the bias vector of the output layer, and $h_t$ the hidden state representing the memory of the network over the past inputs. Three gate mechanisms compose the LSTM function: an input gate $i$, a forget gate $f$, and an output gate $o$, together with a cell state $c$. Thus, the LSTM equations are described as follows: \begin{subequations}\label{eq:lstm_gates} \begin{align} i_t = \sigma(\theta_{xi}x_t + \theta_{hi}h_{t-1}+\theta_{ci}c_{t-1}+b_i) \\ f_t = \sigma(\theta_{xf}x_t + \theta_{hf}h_{t-1}+\theta_{cf}c_{t-1}+b_f) \\ c_t = f_t c_{t-1}+i_t\tanh{(\theta_{xc}x_t + \theta_{hc}h_{t-1}+b_c)} \\ o_t = \sigma(\theta_{xo}x_t + \theta_{ho}h_{t-1}+\theta_{co}c_{t-1}+b_o) \\ h_t = o_t \tanh{(c_t)} \end{align} \end{subequations} with $\sigma(\cdot)$ as the sigmoid activation function. 
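To make the two preceding formulations concrete, the following minimal PyTorch sketch (module, function, and size choices are ours and purely illustrative, not the implementation used in this work) wires an LSTM to a mixture-density output layer, applying exactly the $\pi_m$, $\mu_m$, $\sigma_m$ transformations above:
\begin{verbatim}
import torch
import torch.nn as nn

class LSTMMDN(nn.Module):
    """LSTM encoder with a mixture-density head (illustrative sketch)."""
    def __init__(self, input_size=2, hidden_size=64, n_mixtures=5):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 3 * n_mixtures)  # pi, mu, sigma per component
        self.M = n_mixtures

    def forward(self, x):                      # x: (batch, T, input_size)
        h, _ = self.lstm(x)                    # h: (batch, T, hidden_size)
        pi_hat, mu, sigma_hat = self.head(h).split(self.M, dim=-1)
        pi = torch.softmax(pi_hat, dim=-1)     # mixture weights: pi_m in [0, 1]
        sigma = torch.exp(sigma_hat)           # standard deviations: sigma_m > 0
        return pi, mu, sigma

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of targets y: (batch, T, 1) under the mixture."""
    log_comp = torch.distributions.Normal(mu, sigma).log_prob(y)  # (batch, T, M)
    return -torch.logsumexp(torch.log(pi) + log_comp, dim=-1).mean()
\end{verbatim}
The \texttt{softmax} and \texttt{exp} transformations enforce the constraints noted above, and \texttt{mdn\_nll} implements the negative log-likelihood objective of the MDN subsection.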
Typically, the mean square error is used as the loss function: \begin{equation} L(y_{t+1:t+T},\theta,x_t)= \sum_{k=1}^{N/T}\frac{1}{T}\sum_{l=1}^T ||\hat{y}_{t+l}-y_{t+l}||^2_2 \end{equation} with $\hat{y}_{t+l}$ and $y_{t+l}$ as the predicted and ground truth values at time $t+l$, respectively. \subsection{Trajectory classification} The problem of learning the likelihood of an agent to follow a specific motion hypothesis can be formulated as a multilabel classification problem. Given a series of observations $x^{(1)} \hdots x^{(T)}$ and a set of different trajectory classes $y$, we can train a classifier network to learn the likelihood of an agent following a certain hypothesis. The output of the network can be built by simply adding a fully connected (FC) layer on top of the LSTM network, followed by an element-wise sigmoid activation function. Then, the \textit{log loss} can be used as the loss function: \begin{equation} L(\hat{y},y,\theta,x_t)= \frac{1}{L}\sum_{l=1}^{L}-(y_l\cdot\log(\hat{y}_l)+(1-y_l)\cdot\log(1-\hat{y}_l)) \end{equation} \subsection{Variational Autoencoder} The variational autoencoder (VAE) belongs to the class of generative models which can learn complex probabilistic models directly from data and perform inference. Consider the previous dataset and assume that the observed variables \textbf{x} were generated from an underlying unobserved representation \textbf{z}. The goal of the VAE is to learn the probability distribution $p(\textbf{x})$, which can be obtained by marginalizing the joint distribution $p(\textbf{x},\textbf{z})=p(\textbf{x}|\textbf{z})p(\textbf{z})$ with respect to \textbf{z}, resulting in: \begin{equation}\label{eq:px} p(\textbf{x})=\int p(\textbf{x}|\textbf{z})p(\textbf{z})\,d\textbf{z} \end{equation} There are two main problems with Equation \ref{eq:px} which make it extremely difficult to solve: the distribution of the latent space \textbf{z} is unknown, and the integral over the latent space is intractable. Generally, VAEs assume the prior over the latent space to be a simple Gaussian distribution with zero mean and unit variance, $z\sim \mathcal{N}(0,I)$. Such an assumption is only possible by employing a powerful function approximator, e.g. a neural network, which can learn a non-linear map from the assumed Gaussian distribution to the observed \textbf{x}, resembling the \textit{decoder} structure in an autoencoder network. In addition, the VAE approximates the intractable posterior $p(\textbf{z}|\textbf{x})$ with another function approximator $Q(\textbf{z}|\textbf{x})$ (the \textit{encoder}). Thus, by assuming that $Q$ generates the \textbf{z} values which are most likely to reconstruct \textbf{x}, it resolves the intractable integral issue. Finally, the sampling operation necessary to solve Eq. \ref{eq:px} is made differentiable by reparametrizing $\textbf{z} = \boldsymbol{\mu} + \boldsymbol{\sigma}\odot \epsilon$. The parameters $\theta$ of the generative model $p_\theta(\textbf{x}|\textbf{z})$ and $\phi$ of the inference model $q_\phi(\textbf{z}|\textbf{x})$ can then be trained using standard backpropagation by optimizing the following variational lower bound: \begin{equation} \mathcal{L}(\textbf{x},\theta,\phi) = - KL(q_{\phi}(\textbf{z}| \textbf{x})||p(\textbf{z})) + \mathbb{E}_{p(\epsilon)}[ \log{p_{\theta}(\textbf{x}|\textbf{z}=\boldsymbol{\mu} + \boldsymbol{\sigma}\odot \epsilon)}] \end{equation} with $KL(Q||P)$ as the Kullback-Leibler divergence between $Q$ and $P$, and $\epsilon$ a vector of Gaussian variables with independent marginals. 
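The reparametrization trick and the lower bound above translate directly into code. The following is a minimal sketch (PyTorch; the architecture and all names are ours, chosen only for illustration), assuming a diagonal Gaussian $q_\phi(\textbf{z}|\textbf{x})$ and the standard Gaussian prior, for which the KL term has a closed form:
\begin{verbatim}
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim=2, z_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))  # [mu, log sigma^2]
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)  # parameters of q(z|x)
        eps = torch.randn_like(mu)                  # p(eps) = N(0, I)
        z = mu + torch.exp(0.5 * log_var) * eps     # z = mu + sigma * eps
        x_rec = self.dec(z)
        # closed-form KL(q(z|x) || N(0, I)) for diagonal Gaussians
        kl = 0.5 * (mu.pow(2) + log_var.exp() - 1.0 - log_var).sum(-1)
        # Gaussian log-likelihood up to constants (unit output variance assumed)
        rec = 0.5 * (x_rec - x).pow(2).sum(-1)
        return (rec + kl).mean()                    # minimizing this maximizes the ELBO
\end{verbatim}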
\subsection{Baseline network} The LSTM model proposed in \cite{Pfeiffer2017} is one of the first approaches incorporating the three main sources of information influencing an agent's navigation behavior: the query-agent state, the static environment, and the surrounding agents. The first input is the query-agent's velocity $(v_x,v_y)$ represented in its local Cartesian coordinate frame. The second input is a 2D occupancy grid $\pazocal{O}^i_{\textrm{static}}$ of size $n_s \times n_s$, covering a local neighbourhood of the agent and encoding information about the surrounding static obstacles. The third consists of a vector array with the relative distance and velocity between the query-agent and the other agents. This approach can accurately predict trajectories of pedestrians in crowded scenarios. The proposed network architecture consists of three parallel input channels, pre-processed in different ways. The information about the query-agent is fed directly to the network. In parallel, a compressed representation of the static environment is generated by feeding the static grid into an encoder network, previously pre-trained using an auto-encoder scheme. Finally, the information about the surrounding agents is fed through a fully connected layer (FCL). Then, an LSTM layer is stacked on top of each input channel, learning the time dependencies. The outputs of these LSTMs are concatenated, creating a latent representation of the three input channels. Finally, another LSTM layer is stacked on top, learning the time dependencies of the latent space. The output of the model is the predicted velocities of the query-agent over a prediction horizon $T$, resulting in the following loss function: \begin{equation} L(\theta,v^i)=\frac{1}{T}\sum_{l=1}^T||\hat{v}^i_l-v^i_l||+\lambda\cdot \alpha(\theta) \end{equation} where $v^i$ represents the ground truth velocities for the $i$-th agent, $\hat{v}^i$ the velocities predicted by the model, and $\alpha(\theta)$ and $\lambda$ the regularization term and the regularization factor, respectively. For more details about the network architecture, we refer the reader to \cite{Pfeiffer2017}.
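Schematically, the architecture just described can be assembled as follows (a PyTorch sketch with illustrative layer sizes of our own choosing; it is not the exact configuration of \cite{Pfeiffer2017}):
\begin{verbatim}
import torch
import torch.nn as nn

class BaselinePredictor(nn.Module):
    def __init__(self, grid_cells=60 * 60, grid_latent=64, other_dim=6,
                 hidden=64, horizon=12):
        super().__init__()
        # encoder for the static grid, pre-trained within an autoencoder
        self.grid_enc = nn.Sequential(nn.Linear(grid_cells, 256), nn.ReLU(),
                                      nn.Linear(256, grid_latent))
        self.other_fc = nn.Linear(other_dim, 32)    # surrounding-agents channel
        self.lstm_vel = nn.LSTM(2, hidden, batch_first=True)
        self.lstm_grid = nn.LSTM(grid_latent, hidden, batch_first=True)
        self.lstm_other = nn.LSTM(32, hidden, batch_first=True)
        self.lstm_top = nn.LSTM(3 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2 * horizon)   # (v_x, v_y) over the horizon

    def forward(self, vel, grid, others):           # all inputs: (batch, T, .)
        h1, _ = self.lstm_vel(vel)
        h2, _ = self.lstm_grid(self.grid_enc(grid))
        h3, _ = self.lstm_other(self.other_fc(others))
        h, _ = self.lstm_top(torch.cat([h1, h2, h3], dim=-1))
        return self.out(h[:, -1])                   # predicted future velocities
\end{verbatim}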
\section{Introduction} Consider a Coxeter system $(W,S)$, a positive weight function $L$, and the corresponding generic Iwahori-Hecke algebra $\mathcal{H}$. As detailed by G.~Lusztig in \cite{lusztig:unequal}, a choice of weight function gives rise to a partition of $W$ into left, right, and two-sided Kazhdan-Lusztig cells, each of which carries the structure of an $\mathcal{H}$- as well as a $W$-module. The cell decomposition of $W$ is understood for all finite Coxeter groups and all choices of weight functions with the exception of type $B_n$. We focus our attention on this remaining case and write $W=W_n$. A weight function is then specified by a choice of two integer parameters $a$ and $b$ assigned to the simple reflections in $W_n$: \begin{center} \begin{picture}(300,30) \put( 50 ,10){\circle*{5}} \put( 50, 8){\line(1,0){40}} \put( 50, 12){\line(1,0){40}} \put( 48, 20){$b$} \put( 90 ,10){\circle*{5}} \put(130 ,10){\circle*{5}} \put(230 ,10){\circle*{5}} \put( 90, 10){\line(1,0){40}} \put(130, 10){\line(1,0){25}} \put(170, 10){\circle*{2}} \put(180, 10){\circle*{2}} \put(190, 10){\circle*{2}} \put(205, 10){\line(1,0){25}} \put( 88 ,20){$a$} \put(128, 20){$a$} \put(222, 20){$a$} \end{picture} \end{center} Given $a,b \neq 0$, we write $s=\tfrac{b}{a}$ for their quotient. We have the following description of cells due to C.~Bonnaf{\'e}, M.~Geck, L.~Iancu, and T.~Lam. It is stated in terms of a family of generalized Robinson-Schensted algorithms $G_r$ which define bijections between $W_n$ and same-shape pairs of domino tableaux of rank $r$. \begin{conjecture*}[\cite{bgil}]\label{conjecture:bgil2} Consider a Weyl group $W_n$ of type $B_n$ with a weight function $L$ and parameter $s$ defined as above. \begin{enumerate} \item When $s \not\in \mathbb{N}$, let $r= \lfloor s \rfloor$. Two elements of $W_n$ lie in the same Kazhdan-Lusztig left cell if and only if they share the same right tableau in the image of $G_r$. \item When $s \in \mathbb{N}$, let $r= s-1$. Two elements of $W_n$ lie in the same Kazhdan-Lusztig left cell if and only if their right tableaux in the image of $G_r$ are related by moving through a set of non-core open cycles. \end{enumerate} \end{conjecture*} Significant progress has been made towards the verification of the above, which we detail in Section \ref{section:cellsB}. Most recently, C.~Bonnaf\'e has shown that if a certain family of statements conjectured by G.~Lusztig is assumed to hold, then the conjecture holds if $s\not \in \mathbb{N}$, and furthermore, if $s \in \mathbb{N}$, then Kazhdan-Lusztig left cells are unions of the sets described \cite{bonnafe:knuth}. We sharpen this result, and verify that the conjecture holds in the latter case as well. We concurrently describe the structure of Kazhdan-Lusztig left cells as $W_n$-modules. The canonical parameter set for irreducible $W_n$-modules consists of ordered pairs of partitions $(d,f)$ where the parts of $d$ and $f$ sum to $n$. As detailed in Section \ref{section:irreducible}, there is a natural identification of this parameter set with the set of partitions $\mathcal{P}_r(n)$ of a fixed rank $r$. Since $\mathcal{P}_r(n)$ corresponds exactly to the shapes of rank $r$ domino tableaux, the parametrization of Kazhdan-Lusztig left cells via standard tableaux of fixed rank in the above conjecture suggests a module structure for each cell for every choice of weight function. 
Namely, the irreducible constituents of the module carried by each cell should correspond to the shapes of the rank $r$ tableaux of its elements, with $r$ determined from the parameter $s$ as in the conjecture. We verify that this suggested module structure is indeed the one carried by each cell. Our approach is based on M.~Geck's characterization of left cells as constructible representations; that is, those representations which are obtained by successive truncated parabolic induction and tensoring with the sign representation, see \cite{geck:constructible}. In Section \ref{section:hecke}, we detail the general construction of Kazhdan-Lusztig cells in an unequal parameter Hecke algebra and extend a result of G.~Lusztig on the intersection of left and right cells to the unequal parameter setting. In Section \ref{section:typeB}, we detail the situation in type $B_n$ and the relevant combinatorics. Section \ref{section:constructible} examines constructible representations and provides a combinatorial description of truncated parabolic induction and tensoring with sign, mimicking the work of W.~M.~McGovern in the equal parameter case \cite{mcgovern:leftcells}. The final section contains the proof of the main results. \section{Unequal Parameter Hecke Algebras}\label{section:hecke} We briefly recount the definitions of unequal parameter Hecke algebras and the corresponding Kazhdan-Lusztig cells, following \cite{lusztig:unequal}. \subsection{Kazhdan-Lusztig Cells}\label{section:klcells} Consider a Coxeter system $(W,S)$ and let $\ell$ be the usual length function. A {\it weight function} $L:W \rightarrow \mathbb{Z}$ satisfies $L(xy)=L(x)+L(y)$ whenever $\ell(xy)=\ell(x)+\ell(y)$ and is uniquely determined by its values on $S$. We will consider those weight functions which take positive values on all $s \in S$. Let $\mathcal{H}$ be the generic Iwahori-Hecke algebra over $\mathcal{A}= \mathbb{Z}[v, v^{-1}]$ with parameters $\{v_s \, | \, s \in S\}$, where $v_x = v^{L(x)}$ for all $x \in W$. The algebra $\mathcal{H}$ is free over $\mathcal{A}$ and has a basis $\{T_x \, | \, x \in W\}$. Multiplication in $\mathcal{H}$ takes the form $$T_s T_x = \left\{ \begin{array}{ll} T_{sx} & \text{if $\ell(sx) > \ell(x)$, and}\\ T_{sx}+(v_s-v_s^{-1}) T_x & \text{if $\ell(sx) < \ell(x)$.} \end{array} \right. $$ As in \cite{lusztig:unequal}(5.2), it is possible to construct a Kazhdan-Lusztig basis of $\mathcal{H}$ which we denote by $\{C_x \; | \; x \in W\}$. In terms of it, multiplication has the form $$ C_x C_y = \sum_{z \in W} h_{xyz} C_z$$ for some $h_{xyz} \in \mathcal{A}$. Although we suppress it in the notation, all of these notions depend on the specific choice of weight function $L$. \begin{definition} Fix $(W,S)$ a Coxeter system with a weight function $L$. We will write $y \leq_{\mathcal{L}} x$ if there exists $s \in S$ such that $C_{y}$ appears with a non-zero coefficient in $C_s C_x$. By taking the transitive closure, this binary relation defines a preorder on $W$ which we also denote by $\leq_{\mathcal{L}}$. Let $y \leq_{\mathcal{R}} x$ iff $y^{-1} \leq_{\mathcal{L}} x^{-1}$ and define $\leq_{\mathcal{LR}}$ as the pre-order generated by $\leq_{\mathcal{L}}$ and $\leq_{\mathcal{R}}$. \end{definition} Each of the above preorders defines an equivalence relation; we denote these by $\sim_\mathcal{L},$ $\sim_\mathcal{R}$, and $\sim_\mathcal{LR}$, respectively. The resulting equivalence classes are called the {\it left, right, and two-sided Kazhdan-Lusztig cells} of $W$. 
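For orientation, we record the simplest instance of these definitions; this small example is ours, and it follows immediately from the multiplication rule above. For a simple reflection $s \in S$, the Kazhdan-Lusztig basis element is $C_s = T_s + v_s^{-1}T_1$, and $$C_s C_s = T_1 + (v_s-v_s^{-1})T_s + 2v_s^{-1}T_s + v_s^{-2}T_1 = (v_s + v_s^{-1})\,C_s,$$ so that $h_{sss} = v_s + v_s^{-1} = v^{L(s)}+v^{-L(s)}$. In particular, the structure constants $h_{xyz}$, and with them the cells just defined, genuinely depend on the weight function $L$.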
As described in \cite{lusztig:unequal}(8.3), Kazhdan-Lusztig cells carry representations of $\mathcal{H}$. If $\mathfrak{C}$ is a Kazhdan-Lusztig left cell and $x \in \mathfrak{C}$, then define $$[\mathfrak{C}]_\mathcal{A} = \bigoplus_{y \leq_\mathcal{L} x} \mathcal{A} C_{y} \Big/ \bigoplus_{y \leq_\mathcal{L} x, y \notin \mathfrak{C} } \mathcal{A} C_{y}.$$ This is a quotient of two left ideals in $\mathcal{H}$ and consequently is itself a left $\mathcal{H}$-module; it does not depend on the specific choice of $x \in \mathfrak{C}$, is free over $\mathcal{A}$, and has a basis $\{e_x \; | \; x \in \mathfrak{C}\}$ indexed by elements of $\mathfrak{C}$ with $e_x$ the image of $C_x$ in the above quotient. The action of $\mathcal{H}$ on $[\mathfrak{C}]_\mathcal{A}$ is determined by $$C_x e_y = \sum_{z \in \mathfrak{C}} h_{xyz} e_z$$ for $x \in W$ and $y \in \mathfrak{C}$. A Kazhdan-Lusztig left cell gives rise to a $W$-module $[\mathfrak{C}]$ by specializing $[\mathfrak{C}]_\mathcal{A}$ at $v=1$. The same construction can be used to define module structures on the right and two-sided cells of $W$. \subsection{A Family of Properties} The main results of this paper rely on a family of conjectures formulated by G.~Lusztig in \cite[\S 14]{lusztig:unequal}. In the equal parameter case, that is when $L$ is a multiple of the length function $\ell$, a number of results about Kazhdan-Lusztig cells depend on positivity results derived by geometric methods of intersection cohomology. Unfortunately, this positivity does not hold for unequal parameter Hecke algebras; for examples see \cite[\S 6]{lusztig:leftcells} and \cite[2.7]{geck:leftcells}. As a substitute, G.~Lusztig detailed a list of properties which both axiomatize known equal-parameter results and outline methods of approaching non-positivity in general. In order to list Lusztig's conjectures, we must first define two integer-valued functions on $W$. For any $z \in W$, let ${\boldsymbol{a}}(z)$ be the smallest non-negative integer so that $h_{xyz} \in v^{{\boldsymbol{a}}(z)} \mathbb{Z}[v^{-1}]$ for every $x$ and $y$ in $W$ and write $\gamma_{xyz^{-1}}$ for the constant term of $v^{-{\boldsymbol{a}}(z)} h_{xyz}$. If $p_{xy}$ is defined by $C_y = \sum_{x\in W} p_{xy} T_x$, then \cite{lusztig:unequal}(5.4) shows that $p_{1z}$ is non-zero. We write $$p_{1z} = n_z v^{-\Delta(z)} + \text{ terms of smaller degree in $v$}$$ thereby defining a constant $n_z$ and integer $\Delta(z)$ for every $z\in W$. Finally, let $$\mathcal{D} = \{z \in W \mid {\boldsymbol{a}}(z)=\Delta(z)\}.$$ Lusztig has conjectured that the following statements are true in the general setting of unequal parameter Hecke algebras: \begin{itemize} \item[\bf P1.] For any $z\in W$ we have ${\boldsymbol{a}}(z)\leq \Delta(z)$. \item[\bf P2.] If $d \in \mathcal{D}$ and $x,y\in W$ satisfy $\gamma_{x,y,d}\neq 0$, then $x=y^{-1}$. \item[\bf P3.] If $y\in W,$ there exists a unique $d\in \mathcal{D}$ such that $\gamma_{y^{-1},y,d}\neq 0$. \item[\bf P4.] If $z'\leq_{\mathcal{L}\mathcal{R}} z$ then ${\boldsymbol{a}}(z')\geq {\boldsymbol{a}}(z)$. Hence, if $z'\sim_{\mathcal{L}\mathcal{R}} z$, then ${\boldsymbol{a}}(z)={\boldsymbol{a}}(z')$. \item[\bf P5.] If $d\in \mathcal{D}$, $y\in W$, $\gamma_{y^{-1},y,d}\neq 0$, then $\gamma_{y^{-1},y,d}=n_d=\pm 1$. \item[\bf P6.] If $d\in \mathcal{D}$, then $d^2=1$. \item[\bf P7.] For any $x,y,z\in W$, we have $\gamma_{x,y,z}=\gamma_{y,z,x}$. \item[\bf P8.] Let $x,y,z\in W$ be such that $\gamma_{x,y,z}\neq 0$. 
Then $x\sim_{\mathcal{L}} y^{-1}$, $y \sim_{\mathcal{L}} z^{-1}$, and $z\sim_{\mathcal{L}} x^{-1}$. \item[\bf P9.] If $z'\leq_{\mathcal{L}} z$ and ${\boldsymbol{a}}(z')={\boldsymbol{a}}(z)$, then $z'\sim_{\mathcal{L}}z$. \item[\bf P10.] If $z'\leq_{\mathcal{R}} z$ and ${\boldsymbol{a}}(z')={\boldsymbol{a}}(z)$, then $z'\sim_{\mathcal{R}}z$. \item[\bf P11.] If $z'\leq_{\mathcal{L}\mathcal{R}} z$ and ${\boldsymbol{a}}(z')={\boldsymbol{a}}(z)$, then $z'\sim_{\mathcal{L}\mathcal{R}}z$. \item[\bf P12.] Let $I\subseteq S$ and $W_I$ be the parabolic subgroup defined by $I$. If $y\in W_I$, then ${\boldsymbol{a}}(y)$ computed in terms of $W_I$ is equal to ${\boldsymbol{a}}(y)$ computed in terms of $W$. \item[\bf P13.] Any left cell $\mathfrak{C}$ of $W$ contains a unique element $d\in \mathcal{D}$. We have $\gamma_{x^{-1},x,d}\neq 0$ for all $x\in \mathfrak{C}$. \item[\bf P14.] For any $z\in W$, we have $z \sim_{\mathcal{L}\mathcal{R}} z^{-1}$. \item[\bf P15.] If $v'$ is an indeterminate and $h'_{xyz}$ is obtained from $h_{xyz}$ via the substitution $v \mapsto v'$, then whenever ${\boldsymbol{a}}(w)={\boldsymbol{a}}(y)$, we have $$\sum_{y'} h'_{wx'y'}h_{xy'y}=\sum_{y'} h_{xwy'}h'_{y'x'y}.$$ \end{itemize} The statements {\bf P1-P15} are known to hold for finite Weyl groups in the equal parameter case by work of Kazhdan-Lusztig \cite{kazhdan:lusztig:schubert} and Springer \cite{springer:intersection}. If the Coxeter system is of type $I_2(m)$, $H_3$, or $H_4$, they follow from work of Alvis \cite{alvis:left} and DuCloux \cite{ducloux:positivity}. In the unequal parameter case, {\bf P1-P15} have been verified by Geck in types $I_2(m)$ and $F_4$ \cite{geck:remarks}, and in the so-called asymptotic case of type $B_n$ by Geck-Iancu \cite{geck:iancu} and Geck \cite{geck:relative}, \cite{geck:remarks}. Although the geometric approach from which the above follow in the equal parameter case is not available in the general unequal parameter case, it seems that it may not be required. At least in type $A$, Geck has shown that {\bf P1-P15} hold via elementary, purely algebraic methods \cite{geck:murphy}. \subsection{The Asymptotic Ring $J$} \label{section:asymptotic} The goal of this section is to verify Lemma 12.15 of \cite{lusztig:characters} in our more general setting. We begin with a brief discussion of Lusztig's ring $J$, which can be viewed as an asymptotic version of $\mathcal{H}.$ Although originally defined in the equal parameter case, its construction also makes sense in the setting of unequal parameter Hecke algebras under the assumption that the conjectures {\bf P1-P15} hold. Using the methods developed in \cite{lusztig:unequal}, $J$ provides us with a way of studying the left-cell representations of $\mathcal{H}$. Recall the integers $\gamma_{xyz}$ defined for all $x,$ $y,$ and $z$ in $W$ as the constant terms of $v^{-{\boldsymbol{a}}(z^{-1})} h_{xyz^{-1}}.$ Then $J$ is the free abelian group with basis $\{t_x \; | \; x \in W\}$. To endow it with a ring structure, define a bilinear product on $J$ by $$t_x \cdot t_y = \sum_{z \in W} \gamma_{xyz} t_{z^{-1}}$$ for $x$ and $y$ in $W$. Conjectures {\bf P1-P15} allow us to state the following results. 
\begin{theorem}[\cite{lusztig:unequal}] Assuming conjectures {\bf P1-P15}, the following hold: \begin{enumerate} \item $J$ is an associative ring with identity element $1_J = \sum_{d \in \mathcal{D}} n_d t_d.$ \item The group algebra $\mathbb{C}[W]$ is isomorphic as a $\mathbb{C}$-algebra to $J_\mathbb{C} = \mathbb{C} \otimes_\mathbb{Z} J.$ \end{enumerate} \end{theorem} Following \cite[\S 20.2]{lusztig:unequal}, we will write $E_\spadesuit$ for the $J_\mathbb{C}$-module corresponding to a $\mathbb{C}[W]$-module $E$. It shares its underlying space with $E$, while the action of an element of $J_\mathbb{C}$ is defined by the action of its image under the isomorphism with $\mathbb{C}[W]$. Consider a left cell $\mathfrak{C}$ of $W$ and define $J^\mathfrak{C}_\mathbb{C}$ to be $\oplus_{x \in \mathfrak{C}} \mathbb{C} t_x.$ By {\bf P8}, this is a left ideal in $J_\mathbb{C}.$ Furthermore, \begin{theorem}[\cite{lusztig:unequal}]\label{theorem:lusztig2} Assuming that the conjectures {\bf P1-P15} hold, the $J_\mathbb{C}$-modules $J^\mathfrak{C}_\mathbb{C}$ and $[\mathfrak{C}]_\spadesuit$ are isomorphic. \end{theorem} We are ready to address Lemma 12.15 of \cite{lusztig:characters}. Its original proof relies on a characterization of left cells in terms of the dual bases $\{C_x\}$ and $\{D_x\}$ stated in \cite{lusztig:characters}(5.1.14). This result in turn relies on positivity properties which do not hold in the unequal parameter case, and therefore a new approach to the lemma is required. We owe the idea of using $J$ in the present proof to M.~Geck. \begin{lemma}\label{lemma:12.15} Assume that conjectures {\bf P1-P15} hold. If $\mathfrak{C}$ and $\mathfrak{C'}$ are two left cells in $W$ with respect to a weight function $L$, then $$\dim \textup{Hom}_W([\mathfrak{C}],[\mathfrak{C'}]) = | \mathfrak{C} \cap \mathfrak{C'}^{-1} |.$$ \end{lemma} \begin{proof} Let $x \in \mathfrak{C}^{-1} \cap \mathfrak{C'}$ and define a map $\phi_x$ on $J^\mathfrak{C}_\mathbb{C}$ via $\phi_x(t_y) = t_y t_x.$ With $x$ and $y$ as above, we can write $$t_y t_x = \sum_{z \in W} \gamma_{yxz} t_{z^{-1}}.$$ For $\gamma_{yxz} \neq 0$, {\bf P8} implies $x \sim_\mathcal{L} z^{-1}.$ Since $x \in \mathfrak{C}'$, this forces $t_y t_x$ to lie in $J^\mathfrak{C'}_\mathbb{C},$ and we have in fact defined a map $\phi_x: J^\mathfrak{C}_\mathbb{C} \rightarrow J^\mathfrak{C'}_\mathbb{C}.$ We will show that as $x$ runs over the set $\mathfrak{C}^{-1} \cap \mathfrak{C'}$, the maps $\phi_x$ are linearly independent. So assume that for some constants $a_x$ we have $$\sum_{x \in \mathfrak{C}^{-1} \cap \mathfrak{C'}} a_x \phi_x = 0 \text{ and, consequently, } \sum_{x \in \mathfrak{C}^{-1} \cap \mathfrak{C'}} a_x t_y t_x = 0$$ for all $y \in \mathfrak{C}.$ In particular, if $d$ is the unique element in $\mathcal{D} \cap \mathfrak{C}$ guaranteed by {\bf P13}, then we also have $$\sum_{x \in \mathfrak{C}^{-1} \cap \mathfrak{C'}} a_x t_d t_x = \sum_{x \in \mathfrak{C}^{-1} \cap \mathfrak{C'}} \pm a_x t_x =0,$$ where the first equality follows from {\bf P2, P5, P7,} and {\bf P13}. But this means that $a_x=0$ for all relevant $x$, or in other words, that the $\phi_x$ are linearly independent. 
We can therefore conclude that $ \textup{dim Hom}_{J_\mathbb{C}}(J^\mathfrak{C}_\mathbb{C}, J^\mathfrak{C'}_\mathbb{C}) \geq | \mathfrak{C}^{-1} \cap \mathfrak{C'} |.$ Since this inequality is true for all pairs of left cells $\mathfrak{C}$ and $\mathfrak{C'}$ in $W$, we have $$ \sum_{\mathfrak{C},\mathfrak{C'}} \textup{ dim Hom}_{J_\mathbb{C}} (J^\mathfrak{C}_\mathbb{C}, J^\mathfrak{C'}_\mathbb{C}) \geq \sum_{\mathfrak{C},\mathfrak{C'}} | \mathfrak{C}^{-1} \cap \mathfrak{C'} |. $$ The right side of this inequality is just the order of $W$ since each of its elements lies in a unique left and a unique right cell. On the other hand, by the correspondence resulting from Theorem \ref{theorem:lusztig2}, the left side is $$ \textup{ dim Hom}_{J_\mathbb{C}} \Big( \sum_\mathfrak{C} J^\mathfrak{C}_\mathbb{C}, \sum_\mathfrak{C'}J^\mathfrak{C'}_\mathbb{C}\Big) = \textup{ dim Hom}_W (\textup{Reg}_W,\textup{Reg}_W) = |W|. $$ Hence the original inequality must in fact be an equality and the lemma follows. \end{proof} We immediately obtain the following corollary, whose proof is identical to that of \cite{lusztig:characters}(12.17). \begin{corollary}\label{corollary:involutions} Assume that conjectures {\bf P1-P15} hold and that the left cell modules of $W$ with respect to a weight function $L$ are multiplicity-free. Then $\mathfrak{C} \cap \mathfrak{C}^{-1}$ is the set of involutions in $\mathfrak{C}$. \end{corollary} \section{Type $B_n$}\label{section:typeB} The goal of this section is to detail the combinatorics of arbitrary rank standard domino tableaux necessary to describe Kazhdan-Lusztig cells in type $B_n$. \subsection{Domino Tableaux} Consider a partition $p$ of a natural number $n$. We will view it as a Young diagram $Y_p$, a left-justified array of squares whose row lengths decrease weakly. The square in row $i$ and column $j$ of $Y_p$ will be denoted $s_{ij}$ and a pair of squares in $Y_p$ of the form $\{s_{ij},s_{i+1,j}\}$ or $\{s_{ij},s_{i,j+1}\}$ will be called a {\it domino}. A domino is {\it removable} from $Y_p$ if deleting its underlying squares leaves either another Young diagram containing the square $s_{11}$ or the empty set. Successive deletions of removable dominos from a Young diagram $Y_p$ must eventually terminate in a staircase partition containing $\binom{r+1}{2}$ squares for some non-negative integer $r$. This number is determined entirely by the underlying partition $p$ and does not depend on the sequence of deletions of removable dominos. We will write $p \in \mathcal{P}_r$ and say that $p$ is a {\it partition of rank $r$}. The {\it core of $p$} is its underlying staircase partition. \begin{example} The partition $p=[4,3^2,1]$ lies in the set $\mathcal{P}_2$. Below are its Young diagram $Y_p$ and a domino tiling resulting from a sequence of deletions of removable dominos exhibiting the underlying staircase partition. $$ \begin{small} \begin{tableau} :.{}.{}.{}.{}\\ :.{}.{}.{}\\ :.{}.{}.{}\\ :.{}\\ \end{tableau} \end{small} \hspace{1in} \begin{small} \begin{tableau} :.{}.{}>{}\\ :.{}>{}\\ :^{}>{}\\ :;\\ \end{tableau} \end{small} $$ \end{example} Consider $p \in \mathcal{P}_r$. It is a partition of the integer $2n+\binom{r+1}{2}$ for some $n$. A {\it standard domino tableau of rank $r$ and shape $p$} is a tiling of the non-core squares of $Y_p$ by dominos, each of which is labeled by a unique integer from $\{1, \ldots , n\}$ in such a way that the labels increase along its rows and columns. 
We will write $SDT_r(p)$ for the set of standard domino tableaux of rank $r$ of shape $p$ and $SDT_r(n)$ for the set of standard domino tableaux of rank $r$ which contain exactly $n$ dominos. For $T \in SDT_r(n)$, we will say that the square $s_{ij}$ is {\it variable} if $i+j \equiv r \mod 2$ and {\it fixed} otherwise. As discussed in \cite{garfinkle1} and \cite{pietraho:rscore}, a choice of fixed squares on a tableau $T$ allows us to define two notions, a partition of its dominos into cycles and the operation of moving through a cycle. The moving through map, when applied to a cycle $c$ in a tableau $T$, yields another standard domino tableau $MT(T,c)$ which differs from $T$ only in the labels of the variable squares of $c$. If $c$ contains $D(l,T)$, the domino in $T$ with label $l$, then $MT(T,c)$ is in some sense the minimally-affected standard domino tableau in which the label of the variable square in $D(l,T)$ is changed. We refer the reader to \cite{pietraho:rscore} for the detailed definitions. If the shape of $MT(T,c)$ is the same as the shape of $T$, we will say that $c$ is a {\it closed cycle}. Otherwise, one square will be removed from $T$ (or added to its core) and one will be added. In this case, we will say that $c$ is {\it open} and denote the aforementioned squares as $s_b(c)$ and $s_f(c),$ respectively. Finally, if $s_b(c)$ is adjacent to the core of $T$, we will say that $c$ is a {\it core open cycle}. We will write $OC(T)$ for the set of all open cycles of $T$ and $OC^*(T)$ for the subset of non-core open cycles. \subsection{Generalized Robinson-Schensted Algorithms} The Weyl group $W_n$ of type $B_n$ consists of the set of signed permutations on $n$ letters, which we write in one-line notation as $w=(w_1 \, w_2 \, \ldots w_n)$. For each non-negative integer $r$, there is an injective map $$G_r: W_n \rightarrow SDT_r(n) \times SDT_r(n)$$ which is onto the subset of same-shape pairs of domino tableaux, see \cite{garfinkle1} and \cite{vanleeuwen:rank}. We will write $G_r(x)=(S_r(x),T_r(x))$ for the image of a permutation $x$ and refer to the two components as the {\it left} and {\it right tableaux of $x$}. \begin{definition}\label{definition:combinatorialcells} Consider $x, y \in W_n$ and fix a non-negative integer $r$. We will say that \begin{enumerate} \item $x \approx^\iota_\mathcal{L} y$ if $T_r(y) = T_r(x)$, and \item $x \approx_\mathcal{L} y$ if $T_r(y)= MT(T_r(x),C)$ for some $C\subset OC^*(T_r(x)).$ \end{enumerate} \end{definition} We will call the equivalence classes defined by $\approx^\iota_\mathcal{L}$ {\it irreducible combinatorial left cells of rank $r$} in $W$, and those defined by $\approx_\mathcal{L}$ its {\it reducible combinatorial left cells of rank $r$.} In the irreducible case, we will say that the combinatorial left cell is {\it represented by the tableau} $T_r(x)$. In the reducible case, we will say that the combinatorial left cell is {\it represented by the set} $\{MT(T_r(x),C) \; | \; C\subset OC^*(T_r(x))\}$ of standard domino tableaux. 
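Although we will not need it in the proofs, it may help the reader to note that the rank of a partition can be computed directly, without performing domino deletions: the staircase reached above is the $2$-core of $p$, which can be read off from beta-numbers, see \cite{james:kerber}. The following short Python sketch (entirely illustrative, with function names of our own choosing) carries this out:
\begin{verbatim}
def rank(p):
    """Rank of the partition p (a weakly decreasing list of positive parts).

    The staircase left after deleting all removable dominos is the 2-core
    of p; on the abacus, it is obtained by sliding the odd beta-numbers
    down to 1,3,5,... and the even ones down to 0,2,4,...
    """
    k = len(p)
    betas = [p[i] + k - 1 - i for i in range(k)]
    odds = sum(1 for b in betas if b % 2 == 1)
    evens = k - odds
    core_betas = sorted([2 * j + 1 for j in range(odds)]
                        + [2 * j for j in range(evens)], reverse=True)
    core = [core_betas[i] - (k - 1 - i) for i in range(k)]
    core = [c for c in core if c > 0]   # the 2-core, a staircase (r, r-1, ..., 1)
    return len(core)

assert rank([4, 3, 3, 1]) == 2      # the partition [4,3^2,1] of the example above
assert rank([4, 3, 2, 2, 2]) == 2
\end{verbatim}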
\subsection{Cells in type $B_n$} \label{section:cellsB} Consider the generators of $W_n$ as in the following diagram: \begin{center} \begin{picture}(300,30) \put( 50 ,10){\circle*{5}} \put( 50, 8){\line(1,0){40}} \put( 50, 12){\line(1,0){40}} \put( 48, 20){$t$} \put( 90 ,10){\circle*{5}} \put(130 ,10){\circle*{5}} \put(230 ,10){\circle*{5}} \put( 90, 10){\line(1,0){40}} \put(130, 10){\line(1,0){25}} \put(170, 10){\circle*{2}} \put(180, 10){\circle*{2}} \put(190, 10){\circle*{2}} \put(205, 10){\line(1,0){25}} \put( 88 ,20){$s_1$} \put(128, 20){$s_2$} \put(222, 20){$s_{n-1}$} \end{picture} \end{center} Define the weight function $L$ by $L(t)=b$ and $L(s_i)=a$ for all $i$ and set $s=\frac{b}{a}$. The following is a conjecture of Bonnaf\'e, Geck, Iancu, and Lam, and appears as Conjectures A, B, and D in \cite{bgil}: \begin{conjecture}\label{conjecture:bgil} Consider a Weyl group of type $B_n$ with a weight function $L$ and parameter $s$ defined as above. \begin{enumerate} \item When $s \not\in \mathbb{N}$, the Kazhdan-Lusztig left cells coincide with the irreducible combinatorial left cells of rank $\lfloor s \rfloor$. \item When $s \in \mathbb{N}$, the Kazhdan-Lusztig left cells coincide with the reducible combinatorial left cells of rank $s-1$. \end{enumerate} \end{conjecture} This conjecture is well-known to be true for $s=1$ by work of Garfinkle \cite{garfinkle3}, and has been verified when $s>n-1$ by Bonnaf\'e and Iancu \cite{bonnafe:iancu}. It has also been shown to hold for all values of $s$ when $n \leq 6$, see \cite{bgil}. Furthermore, assuming {\bf P1-P15}, C.~Bonnaf\'e has shown the conjecture to be true in the irreducible case, and that in the reducible case, Kazhdan-Lusztig left cells are unions of the reducible combinatorial left cells \cite{bonnafe:knuth}. \section{Constructible Representations in Type $B_n$}\label{section:constructible} M.~Geck has shown that if Lusztig's conjectures {\bf P1-P15} hold, then the $W$-modules carried by the Kazhdan-Lusztig left cells of an unequal parameter Hecke algebra are precisely the constructible ones \cite{geck:constructible}. Defined in the unequal parameter setting by Lusztig in \cite{lusztig:unequal}(20.15), constructible modules arise via truncated induction and tensoring with the sign representation. The goal of this section is to give a combinatorial description of the effects of these two operations on $W$-modules in type $B_n.$ Our approach is based on the equal-parameter results of \cite{mcgovern:leftcells}. \subsection{Irreducible $W_n$-modules}\label{section:irreducible} Let us restrict our attention to type $B_n$, write $W_n$ for the corresponding Weyl group, and define constants $a$, $b$, and $s,$ as in Section \ref{section:cellsB}. We begin by recalling the standard parametrization of irreducible $W_n$-modules. Let $\mathcal{P}^2$ be the set of ordered pairs of partitions and $\mathcal{P}^2(n)$ be the subset of $\mathcal{P}^2$ where the combined sum of the parts of both partitions is $n$. \begin{theorem} \label{theorem:irrW} The set of irreducible representations of $W_n$ is parametrized by $\mathcal{P}^2(n)$. If we write $[(d,f)]$ for the representation corresponding to $(d,f) \in \mathcal{P}^2(n)$, then $$[(f^t,d^t)] \cong [(d,f)] \otimes \textup{sgn},$$ where $p^t$ denotes the transpose of the partition $p$. \end{theorem} In this form, the connection between irreducible $W_n$-modules and the description of left cells in Conjecture \ref{conjecture:bgil} is not clear. 
To remedy this, we would like to restate Theorem \ref{theorem:irrW} in terms of partitions of arbitrary rank which arise as shapes of the standard domino tableaux in this conjecture. Thus let $r= \lfloor s \rfloor$ if $s \not\in \mathbb{N}$, $r= s-1$ otherwise, and write $\epsilon=s - \lfloor s \rfloor$. As an intermediate step toward this goal, we define the notion of a {\it symbol of defect $t$ and residue $\epsilon$} for a non-negative integer $t$ and $0\leq \epsilon < 1$. It is an array of non-negative numbers of the form $$ \Lambda= \left( \begin{array}{cccccc} \lambda_1 + \epsilon & \lambda_2 + \epsilon & &\ldots & & \lambda_{N+t} + \epsilon \\ {} & \mu_1 & \mu_2 & \ldots & \mu_N & {} \end{array} \right) $$ where the (possibly empty) sequences $\{\lambda_i\}$ and $\{\mu_i\}$ consist of integers and are strictly increasing. If we define a related symbol by letting $$ \Lambda'= \left( \begin{array}{ccccccc} \epsilon & \lambda_1+1+ \epsilon & \lambda_2+2+ \epsilon &\ldots & & \lambda_{N+t} +N+t+ \epsilon\\ {} & 0 & \mu_1+1 & \ldots & \mu_N+N & {} \end{array} \right) $$ then the binary relation defined by setting $\Lambda \sim \Lambda'$ generates an equivalence relation. We will write $Sym^\epsilon_t$ for the set of its equivalence classes. We describe two maps between symbols and partitions. A partition can be used to construct a symbol in the following way. If $p = (p_1,p_2, \ldots, p_k)$, form $p^\sharp=(p_1, p_2, \ldots, p_{k'})$ by adding an additional zero term to $p$ if the rank of $p$ has the same parity as $k$. Dividing the set $\{p_i+k'-i\}_{i=1}^{k'}$ into its odd and even parts yields two sequences $$\{2\mu_i+1\}_{i=1}^N\text{ and }\{2\lambda_i\}_{i=1}^{N+t}$$ for some non-negative integer $t$. A symbol $\Lambda_p$ of defect $t$ and residue $\epsilon$ corresponding to $p$ can now be defined by arranging the integers $\lambda_i$ and $\mu_i$ into an array as above. Given a symbol of defect $t$ and residue $\epsilon$, it is also possible to construct an ordered pair of partitions. With $\Lambda$ as above, let $$d_\Lambda = \{\lambda_i - i +1\}_{i=1}^{N+t}\text{ and }f_\Lambda =\{\mu_i - i +1\}_{i=1}^N.$$ Both constructions are well-behaved with respect to the equivalence on symbols defined above. The next theorem follows from \cite{james:kerber}(2.7). \begin{theorem} The maps $p \mapsto \Lambda_p$ and $\Lambda \mapsto (d_{\Lambda}, f_{\Lambda})$ define bijections $$\mathcal{P}_r \rightarrow Sym^\epsilon_{r+1} \rightarrow \mathcal{P}^2$$ for all values of $r$ and $\epsilon$. Consequently, their composition yields a bijection between $\mathcal{P}_r(n)$ and $\mathcal{P}^2(n)$. \label{theorem:bijections} \end{theorem} This result allows us to custom-tailor a parametrization of irreducible $W_n$-modules to each value of the parameter $s$ by defining $r$ and $\epsilon$ as above. Together with Lusztig's Lemma 22.18 of \cite{lusztig:unequal}, the present theorem implies the following alternate parametrization of the representations of $W_n$ in terms of symbols. A parametrization in terms of partitions of rank $r$ follows. 
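Before stating these parametrizations, we remark that both maps in Theorem \ref{theorem:bijections} are completely algorithmic. The following short Python sketch (purely illustrative; the function names and the normalization of the output are our own) implements them and reproduces the data of the example appearing after the corollaries below:
\begin{verbatim}
def symbol(p, r, eps):
    """Symbol of defect r+1 and residue eps attached to p, where r is
    the rank of p (illustrative version of the construction above)."""
    k = len(p)
    q = list(p) + ([0] if (r - k) % 2 == 0 else [])  # p-sharp
    kp = len(q)
    beta = [q[i] + kp - 1 - i for i in range(kp)]
    lam = sorted(b // 2 for b in beta if b % 2 == 0)        # even parts: 2*lambda_i
    mu = sorted((b - 1) // 2 for b in beta if b % 2 == 1)   # odd parts: 2*mu_i + 1
    return [l + eps for l in lam], mu                       # top and bottom rows

def pair_of_partitions(lam, mu):
    """The pair (d, f) attached to a symbol with integer rows lam, mu."""
    d = [lam[i] - i for i in range(len(lam))]   # lambda_i - i + 1, 0-indexed
    f = [mu[i] - i for i in range(len(mu))]
    return ([x for x in reversed(d) if x > 0], [x for x in reversed(f) if x > 0])

top, bottom = symbol([4, 3, 2, 2], r=2, eps=0.5)
assert (top, bottom) == ([0.5, 2.5, 3.5, 4.5], [1])
assert pair_of_partitions([0, 2, 3, 4], [1]) == ([1, 1, 1], [1])  # ((1^3),(1))
\end{verbatim}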
\begin{corollary} If we fix values of the rank $r$ and residue $\epsilon$, then the set of irreducible representations of $W_n$ is parametrized by the set of equivalence classes of symbols $\{ \Lambda \in Sym^\epsilon_{r+1} \; | \; \text{ parts of $d_\Lambda$ and $f_\Lambda$ sum to $n$}\}.$ Writing $[\Lambda]$ for the representation corresponding to $\Lambda$, we have $$[\bar{\Lambda}]=[\Lambda] \otimes \textup{sgn}$$ where the symbol $\bar{\Lambda}$ is defined from $\Lambda$ by the following procedure. Write $\Lambda$ as above and let $\tau$ be the integer part of its largest entry. Then the integer parts of the top and bottom rows of $\bar{\Lambda}$ consist of the complements of $\{\tau-\mu_i\}_i$ and $\{\tau-\lambda_i\}_i$ in $[0,\tau] \cap \mathbb{Z}$, respectively. \end{corollary} \begin{corollary} \label{corollary:tensoringwithsign} If we fix a non-negative integer $r$, then the set of irreducible representations of $W_n$ is parametrized by $\mathcal{P}_r(n)$. Writing $[p]$ for the representation corresponding to $p \in \mathcal{P}_r(n)$, we have $$[p^t] \cong [p] \otimes \textup{sgn},$$ where $p^t$ is the transpose of the partition $p$. \end{corollary} \begin{example} Let $s=2 \frac{1}{2}$, so that $r=2$ and $\epsilon = \frac{1}{2}$, and consider the irreducible representation $[((1^3), (1))]$ of $W_4$. Then according to the above parametrizations, $[((1^3), (1))]=[(4,3,2^2)]=[\Lambda_{(4,3,2^2)}]$ where $$ \Lambda_{(4,3,2^2)}= \left( \begin{array}{cccccc} \frac{1}{2} & 2 \frac{1}{2} & 3 \frac{1}{2} & 4 \frac{1}{2}\\ {} & {} & \hspace{-.3in}1 & {} \end{array} \right) $$ is a symbol of defect 3 and residue $\frac{1}{2}$. Note that $((1^3), (1)) \in \mathcal{P}^2(4)$, $(4,3,2^2) \in \mathcal{P}_2(4)$, and $\Lambda_{(4,3,2^2)}$ is a representative of a class in $Sym^{\epsilon}_3$ for $\epsilon = 1/2$. Furthermore, $[((1^3), (1))] \otimes \textup{sgn} = [((1),(3))] = [(4,3,2^2)] \otimes \textup{sgn} = [(4^2,2,1)] = [\Lambda_{(4,3,2^2)}]\otimes \textup{sgn} = [\Lambda_{(4^2,2,1)}],$ where $$ \Lambda_{(4^2,2,1)}= \left( \begin{array}{cccccc} \frac{1}{2} & 1 \frac{1}{2} & 2 \frac{1}{2} & 4 \frac{1}{2}\\ {} & {} & \hspace{-.3in}3 & {} \end{array} \right). $$ \end{example} We will need the following lemma, which holds for finite $W$ whenever {\bf P1-P15} hold. It is a combination of \cite{lusztig:unequal}(11.7) and \cite{lusztig:unequal}(21.5). \begin{lemma} \label{lemma:longword} Consider a Kazhdan-Lusztig left cell $\mathfrak{C} \subset W$ and let $w_0$ be the longest element of $W$. Then $\mathfrak{C}w_0$ is also a left cell in $W$, and $[\mathfrak{C}w_0] \cong [\mathfrak{C}] \otimes \textup{sgn}$ as $W$-modules. \end{lemma} \subsection{Truncated Induction} We now turn to a combinatorial description of truncated induction in terms of the above parameter sets. If $\pi$ is a representation of $W_I$, a parabolic subgroup of $W_n$, Lusztig defined a representation $J_{W_I}^W (\pi)$ of $W=W_n$, \cite{lusztig:unequal}(20.15). Its precise definition depends on the parameters of the underlying Hecke algebra, so it is natural to expect that this is manifested in the combinatorics studied above. Following \cite[\S 2]{mcgovern:leftcells} and \cite{lusztig:classofirreducible}, we note that due to the transitivity of truncated induction and the fact that the situation in type $A$ is well-understood, we only need to understand how truncated induction works when $W_I$ is a maximal parabolic subgroup whose type $A$ component acts by the sign representation on $\pi$. 
Henceforth, let $W_I$ be a maximal parabolic subgroup in $W_n$ with factors $W'$ of type $B_m$ and $S_l$ of type $A_{l-1}$, where $m+l = n$; furthermore, write $\textup{sgn}_l$ for the sign representation of $S_l$. Truncated induction behaves well with respect to cell structure. In fact, the following lemma holds for general $W$. \begin{lemma}[\cite{geck:leftcells}]\label{lemma:paraboliccells} Let $\mathfrak{C}'$ be a left cell of $W_I$. Then we have $$J_{W_I}^W([\mathfrak{C}']) \cong [\mathfrak{C}],$$ where $\mathfrak{C}$ is the left cell of $W$ such that $\mathfrak{C}'\subset \mathfrak{C}.$ \end{lemma} We first provide a description of the situation in type $B_n$ in terms of symbols. Consider a symbol $\Lambda'$ of defect $r+1$ and residue $\epsilon$; via the equivalence on symbols, we can assume that it has at least $l$ entries. If the set of $l$ largest entries of $\Lambda'$ is uniquely defined, then let $\Lambda$ be the symbol obtained by increasing each of the entries in this set by one. If it is not, then let $\Lambda^\textup{I} $ and $\Lambda^{\textup{II}}$ be the two symbols obtained by increasing the largest $l-1$ entries of $\Lambda'$ and then each of the two $l$th largest entries in turn by one. \begin{proposition}[\cite{lusztig:unequal}(22.17)] The representation $J_{W_I}^W ( [\Lambda'] \otimes \textup{sgn}_l )$ is $[\Lambda]$ if the set of $l$ largest entries of $\Lambda'$ is uniquely defined, and $[\Lambda^ \textup{I}] + [\Lambda^{\textup{II}}]$ if it is not. The former is always the case if $\Lambda'$ is a symbol of residue $\epsilon \neq 0$. \end{proposition} It is not difficult to reformulate this result in terms of partitions of rank $r$. Consider a partition $p=(p_1,p_2, \ldots p_k) \in \mathcal{P}_r$. We can assume that $k \geq l$ by adding zero parts to $p$ as necessary. Let $k'$ be the number of parts of $p^\sharp$. Define \begin{align*} p^\textup{I} & = (p_1+2, \ldots , p_l+2, p_{l+1}, \ldots, p_k), \text{ and} \\ p^\textup{II} & = (p_1+2, \ldots , p_{l-1}+2 , p_l+1, p_{l+1}+1, p_{l+2}, \ldots, p_k). \end{align*} Note that both $p^\textup{I}$ and $p^\textup{II}$ are again partitions of rank $r$. \begin{corollary} \label{corollary:truncated} The representation $J_{W_I}^W ([p] \otimes \textup{sgn}_l)$ produced by truncated induction is $[p^\textup{I}]$ whenever $p_l > p_{l+1}$, $p_l+r-l$ is odd, or $\epsilon \neq 0$. Otherwise, $$J_{W_I}^W ( [p] \otimes \textup{sgn}_l) = [p^\textup{I}]+[p^\textup{II}].$$ \end{corollary} \begin{proof}Using the results of the preceding proposition, we have to check under what conditions the set of $l$ largest entries in a symbol $\Lambda'$ is uniquely defined and then determine the preimages of the symbols $\Lambda^\textup{I}$ and $\Lambda^{\textup{II}}$ under the map of Theorem \ref{theorem:bijections}. When $\epsilon \neq 0$, the $l$ largest entries in $\Lambda'$ are uniquely determined since all of its entries must be distinct. When $\epsilon =0$, there will be an ambiguity in determining the $l$ largest entries iff $p_l+k'-l$ and $p_{l+1}+k'-l-1$ are consecutive integers with the first one being odd. Together with the observation that $k'$ is always of the opposite parity from $r$, this gives us the conditions of the corollary. Determining the partitions corresponding to $\Lambda^\textup{I}$ and $\Lambda^\textup{II}$ is then just a simple calculation. 
\end{proof} Note that the parity conditions of the corollary imply that in the case when $J_{W_I}^W ([p] \otimes \textup{sgn}_l)$ is reducible, the square $s_{l,p_l+1}$ of the Young diagrams of $p^\textup{I}$ and $p^\textup{II}$ is fixed. In particular, this means that when endowed with the maximal label, the domino $\{s_{l,p_l+1}, s_{l,p_l+2}\}$ constitutes an open cycle in a domino tableau of shape $p^\textup{I}$. Its image under the moving through map is $\{s_{l+1,p_l+1}, s_{l,p_l+1}\}$ with underlying partition $p^\textup{II}.$ This observation leads to the following lemma: \begin{lemma}\label{lemma:inductionshapes} Let $n=m+l$ and consider $w'=(w_1 \, w_2 \ldots w_m) \in W_m$. Write $T'=T_r(w')$ for its right tableau of rank $r$ and define a set of partitions $$ \mathbb{P}' = \{shape \, MT(T', C) \; | \; C \subset OC^*(T')\} \subset \mathcal{P}_r(m).$$ Define the set $\mathbb{P} = \{ p^\textup{I} \; | \; p \in \mathbb{P}'\} \cup \{ p^\textup{II} \; | \; p \in \mathbb{P}' \text{ and $p_l = p_{l+1}$ with $p_l+r-l$ even} \}.$ If $w = (w_1 \, w_2 \ldots w_m \; n \; n-1 \, \ldots m+1) \in W_n$ with right tableau $T=T_r(w)$, then $$\mathbb{P}=\{shape \, MT(T, C) \; | \; C \subset OC^*(T)\} \subset \mathcal{P}_r(n).$$ \end{lemma} \begin{proof} The lemma relates the non-core open cycles in $T'$ to the non-core open cycles in $T$, hence it follows from the description of the behavior of cycles under domino insertion in \cite{pietraho:rscore}(3.6). However, the situation here is simpler, and we can describe it fully. Note that $T$ is obtained from $T'$ by placing horizontal dominos with labels $m+1$ through $n$ at the end of its first $l$ rows. Essentially, there are four possibilities. We write $s_{ij}$ for the left square of the domino added to row $i$ and let $p= shape \, T'$. \begin{enumerate} \item $s_{ij}=s_f(c)$ for a cycle $c$ of $T'$. Then the domino joins the cycle $c$ and the final square of the new cycle is $s_{i,j+2}$. \item $s_{i,j-1}=s_b(c)$ for a cycle $c$ of $T'$. Then the domino joins the cycle $c$ and the beginning square of the new cycle is $s_{i,j+1}$. \item $p_{i-1} = p_{i}$ with $p_i+r-i$ odd. Then the dominos with labels $m+i-1$ and $m+i$ in $T$ form a closed cycle in $T$. \item \label{case:extracycle} $p_{l} = p_{l+1}$ with $p_l+r-l$ even. Then the domino with label $n$ forms a singleton non-core open cycle in $T$ which does not correspond to a cycle in $T'$. \end{enumerate} If $C \subset OC^*(T')$ and $\widetilde{C}$ is the set of the corresponding cycles in $T$, then it is clear from the above description that $\{shape \, MT(T, \widetilde{C}) \; | \; C \subset OC^*(T')\} =\{ p^\textup{I} \; | \; p \in \mathbb{P}'\}$. If case (\ref{case:extracycle}) arises and $T$ has an additional non-core open cycle $c =\{n\}$, then $\{shape \, MT(T, \widetilde{C}\cup c) \; | \; C \subset OC^*(T')\} =\{ p^\textup{II} \; | \; p \in \mathbb{P}'\}.$ The lemma follows. \end{proof} \begin{example} Let $s=3$, so that $r=2$ and $\epsilon=0$, and consider the partition $(4,3,2^3) \in \mathcal{P}_2(5)$. It corresponds to the symbol $$ \Lambda_{(4,3,2^3)}= \left( \begin{array}{cccccc} 1 & 2 & 3 & 4 \\ {} & {} & \hspace{-.2in}1 & {} \end{array} \right) \in Sym_3^0 $$ For $l=4$, we have $J_{W_I}^W ([(4,3,2^3)]\otimes \textup{sgn}_4) = [(6,5,4,3^2)] + [(6,5,4^2,2)].$ Note that both partitions lie in $\mathcal{P}_2(9)$. 
In terms of symbols, $$J_{W_I}^W ([\Lambda_{(4,3,2^3)}] \otimes \textup{sgn}_4)= \left[ \left( \begin{array}{cccccc} 2 & 3 & 4 & 5 \\ {} & {} & \hspace{-.2in}1 & {} \end{array} \right) \right] + \left[ \left( \begin{array}{cccccc} 1 & 3 & 4 & 5 \\ {} & {} & \hspace{-.2in}2 & {} \end{array} \right) \right] $$ \end{example} \section{$W_n$-module structure and standard domino tableaux}\label{section:modulestructure} Viewing cells as constructible representations allows us to examine their structure inductively. Using the description of truncated induction and tensoring with sign derived in the previous section, we describe the $W_n$-module carried by each cell in terms of the parametrization of irreducible $W_n$-modules of Section \ref{section:irreducible}. We begin with a few facts about combinatorial cells. \begin{lemma} \label{lemma:intersection} Consider two combinatorial left cells $\mathfrak{C}$ and $\mathfrak{C}'$ in $W_n$ of rank $r$ represented by sets $\mathbb{T}$ and $\mathbb{T}'$ of rank $r$ standard domino tableaux. Then $$| \mathfrak{C} \cap \mathfrak{C}'^{-1}| = M$$ where $M$ is the number of tableaux in $\mathbb{T}$ whose shape matches the shape of a tableau in $\mathbb{T}'$. \end{lemma} \begin{proof} Suppose first that $\mathfrak{C}$ and $\mathfrak{C}'$ are irreducible so that $\mathbb{T}=\{T\}$ and $\mathbb{T}'=\{T'\}$. If they are of the same shape, then the intersection $ \mathfrak{C} \cap \mathfrak{C}'^{-1} = G_r^{-1}(T',T)$; otherwise, it is empty. On the other hand, if $\mathfrak{C}$ and $\mathfrak{C}'$ are reducible, then let $J$ consist of the tableaux in $\mathbb{T}$ whose shape matches the shape of a tableau in $\mathbb{T}'$, so that $|J|=M$. Recall that by the definition of a combinatorial left cell, $\mathbb{T} = \{MT(T,C) \; | \; C \subset OC^*(T)\}$ for some tableau $T$, and therefore $\mathbb{T}$ consists only of tableaux of pairwise different shapes. If $T \in J$, write $T'$ for the unique tableau in $\mathbb{T}'$ of the same shape as $T$. Then $$ \mathfrak{C} \cap \mathfrak{C}'^{-1} = \bigcup_{T \in J} G_r^{-1}(T',T).$$ \end{proof} We can obtain a slightly better description of the intersection of a combinatorial left cell and a combinatorial right cell by recalling the definition of an extended open cycle in a tableau relative to another tableau of the same shape. See \cite{garfinkle2}(2.3.1) or \cite{pietraho:equivalence}(2.4) for the details. In general, an extended open cycle is a union of open cycles. \begin{corollary}\label{corollary:intersection} Consider two reducible combinatorial left cells $\mathfrak{C}$ and $\mathfrak{C}'$ in $W_n$ of rank $r$ represented by sets $\mathbb{T}$ and $\mathbb{T}'$ of rank $r$ standard domino tableaux. If $T \in \mathbb{T}$ and $T' \in \mathbb{T}'$ are of the same shape and $m$ is the number of non-core extended open cycles in $T$ relative to $T'$, then $$|\mathfrak{C} \cap \mathfrak{C}'^{-1}| = 2^m.$$ \end{corollary} \begin{proof} An extended open cycle in $T$ relative to $T'$ is a minimal set of open cycles in $T$ and $T'$ such that moving through it produces another pair of tableaux of the same shape. Consequently, the operations of moving through two different extended open cycles are independent. Noting that $$\mathbb{T} =\{ MT(T,C) \; | \; C\subset OC^*(T)\} \text{ and } \mathbb{T}' =\{ MT(T',C) \; | \; C\subset OC^*(T')\},$$ we have that a tableau-pair $(S,S') \in \mathbb{T} \times \mathbb{T}'$ is a same-shape pair iff it differs from $(T,T')$ by moving through a set of non-core extended open cycles in $T$ relative to $T'$. 
Thus, if $E$ is the set of non-core extended open cycles in $T$ relative to $T'$, then $$\mathfrak{C} \cap \mathfrak{C}'^{-1}=\bigcup_{D \subset E} G_r^{-1}\big(MT((T',T),D)\big),$$ from which the result follows. \end{proof} Recall the parameter $s$ derived from a weight function $L$ in type $B_n$. We will call a Kazhdan-Lusztig left cell in this setting a {\it left cell of weight $s$}. C.~Bonnaf\'e \cite{bonnafe:knuth} has shown that: \begin{itemize} \item under the assumption that statements {\bf P1-P15} of Section \ref{section:klcells} hold, when $s \not \in \mathbb{N}$, left cells of weight $s$ are precisely the irreducible combinatorial left cells of rank $r=\lfloor s \rfloor$, and \item when $s \in \mathbb{N}$, left cells of weight $s$ are unions of reducible combinatorial left cells of rank $r=s-1$. \end{itemize} In this way, as in Definition \ref{definition:combinatorialcells}, we can say that a left cell of weight $s$ is {\it represented by} a set of standard domino tableaux of rank $r$. For non-integer $s$, this set consists of the unique tableau representing the irreducible combinatorial left cell; for integer $s$, it is the union of the sets of tableaux representing each of the reducible combinatorial cells in the Kazhdan-Lusztig cell. In what follows, we assume that statements {\bf P1-P15} hold. \begin{lemma}\label{lemma:disjointshapes} Suppose that $\mathfrak{C}$ is a left cell of weight $s$ and $\mathfrak{C} = \coprod_i \mathfrak{D}_i $ is its decomposition into combinatorial left cells of rank $r$. If we let $\mathbb{T}_i$ be the set of domino tableaux representing $\mathfrak{D}_i,$ then the set of shapes of tableaux in $\mathbb{T}_i$ is disjoint from the set of shapes of tableaux in $\mathbb{T}_j$ whenever $i \neq j$. \end{lemma} \begin{proof} By Corollary \ref{corollary:involutions}, $\mathfrak{C}\cap \mathfrak{C}^{-1}$ consists of the involutions in $\mathfrak{C}$. The set of involutions in each combinatorial cell $\mathfrak{D}_i$ consists of $\mathfrak{D}_i \cap \mathfrak{D}_i^{-1}$. This forces $\mathfrak{D}_i \cap \mathfrak{D}_j^{-1}=\varnothing$ whenever $i\neq j$, which can only occur if the set of shapes of tableaux in $\mathbb{T}_i$ is disjoint from the set of shapes of tableaux in $\mathbb{T}_j$, by Lemma \ref{lemma:intersection}. \end{proof} We first show that the shapes of the standard domino tableaux of rank $r$ representing a left cell of weight $s$ determine its $W_n$-module structure: \begin{definition} Suppose $\mathbb{T}$ is a set of standard domino tableaux of rank $r$. For $T \in \mathbb{T}$, we will write $p_T \in \mathcal{P}_r(n)$ for its underlying partition, and define $$[\mathbb{T}] = \bigoplus_{T \in \mathbb{T}} [p_T].$$ \end{definition} \begin{lemma}\label{lemma:shapes} Suppose that $\mathfrak{C}$ and $\mathfrak{C'}$ are left cells of weight $s$ in $W_n$ and $$ \mathfrak{C} = \coprod_{i \leq c} \mathfrak{D}_i \text{ as well as } \mathfrak{C}' = \coprod_{i \leq d} \mathfrak{D}'_i$$ are their decompositions into combinatorial left cells of rank $r$. Suppose that each $\mathfrak{D}_i$ and $\mathfrak{D}'_i$ is represented by the set of rank $r$ tableaux $\mathbb{T}_i$ and $\mathbb{T}'_i$, respectively. Then $[\mathfrak{C}] \cong [\mathfrak{C'}]$ iff $c=d$ and, suitably ordered, $[\mathbb{T}_i] \cong [\mathbb{T}'_i]$ for all $i$. \end{lemma} \begin{proof} For clarity, we treat the integer and non-integer values of $s$ separately. 
First assume $s \not \in \mathbb{N}$, so that $c=d=1$, and take $\{T\}= \mathbb{T}_1$ and $\{T'\}=\mathbb{T}_1'.$ By Lemmas \ref{lemma:12.15} and \ref{lemma:intersection}, we have $\dim \textup{Hom}_W([\mathfrak{C}],[\mathfrak{C}]) = \dim \textup{Hom}_W([\mathfrak{C}'],[\mathfrak{C}'])=1.$ Furthermore, we have that $\dim \textup{Hom}_W([\mathfrak{C}],[\mathfrak{C}']) = | \mathfrak{C} \cap \mathfrak{C}'^{-1} |=1$ if and only if the shapes of $T$ and $T'$ coincide; otherwise, $\dim \textup{Hom}_W([\mathfrak{C}],[\mathfrak{C}'])=0$. The lemma follows. Next, assume $s \in \mathbb{N}$. Suppose first that $[\mathfrak{C}] \cong [\mathfrak{C'}].$ Then $\dim \textup{Hom}_W([\mathfrak{C}],[\mathfrak{C}]) = \dim \textup{Hom}_W([\mathfrak{C}'],[\mathfrak{C'}]) =\dim \textup{Hom}_W([\mathfrak{C}],[\mathfrak{C}'])$, and by Lemma \ref{lemma:12.15}, $|\mathfrak{C} \cap \mathfrak{C}^{-1}|= |\mathfrak{C}' \cap \mathfrak{C'}^{-1}|=|\mathfrak{C} \cap \mathfrak{C'}^{-1}|.$ By Lemma \ref{lemma:disjointshapes}, we have $$\sum_{i\leq c} |\mathfrak{D}_i \cap \mathfrak{D}_i^{-1}| = \sum_{i\leq d} |\mathfrak{D}'_i \cap {\mathfrak{D}'_i}^{-1}| = \sum_{i,j} |\mathfrak{D}_i \cap {\mathfrak{D}'_j}^{-1}|.$$ We can now use Corollary \ref{corollary:intersection} to examine the terms of this equality. For a combinatorial cell $\mathfrak{D}_i$, there is at most one cell $\mathfrak{D}'_{i'}$ such that there are $T_i \in \mathbb{T}_i$ and $T'_{i'} \in \mathbb{T}'_{i'}$ of the same shape, by Lemma \ref{lemma:disjointshapes}. Let $I$ be the set of $i$ for which this occurs. Let $c_i$ and $d_{i'}$ be the numbers of non-core open cycles in $T_i$ and $T'_{i'}$, respectively, and for each $i\in I$, let $m_i$ be the number of non-core extended open cycles in $T_i$ relative to $T'_{i'}$. Then $m_i \leq c_i, d_{i'}$, with equality iff the non-core extended open cycles are just the non-core open cycles. By Corollary \ref{corollary:intersection}, $\sum_{i\leq c} |\mathfrak{D}_i \cap \mathfrak{D}_i^{-1}| = \sum_{i\leq c} 2^{c_i},$ $ \sum_{i\leq d} |\mathfrak{D}'_i \cap {\mathfrak{D}'_i}^{-1}| = \sum_{i\leq d} 2^{d_i}$, and $\sum_{I} |\mathfrak{D}_i \cap {\mathfrak{D}'_{i'}}^{-1}| = \sum_I 2^{m_i}.$ Since $2^{m_i} \leq 2^{c_i}$ term by term and $I \subseteq \{1, \ldots, c\}$, the previous equation forces $m_i=c_i=d_{i'}$ for all $i$, $c=d$, and $I=\{1, \ldots, c\}$; by the definition of a combinatorial left cell in our setting, this gives $[\mathbb{T}_i] \cong [\mathbb{T}'_{i'}]$ for all $i \in I$. Conversely, assume that $c=d$ and $[\mathbb{T}_i] \cong [\mathbb{T}'_i]$ for all $i$ and choose tableaux $T_i \in \mathbb{T}_i$ and $T'_i \in \mathbb{T}'_i$ of the same shape. By the definition of combinatorial cells, there is a correspondence between the non-core open cycles of $T_i$ and those of $T'_i$ such that their beginning and final squares coincide, implying that the set of non-core extended open cycles in $T_i$ relative to $T'_i$ is precisely the set of non-core open cycles of $T_i$. Therefore, for each $i$ we have $|\mathfrak{D}_i \cap \mathfrak{D}_i^{-1}| = |\mathfrak{D}_i \cap {\mathfrak{D}'_i}^{-1}|.$ Consequently, by Lemmas \ref{lemma:disjointshapes} and \ref{lemma:12.15}, and Corollary \ref{corollary:intersection}: $$\dim \textup{Hom}_W ([\mathfrak{C}], [\mathfrak{C}'])= \sum_i |\mathfrak{D}_i \cap {\mathfrak{D}'_i}^{-1}| = \sum_i |\mathfrak{D}_i \cap \mathfrak{D}_i^{-1}| = \dim \textup{Hom}_W ([\mathfrak{C}], [\mathfrak{C}]).$$ Reversing the roles of $\mathfrak{C}$ and $\mathfrak{C}'$ above implies the desired result.
\end{proof} \begin{theorem} \label{theorem:main} Suppose that $\mathfrak{C}$ is a left cell of weight $s$ in $W_n$ represented by a set $\mathbb{T}$ of standard domino tableaux of rank $r$. Then $[\mathfrak{C}] \cong [\mathbb{T}]$ as $W_n$-modules. \end{theorem} \begin{proof} In light of Lemma \ref{lemma:shapes}, we can prove the theorem by verifying that it holds for a representative of each isomorphism class of left cells. Under our assumptions, the results of \cite{geck:constructible} hold and left cell modules coincide with constructible representations of $W_n$. Therefore, a representative of each isomorphism class of left cells can be obtained by repeated truncated induction and tensoring with sign. Recall our description of irreducible $W_n$-modules by partitions of rank $r$. Via Corollaries \ref{corollary:tensoringwithsign} and \ref{corollary:truncated}, we have a description of both operations on the level of partitions. We verify that the effect of truncated induction and tensoring with sign on the shapes of the tableaux representing a left cell is the same, and the theorem follows by induction. We treat the integer and non-integer values of $s$ separately. First assume $s \not \in \mathbb{N}$, so that each left cell is represented by a single tableau. We begin by investigating the effect of tensoring with sign. If $[\mathfrak{C}]$ is a left cell module and $w \in \mathfrak{C}$, then $\mathfrak{C}$ is represented by the tableau $T_r(w)$ of shape $p$. By Lemma \ref{lemma:longword}, $\mathfrak{C}w_0$ is also a left cell and $[\mathfrak{C}w_0] \cong [\mathfrak{C}] \otimes \textup{sgn}.$ It is represented by the tableau $T_r(ww_0) = T_r(w)^t$ of shape $p^t$. By Corollary \ref{corollary:tensoringwithsign}, if we assume that $[\mathfrak{C}]$ carries the irreducible module associated to the shape of its representative tableau, then so does $[\mathfrak{C}w_0]\cong [\mathfrak{C}] \otimes \textup{sgn}.$ For the case of truncated induction, consider a maximal parabolic subgroup $W_I = W_m \times S_l$ of $W_n$. Choose $w'=(w_1 \, w_2 \ldots w_m) \in W_m$ and let $\mathfrak{C}'$ be its left cell, represented by the tableau $T'=T_r(w')$. Let $p = \textup{shape} \; T'$. By Lemma \ref{lemma:paraboliccells}, $J_{W_I}^W( [\mathfrak{C}'] \otimes \textup{sgn}_l )=[\mathfrak{C}]$ for a left cell $\mathfrak{C} \subset W_n$ and furthermore, the element $w=(w_1 \, w_2 \ldots w_m \; n \; n-1 \ldots m+1) \in \mathfrak{C}$. The left cell $\mathfrak{C}$ is represented by the tableau $T_r(w)$ whose shape is $p^I$, using the notation of Corollary \ref{corollary:truncated}. By Corollary \ref{corollary:truncated}, if we assume that $[\mathfrak{C}']$ carries the irreducible module associated to the shape of its representative tableau, then so does $[\mathfrak{C}]\cong J_{W_I}^W( [\mathfrak{C}'] \otimes \textup{sgn}_l ).$ Next assume $s \in \mathbb{N}$, so that each left cell is represented by a family of rank $r$ standard domino tableaux. Again, we begin by investigating the effect of tensoring with sign. Suppose $\mathfrak{C}$ is a left cell represented by the set $\mathbb{T}$ and for each $T \in \mathbb{T}$, $w_T \in W_n$ is chosen so that $T_r(w_T)=T$. By Lemma \ref{lemma:longword}, $\mathfrak{C}w_0$ is also a left cell and $[\mathfrak{C}w_0] \cong [\mathfrak{C}] \otimes \textup{sgn}.$ It is represented by the set of tableaux $T_r(w_T w_0) = T_r(w_T)^t$ (for $T \in \mathbb{T}$), which we write as $\mathbb{T}^t$.
By Corollary \ref{corollary:tensoringwithsign}, if we assume that $[\mathfrak{C}]$ carries the module $[\mathbb{T}]$, then $[\mathfrak{C}w_0]\cong [\mathfrak{C}] \otimes \textup{sgn}$ carries the module $[\mathbb{T}^t].$ For the case of truncated induction, again consider a maximal parabolic subgroup $W_I = W_m \times S_l$ of $W_n$. Let $\mathfrak{C}'$ be a left cell of $W_m$ and let $\mathfrak{C}'=\coprod_i \mathfrak{D}'_i$ be its decomposition into combinatorial left cells. Suppose that $\mathfrak{D}'_i$ is represented by the set $\mathbb{T}'_i$ of domino tableaux and let $\mathbb{T}'= \coprod_i \mathbb{T}'_i$. By the definition of combinatorial left cells, each $\mathbb{T}'_i = \{ MT(T'_i, C) \; | \; C \subset OC^*(T'_i)\}$ for some rank $r$ standard domino tableau $T'_i$. For each $i$, choose $\widetilde{w}^i=(w^i_1 \, w^i_2 \ldots w^i_m) \in W_m$ with $T'_i=T_r(\widetilde{w}^i)$ so that $\widetilde{w}^i \in \mathfrak{D}'_i$. By Lemma \ref{lemma:paraboliccells}, $J_{W_I}^W( [\mathfrak{C}'] \otimes \textup{sgn}_l )=[\mathfrak{C}]$ for a left cell $\mathfrak{C} \subset W_n$. Furthermore, $w^i=(w^i_1 \, w^i_2 \ldots w^i_m \; n \; n-1 \ldots m+1) \in \mathfrak{C}$ and if $T_i = T_r(w^i)$, then $\mathfrak{C}$ is represented by the set of tableaux $\mathbb{T}= \coprod_i \{MT(T_i, C) \; | \; C \subset OC^*(T_i)\}$. Lemma \ref{lemma:inductionshapes} describes the shapes of the tableaux in $\mathbb{T}$ in terms of the shapes of the tableaux in $\mathbb{T}'$. This, together with Corollary \ref{corollary:truncated}, shows that if we assume that $[\mathfrak{C}']$ carries the module $[\mathbb{T}']$, then $[\mathfrak{C}]\cong J_{W_I}^W( [\mathfrak{C}'] \otimes \textup{sgn}_l )$ carries the module $[\mathbb{T}].$ \end{proof} \begin{corollary} Consider a Weyl group of type $B_n$ with a weight function $L$ and parameter $s$ defined as above. If statements {\bf P1-P15} hold, then \begin{enumerate} \item When $s \not\in \mathbb{N}$, the Kazhdan-Lusztig left cells of weight $s$ coincide with the irreducible combinatorial left cells of rank $\lfloor s \rfloor$. \item When $s \in \mathbb{N}$, the Kazhdan-Lusztig left cells of weight $s$ coincide with the reducible combinatorial left cells of rank $s-1$. \end{enumerate} If the set $\mathbb{T}$ of standard domino tableaux represents the left cell $\mathfrak{C}$ in $W_n$, then $[\mathfrak{C}] \cong [\mathbb{T}]$ as $W_n$-modules. Furthermore, if $T \in \mathbb{T}$, then the number of elements of $\mathfrak{C}$ with right tableau $T$ is the dimension of the irreducible constituent $[p_T]$ of $[\mathfrak{C}]$. \end{corollary} \begin{proof} The first part in the case $s \not\in \mathbb{N}$ is a result of C.~Bonnaf\'e \cite{bonnafe:knuth}. To verify it in the case $s \in \mathbb{N}$, write a Kazhdan-Lusztig left cell $\mathfrak{C}$ in terms of combinatorial left cells as $\mathfrak{C}=\coprod_{i\in I} \mathfrak{D}_i$. Since $[\mathfrak{C}]$ is constructible, the main result of \cite{pietraho:constructible} shows that $[\mathfrak{C}] \cong [\widetilde{\mathbb{T}}]$ as $W_n$-modules where $\widetilde{\mathbb{T}} =\{MT(T, C) \; | \; C \subset OC^*(T)\}$ for some standard domino tableau $T$ of rank $r$. Let each $\mathfrak{D}_i$ be represented by $\mathbb{T}_i = \{MT(T_i, C) \; | \; C \subset OC^*(T_i)\}$ and write $\mathbb{T} = \coprod_{i\in I} \mathbb{T}_i$.
By Theorem \ref{theorem:main}, $[\mathbb{T}] = [\widetilde{\mathbb{T}}].$ This implies that for every $i$, the set of beginning and ending squares of non-core open cycles in $T_i$ is contained in the corresponding set in $T$. However, the size of this set is constant for every partition in the set of possible shapes of tableaux in $\mathbb{T}$. By Lemma \ref{lemma:disjointshapes}, the only way this can occur is if $|I|=1$, that is, if $\mathfrak{C}$ consists of a single combinatorial cell. Finally, we verify the last statement of the corollary. If $s \not \in \mathbb{N}$, consider a left cell $\mathfrak{C}$ represented by the tableau $T$. Then $\dim [\mathfrak{C}] = \sum |\mathfrak{C} \cap \mathfrak{C'}^{-1}|$, the sum taken over all left cells $\mathfrak{C}'$ in $W_n$. But $|\mathfrak{C} \cap \mathfrak{C'}^{-1}|=1$ iff the shapes of the tableaux representing $\mathfrak{C}$ and $\mathfrak{C}'$ are the same; otherwise it is zero. Since each left cell is represented by a unique tableau, the above sum equals the number of tableaux of the same shape as $T$. This is the same as the number of elements of $\mathfrak{C}$ with right tableau $T$. If $s \in \mathbb{N}$, consider left cells $\mathfrak{C}$ and $\mathfrak{C}'.$ For $w \in \mathfrak{C} \cap \mathfrak{C'}^{-1},$ $[\textup{shape} \, T_r(w)]$ must be a component of both $[\mathfrak{C}]$ and $[\mathfrak{C'}]$. Furthermore, each $w \in \mathfrak{C} \cap \mathfrak{C'}^{-1}$ must have a right tableau of a distinct shape, establishing a bijection between $\mathfrak{C} \cap \mathfrak{C'}^{-1}$ and the set of irreducible modules common to $[\mathfrak{C}]$ and $[\mathfrak{C'}].$ If we let $\mathfrak{C}'$ vary over all left cells of $W_n$, the statement follows by Lemma \ref{lemma:12.15}. \end{proof} It should be remarked that the above statement classifying the module structure of left cells is not the strongest one could hope for. In the so-called ``asymptotic'' case when $s$ is sufficiently large, M.~Geck has shown that whenever the tableaux representing $[\mathfrak{C}]$ and $[\mathfrak{C}']$ are equal, then not only are the underlying $\mathcal{H}$-modules isomorphic, but the underlying structure constants are the same. More precisely, there is a bijection $\mathfrak{C} \rightarrow \mathfrak{C}'$, $x \mapsto x'$, such that $$h_{w,x,y}=h_{w,x',y'} \text{ for all $w \in W_n$ and $x,y \in \mathfrak{C}$}.$$ It would be interesting to know under what circumstances this stronger statement holds for other values of $s$.
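As a purely illustrative aside (ours, not part of the argument above), the count $|\mathfrak{C} \cap \mathfrak{C}'^{-1}| = 2^m$ of Corollary \ref{corollary:intersection} simply enumerates the subsets of the set of non-core extended open cycles. A minimal sketch with hypothetical cycle labels:
\begin{verbatim}
from itertools import combinations

# The same-shape tableau-pairs, and hence the elements of the
# intersection, are indexed by the subsets D of the cycle set E.
E = ["E1", "E2", "E3"]  # hypothetical labels, m = 3
subsets = [frozenset(c) for r in range(len(E) + 1)
           for c in combinations(E, r)]
assert len(subsets) == 2 ** len(E)  # = 2^m = 8
\end{verbatim}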
\section{Introduction} \label{Introduction} Neutrino oscillations have been measured with high accuracy in solar, atmospheric and long-baseline neutrino oscillation experiments (see \cite{1205.4018,1205.5254,1209.3023}). Hence, we know that neutrinos are massive and mixed particles (see \cite{Giunti-Kim-2007,0704.1800}) and there are two independent squared-mass differences: the solar $\Delta{m}^2_{\text{SOL}} \simeq 7.5 \times 10^{-5} \, \text{eV}^2$ and the atmospheric $\Delta{m}^2_{\text{ATM}} \simeq 2.3 \times 10^{-3} \, \text{eV}^2$. This is in agreement with the standard three-neutrino mixing paradigm, in which the three active neutrinos $\nu_{e}$, $\nu_{\mu}$, $\nu_{\tau}$ are superpositions of three massive neutrinos $\nu_1$, $\nu_2$, $\nu_3$ with respective masses $m_1$, $m_2$, $m_3$. The two measured squared-mass differences can be interpreted as $ \Delta{m}^2_{\text{SOL}} = \Delta{m}^2_{21} $ and $ \Delta{m}^2_{\text{ATM}} = |\Delta{m}^2_{31}| \simeq |\Delta{m}^2_{32}| $, with $\Delta{m}^2_{kj}=m_k^2-m_j^2$. The completeness of the three-neutrino mixing paradigm has been challenged by the following indications in favor of short-baseline neutrino oscillations, which require the existence of at least one additional squared-mass difference, $\Delta{m}^2_{\text{SBL}}$, much larger than $\Delta{m}^2_{\text{SOL}}$ and $\Delta{m}^2_{\text{ATM}}$: \begin{enumerate} \renewcommand{\labelenumi}{\theenumi.} \renewcommand{\theenumi}{\arabic{enumi}} \item The LSND experiment, in which a signal of short-baseline $\bar\nu_{\mu}\to\bar\nu_{e}$ oscillations has been observed with a statistical significance of about $3.8\sigma$ \cite{nucl-ex/9504002,hep-ex/0104049}. \item The reactor antineutrino anomaly \cite{1101.2755}, which is a deficit of the rate of $\bar\nu_{e}$ observed in several short-baseline reactor neutrino experiments in comparison with that expected from a new calculation of the reactor neutrino fluxes \cite{1101.2663,1106.0687}. The statistical significance is about $2.8\sigma$. \item The Gallium neutrino anomaly \cite{nucl-ex/0512041,Laveder:2007zz,hep-ph/0610352,1006.3244,1210.5715}, consisting of a short-baseline disappearance of $\nu_{e}$ measured in the Gallium radioactive source experiments GALLEX \cite{1001.2731} and SAGE \cite{0901.2200} with a statistical significance of about $2.9\sigma$. \end{enumerate} In this review, we consider 3+1 \cite{hep-ph/9606411,hep-ph/9607372,hep-ph/9903454,hep-ph/0405172}, 3+2 \cite{hep-ph/0305255,hep-ph/0609177,0705.0107,0906.1997} and 3+1+1 \cite{1010.3970,1201.6662,1205.1791,1306.6079} neutrino mixing schemes in which there are one or two additional massive neutrinos at the eV scale and the masses of the three standard massive neutrinos are much smaller. Since from the LEP measurement of the invisible width of the $Z$ boson we know that there are only three active neutrinos (see \cite{Giunti-Kim-2007}), in the flavor basis the additional massive neutrinos correspond to sterile neutrinos \cite{Pontecorvo:1968fh}, which do not have standard weak interactions. The possible existence of sterile neutrinos is very interesting, because they are new particles which could give us valuable information on the physics beyond the Standard Model (see \cite{hep-ph/0111326,hep-ph/0603118}). The existence of light sterile neutrinos is also very important for astrophysics (see \cite{1206.6231}) and cosmology (see \cite{1301.7102,1307.0637}).
In the 3+1 scheme, the effective probability of $\nua{\alpha}\to\nua{\beta}$ transitions in short-baseline experiments has the two-neutrino-like form \begin{equation} P_{\nua{\alpha}\to\nua{\beta}} = \delta_{\alpha\beta} - 4 |U_{\alpha4}|^2 \left( \delta_{\alpha\beta} - |U_{\beta4}|^2 \right) \sin^2\!\left( \dfrac{\Delta{m}^2_{41}L}{4E} \right) \,, \label{pab} \end{equation} where $U$ is the mixing matrix, $L$ is the source-detector distance, $E$ is the neutrino energy and $\Delta{m}^2_{41} = m_{4}^2 - m_{1}^2 = \Delta{m}^2_{\text{SBL}} \sim 1 \, \text{eV}^2$. The electron and muon neutrino and antineutrino appearance and disappearance in short-baseline experiments depend on $|U_{e4}|^2$ and $|U_{\mu4}|^2$, which determine the amplitude $\sin^22\vartheta_{e\mu} = 4 |U_{e4}|^2 |U_{\mu4}|^2$ of $\nua{\mu}\to\nua{e}$ transitions, the amplitude $\sin^22\vartheta_{ee} = 4 |U_{e4}|^2 \left( 1 - |U_{e4}|^2 \right)$ of $\nua{e}$ disappearance, and the amplitude $\sin^22\vartheta_{\mu\mu} = 4 |U_{\mu4}|^2 \left( 1 - |U_{\mu4}|^2 \right)$ of $\nua{\mu}$ disappearance. Since the oscillation probabilities of neutrinos and antineutrinos are related by a complex conjugation of the elements of the mixing matrix (see \cite{Giunti-Kim-2007}), the effective probabilities of short-baseline $\nu_{\mu}\to\nu_{e}$ and $\bar\nu_{\mu}\to\bar\nu_{e}$ transitions are equal. Hence, the 3+1 scheme cannot explain a possible CP-violating difference of $\nu_{\mu}\to\nu_{e}$ and $\bar\nu_{\mu}\to\bar\nu_{e}$ transitions in short-baseline experiments. In order to allow this possibility, one must consider a 3+2 scheme, in which there are four additional effective mixing parameters in short-baseline experiments: $\Delta{m}^2_{51}$, which is conventionally assumed to satisfy $\Delta{m}^2_{51}\geq\Delta{m}^2_{41}$, $|U_{e5}|^2$, $|U_{\mu5}|^2$ and $\eta = \text{arg}\left[U_{e4}^*U_{\mu4}U_{e5}U_{\mu5}^*\right]$. Since this complex phase appears with different signs in the effective 3+2 probabilities of short-baseline $\nu_{\mu}\to\nu_{e}$ and $\bar\nu_{\mu}\to\bar\nu_{e}$ transitions, it can generate measurable CP violations. A puzzling feature of the 3+2 scheme is that it needs the existence of two sterile neutrinos with masses at the eV scale. We think that it may be considered more plausible that sterile neutrinos have a hierarchy of masses. Hence, it is interesting to consider also the 3+1+1 scheme \cite{1010.3970,1201.6662,1205.1791,1306.6079}, in which $m_{5}$ is much heavier than 1 eV and the oscillations due to $\Delta{m}^2_{51}$ are averaged. Hence, in the analysis of short-baseline data in the 3+1+1 scheme there is one effective parameter fewer than in the 3+2 scheme ($\Delta{m}^2_{51}$), but CP violations generated by $\eta$ are observable. \section{Global Fits} \label{Global Fits} Global fits of short-baseline neutrino oscillation data have been presented recently in Refs.~\cite{1303.3011,1308.5288}. These analyses take into account the final results of the MiniBooNE experiment, which was designed to check the LSND signal with about one order of magnitude larger distance ($L$) and energy ($E$), but the same order of magnitude for the ratio $L/E$ on which neutrino oscillations depend. Unfortunately, the results of the MiniBooNE experiment are ambiguous, because the LSND signal was not seen in neutrino mode \cite{0812.2243} and the signal observed in 2010 \cite{1007.1150} with the first half of the antineutrino data was not observed in the second half of the data \cite{1303.2588}.
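As a side remark (ours, not part of the discussion of Ref.~\cite{1308.5288}), the effective appearance probability of Eq.~(\ref{pab}) is straightforward to evaluate numerically, using the standard conversion $\Delta{m}^2 L / 4E \simeq 1.27 \, (\Delta{m}^2/\text{eV}^2) \, (L/\text{km}) \, (\text{GeV}/E)$. A minimal sketch, with illustrative parameter values only:
\begin{verbatim}
import numpy as np

def p_mu_e(sin22th_emu, dm2_41, L_km, E_GeV):
    # 3+1 effective nu_mu -> nu_e appearance probability:
    # P = sin^2(2 theta_emu) * sin^2(1.27 dm2[eV^2] L[km] / E[GeV])
    return sin22th_emu * np.sin(1.27 * dm2_41 * L_km / E_GeV) ** 2

# LSND-like baseline and energy (illustrative values only):
print(p_mu_e(sin22th_emu=0.0013, dm2_41=1.6,
             L_km=0.030, E_GeV=0.040))  # ~1.3e-3
\end{verbatim}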
Moreover, the MiniBooNE data in both neutrino and antineutrino modes show an excess in the low-energy bins, which is widely considered to be anomalous because it is at odds with neutrino oscillations \cite{1109.4033,1111.1069}\footnote{ The interesting possibility of reconciling the low-energy anomalous data with neutrino oscillations through energy reconstruction effects proposed in Refs.~\cite{Martini:2012fa,Martini:2012uc} still needs a detailed study. }. In the following, we summarize the results of the analysis presented in Ref.~\cite{1308.5288} of the short-baseline data of the following three groups of experiments: \begin{enumerate} \renewcommand{\labelenumi}{(\theenumi)} \renewcommand{\theenumi}{\Alph{enumi}} \item The $\nua{\mu}\to\nua{e}$ appearance data of the LSND \cite{hep-ex/0104049}, MiniBooNE \cite{1303.2588}, BNL-E776 \cite{Borodovsky:1992pn}, KARMEN \cite{Armbruster:2002mp}, NOMAD \cite{Astier:2003gs}, ICARUS \cite{1307.4699} and OPERA \cite{1303.3953} experiments. \item The $\nua{e}$ disappearance data described in Ref.~\cite{1210.5715}, which take into account the reactor \cite{1101.2663,1101.2755,1106.0687} and Gallium \cite{nucl-ex/0512041,Laveder:2007zz,hep-ph/0610352,0711.4222,1006.3244} anomalies. \item The constraints on $\nua{\mu}$ disappearance obtained from the data of the CDHSW experiment \cite{Dydak:1983zq}, from the analysis \cite{0705.0107} of the data of atmospheric neutrino oscillation experiments\footnote{ The IceCube data, which could give a marginal contribution \cite{1206.6903,1307.6824}, have not been considered because the analysis is too complicated and subject to large uncertainties. }, from the analysis \cite{1109.4033} of the MINOS neutral-current data \cite{Adamson:2011ku} and from the analysis of the SciBooNE-MiniBooNE neutrino \cite{Mahn:2011ea} and antineutrino \cite{Cheng:2012yy} data. \end{enumerate} \begin{table}[t] \begin{center} \begin{tabular}{c|cccc|cc|cc} &3+1 &3+1 &3+1 &3+1 &3+2 &3+2 &3+1+1 &3+1+1 \\ &LOW &HIG &noMB &noLSND &LOW &HIG &LOW &HIG \\ \hline $\chi^{2}_{\text{min}}$ &291.7 &261.8 &236.1 &278.4 &284.4 &256.4 &289.8 &259.0 \\ NDF &256 &250 &218 &252 &252 &246 &253 &247 \\ GoF & 6\% &29\% &19\% &12\% & 8\% &31\% & 6\% &29\% \\ \hline $(\chi^{2}_{\text{min}})_{\text{APP}}$ &99.3 &77.0 &50.9 &91.8 &87.7 &69.8 &94.8 &75.5 \\ $(\chi^{2}_{\text{min}})_{\text{DIS}}$ &180.1 &180.1 &180.1 &180.1 &179.1 &179.1 &180.1 &180.1 \\ $\Delta\chi^{2}_{\text{PG}}$ &12.7 &4.8 &5.1 &6.4 &17.7 &7.5 &14.9 &3.4 \\ $\text{NDF}_{\text{PG}}$ &2 &2 &2 &2 &4 &4 &3 &3 \\ $\text{GoF}_{\text{PG}}$ &0.2\% & 9\% & 8\% & 4\% &0.1\% &11\% &0.2\% &34\% \\ \hline $\Delta\chi^{2}_{\text{NO}}$ &$47.5$ &$46.2$ &$47.1$ &$8.3$ &$54.8$ &$51.6$ &$49.4$ &$49.1$ \\ $\text{NDF}_{\text{NO}}$ &$3$ &$3$ &$3$ &$3$ &$7$ &$7$ &$6$ &$6$ \\ $n\sigma_{\text{NO}}$ &$6.3\sigma$ &$6.2\sigma$ &$6.3\sigma$ &$2.1\sigma$ &$6.0\sigma$ &$5.8\sigma$ &$5.8\sigma$ &$5.8\sigma$ \\ \end{tabular} \end{center} \caption{ \label{tab:chi} \footnotesize Results of the fit of short-baseline data \cite{1308.5288} taking into account all MiniBooNE data (LOW), only the MiniBooNE data above 475 MeV (HIG), without MiniBooNE data (noMB) and without LSND data (noLSND) in the 3+1, 3+2 and 3+1+1 schemes. The first three lines give the minimum $\chi^{2}$ ($\chi^{2}_{\text{min}}$), the number of degrees of freedom (NDF) and the goodness-of-fit (GoF). The following five lines give the quantities relevant for the appearance-disappearance (APP-DIS) parameter goodness-of-fit (PG) \protect\cite{hep-ph/0304176}.
The last three lines give the difference between the $\chi^{2}$ without short-baseline oscillations and $\chi^{2}_{\text{min}}$ ($\Delta\chi^{2}_{\text{NO}}$), the corresponding difference of number of degrees of freedom ($\text{NDF}_{\text{NO}}$) and the resulting number of $\sigma$'s ($n\sigma_{\text{NO}}$) for which the absence of oscillations is disfavored. } \end{table} Table~\ref{tab:chi} summarizes the statistical results obtained in Ref.~\cite{1308.5288} from global fits of the data above in the 3+1, 3+2 and 3+1+1 schemes. In the LOW fits all the MiniBooNE data are considered, including the anomalous low-energy bins, which are omitted in the HIG fits. There is also a 3+1-noMB fit without MiniBooNE data and a 3+1-noLSND fit without LSND data. From Tab.~\ref{tab:chi}, one can see that in all fits which include the LSND data the absence of short-baseline oscillations is disfavored by about $6\sigma$, because the improvement of the $\chi^2$ with short-baseline oscillations is much larger than the number of oscillation parameters. In all the 3+1, 3+2 and 3+1+1 schemes the goodness-of-fit in the LOW analysis is significantly worse than that in the HIG analysis and the appearance-disappearance parameter goodness-of-fit is much worse. This result confirms the fact that the MiniBooNE low-energy anomaly is incompatible with neutrino oscillations, because it would require a small value of $\Delta{m}^2_{41}$ and a large value of $\sin^22\vartheta_{e\mu}$ \cite{1109.4033,1111.1069}, which are excluded by the data of other experiments (see Ref.~\cite{1308.5288} for further details)\footnote{ One could fit the three anomalous MiniBooNE low-energy bins in a 3+2 scheme \cite{1207.4765} by considering the appearance data without the ICARUS \cite{1307.4699} and OPERA \cite{1303.3953} constraints, but the corresponding relatively large transition probabilities are excluded by the disappearance data. }. Note that the appearance-disappearance tension in the 3+2-LOW fit is even worse than that in the 3+1-LOW fit, since the $\Delta\chi^{2}_{\text{PG}}$ is so much larger that it cannot be compensated by the additional degrees of freedom (this behavior has been explained in Ref.~\cite{1302.6720}). Therefore, we think that it is very likely that the MiniBooNE low-energy anomaly has an explanation which is different from neutrino oscillations, and that the HIG fits are more reliable than the LOW fits. The 3+2 mixing scheme was considered to be interesting in 2010 when the MiniBooNE neutrino \cite{0812.2243} and antineutrino \cite{1007.1150} data showed a CP-violating tension. Unfortunately, this tension was considerably reduced in the final MiniBooNE data \cite{1303.2588} and from Tab.~\ref{tab:chi} one can see that there is little improvement of the 3+2-HIG fit with respect to the 3+1-HIG fit, in spite of the four additional parameters and the additional possibility of CP violation. Moreover, since the p-value obtained by restricting the 3+2 scheme to 3+1 disfavors the 3+1 scheme only at $1.2\sigma$ \cite{1308.5288}, we think that considering the larger complexity of the 3+2 scheme is not justified by the data\footnote{ See, however, the somewhat different conclusions reached in Ref.~\cite{1303.3011}. }. The results of the 3+1+1-HIG fit presented in Tab.~\ref{tab:chi} show that the appearance-disappearance parameter goodness-of-fit is remarkably good, with a $\Delta\chi^{2}_{\text{PG}}$ that is smaller than those in the 3+1-HIG and 3+2-HIG fits.
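As an aside (ours), the derived statistical entries in Tab.~\ref{tab:chi} can be reproduced from the quoted $\chi^{2}$ differences alone, assuming standard $\chi^{2}$ statistics and SciPy:
\begin{verbatim}
from scipy.stats import chi2, norm

def gof(delta_chi2, ndf):
    # p-value of a chi^2 difference with ndf degrees of freedom
    return chi2.sf(delta_chi2, ndf)

def nsigma(delta_chi2, ndf):
    # two-sided Gaussian significance of the same p-value
    return norm.isf(gof(delta_chi2, ndf) / 2)

# 3+1-LOW column: GoF_PG from Delta-chi2_PG = 12.7 with 2 d.o.f.,
# and n-sigma_NO from Delta-chi2_NO = 47.5 with 3 d.o.f.
print(f"{gof(12.7, 2):.1%}")           # ~0.2%
print(f"{nsigma(47.5, 3):.1f} sigma")  # ~6.3 sigma
\end{verbatim}
Both printed values match the corresponding 3+1-LOW entries of Tab.~\ref{tab:chi}.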
However, the $\chi^2_{\text{min}}$ in the 3+1+1-HIG fit is only slightly smaller than that in the 3+1-HIG fit and the p-value obtained by restricting the 3+1+1 scheme to 3+1 disfavors the 3+1 scheme only at $0.8\sigma$ \cite{1308.5288}. Therefore, there is no compelling reason to prefer the more complex 3+1+1 scheme to the simpler 3+1 scheme. \begin{figure*}[t] \null \hfill \includegraphics*[width=0.49\linewidth]{fig-01a.eps} \hfill \includegraphics*[width=0.49\linewidth]{fig-01b.eps} \hfill \null \caption{ \label{fig:glo} \footnotesize Allowed regions in the $\sin^{2}2\vartheta_{e\mu}$--$\Delta{m}^{2}_{41}$, $\sin^{2}2\vartheta_{ee}$--$\Delta{m}^{2}_{41}$ and $\sin^{2}2\vartheta_{\mu\mu}$--$\Delta{m}^{2}_{41}$ planes obtained in the global (GLO) 3+1-HIG fit \cite{1308.5288} of short-baseline neutrino oscillation data compared with the $3\sigma$ allowed regions obtained from $\protect\nua{\mu}\to\protect\nua{e}$ short-baseline appearance data (APP) and the $3\sigma$ constraints obtained from $\protect\nua{e}$ short-baseline disappearance data ($\nu_{e}$ DIS), $\protect\nua{\mu}$ short-baseline disappearance data ($\nu_{\mu}$ DIS) and the combined short-baseline disappearance data (DIS). The best-fit points of the GLO and APP fits are indicated by crosses. } \end{figure*} Figure~\ref{fig:glo} shows the allowed regions in the $\sin^{2}2\vartheta_{e\mu}$--$\Delta{m}^{2}_{41}$, $\sin^{2}2\vartheta_{ee}$--$\Delta{m}^{2}_{41}$ and $\sin^{2}2\vartheta_{\mu\mu}$--$\Delta{m}^{2}_{41}$ planes obtained in the 3+1-HIG fit of Ref.~\cite{1308.5288}. These regions are relevant, respectively, for $\nua{\mu}\to\nua{e}$ appearance, $\nua{e}$ disappearance and $\nua{\mu}$ disappearance searches. The corresponding marginal allowed intervals of the oscillation parameters are given in Tab.~\ref{tab:int}. Figure~\ref{fig:glo} also shows the region allowed by $\nua{\mu}\to\nua{e}$ appearance data and the constraints from $\nua{e}$ disappearance and $\nua{\mu}$ disappearance data. One can see that the combined disappearance constraint in the $\sin^{2}2\vartheta_{e\mu}$--$\Delta{m}^{2}_{41}$ plane excludes a large part of the region allowed by $\nua{\mu}\to\nua{e}$ appearance data, leading to the well-known appearance-disappearance tension \cite{1103.4570,1107.1452,1109.4033,1111.1069,1207.4765,1207.6515,1302.6720,1303.3011} quantified by the parameter goodness-of-fit in Tab.~\ref{tab:chi}. \begin{table}[t] \begin{center} \begin{tabular}{c|cccc} CL & $\Delta{m}^2_{41}[\text{eV}^2]$ & $\sin^22\vartheta_{e\mu}$ & $\sin^22\vartheta_{ee}$ & $\sin^22\vartheta_{\mu\mu}$ \\ \hline 68.27\% & $ 1.55 - 1.72 $ & $ 0.0012 - 0.0018 $ & $ 0.089 - 0.15 $ & $ 0.036 - 0.065 $ \\ \hline 90.00\% & $ 1.19 - 1.91 $ & $ 0.001 - 0.0022 $ & $ 0.072 - 0.17 $ & $ 0.03 - 0.085 $ \\ \hline 95.00\% & $ 1.15 - 1.97 $ & $ 0.00093 - 0.0023 $ & $ 0.066 - 0.18 $ & $ 0.028 - 0.095 $ \\ \hline 95.45\% & $ 1.14 - 1.97 $ & $ 0.00091 - 0.0024 $ & $ 0.065 - 0.18 $ & $ 0.027 - 0.095 $ \\ \hline 99.00\% & $ 0.87 - 2.09 $ & $ 0.00078 - 0.003 $ & $ 0.054 - 0.2 $ & $ 0.022 - 0.12 $ \\ \hline 99.73\% & $ 0.82 - 2.19 $ & $ 0.00066 - 0.0034 $ & $ 0.047 - 0.22 $ & $ 0.019 - 0.14 $ \end{tabular} \end{center} \caption{ \label{tab:int} \footnotesize Marginal allowed intervals of the oscillation parameters obtained in the global 3+1-HIG fit of short-baseline neutrino oscillation data \cite{1308.5288}. } \end{table} It is interesting to investigate the impact of the MiniBooNE experiment on the global analysis of short-baseline neutrino oscillation data.
With this aim, the authors of Ref.~\cite{1308.5288} performed two additional 3+1 fits: a 3+1-noMB fit without MiniBooNE data and a 3+1-noLSND fit without LSND data. From Tab.~\ref{tab:chi} one can see that the results of the 3+1-noMB fit are similar to those of the 3+1-HIG fit and the exclusion of the case of no oscillations remains at the level of $6\sigma$. On the other hand, in the 3+1-noLSND fit, without LSND data, the exclusion of the case of no oscillations drops dramatically to $2.1\sigma$. In fact, in this case the main indication in favor of short-baseline oscillations is given by the reactor and Gallium anomalies, which have a similar statistical significance (see Section~\ref{Introduction}). Therefore, it is clear that the LSND experiment is still crucial for the indication in favor of short-baseline $\bar\nu_{\mu}\to\bar\nu_{e}$ transitions and the MiniBooNE experiment has been rather inconclusive. \section{Conclusions} \label{Conclusions} The results of the global fit of short-baseline neutrino oscillation data presented in Ref.~\cite{1308.5288} show that the data can be explained by 3+1 neutrino mixing and this simplest scheme beyond three-neutrino mixing cannot be rejected in favor of the more complex 3+2 and 3+1+1 schemes. The low-energy MiniBooNE anomaly cannot be explained by neutrino oscillations in any of these schemes. Moreover, the crucial indication in favor of short-baseline $\bar\nu_{\mu}\to\bar\nu_{e}$ appearance is still given by the old LSND data and the MiniBooNE experiment has been inconclusive. Hence, new and better experiments are needed in order to check this signal \cite{1204.5379,1304.2047,1307.7097,1308.0494,1308.6822}.
\chapter*{Introduction} In this thesis I will try to shed some light on the twilight zone between mathematics and theoretical physics. The risk of being in a twilight zone is that you get lost, which in this case entails uncertainty about the kind of scientific language to use or even about the kind of reasoning. The pay-off is that it is exciting to discover unexpected things. In my case these are the rich mathematical structures that are hidden in (our view on) nature. The structures that I found while studying BRST quantization (BRST stands for the initials of its inventors Becchi, Rouet, Stora and Tyutin) of Topological Quantum Field Theories (TQFT's) include equivariant cohomology, the super Fourier transform and double complexes. Along with some geometry they constitute the tool-kit for the mathematical model I developed for certain TQFT's. This tool-kit is presented in the first chapter and provides the reader with the mathematical background that is necessary to read this thesis. Although this thesis certainly is a thesis in mathematics, some readers might welcome a background from physics. Therefore, the rest of this introduction is devoted to what field theory is about and why we are interested in TQFT's. Field theories are used to describe dynamical systems (e.g., systems of elementary particles). They involve a finite dimensional manifold $M$ (which is often identified with space-time), a set of fields (often a space of sections of some fiber bundle over $M$) and a real-valued functional ${\cal S}$ on the set of fields, called the {\it action}. The action needs to be local, which means that it is of the form ${\cal S}=\int_M {\cal L}$, where ${\cal L}$ is a $C^\infty(M)$-valued functional, called the Lagrangian density. Stationary points of the action are called the classical solutions of the field theory. They satisfy the Euler-Lagrange equations. \vspace{5pt} An example of a field theory is Yang-Mills theory. Let $M$ be a four-dimensional Riemannian manifold, $G$ a semi-simple compact Lie group and $P \rightarrow M$ a principal $G$-bundle. Let ${\cal A}$ denote the affine space of connections on $P$. This is the space of fields for Yang-Mills (YM) theory. We shall now describe the action functional. For any connection $A \in {\cal A}$ the curvature $F_A$ is a two-form on $P$ with values in the Lie algebra ${\got g}$ of $G$. Let us choose a $G$-invariant metric on $P$. Then the YM action is the square of the length of the curvature, using this metric and the Killing metric on ${\got g}$. In a formula \begin{equation} {\cal S}_{\rm YM}(A) = \sfrac{1}{4\pi^2} \int_M | F_A |^2 \end{equation} where the integrand (obtained using the Killing form on ${\got g}$ and the Riemannian metric on $P$ at each point) is a $G$-invariant function and hence descends to a function on $M$. The classical solutions of YM theory are connections that satisfy the YM equations; these include the self-dual connections on $P$. An important feature of this field theory is that there is an infinite dimensional group, the group of gauge transformations ${\cal G}$, acting on the space of fields and leaving the action invariant. In fact, the action ${\cal S}_{\rm YM}$ should be regarded as being defined on the quotient ${\cal A}/{\cal G}$. At least part of the space of classical solutions is then finite dimensional (namely the moduli space of instantons). \vspace{5pt} Topological Field Theories are given by actions that are invariant under diffeomorphism groups of some kind.
The name \lq topological' is a bit misleading because there is often much more structure involved than only a topology. There are two types of Topological Field Theories. The first type consists of field theories described by an action that is invariant under Diff($M$). The second type has a constant action and is therefore invariant under all possible transformations of the fields. Of both types we will give an example here in order to show the difference between these field theories. \vspace{5pt} The best known example of the first type is Chern-Simons (CS) theory. Let $M$ now be a compact three-dimensional manifold, $G$ a semi-simple compact Lie group with Lie algebra ${\got g}$. The space of fields ${\cal A}$ is the linear space of ${\got g}$-valued one forms on $M$. The following action on ${\cal A}$ defines CS-theory \begin{equation} {\cal S}_{\rm CS}(A) = \sfrac{1}{4\pi} \int_M {\rm Tr}(A \wedge {\rm d}A + \frac{1}{3} A \wedge [A,A]), \end{equation} where Tr denotes the Killing form on ${\got g} \times {\got g}$. Since the integrand is a differential three form, the action is manifestly invariant under Diff($M$), the group of all diffeomorphisms of $M$. The classical solutions are the flat connections on the trivial bundle $M \times G$. Of great interest is the quantum theory associated to this field theory. The so-called quantum observables give rise to knot invariants that are related to the Jones and HOMFLY polynomials. This is the main importance of topological field theories: although the classical theory is rather dull, their associated quantum field theories give rise to interesting invariants. \vspace{5pt} A well-known example of the second type is Topological Yang-Mills theory (TYM). The space of fields is the same as for ordinary YM, but the action now reads \begin{equation} {\cal S}_{\rm TYM}(A)= \sfrac{1}{4\pi^2} \int_M {\rm Tr}(F_A \wedge F_A) \end{equation} There is no metric needed to define this action. The integrand is a four form on $P$, but it is $G$-invariant and horizontal and therefore corresponds uniquely to a four form on $M$. It turns out that this action is independent of $A$; it only depends on the type of the bundle $P \rightarrow M$. So, all variations of the fields $A$ are symmetries and the symmetry group is Diff(${\cal A}$). However, recall that the space of fields is ${\cal A}/{\cal G}$ rather than ${\cal A}$. In order to achieve this change, one uses the larger symmetry group ${\cal G} \rhd \!\!\! < {\rm Diff}({\cal A})$ and only considers so-called basic elements with respect to the ${\cal G}$-action. Of course, classically, TYM is not interesting at all. All fields are stationary points, hence there are no Euler-Lagrange equations. On the other hand, the quantum observables are extremely interesting. Witten showed in [W1] that they correspond to the Donaldson polynomials, which are invariants of the differentiable structure on $M$. \vspace{5pt} A big difference between the two types of topological field theories is that for the first type it is essential that the fields are really fields, i.e., that there is an underlying manifold $M$, whereas for the second type this is not important at all. We could as well start with some finite dimensional manifold $X$ (representing the space of fields) and study quantization on $X$ in the presence of a symmetry group ${\cal G} \rhd \!\!\! < {\rm Diff}(X)$.
So, where the type 1 theories are essentially infinite dimensional, the type 2 theories are not, except for the fact that Diff($X$) is infinite dimensional. The type 2 theories were baptized \lq Cohomological Field Theories' by Witten ([W2]). \vspace{5pt} In this thesis we will study the BRST quantization method applied to these Cohomological Field Theories. We will replace Diff($X$) by a finite dimensional group $H$, acting transitively on $X$. This leaves us with a completely finite dimensional model for Cohomological Field Theories. Path integrals in this model are just integrals over $X$ and thus can be studied in great detail. We will use them in the last chapter to prove a localization formula for equivariant forms on manifolds with boundary. \vspace{5pt} To conclude this introduction we will make some comments on notation and on prerequisites. Knowledge of differential geometry (bundles, connections, cohomology) and of some algebra (graded algebras, rings, modules) is certainly necessary to read this thesis. Familiarity with super structures (Berezin integration, super derivations) and quantum field theory (path integrals, correlation functions) might be helpful. In the sequel, the symbols $M$ and $N$ denote finite dimensional differentiable manifolds, $Z$ is always a submanifold defined by the zeroes of some set of smooth functions and from now on $X$ denotes a reduced phase space. $G$, $H$ and $S$ are Lie groups and ${\cal A}$, ${\cal B}$ and ${\cal P}$ are algebras, often with a lot of gradings. This implies that for our model we will use $G$ and $M$ rather than the symbols ${\cal G}$ and $X$ of above (the reader should not confuse this with the $G$ and $M$ appearing in the type 1 theories!). Furthermore, Lie algebras are always denoted by the gothic symbols ${\got g}$, ${\got h}$ and ${\got s}$ and elements of these Lie algebras can be traced by searching for the Greek symbol $\xi$. The reader should be warned that whenever a choice of a basis of a vector space is involved, I will make use of the summation convention, saying that a summation is understood wherever the same indices appear, one as a subscript and one as a superscript. Tensor products will always be graded and will be taken over the real numbers ${\bf R}$, as is also the case for all vector spaces and algebras. \chapter{The tool-kit} The aim of this chapter is to provide the mathematical tools used in the next chapters to build the finite dimensional model for Cohomological Field Theories. Theorems 1.2.1 and 1.3.3 were published in [Ka1]. \section{Geometry} In this first section we will introduce the main concepts of symplectic geometry. Working in the symplectic category has proven to be extremely useful in the past. In this thesis it will be used in chapter two (BRST theory is very transparent for Hamiltonian group actions) and in chapter four (it provides nice examples of cohomology computations). Also in this section, we will introduce the geometric input needed to understand the structure of path integrals that is investigated in chapter three. \subsection{Symplectic geometry} Let $M$ be a symplectic manifold with symplectic form $\sigma$. This means that $\sigma$ is closed and that, at each $x \in M$, $\sigma_x$ is a non-degenerate antisymmetric bilinear form on $T_xM$. In particular, this implies that $M$ is even dimensional and orientable, since the top degree part of $\exp(\sigma)$ is nowhere vanishing.
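A standard example, included here for orientation (it is not part of the original exposition): on $M={\bf R}^{2n}$ with coordinates $(q^1,\ldots,q^n,p_1,\ldots,p_n)$, take $\sigma = \sum_{i=1}^n {\rm d}p_i \wedge {\rm d}q^i$. This form is closed and non-degenerate, and the top degree part of $\exp(\sigma)$ is the Liouville volume form \[ \frac{\sigma^n}{n!} = {\rm d}p_1 \wedge {\rm d}q^1 \wedge \ldots \wedge {\rm d}p_n \wedge {\rm d}q^n . \] With the conventions introduced below ($\iota_{V_f}\sigma = -{\rm d}f$), one finds in these coordinates \[ V_f = \sum_i \left( \frac{\partial f}{\partial p_i}\frac{\partial}{\partial q^i} - \frac{\partial f}{\partial q^i}\frac{\partial}{\partial p_i} \right), \hspace{1cm} \{f,g\} = \sum_i \left( \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q^i} - \frac{\partial f}{\partial q^i}\frac{\partial g}{\partial p_i} \right). \]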
The symplectic form can be used to associate to any function $f \in C^\infty(M)$ a vector field $V_f$ as follows. $\sigma_x$ identifies $T_xM$ and $T^*_xM$. By definition, $V_f(x)$ is $-{\rm d}f_x$ under this identification. $V_f$ is called the Hamiltonian vector field associated to the function $f$. This makes $C^\infty(M)$ into a Poisson algebra through \begin{equation} \{f,g \} := \sigma(V_f,V_g) \end{equation} This bracket is antisymmetric and satisfies the derivation property. The Jacobi identity follows from ${\rm d}\sigma=0$. Let $G$ be a Lie group acting on a symplectic manifold $M$ with symplectic form $\sigma$. The $G$-action is called Hamiltonian if there exists an equivariant mapping $\mu:M \rightarrow {\got g^*}$, called the momentum mapping, such that for each $\xi \in {\got g}$ the infinitesimal action $V_\xi$ is equal to the Hamiltonian vector field defined by the function $f_\xi: x \mapsto \langle \xi, \mu(x)\rangle$ on $M$. Thus \begin{equation} \iota_{V_\xi} \sigma = -{\rm d}f_\xi \;\hspace{10pt} (\xi \in {\got g}) \end{equation} where the left-hand side denotes contraction of $\sigma$ with the vertical vector field $V_\xi$. To a Hamiltonian $G$-action on $M$ one can associate a symplectic quotient $X$ (also called the reduced phase space) as follows. Suppose that $0 \in {\got g^*}$ is a regular value of $\mu$. This implies that $Z=\mu^{-1}(0)$ is a submanifold of $M$ and that $G$ acts locally free on $Z$ (see, e.g., [AM]). If $G$ is compact, only finite stabilizer groups can occur, so $X=\mu^{-1}(0)/G$ is well-defined as an orbifold ([Sa]) and is called the symplectic quotient of $M$ by $G$. A group action is called symplectic if $\sigma$ is invariant. This implies that ${\rm d}(\iota_{V_\xi}\sigma)=0$ for any $\xi \in {\got g}$. If, in addition, $H^1(M)=0$ and $G$ is compact we obtain a map $\mu$ that can be made equivariant by integration. So every compact symplectic group action on a simply connected manifold is Hamiltonian ([AM],[Ki]). An important example of a Hamiltonian group action is the following. Let $M$ be any manifold (not necessarily symplectic), let $G$ be any group action on $M$. Then $N=T^*M$ is a symplectic manifold and the natural lift of the $G$-action to $N$ is Hamiltonian. If $G$ acts freely on $M$, then the symplectic quotient is isomorphic to $T^*(M/G)$. \subsection{Submanifolds and Thom classes} In physics, one is interested (when quantizing via path integrals) in writing integrals over submanifolds as integrals over the whole manifold. The manifolds in this context are infinite dimensional, but for the sake of simplicity we will stick to finite dimensional objects throughout this thesis. Expressing integrals over submanifolds as integrals over larger manifolds asks for representatives of Poincar\'e duals. We shall explain this. Let $M$ be a smooth oriented manifold of dimension $m>0$ and let $Z \subset M$ be a smooth oriented compact submanifold of dimension $m-n$. Integration over $Z$ induces a linear map from $H^{m-n}(M)$ to ${\bf R}$ (assuming that $Z$ has no boundary), hence an element of $H^{m-n}(M)^*$. By Poincar\'e duality this element corresponds to an element of $H_{\rm cpt}^n(M)$, which is by definition the Poincar\'e dual $[\eta]$ of $Z \subset M$. More explicitly, if $\omega$ is any element of $H^{m-n}(M)$, then \begin{equation} \int_Z i^* \omega \; = \; \int_M \omega \wedge \eta \end{equation} where $i:Z \rightarrow M$ is the inclusion map. Obviously, the support of $\eta$ may be shrunk into any open neighbourhood of $Z$ in $M$ (see, e.g., [BT]).
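A degenerate but instructive illustration (ours): take $Z=\{x_0\}$ a point in a connected oriented $M$, so that $n=m$. Then $\eta$ can be taken to be any $m$-form with compact support near $x_0$ and total integral 1: for $\omega \in H^0(M)$, i.e., a constant function $c$, both sides of the equation above equal $c$.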
Let $\pi :{\cal V} \rightarrow M$ be a vector bundle with fiber the $n$-dimensional vector space $V$ and let $F:M \rightarrow {\cal V}$ be a generic section. If $\tau$ is a representative for the Thom class of ${\cal V}$, then $F^* \tau$ is a representative for the Poincar\'e dual of the zero locus of $F$ in $M$ ([BT]). By definition, the Thom class is represented by forms that give 1 when integrated over the fibers. Stated otherwise, let $H_{\rm cv}^i({\cal V}) \rightarrow H^{i-n}(M)$ be integration over the fibers of classes that can be integrated when restricted to the fibers (normally it is assumed that the forms have compact support in the fiber direction, but, as is pointed out in [MQ], $L_2$ support will also do). This map is an isomorphism, the Thom isomorphism, and its inverse corresponds to $\pi^*$ followed by multiplication with the Thom class $[\tau]$. Representatives for the Thom class of a vector bundle are not very easy to find, except for the case of a trivial bundle. In this case, the Thom class is just a normalized generator of $H_{\rm cpt}^n(V)$. Using an inner product on $V$ and an orientation, we get a volume form $dz^1 \wedge \ldots \wedge dz^n$ and the Thom class is represented by $f \; dz^1 \wedge \ldots \wedge dz^n$, where $f$ is a function on $V$ such that $\int_V f =1$. We will restate this trivial result in a complicated way now, the purpose being the generalization to associated vector bundles $M \times_G V$ appearing in chapter three, where we obtain the Mathai-Quillen representative for the Thom class ([MQ]). Let $z^i$ be linear coordinates on $V$, $b_i$ their dual coordinates on $V^*$. Let $\psi^i={\rm d}z^i$ and $\bar{\psi}_i={\rm d}b_i$. On the algebra $\Omega(M) \otimes \Omega(V) \otimes \Omega(V^*)$ consider the differential ${\rm s}={\rm d}\otimes 1\otimes 1 + 1 \otimes {\rm d} \otimes 1 + 1 \otimes 1 \otimes \delta$, where $\delta$ is defined by \begin{equation} \label{42} \delta (b_i)=0, \hspace{1cm} \delta (\bar{\psi}_i)=-b_i \end{equation} Using all this notation, we have \begin{prop}\label{tc1} Let $F$ be a map $M \rightarrow V$ such that $F^{-1}(0)$ is a manifold. $F$ can be regarded as a section $M \rightarrow M \times V$ of a trivial vector bundle. Assume that this section is transversal to the zero section. The differential form \begin{equation}\label{reptc1} (2\pi)^{-n} \int_{V^*} e^{i \; {\rm s}(z^j \bar{\psi}_j - i\sum_j b_j \bar{\psi}_j)} =(\sqrt{\pi})^{-n} e^{-\sum_j (z^j)^2} {\rm d}z^1 \wedge \ldots \wedge {\rm d}z^n \end{equation} represents the Thom class of the vector bundle $M \times V$. Its pull back by $F:M \rightarrow M \times V$ is a closed form in $\Omega(M)$ representing the Poincar\'e dual of the submanifold given by the equations $F=0$. \end{prop} \section{Equivariant cohomology} Equivariant cohomology has been set up to compute the cohomology of quotient spaces of the form $M/G$, where $M$ is some manifold and $G$ a connected Lie group acting on $M$. In the case of a free and proper $G$-action, $M/G$ is a manifold without singularities and one requires equivariant cohomology to coincide with the de Rham cohomology of the quotient space. We shall define equivariant cohomology and we will see that it fulfills this requirement. Furthermore, we will introduce two very useful models in this section. \subsection{Topological definition} Let $EG \rightarrow BG$ be the universal $G$-bundle, i.e., $EG$ is contractible and every principal $G$-bundle over some base space $B$ is the pull back of the universal one by a map $B \rightarrow BG$.
$BG$ is called the classifying space of $G$-bundles. Every Lie group has a universal bundle that is unique up to homotopy (see, e.g., [Hu]). The standard example of a universal bundle is the inductive limit of the Hopf fibrations $S^{2n+1} \rightarrow {\bf CP}^n$, which is a model for the universal $S^1$-bundle. Let $M$ be a $G$-manifold. We define the associated fibre bundle $M_G := EG \times_G M$ with fibres isomorphic to $M$ and base space $BG$. The equivariant cohomology of $M$, $H^\ast_G (M)$, is by definition the cohomology of the fibre bundle $M_G$: \begin{equation} H^\ast_G (M) := H^\ast (EG \times_G M) \end{equation} In the case of a free and proper group action, $M_G$ can be seen as a fibre bundle over $M/G$ with fibre the contractible space $EG$. So we have $H^\ast (M_G) \cong H^\ast (M/G)$. On the other hand, for $M = \{ x \}$ (the singleton set) we have $H^\ast_G(M) = H^\ast (BG)$ and therefore the equivariant cohomology of a point can be quite complicated (see, e.g., [AB]). For compact connected groups $G$, there are two nice models for the equivariant cohomology of $G$-manifolds $M$. Compactness is really necessary for these models (see [AB]). The models are called the Weil model and the Cartan model. We shall describe these in detail now. Some algebraic facts used in the next subsections are collected in section 1.4. \subsection{Weil model} The Weil model for equivariant cohomology makes use of the Weil algebra $W({\got g}) := S({\got g}^\ast ) \otimes \Lambda ({\got g}^\ast )$. It has a ${\bf Z}$-grading by giving the generators $\phi^a$ of $S({\got g}^\ast )$ degree 2 and the generators $\omega^a$ of $\Lambda ({\got g}^\ast )$ degree 1 $(a=1,...,\dim {\got g}^\ast )$. The $\omega^a$ are of course anti-commuting, the $\phi^a $ commuting and both sets are dual to the same fixed basis $( \xi_a)$ of ${\got g}$ (the Lie algebra of $G$). \vspace{10pt} Suppose we are given a connection on some principal $G$-bundle $P$. This gives rise to two maps, the curvature ${\got g}^\ast \rightarrow \Omega^2(P)$ and the connection ${\got g}^\ast \rightarrow \Omega^1 (P)$. These maps generate an algebra homomorphism, called the Chern-Weil homomorphism, \begin{equation}\label{e2} W({\got g}) = S({\got g}^\ast ) \otimes \Lambda ({\got g}^\ast ) \rightarrow \Omega (P) \end{equation} We shall make this map into a homomorphism of differential algebras by defining the following differential on the Weil algebra $W({\got g})$ (the $f^{a}_{bc}$ are the structure constants of ${\got g}$ with respect to the fixed basis $( \xi_a)$): \begin{eqnarray}\label{e3} {\rm d}_W\omega^a &=& - \sfrac{1}{2} f^{a}_{bc} \omega^b \omega^c + \phi^a \\ {\rm d}_W\phi^a &=& - f^{a}_{bc} \omega^b \phi^c \nonumber \end{eqnarray} where a summation is understood over indices appearing both as a subscript and as a superscript. This definition can be extended to $W({\got g})$ using the fact that ${\rm d}_W$ is a graded derivation of degree one (see section 1.4.1). Because the relations above coincide with the definitions of the curvature and Bianchi's identity, respectively, the map (\ref{e2}) is (more or less by definition of ${\rm d}_W$) a homomorphism of differential algebras. However, the differential (\ref{e3}) does not give an interesting cohomology: $H^\ast (W({\got g})) \cong {\bf R}$ as can be seen from a shift of generators $\phi^a \rightarrow \phi^a - \frac{1}{2} f^{a}_{bc} \omega^b \omega^c$. 
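As a quick consistency check, added here for the reader's convenience (it is not spelled out in the original text), ${\rm d}_W$ indeed squares to zero on the generators: using the graded Leibniz rule and the fact that the $\phi^a$ commute with the $\omega^b$, the $\phi$-terms produced in ${\rm d}_W^2 \, \omega^a$ cancel after relabeling indices, leaving \[ {\rm d}_W^2 \, \omega^a = \sfrac{1}{2} f^{a}_{bc} f^{b}_{de} \, \omega^d \omega^e \omega^c = 0, \] which vanishes because $\omega^d \omega^e \omega^c$ is totally antisymmetric, so the contraction implements the Jacobi identity for the structure constants. A similar computation, using the Jacobi identity once more, gives ${\rm d}_W^2 \, \phi^a = 0$.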
It becomes interesting if we introduce two other derivations on $W({\got g})$, the interior product $I_a$ (of degree $-1$) and the Lie derivative $L_a$ (of degree zero) $\; (a=1,...,\dim {\got g})$: \begin{eqnarray}\label{e4} I_a \omega^b &=& \delta^b_a \nonumber \\ I_a \phi^b &=& 0 \\ L_a &=& I_a {\rm d}_W + {\rm d}_W I_a =: [I_a ,{\rm d}_W]^+ \nonumber \end{eqnarray} These operations are the algebraic analogues of the contraction of the connection one-form with the infinitesimal generators of the $G$-action, and of the Lie derivative of differential forms. The action of the $L_a$ is nothing but the natural (coadjoint) action on $W({\got g})$. The operators (\ref{e3}) and (\ref{e4}) generate a ${\bf Z}_2$-graded Lie subalgebra of the Lie superalgebra of all graded derivations. In general, the Lie super bracket of two derivations $D_1$ and $D_2$ is defined by \begin{equation}\label{sbrac} [D_1,D_2] = D_1D_2 - (-1)^{{\rm deg}(D_1) {\rm deg}(D_2)} D_2D_1 \end{equation} In this case, the bracket is easily calculated and is given by the following formulas (the upper + means anti-commutator, the upper - means commutator of the operators). \begin{eqnarray}\label{e5} [{\rm d}_W, L_a]^- = 0 & & [I_a, I_b]^+ = 0 \nonumber \\ {[{\rm d}_W, I_a]}^+ = L_a & & [ L_a, L_b ]^- = f^{c}_{ab} L_c \\ {[{\rm d}_W,{\rm d}_W]}^+ = 0 & & [ L_a, I_b]^- = f^{c}_{ab} I_c \nonumber \end{eqnarray} As in the expressions above, we sum over indices if they appear twice, one up and one down. One sees immediately that the relations (\ref{e5}) are independent of the choice of a basis of ${\got g}$. They reflect the differential geometric situation on $G$-manifolds (the bracket is just the commutator), so the algebra (\ref{e5}) not only acts on $W({\got g})$, it also acts on the algebra of differential forms $\Omega (M)$, and thus also on the tensor product of these two algebras. We will distinguish operators on different algebras by using different notations. Namely, $i_a$ for the interior product of forms with $V_{\xi_a}$, the vertical vector field generated by $\xi_a$, and ${\cal L}_a$ for the Lie derivative of forms in the direction of $V_{\xi_a}$. Finally, we are able to define the Weil model for equivariant cohomology. The algebra of interest is the basic subalgebra of $W({\got g}) \otimes \Omega (M)$, denoted by $(W({\got g}) \otimes \Omega (M))_{\rm basic}$. It consists of elements annihilated by all the $I_a \otimes 1 + 1 \otimes i_a$ and $L_a \otimes 1 + 1 \otimes {\cal L}_a$: \[ (W({\got g}) \otimes \Omega (M))_{\rm basic} = \] \begin{equation} = \left (\bigcap^{\dim (G)}_{a=1} {\rm ker} (I_a \otimes 1 +1 \otimes i_a) \right ) \cap \left ( \bigcap^{\dim (G)}_{b=1} {\rm ker} (L_b \otimes 1 +1 \otimes {\cal L}_b) \right ) \end{equation} This subalgebra is stable under ${\rm d}_W \otimes 1 + 1 \otimes {\rm d}_M$ (as follows from (\ref{e5})), so it is a differential algebra. In the sequel we shall omit the subscript $W$ and denote this differential also by d. The Chern-Weil homomorphism (\ref{e2}) with $P = EG$ induces an isomorphism on the level of cohomology if $G$ is compact and connected: \begin{equation} H^\ast_d ((W({\got g}) \otimes \Omega (M))_{\rm basic}) \cong H^\ast_G (M) \end{equation} For more details we refer to [AB]. \subsection{Cartan model} Another model for equivariant cohomology which is of considerable interest is the Cartan model.
The map $\omega^a \mapsto 0$, $W({\got g}) \otimes \Omega (M) \rightarrow S({\got g}^\ast ) \otimes \Omega (M)$ induces an algebra isomorphism \begin{equation} (W({\got g}) \otimes \Omega (M))_{\rm basic} \cong (S({\got g}^\ast ) \otimes \Omega (M))^G \end{equation} where the superscript $G$ denotes the (infinitesimal) $G$-invariant subalgebra. This isomorphism can be made into an isomorphism of differential algebras by defining the following derivation on $(S({\got g}^\ast ) \otimes \Omega (M))^G$: \begin{eqnarray}\label{e9} {\rm D} \phi^b &=& 0 \\ {\rm D} \eta &=& (1 \otimes {\rm d} - \phi^b \otimes i_b)(\eta) \hspace{1cm} (\eta \in \Omega (M)) \nonumber \end{eqnarray} This derivation squares to zero on the space of $G$-invariant elements (using the fact that $\phi^b L_b \otimes 1$ acts as zero on $S({\got g}^\ast)$). Its cohomology equals that of the Weil model. This model for equivariant cohomology is called the Cartan model. \vspace{10pt} We shall describe how these models are related in more detail now, thereby introducing a mathematical model for the BRST algebra of topological field theories (more on this in chapter three). Remember that on the algebra $A = W({\got g}) \otimes \Omega (M)$ we have the following differential: \begin{eqnarray}\label{e10} {\rm d}\phi^a &=& -f^{a}_{bc} \omega^b \phi^c \nonumber \\ {\rm d}\omega^a &=& -\sfrac{1}{2} f^{a}_{bc} \omega^b \omega^c + \phi^a \\ {\rm d}\eta &=& {\rm exterior~differentiation~of} \; \eta \in \Omega (M) \nonumber \end{eqnarray} where the $\phi^a$ are generators of $S({\got g}^\ast )$, $\omega^a$ of $\Lambda ({\got g}^\ast )$ and $f^{a}_{bc}$ are structure constants of ${\got g}$, all defined with respect to the same fixed basis of ${\got g}$. The (unrestricted) BRST algebra $B$ of topological models on quotient spaces is the same as above, $B = W({\got g}) \otimes \Omega (M)$ (see, e.g., [OSvB]). But the differential on it (the BRST operator) differs: \begin{equation}\label{e11} \delta = {\rm d}+ \omega^a {\cal L}_a - \phi^b i_b, \end{equation} where the operators ${\cal L}_a$ and $i_b$ act on differential forms only and d acts on the whole algebra. It may be clearer to write $\delta = {\rm d}_W \otimes 1 + 1 \otimes {\rm d}_M + \omega^a \otimes {\cal L}_a - \phi^b \otimes i_b$ instead of (\ref{e11}). The action of (\ref{e11}) on $B$ coincides with that of, e.g., [OSvB]. It is easy to check that its square equals zero. We will now show that there is an algebra automorphism of $A$ that carries (\ref{e10}) into (\ref{e11}). \vspace{10pt} Let $\psi : B \rightarrow A$ be the map $\psi={\rm exp}(-\omega^a i_a) = \prod_\alpha (1-\omega^\alpha \otimes i_\alpha )$. Note that $\psi $ is degree preserving and that it differs in two ways from the map introduced by Mathai and Quillen ([MQ], \S 5). In the first place, our map is an isomorphism on the whole algebra, rather than only on the basic or $G$-invariant subalgebra. Secondly, we discriminate between the \lq interior product' defined as an operation on forms and on the Weil algebra. The next theorem gives a natural setting for the algebra homomorphisms of [MQ], \S 5. \begin{thm} $\psi $ is an isomorphism of differential algebras, so the diagram \[ \begin{array}{rcl} B & \stackrel{\psi }{\longrightarrow } & A \\ & & \\ \delta \downarrow & & \downarrow {\rm d} \\ & & \\ B & \stackrel{\psi }{\longrightarrow } & A \end{array} \; \;\; \; \mbox{commutes.} \] \end{thm} ${\bf{Proof.}}$ $\psi^{-1} = {\rm exp} (\omega^a i_a)$, so it is clear that $\psi $ is bijective.
Furthermore, it is an algebra homomorphism because all the factors $(1- \omega^\alpha \otimes i_\alpha)$ are algebra homomorphisms. This implies that $\psi^{-1} \circ {\rm d} \circ \psi$ is a derivation on $B$, so that it would be sufficient to check that it equals $\delta$ on the generators of $B$. However, it is just as easy to verify the equivalence of the two differentials directly. In the sequel, we sum over roman but \underline{not} over greek indices. We have: \begin{eqnarray} \delta \circ \omega^\alpha i_\alpha &=& -\sfrac{1}{2} f^{\alpha }_{bc} \omega^{b} \omega^{c} i_\alpha + \phi^\alpha i_\alpha - \omega^\alpha \delta i_\alpha \\ \omega^\alpha i_\alpha \circ \delta & = & \omega^\alpha i_\alpha {\rm d}-\omega^\alpha \omega^a i_\alpha {\cal L}_a - \omega^\alpha \phi^b i_\alpha i_b \nonumber \end{eqnarray} Subtracting these equations results in: \begin{equation}\label{e13} [\delta ,\omega^\alpha i_\alpha ] = -\sfrac{1}{2} f^{\alpha }_{bc} \omega^b \omega^c i_\alpha + \phi^\alpha i_\alpha - \omega^\alpha {\cal L}_\alpha - \omega^\alpha \omega^a [{\cal L}_a, i_\alpha ] \end{equation} from which we see that $[\delta ,\omega^\alpha i_\alpha ] = (1+\omega^\alpha i_\alpha ) [\delta ,\omega^\alpha i_\alpha ]$. We want to commute the extra term (\ref{e13}) to the right of the product $\prod_\alpha (1+ \omega^\alpha i_\alpha )$. So we compute \begin{equation} [[\delta , \omega^\alpha i_\alpha ], \omega^\beta i_\beta ] = [ -\omega^\alpha {\cal L}_\alpha , \omega^\beta i_\beta ] = - f^{c}_{\alpha \beta } \omega^\alpha \omega^\beta i_c \end{equation} Finally, we get: \begin{eqnarray} \delta \circ \psi^{-1} & = & \prod_\alpha (1+\omega^\alpha i_\alpha ) \{ \delta - \sum_\alpha \omega^\alpha {\cal L}_\alpha + \sum_\alpha \phi^\alpha i_\alpha - \sfrac{1}{2} \sum_\alpha f^{\alpha }_{bc} \omega^b \omega^c i_\alpha \nonumber \\ & + & \sum_\alpha f^{c}_{a\alpha } \omega^a \omega^\alpha i_c - \sum_{\alpha < \beta } f^{c}_{\alpha \beta } \omega^\alpha \omega^\beta i_c \}= \\ &=& \psi^{-1} \circ d. \nonumber \end{eqnarray} $\Box$ \vspace{10pt} (In the abelian rank-one case the theorem can be checked in one line: for $\eta \in \Omega (M)$ we have $(1+\omega i)\,{\rm d}\,(1-\omega i)\,\eta = {\rm d}\eta - \phi \, i\eta + \omega ({\rm d} i + i {\rm d})\eta = \delta \eta$.) As a corollary of this theorem, we get the following isomorphism: \begin{equation}\label{isonr} H^\ast_\delta (B) \cong H^\ast_d (A) \cong H^*(M) \end{equation} where the last isomorphism follows from the triviality of the cohomology of $({\rm d}, W({\got g}))$. To compute the image of the basic subalgebra of $A$, we need to know the images of the operators $I_a \otimes 1 + 1 \otimes i_a$ (these images already appeared in [OSvB], but now we know where they come from). It turns out that the corresponding operator on $B$ only acts on $W({\got g})$. This follows from: \begin{equation}\label{e17} (I_a \otimes 1) \prod_{\alpha } (1+ \omega^\alpha i_\alpha ) = \prod_{\alpha } (1+ \omega^\alpha i_\alpha ) (I_a \otimes 1 + 1 \otimes i_a) \end{equation} Furthermore, the operators $L_a \otimes 1 + 1 \otimes {\cal L}_a$ commute with $\psi $, so the $G$-action on both algebras is the same. This follows also from (compare this with (\ref{e5})): \begin{equation}\label{e18} [{\rm d}+ \omega^a {\cal L}_a - \phi^b i_b, I_c \otimes 1] = L_c \otimes 1 + 1 \otimes {\cal L}_c \end{equation} We are now able to compute the equivalence of the Weil model, induced by $\psi $. The intersection of the kernels of $I_a \otimes 1$ restricts $B$ to $S({\got g}^\ast )\otimes \Omega (M)$. The kernels of $L_a \otimes 1+1 \otimes {\cal L}_a$ restrict it further to the $G$-invariant subalgebra of $S({\got g}^\ast ) \otimes \Omega (M)$.
So the corresponding subalgebra of $A_{\rm basic}$ is $(S({\got g}^\ast )\otimes \Omega (M))^G$, the algebra of the Cartan model! Even more is true. The operator $\delta $ on this subalgebra equals the differential (\ref{e9}). We summarize this in the following theorem. \begin{thm} The $\omega$-independent, $G$-invariant elements of the BRST algebra $B$ give the Cartan model for equivariant cohomology. We have the following commutative diagram: \begin{equation}\label{e19} \begin{array}{rcl} (B,\delta ) & \stackrel{\psi }{\longrightarrow } & (A,d) \\ & & \\ \uparrow & & \uparrow \\ & & \\ (S({\got g}^\ast ) \otimes \Omega (M))^G & \stackrel{\psi }{\longrightarrow } & (W({\got g}) \otimes \Omega (M))_{\rm basic} \end{array} \end{equation} where the vertical arrows are just inclusions. The inverse of the map $\psi $ between the two restricted algebras is \[ \psi^{-1} \mid_{A_{\rm basic}} : \omega^a \mapsto 0 \] \end{thm} ${\bf{Proof}}$. The commutativity follows from the equations (\ref{e17}) and (\ref{e18}). The last remark follows directly from the identity: \begin{equation} \psi^{-1} \mid_{A_{\rm basic}} \; = \prod_\alpha (1+\omega^\alpha \otimes i_\alpha ) \mid_{A_{\rm basic}} = \prod_\alpha (1-\omega^\alpha I_\alpha \otimes 1) \mid_{A_{\rm basic}}. \end{equation} $\Box$ \vspace{10pt} We would like to point out here that the isomorphism of the bottom line in (\ref{e19}) was also proved by Mathai and Quillen ([MQ], \S 5). \vspace{5pt} A bit more generally, we can use a parameter in the isomorphism $\psi $: \begin{equation} \psi_t = {\rm exp} (-t \omega^a i_a) \; : \; B \rightarrow A \end{equation} We can calculate what operator on $B$ corresponds to d. It turns out that \begin{equation} \psi_{-t} \circ {\rm d}\circ \psi_t = \exp ({\rm ad}(t\omega^a i_a))({\rm d}) = \end{equation} \[ = {\rm d}+ t\omega^a \otimes {\cal L}_a - t \phi^b \otimes i_b + \sfrac{1}{2} t(1-t) f^{c}_{ab} \omega^a \omega^b \otimes i_c. \] So, if we introduce $\delta_t$ as a notation for this differential, then $\delta_0 = {\rm d} , \delta_1 = \delta$. Furthermore, we have \begin{equation} (I_a \otimes 1 + (1-t) \; 1 \otimes i_a) \prod_\alpha (1+ t\omega^\alpha i_\alpha) = \prod_\alpha (1+ t\omega^\alpha i_\alpha) (I_a \otimes 1 + 1 \otimes i_a) \end{equation} and \begin{equation} [\delta_t, I_a \otimes 1 + (1-t) \; 1 \otimes i_a] = L_a \otimes 1 + 1 \otimes {\cal L}_a \end{equation} So we obtain a family of Lie superalgebras acting on $W({\got g}) \otimes \Omega (M)$, generated by \begin{eqnarray} & & I_a \otimes 1 + (1-t) \; 1 \otimes i_a \nonumber \\ & & 1 \otimes {\cal L}_a + L_a \otimes 1 \\ & & \delta_t \nonumber \end{eqnarray} \section{Fourier transform of differential forms} Let $V$ be a complex $n$-dimensional vector space and let $\Lambda(V)$ be its Grassmann algebra of dimension $2^n$. Recall that Berezin integration on $\Lambda(V)$ is a linear map from $\Lambda(V)$ to $\bf{C}$ that is zero on elements of degree less than $n$ and is 1 on some fixed element $\psi^1 \wedge \ldots \wedge \psi^n \in \Lambda^n(V)$. It is called integration, because it has a lot of properties similar to ordinary integration. E.g., a linear coordinate transformation $A:V \rightarrow V$ must be compensated by a Jacobian. However, this Jacobian is $\det^{-1}(A)$ instead of $\det(A)$. (By the way, this is precisely the reason why integration of differential forms can be defined independently of coordinates: the Jacobians cancel each other!). A nice reference for this material is the book of Bryce de Witt [dW].
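As a quick illustration, Berezin integration is easily mechanized. The following minimal Python sketch (the helper names are our own, and we take $n=2$ with an integer matrix purely for illustration) realizes $\Lambda(V)$ as a dictionary of monomials and checks that a linear substitution of the generators produces a factor $\det(A)$ under $\int d\psi$, which is why the compensating Jacobian is $\det^{-1}(A)$:

\begin{verbatim}
# Sketch: Lambda(V) for dim(V) = n, elements stored as
# {frozenset of generator indices: coefficient}.  The product sign
# counts the transpositions needed to merge two ordered index blocks.
n = 2

def wedge(a, b):
    out = {}
    for ma, ca in a.items():
        for mb, cb in b.items():
            if ma & mb:                  # repeated odd generator: zero
                continue
            inv = sum(1 for x in ma for y in mb if x > y)
            out[ma | mb] = out.get(ma | mb, 0) + (-1) ** inv * ca * cb
    return {m: c for m, c in out.items() if c}

def berezin(a):
    """int dpsi: the coefficient of psi^1 ^ ... ^ psi^n."""
    return a.get(frozenset(range(n)), 0)

A = [[2, 3], [1, 4]]                     # det(A) = 5
# substituted generators psi'^i = A^i_j psi^j
psi_prime = [{frozenset([j]): A[i][j] for j in range(n)}
             for i in range(n)]
top = psi_prime[0]
for p in psi_prime[1:]:
    top = wedge(top, p)

# int dpsi (psi'^1 ^ psi'^2) = det(A)
assert berezin(top) == A[0][0] * A[1][1] - A[0][1] * A[1][0]
\end{verbatim}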
We need the following (trivial) extension of this integration map. \begin{equation} \int d \psi : \Lambda(V^*) \otimes \Lambda(V) \rightarrow \Lambda(V^*) \end{equation} where $V^*$ is the linear dual of $V$. The tensor product is a tensor product between ${\bf Z}_2$-graded algebras. Fourier transform on Grassmann algebras can be defined in the following way \begin{defi} Let $\psi^1, \ldots ,\psi^n$ be generators of $\Lambda(V)$ of degree 1 such that $\int d\psi (\psi^1 \wedge \ldots \wedge \psi^n) = 1$ and let $\bar{\psi}_1, \ldots , \bar{\psi}_n$ be their duals in $\Lambda^1(V^*)$. For every $\eta \in \Lambda(V)$, Fourier transform ${\cal F} : \Lambda(V) \rightarrow \Lambda(V^*)$ is defined by \begin{equation} {\cal F}(\eta) = \int d \psi \; (\eta \wedge e^{i \; \bar{\psi}_j \otimes \psi^j}) \end{equation} where ${\rm exp}(i \; \bar{\psi}_j \otimes \psi^j) \in \Lambda(V^*) \otimes \Lambda(V) $ is given by the well known power series, which in this case stops at the n-th power. \end{defi} Of course, as it stands it is just a copy of the definition of ordinary Fourier transform. The next proposition shows that it also has properties like the ordinary Fourier transform. \begin{prop} If $\dim (V)$ is even, then ${\cal F}^2 : \Lambda(V) \rightarrow \Lambda(V)$ equals the identity. \end{prop} ${\bf Proof.}$ $\cal{F}$ is a linear map, so it suffices to check the statement on homogeneous elements in $\Lambda(V)$. Let $ \eta =\psi^1 \wedge \ldots \wedge \psi^k$ be an element of $\Lambda^k(V)$. The component of $\eta \cdot {\rm exp}(i \; \bar{\psi}_j \otimes \psi^j)$ that is in $\Lambda(V^*) \otimes \Lambda^n(V)$ is $(i)^{n-k} (-1)^{\frac{1}{2}(n+k-1)(n-k)} \bar{\psi}_{k+1} \wedge \ldots \wedge \bar{\psi}_n \otimes \psi^1 \wedge \ldots \wedge \psi^n$. Thus, \begin{equation} {\cal F} (\psi^1 \wedge \ldots \wedge \psi^k) = (i)^{n^2 - k^2} \bar{\psi}_{k+1} \wedge \ldots \wedge \bar{\psi}_n \in \Lambda(V^*). \end{equation} Applying Fourier transform once again, we obtain \begin{equation} {\cal F}^2 (\psi^1 \wedge \ldots \wedge \psi^k) = (i)^{n^2 - k^2} (i)^{k^2} \psi^1 \wedge \ldots \wedge \psi^k. \end{equation} For $n$ even, the prefactor equals $1$, so ${\cal F}^2$ is the identity. $\Box$ \vspace{10pt} {\bf Remarks} 1.1) Of course, so far we have just done linear algebra. Nevertheless, combining this with ordinary Fourier transform, it will turn out to be independent of the choice of a measure and therefore very useful. 1.2) The factor $i$ in the exponent is only meant to give a nice prefactor when computing ${\cal F}^2$. Unlike in the ordinary case, it is of no importance for the convergence of the integral. 1.3) We have stressed earlier the fact that we should rather be working in an infinite dimensional context. It would not be surprising if this Fourier theory turned out to be very useful in that context also. E.g., differential forms of finite codegree can easily be obtained from differential forms of finite degree, using Fourier transform. \vspace{10pt} We will now combine this definition with ordinary Fourier transform to obtain Fourier transform of differential forms. We will take the Schwartz functions, denoted by ${\cal S}(V)$, as the domain for Fourier transform. Differential $k-$forms, $\Omega_s^k(V)$, can be seen as elements of ${\cal S}(V) \otimes \Lambda^k(V^*)$. Note that, although the notations are very similar, this algebra has little to do with the Weil algebra used in the previous sections.
Combined Fourier transform maps this space to ${\cal S}(V^*) \otimes \Lambda^{n-k}(V)$: \[ {\cal F}(f \otimes \eta)(b) = \int_V f \; e^{i<b \mid \cdot >} \otimes \eta e^{i \omega} \] where $\int_V$ is integration of differential forms and $\omega$ is the canonical symplectic 2-form on $V \times V^*$. In coordinates $z^i$ and differentials $\psi^i ={\rm d}z^i$, $\omega = \bar{\psi}_j \otimes \psi^j \;$, $ \langle b \mid \cdot \rangle $ means $b_iz^i$ and integration of differential forms boils down to ordinary integration over the $z^i$ and Berezin integration over the $\psi^i$. A lot of properties of Fourier transform on functions extend to this combined Fourier transform. E.g., it is possible to extend the definition of the convolution product such that it is the Fourier image of the wedge product of differential forms. The convolution of a $k-$form and an $l-$form is a $(k+l-n)-$form. Recall that convolution between two functions $f$ and $g$ in ${\cal S}(V)$ is defined by $f*g(y) = \int_V f(x)g(y-x)dx$. Therefore, it is natural to define for $\eta, \zeta \in \Lambda(V)$: \begin{equation} (\eta * \zeta)(\phi) := \int \eta (\psi) \wedge \zeta (\phi - \psi) d \psi \end{equation} where $\eta(\psi)$ means $\eta$ expressed in terms of the generators $\psi^j$. The $\phi^i$ are just other names for the same generators, as is the case in the definition of ordinary convolution. Here, $\zeta(\phi - \psi)$ means: substitute $\phi^i - \psi^i$ wherever a $\psi^i$ occurs in $\zeta (\psi)$. It is not very difficult to prove that for every $\eta, \zeta \in \Lambda(V)$ we have \begin{equation} {\cal F}(\eta \wedge \zeta) = {\cal F}(\eta) * {\cal F}(\zeta) \end{equation} Of course, we can combine this convolution product with the ordinary one to obtain a convolution product on the algebra of differential forms ${\cal S}(V) \otimes \Lambda(V)$. Note that the top form $\psi^1 \wedge \ldots \wedge \psi^n$ is the unit element for the convolution product in $\Lambda(V)$, whereas for the product on ${\cal S}(V)$ the unit is not contained in ${\cal S}(V)$ (it is the Dirac distribution). \vspace{10pt} ${\bf Example}$. Suppose $ \dim(V)=4; \psi^1, \ldots ,\psi^4$ are generators of $\Lambda (V)$ such that $\int \psi^1 \psi^2 \psi^3 \psi^4 d\psi =1$. Then, \[{\cal F}(\psi^1 \psi^2)= \int \psi^1 \psi^2 e^{i\bar{\psi}_j \psi^j} d\psi = \] \[ =\int \psi^1 \psi^2 \; (\sfrac{-1}{2!})(\bar{\psi}_3 \psi^3 + \bar{\psi}_4 \psi^4)^2 d\psi = \bar{\psi}_3 \bar{\psi}_4 \] and \[ {\cal F}(\psi^1)= -i \; \bar{\psi}_2 \bar{\psi}_3 \bar{\psi}_4 , \hspace{20pt} {\cal F}(\psi^2)= i\; \bar{\psi}_1 \bar{\psi}_3 \bar{\psi}_4. \] Now, let us calculate the convolution product \[ \!\! {\cal F}(\psi^1) * {\cal F}(\psi^2) = \int \phi_2 \phi_3 \phi_4 (\bar{\psi}_1 - \phi_1)(\bar{\psi}_3 - \phi_3)(\bar{\psi}_4 -\phi_4) d\phi = \bar{\psi}_3 \bar{\psi}_4. \] So for this particular case we have verified that the Fourier image of the wedge product is the (super-)convolution product. \vspace{10pt} Another property of Fourier transform that we would like to extend is the following. If $z^i$ are coordinates on $V$ and $b_i$ are the dual coordinates on $V^*$, then it is well known that ${\cal F}(\frac{\partial f}{\partial z^j}) = (-i b_j) {\cal F}(f)$. Thus, multiplying with $b_i$ is a derivation for the convolution product. On the (super-)algebra $ \Omega_s(V)= {\cal S}(V) \otimes \Lambda(V^*)$ there exists a (super-)derivation with square zero, namely the de Rham differential d. We would like to know its Fourier image.
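Both the proposition ${\cal F}^2 = {\rm id}$ and the example above can be checked mechanically. The following minimal Python sketch (the helper names are our own; we flatten $\Lambda(V^*) \otimes \Lambda(V)$ to a single exterior algebra on the $2n$ odd generators $\bar{\psi}_1, \ldots ,\bar{\psi}_n, \psi^1, \ldots ,\psi^n$, and we take $n$ even so that no reordering signs arise in the Berezin integrals) implements the definition in both directions; for the transform back from $\Lambda(V^*)$ the kernel is ${\rm exp}(i \; \psi^j \otimes \bar{\psi}_j)$, as in the proof of the proposition:

\begin{verbatim}
# Sketch: Lambda(V*) (x) Lambda(V) as one exterior algebra on 2n odd
# generators; indices 0..n-1 stand for psibar_1..psibar_n, indices
# n..2n-1 for psi^1..psi^n, and monomials are frozensets, read in
# this canonical (bars first) order.
n = 4                                  # dim(V); even, so F^2 = id
BAR = frozenset(range(n))
PSI = frozenset(range(n, 2 * n))

def wedge(a, b):
    out = {}
    for ma, ca in a.items():
        for mb, cb in b.items():
            if ma & mb:
                continue
            inv = sum(1 for x in ma for y in mb if x > y)
            out[ma | mb] = out.get(ma | mb, 0) + (-1) ** inv * ca * cb
    return {m: c for m, c in out.items() if c}

def exp_even(a):
    """exp of a nilpotent even element; the series stops at power n."""
    out, term = {frozenset(): 1}, {frozenset(): 1}
    for k in range(1, n + 1):
        term = {m: c / k for m, c in wedge(term, a).items()}
        for m, c in term.items():
            out[m] = out.get(m, 0) + c
    return out

def berezin(a, S):
    """Berezin integral over the generators in S; since |S| = n is
    even, no sign arises from moving the integrated block around."""
    return {m - S: c for m, c in a.items() if S <= m}

def fourier(eta, S):
    # kernel exp(i psibar_j psi^j) when integrating over the psi's,
    # exp(i psi^j psibar_j) (a sign per pair) over the psibar's
    sgn = 1 if S == PSI else -1
    kern = exp_even({frozenset({j, n + j}): sgn * 1j for j in range(n)})
    return berezin(wedge(eta, kern), S)

# F(psi^1 psi^2) = psibar_3 psibar_4, as in the example above
assert fourier({frozenset({n, n + 1}): 1}, PSI) == {frozenset({2, 3}): 1}

# F^2 = id on the monomials psi^1 ... psi^k (the proposition)
for k in range(n + 1):
    eta = {frozenset(range(n, n + k)): 1}
    assert fourier(fourier(eta, PSI), BAR) == eta
\end{verbatim}

With this check in hand, we return to the Fourier image of the de Rham differential.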
Let us define on ${\cal S}(V^*) \otimes \Lambda(V)$ the Koszul differential $\delta$ as follows. $\delta$ is the derivation of degree $-1$ acting on generating elements by \begin{equation} \delta(f) = 0 \hspace{80pt} (f \in {\cal S}(V^*)) \end{equation} \[ \delta(f \otimes \bar{\psi}_i) = - f b_i \otimes 1 \hspace{25pt} (i=1, \ldots ,\dim(V)) \] Here, $\bar{\psi}_i$ is the image of $b_i$ under the inclusion $V \rightarrow \Lambda(V)$. Obviously, the definition of $\delta$ does not depend on the choice of linear coordinates on $V^*$. We will prove now that this is the Fourier image of the de Rham differential. \begin{thm} ${\cal F} \circ {\rm d}= \delta \circ {\cal F}$ \end{thm} ${\bf Proof.}$ \[\delta \circ {\cal F} (f \otimes \eta) = \delta (\int f \; e^{i\; <b \mid z>} \otimes \eta \; e^{i\; \bar{\psi}_j \psi^j} dz \; d\psi) = \] \[\int f(z) e^{i\; <b \mid z>} (-ib_j) \otimes \psi^j \eta e^{i\; \bar{\psi}_j \psi^j} dz \; d\psi = \] \[\int \frac{\partial f}{\partial z^j} \; e^{i\; <b \mid z>} \otimes \psi^j \; \eta \; e^{i\; \bar{\psi}_j \psi^j} dz \; d\psi = {\cal F} \circ {\rm d} (f \otimes \eta). \Box \] \vspace{5pt} {\bf Remarks} 1.4) One may wonder why $\delta$ depends on the linear structure of $V^*$, whereas d does not ($\delta$ only commutes with linear diffeomorphisms, d commutes with all diffeomorphisms). This is because Fourier transform, which carries the one into the other, depends on the linear structure. 1.5) In the BRST model of topological theories, the Fourier transform will be used as follows. The algebra $\Omega(V) \otimes \Omega(V^*) $ can be given a double complex structure using the differentials ${\rm d} \otimes 1$ and $(-1)^p \otimes \delta$ ($p$ being the degree operator). The sum $s$ of these two differentials is part of the BRST operator. Using $s$, we can define (extended) Fourier transform $\bar{{\cal F}} : \Omega_s(V) \otimes \Omega_s(V^*) \rightarrow \Omega_s(V)$ by integration over $\Omega_s(V^*)$, after multiplication with exp$(i s(z^j \bar{\psi}_j))$. From the theorem above, it follows that $\bar{{\cal F}} \circ s = {\rm d} \circ \bar{{\cal F}}$. BRST theory uses the map $\bar{{\cal F}}$ to obtain d-closed differential forms from rather simple $s$-closed expressions. \section{Some algebra} In this section we recall some elementary facts on superstructures and double complexes that will be used throughout this thesis. \subsection{Superstuff} A super vector space is a vector space with a ${\bf Z}_2$-grading. A superalgebra is a super vector space such that the product respects the grading. In this section we will consider superalgebras of type ${\cal A}=\Lambda V \otimes SW$, where $V$ and $W$ are vector spaces. ${\cal A}={\cal A}^+ \oplus {\cal A}^-$, ${\cal A}^+$ being the even part $\Lambda^{\rm even}V \otimes SW$, ${\cal A^-}$ being the odd part $\Lambda^{\rm odd}V \otimes SW$. The supercommutator on a superalgebra is defined by (\ref{sbrac}). If it vanishes for any two elements of the algebra, the superalgebra is called supercommutative. Our algebras ${\cal A}$ are supercommutative. Another example of a supercommutative superalgebra is the space of differential forms on a manifold. Each linear map d on ${\cal A}$ can be written as (d$^0_0$,d$^0_1$,d$^1_0$,d$^1_1$), where d$^1_0:{\cal A}^+ \rightarrow {\cal A}^-$, etc.
A linear map d is called a derivation if all d$_i^j$ satisfy \begin{equation} {\rm d}_i^j(ab) = ({\rm d}_i^ja) b +(-1)^{(i+j){\rm deg}a}a ({\rm d}_i^jb) \end{equation} for homogeneous (i.e., either even or odd) elements $a \in {\cal A}$. Let us write d$_+={\rm d}^0_0 +{\rm d}_1^1$ and d$_-={\rm d}_0^1 + {\rm d}_1^0$. Then d$=$d$_++$d$_-$. This endows the space of linear maps End(${\cal A}$), and thus also the subspace of derivations Der(${\cal A}$), with a superstructure. This will be denoted by Der$({\cal A}) = {\rm Der}_-({\cal A}) \oplus $Der$_+({\cal A})$. The following statements are easy to prove \begin{prop} a) The supercommutator of two derivations is again a derivation. Hence Der(${\cal A}$) has a Lie superalgebra structure. b) Any linear map $V \oplus W \rightarrow {\cal A}$ can be extended uniquely to a derivation on ${\cal A}$. In particular, a derivation is already determined by its action on $\Lambda^1V$ and $S^1W$. \end{prop} \subsection{Double complex structures and applications} In this section we would like to show that for double complexes satisfying certain conditions, the cohomology is given by the $E_2$-term of the associated spectral sequence. However, we shall not use the theory of spectral sequences, but prove all statements \lq by hand'. Furthermore, we will give some useful examples. Let ${\cal A}=\oplus_{p,q} {\cal A}^{p,q}$ be an algebra with two non-negative gradings ($p,q \in {\bf Z}_{\geq 0})$. Let d$_1$ and d$_2$ be two differentials, i.e., d$_i^2=0$ and both are derivations, \[ {\rm d}_1:{\cal A}^{p,q} \rightarrow {\cal A}^{p+1,q}, \hspace{1cm} {\rm d}_2: {\cal A}^{p,q} \rightarrow {\cal A}^{p,q+1} \] Furthermore, let us assume that d$_1$d$_2 =-$d$_2$d$_1$. Then D$=$d$_1+$d$_2$ is again a differential, increasing the total degree $p+q$ by $1$. Thus we obtain three different cohomologies on ${\cal A}$, two bi-graded ones ($H_{d_1}^{\bullet ,q}({\cal A})$ and $H_{d_2}^{p,\bullet}({\cal A})$) and the single graded total cohomology $H_D^\bullet ({\cal A})$. We will need the following filtration on $H_D({\cal A})$: for all $P \geq 0$, define \[ {\cal A}_P^r=\left ( \bigoplus_{p+q=r} {\cal A}^{p,q} \right) \bigcap \left( \bigoplus_{p \leq P} (\oplus_{q} {\cal A}^{p,q}) \right ) \] We will fix $r \geq 0$ and often omit the superscript of ${\cal A}_P^r$ in the sequel. The inclusions $H_D({\cal A}_P) \subset H_D({\cal A}_{P+1})$ induce a filtration on $H^r_D({\cal A})$. Let $GH_D^r({\cal A})$ denote the associated graded algebra. \begin{thm}\label{spseq1} There exists a natural map \begin{equation}\label{dcmap} H_D({\cal A}^r_P) \rightarrow H_{d_2}^{P,r-P}(H_{d_1}({\cal A})) \end{equation} Furthermore, if, whenever $p-q$ is odd, $H^{p,q}_{d_1}({\cal A})=0$, then this map is surjective and we have an isomorphism of graded vector spaces \begin{equation} GH_D^r({\cal A}) \simeq H_{d_2}(H_{d_1}({\cal A}^r)), \end{equation} where ${\cal A}^r= \oplus_{p+q=r} {\cal A}^{p,q}$. \end{thm} {\bf Proof.} Suppose $\eta = \sum_{p+q=r} \eta^{p,q} \in {\cal A}_P$ satisfies D$\eta=0$. Thus, d$_1(\eta^{p,q})=-{\rm d}_2(\eta^{p+1,q-1})$. Let $Q=r-P$. We will prove that $\eta \mapsto \eta^{P,Q}$ induces a map between cohomologies. As d$_1(\eta^{P,Q})=0$ and d$_2(\eta^{P,Q}) = {\rm d}_1(- \eta^{P-1,Q+1})$, $\eta^{P,Q}$ represents a class in $H_{d_2}(H_{d_1}({\cal A}))$, indeed. Furthermore, it is easily checked that $\eta$ and $\eta + {\rm D}\rho$ are mapped to the same class. This proves the first part of the theorem. Next, suppose $H_{d_1}({\cal A}^{p,q})=0$ whenever $p-q$ is odd.
Let $\eta^{P,Q}$ represent a non-zero class in the rhs of (\ref{dcmap}), i.e., d$_1 (\eta^{P,Q})=0$ and d$_2(\eta^{P,Q})={\rm d}_1(\eta^{P-1,Q+1})$ for some $\eta^{P-1,Q+1} \in {\cal A}$. We may assume that $P-Q$ is even (otherwise, [$\eta^{P,Q}]=0$). From \[ {\rm d}_1({\rm d}_2(\eta^{P-1,Q+1})) =-{\rm d}_2({\rm d}_1(\eta^{P-1,Q+1})) =-{\rm d}_2^2(\eta^{P,Q})=0 \] we get the existence of $\eta^{P-2,Q+2}$ such that ${\rm d}_2(\eta^{P-1,Q+1}) = {\rm d}_1(\eta^{P-2,Q+2})$. Again ${\rm d}_1({\rm d}_2( \eta^{P-2,Q+2})) = 0$ and we find an $\eta^{P-3,Q+3}$ etc. Putting things together, we obtain $\eta= \sum_{i=0}^{P} (-1)^i \eta^{P-i,Q+i}$, which is, by construction, closed under D. The last statement in the theorem follows directly from the fact that the kernel of (\ref{dcmap}) is given by the image of $H_D({\cal A}_{P-1}^r) \rightarrow H_D({\cal A}_P^r)$. $\Box$ \vspace{5pt} {\bf Remarks} 1.6) Obviously, $\eta$ is far from being unique. Its image in $GH_D({\cal A})$, however, is unique. The method of finding a representative $\eta$ is called the zig-zag construction ([BT]). 1.7) Without any restrictions on the cohomology of d$_1$, $GH_D({\cal A})$ can be computed using a spectral sequence $E_i$. Fortunately, we will not need this full theory. However, sometimes we will use terminology like: \lq the spectral sequence degenerates at $E_2$'. The precise meaning of these words is given by the theorem. By the way, using spectral sequences gives an easy proof of the theorems in this section. 1.8) Of course, if we substitute \lq $p-q$ is odd' by \lq $p-q$ is even', we obtain a similar theorem. \vspace{5pt} {\bf Example}. As an example for later use, consider the Cartan model $(S({\got g^*}) \otimes \Omega(V))^G$ for a vector space $V$, on which a compact connected group $G$ acts linearly. On $V$ we allow only differential forms that are square integrable. This implies that $H^i(V)=0$, except for $i=n=$dim$(V)$. We introduce the following two gradings. The first one is the sum of the polynomial degree of $S({\got g^*})$ and the form degree on $V$. The second one is only the polynomial degree. Note that the sum of the two degrees equals the degree defined earlier and that the two components of the Cartan differential, d and $\phi^b \otimes \iota_b$, are differentials of degree $(1,0)$ and $(0,1)$, respectively. Applying the theorem in this case, and using the fact that for compact groups each de Rham cohomology class contains a $G$-invariant representative, we find that the equivariant cohomology of $V$ is (as vector space) $S({\got g^*})^G \otimes H^n(V)$. We will use this in chapter three as follows. The differential form (\ref{reptc1}) represents a generator of $H^n(V)$. Formally, it can therefore be extended, using the zig-zag construction, to an equivariantly closed differential form. In chapter three we will solve this zig-zag problem using our BRST model and (super) Fourier transform. It turns out that the representative is the same as the one in [MQ]. \vspace{5pt} By using arguments similar to those in the proof above, we obtain \begin{thm}\label{dc2} Suppose $H^{p,q}_{d_1}({\cal A})=0$ for all $p \neq P$. Then \begin{equation} GH_D({\cal A}) \simeq H_{d_2}^{r-P}(H^P_{d_1}({\cal A})) \end{equation} is an isomorphism of graded vector spaces. \end{thm} \vspace{5pt} {\bf Example}. Remember from section 1.2 the Weil algebra $W({\got g})=S({\got g^*}) \otimes \Lambda({\got g^*})$, generated by $\phi^a$ and $\omega^a$, and equipped with the differential d$_W$.
Let us introduce the degree $p$ as the polynomial degree of $S({\got g^*})$ and $q$ as the sum of $p$ and the exterior degree on $\Lambda({\got g^*})$. The total degree $p+q$ agrees with the degree defined earlier on $W$. We will define two anti-commuting differentials that add up to d$_W$, thereby giving the Weil algebra a double complex structure. \begin{eqnarray}\label{d1d2} {\rm d}_1 \omega^a = \phi^a & & {\rm d}_2 \omega^a = -\sfrac{1}{2} f^a_{bc} \omega^b \omega^c \\ {\rm d}_1 \phi^a = 0 & & {\rm d}_2 \phi^a = -f^a_{bc} \omega^b \phi^c \nonumber \end{eqnarray} The second one is the Lie algebra cohomology differential with values in the representation $S({\got g^*})$. Since $H_{d_1}(W)={\bf R}$, we can apply the theorem, obtaining $H_{d_W}(W)={\bf R}$. We found this result earlier by making a shift in the generators. To look at the Weil differential algebra in this way is important for understanding how we arrive, in chapter three, from Lie algebra cohomology at equivariant cohomology by adding \lq ghosts for ghosts' (the $\phi^a$) and restricting the algebra ($\omega$-independent, $G$-invariant elements only). \chapter{BRST theory} In this chapter the foundations of BRST theory will be discussed and we will set the stage for our model to be described in the next chapter. Although our description of the BRST algebra follows the general concepts for defining such algebras, our definition involves non-trivial vector bundles and is therefore more geometric. The results of sections 2.2 and 2.3 as well as those of sections 3.1 and 3.2 of the next chapter can be found in [CK]. \section{Historical background} The roots of BRST theory date from the late sixties, when physicists added anti-commuting variables (called Faddeev-Popov ghosts) to the classical action in order to obtain a quantum action that gave rise to convergent path integrals. This procedure (which is part of the path integral quantization) can be explained in a simple finite dimensional situation, which already features the BRST symmetry. Let $G$ be a Lie group acting isometrically on a Riemannian manifold $M$. For simplicity, assume that the action is free and that $\pi:M \rightarrow B$ is the associated principal fibration. Suppose $U \subset B$ is an open subset such that $\pi^{-1}(U) \simeq S_U \times G$, $S_U \subset M$ being some (slice) submanifold given by the zeroes of the functions $g^i$ $(i=1, \ldots, {\rm dim}(G))$. We would like to integrate $G$-invariant functions of the form $e^{i{\cal S}}$ over $\pi^{-1}(U)$. Let $(\xi_j)$ be an orthonormal basis of ${\got g}$ with respect to some inner product, let $f^i_{jk}$ denote the structure constants and let $X_j=V_{\xi_j}$ denote the generating vector fields of the $G$-action. Then \begin{equation} \int_{\pi^{-1}(U)} e^{i{\cal S}} = {\rm vol}(G) \int_{\pi^{-1}(U)} \delta(g^i) \; {\rm det}(X_j (g^i)) e^{i{\cal S}}, \end{equation} where $\delta(g^i)$ is the pull back by $g^i:\pi^{-1}(U) \rightarrow {\bf R}$ of the Dirac delta function (readers who do not like to regard this as a function may consider skipping this introductory model). The equation can be derived using local coordinates in a straightforward manner. Note that the rhs is independent of the choice of $g^i$. In this simple model, $M$ represents the space of paths, $g^i$ are called gauge functions and ${\cal S}$ is the classical action.
The obvious advantage of the rhs expression is that we can omit the vol$(G)$ factor (which is infinite in field theory), thereby removing at least one obstacle to a finite answer. Next, we will write $\delta(g^i) {\rm det}(X_j (g^i))$ as an exponential, using three new types of variables, $b_i$, $\bar{c}_i$ and $c^i$ $(i=1, \ldots ,{\rm dim}(G))$. The $b_i$ are commuting variables, the $\bar{c}_i$ and $c^i$ are anti-commuting. Using Berezin integration, we obtain \begin{equation} \int_{\pi^{-1}(U)} e^{i{\cal S}} = {\rm vol}(G) \int_{\pi^{-1}(U)} e^{i({\cal S}+b_k g^k + \bar{c}_i X_j (g^i) c^j)} \end{equation} The exponential on the rhs is called the quantum action, the $c^i$ are Faddeev-Popov ghosts and the $\bar{c}_i$ are anti-ghosts. We will now show where the BRST symmetry is in this simple model. Let us define the following differential \begin{eqnarray} s(f) = X_j(f) \; c^j & & s(\bar{c}_i)=-b_i \\ s(c^k)=-\sfrac{1}{2} f^k_{ij} c^i c^j & & s(b_i)=0 \nonumber \end{eqnarray} It is easy to check that $s^2=0$, that $s({\cal S})=0$ and that the quantum action is nothing but ${\cal S}+s(-\bar{c}_i g^i)$; indeed, $s(-\bar{c}_i g^i) = b_i g^i + \bar{c}_i X_j(g^i) c^j$. To say it in other words: ${\cal S}$ defines a cohomology class for $s$ and the integral remains unchanged if we take another representative. The cohomology involved is called the BRST cohomology and $s$ is the BRST operator. It is precisely this infinitesimal symmetry that was discovered in 1975 by Becchi, Rouet and Stora ([BRS]) in the quantum action of certain field theories. Independently, Tyutin wrote an unpublished preprint on the same subject; hence the name BRST. During the late seventies it was mainly Russian physicists who developed BRST-related theories ([BV],[FF]). It soon became clear that BRST theory had a symplectic counterpart, that it could explain the multi-ghost interactions of, e.g., supergravity and that BRST theory was important to obtain quantum actions and to prove renormalizability. New impetus came from Marc Henneaux's thesis, published in [He], where he described a BRST theory for Hamiltonian systems with constraints. It turned out that, in the case of first class constraints, BRST theory involves the use of super Poisson structures and supercanonical transformations rather than ordinary Poisson algebras. After this work of Henneaux, BRST became an industry. Mathematicians became involved, developing the Poisson algebraic and the differential algebraic parts, while in the physics community BRST theory became an important quantization concept. Part of the history is captured in [St]. \section{BRST complex for group symmetries} The aim of this section is to describe BRST theory associated to a symplectic manifold $N$, called the phase space, and a set of first class constraints. Since we will be interested in constraints coming from group actions, we will deal mostly with the group case, making only some remarks on the general situation. First class constraints are just functions on $N$ satisfying certain conditions. The zero set $Z$ of these functions is called the constraint manifold (provided that it is a manifold). In the sequel we will write \lq constraints' as a shorthand for first class constraints. BRST theory assigns to a given set of constraints a BRST charge $Q$. This charge $Q$ is an element of a super Poisson algebra extension ${\cal P}$ of $C^\infty(N)$. $Q$ is a natural and geometrical object in the sense that if the constraint manifold is represented by another set of constraints, then the two BRST charges are related by a supercanonical transformation.
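(Returning for a moment to the introductory model: the relation $s^2=0$ on the ghost sector can also be checked mechanically. The following minimal Python sketch, with helper names of our own choosing, takes the structure constants $f^k_{ij}=\epsilon_{ijk}$ of $su(2)$ purely as an example, extends $s(c^k)=-\sfrac{1}{2} f^k_{ij} c^i c^j$ as an odd derivation and verifies $s^2(c^k)=0$; for general structure constants the same computation encodes the Jacobi identity.)

\begin{verbatim}
# Ghosts c^0, c^1, c^2 as odd generators; an element is a dictionary
# {frozenset of ghost indices: coefficient}.
def wedge(a, b):
    out = {}
    for ma, ca in a.items():
        for mb, cb in b.items():
            if ma & mb:
                continue
            inv = sum(1 for x in ma for y in mb if x > y)
            out[ma | mb] = out.get(ma | mb, 0) + (-1) ** inv * ca * cb
    return {m: c for m, c in out.items() if c}

def eps(i, j, k):
    """Levi-Civita symbol: the structure constants f^k_{ij} of su(2)."""
    if {i, j, k} != {0, 1, 2}:
        return 0
    return 1 if (j - i) % 3 == 1 else -1

def s_gen(k):
    """s(c^k) = -1/2 f^k_{ij} c^i c^j, summed over i and j."""
    out = {}
    for i in range(3):
        for j in range(3):
            if eps(i, j, k):
                sign = -1 if i > j else 1   # write c^i c^j canonically
                m = frozenset({i, j})
                out[m] = out.get(m, 0) - 0.5 * eps(i, j, k) * sign
    return {m: c for m, c in out.items() if c}

def s(a):
    """Extend s as an odd derivation: s(xy) = s(x)y + (-1)^|x| x s(y)."""
    out = {}
    for m, c in a.items():
        g = sorted(m)
        for t in range(len(g)):
            left = {frozenset(g[:t]): (-1) ** t * c}
            right = {frozenset(g[t + 1:]): 1}
            for m2, c2 in wedge(wedge(left, s_gen(g[t])), right).items():
                out[m2] = out.get(m2, 0) + c2
    return {m: c for m, c in out.items() if c}

for k in range(3):
    assert s(s_gen(k)) == {}     # s^2(c^k) = 0 (the Jacobi identity)
\end{verbatim}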
Let $(N,\sigma)$ be a symplectic manifold. Then $C^\infty(N)$ is a Poisson algebra with Poisson bracket $\{ \cdot,\cdot\}$. A collection of functions $(f_a)$ is called a set of first class constraints if (remember we use the summation convention) \begin{equation}\label{struc} \{ f_a, f_b \} = c_{ab}^c f_c \end{equation} for certain $c_{ab}^c \in C^\infty(N)$. For simplicity we will assume that $N$ is finite dimensional and that $f_a$ is a finite collection of functions. Condition (\ref{struc}) implies that the ideal $I \subset C^\infty(N)$ generated by the constraints is a Poisson subalgebra. Constraints are called regular or independent if $Z$ is a manifold of codimension equal to the number of constraints. They are called weakly regular or reducible if $Z$ is a manifold of codimension smaller than the number of constraints. The latter case will be the subject of the next section. Let $S$ be a Lie group acting on $N$ in a Hamiltonian fashion. The components of the associated momentum map, well defined after introducing a basis $(\xi_a)$ of ${\got s}=$Lie$(S)$, satisfy (\ref{struc}). The $c_{ab}^c$ are constants in this case (the group case) and equal the structure constants $f^c_{ab} $ of the Lie algebra ${\got s}$, with respect to the same basis. We will assume in this section that the constraints are regular, i.e., 0 is a regular value of the momentum map. By definition, the BRST algebra ${\cal P}$ in the group case is $\Lambda {\got s} \otimes \Lambda {\got s^*} \otimes C^\infty (N)$. This algebra has two gradings, induced by the degrees of the Grassmann algebra elements. They are called the anti-ghost number and ghost number, respectively. The basis elements $\xi_a$, regarded as members of $\Lambda^1 {\got s}$, have anti-ghost number $1$. The elements $\omega^a$ of the dual basis are in $\Lambda^1 {\got s^*}$ and have ghost number $1$. In [KS] the following two differentials are defined on ${\cal P}$. The first one, $\delta$, is zero on $\Lambda {\got s^*} \otimes C^\infty(N)$ and maps $\xi_a \mapsto f_a$. It lowers the anti-ghost degree by $1$. $\delta$ is called the Koszul operator and the complex is called the Koszul resolution. The second differential, d, is the Lie algebra cohomology operator with values in the representation $\Lambda {\got s} \otimes C^\infty (N)$. It has ghost degree $1$ and acts on generators as \begin{eqnarray} {\rm d} \; (\xi_a \otimes 1 \otimes f)& = & f_{ab}^c \xi_c \otimes \omega^b \otimes f -\xi_a \otimes \omega^b \otimes \{f_b,f\} \\ \nonumber {\rm d} \; (1 \otimes \omega^c \otimes 1)& = & -1 \otimes \sfrac{1}{2} f_{ab}^c \omega^a \wedge \omega^b \otimes 1 \end{eqnarray} One can check that $\delta^2={\rm d}^2=0$ and that d$\delta = -\delta$d. Thus the derivation $D=$d$+\delta$ is also a differential. $D$ is the BRST operator. It has total degree 1 if we define the total degree to be the difference of the ghost and anti-ghost degrees. Obviously, $({\cal P},D)$ has a double complex structure, so its cohomology $H_D({\cal P})$ can be computed using a spectral sequence. It turns out that, because of the regularity condition, $\delta$ has only non-zero cohomology in anti-ghost degree zero and that it equals $\Lambda{\got s^*} \otimes C^\infty(N)/I$. Therefore, the spectral sequence degenerates at $E_2$.
Thus \begin{equation} H_D({\cal P})=H_{\rm d}(H_\delta({\cal P}))= H_{\rm d}(\Lambda{\got s}^* \otimes C^\infty(N)/I) \end{equation} Using $C^\infty(N)/I \simeq C^\infty(Z)$, we obtain for the BRST cohomology in degree zero $H^0_D({\cal P}) \simeq C^\infty(Z/S)$, the algebra of functions on the reduced phase space. \vspace{5pt} We will now give a nice formulation of the differential $D$ in terms of super Poisson algebras. A super Poisson algebra is an algebra with a {\bf Z}$_2$-grading and a super Lie algebra structure that is compatible with the ring structure in the sense that a graded Leibniz rule holds: \begin{equation} \{a_1\cdot a_2,a_3\}=a_1\cdot\{a_2,a_3\}+(-1)^{{\rm deg}(a_2) \cdot {\rm deg}(a_1)} a_2\cdot\{a_1,a_3\} \end{equation} For two odd elements the Poisson bracket is symmetric; for any other pair of homogeneous elements the Poisson bracket is anti-symmetric. Let us introduce a super Poisson structure on ${\cal P}$ that extends the one on $C^\infty(N)$. Besides the bracket between two functions the only non-vanishing bracket between generators is $\{\xi_a,\omega^b\}=\{\omega^b, \xi_a\}=\delta_a^b$. This determines the Poisson bracket completely using the Leibniz rule. By definition, the BRST charge is a $Q \in {\cal P}$ such that $D=\{Q, \cdot \; \}$. As a little miracle, it exists and equals $Q=f_a \omega^a - \frac{1}{2} f_{ab}^c \xi_c \omega^a \omega^b$. It has total degree 1 and satisfies $\{ Q,Q \} =0$. Although the formulation in terms of a Poisson algebra is both natural and transparent, it does not help very much for computing the cohomology. It only helps to see that the cohomology $H^\bullet _D({\cal P})$ inherits a Poisson structure. \vspace{5pt} {\bf Remark} 2.1) It is important to remark here that the construction of the BRST complex is much more powerful in the general case (\ref{struc}). There will always exist a $Q \in {\cal P}$ that squares to zero and that is uniquely defined up to supercanonical transformations. This was proven in [BV], [FF] and [HT]. The last paper also shows that the BRST cohomology for regular first class constraints equals the vertical cohomology of the constraint manifold. The double complex structure is lost in general. This is due to the presence of higher order structure functions ([He]). \vspace{5pt} Non-regular constraints can be treated similarly, enlarging ${\cal P}$. Again there exists a well defined BRST charge and again its associated cohomology gives the vertical cohomology of the constraint manifold ([FHST],[St]). We will describe this in the next section for (transitive) group symmetries, for which the BRST complex is a double complex again. \section{BRST complex for reducible symmetries} In this section we also start with a Hamiltonian $S$-action on a symplectic manifold $N$. We will show how to obtain the de Rham cohomology from the BRST complex for transitive $S$-actions. We will assume that 0 is now only a weakly regular value of the momentum map, i.e., $Z$ is a submanifold but its codimension is smaller than the number of constraints. For this case, there is a similar theorem on the existence of a BRST charge ([BV], [FHST]). In this case there are relations among the constraints and the differential $\delta$ now has non-zero cohomology in more than one degree. The constraints are called reducible.
To restore the resolution property (cohomology lives in one degree only) one enlarges the algebra ${\cal P}$ (introducing \lq anti-ghosts for anti-ghosts') and modifies $\delta$ (resulting in what is called a Koszul-Tate resolution). In the sequel we shall neglect $\delta$ and work directly with the constraint manifold $Z$. Only considering non-negative ghost degree is convenient in the group case. There is a double complex structure and $\delta$ is always constructed in such a way that the associated spectral sequence degenerates. We will denote the non-negative degree part of ${\cal P}$ by ${\cal B}$. From now on, we will always assume that the symplectic manifold $N$ is a cotangent bundle $T^*M$ and that the constraints come from a Hamiltonian $S$-action that is induced by an arbitrary action on $M$. $M$ is called the configuration space. $Z$ is then a subbundle of the vector bundle $N$ over $M$ and the reduced phase space $X=Z/S$ equals $T^*(M/S)$ (provided the quotient makes sense). In the sequel we shall use $M$ instead of $Z$, because the action on $Z$ comes from the one on $M$. It is in this sense that we study the BRST complex associated to an arbitrary Lie group action on an arbitrary manifold. In the sequel, we will assume that $S$ acts transitively on $M$ (this property is crucial for cohomological field theories). Let $S=H$ be a finite dimensional Lie group with Lie algebra ${\got h}$, $H_0$ a closed subgroup of $H$ and let $M=H/H_0$. We shall construct the BRST differential algebra for the symmetry group $H$ acting on $M$. Consider the following exact sequence of vector bundles \begin{equation}\label{seq1} 0 \rightarrow K_{\got h} \rightarrow V_{\got h} \rightarrow TM \rightarrow 0 \end{equation} where $V_{\got h}=M \times {\got h}$, which is mapped onto $TM$ using the infinitesimal action ${\got h} \rightarrow \Gamma (TM)$ followed by the evaluation map $M \times \Gamma (TM) \rightarrow TM$. $K_{\got h}$ is the kernel of this map. Note that the fibers of $K_{\got h}$ are isomorphic to ${\got h}_0$, but that this bundle need not be trivial. Note also that the corresponding sequence of sections of these bundles is a sequence of ${\got h}-$modules by giving $\Gamma(V_{\got h})=C^\infty(M) \otimes {\got h}$ the module structure of its two components and $\Gamma(TM)$ the obvious one. $\Gamma(K_{\got h})$ is then an ${\got h}-$module because it is the kernel of a module map. To apply (a geometrical version of the positive degree part of) the [FHST] construction we need to distinguish two cases. In the trivial case, $K_{\got h} \simeq M \times {\got h}_0$, the BRST algebra equals \begin{equation}\label{theirB} {\cal B}=S({\got h}^*_0) \otimes \Lambda({\got h}^*) \otimes C^\infty(M) \end{equation} Elements in $S^1({\got h}^*_0)$ are called ghosts of ghosts and have degree two. If $K_{\got h}$ is not trivial, we need to find some vector space ${\got h}_0'$ and one more exact sequence \begin{equation} 0 \rightarrow K_{{\got h}_0'} \rightarrow V_{{\got h}_0'} \rightarrow K_{\got h} \rightarrow 0 \end{equation} where $V_{{\got h'_0}}=M \times {\got h'_0}$ is mapped to $K_{\got h}$ using a map ${\got h_0}' \rightarrow \Gamma(K_{\got h})$ followed by the evaluation map. 
If $K_{{\got h}'_0} \simeq M \times {\got h_1}$ for some vector space ${\got h}_1$, then, by definition, \begin{equation} {\cal B}=\Lambda({\got h^*}_1) \otimes S({\got h^*}'_0) \otimes \Lambda({\got h^*}) \otimes C^\infty(M) \end{equation} Elements in $\Lambda^k({\got h^*}_1)$ are of degree $3k$ and are called ghosts of ghosts of ghosts. If $K_{\got h'_0}$ is not trivial we proceed in the same way finding another exact sequence for $K_{{\got h'}_0}$, etc. To circumvent this definition process, we will use a slightly altered definition. Let $SK^*_{\got h}$ be the vector bundle with fibers the symmetric algebras of the duals of the fibers of $K_{\got h}$, $\Gamma(SK^*_{\got h})$ its space of sections. The BRST algebra we will use in the sequel is \begin{equation}\label{myB} {\cal B}= \Gamma(SK^*_{\got h}) \otimes \Lambda({\got h^*}) \end{equation} Obviously, if $K_{\got h} \simeq M \times {\got h}_0$ then $\Gamma(SK^*_{\got h}) \simeq S({\got h^*}_0) \otimes C^\infty(M)$ and (\ref{myB}) is isomorphic to (\ref{theirB}). Furthermore, if $K_{\got h}$ has rank zero (free group action), then ${\cal B}=H_\delta({\cal P})$, the BRST algebra of the previous section. Before we define the BRST operator on this algebra, let us introduce a grading on ${\cal B}$. By definition, all elements of $ \Gamma(S^pK^*_{\got h}) \otimes \Lambda^q({\got h^*})$ have degree $2p+q$. This subspace will be denoted by ${\cal B}^{p,q}$. We shall define the BRST operator $D$ now. Let $\eta \in {\cal B}^{p,q}$. Then $D\eta= D_1 \eta + D_2 \eta$, where $D_1 \eta \in {\cal B}^{p,q+1}$ and $D_2 \eta \in {\cal B}^{p+1,q-1}$ are defined by \[ D_{1}\eta ({\xi} _{1},...,{\xi}_{q+1}) = {\displaystyle \mathop \sum_{i}} (-1)^{i-1}{\xi} _{i} \cdot \left [ \eta \left ( {\xi} _{1},..., \widehat{{\xi}_{i}},...,{\xi}_{q+1}\right ) \right ] + \] \begin{equation}\label{brstdif} +{\displaystyle \sum _{i<j}} \eta (\left [{\xi}_{i},{\xi}_{j}\right ], {\xi}_{1},..., \widehat{ {\xi}_{i}},...,\widehat{{\xi}_{j}},..., {\xi}_{q+1}) (-1)^{i+j}, \end{equation} \[ \! \! D_{2}\eta ({\xi}_{1},.., {\xi}_{q-1}) ({\phi}_{1},..,{\phi}_{p+1}) = {\displaystyle \sum_{i}}\eta\left ( {\phi}_{i}, {\xi}_{1},..,{\xi}_{q-1}\right) \left( {\phi}_{1},..,\widehat{{\phi}_{i}},.., {\phi}_{p+1}\right ), \] where all ${\xi} _{1},...,{\xi}_{q+1}$ are in $\got h$ and all ${\phi}_{1},..., {\phi}_{p+1} $ are in $\Gamma(K_{\got h}) \subset C^\infty(M) \otimes {\got h}$. $D_{1}$ is nothing but the differential of the standard complex for the computation of the Lie algebra cohomology of $\got h$ with values in the ${\got h}$-module $\Gamma (SK^*_{\got h})$, and $D_2$ uses the injection $\Gamma(K_{\got h}) \rightarrow C^\infty (M) \otimes {\got h}$. We are now going to exhibit a double complex structure on ${\cal B}$. On the algebra ${\cal B}$, we define two degrees whose sum will give our initial degree. The first degree of an element of ${\cal B}^{p,q}$ is $p$ and its second degree is $p+q$. The first (respectively second) degree is preserved by $ D_{1}$ (respectively $D_{2}$) and is increased by one by $D_{2}$ (respectively $D_{1}$). \begin{prop} The two gradings defined above and the differentials $D_{1}$, $D_{2}$ define a double complex structure on ${\cal B}$ whose associated total complex is the BRST complex. \end{prop} The reader be warned that this is another double complex than the one of section 2.2. The relation is that in the case that the rank of $K_{\got h}$ is zero, $D_2$ equals zero and $D_1$ is the operator d acting on $H_\delta({\cal P})$. {\bf Proof}. 
We have to prove the following relations $D_{1}^{2}=0$, $D_{2}^{2}=0$, $D_{1}D_{2}=-D_{2}D_{1}$. It is a standard computation to establish the first one. Let us prove the second one. Let $\eta$ be in ${\cal B}^{p,q}$. For all ${\xi} _{1},...,{\xi}_{q-2} \in {\got h}$ and all ${\phi}_{1},..., {\phi}_{p+2} \in \Gamma(K_{\got h}) \subset C^\infty(M) \otimes {\got h}$, we have \[ D_{2}^{2} \; \eta({\xi}_{1}, \ldots ,{\xi}_{q-2}) ({\phi}_{1}, \ldots ,{\phi}_{p+2})= \] \[ = {\displaystyle \sum _{i,j}} \eta \left ( {\phi}_{i}, {\phi}_{j}, \xi _{1}, \ldots , {\xi}_{q-2}\right )\left ({\phi}_{1}, \ldots , \widehat{{\phi}_{i}}, \ldots , \widehat{{\phi}_{j}}, \ldots ,{\phi}_{p+2} \right )=0 \] using the fact that this expression is symmetric in the ${\phi}_{i}$. Let us now check the third relation. We compute separately $D_{1}D_{2}$ and $D_{2}D_{1}$. They are both in ${\cal B}^{p+1,q}$. Keeping the same notations, on one hand we have \vspace{5pt} $\begin{array}{l} D_{1}D_{2}\eta ({\xi}_{1},...,{\xi}_{p}) ({\phi}_{1},...,{\phi}_{q+1}) = \\ ={\displaystyle \sum_{i}} (-1)^{i-1} \left ({\xi}_{i} \cdot \left ( D_{2}\eta \left ( {\xi}_{1},..., \widehat {{\xi}_{i}},...,{\xi}_{p}\right ) \right ) \right ) \left ( {\phi}_{1},...,{\phi}_{q+1}\right ) \\ +{\displaystyle \sum_{i<j}}D_{2}\eta \left ( \left [ {\xi}_{i}, {\xi}_{j}\right ],..., \widehat{{\xi}_{i}},..., \widehat {{\xi}_{j}},...,{\xi}_{p} \right ) \left ( {\phi}_{1},...,{\phi}_{q+1}\right ) (-1)^{i+j} \\ = {\displaystyle \sum_{i,k}}{\xi}_{i} \left ( \eta \left ( {\phi}_{k},{\xi}_{1},... \widehat{{\xi}_{i}}... {\xi}_{p} \right )\left ( {\phi}_{1},..., \widehat{{\phi}_{k}},..., {\phi}_{q+1}\right ) \right ) (-1)^{i-1}\\ - {\displaystyle \sum_{i,k}} D_{2}\eta \left ( {\xi}_{1},..., \widehat{{\xi}_{i}},...,{\xi}_{p}\right ) \left ( {\phi}_{1},..., \left [ {\xi}_{i},{\phi}_{k}\right ],..., {\phi}_{q+1} \right ) (-1)^{i-1}\\ + {\displaystyle \sum_{i<j,k}}\eta \left ( {\phi}_{k}, \left [ {\xi}_{i}, {\xi}_{j}\right ], {\xi}_{1},..., \widehat{{\xi}_{i}},..., \widehat{{\xi}_{j}},...,{\xi}_{p} \right ) \left ( {\phi}_{1}, ..., \widehat {{\phi}_{k}},...,{\phi}_{q+1}\right )(-1)^{i+j}\\ = {\displaystyle \sum_{i,k}}{\xi}_{i} \left ( \eta \left ( {\phi}_{k},{\xi}_{1},..., \widehat{{\xi}_{i}},...
, {\xi}_{p} \right )\left ( {\phi}_{1},..., \widehat{{\phi}_{k}},..., {\phi}_{q+1}\right ) \right ) (-1)^{i-1}\\ - {\displaystyle \sum_{i,k}} \eta \left (\left [ {\xi}_{i}, {\phi}_{k} \right ], {\xi}_{1},..., \widehat{{\xi}_{i}},...,{\xi}_{p}\right ) \left ( {\phi}_{1},..., \widehat{{\phi}_{k}}..., {\phi}_{q+1} \right ) (-1)^{i-1}\\ - {\displaystyle \sum_{i,j \neq k}} \eta \left ({\phi}_{j}, {\xi}_{1},..., \widehat{{\xi}_{i}},...,{\xi}_{p}\right ) \left ( {\phi}_{1},..., \widehat{{\phi}_{j}}, ...\left [ {\xi}_{i}, {\phi}_{k} \right ], ..., {\phi}_{q+1} \right ) (-1)^{i-1}\\ + {\displaystyle \sum_{i<j}}\eta \left ( {\phi}_{k}, \left [ {\xi}_{i}, {\xi}_{j}\right ], {\xi}_{1},..., \widehat{{\xi}_{i}},..., \widehat{{\xi}_{j}},...,{\xi}_{p} \right ) \left ( {\phi}_{1}, ..., \widehat{{\phi}_{k}},..., {\phi}_{q+1}\right )(-1)^{i+j}\\ \end{array}$ \vspace{5pt} On the other hand, \vspace{5pt} $\begin{array}{l} D_{2}D_{1}\eta({\xi}_{1},...,{\xi}_{p})({\phi}_{1},...,{\phi}_{q+1}) = \\ = \sum_k D_{1}\eta({\phi}_{k},{\xi}_{1},...,{\xi}_{p}) ({\phi}_{1},..., \hat{{\phi}_{k}},...,{\phi}_{q+1}) \\ = {\displaystyle \sum_{i,k}} \left ( {\xi} _{i} \cdot \left ( \eta \left ({\phi}_{k}, {\xi}_{1},..., \widehat {{\xi}_{i}},...{\xi}_{p}\right )\right ) \right ) \left ( {\phi}_{1},...,\widehat {{\phi}_{k}},...{\phi}_{q+1}\right ) (-1)^{i}\\ + {\displaystyle \sum_{k}} {\phi} _{k} \cdot \left ( \eta \left ({\xi}_{1},...{\xi}_{p}\right )\right ) \left ( {\phi}_{1},...,\widehat {{\phi}_{k}},...{\phi}_{q+1}\right ) \\ + {\displaystyle \sum_{k,i<j}} \eta \left (\left [ {\xi}_{i},{\xi}_{j}\right ], {\phi}_{k}, {\xi}_{1},..., \widehat{{\xi}_{i}},..., \widehat{{\xi}_{j}},...,{\xi}_{p} \right ) \left ( {\phi}_{1}, ..., \widehat{{\phi}_{k}},..., {\phi}_{q+1}\right )(-1)^{i+j}\\ + {\displaystyle \sum_{k,i}} \eta \left (\left [ {\phi}_{k},{\xi}_{i}\right ], {\xi}_{1},..., \widehat{{\xi}_{i}},...,{\xi}_{p} \right ) \left ( {\phi}_{1}, ..., \widehat{{\phi}_{k}},..., {\phi}_{q+1}\right )(-1)^{i}\\ \end{array} $ \vspace{5pt} As the second term equals zero, and the first term can be expanded further into two terms, it is clear that $D_{1}D_{2}= - D_{2}D_{1}$. $\Box$ \vspace{10pt} According to [FHST], the cohomology of $({\cal B},D)$ is isomorphic to the de Rham cohomology of $M$, $H_D({\cal B}) \simeq H(M)$. However, we need to prove this, since our complex is slightly different from the one in [FHST]. \begin{thm} Let ($\Omega(M),{\rm d}$) be the de Rham complex associated to $M$. The map \begin{eqnarray}\label{fimap} \Phi : \Omega^k(M)=\Gamma(\Lambda^kT^*M) &\rightarrow& \Gamma(S^0K^*_{\got h}) \otimes \Lambda^k {\got h^*} \subset {\cal B} \\ \nonumber \Phi (\eta)(\xi_1, \ldots ,\xi_k) &=& \eta(V_{\xi_1}, \ldots ,V_{\xi_k}) \end{eqnarray} induces an isomorphism of cohomologies. More precisely, the spectral sequence associated to the double complex (${\cal B},D$) degenerates at $E_2$ and the de Rham complex is isomorphic with the $E_1$-term $H_{D_2}({\cal B})$. \end{thm} {\bf Proof}. A simple computation shows that $D_1 \circ \Phi = \Phi \circ {\rm d}$ and that $D_2 \circ \Phi =0$. We will show that Im($\Phi$) equals the cohomology of $D_2$. Using theorem \ref{dc2} and the injectivity of (\ref{fimap}) will then finish the proof. Choose an inner product on ${\got h}$.
If $\iota$ denotes the map $\Gamma(K_{\got h}) \rightarrow C^\infty(M) \otimes {\got h}$, then this inner product, which induces a metric on $M \times {\got h}$, gives a map $\pi:C^\infty(M) \otimes {\got h} \rightarrow \Gamma(K_{\got h})$, such that $\pi \circ \iota$ is the identity and $\iota \circ \pi$ is a projection. Let us denote the ring $C^\infty(M)$ by $A$ and the dual of the projection $\iota \circ \pi$ by $\rho: A \otimes {\got h^*} \rightarrow A \otimes {\got h^*}$. The $A$-module $A \otimes {\got h^*}$ splits in a direct sum of two $A$-modules, the image $P_1$ of $\rho$ and its complement, the image $P_2$ of $1-\rho$. So $\rho$ is the projection operator on $P_1 \oplus P_2$, equal to {\bf 1} (the identity) on the first term and equal to zero on the second term. In the sequel, $\rho$ will also denote the extension of $\rho$ to a derivation on \[ A \otimes \Lambda{\got h^*} = \Lambda_A(P_1 \oplus P_2) \simeq \Lambda_AP_1 \otimes_A \Lambda_AP_2. \] Thus $\rho$ can be written as ${\bf 1} \otimes 1$, the identity on $P_1$, extended as a derivation. ${\cal B}$ can be written as $\Gamma(SK^*_{\got h}) \otimes_A \Lambda_AP_1 \otimes_A \Lambda_AP_2$. Note that $P_1$ is isomorphic to $\Gamma(K_{\got h})$, that Im($\Phi$) is isomorphic to $1\otimes 1 \otimes \Lambda_AP_2$ and that $D_2$ vanishes on this subalgebra. We will show now that $D_2$ has trivial cohomology on the remaining factor $\Gamma(SK^*_{\got h}) \otimes_A \Lambda_AP_1$. Recall that $D_2$ was defined by ($\eta \in \Gamma(S^pK^*_{\got h}) \otimes \Lambda^q{\got h^*})$ \[ \! \! \! \! (D_2\eta)(\xi_1, .., \xi_{q-1})(\phi_1, .., \phi_{p+1}) =\sum_j \eta(\iota(\phi_j), \xi_1, .., \xi_{q-1}) (\phi_1, ..,\hat{\phi_j},..,\phi_{p+1}) \] Let us define $C:{\cal B}^{p,q} \rightarrow {\cal B}^{p-1,q+1}$ by (for $p>0$) \[ (C\eta)(\xi_1,..,\xi_{q+1})(\phi_1,..,\phi_{p-1})= \] \[ =\sum_i (-1)^{i+1} \eta(\xi_1,..,\hat{\xi_i},..,\xi_{q+1}) (\pi(\xi_i),\phi_1,..,\phi_{p-1}) \] A straightforward computation shows that \begin{equation}\label{chom} D_2 \circ C + C \circ D_2 = {\bf 1} \otimes 1 + 1 \otimes \rho, \end{equation} where {\bf 1} denotes the derivation coming from the identity on ${\cal B}^{1,0}$. Note that the second part is in fact the identity (extended as a derivation) on $\Lambda_AP_1$. So, from (\ref{chom}) we conclude that $D_2$ has only non-zero cohomology in degree $(0,q)$ (the only degrees for which $C$ is not defined). Finally, if we define $C$ to be zero for degree $(0,q)$ elements, then (\ref{chom}) remains true (the first term on the rhs is of course zero in degree $p=0$) and it is easy to derive that in this degree ker($D_2$)=Im($\Phi$), which proves \[ H_D({\cal B}) \simeq H_{D_1}(H_{D_2}({\cal B})) = H_{D_1}({\rm Im}(\Phi)) \simeq H(M). \; \; \; \Box \] \chapter{BRST model for Cohomological Field Theories} In this chapter we present a finite dimensional model for Cohomological Field Theories. This model will be derived from constructions in the previous chapter by choosing an appropriate symmetry group. We will describe how path integrals look in this model and identify the partition function with the Mathai-Quillen representative for the equivariant Thom class. This analysis of path integrals uses the Fourier transform of differential forms as described in chapter one and was published in [Ka1]. \section{Introduction to the model} Because in field theory one uses actions rather than Hamiltonians, we will have to work with the path space of the configuration space rather than with the configuration space itself.
So from now on, $M$ will represent a space of paths of some field theory. The construction of the BRST complex is the same in both cases, so that we can use the results of chapter two. In Cohomological Field Theories, field theories with actions ${\cal S}=0$, the symmetry group is Diff$(M)$, the set of all possible transformations of the fields (a field is just an element of $M$). The stabilizer at each point is isomorphic to the subgroup of diffeomorphisms leaving one point fixed. Thus we are really in an $M=H/H_0$ situation of the previous chapter, though there we took $H$ to be finite dimensional to avoid analytical complexities. In the most interesting Cohomological Field Theories there is an additional symmetry group $G$ (gauge symmetries). Of course, Diff$(M)$ already contains all possible diffeomorphisms, so at first it is not obvious what is gained by introducing $G$. However, for field theories with actions that are only $G$-invariant, the dynamics is on $M/G$ rather than on $M$ (like the YM example in the introduction of this thesis). Also, the space of all fields is often contractible, so from a mathematical point of view $M/G$ is a lot more attractive to study. We will achieve this by constructing the BRST complex associated to a $G \rhd \!\!\! < {\rm Diff}(M)$ symmetry on $M$ and then restricting the algebra appropriately. We will prove that this restricted complex can be mapped to the Cartan model for equivariant cohomology of $(M,G)$, inducing an isomorphism of the cohomologies. As in chapter two, we shall use a finite dimensional Lie group $H$ instead of Diff$(M)$ (for the case that $H$ is infinite dimensional we refer to [CK]). $H$ is supposed to act transitively on $M$, so that $M \simeq H/H_0$ for a certain $H_0<H$. Furthermore, the symmetry group will be the semi-direct product $S=G \rhd \!\!\! < H$, for the direct product $G \times H$ does not provide an action on $M$ (at least, not of the form $(g,h)\cdot x = g \cdot (h \cdot x)$). Let us assume, for simplicity, that $G<H$; then the product in $S$ is defined as \begin{equation} (g,h) \cdot (g',h') = (gg',(g')^{-1}hg'h') \end{equation} It is easy to check that this product gives rise to a well defined group action of $S$ on $M$. The Lie bracket, associated to this product on $S$, reads \begin{equation} [(\xi,\nu),(\xi',\nu')]=([\xi,\xi'],[\nu,\xi']+ [\xi,\nu']+[\nu,\nu']) \end{equation} Note that the Lie subalgebras $\{(0,\nu) \mid \nu \in {\got h} \}$ and $\{(\xi,-\xi) \mid \xi \in {\got g} \}$ are invariant under the adjoint action, whereas $\{ (\xi,0) \mid \xi \in {\got g} \}$ is not. In fact, the first two subalgebras are commuting ideals whose direct sum equals the whole algebra. The reason for this is the existence of the following isomorphism $G \rhd \!\!\! < H \rightarrow G \times H$, $(g,h) \mapsto (g,gh)$. \section{The model} The aim of this section is to show that applying chapter two to the symmetry group $S = G \rhd \!\!\! < H$, we obtain the differential algebra $(B, \delta)$, where $B=W({\got g}) \otimes \Omega(M)$ and $\delta={\rm d}_W \otimes 1 + 1 \otimes {\rm d}_M + \omega^a \otimes {\cal L}_a - \phi^b \otimes \iota_b$ (see section 1.2.3). Restriction to a subalgebra will give the Cartan model for equivariant cohomology. To arrive at the differential $\delta$, we will need the splitting (\ref{d1d2}) of d$_W \otimes 1$ into d$_1 + {\rm d}_2$. The only difference with the previous chapter is that we have a larger symmetry group $S=G \rhd \!\!\! < H$ here.
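(The Lie algebra structure just described is easily checked mechanically. In the following minimal Python sketch, with helper names of our own choosing, we realize both ${\got g}$ and ${\got h}$ as $su(2)$, i.e.\ as anti-Hermitian traceless matrices, purely for illustration; the bracket above is then verified to satisfy the Jacobi identity, and the two subalgebras $\{(0,\nu)\}$ and $\{(\xi,-\xi)\}$ are verified to be commuting ideals.)

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# su(2) basis: i/2 times the Pauli matrices
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
basis = [0.5j * s for s in sigma]

def rnd():
    """A random element of su(2)."""
    return sum(c * b for c, b in zip(rng.standard_normal(3), basis))

def comm(a, b):
    return a @ b - b @ a

def brk(x, y):
    """[(xi,nu),(xi',nu')] = ([xi,xi'], [nu,xi'] + [xi,nu'] + [nu,nu'])."""
    (xi, nu), (xi2, nu2) = x, y
    return (comm(xi, xi2), comm(nu, xi2) + comm(xi, nu2) + comm(nu, nu2))

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

x, y, z = [(rnd(), rnd()) for _ in range(3)]

# Jacobi identity for the semi-direct product bracket
jac = add(add(brk(x, brk(y, z)), brk(y, brk(z, x))), brk(z, brk(x, y)))
assert np.allclose(jac[0], 0) and np.allclose(jac[1], 0)

zero = np.zeros((2, 2), complex)
xi, nu = rnd(), rnd()

a = brk(x, (zero, nu))           # {(0,nu)} is an ideal
assert np.allclose(a[0], 0)

b = brk(x, (xi, -xi))            # {(xi,-xi)} is an ideal
assert np.allclose(b[0] + b[1], 0)

c = brk((zero, nu), (xi, -xi))   # ...and the two ideals commute
assert np.allclose(c[0], 0) and np.allclose(c[1], 0)
\end{verbatim}

With these checks in hand, we return to the construction.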
So, instead of the exact sequence (\ref{seq1}), we have \begin{equation} 0 \rightarrow K_{\got h} \oplus V_{\got g} \rightarrow V_{{\got g} \rhd \!\!\! < {\got h}} \rightarrow TM \rightarrow 0 \end{equation} where $V_{\got g}$ denotes the trivial vector bundle $M \times {\got g}$, where the third arrow is given by $(x,(\xi,\nu)) \mapsto V_{\xi +\nu}(x)$ and the second one consists of the imbedding $K_{\got h} \rightarrow V_{\got h}$ and the map $V_{\got g} \rightarrow V_{{\got g} \rhd \!\!\! < {\got h}}, \;\xi \mapsto (\xi,-\xi)$. Thus the BRST algebra, associated to the $S$-action on $M$, equals ${\cal B} =S({\got g^*}) \otimes \Gamma(SK^*_{\got h}) \otimes \Lambda({\got g^*}) \otimes \Lambda({\got h^*})$ and is equipped with the following gradings \begin{equation} {\cal B}^{p,q,r,s} = \left (S^p({\got g^*}) \otimes \Lambda^q({\got g^*}) \right ) \otimes \left ( \Gamma(S^rK^*_{\got h}) \otimes \Lambda^s({\got h^*}) \right ). \end{equation} As in the previous chapter, we will use the map (\ref{fimap}) again, with ${\got g} \rhd \!\!\! < {\got h}$ instead of ${\got h}$. If we followed the strategy of section 2.3, then we would prove that the differential $D_2$ has non-zero cohomology only in dimension $p+r=0$ (hence $p=r=0$) and end up with the de Rham complex $(\Omega(M),{\rm d})$. However, since we would like to end up with $W({\got g}) \otimes \Omega(M)$, we will follow a slightly different strategy. We will split $D_2$ into two parts, $D_2^{\got g}$ and $D_2^{\got h}$, and only use the latter to get rid of the $r$-grading and to keep the $p$-grading. Furthermore, we will use that, applying (\ref{fimap}), contraction and action of ${\got g}$ on $C^\infty(M) \otimes \Lambda {\got h^*}$ correspond to contraction and Lie derivation of ${\got g}$ on $\Omega(M)$. This is immediate after a glance at the definition (\ref{fimap}). We shall compute the BRST differential now and identify it with (\ref{e11}), using a map like (\ref{fimap}). Suppose $\eta \in {\cal B}^{p,q,r,s}$, then, as in chapter two, $D\eta=D_1\eta + D_2\eta$, where $D_1\eta \in {\cal B}^{p,q+1,r,s} \oplus {\cal B}^{p,q,r,s+1}$ and $D_2\eta \in ({\cal B}^{p+1,q-1,r,s} \oplus {\cal B}^{p+1,q,r,s-1}) \oplus {\cal B}^{p,q,r+1,s-1}$. Thus we obtain operators $D_2^{\got g}, D_2^{\got h}$, respectively given by projection on the different homogeneous subspaces of ${\cal B}$. \begin{thm} $D_2^{\got h}$ commutes with $D_1+D_2^{\got g}$, hence $({\cal B},D)$ is a double complex with gradings $2p+q+r+s$ and $r$. $D_2^{\got h}$ has cohomology only in degree $r=0$, so the associated spectral sequence degenerates at $E_2$. Furthermore, the map \begin{equation} {\bf 1} \otimes \Phi :W({\got g}) \otimes \Omega(M) \rightarrow {\cal B}, \end{equation} where $\Phi$ is the map (\ref{fimap}) and $W({\got g}) \otimes \Omega(M)$ is equipped with differential (\ref{e11}), ${\rm d}_W \otimes 1 +1\otimes{\rm d} + \omega^a\otimes {\cal L}_a -\phi^b\otimes \iota_b$, is a map of differential algebras that maps $W({\got g}) \otimes \Omega(M)$ isomorphically onto the $E_1$ term $H_{D_2^{\got h}}({\cal B})$. \end{thm} {\bf Proof}. The first part of the theorem is straightforward.
To prove that $D_1 +D_2^{\got g}$ gives the differential (\ref{e11}), note that, since $[{\got g},{\got h}] \subset {\got h}$, we can split the Lie algebra cohomology operator $D_1$ into $D_1^{\got h} + D_1^{\got g}$, where $D_1^{\got h}$ is the $\Gamma(SK^*_{\got h})$-valued Lie algebra cohomology differential on ${\got h}$ (note that the action of ${\got h}$ on $S({\got g^*})$ equals zero because of the last remark of section 3.1) and $D_1^{\got g}$ is the $S({\got g^*}) \otimes \Gamma(SK^*_{\got h}) \otimes \Lambda {\got h^*}$-valued Lie algebra cohomology operator on ${\got g}$. With these notations, the following observations are straightforward and finish the proof of the theorem. 1) $D_1^{\got h}$ is probably the simplest. It is just the Lie algebra cohomology operator on $\Lambda {\got h^*}$ with values in $\Gamma(SK^*_{\got h})$, as defined in (\ref{brstdif}). Under the map (\ref{fimap}) it turns into the de Rham differential ${\rm d}$ on $\Omega(M)$. \vspace{5pt} 2) $D_1^{\got g}$ is more complicated due to the fact that ${\got g}$ is embedded diagonally in ${\got g} \oplus {\got h}$. This last fact implies that $D_1^{\got g}$ is not just the Lie algebra cohomology operator on $\Lambda {\got g^*}$ with values in $S({\got g^*})$, given by d$_2$ in (\ref{d1d2}), but rather with values in $S({\got g^*}) \otimes \Gamma (SK^*_{\got h}) \otimes \Lambda{\got h^*}$. Thus, $D_1^{\got g}$ gives us not only the d$_2$ part of (\ref{d1d2}), but also the $\omega^a \otimes {\cal L}_a$ part, after applying (\ref{fimap}). \vspace{5pt} 3) $D_2^{\got g}$ consists of two parts, for the same reason. A section of $V_{\got g}$ is mapped to itself, giving the d$_1$ part of (\ref{e11}) (see (\ref{d1d2})), and to minus itself (viewed now as a section of $V_{\got h}$), giving the $-\phi^b \otimes \iota_b$ part, after applying (\ref{fimap}). \vspace{5pt} 4) Finally, $D_2^{\got h}$ is used, as in section 2.3, to identify the BRST complex (through a simple spectral sequence) with the complex $W({\got g}) \otimes \Omega(M)$, equipped with the differential (\ref{e11}). \hspace{3cm} $\Box$ \vspace{5pt} The cohomology of the differential algebra above, the unrestricted BRST algebra, is just $H(M)$ (see (\ref{isonr})). What we are really interested in is the cohomology of the restricted algebra, $(S({\got g^*}) \otimes \Omega(M))^G$. This is because we want to study quantum mechanics on $M/G$ rather than on $M$ (which is an affine space, usually). In the first chapter we saw that restriction gives the Cartan model for equivariant cohomology. So, starting with the Lie algebra cohomology complex, we have finally arrived at the Cartan model! It turns out that from the point of view taken in this section, the difference between the Weil model and the Cartan model is just the choice of a basis in ${\got g} \oplus {\got h}$. Indeed, if we define an action of $G \times H$ on $M$ by $(g,h)\cdot x = h \cdot x$, then the previous construction will give us the usual differential ${\rm d}_W \otimes 1 + 1 \otimes {\rm d}$. From the isomorphism at the end of section 3.1 we see that both group actions are in fact the same and that the difference in the differentials comes from the choice of a basis to describe the Lie algebra cohomology operator. \section{Incorporation of anti-ghosts} Up till now, we have constructed the following model for a $G \rhd \!\!\! < {\rm Diff}(M)$-action on a space of fields $M$.
The positive degree part of the BRST algebra equals $(S({\got g^*}) \otimes \Omega(M))^G$ and the BRST operator on this algebra equals $s=1 \otimes {\rm d} - \phi^b \otimes \iota_b$. In order to compute path integrals (representing physical quantities) we have to choose a (gauge fixing) function $F:M \rightarrow V$ (like the functions $g^i$ at the beginning of chapter two), where $V$ is some vector space of dimension $n$. To respect the $G$-action, we require $V$ to be a representation space for $G$ and $F$ to be equivariant. In the case of a free $G$-action, $F$ would represent a section of the associated vector bundle $M \times_G V$. We will describe how to obtain a quantum action using this $F$ and we will prove that the final expression is the Mathai-Quillen representative for the equivariant Thom class. As in the introductory example of chapter two, let us introduce some more variables: \begin{equation} {\cal P}=\left(S({\got g^*}) \otimes \Omega(M) \otimes \Omega(V) \otimes \Omega(V^*) \right)^G \end{equation} We extend $s$ as follows. On $S({\got g^*}) \otimes \Omega(M) \otimes \Omega(V)$ it is the differential of the Cartan model for $(G,M \times V)$. On $S({\got g^*}) \otimes \Omega(V^*)$ it is the Fourier transform (${\cal F}:\Omega(V) \rightarrow \Omega(V^*)$) of $s$ restricted to $\Omega(V)$. This extension coincides with the description in, e.g., [OSvB]. Let $z^i$ denote linear coordinates on $V$, $b_i$ the dual ones on $V^*$. Furthermore, let $\psi^i={\rm d}z^i$ and $\bar{\psi}_i={\rm d}b_i$, then $\iota_\xi(\psi^j)=\xi \cdot z^j$ (for all $\xi \in {\got g}$) and \[{\cal F}(\iota_\xi (f \otimes \eta)) = \int f \; e^{i \;<b \mid z>} \otimes (\iota_\xi \; \eta) e^{i\bar{\psi}_j \psi^j} dzd\psi \; = \] \[ =-\int f \; e^{i\; <b \mid z>} \otimes (-1)^\eta \eta (\iota_\xi e^{i\bar{\psi}_j \psi^j}) dzd\psi \; = \] \[ =\int i\xi(z^j) f \; e^{i\; <b \mid z>} \otimes \bar{\psi}_j\eta \; e^{i\bar{\psi}_j \psi^j} dzd\psi \; = \] \[=\xi(\sfrac{\partial}{\partial b_j}) \otimes \bar{\psi}_j {\cal F}(f \otimes \eta) = (-\sfrac{\partial}{\partial b_j} \otimes \xi \cdot \bar{\psi}_j) {\cal F}(f \otimes \eta). \] for any $f \otimes \eta \in \Omega(V)$. Therefore, $s$ acts as follows on $\Omega(V) \otimes \Omega(V^*)$ \begin{eqnarray} s(z^i) = \psi^i , & & s(\psi^i) = -\phi^a \otimes X_a \cdot z^i \nonumber \\ s(\bar{\psi}_i) = -b_i, & & s(b_i) = \phi^a \otimes X_a \cdot \bar{\psi}_i \end{eqnarray} Let us state all this in another way. If $D_M={\rm d}-\phi^b \iota_b$ is the Cartan differential for $(G,M)$ and $D_V$ is the one for $(G,V)$, then \begin{equation} s=D_M + D_V + {\cal F} \circ D_V \circ {\cal F}^{-1} \end{equation} Using this, one easily sees that, integrating $\eta e^{i \; s(z^j \bar{\psi}_j)}$ over $V^*$, where $\eta \in {\cal P}$ is any $s$-closed element, one obtains $\eta' \in \left(S({\got g^*}) \otimes \Omega(M) \otimes \Omega(V) \right)^G$ that is closed for $D_M +D_V$. This is because the exponential indicates that we are really Fourier transforming $\eta$ (see remark 1.5). If we pull this Fourier transformed element back using the map $F:M \rightarrow V$, we obtain an equivariantly closed differential form on $M$. In this way path integral quantization combined with BRST theory gives rise to representatives for equivariant cohomology classes. \section{Path integrals and correlation functions} The following proposition is an equivariant version of (\ref{tc1}).
\begin{thm}\label{tc2} The element \begin{equation}\label{reptc2} \sfrac{1}{(2\pi)^n} \int_{V^*} e^{i \; s(<z-ib \mid \bar{\psi}>)} db \; d\bar{\psi} = \sfrac{1}{(\sqrt{\pi})^n} e^{-z^2} \; \int e^{i \; <\psi \mid \bar{\psi}> + i \; <\phi \cdot \bar{\psi} \mid \bar{\psi} >} d\bar{\psi} \end{equation} is a closed form in $(\Omega(V) \otimes S({\got g}^*))^G$ that represents the equivariant Thom class. \end{thm} {\bf Proof}. Essentially, the proof consists of showing that (\ref{reptc2}) is closed under ${\rm d}-\phi^a \otimes \iota_a$ and that its top form part equals (\ref{reptc1}), as in the first example of section 1.4.2. Note that the integrand above can be written as a product $e^{i\; s <z \mid \bar{\psi}> \; } \; e^{s<b \mid \bar{\psi}>}$, which means that we are Fourier transforming $e^{s<b \mid \bar{\psi}>}$. Because this form is obviously $s$-closed, the integral itself is $D_V$-closed. Furthermore, its top form part equals expression (\ref{reptc1}). This can be verified by replacing $\phi$ by zero. $\Box$ \vspace{5pt} {\bf Remark} 3.1) This representative is the same as the one constructed in [MQ]. Also in [AJ] the connection between this representative and quantum actions of cohomological field theories is explained. \vspace{5pt} To end this section, we shall make some comments on the context in which this theorem may be used. Suppose that the $G$-action is free. We can pull back (\ref{reptc2}) by $F:M \rightarrow V$ to obtain a representative (in the Cartan model) for the Poincar\'e dual of $F^{-1}(0)/G$ in $M/G$. Physicists use the expression for the Poincar\'e dual to compute quantum correlation functions of BRST invariant observables. We shall explain what this means. Denote $F^{-1}(0)/G \subset M/G$ by $X$ and let $i:X \rightarrow M/G$ be the inclusion. Using the formula above, we obtain a multilinear function on $H^*(M/G)$ by integrating products of closed differential forms over $M/G$. This polynomial, evaluated at a given product of forms, is called a quantum correlation function. BRST invariant observables for a topological quantum (field) theory are just cohomology classes on the configuration space (which equals $M/G$ in this case). So, mathematically, it is clear what the correlation functions are. They are products of classes in $H^*(M/G)$ (of total degree $\dim(X)$) integrated over $M/G$ using the Poincar\'e dual of $X$. Two problems remain. The first is that we would like to integrate over $M$ rather than over $M/G$ (to incorporate non-free group actions). Secondly, we still have to apply the Weil homomorphism to get a differential form representative instead of a Cartan model representative. BRST theory solves the two problems together by introducing a dual Weil algebra, $W({\got g}^*) =S({\got g}) \otimes \Lambda({\got g})$, generated by $\bar{\phi}_a$ (degree -2) and $\bar{\omega}_a$ (degree -1) ($a=1, \ldots , \dim(G)$). The BRST differential extends as follows \begin{equation} s \; \bar{\omega}_a = -f^c_{ab} \phi^b \otimes \bar{\phi}_c , \hspace{20pt} s \; \bar{\phi}_a = \bar{\omega}_a \end{equation} Furthermore, a Riemannian metric on $M$ and a non-degenerate invariant bilinear form on ${\got g}$ are used to obtain a linear map $\nu: {\got g}^* \rightarrow \Omega(M)$ from the infinitesimal action ${\got g} \rightarrow \Gamma(TM)$. Because we have chosen a basis of ${\got g}$ we can define the 1-forms $\nu^a$ to be the components of the map $\nu$.
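To fix ideas, here is one natural construction of such a map (a sketch only; the notation $V_a$ for the generating vector fields of the basis $X_a$ of ${\got g}$, $g$ for the Riemannian metric and $k^{ab}$ for the inverse of the invariant bilinear form is introduced just for this purpose): \[ \nu^a = k^{ab}\, g(V_b \, , \, \cdot \,) \; \in \Omega^1(M). \] Any other choice differs from this one by a transformation of the kind mentioned in remark 3.2 below.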
To solve the two problems mentioned above, just add the following factor to the integrand \begin{equation}\label{eqn47} \int_{W({\got g}^*)} e^{i \; s(\bar{\phi}_a \nu^a)} d\bar{\phi} \; d\bar{\omega} \end{equation} and perform an extra integration over the $\phi^a$. In [AJ] it is shown that this is equivalent to multiplying by the vertical top form and substituting the \lq Riemannian' curvature for $\phi$, thereby solving the two problems together. \vspace{10pt} {\bf Remarks} 3.2) We are well aware of the fact that this last part is not at all self-contained. This is because we do not like rewriting parts of [AJ]. Note that we do not need to calculate the precise expression for the connection 1-form, because a change of $\nu$ can be absorbed by a transformation of the variables $\bar{\phi}$ and $\bar{\omega}$, whose Jacobians cancel each other (as usual). 3.3) The final expression we obtain this way also has meaning in the case that the action of $G$ is not free. We still have a polynomial on cohomology classes in the Cartan model expressed as an integral over $M$. It is only more difficult to identify the cohomology of the Cartan model with the de Rham cohomology of the quotient space, since the latter is not well defined. However, this is a serious case to investigate, since it is very common in physics to have non-free group actions (e.g., in TYMT, if there are reducible connections). In the next chapter we will make a start with this, applying the model to the case of a symplectic manifold $M$ on which $G$ acts Hamiltonially and where $F$ is the momentum map $\mu$. 3.4) Combining expressions (\ref{reptc2}) and (\ref{eqn47}), we get the quantum action for TYMT as derived in [W1]. \chapter{Applications to symplectic geometry} In this last chapter we will see that the integrals of chapter three lead to a fixed point formula for equivariantly closed forms. Applying this to Hamiltonian circle actions we will show how to obtain information on the ring structure of the cohomology of the symplectic quotient. As an illustration we compute the cohomology rings of all possible symplectic quotients of ${\bf CP}^n$ by $S^1$. Apart from the localization formula for equivariant forms on manifolds with boundary, all the results here are also in [Ka2]. Similar results were obtained (a few months earlier) by Wu ([Wu]), using [W3]. Very recently, a beautiful generalization appeared in [JK]. \section{A fixed point formula} Let $M$ be a compact manifold with a boundary $\partial M$. Let $\mu$ be a non-negative function on $M$ such that $\partial M = \mu^{-1}(0)$. Suppose $M$ is equipped with a circle action such that $\partial M$ is invariant and the fixed points do not lie on the boundary. This implies that all stabilizer groups of points on the boundary are finite and thus $X:=\mu^{-1}(0)/S^1$ is an orbifold ([Sa]). We will assume that the main stabilizer is $\{ 1 \}$ to avoid extra factors in the formulas. We shall denote by $V$ the generating vector field of the circle action ($S^1={\bf R}/{\bf Z}$). \subsection{Isolated fixed points} We shall first assume that all fixed points are isolated, so there are only finitely many of them. Let $\alpha=\sum \alpha_j \phi^j \in {\bf R}[\phi] \otimes \Omega(M)^{S^1}$ be an equivariantly closed differential form on $M$, i.e., ${\rm d} \alpha_j = \iota_V \alpha_{j-1}$ for all $j$.
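For instance (anticipating remark 4.1 below, and assuming the sign convention $\iota_V \sigma = -{\rm d}\mu$ for the momentum map, which is the one implicitly used there): if $\sigma$ is an invariant symplectic form with Hamiltonian $\mu$, then \[ \alpha = \sigma - \mu \phi, \hspace{20pt} \alpha_0 = \sigma, \; \; \alpha_1 = -\mu, \] is equivariantly closed, since ${\rm d}\alpha_0={\rm d}\sigma=0$ and ${\rm d}\alpha_1 = -{\rm d}\mu = \iota_V \sigma = \iota_V \alpha_0$.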
It is mapped to a closed differential form $r(\alpha)$ on $X$ by restriction to $\partial M$ followed by the map that gives the equivalence between the Cartan model and differential forms on the quotient. This will be explained in more detail in the next section. The integral of $r(\alpha)$ over $X$ can be expressed as a summation over the fixed point set $F \subset M$ by means of the following formula. \begin{thm}\label{fp} Let $M$ be an $S^1$-manifold with an invariant boundary $\partial M$ and isolated fixed points $F$. Let $\alpha$ be an equivariantly closed form on $M$ of total degree $2n-2={\rm dim}(M)-2={\rm dim}(X)$. Then \begin{equation}\label{4} \int_{\partial M/S^1} r(\alpha) = \sum_{P \in F} \frac{\alpha^{(0)}_{n-1}(P)} {\prod \; n_i(P)} \end{equation} where $\prod n_i(P)$ is the product of the weights of the circle action linearized around the fixed point $P$ and the superscript $(0)$ indicates that the differential form $\alpha_{n-1}$ is of degree zero. \end{thm} {\bf Proof}. Let $M_\epsilon$ denote the manifold $M$ from which small neighbourhoods around the fixed points have been removed. On $M_\epsilon$ the circle action is locally free so we can find a connection 1-form $\theta$ on $M_\epsilon$. The theorem can be proved by using the following integral (of chapter three) \begin{equation}\label{complint} \frac{1}{\sqrt{\pi}} \int_{M_\epsilon}\int{\rm d}\phi (\sum \alpha_j \phi^j)(e^{-\mu^2} {\rm d}\mu) \int_{W({\got g^*})} e^{i\bar{\omega}\theta} e^{i\bar{\phi}({\rm d}\theta - \phi)}. \end{equation} Performing the integration over $\phi$, $\bar{\phi}$ and $\bar{\omega}$, we obtain \begin{equation} \frac{1}{\sqrt \pi} \int_{M_\epsilon} e^{-t^2\mu^2} {\rm d} (t\mu) \wedge \theta \wedge \sum \alpha_j ({\rm d} \theta)^j \end{equation} where we substituted $t\mu$ for $\mu$ ($t \in {\bf R}$). Computing the limit $t \rightarrow \infty$ in two different ways one obtains the identity (\ref{4}). This proof was given in [Ka2]. A more direct proof goes as follows. Since ${\rm d}(\theta \wedge \sum \alpha_j ({\rm d} \theta)^j)= \iota_V (\theta \wedge \sum \alpha_j ({\rm d} \theta ) ^{j+1})$, the top form part on both sides must be zero. Therefore, \begin{equation}\label{27} 0= \int_{M_\epsilon} {\rm d} ( \theta \wedge \sum \alpha_j ({\rm d}\theta)^j) = \int_{\partial M_\epsilon} ( \theta \wedge \sum \alpha_j ({\rm d}\theta)^j) \end{equation} One part of the boundary of $M_\epsilon$ is the boundary of $M$, hence part of the rhs equals \begin{equation}\label{16} \int_{\mu^{-1}(0)} \theta \wedge \sum \alpha_j ({\rm d}\theta)^j \end{equation} It is important to remark that the integrand of (\ref{16}) equals $\theta \wedge r(\alpha)$ (this will be explained in the next section). Integration over the orbits of the circle action gives us the lhs of the theorem. The other part of the boundary consists of boundaries of small neighbourhoods around the isolated fixed points. We will choose local coordinates around each fixed point to compute (\ref{27}). Identify an open neighbourhood of some fixed point $P$ with ${\bf R}^{2n}$ such that $S^1$ acts linearly with weights $n_i(P)$, $i=1, \ldots, n$. Using linear coordinates $x^i, y^i$ we can choose \begin{equation}\label{conn} \theta = \frac{\sum n_i (x^i {\rm d} y^i - y^i {\rm d} x^i)} {\sum n_i^2 ((x^i)^2 +(y^i)^2)} \cdot \frac{1}{2\pi} \end{equation} Using smooth partitions of unity this connection has an extension to a smooth connection on $M_{\epsilon}$.
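As a check on (\ref{conn}) (a short verification; we assume the normalization in which, for $S^1={\bf R}/{\bf Z}$, the generating vector field reads $V=2\pi \sum_i n_i (x^i \frac{\partial}{\partial y^i} - y^i \frac{\partial}{\partial x^i})$ in these coordinates): since $\iota_V(x^i {\rm d} y^i - y^i {\rm d} x^i) = 2\pi n_i ((x^i)^2+(y^i)^2)$, we get \[ \iota_V \theta = \frac{2\pi \sum n_i^2 ((x^i)^2+(y^i)^2)}{\sum n_i^2 ((x^i)^2+(y^i)^2)} \cdot \frac{1}{2\pi} = 1, \] and $\iota_V {\rm d}\theta = {\cal L}_V \theta - {\rm d}\, \iota_V \theta = 0$ by invariance of $\theta$, so $\theta$ is indeed a connection 1-form for the locally free action.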
The only term of the integrand in (\ref{27}) that gives a non-zero contribution is the term with the highest singularity, $\theta \wedge ({\rm d} \theta)^{n-1}$. Integrating this term over a small sphere surrounding the origin gives $1/\prod \; n_i(P)$ (the most convenient way to do the calculation is to use polar coordinates in each ${\bf R}^2$). Noticing that the coefficient $\alpha_{n-1}$ is a zero form, we finally obtain the rhs of the formula in the theorem. \vspace{5pt} {\bf Remark} 4.1) Let us apply the theorem to a Hamiltonian circle action with momentum map $\mu$. Let $M$ be the manifold given by $\mu \geq 0$. The formula expresses integrals over the reduced phase space $X=\mu^{-1}(0)/S^1$ as a sum over fixed points $P$ for which $\mu(P)>0$. Of course, we get a similar formula for $\mu \leq 0$. Combining these formulas and using the equivariantly closed form $\alpha= (\sigma - \mu \phi)^j \phi^{n-j-1}$, we get the following series of identities \begin{equation} \sum_{P \in F} \frac{\mu(P)^j}{\prod n_i(P)} =0 \; \; \; (j=0,1, \ldots ,n-1), \end{equation} where $F$ is now the set of all fixed points of the Hamiltonian circle action. These identities also follow directly from the Duistermaat-Heckman formula ([DH]), see, e.g., [Wu], corollary 3.3. \subsection{Non-isolated fixed points} Going through the proof once more, one sees that the theorem can be extended to the case of non-isolated fixed points. Suppose $F$ is a union of connected submanifolds $F_i^{(2k)}$, where the upper index denotes the dimension of the component. Let $X$ again denote the quotient $\partial M/S^1$. If the normal bundle of $F$ in $M$ is trivial, we can use local analysis as above and the formula of the theorem generalizes to \begin{equation}\label{genfpf} \int_X r(\alpha) = \sum_{F^{(2k)}_i \subset F} \frac{ \int_{F_i^{(2k)}} \alpha_{n-k-1}^{(2k)}} {\prod \; n_j(F_i^{(2k)})} \end{equation} However, if the normal bundle of $F$ is non-trivial, the rhs of (\ref{genfpf}) is only the first term of a much more complicated formula involving $\alpha_j$ for $j>n-k-1$ and characteristic classes of the normal bundle. We shall derive this formula here, using localization for integrals of equivariantly closed forms ([BGV]). Notice that we are after an expression for $\int \theta \wedge \sum \alpha_j ({\rm d} \theta)^j$ around $F_i^{(2k)}$ as an integral over $F_i^{(2k)}$. Proofs of equivariant localization provide such an expression. In fact, the form $\theta \wedge \sum \alpha_j ({\rm d} \theta)^j$ appears explicitly in [DH] (Addendum, (2.6)-(2.11)). Replacing their factor $(-1)^ke^{J_X}\frac{\sigma^{n-k}}{(n-k)!}$ by $\alpha_{k-1}$, their local computation gives us (together with the localization formula in [BGV]) \begin{equation}\label{nifpf} \int_X r(\alpha) = \sum_{F_i^{(2k)} \subset F} \int_{F_i^{(2k)}} \frac{\sum \alpha_j \phi^{j+1}}{\epsilon_i(\phi)} \end{equation} where $\epsilon_i(\phi)$ is the equivariant Euler class of the normal bundle of $F_i^{(2k)}$ in $M$. Note that the rhs should be independent of $\phi$ and that substituting $\epsilon_i(\phi)= \phi^{n-k} \prod_{j=1}^{n-k} n_j(F_i^{(2k)}) + l.o.t.$ gives the rhs of (\ref{genfpf}) as the leading term. \vspace{15pt} It is appropriate to say something here on Witten's non-abelian localization ([W3]), since his methods also produce a (different) proof of the theorem above. Let $G$ act on a manifold $M$ of dimension $m$. Let ($(S({\got g}^*) \otimes \Omega(M))^G$, $D$) be the Cartan model for the equivariant cohomology of $M$.
Define $S({\got g}^*)$-valued integration over $M$ of an equivariant form $\alpha=\sum_i p_i \otimes \alpha^{(i)}$, where $\alpha^{(i)} \in \Omega^i(M)$ and $p_i \in S({\got g^*})$, as follows \begin{equation} \int_M \alpha = \int_M \sum_i p_i \otimes \alpha^{(i)} = p_m \cdot \int_M \alpha^{(m)} \end{equation} It is important to note that this equivariant integration gives zero on $D$-exact forms, so that it is a well defined operation on equivariant cohomology classes. Now, let $\lambda \in \Omega^1(M)^G$ and let $\alpha$ be a closed equivariant form. Since ${\rm exp}(itD(\lambda))-1$ is $D$-exact, the forms $\alpha$ and $\alpha \wedge {\rm exp}(itD(\lambda))$ are in the same class, so \begin{equation}\label{12} \int_M \alpha = \int_M \alpha \wedge e^{itD(\lambda)} \end{equation} This is the key to Witten's non-abelian localization. Since $D(\lambda)$ has a component $\iota_V \lambda \in \Omega^0(M)$, the rhs of (\ref{12}) localizes at the critical points of $\iota_V \lambda$ for $t \rightarrow \infty$. Of course, the integral does not depend on $t$, so the answer can be computed exactly using the stationary phase approximation. Witten integrates both sides of (\ref{12}) over $\got g$, interpreting the answers in terms of distributions. In [Wu] the computation is carried out explicitly for Hamiltonian circle actions and some special choice of $\lambda$ and $\alpha$. Formulas similar to (\ref{4}) are obtained. \vspace{5pt} {\bf Remark} 4.2) We can use (\ref{12}) to prove the following general localization formula for manifolds with boundary (under the same assumptions as above), extending that of [AB] and [BGV]. \begin{equation}\label{glf} \int_M \sum \alpha_i \phi^i = \sum_{F_i \subset F} \int_{F_i} \frac{\sum \alpha_i \phi^i} {\epsilon_i(\phi)} - \int_{\partial M/S^1} \sum_{i,k} \tilde{\alpha}_i ({\rm d}\theta)^k \phi^{i-k-1} \end{equation} where ${\rm d}\theta$ represents the curvature class of $\partial M \rightarrow X$ on the base space, and $\tilde{\alpha}_i$ is the form on $X$ that comes from the basic form $\alpha_i-\theta \wedge \iota_V \alpha_i$ on $\partial M$. The proof is almost the same as in [BGV]. The only difference is an extra boundary term $\int_{\partial M} \lambda \alpha \wedge \frac{e^{itD\lambda}-1} {D\lambda}$ that appears in (\ref{12}). Using this localization formula, we can derive the fixed point formula (\ref{4}) immediately. Taking $\alpha$ homogeneous of degree $2n-2$, the lhs vanishes and the rhs gives the formula. So, if people had looked at localization formulas on manifolds with boundaries, the fixed point formula for integrals over symplectic quotients of circle actions would have been found much earlier. \section{Cohomology of symplectic quotients} In this section we will explain how the fixed point formula can be used to compute the ring structure on the cohomology of symplectic quotients. We will start with some results for the Cartan model of equivariant cohomology. For the following special case the equivariant cohomology $H^*_G(M)$ can be computed (as a vector space). \begin{prop}\label{prop3.1} Let $M$ be a manifold with a compact $G$-action for which the odd Betti numbers are zero. Then the map \begin{equation}\label{dcc2} H^l_G(M) \rightarrow \bigoplus_p (S^p({\got g^*})^G \otimes H^{l-2p}(M)), \end{equation} which associates to an equivariant cohomology class its part involving the highest form degree, is well defined and surjective for every $l \geq 0$.
Moreover, {\rm dim}$H^l_G(M)=\sum_p {\rm dim} S^p({\got g^*})^G \cdot {\rm dim} H^{l-2p}(M)$. \end{prop} {\bf Proof.} The proof is based on the fact that the Cartan model actually has a double complex structure. To see this, note that $1 \otimes {\rm d}$ and $\phi^b \otimes \iota_b$ (super) commute because of the $G$-invariance. Furthermore, note that $\phi^b \otimes \iota_b$ increases the polynomial degree by one and does not change the sum of the polynomial degree and the form degree. For $1 \otimes {\rm d}$, vice versa. Since the difference of the two degrees is the form degree and since the odd Betti numbers vanish, we can apply theorem \ref{spseq1}, which proves the proposition. \vspace{5pt} {\bf Remark} 4.3) What this proposition in fact says is that, for the special case of vanishing odd Betti numbers, we know the dimensions of all the $H^l_G(M)$ as well as the top form parts (provided we know both $M$ and $G$ well enough). Reconstructing from an invariant top form part a $D$-closed element is called the zig-zag construction ([BT]). Obviously it is non-unique. \vspace{5pt} From chapter one we recall that if $M$ is a symplectic manifold, then the $G$-action in proposition \ref{prop3.1} is Hamiltonian. In fact, the proposition is true in a more general setting. \begin{thm}\label{thmKi} {\rm ([Ki], [Gi])}. Let $G$ be a connected compact Lie group acting Hamiltonially on a compact symplectic manifold $M$. Then the map \begin{equation}\label{dcc3} H^l_G(M) \rightarrow \bigoplus_p (S^p({\got g^*})^G \otimes H^{l-2p}(M)) \end{equation} which associates to an equivariant cohomology class its part involving the highest form degree is well defined and surjective for every $l \geq 0$. Moreover, {\rm dim}$H^l_G(M)=\sum_p {\rm dim} S^p({\got g^*})^G \cdot {\rm dim}H^{l-2p}(M)$. \end{thm} The proof of this theorem is more complicated than the proof of proposition \ref{prop3.1}. One can find it in [Ki] and [Gi]. The usefulness of this theorem is obvious. It says that the spectral sequence associated to the Cartan model degenerates, so that one can use the zig-zag construction to construct equivariant representatives from invariant polynomials on $\got g$ with values in the space of closed invariant differential forms. As an example we take the constant polynomial equal to the symplectic 2-form $\sigma$. It is closed for $1\otimes {\rm d}$ and from the theorem we know that $\phi^b \otimes \iota_b \sigma$ must be $1 \otimes {\rm d}$-exact. Actually, this is precisely the condition that the action is Hamiltonian. Thus we see that $\sigma$ corresponds to the equivariant representative $\sigma - \phi^b \otimes \mu_b$, where $\mu_b$ are the components of the momentum map. Let us assume now that $G$ acts locally freely on $Z=\mu^{-1}(0)$ and denote the quotient by $X$. Consider the following sequence of maps, starting with equivariant forms on $M$ and ending with differential forms on the quotient $X$. \begin{eqnarray}\label{psimaps} (S({\got g^*}) \otimes \Omega(M))^G \stackrel{\Psi_1}{\rightarrow} (W({\got g}) \otimes \Omega(M))_{\rm basic} \stackrel{\Psi_2}{\rightarrow} (W({\got g}) \otimes \Omega(Z))_{\rm basic} \end{eqnarray} \[ \stackrel{\Psi_3}{\rightarrow} \Omega(Z)_{\rm basic} \rightarrow \Omega(X) \] where $\Psi_1$ is the map $\Psi$ of section 1.2.3, $\Psi_2$ is just restriction to $Z$ and $\Psi_3 = c_\theta \otimes 1$ is the Chern-Weil homomorphism (section 1.2.2).
We shall denote the composition of these maps by (this notation was already introduced in the previous section to state the fixed point formula) \begin{equation} r: (S({\got g^*}) \otimes \Omega(M))^G \rightarrow \Omega(X) \end{equation} Kirwan ([Ki]) has shown that the map $\Psi_2$ induces a surjective map on the level of cohomology. Since the other maps induce isomorphisms of cohomologies, the induced map \begin{equation}\label{rstreep} \bar{r} : H_G(M) \rightarrow H(X) \end{equation} is an epimorphism. This is essential for the cohomology computations later on. We shall explicitly describe (\ref{psimaps}) for circle actions. Let $V$ be the generating vector field and let $\theta$ be a connection one-form on $Z$, that is, $\iota_V \theta =1$. Furthermore, let $\sum \alpha_j \phi^j \in {\bf R}[\phi] \otimes \Omega(M)^{S^1}$ be some equivariant differential form on $M$. Then \begin{equation}\label{rmap} r(\alpha) = \sum \alpha_j ({\rm d}\theta)^j - \theta \wedge \sum (\iota_V \alpha_j)({\rm d}\theta)^j \end{equation} which is a basic form on $Z$ and hence descends to the symplectic quotient $X$. Note that to obtain the first term on the rhs we just substituted the curvature form ${\rm d}\theta$ for $\phi$ in $\alpha$. To conclude this section we explain how one computes the ring structure on $H(X)$. The map (\ref{rstreep}) is a ring homomorphism and therefore the ring structure on $H(X)$ comes from the ring structure on $H_G(M)$ by dividing out the kernel ker($\bar{r}$). In principle, this kernel can be computed using Poincar\'e duality for orbifolds ([Sa]). Since $X$ is compact, integration over $X$ can be regarded as a non-degenerate bilinear form on the cohomology. If we know how to integrate elements coming from $H_G(M)$ we can compute ker($\bar{r}$) in the following way. A closed equivariant form $\alpha$ represents a class in ker($\bar{r}$) iff \begin{equation}\label{pd} \int_X r(\alpha) \wedge r(\beta) = 0 \end{equation} for all closed equivariant forms $\beta$. Using our fixed point formula we can compute the lhs and thus determine which $\alpha$ are in the kernel of $\bar{r}$. We will do this computation now for a series of examples. \section{The {\bf CP}$^n$-case} In this section we will give a series of examples with $M={\bf CP}^n$ and $G=S^1$. We shall compute the cohomology rings of all symplectic quotients of ${\bf CP}^n$ by $S^1$. \subsection{The circle action} Let $(z_0:\ldots:z_n)$ denote homogeneous coordinates on {\bf CP}$^n$. Every (linear) circle action on ${\bf CP}^n$ can be characterized by an $(n+1)$-tuple of integers $m_i$, such that \begin{equation}\label{lam} \lambda \cdot (z_0:\ldots:z_n)= (\lambda^{m_0}z_0:\ldots : \lambda^{m_n}z_n) \end{equation} Note that if the $m_i$ are all shifted by the same integer, then the action does not change. This action is Hamiltonian for the standard symplectic form on {\bf CP}$^n$. The requirement of isolated fixed points is equivalent to all the $m_i$ being different. The fixed point set $F$ in this case consists of the points $P_i=(0: \dots:1:\ldots :0)$, where the 1 is in the $i$-th place. As Hamiltonian (momentum map) we take \begin{equation}\label{ham} \mu_\nu : {\bf CP}^n \rightarrow {\bf R}, \;\; z \mapsto \frac{\sum m_i z_i \bar{z_i}}{\sum z_i \bar{z}_i} -\nu \end{equation} where $\nu$ is some real number. By varying $\nu$ we can obtain all possible symplectic quotients of the action (\ref{lam}). The special map $\mu_0$ will be denoted by $\mu$. In the sequel, $X$ will denote the reduced phase space $\mu^{-1}(\nu)/S^1$.
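For later use in the fixed point formula, let us also record the weights of the linearized action at the fixed points (a standard computation; the affine coordinates $w_j$ below are introduced only for this purpose). Around $P_i$ one can use the coordinates $w_j=z_j/z_i$ ($j \neq i$), in which the action (\ref{lam}) becomes linear, \[ \lambda \cdot w_j = \lambda^{m_j - m_i} w_j, \hspace{20pt} \mbox{so that} \hspace{10pt} n_j(P_i)=m_j-m_i \; \; (j \neq i). \]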
As one can see from (\ref{ham}), the values of $\mu_\nu$ at the fixed points equal the numbers $m_i-\nu$. Shifting all the $m_i$ by the same number can be absorbed by a change of $\nu$. The requirement that $0$ is a regular value of $\mu_\nu$ translates into the requirement that none of the $m_i-\nu$ equals zero. \subsection{The cohomology ring} Recall that our task is to compute the kernel of the ring epimorphism $\bar{r}: H_G(M) \rightarrow H(X)$, using Poincar\'e duality on $X$ and the integration formula. We will first determine $H_G(M)$ for our case. \begin{prop} The algebra homomorphism $h:{\bf R}[\phi,\tau] \rightarrow {\bf R}[\phi] \otimes \Omega({\bf CP}^n)$, which sends $\phi$ to $\phi \otimes 1$ and $\tau$ to $1 \otimes \sigma - \phi \otimes \mu$, induces a surjective algebra homomorphism $\bar{h}:{\bf R}[\phi,\tau] \rightarrow H_{S^1}({\bf CP}^n)$. \end{prop} {\bf Proof}. From theorem \ref{thmKi} and the fact that $H({\bf CP}^n)$ is generated by the class of the symplectic form $\sigma$, it follows that, as a vector space, \begin{equation}\label{33} H_{S^1}({\bf CP}^n) \simeq {\bf R}[\phi] \otimes {\bf R}[\sigma]/\sigma^{n+1} \end{equation} where $\phi$ and $\sigma$ both have degree two. Since this isomorphism comes from the degeneration of a spectral sequence, we can carry out the zig-zag construction ([BT]) and obtain that $H_{S^1}({\bf CP}^n)$ is generated by two elements, namely $\phi$ and $\tau=1 \otimes \sigma - \phi \otimes \mu$. \vspace{5pt} {\bf Remark} 4.4) The kernel of $\bar{h}$ is equal to the ideal generated by $\prod_{P \in F} (\tau + \mu(P) \phi)$. Thus we obtain the following isomorphism of algebras \begin{equation} H_{S^1}({\bf CP}^n) \simeq {\bf R}[\phi,\tau]/(\prod_{P \in F} (\tau +\mu(P) \phi)) \end{equation} The following arguments will prove this statement. From (\ref{33}) we know $H_{S^1}({\bf CP}^n)$ as a graded vector space over ${\bf R}$. If $l \leq n$, then dim$H_{S^1}^{2l}({\bf CP}^n)=l+1$ and it is spanned by $\phi^{l-i} \tau^i$ ($i=0, \ldots ,l$). If $l \geq n$, then dim$\; H_{S^1}^{2l}({\bf CP}^n) = n+1$. So the kernel of $\bar{h}$ is a principal ideal generated by a polynomial of degree $n+1$. To determine this polynomial note that the restriction $H_{S^1}^{2n+2}({\bf CP}^n) \rightarrow H_{S^1}^{2n+2}(F) \simeq {\bf R} \phi^{n+1} \otimes {\bf R}^F$ is surjective (due to the fact that all $\mu(P_i), \; P_i \in F$ are different; see also the next section), hence injective. Furthermore, $\prod_{P \in F} (\tau + \mu(P) \phi)$ vanishes, when restricted to $F$, from which the result follows. \vspace{5pt} Composing $\bar{h}$ with $\bar{r}$ we obtain a surjective homomorphism of rings \begin{equation} \bar{r} \circ \bar{h} : {\bf R}[\phi,\tau] \rightarrow H(X) \end{equation} Thus we see that $H(X)$ is generated by the reduced symplectic form $\sigma_\nu$ and the curvature d$\theta$ of the circle bundle $Z \rightarrow X$. This follows from (\ref{rmap}) and the remark after (\ref{16}). We shall determine the kernel $I \subset {\bf R}[\phi,\tau]$ of this map. Certainly, $I$ is a non-zero ideal because $H(X)$ has no non-zero elements of degree greater than $n-1$. In particular, $\prod_{P \in F} (\tau +\mu(P) \phi) \in I$, which also follows from remark 4.4. In fact, $I$ is generated by two factors of this polynomial. Let $k$ be the number of fixed points $P$ at which $\mu(P)>\nu$. Define \[ p_k= \prod_{\mu(P)>\nu} (\tau + \mu(P) \phi), \hspace{20pt} q_{n-k+1}= \prod_{\mu(P)<\nu} (\tau + \mu(P) \phi).
\] The following theorem gives the cohomology ring of the symplectic quotient $X$. A proof of this theorem will be given later on. \begin{thm}\label{thm5.1} $I \subset {\bf R}[\phi,\tau]$ is the ideal generated by $p_k$ and $q_{n-k+1}$, so we have the following isomorphism of rings \begin{equation}\label{cohring} H(X) \simeq {\bf R}[\phi,\tau]/I \end{equation} Explicitly, $H(X)$ is generated by the image of $\phi$, which is the curvature class $c=[{\rm d}\theta] \in H^2(X)$ of the circle bundle $\mu^{-1}(\nu) \rightarrow X$, and the image of $\tau$, which is equal to $[\sigma_\nu] - \nu c$, where $\sigma_\nu$ denotes the symplectic form of the reduced phase space. $I$ is the ideal of relations between these generators. \end{thm} {\bf Remark} 4.5) The generators $\phi$ and $\tau$ in fact map to rational classes on $X$, so ${\bf Q}[\phi, \tau] \rightarrow H(X;{\bf Q})$ is well defined and surjective. This follows from the following considerations. $X$ is the quotient of a $T^2=S^1 \times S^1$ action on the subspace $Y$ of ${\bf C}^{n+1}$ given by the equations $\sum \bar{z}_i z_i =1$ and $\sum m_i \; \bar{z}_i z_i=\nu$. The first $S^1$ acts linearly on ${\bf C}^{n+1}$ with all weights equal to 1 and the second $S^1$ acts with weights $m_i$. Let us give their generating vector fields the names $V'$ and $V$, respectively. We will construct a connection for this $T^2$-action such that the curvature components on $X$ are the images of $\phi$ and $\tau$, hence rational. Let $\theta$ be the connection of section four but now seen as a basic element on $Y$ with respect to the first circle action. Let $\theta '$ be the connection on $S^{2n+1}$ whose curvature gives rise to the symplectic form on ${\bf CP}^n$, $\theta ' = \frac{1}{2\pi} \sum {\rm Im} (\bar{z}_i {\rm d} z_i)$. Then, on $Y$, $\iota_V \theta' = \nu$, so the connection components are $\theta$ and $\theta' - \nu \theta$ with corresponding curvature components ${\rm d}\theta$ and $\sigma - \nu {\rm d} \theta$, precisely the differential forms to which $\phi$ and $\tau$ are mapped. \subsection{Bilinear forms} Let $\alpha=\sum \alpha_j \phi^j$ and $\beta=\sum \beta_j \phi^j$ be equivariant closed forms of degrees $2l$ and $2n-2l-2$, respectively. For convenience, we shall use the same symbols for the cohomology classes they represent. Let $k$ be the number of fixed points $P_i$ for which $\mu(P_i) > \nu$. We may assume that $2k \leq n+1$, otherwise we take the other fixed points. Our fixed point formula gives \begin{equation}\label{blf} \int_X r(\alpha) \wedge r(\beta) = \sum_{i=1}^k c_i \; \alpha^{(0)}(P_i) \beta^{(0)}(P_i) \end{equation} for certain non-zero real coefficients $c_i$. This is a bilinear form on $H_G^{2l}(M) \times H^{2n-2l-2}_G(M)$ representing integration on $X$. If $\int_X r(\alpha) \wedge r(\beta)=0 $ for all $\beta \in H_G^{2n-2l-2}(M)$, then $\alpha$ is in the ideal $I$ (see also (\ref{pd})). Since $I$ is non-zero, this form must be degenerate. Note that the rhs of (\ref{blf}) can also be viewed as a diagonal bilinear form on ${\bf R}^k \times {\bf R}^k$. As such, it is non-degenerate because the $c_i$ are non-zero. We will use this to compute $I$. For each $l<n$ define the map \begin{equation} \epsilon_l : H^{2l}_G(M) \rightarrow {\bf R}^k, \; \; \alpha \mapsto (\alpha^{(0)}(P_1), \ldots , \alpha^{(0)}(P_k)) \end{equation} It is precisely the images of this map that appear on the rhs of (\ref{blf}).
Since $H^{2l}_G(M)$ is spanned by $\phi^{l-i} \tau^{i}$ ($i=0, \ldots, l$) as a vector space, it is easy to see what the image of $\epsilon_l$ is. If we define $\mu_i := -\mu(P_i)$ ($i=1, \ldots ,k$), then \begin{equation}\label{vektortjes} \epsilon_l (H^{2l}_G(M)) = \; < \left( \begin{array}{ccc} 1\\ \vdots \\ 1 \end{array} \right) , \left( \begin{array}{ccc} \mu_1 \\ \vdots \\ \mu_k \end{array} \right) , \ldots , \left( \begin{array}{ccc} \mu_1^l \\ \vdots \\ \mu_k^l \end{array} \right) > \end{equation} Recall that the $\mu_i$ are just a subset of the $-m_i$ and that these are all different. Therefore, as long as $l \leq k-1$ the image is exactly $(l+1)$-dimensional, so $\epsilon_l$ is injective for $l<k$. Furthermore, $\epsilon_l$ is surjective for $l \geq k-1$. The following proposition is now obvious. \begin{prop}\label{52} Let $\alpha$ be a polynomial in $\phi$ and $\tau$, homogeneous of degree $l$; then $\alpha \in I$ iff it is in the orthocomplement of $\epsilon_{n-l-1} (H_G^{2n-2l-2}(M)) \subset {\bf R}^k$ for the bilinear form diag($c_1, \ldots ,c_k$) on ${\bf R}^k$. \end{prop} \subsection{Computing the ideal} In this section we will give a proof of theorem \ref{thm5.1} by computing the ideal $I$. Before we compute the degeneracy of (\ref{blf}), note that if $\epsilon_l$ has a kernel then this kernel is also in $I$. In fact, there is a kernel and we shall describe it now. The map $\epsilon_l$ is given by \begin{equation} \sum_{i=0}^l a_i \; \phi^{l-i} \tau^i \mapsto ( \sum a_i \mu_1^i, \ldots , \sum a_i \mu_k^i) \end{equation} the $a_i$ being real coefficients. As one sees from this expression, elements in the kernel come from polynomials $\sum a_i x^i$ for which all the $\mu_i$ are zeroes. Thus we have proved \begin{prop}\label{5.1} For $l<k$ the maps $\epsilon_l$ are injective. For $l \geq k-1$ they are surjective and if $l \geq k$ the kernel of $\epsilon_l$ consists of all polynomials in $\phi$ and $\tau$, homogeneous of degree $l$, which are divisible by $p_k$, where $p_k$ is the polynomial $\prod_j (\tau - \mu_j \phi)$ of degree $k$. \end{prop} This implies that the ideal generated by this polynomial is a subideal of $I$. Moreover, it expresses $\tau^k$ in lower powers of $\tau$ and therefore already gives a bound, equal to $k$, on the Betti numbers of $X$. To obtain the other polynomials in $I$ we proceed as follows. For each $l$, we know the image of $\epsilon_{n-l-1}$. If $n-l-1 \geq k-1$ then $\epsilon_{n-l-1}$ is surjective, so from proposition \ref{52} we derive that there are no $\alpha \in I$ of degree $l \leq n-k$. Since $k \leq \frac{n+1}{2}$, $n-k \geq \frac{n-1}{2}$, which is half of the dimension of $X$. This implies that all polynomials in $I$ have at least degree $k$ and therefore $b_l(X)=l+1$ for all $l<k$. Furthermore, for $k \leq l \leq n-k$, the only relations are those coming from ker($\epsilon_k$). So, for these $l$, $b_l(X)=k$. For $l>n-k$, the $b_l(X)$ equal the $b_{n-l-1}(X)$ due to Poincar\'e duality. Thus, they decrease by one if $l$ increases by one. The next proposition finishes the proof of theorem \ref{thm5.1}. \begin{prop} Let $\nu_i$ denote the $n-k+1$ values $\mu(P_i)$, where $P_i$ are critical points at which $\mu(P_i) < \nu$. Then the polynomial $q_{n-k+1} = \prod_j (\tau + \nu_j \phi)$ is in $I$. Moreover, the ideal $I$ is generated by $p_k$ and $q_{n-k+1}$. \end{prop} {\bf Proof}. If we had taken a summation over the fixed points $P_i$ for which $\mu(P_i)<\nu$, then we would have found $q_{n-k+1}$ instead of $p_k$ to be in $I$.
This proves that it is in $I$. Furthermore, the two polynomials have no common factors, as all the $\mu(P)$, $P \in F$, are different. This implies that the subideal generated by these two polynomials gives mutually dependent relations on the generators only in degree greater than $(n-k+1) + k=n+1$. So there is no need for more generators. \vspace{10pt} {\bf Remarks} 4.6) For non-isolated fixed points it happens that the cohomology ring is given by the same formula (\ref{cohring}). E.g., if $l$ of the $m_i$ are equal, there is a ${\bf CP}^{l-1}$ fixed point manifold. The momentum map $\mu_\nu$ is constant on this submanifold and instead of $l$ different $\mu(P_i)$ we have to take $l$ times the same value, so that the total number of $\mu_i$ and $\nu_i$ remains $n+1$. However, the proof in this case is somewhat different. First of all the formula (\ref{blf}) involves higher cohomology classes, namely all possible classes on the fixed point manifolds. This implies that the associated bilinear form, using (\ref{genfpf}) rather than (\ref{blf}), is no longer diagonal but consists of triangular blocks of dimension equal to the cohomological dimension of the fixed point manifold (in this case its complex dimension plus one). Furthermore, in the maps $\epsilon_i$ there will appear not only constants, but also cohomology classes. With these changes the proof goes through entirely for the general case. 4.7) The symplectic quotients studied here are toric varieties. The cohomology rings of these spaces have been computed by Danilov ([Da]). This provides a different description, obtained by different methods, of the same cohomology rings. 4.8) Very recently, Lisa Jeffrey and Frances Kirwan came up with a fixed point formula for general Lie groups ([JK]). Although inspired by Witten ([W3]), they use tough Fourier analysis to prove their formula. \section*{Acknowledgements} I am grateful to a lot of people for discussions and helpful comments. I would especially like to thank Hans Duistermaat and Peter Braam for their great help during the last four years. Furthermore, I would like to thank Raymond Stora, Gijs Tuynman, Jim Stasheff and Takashi Kimura for useful visits to their departments and Sophie Chemla for the work we did together. \chapter*{References} \begin{description} \item[[AB]] M. Atiyah, R. Bott, The moment map and equivariant cohomology, Topology 23 (1984) 1 \item[[AJ]] M. Atiyah, L.C. Jeffrey, Topological Lagrangians and cohomology, J. Geom. Phys. 7, no. 1 (1991) 119 \item[[AM]] R. Abraham, J.E. Marsden, Foundations of Mechanics, Addison Wesley, New York 1987 \item[[BGV]] N. Berline, E. Getzler, M. Vergne, Heat Kernels and Dirac Operators, Springer Verlag 1991 \item[[BRS]] C. Becchi, A. Rouet, R. Stora, Renormalization of the abelian Higgs-Kibble model, Commun. Math. Phys. 42 (1975) 127 \item[[BS]] L. Baulieu, I. Singer, Topological Yang-Mills symmetry, Nucl. Phys. (proc. suppl.) 5B (1988) 223 \item[[BT]] R. Bott, L. Tu, Differential Forms in Algebraic Topology, GTM 82, Springer Verlag 1982 \item[[BV]] I.A. Batalin, G.A. Vilkovisky, Relativistic S-matrix of dynamical systems with boson and fermion constraints, Phys. Lett. 69B (1977) 309 \item[[Ca]] H. Cartan, Transgression dans un groupe de Lie et dans un espace fibr\'e principal, Colloque de Topologie CBRM Bruxelles (1950) 57-71 \item[[CK]] S. Chemla, J. Kalkman, BRST cohomology for certain reducible topological symmetries, to appear in Commun. Math. Phys. \item[[Da]] V.I. Danilov, The geometry of toric varieties, Russ. Math.
Surveys 33 (1978) 97 \item[[DH]] J.J. Duistermaat, G.J. Heckman, On the variation in the cohomology of the symplectic form of the reduced phase space, Invent. Math. 69 (1982) 259; Addendum, Inv. Math. 72 (1983) 153 \item[[FF]] E.S. Fradkin, T.E. Fradkina, Quantization of relativistic systems with boson and fermion first and second class constraints, Phys. Lett. 72B (1978) 343 \item[[FHST]] J. Fisch, M. Henneaux, J. Stasheff, C. Teitelboim, Existence, uniqueness and cohomology of the classical BRST charge with ghosts of ghosts, Commun. Math. Phys. 120 (1989) 379 \item[[Gi]] V.A. Ginzburg, Equivariant cohomologies and K\"ahler geometry, Funct. Anal. and its Appl. 21 (1987) 271 \item[[He]] M. Henneaux, Hamiltonian form of the path integral for theories with a gauge freedom, Phys. Rep. 126 (1985), no. 1 \item[[HT]] M. Henneaux, C. Teitelboim, BRST cohomology in classical mechanics, Commun. Math. Phys. 115 (1988) 213 \item[[Hu]] D. Husemoller, Fibre Bundles, GTM 20, Springer Verlag \item[[JK]] L.C. Jeffrey, F.C. Kirwan, Localization for non-abelian group actions, alg-geom/9307001 \item[[Ka1]] J. Kalkman, BRST model for equivariant cohomology and representatives for the equivariant Thom class, Commun. Math. Phys. 153 (1993) 447 \item[[Ka2]] J. Kalkman, Cohomology rings of symplectic quotients, preprint 795, Math. Institute, University of Utrecht \item[[Ki]] F.C. Kirwan, Cohomology of Quotients in Symplectic and Algebraic Geometry, Math. Notes Vol. 31, Princeton University Press 1984 \item[[KS]] B. Kostant, S. Sternberg, Symplectic reduction, BRS cohomology and infinite dimensional Clifford algebras, Ann. of Phys. 176 (1987) 49 \item[[MQ]] V. Mathai, D. Quillen, Thom classes, superconnections and equivariant differential forms, Topology 25 (1986) 85 \item[[OSvB]] S. Ouvry, R. Stora and P. van Baal, Algebraic characterization of TYM, Phys. Lett. 220B (1989) 1590 \item[[Sa]] I. Satake, On the generalization of the notion of manifold, Proc. Nat. Acad. Sci. USA 42 (1956) 359 \item[[St]] J.D. Stasheff, Homological (ghost) approach to constrained Hamiltonian systems, Contemp. Math. 132 (1992) 595 \item[[W1]] E. Witten, Topological quantum field theory, Commun. Math. Phys. 117 (1988) 353 \item[[W2]] E. Witten, Introduction to cohomological field theories, Int. J. Mod. Phys. A6 (1991) 2775 \item[[W3]] E. Witten, Two dimensional gauge theories revisited, J. Geom. Phys. 9 (1992) 303 \item[[dW]] B. de Witt, Supermanifolds, Cambridge University Press, Second Edition, 1992 \item[[Wu]] S. Wu, An integration formula for the square of moment maps of circle actions, hep-th/9212071 \end{description} \end{document}
\section*{\underline{\LARGE PART I}} \section*{I.0. Introduction} Dear participants, on behalf of Professor Anastasios Mallios, the Algebra and Geometry Section of the Mathematics Department of the University of Athens, the European Commission (principal sponsors) and Qualco (private partial sponsors), I wish to welcome you to the 1st {\itshape Glafka--2004: `Iconoclastic' Approaches to Quantum Gravity} theoretical physics conference. \paragraph{An `iconoclast' according to the lexicon.} According to {\itshape Webster's Encyclopedic Unabridged Dictionary of the English Language}, an `{\em iconoclast}' ({\sl I kon$^{'}$a klast$^{'}$}, noun) is: \begin{enumerate} \item a breaker or destroyer of images, especially those set up for religious veneration, and/or \item one who attacks cherished beliefs, traditional institutions, {\it etc.}, as being based on error or superstition. \end{enumerate} \noindent Historically, in Byzantium (723--843 AD), `{\em iconoclasm}' ({\it alias}, `iconomachy') was the polemic movement against `{\em iconolatry}'---the worshipping of Christian icons (predominantly in churches).\footnote{In retrospect, I think I personally would have taken sides with the iconolaters instead of the iconoclasts after having visited the beautiful Byzantine Period section of the Benaki National Heritage museum last night.} The three scientists from past times that immediately spring to mind as `scientific iconoclasts' are Galileo Galilei, Charles Darwin and Albert Einstein. The latter revolutionized our ideas of space, time, matter, energy, and their dynamical intertransmutations. In view of some challenges presented by Quantum Gravity (QG), we may have to further revolutionize Einstein's ideas and thus further `{\em dissect the iconoclast}'. \paragraph{The twilight of the Quantum Gravity idol.} What is `{\em the icon}' in our case?: {\em Quantum Gravity} (QG)---arguably, the `Holy Grail' of theoretical physics at the dawn of the new millennium. However, there is no quantum theory of gravity to begin with---anyway, not a conceptually sound, mathematically consistent and `calculationally' finite one. In a nutshell, {\em there is no QG icon to destroy in the first place!} Hence, is our gathering here today `futile', actually `begging the question' and, ultimately, `begging the quest' for the icon? Certainly, however, there is a plethora of views and approaches to QG, so that a `mosaic', `patchwork' sort of picture of QG (with glaringly conflicting ideas at times!) has emerged over the last 30+ years of research, but there is no unanimous agreement on what QG is, or anyway, what it ought to be. By the way, theoretical physicists, unlike religious thinkers and preachers, are particularly bad when talking about `teleological' and `normative' aspects of their science, and that's a good thing in my opinion, as it reflects that they are, in a Socratic sense, not sure/certain about their knowledge---they have no rigid convictions that they cannot readily revise or even shed. In scientific research, uncertainty about a subject is a virtue, not a blemish. It is sort of liberating not to know, for it invites a wandering imagination and a way of looking at the World afresh.\footnote{See the prologue and epilogue of part II.} \section*{I.1.
`First-Order' Iconoclasm} Thus in our case, `iconoclasm'---at least what I call here `{\em 1st-order iconoclasm}'---pertains to challenging standard or well established conceptions about and approaches to QG, as well as proposing alternative ones that are not `mainstream' or `fashionable' as it were. The way I see it, the pentaptych of (not mutually independent) qualities of the theoretical-physics iconoclast is the following (not in order of import or importance to her research endeavors and quests): \begin{enumerate} \item {\em Imagination} (contra knowledge; ``{\em Imagination is more important than knowledge}'' (Einstein)---the Glafka motto\footnote{See Glafka poster.}), \item {\em `Riskability'} ({\it ie}, being able to take risks: `nothing ventured, nothing gained'---one of Chris Isham's favorite sayings. Also Wolfgang Pauli: ``{\em Only he who risks has a chance of succeeding}''\footnote{See also Richard Feynman's quotation below.}), \item {\em Obstinacy, perseverance} and {\em `pigheadedness'} (``{\em what do you care what other people think?}''---Feynman), \item {\em `Fearlessness'} (especially with regard to making mistakes and putting one's ideas to the theoretical test and criticism; Anastasios Mallios), \item {\em `Authoritilessness'} (question fairly well established ideas, concepts and practices---take nothing for granted, as a necessary given; see Einstein quotation below). \end{enumerate} \noindent Feynman's words about QG research below, taken from his Nobel Prize address, epitomize the second virtue of `iconoclasm' I wanted to highlight for you today: \begin{quotation} \noindent ``...{\small It is important that we don't all follow the same fashion. We must increase the amount of variety and the only way to do this is to implore you few guys, to take a risk with your own lives so that you will never be heard of again, and go off to the wild blue yonder to see if you can figure it out...}'' \end{quotation} \noindent Einstein's words bring out the fifth virtue of `iconoclasm' I wanted to highlight for you today: \begin{quotation} \noindent ``...{\small Concepts which have proved useful for ordering things easily assume so great an authority over us, that we forget their terrestrial origin and accept them as unalterable facts. They then become labelled as `conceptual necessities', `a priori situations', etc. The road of scientific progress is frequently blocked for long periods by such errors. It is therefore not just an idle game to exercise our ability to analyze familiar concepts, and to demonstrate the conditions on which their justification and usefulness depend, and the way in which these developed, little by little...}'' \end{quotation} \noindent While, about obstinacy, perseverance and stubbornly focusing on a goal, Einstein once told Ernst Straus: \begin{quotation} \noindent ``{\small I know quite certainly that I myself have no special talent. Curiosity, obsession, and dogged endurance combined with self-criticism, have brought me to my ideas. Especially strong thinking power I do not have, or only to a modest degree. Many have far more of this than I without producing anything surprising...}'' \end{quotation} \noindent In this respect, Ernst Straus also relates the following anecdote about Albert Einstein that I think you will find at least amusing: \begin{quotation} \noindent ``{\small We had finished the preparation of a paper and we were looking for a paper clip. After opening a lot of drawers we finally found one which turned out to be too badly bent for use.
So we were looking for a tool to straighten it. Opening a lot more drawers, we came upon a box of unused paper clips; Einstein immediately started to shape one of them into a tool to straighten the bent one. When I asked him what he was doing, he said:} `{\em When I am set on a goal, it becomes very difficult to deflect me}'.'' \end{quotation} At the same time, I think it is important that the iconoclast does not forget that she is {\em standing on the shoulders of giants} (Isaac Newton), while nevertheless standing on her own two feet...which brings me to what I think of as `{\em 2nd-order iconoclasm}'. \section*{I.2. `2nd-Order' Iconoclasm} Iconoclasts gather together to tear down each other's icons---their theories and general `{\it Weltaufbau und Weltanschauung}'---as we have gathered here today. Of course, the idea is to pick up each other's pieces and synthesize {\em the} QG icon. For, \vskip 0.1in \centerline{\em iconoclasts should not just be `pure deconstructionists'.} \vskip 0.05in \noindent One feels that we ought to find common ground---as it were, a common denominator---in our apparently diverse, but supposedly fundamental and unifying, conceptions of Nature's depths. We should all have faith in the unity of Physis---after all, we refer to the World as a Cosmos/$K\acute{o}\sigma\mu o\varsigma$, not a Chaos/$X\acute{\alpha}o\varsigma$---but we should also respect and appreciate each other's differences. As Philip W. Anderson famously put it: ``{\em More is different}''. We should search for unity in Nature's cherished diversity\footnote{Again, see the prologue and epilogue of part II.}...which brings me to the most radical iconoclasm, that of the `3rd kind'. \section*{I.3. `Third-Order' Iconoclasm} Here is the paradoxical question:\footnote{In analogy to the logical paradox: `{\em Who shaves the barber?}'.} \vskip 0.1in \centerline{\em Who cuts the QG iconoclast?\footnote{In Greek, an `{\em iconoclast}' (:`$\epsilon\iota\kappa o\nu o\kappa\lambda\acute{\alpha}\sigma\tau\eta\varsigma$') is (s)he who `{\em cuts icons}' (:`$\kappa\lambda\acute{\alpha}\zeta\epsilon\iota~\epsilon\iota\kappa \acute{o}\nu\epsilon\varsigma$').}} \vskip 0.05in \noindent Of course, it is important that my gross {\em idealization} of the 1st and 2nd-order QG iconoclast above---especially in view of a not `well defined', let alone unanimously agreed on, project for a QG theory-construction---does not turn into an {\em idolization}; for ideally, \vskip 0.05in \centerline{\em a genuine iconoclast should tear down all idols, including (and especially!) her own}. Thus, to pay my respects to the possibility that {\em we might be chasing a QG chimera after all}, here is a telling quotation from David Finkelstein---from an early (:May 1993) pre-print version of his book `{\itshape Quantum Relativity: A Synthesis of the Ideas of Einstein and Heisenberg}' (Springer-Verlag, 1996)---capturing what I coin the (most `radical') {\em `3rd-order' iconoclasm} of the elusive QG theory {\em itself}: \vskip 0.1in \centerline{\underline{\bf The Saviors of Physical Law}\footnote{This, in a metaphorical sense, `post-anticipates' Nikos Kazantzakis' `{\em Salvatores Dei}', excerpts of which we shall encounter in the sequel.}} \begin{quotation} \noindent ``...{\small\em What are we after as physicists? Once I would have said, the laws of nature; then, the law of nature. 
Now I wonder.}\footnote{Our emphasis.} {\small A law, or to speak more comprehensively, a theory, in the ordinary sense of the word, even a quantum theory of the kind studied today by almost all quantum physicists, is itself not a quantum object. We are supposed to be able to know the theory completely, even if it is a theory about quanta. Its symbols and rules of inference are supposed to be essentially non-quantum. For example, ordinary quantum theory assumes that we can know the form of the equations obeyed by quantum variables exactly, even though we cannot know all the variables exactly. This is considered consistent with the indeterminacies of quantum theory, because the theory itself is assumed to sum up conclusions from arbitrarily many experiments. Nevertheless, since we expect that all is quantum, we cannot consistently expect such a theory to exist except as an approximation to a more quantum conception of a theory. At present we have non-quantum theories of quantum entities. Ultimately the theory too must reveal its variable nature. For example, the notion that an experiment can be repeated infinitely often is as implausible as the notion that it can be done infinitely quickly ($c=\infty$), or infinitely gently ($\hbar=0$). It is common to include in the Hamiltonian of (say) an electron a magnetic field that is treated as a non-quantum constant, expressing the action of electric currents in a coil that is not part of the endosystem but the exosystem. Such fields are called external fields. Upon closer inspection, it is understood, the external field resolves into a host of couplings between the original electron and those in the coil system, now part of the endosystem. {\em It seems likely that the entire Hamiltonian ultimately has the same status that we already give the external field. No element of it can resist resolution into further quantum variables. In pre-quantum physics the ideal of a final theory is closely connected with that of a final observer, who sees everything and does nothing. The ideal of a final theory seems absurd in a theory that has no final observer. When we renounce the ideal of a theory as a non-quantum object, what remains is a theory that is itself a quantum object. Indeed, from an experimental point of view, the usual equations that define a theory have no meaning by themselves, but only as information-storing elements of a larger system of users, as much part of the human race as our chromosomes, but responding more quickly to the environment. The fully quantum theory lies somewhere within the theorizing activity of the human race itself, or the subspecies of physicists, regarded as a quantum system. If this is indeed a quantum entity, then the goal of knowing it completely is a Cartesian fantasy, and at a certain stage in our development we will cease to be law-seekers and become law-makers. It is not clear what happens to the concept of a correct theory when we abandon the notion that it is a faithful picture of nature. 
Presumably, just as the theory is an aspect of our collective life, its truth is an aspect of the quality of our life}\footnote{Again, our emphasis throughout.}...}'' \end{quotation} \noindent The {\em `law-making'}, as opposed to the (merely) {\em `law-seeking'}, imperative (of what is here coined `3rd-order iconoclasm') in the Finkelstein quotation above recalls Nikos Kazantzakis' concluding words---as it were, the distillation and r\'esum\'e of his spiritual credo---in his `swan-song' of a book {\itshape `Salvatores Dei (The Saviors of God): Spiritual Exercises'}:\footnote{Translated by Kimon Friar (a Touchstone Book, Simon \& Schuster Publishers, 1960).} \begin{quotation} ``{\small ...1. Blessed be all those who hear and rush to free you, Lord, and who say: {\em `Only You and I exist.'} \vskip 0.1in 2. Blessed too be all those who free you and become united with you, Lord, and who say: {\em `You and I are One.'} \vskip 0.1in 3. And thrice blessed be those who bear on their shoulders and do not buckle under this great, sublime, and terrifying secret: {\em `That even this One does not exist'}...}'' \end{quotation} \noindent And with these `agnostic' (but not necessarily pessimistic!) and `mystical' remarks, I wish you all wholeheartedly: \vskip 0.2in \centerline{\underline{\large\bf Enjoy a mystifying Glafka!}} \vskip 0.2in \section*{I.4. Hegelian Postscript: The Owl of Minerva} And just when you thought it was all over, I would like to close this opening talk with a `{\em post-anticipation}' of the deeper significance of Glafka, inspired by a recent e-mail exchange with Rafael Sorkin. First, I would like to quote Peter Singer---the famous `bioethicist'---from his Princeton homepage:\footnote{http://www.petersingerlinks.com/minerva.htm} \begin{quotation} \noindent ``{\small ...Minerva, the Roman goddess of wisdom, was the equivalent of the Greek goddess Athena.\footnote{The patron goddess of Athens.} She was associated with the owl, traditionally regarded as wise, and hence a metaphor for {\em philosophy}. Hegel wrote, in the preface to his {\itshape Philosophy of Right}: `{\em The owl of Minerva spreads its wings only with the falling of the dusk}'. He meant that philosophy understands reality only after the event. It cannot prescribe how the world ought to be...}'' \end{quotation} \noindent Rafael shared with me Balachandran's (:his celebrated physicist colleague at Syracuse University) interpretation of Hegel's owl (which I personally prefer to Singer's strictly `{\em after-the-fact}' one), according to which: \begin{quotation} \noindent ``{\small ...Minerva's owl is spreading its wings at dusk (or something to that effect), the meaning reputedly being that only when an event or development is near its end does its significance become clear...}'' \end{quotation} \noindent Regarding our Glafka gathering here, it's good that there's still another 3 days, plus 10 hours or so, till dusk falls on the last day of the meeting... \vskip 0.2in \centerline{$<\bullet>$~$<\bullet>$~$<\bullet>$~$<\bullet>$~$<\bullet>$~$<\bullet>$~$<\bullet>$~$<\bullet>$} \vskip 0.1in \section*{\underline{\LARGE \bf PART II}} \section*{II.0. Prologue: General Motivational Remarks} Quantum Gravity (QG) has as many facets as there are approaches to it. There is no unanimous agreement on what QG `really' is---what its central questions are, what its main aims and basic problems are, or what ought to be ultimately resolved; hence the current `zoo' of approaches to it. 
There certainly is overlap between the concepts, the mathematical techniques and the basic aims of the various approaches, but the very fact that there are so many different routes to such a supposedly fundamental quest betrays our ignorance more than our resourcefulness about what QG `truly' stands for, or at least about how it should be `properly' addressed and approached. {\it Prima facie}, the danger that goes hand in hand with the said proliferation of approaches to QG observed lately is that the {\it aufbau} of such a theory may eventually degenerate into the erection of some kind of Tower of Babel, where workers on each individual approach, just by virtue of the sheer number of different, simultaneously developing schemes (with the concomitant development of `idiosyncratic' conceptual and technical jargon, as well as of approach-specific mathematical techniques), may find it difficult to communicate with each other. As a result, like the mutually isolated finch populations of the Galapagos islands that Charles Darwin came across, the various approaches may eventually cease to be able to cross-breed and the workers will become `alienated' from each other---{\it ie}, they will not be able to communicate, let alone fruitfully interact, or check and cross-fertilize each other's ideas and results. Thus, the QG vision will inevitably become disoriented and fragmented; and what's worse, perhaps irreversibly so. It will then be hard to believe that all these different workers and their ventures do indeed have a common goal (:QG), even if they nominally say so ({\it eg}, in conferences!). Of course, there is that general feeling, ever since the inception and advent of General Relativity (GR) and subsequently of Quantum Mechanics (QM), that QG ought to be a coherent amalgamation of those two pillar theories of 20th century theoretical physics. Perhaps one of the two theories (or even both!) may have to undergo significant modifications in order for QG to emerge as a consistent `unison-by-alteration' of the two. On the other hand, the gut feeling of many (if not of most) workers in the field is that, no matter how advanced and sophisticated our technical (:mathematical) machinery is, we lack the proper conceptual-physical questions that will open the Pandora's box of QG. It may well be that the fancy maths get in the way of the simple fundamental questions we need to come up with in order to crack the QG `code'. We may be rushing, primarily dazed by past successes of our mathematical panoply, to give intricate and complex mathematical answers to simple, yet profound, physical questions that have not been well posed, or even asked(!), yet. Fittingly here, Woody Allen's \vskip 0.1in \centerline{\small ``I have an answer, can somebody please tell me the question?''} \vskip 0.1in \noindent springs to mind. Time and again the history of the development of theoretical physics has taught us that, in the end, {\em Nature invariably outsmarts our maths}, no matter how sophisticated and clever they may be, while our own knowledge is not only insignificant compared to Her wisdom, but many times also sabotages the very path that we are trying to pave towards the fundamental physical questions. For, very often, (mathematical) knowledge inhibits (physical) intuition and imagination. 
Or perhaps, in a promethean sense opposite to that above, it may be that \begin{quotation} \noindent {\em we are not adventurous and `iconoclastic' enough in our theory-making enterprises, as well as in the mathematical means that we employ, to take the `necessary' risks to look at the QG problem afresh}\footnote{See part I.}---{\it eg}, by creating new theoretical concepts, new mathematical tools and techniques, as well as a novel way of philosophizing about them. \end{quotation} \noindent In keeping with the `zoological' metaphor above, \begin{quotation} \noindent so far the attempts to bring together GR and QM into a cogent ({\it ie}, a conceptually sound, mathematically consistent, as well as calculationally finite) QG seem to this author to be like {\em trying to cross a parrot with a hyena: so that it (:QG) can tell us what it is laughing about}. \end{quotation} \noindent All in all, it may well be the case that the QG riddle has been with us for well over half a century now, stubbornly resisting (re)solution and embarrassingly eluding all our sophisticated mathematical means of description, because we insist on applying and trying to marry the `old' physical concepts and maths---which, let it be appreciated here, have proven to be of great import in formulating separately the ever so successful and experimentally vindicated GR and QM---to the virtually unknown realm of QG.\footnote{This `palindromic' thesis between {\em too much} and {\em not enough} maths for QG simply reflects the mean, neutral position of ignorance, ambivalence and uncertainty of this author about these matters. See the concluding section.} The following `words of caution' by Albert Einstein \cite{einst7} are very pertinent to this discussion: \begin{quotation} \noindent ``{\small ...Concepts which have proven useful for ordering things easily assume so great an authority over us, that we forget their terrestrial origin and accept them as unalterable facts. They then become labelled as `conceptual necessities', `a priori situations', etc. The road of scientific progress is frequently blocked for long periods by such errors. It is therefore not just an idle game to exercise our ability to analyse familiar concepts, and to demonstrate the conditions on which their justification and usefulness depend, and the way in which these developed, little by little...}'' (1916) \end{quotation} In the present paper we take sides more with the second alternative above, namely, that a new theoretical/mathematical framework---one that comes equipped with new concepts and principles, and is thus potentially able to cast new light on old ones, as well as to generate new physical questions---is needed to readdress, reformulate and possibly retackle {\em afresh} certain caustic, persistently problematic issues in current QG research. The framework we have in mind is Mallios' purely algebraico-categorical (:sheaf-theoretic) Abstract Differential Geometry (ADG) \cite{mall1,mall2,mall4}, while the account that follows is a semantic, conceptual and philosophical distillation-{\it cum}-update of the results (and their related aftermath) of a series of applications of ADG to gravity\footnote{In the sequel, gravity (classical or quantum), formulated ADG-theoretically, will be coined `{\em ADG-gravity}' \cite{rap5,rap7,malrap4}.} in the past half-decade or so \cite{mall2,mall3,malros1,malros2,malros3,malrap1,malrap2,malrap3,malrap4,rap5,rap7,mall9,mall7,mall11,mall10}. 
Further details about formal-technical (:mathematical) terms and results are left to those original papers. After this introduction, the paper unfolds in three sections, as follows: in the next section we give a brief {\it r\'esum\'e} of the principal didactics of ADG, as well as of its basic physical concepts, semantics and hermeneutics. The section that follows it addresses certain important current classical and quantum gravity issues through the prism of the background spacetime manifoldless ADG, and ends with a brief discussion of current and near-future developments of the theory along topos and more general category-theoretic lines. The paper closes by continuing the way it started; {\it ie}, by making general remarks on the significance and import of a new mathematical-theoretical framework (such as ADG) in current and future QG research. \section*{II.1. The Basic Tenets and Didactics of ADG} ADG, we have learned both from theory and from numerous applications, is a way of doing differential geometry {\em purely algebraically} (:sheaf-theoretically), without using any notion of smoothness in the usual sense of Classical Differential Geometry (CDG)\footnote{In the sequel, the names Differential Calculus (or simply Calculus) and Analysis shall be regarded as synonyms of the CDG of smooth manifolds.}---{\it ie}, without employing a base geometrical differential manifold. {\it In summa}, ADG is a Calculus-free, entirely algebraic, background manifoldless theoretical framework for differential geometry \cite{mall1,mall2,mall4}. At the basis of ADG lies the notion of a $\mathbf{K}$-{\em algebraized space} ($\mathbf{K}=\mathbf{R},\mathbf{C}$), by which one means an in principle arbitrary base topological space $X$ carrying a sheaf $\mathbf{A}$ of (commutative) $\mathbf{K}$-algebras, called the {\em structure sheaf of generalized arithmetics or coordinates}. A family $\mathcal{U}$ of open subsets $U$ of $X$ covering it is called a {\em system of local open gauges}, while our generalized local measurements (of coordinates) relative to $\mathcal{U}$ are modelled after the local sections of $\mathbf{A}$: $\mathbf{A}(U)\equiv\Gamma(\mathcal{U}\ni U,\mathbf{A})$. With $\mathbf{A}$ in hand, a {\em vector sheaf} $\mathbf{\mathcal{E}}$ of rank $n$ is a sheaf of vector spaces of dimensionality $n$ that is locally expressible as a finite power (:Whitney sum) of $\mathbf{A}$: $\mathbf{\mathcal{E}}(U)\simeq\mathbf{A}^{n}(U)$. By a {\em local gauge frame} $e^{U}$ ($\mathcal{U}\ni U\subset X$), one means an $n$-tuple $(e_{1},e_{2},\ldots,e_{n})$ of local sections of $\mathbf{\mathcal{E}}$ providing a basis for the vector spaces inhabiting its stalks. Let it be stressed here that the role of $X$ is just that of a `surrogate scaffolding', serving as a substrate for the sheaf-theoretic localization of the objects living in the stalks of the vector and algebra sheaves involved. $X$ has no physical significance, as we shall argue below. One realizes from the beginning how important $\mathbf{A}$ is in the theory. We take it almost axiomatically that \begin{quotation} \noindent {\em there is no `geometry' without measurement, and no measurement without a difference}---{\it ie}, what we measure is always differences or changes in some `measurable' quantities ({\it eg}, coordinates),\footnote{{\it En passant}, let it be stressed here that it is {\em we} the theorists who declare and determine up-front what is measurable when we build up our theories. 
In this sense, theory and observation are closely tied to each other (in Greek, `{\em theory}', {\it viz.}, `$\theta\epsilon\omega\rho\acute{\iota}\alpha$', means `a way of {\em looking} at things'). In a deep sense, we see what {\em we} want to look at (even in the mind's eye). This also recalls Einstein's advice to Heisenberg that, apart from the fact that a theory cannot be built solely on observable quantities, ``{\em it is the theory that determines what can be observed, not the other way round}'' \cite{heisenberg}. {\it In toto}, `geometry' is a creature of the theorist, since it effectively encodes mathematically, and sums up, all her observations (:`measurements'). However, as Einstein advised above, in a physical theory not all entities are `geometrical' (:`observable' or `measurable'). (See remarks in the sequel about the principal notion of connection $\mathcal{D}$ in ADG and ADG-gravity.)} the variability of which is secured in our scheme by the fact that, in the case of coordinates, $\mathbf{A}$ is a {\em sheaf}. \end{quotation} \noindent Indeed, the notion of sheaf is intimately entwined with that of localization, which physically may be thought of as the {\em act of gauging physical quantities}, and which in turn essentially promotes them to (dynamically) variable entities. The bottom line of all this is that \begin{quotation} \noindent the algebras in $\mathbf{A}$ are {\em differential} algebras---{\it ie}, they are able to provide us with some kind of differential operator, via which we then represent the said (dynamical) changes (:differences). \end{quotation} \noindent In turn, we assume that all the `observables' (:measurable dynamically variable physical quantities) in our theory can always be expressed in terms of $\mathbf{A}$ ({\it eg}, as $\otimes_{\mathbf{A}}$-tensors).\footnote{$\otimes_{\mathbf{A}}$ is the homological tensor product functor.} In a subtle sense, \begin{quotation} \noindent from the ADG-theoretic perspective {\em all differential geometry boils down to the $\mathbf{A}$ that we choose to use up-front in the theory's aufbau}. \end{quotation} \noindent Parenthetically, but in the same line of thought, we would like to respond briefly to Shiing-Shen Chern's philosophical pondering in \cite{chern}: \begin{quotation} \noindent ``{\small ...A mystery is the role of differentiation. The analytic method is most effective when the functions involved are smooth. Hence I wish to quote a philosophical question posed by Clifford Taubes:\footnote{The reference given here is \cite{taubes}.} Do humans really take derivatives? Can they tell the difference?}...'' \end{quotation} \vskip 0.1in \noindent by holding that {\em humans do indeed differentiate} (and they can `really' tell the difference!) {\em insofar as they can measure}.\footnote{To be precise, in \cite{taubes} Taubes was talking about the so-called {\em inequivalent differential structures} that a manifold can admit ({\it eg}, {\it \`a la} John Milnor). In anticipation of the basic ADG-didactics that follow below, our reply here has a slightly different sense, pertaining to Chern's mentioning that the most effective method (of differentiating) is that of Analysis, via smooth manifolds.} From the ADG-theoretic vantage, they can indeed assume different $\mathbf{A}$s, provided of course these structure sheaves of generalized arithmetics (:coordinates or measurements) furnish them with a differential operator ({\it viz}. connection) $\partial$. This discussion brings us to the central notion of ADG. 
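(For the reader who prefers symbols to prose, the basic ADG data introduced so far may be collected in one place; this is merely a schematic summary in the above notation, with none of the technical qualifications of \cite{mall1,mall2,mall4}: the primitive datum is a triplet $(X,\mathbf{A},\mathbf{\mathcal{E}})$, with $X$ an arbitrary base topological space, $\mathbf{A}$ a sheaf of commutative $\mathbf{K}$-algebras on $X$ whose local sections $\mathbf{A}(U)\equiv\Gamma(\mathcal{U}\ni U,\mathbf{A})$ model our generalized local measurements, and $\mathbf{\mathcal{E}}$ a vector sheaf of rank $n$ that is locally a finite power of the arithmetics, $\mathbf{\mathcal{E}}|_{U}\simeq\mathbf{A}^{n}|_{U}$, relative to a local gauge frame $e^{U}=(e_{1},e_{2},\ldots,e_{n})$.)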
The neuralgic concept of ADG, as befits any scheme that aspires to qualify as a theory of {\em differential} geometry proper, is that of {\em connection} $\mathcal{D}$ ({\it alias}, generalized differential $\partial$). $\partial$ (or $\mathcal{D}$) is categorically defined as a $\mathbf{K}$-linear, Leibnizian {\em sheaf morphism} between $\mathbf{A}$ (or $\mathbf{\mathcal{E}}$) and a sheaf $\mathbf{\Omega}$ of $\mathbf{A}$-modules of differential form-like entities, the ADG-analogues of the smooth differential forms encountered in CDG. The connections in ADG are fittingly coined $\mathbf{A}$-connections, since $\mathbf{A}$ is the `source' of the differential operator $\partial$ (or equivalently, $\mathbf{\mathcal{E}}\simeq_{\mathrm{loc}}\mathbf{A}^{n}$ is the `domain' of $\mathcal{D}$). In turn, by a {\em field} in ADG one refers to the pair $(\mathbf{\mathcal{E}},\mathcal{D})$, where $\mathbf{\mathcal{E}}$ is the carrier space of the connection $\mathcal{D}$.\footnote{This definition of a field may be thought of as an abstraction and generalization of Yuri Manin's definition of an electromagnetic (:Maxwell) field as a connection on a line bundle (although in ADG we do not work with fiber bundles, but with sheaves, which are more `flexible' and versatile structures).} The ADG-conception of $\partial$ and $\mathcal{D}$ is a Leibnizian ({\it ie}, relational, algebraic), not a Newtonian, one. That is, in ADG we obtain the differential (structure) from the algebraic relations (:structure) of the objects living in the stalks of the vector and algebra sheaves involved, and not from a background geometrical `space(time)' continuum (:manifold), which `cartesianly' mediates in our Calculus (ultimately, in our differential geometric calculations) in the guise of (smooth) coordinates, as in the usual CDG of manifolds. With $\partial$ and $\mathcal{D}$ in hand, we can then define the important notion of the {\em curvature} $R$ of a connection $\mathcal{D}$, an $\mathbf{A}$-metric $\rho$, torsion, and all the standard concepts and constructions of the (pseudo-)Riemannian geometry of GR; albeit, to stress it again, entirely algebraico-categorically, without using any background geometrical locally Euclidean (:manifold) space(time). $R$, like $\mathcal{D}$, is a sheaf morphism, but unlike its underlying connection, which is only a $\mathbf{K}$-morphism, it is an $\mathbf{A}$-morphism (or $\otimes_{\mathbf{A}}$-tensor). The dynamical relations (:physical laws) between the observable physical quantities noted above are then expressed differential geometrically as differential equations proper. In other words, \begin{quotation} \noindent in ADG the laws of physics are categorically expressed as {\em equations between sheaf morphisms},\footnote{From this it follows what we noted earlier, namely, that the base arbitrary topological space $X$ plays absolutely no role in the physical dynamics in our theory.} such as the curvature of the connection. \end{quotation} \noindent In ADG-gravity in particular, the vacuum Einstein equations are formulated in terms of the Ricci scalar curvature $\EuScript{R}$ of a gravitational connection $\mathcal{D}$:\footnote{This is the only displayed mathematical expression in the present paper!} \begin{equation}\label{eq1} \EuScript{R}(\mathbf{\mathcal{E}})=0 \end{equation} \noindent Perhaps the deepest observation one can make about (\ref{eq1}) above is that it is an `$\mathbf{A}$-{\em functorial}' expression. 
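(Schematically, and as a paraphrase rather than a quotation of the precise definitions in \cite{mall1,mall2}: an $\mathbf{A}$-connection is a $\mathbf{K}$-linear sheaf morphism $\mathcal{D}:\mathbf{\mathcal{E}}\rightarrow\mathbf{\mathcal{E}}\otimes_{\mathbf{A}}\mathbf{\Omega}$ obeying the Leibniz condition $\mathcal{D}(\alpha\cdot s)=\alpha\cdot\mathcal{D}(s)+s\otimes\partial(\alpha)$ for local sections $\alpha\in\mathbf{A}(U)$ and $s\in\mathbf{\mathcal{E}}(U)$. The second summand, carrying the bare differential $\partial$, is exactly what spoils the $\mathbf{A}$-linearity of $\mathcal{D}$; in the curvature $R(\mathcal{D})$ (roughly, the result of applying $\mathcal{D}$ twice, {\it ie}, of composing it with its prolongation to the $\mathbf{\Omega}$-valued sections), the Leibniz terms cancel out, which is why $R$, unlike $\mathcal{D}$, is an $\mathbf{A}$-morphism (:an $\otimes_{\mathbf{A}}$-tensor). It is on this $\mathbf{A}$-linearity of the curvature that the `$\mathbf{A}$-functoriality' of (\ref{eq1}) trades.)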
This means that the Einstein equations are expressed via the curvature of the connection (and not directly in terms of the connection itself!), the former being, as noted above, an $\mathbf{A}$-morphism (:an $\otimes_{\mathbf{A}}$-tensor). The gravitational field, in the guise of $R(\mathcal{D})$, `sees through', and is unaffected by ({\it ie}, remains `invariant' under), our generalized measurements in $\mathbf{A}$. This is a {\em categorical} description of the ADG-analogue of the Principle of General Covariance (PGC) of GR, which group-theoretically may be represented by ${\mathcal{A}}ut\mathbf{\mathcal{E}}$, as we shall note in the next section. In connection with the discussion around footnote 18 above, it is interesting to note that the principal entity in ADG-gravity, the gravitational connection $\mathcal{D}$, strictly speaking is {\em not} itself an `observable'---{\it ie}, a {\em measurable} dynamical entity in the theory---as it is {\em not} a `geometrical object' (:an $\mathbf{A}$-morphism or $\otimes_{\mathbf{A}}$-tensor). However, its curvature $R(\mathcal{D})$ is an observable, and the vacuum Einstein equations (\ref{eq1}) are expressed via it.\footnote{On this remark hinges the observation that {\em $\mathcal{D}$ is not a geometrical entity}; rather, it is an {\em algebraic} (:analytic) one. (See also Anastasios Mallios' contribution to this volume \cite{mall10}.)} The moral here, {\it vis-\`a-vis} Einstein's advice to Heisenberg in footnote 5, is that the central notion in ADG-gravity (and in ADG in general)---that of connection $\mathcal{D}$---is an `unobservable' entity, as it eludes our generalized coordinates (:measurements) in $\mathbf{A}$. In turn, on the last observation above rests our generalized Principle of Field Realism (PFR), which is closely related to our categorical version of the PGC of GR noted earlier (:$\mathbf{A}$-functoriality), and which roughly maintains that \begin{quotation} \noindent the ADG-gravitational field $\mathcal{D}$, and the field law (\ref{eq1}) that it defines differential geometrically (:as a differential equation proper), remain unaffected (the law `invariant') under our `subjective', arbitrary choices of $\mathbf{A}$. \end{quotation} \noindent Einstein's words below, taken from his `{\itshape Time, Space, and Gravitation}' article in \cite{einst10}, where he gives an account of how he arrived at the PGC of GR as `invariance of the law of gravity under arbitrary coordinate transformations', are very relevant here: \begin{quotation} \noindent ``{\small ...Must the independence of physical laws with regard to a system of coordinates be limited to systems of coordinates in uniform movement of translation with regard to one another? 
{\small\em What has nature to do with the coordinate systems that \underline{we} propose and with their motions?}\footnote{Our emphasis.} Although it may be necessary for our descriptions of nature to employ systems of coordinates that we have selected arbitrarily, the choice should not be limited in any way so far as their state of motion is concerned\footnote{Or perhaps better expressed, (the said arbitrary choice of any particular system of) coordinates should not affect in any way the dynamical equations (laws) of motion of the fields in focus.}...}'' \end{quotation} \noindent The subtle but important generalization of the PGC of GR by ADG-gravity, culminating in the PFR above, is that \begin{quotation} \noindent the field law of gravity remains unaffected (:`invariant') not only by arbitrary (:general) {\em smooth} coordinate transformations ({\it ie}, by general transformations of coordinates within the structure sheaf $\mathbf{A}\equiv\mathcal{C}^{\infty}_{M}$ chosen by the theorist/`observer'), but also by arbitrary changes of $\mathbf{A}$ itself. \end{quotation} \noindent In our work this last remark has been promoted to a principle, coined the Principle of Algebraic Relativity of Differentiability (PARD), which maintains that \begin{quotation} \noindent no matter what $\mathbf{A}$ is chosen to furnish us with, and thus to geometrically represent (in $\mathbf{\mathcal{E}}$), the gravitational field $\mathcal{D}$, the field law of gravity that the latter defines remains unaffected by that choice. \end{quotation} \noindent Thus, as a riposte to Taubes' question that Chern was quoted as asking earlier, we can now retort: {\em the ADG-gravitational connection field is indifferent to the different choices of differential algebras of generalized coordinates $\mathbf{A}$ that we employ to represent it (on $\mathbf{\mathcal{E}}$)}. For, to emulate Einstein's words above: {\em what has nature (here, the gravitational field law) to do with the $\mathbf{A}$s that we choose to geometrically represent (via $\mathbf{\mathcal{E}}$) the (inherently algebraic) gravitational field $\mathcal{D}$?} In closing this section, it must be stressed, in view of the last remarks and footnote above, that the generalized coordinates in $\mathbf{A}$, once they supply us with the differential geometric mechanism---{\it ie}, with the differential $\partial$ or the connection $\mathcal{D}$---are effectively ({\it ie}, as far as the expression of the field law of gravity is concerned) `discarded', as they have absolutely no physical significance, since the gravitational field dynamics (\ref{eq1}) `sees through' them (:it is $\mathbf{A}$-covariant, or $\mathbf{A}$-functorial). It took Einstein more than 7 years to appreciate the metrical and hence the dynamical\footnote{Since in GR {\it \`a la} Einstein, the metric $g_{\mu\nu}$ is the sole dynamical variable.} insignificance of coordinates; although the smooth base spacetime manifold (:$\mathbf{A}_{X}\equiv\mathcal{C}^{\infty}_{M}$) remains invaluable in standard GR, if anything, in order to formulate the theory differential geometrically ({\it ie}, to model the dynamics after differential equations proper) \cite{kriele}. {\it In toto}, in GR too, the Einstein equations are generally covariant since they are formulated as differential equations between smooth $\otimes_{\mathcal{C}^{\infty}_{M}}$-tensors. 
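(A concrete, if schematic, illustration of PARD, drawn from the applications cited above: choosing $\mathbf{A}\equiv\mathcal{C}^{\infty}_{M}$, the sheaf of germs of smooth functions on a manifold $M$, one recovers the CDG-machinery and with it the standard formulation of GR; choosing instead, as in the joint work with Rosinger \cite{malros1,malros2,malros3}, structure sheaves built from algebras of generalized functions hosting densely distributed singularities, one writes down the {\em same} law (\ref{eq1}), unchanged. Nothing in the dynamics hinges on which of these $\mathbf{A}$s we happen to employ; only our generalized measurements differ.)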
The subtle point here is that in the manifold and CDG-based GR, whenever a concrete calculation is made, the smooth coordinates are invoked and the background spacetime continuum provides us with a geometro-physical interpretation of the theory. That is, in GR, spacetime events and smooth spacetime intervals between them have a direct experimental meaning, as they are `quantities' to be measured (:recall that $g_{\mu\nu}$ represents both the gravitational field and the spacetime chronogeometry). By contrast, in the purely algebraic ADG-gravity there is {\it a priori} no need for a geometrical (smooth) spacetime interpretation of the theory.\footnote{This doing away with the smooth background geometrical spacetime manifold in ADG-gravity proves to be very important in both classical and quantum gravity current research, as we shall argue in the next section.} Here is a challenging question for future physical applications of ADG: \begin{quotation} \noindent {\em Can we relate the theory (:ADG-gravity) to experience directly from its purely algebraic underpinnings, without recourse to a background geometrical manifold representation and its associated spacetime interpretation?}\footnote{This author is indebted to the referee of \cite{rap5} for prompting him to ask this question with his acute remarks on the connection between ADG-gravity's doing away with coordinates and experiment.} \end{quotation} \section*{II.2. Implications of Background Spacetime Manifoldlessness} In this section we outline the main `aftermaths'---{\it ie}, the results following the application of the ADG-maths (pun intended)---of the numerous applications of the base spacetime manifoldless ADG to gravity. To spare the reader repeated referencing within the text, the citations in which all the results that follow can be found are \cite{mall2,mall3,malros1,malros2,malros3,malrap1,malrap2,malrap3,malrap4,rap5,rap7,mall9,mall7,mall11,mall10}. \paragraph{ADG-gravity as pure gauge theory of the 3rd kind.} ADG-gravity has been called a `{\em pure gauge theory of the third kind}' due to the following three characteristic features: \begin{itemize} \item First, the sole dynamical variable in ADG-gravity is the $\mathbf{A}$-connection $\mathcal{D}$. This is in contradistinction to the original second-order formalism of GR due to Einstein, in which the sole dynamical variable is the spacetime metric $g_{\mu\nu}$, whose ten components represent the gravitational potentials, and even to the more recent first-order, Palatini-type formalism due to Ashtekar, in which two gravitational variables are involved---the tetrad field $e_{\mu}$ and the spin-Lorentzian connection $\mathcal{A}$.\footnote{Let it be noted here that the smooth metric of the original 2nd-order formalism is still present `in disguise' in Ashtekar's scheme \cite{ash}, as $g_{\mu\nu}$ is effectively encoded in the {\it vierbein} $e_{\mu}$s.} Fittingly, the ADG-formulation of gravity has been called a `{\em half-order formalism}', since only half the variables (namely, only the connection) of the first-order formalism are involved. \item Second, due to the manifest absence of a background geometrical smooth spacetime manifold $M$, there is no distinction between external (:spacetime) and internal (:gauge) symmetries. 
In ADG-gravity, the group $\mathrm{Diff}(M)$ of external smooth spacetime symmetries, traditionally implementing the PGC in the manifold based and, {\it in extenso}, CDG-based GR, is replaced by ${\mathcal{A}}ut\mathbf{\mathcal{E}}$---the principal group sheaf of automorphisms of the ADG-gravitational field $(\mathbf{\mathcal{E}},\mathcal{D})$. Of course, by virtue of the local isomorphism $\mathbf{\mathcal{E}}|_{U}\simeq\mathbf{A}^{n}$, ${\mathcal{A}}ut\mathbf{\mathcal{E}}$ assumes locally the more familiar form ${\mathcal{A}}ut\mathbf{\mathcal{E}}|_{U}=\mathcal{G}\mathcal{L}(n,\mathbf{A}(U))$---the group sheaf of general (generalized) coordinate transformations. This is a Kleinian perspective on field geometry: the geometry of the field (:and concomitantly, of the law that it defines) is its automorphism group (:and concomitantly, the symmetries of the law that it defines). \item And third, from the above it follows that ADG-gravity is neither a gauge theory of the 1st kind (:global gauge symmetries, global gauge frames), nor one of the 2nd kind (:spacetime localized gauge symmetries, local gauge frames). There is no spacetime external to the ADG-gravitational field $(\mathbf{\mathcal{E}},\mathcal{D})$. The field is a dynamically autonomous entity, whose `auto-symmetries' (:`self-invariances' of the law (\ref{eq1}) that it defines) are encoded in ${\mathcal{A}}ut\mathbf{\mathcal{E}}$. This makes the ADG-gravitational field an autonomous, `{\em external spacetime unconstrained gauge system}'. As a result, in ADG-gravity there is no distinction between external (:`spacetime') and internal (:`gauge') symmetries: all symmetries are `esoteric' to the field, pure gauge ones. \end{itemize} In view of the above, the `{\em background smooth spacetime manifoldless half-order formalism}' of ADG-gravity may shed light on the outstanding problem of treating gravity as a gauge theory proper \cite{ivanenko}---a problem which is largely due to our persistently fallacious viewing of $\mathrm{Diff}(M)$ as a gauge group proper \cite{weinstein}. Given the absence of an external (:background) geometrical spacetime manifold $M$ and the autonomous conception of the gravitational field in ADG-gravity, we encounter no problems originating from $M$ and its $\mathrm{Diff}(M)$ `structure group'. On the other hand, the classical theory (GR), as well as various attempts to quantize it by retaining the base $M$ and hence the entire CDG-technology, do encounter such problems---one of them being the problem of regarding gravity as a gauge theory proper mentioned above. Let us discuss some more of them. \paragraph{The role of singularities in ADG-gravity.} The role of singularities in GR has been well known and appreciated since the times of Einstein and Schwarzschild, but it was worked out and further clarified in the celebrated works of Hawking and Penrose in the late 60s/early 70s. Briefly, singularities are thought of as {\it loci} in the spacetime continuum where some physically important quantity grows without bound and, ultimately, the Einstein gravitational equations seem to break down. Given some generic conditions, the Einstein equations appear to `predict' singularities---sites of their own destruction. This is pretty much the general aftermath of the manifold based Analysis of spacetime singularities \cite{clarke4}. 
In this Analysis (and this is the general consensus in gravitational physics), although singularities are pushed to the boundary of an otherwise regular spacetime manifold, they are regarded as being physically significant, in spite of Einstein's position to the contrary till the end of his life \cite{einst3}: \begin{quotation} \noindent ``{\small ...A field theory is not yet completely determined by the system of field equations. Should one admit the appearance of singularities?...{\small\em It is my opinion that singularities must be excluded. It does not seem reasonable to me to introduce into a continuum theory points (or lines {\it etc.}) for which the field equations do not hold}\footnote{Our emphasis.}...}'' \end{quotation} \noindent In this line of thought, however, few would doubt that the main culprit for the singularities of GR is the smooth base spacetime manifold which is {\it a priori} assumed in the theory, in the sense that every singularity is a pathology of a smooth function in $\mathcal{C}^{\infty}_{M}$---the sheaf of germs of smooth functions on $M$.\footnote{Here it is tacitly assumed that a differential manifold $M$ is nothing else but the algebra $\mathcal{C}^{\infty}(M)$ of smooth functions on it (Gel'fand duality).} Moreover, the very PGC of GR, which is mathematically implemented via $\mathrm{Diff}(M)$ as noted before, appears to come into conflict with the existence of gravitational singularities, which makes a precise definition of the latter perhaps the most problematic issue in GR \cite{geroch,clarke4}. By contrast, in the base spacetime manifoldless ADG-gravity, singularities are not thought of as breakdown points of the law of gravity, at least not in any differential geometric sense. Quite on the contrary, {\em the ADG-formulated Einstein equations are seen to hold over singularities of any kind}. This is not so much a `{\em resolution}' of singularities in the usual sense of the term, as an `{\em absorption}' of them into the ADG-gravitational field $(\mathbf{\mathcal{E}},\mathcal{D})$. That is, singularities are incorporated in $\mathbf{A}$ (thus, in effect, they are absorbed in $\mathbf{\mathcal{E}}$), in the sense that they are singularities of some functional, generalized coordinate-type of entity in the structure sheaf of generalized arithmetics that {\em we} choose in the first place to employ in the theory. The aforementioned $\mathbf{A}$-functoriality of the ADG-gravitational dynamics secures that the ADG-gravitational field `sees through' the singularities carried by $\mathbf{A}$, and the latter are in no sense breakdown {\it loci} of the differentially (:differential geometrically) represented field law of gravity as a differential equation proper, as the manifold and CDG-based analysis of spacetime singularities has hitherto made us believe \cite{clarke4}. Thus, in view of the ADG-generalized PGC and its associated PFR mentioned in the previous section, Einstein's `non-belief' in singularities can be succinctly justified in ADG-gravity as follows: \begin{quotation} \noindent What has nature (here, the physical field of gravity and the law that it defines as a differential equation) to do with coordinates (here, $\mathbf{A}$) and the singularities that they carry? If coordinates are unphysical because they do not partake in the ADG-gravitational dynamics (:$\mathbf{A}$-functoriality of (\ref{eq1})), then so are singularities, since they are inherent in $\mathbf{A}$. 
\end{quotation} Nevertheless, the general opinion nowadays is that, although gravitational singularities are a problem of classical gravity (GR) long before its quantization becomes an issue, a quantum theory of gravity should, if not remove them completely (much in the same way that quantum electrodynamics did away with the unphysical infinities in Maxwell's theory), at least show us a way towards their resolution \cite{pen5}. We thus turn to some quantum implications of the base manifoldless ADG-gravity, and to how the singularity-absorption into $\mathbf{A}$ mentioned above may come in handy. \paragraph{Towards a 3rd-quantized theory of gravity.} The ADG-theoretic outlook on gravity is field-theoretic {\it par excellence}. In fact, it is purely 3rd-gauge field-theoretic, as it employs solely the algebraic connection field and there is no external (to the field) geometrical spacetime manifold. From a geometric (pre)quantization and 2nd (:field) quantization vantage, the (local) sections of $\mathbf{\mathcal{E}}$ represent (local) quantum particle (position) states of the field.\footnote{Indeed, $\mathbf{\mathcal{E}}$ may be thought of as the associated (:representation) sheaf of the principal group sheaf ${\mathcal{A}}ut\mathbf{\mathcal{E}}$ of field automorphisms.} Moreover, these `field quanta' obey an ADG-analogue of the spin-statistics connection: extending to vector sheaves Selesnick's bundle-theoretic musings in \cite{sel}, boson states correspond to sections of {\em line} sheaves\footnote{Vector sheaves of rank $1$.}, while fermions are represented by sections of vector sheaves of rank greater than $1$. Parenthetically, it must be noted here that the said representation of (gauge and matter) particle-quanta states as sections of the corresponding $\mathbf{\mathcal{E}}$s ties in well with the aforesaid incorporation of singularities in $\mathbf{A}$ (or $\mathbf{\mathcal{E}}$), in the following sense: ever since the inception of GR, and subsequently with the advent of QM, it is well reported that Einstein, in his unitary field theory program,\footnote{Which, let it be noted here, was intended to `explain away' QM altogether.} wished to describe the particle-quanta as `{\em singularities in the field}'. Prophetically, Eddington \cite{eddington2} anticipated him: \begin{quotation} \noindent ``{\small ...It is startling to find that the whole of dynamics of material systems is contained in the law of gravitation; at first gravitation seems scarcely relevant in much of our dynamics. But there is a natural explanation. {\small\em A particle of matter is a singularity in the gravitational field},\footnote{Our emphasis.} and its mass is the pole-strength of the singularity; consequently {\small\em the laws of motion of the singularities must be contained in the field-equations},\footnote{Again, our emphasis.} just as those of electromagnetic singularities (electrons) are contained in the electromagnetic field-equations...}'' \end{quotation} \noindent By absorbing the singularities into $\mathbf{A}$, by identifying quantum-particle states with sections of $\mathbf{\mathcal{E}}$ ({\it ie}, in effect, of $\mathbf{A}$!), and by the $\mathbf{A}$-functoriality of the ADG-gravitational dynamics, we have a direct realization of Eddington's anticipation above: {\em the particle-quanta co-vary with the field-law itself}. In a strong de Broglie-Bohmian sense, the connections are the `guiding fields' of their particles: they embody them and carry them along the dynamics (:field equations) that they define. 
The upshot of all this is that, due to the external spacetime manifoldlessness of the theory, the quantum perspective on ADG-gravity has the following features: \begin{itemize} \item It may be coined 3rd-quantum field theory.\footnote{Recently, Petros Wallden brought to the attention of this author that the term `{\em third quantization}' has already been used in quantum gravity and quantum cosmology research \cite{strominger}. However, the sense in which we use this term is quite different from that.} {\it In toto}, {\em QG from the ADG-perspective is a 3rd-quantum, 3rd-gauge field theory}. \item Since the ADG-gravitational field is an external spacetime unconstrained gauge system, there is also {\it prima facie} no problem in defining (gauge invariant) observables in (vacuum) Einstein gravity \cite{torre1}, or a (physical) inner product (:physical Hilbert space); while no problem of time arises either, since $\mathrm{Diff}(M)$ is absent from the theory from the very start \cite{ish2,torre2}.\footnote{All these problems are encountered in the manifold (and CDG) based canonical approaches to QG, in which the gravitational field is viewed as a spacetime constrained gauge system and $\mathrm{Diff}(M)$ represents those so-called primary space-time constraints (:in a canonical $3+1$-split smooth spacetime manifold setting, the primary constraints are the $3$-spatial diffeos and the Hamiltonian time-diffeo, the latter resulting in the celebrated Wheeler--DeWitt equation satisfied by the physical states).} \item In a possible covariant (:path integral) quantization of ADG-gravity, the physical configuration space is the moduli space of the affine space $\mathrm{Conn}_{\mathbf{A}}(\mathbf{\mathcal{E}})$ of $\mathbf{A}$-connections, modulo the field's gauge auto-transformations in ${\mathcal{A}}ut\mathbf{\mathcal{E}}$. Here too, since $\mathrm{Diff}(M)$ is not present, there should be no problem in finding a convenient measure to implement the said functional integral. Towards this end, and with some new ADG-results in hand \cite{mall4}, Radon-type measures on $\mathrm{Conn}_{\mathbf{A}}(\mathbf{\mathcal{E}})/{\mathcal{A}}ut\mathbf{\mathcal{E}}$ are currently being investigated. There have been recent QG tendencies to develop differential geometric ideas and a related integration theory on the moduli space of gravitational connections, as for example in Loop Quantum Gravity (LQG) \cite{ashlew2,ashlew5,smolin}, but advances appear to be stymied by the ever-present background smooth spacetime manifold and its associated $\mathrm{Diff}(M)$ \cite{baez2,baez3}. \item {\em There is no quantization of spacetime {\it per se} entertained in ADG-gravity, since there is no spacetime to begin with}. Such a spacetime quantization procedure figures prominently in current gauge-theoretic ({\it ie}, connection based) approaches to QG such as LQG, and it is used there to resolve smooth spacetime singularities \cite{modesto,husain}. Thus here we have an instance of the aforesaid general anticipation of current QG researchers, namely, that a quantum theory of gravity should remove singularities. Indeed, LQG appears to resolve singularities via spacetime quantization. Again, this must be contrasted against ADG-gravity, where {\it ab initio} there is no spacetime continuum, hence no spacetime quantization either, while singularities are absorbed in the field law itself; hence, strictly speaking, there is no need for their `quantum resolution'. \item Last but not least comes the issue of the formulation of a manifestly {\em background independent} non-perturbative QG \cite{ash5,alvarez,smolin}. 
Normally, `background independence' means `{\em background geometry (:metric) independence}'. ADG-gravity is explicitly background metric independent, since no metric is involved in the theory ({\it ie}, the aforementioned $\mathbf{A}$-metric $\rho$ is not a dynamical variable and has no physical significance in the theory).\footnote{It is an optional, auxiliary structure imposed externally (to the field $\mathcal{D}$) by the experimenter (:`observer' or `measurer'), much like $\mathbf{A}$ itself.} Furthermore, unlike the current connection based approaches to QG, which vitally rely on a background smooth manifold for their differential geometric concepts and constructions, {\em ADG-gravity is manifestly background spacetime manifold independent}. \end{itemize} Thus, in view of all the virtues of ADG-gravity above, one is tempted to ask the following couple of questions: \begin{quotation} \noindent $\bullet$ In the guise of (\ref{eq1}), {\em don't we already possess a quantum version of the (vacuum) Einstein equations?} \vskip 0.05in \noindent and concomitantly: \vskip 0.05in \noindent $\bullet$ Since not only a background metric, but also a background spacetime (manifold), is {\em not} involved in the theory, does the need arise to {\em quantize spacetime itself}? \end{quotation} \noindent The immediate reply is `{\em yes}' and `{\em no}', respectively. \paragraph{The future in a nutshell: QG in a topos.} The last paragraph in the present section is concerned with the possibility of formulating QG ADG-theoretically in a topos. A topos is a special type of category that can be interpreted both as an abstract `pointless space' and as a `logical universe of variable mathematical entities'. In a topos, geometry and logic are unified \cite{macmo}. Thus, the basic intention here is to organize the sheaves involved in ADG-gravity into a topos-like structure in which deep logico-geometrical issues in QG can be addressed. A mathematical byproduct of such an investigation would be to link ADG with the topos-theoretic Synthetic Differential Geometry (SDG) of Kock and Lawvere \cite{kock,laven}, which in turn has enjoyed various applications so far to classical and quantum gravity \cite{grink,guts0,guts1,guts2,guts3,guts,buttish4,ish3}. In this respect, of purely mathematical interest would be to compare, and try to bring together under a topos-theoretic setting, the principal notion of both ADG and SDG---that of {\em connection} \cite{mall-3,mall1,vas1,kock1,kock,laven}. In the context of a finitary, causal and quantal version of Lorentzian gravity formulated in ADG-terms \cite{malrap1,malrap2,malrap3,rap5,malrap4}, this enterprise (with a Grothendieck topos twist closely akin to a recent approach to quantum geometry and QG coined `Causal Site Theory' \cite{crane}\footnote{A categorical generalization of the `Causal Set Theory' of Sorkin {\it et al.} \cite{sorkin1,sork2,sork3,sork4}.}) has already commenced \cite{rap7}.\footnote{Anticipatory works of such an enterprise are \cite{rap0,rap4,rap6}.} Another categorical approach to QG to which ADG-gravity could in principle be related is the recent `{\em Quantizing on a Category}' (QC) general mathematical scheme due to Isham \cite{ish5,ish6,ish7,ish8}. The algebraico-categorical QC is closely akin to ADG both conceptually and technically, having kindred basic motivations and aims. QC's main goal is to quantize systems with configuration (or history) spaces consisting of `points' having internal (algebraic) structure. 
The main motivation behind QC is the failure of applying the conventional quantization concepts and techniques to `systems' ({\it eg}, causets or spacetime topologies) whose configuration (or general history) spaces are far from being structureless-pointed differential manifolds. Isham's approach hinges on two innovations: first, it regards the relevant entities as objects in a category; then, it views the categorical morphisms as abstract analogues of momentum (derivation maps) in the usual (manifold based) theories. As is the case with ADG, although this approach includes the standard manifold based quantization techniques, it goes much further by making possible the quantization of systems whose `state' spaces are not smooth continua. Indeed, there appear to be close ties between QC and ADG-gravity---ties which ought to be looked at more closely. {\em Prima facie}, both schemes concentrate on evading the (pathological) pointed differential manifold---be it the configuration space of some classical or quantum physical system, or the background spacetime arena of classical or quantum (field) physics---and they both employ `pointless', categorico-algebraic methods. Both focus on an abstract (categorical) representation of the notion of derivative or derivation: in QC, Isham abstracts from the usual continuum based notion of vector field (derivation) to arrive at the categorical notion of arrow field, which is a map that respects the internal structure of the categorical objects one wishes to focus on ({\it eg}, topological spaces or causets); while in our work, the notion of derivative is abstracted and generalized to that of an algebraic connection, defined categorically as a sheaf morphism, on a sheaf of suitably algebraized structures ({\it eg}, causal sets, or finitary topological spaces and the incidence algebras thereof representing quantum causal sets, as in the finitary version of ADG-gravity \cite{malrap1,malrap2,malrap3,rap5,rap7}). \section*{II.3. Epilogue: General Closing Remarks} In this epilogue we would first like to discuss whether it is still reasonable to believe that we can use differential geometric ideas in the quantum deep, that is, in the QG domain. Then, we would like to conclude this paper by continuing the general theme of the prologue, namely, that QG research is in need of new concepts, new mathematics, and a novel way of philosophizing about them. \paragraph{Still use differential geometry in QG?} Although the general feeling nowadays among theoretical physicists (and in particular, `quantum gravitists') is that below the so-called Planck length-time ($\ell_{P}$-$t_{P}$),\footnote{$\ell_{P}=\sqrt{G\hbar/c^{3}}\simeq 1.6\times 10^{-33}\,\mathrm{cm};~t_{P}=\sqrt{G\hbar/c^{5}}\simeq 5.3\times 10^{-44}\,\mathrm{s}$.} where quantum gravitational effects are supposed to become significant, the space-time continuum (:manifold) should give way to something more reticular (:discrete) and quantal, CDG-ideas and technology still abound in current QG research. 
Consider for instance the manifold based CDG used in all its glory in the canonical and covariant approaches to QG ({\it eg}, LQG \cite{ashlew2,ashlew5}), or the higher-dimensional (real analytic or holomorphic) manifolds ({\it eg}, Riemann surfaces, K\"ahler manifolds, Calabi-Yau manifolds, supermanifolds, {\it etc.}) engaged in (super)string theory research, or even the so-called noncommutative differential spaces that Connes' Noncommutative Differential Geometry propounds \cite{kastler,connes,connes1}, which are still, deep down, differential manifolds in the usual sense of the term. {\it In toto}, smooth manifolds and CDG are still alive and well in QG. A few people, however, have aired over the years serious doubts about whether the spacetime continuum and, {\it in extenso}, the CDG that is based on it, could be applied {\em at all} in the QG domain. Starting (in chronological order) with Einstein, then going to Feynman, the doubts reach their climax in Isham's categorematic `{\em no-go of differential geometry in QG}' below: \begin{quotation} \noindent ``{\small `...You have correctly grasped the drawback that the continuum brings. If the molecular view of matter is the correct (appropriate) one; {\it ie}, if a part of the universe is to be represented by a finite number of points, then the continuum of the present theory contains too great a manifold of possibilities. I also believe that this `too great' is responsible for the fact that our present means of description miscarry with quantum theory. {\em The problem seems to me how one can formulate statements about a discontinuum without calling upon a continuum space-time as an aid; the latter should be banned from theory as a supplementary construction not justified by the essence of the problem---a construction which corresponds to nothing real. But we still lack the mathematical structure unfortunately}.\footnote{Our emphasis.} How much have I already plagued myself in this way [of the manifold]!...}'' \cite{stachel} \end{quotation} \centerline{.............................} \begin{quotation} \noindent ``{\small{\em ...The theory that space is continuous is wrong, because we get...infinities} {\small\rm [ viz. `singularities']} {\em\small and other similar difficulties} ...{\small\rm [ while]} {\small\em the simple ideas of geometry, extended down to infinitely small, are wrong\footnote{Our emphasis throughout.}...}}'' \cite{feyn1} \end{quotation} \centerline{.............................} \begin{quotation} \noindent ``{\small{\em ...At the Planck-length scale, differential geometry is simply incompatible with quantum theory}...{\small [so that]} {\small\em one will not be able to use differential geometry in the true quantum-gravity theory\footnote{Our emphasis.}...}}'' \cite{ish} \end{quotation} \vskip 0.1in \noindent Isham's remarks are shrewd, critical and iconoclastic: \begin{quotation} {\em CDG and the classical $\mathcal{C}^{\infty}$-smooth manifold model of spacetime supporting its constructions `miscarry with'} (to use Einstein's expression above) {\em quantum theory, and they will therefore be of no import to QG research}. 
\end{quotation} \noindent On the other hand, and this is one of the basic aftermaths of our work, from an ADG-theoretic point of view it is not exactly that differential geometric ideas cannot be used in the quantum regime---as if the intrinsic differential geometric mechanism (which in its essence is of an algebraic nature) fails in one way or another when applied to the realm of QG---but rather that when that mechanism is geometrically effectuated or implemented (represented) by the (cartesian mediation in the guise of the smooth coordinates of the) background $\mathcal{C}^{\infty}$-smooth spacetime manifold as in CDG, then all the said problems (:singularities, unphysical infinities, $\mathbf{\Omega}$-related pathologies) crop up and are insurmountable (always within the confines of, {\it ie}, with the concepts and the methods of, the theoretical framework of the manifold based Analysis). Thus, to pronounce this subtle but, from the ADG-perspective, crucial difference, we maintain that \begin{quotation} \noindent the second part of Isham's quotation above should also carry the adjective `{\em classical}' in front of `{\em differential geometry}', and read: `{\em one will not be able to use \underline{classical} differential geometry}' (or equivalently, a geometrical base differential spacetime manifold) `{\em in the true quantum-gravity theory}'. \end{quotation} \noindent {\it In summa}, the aforesaid subtle distinction hinges on the physical non-existence of a background geometrical smooth spacetime manifold, {\em not} on the inapplicability of the {\em essentially algebraic mechanism of differential geometry}, which can still be retained and applied to QG research. Metaphorically speaking, ADG-gravity has shown us a way {\em not} to throw away the baby (:the invaluable algebraic differential geometric mechanism) together with the bath-water (:the base smooth spacetime manifold). The `icon' (or perhaps better, the `idol') that Isham's iconoclastic words ought to cut out of physics once and for all is the background geometrical spacetime manifold and {\em not} the invaluable differential geometric machinery which CDG has so far misled us into thinking is inextricably tied to the base manifold. To summarize, in the background geometrical spacetime manifoldless ADG-gravity, all the classical and quantum gravity problems we mentioned in the previous section, which are all due to the base $M$, its $\mathbf{\Omega}$ and, {\it in extenso}, to the CDG that is based on the latter, simply disappear---{\it ie}, they become non-problems. Thus, ADG does not solve these puzzles; it simply cuts the Gordian knot that they present within the CDG-framework. This is analogous to how Wittgenstein \cite{witt1} maintained that philosophical problems could be solved: simply by changing perspective---ultimately, by changing theoretical framework: \begin{quotation} \noindent ``{\small ...The solution of philosophical problems can be compared with a gift in a fairy tale: in the magic castle it appears enchanted and if you look at it outside in daylight it is nothing but an ordinary bit of iron (or something of the sort)\footnote{Our emphasis throughout.}...}'' \end{quotation} \noindent Indeed, problems in GR like that of singularities and Einstein's hole argument \cite{stachel3,stachel5,stachel0,stachel9}, as well as the problem of time and that of observables in QG, look formidable (in fact, insuperable!) 
when viewed and tackled via the manifold based CDG---ultimately, when we are bound by ``{\em the golden shackles of the manifold}'' \cite{ish}. However, under the light of ADG, `{\em gold looks like nothing but an ordinary bit of iron}'. Furthermore, much in the same way that Wittgenstein in \cite{witt2} contended that \begin{quotation} \noindent ``{\small ...Our task is, not to discover new calculi, but to describe the present situation in a new light...}'' \end{quotation} \noindent our ADG-framework (and, as a result, ADG-gravity) does not purport to be some kind of new Differential Calculus (and, accordingly, ADG-gravity a new theory of gravitation); it simply goes to show that most (if not all!) of the differential geometric mechanism `inherent' in CDG can be articulated entirely algebraically, without the cartesian mediation of a background geometrical (spacetime) manifold (with all the supposedly physical pathologies that the latter is pregnant with). In addition, it goes without saying that if the base geometrical $M$ has to go, so must the geometrical (spacetime) interpretation of the theory (:GR).\footnote{Of course, it now behooves us to answer the question posed at the end of section II.1.} For after all, Einstein too, notwithstanding the great success that the geometrical spacetime manifold based GR enjoyed during his lifetime, insisted that: \begin{quotation} \noindent ``{\small ...Time and space are modes by which {\em we think}, not conditions in which we live}'' \cite{einst2}\footnote{This quotation can also be found in Anastasios Mallios' contribution to this volume.}...``{\small [the spacetime continuum] corresponds to nothing real}'' \cite{stachel}..., [but perhaps more importantly, that] ``{\small [Quantum theory] {\em does not seem to be in accordance with a continuum theory, and must lead to an attempt to find a purely algebraic theory for the description of reality. But nobody knows how to obtain the basis of such a theory}'' \cite{einst3}}. \end{quotation} \noindent Indeed, we are tempted to say that when Einstein was talking about ``{\em ...concepts which have proven useful for ordering things easily assume so great an authority over us, that we forget their terrestrial origin and accept them as unalterable facts. They then become labelled as `conceptual necessities', `a priori situations', etc.}'' in the quotation we saw in the introduction, he was `subconsciously' referring to the {\it a priori} concept (and use by CDG-means) of the spacetime continuum in GR. Moreover, again to emulate Einstein's concluding words in that quotation, we believe that \begin{quotation} \noindent{\em the road of progress in QG has been blocked for a long period by our erroneous insistence on the `physicality' of the background geometrical spacetime continuum}. \end{quotation} \noindent Parenthetically, and on more general grounds, let it be stressed here that Einstein, during his later years, went as far as to insist that (and we quote him indirectly via Peter Bergmann from \cite{berg0}):\footnote{This quotation can also be found in Anastasios Mallios' contribution to this volume.} \begin{quotation} \noindent ``{\small ...geometrization of physics is {\em not} a foremost or even a meaningful objective...}'' \end{quotation} \noindent Thus, we see that Einstein towards the end of his life tended to leave behind `geometry' and take on `algebra' {\it vis-\`a-vis} the quantum domain. 
Lately, Einstein's words for a purely algebraic description of physical phenomena in the quantum deep in the penultimate quotation above have found fertile ground, as there have been tendencies towards a purely algebraic theoresis of QG. Ten years ago, Louis Crane asked characteristically in the very title of a paper of his \cite{crane0}: \vskip 0.1in \centerline{\em ``Clock and category: is quantum gravity algebraic?''} \vskip 0.1in \noindent The purely algebraico-categorical ADG-gravity appears to answer it affirmatively, and what's more, in a {\em background spacetime manifoldless differential geometric setting}, in spite of Isham's doubts and reservations above. May ADG provide the theoretical framework that Einstein was (and some of us still are nowadays!) looking for in our journey towards QG. However, even if that does not turn out to be the case in the end, at least we will have in our hands an entirely algebraic (re)formulation of differential geometry---a novel framework pregnant with new concepts, new principles, new techniques, and new theoretical terms. Following Wallace Stevens' \cite{stevens} dictum that: \vskip 0.1in \centerline{\small ``...Progress in any aspect is a movement through changes in terminology...''} \vskip 0.1in \noindent we believe it is worth trying to move towards our QG destination through the ADG-path. Let us now pick up the argument where we left it earlier, when the problem of gravitational singularities was discussed under the prism of the background spacetime manifoldless ADG-gravity, and comment on the closely related problem of the unphysical infinities associated with those singularities, as well as the non-renormalizable infinities appearing in QG when treated as another, manifold based, QFT. \paragraph{Whence the unphysical infinities?} There are infinities associated with gravitational singularities, there is no doubt about that. For instance, the curvature of the spherically symmetric Schwarzschild gravitational field of a point-particle of mass $m$ diverges as $\frac{m^{2}}{r^{6}}$ as one approaches it ($r\longrightarrow 0$); specifically, the Kretschmann curvature scalar is $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=48m^{2}/r^{6}$ in geometric units. Moreover, there is no analytic extension of the Schwarzschild spacetime manifold so as to include the singular {\it locus} at $r=0$ with the other regular points of the manifold \cite{clarke4}. In contradistinction to the exterior Schwarzschild singularity at $r=2m$ (:horizon) which has been branded a virtual, coordinate singularity, the interior $r=0$ one is thought of as a true singularity, with physical significance. Nevertheless, it is altogether hard to believe that there are actually physically meaningful infinities in Nature. As noted earlier, many researchers hoped (and still do!) that QG will remove singularities in the same way that QED removed the Maxwellian infinities. Thus, perturbative QG, by emulating the other quantum gauge theories of matter, initially regarded QG as another QFT (on a flat Minkowski background!) and invoked the (arguably {\it ad hoc}) process of renormalization to remove gravitational infinities. It soon failed miserably, because the gravitational coupling constant is dimensionful (Newton's constant has negative mass dimension, rendering the perturbative expansion non-renormalizable). Theoretical physicists are people of resourcefulness, strong resolve and stout heart, thus they invoked (or `better', they introduced by hand!) extra dimensions, extra fields to occupy them and extra symmetries between those extra fields ({\it eg}, supergravity and supersymmetric string theories) in order to `smear' the offensive {\it loci}, much like one blows up singularities in algebraic geometry. 
The singular interaction point-vertices of Feynman diagrams, where the propagation lines of the point-particles of QFT meet, were smeared and smoothed out by world-tubes of propagating closed strings, being `welded' smoothly into one another at the interaction sites. However, infinities, although tamed a bit, are still seen to persist galore (never mind the grave expense of theoretical economy that accompanies the introduction of more and more in principle unobservable fields and their particle-quanta).\footnote{Of course, this blatant violation of Occam's razor is not necessarily bad by itself, as at least it keeps the experimentalists busy (and quiet!) designing experiments to look for the `predicted' extra particles ({\it eg}, the superpartners of the known particles), whose existence appears to be mandated by theory.} At the same time, people from the non-perturbative QG camp soon realized that non-renormalizability is not a problem in itself if one takes into consideration that QG, as opposed to the other quantum gauge forces of matter, has associated with it a fundamental space-time scale---the Planck length-time---which as noted earlier is an expression involving the fundamental constants of the three theories that are supposed to be merged into QG: $G$ from (Newtonian) gravity, $c$ from relativity, and $\hbar$ from quantum mechanics. The Planck scale can then be thought of as prohibiting {\em in principle} the integration down to infinitely small spacetime distances; or, dually, in the perturbation series/integrals, up to infinite momenergies. Non-perturbative QG fundamentally assumes that spacetime is inherently cut-off (:`regularized') by the Planck scale, so that below it the continuum picture should be replaced by something more discrete and quantum. All this is well known and good. The infinities have not only kept us occupied for a while, but they have provided us with a wealth of new ideas and techniques in our struggle and strife to remove them ({\it eg}, anomalies, spontaneous symmetry breaking, phase changes, catastrophes and other critical phenomena, as well as the renormalization group technology that goes hand in hand with them, {\it etc.}) \cite{jackiw}. However, their stubborn persistence makes us still abide by our main thesis here: it is indeed the background smooth spacetime continuum, accommodating uncountably infinite degrees of freedom of the fields which are modelled after smooth functions on it (or $\otimes_{\mathcal{C}^{\infty}_{M}}$-tensors thereof), that is responsible for all those pestilential infinities. We must therefore give up {\em in principle} the spacetime continuum (:manifold) and the usual Analysis (Calculus or CDG) based on it, because they appear to miscarry in the QG deep. In this line of thought we can metaphorically paraphrase Evariste Galois: \vskip 0.1in \centerline{``{\small\em Les calculs sont impraticables}'',\footnote{``{\em Calculations are impractical}''.}} \vskip 0.1in \noindent and add that {\em the Differential Calculus, when effectuated via the background geometrical spacetime continuum, is an obstacle rather than a boon to QG research}. 
In turn, this reminds us of Richard Feynman calling the usual differential geometry ``{\em fancy-schmancy}'', doubting the up-front geometrical interpretation of GR, and opting instead for a combinatory-algebraic (diagrammatic-relational) scheme along QFTheoretic lines for its quantization \cite{feyn2}.\footnote{See especially the foreword, titled ``{\itshape Quantum Gravity}'', written by Brian Hatfield, giving a brief account of Feynman's approach to QG. Hatfield argues there that Feynman not only felt that the (differential) geometrical interpretation of gravity `gets in the way of its quantization', but also that it masks its fundamental gauge character.} Of course, Feynman's unsuccessful attempt at quantizing gravity by applying the perturbative-diagrammatic technology of QED is well documented. At the same time, from the ADG-gravitational perspective we cannot accept non-perturbative QG's thesis that there is a fundamental spacetime scale in Nature either, simply because there is no spacetime in Nature to begin with. From our viewpoint, in a Leibnizian sense, {\em `spacetime' is the (dynamical) objects that comprise `it'; that is, the (dynamical) fields}. Accepting the existence of a fundamental scale in Nature, above which Einstein's equations hold, but below which the latter break down and another set of equations (:those of the QG we are supposed to be after) are in force, is analogous to accepting singularities as {\em physical} entities. They both violate the universality of Physical Law, and undermine the unity and autonomy of the gravitational field. That our calculations are plagued by infinities is more likely because the usual Differential Calculus that {\em we} employ is inextricably tied to a geometrical base spacetime continuum that {\em we} assume up-front in the theory. Our manifold based Analysis invites infinities by allowing for infinitary processes (of divergence) relative to the base topological continuum. On the other hand, {\em there is no infinity in algebra}, and our purely algebraic ADG-gravity suffers from no such unphysical pathologies. It would be weird, or indeed comical(!), to even try to fathom what the meaning of the notion of `singularity' would be in a purely `pointless' and `space(time)less' algebraico-categorical setting like ours. For example, an attempt at the following analogy produces funny thoughts: \begin{quotation} \noindent Does a singularity bend (or break!) the categorical arrows (:connection field morphisms) in ADG-gravity in a way analogous to how a point-electron is geometrically envisaged to distort the Faraday lines of force of the electromagnetic field in its vicinity? Then, {\it mutatis mutandis} for the gravitational field lines of force strongly focusing towards a point-mass, as in the case of the interior Schwarzschild (black hole) singularity. \end{quotation} \paragraph{New theoretical-mathematical framework for QG.} To connect this epilogue back to the prologue like the proverbial tail-biting serpent, in QG research the glaring absence of comprehensive experiments and thus of reliable and concrete experimental data to support and constrain theory-making is, at least from a mathematical viewpoint, quite liberating. The tentative, transient and speculative nature of the field invites virtually unrestrained conceptual imagination, mathematical creativity and wild philosophical wandering. 
Even that most austere and critical of all 20th century theoretical physicists, Wolfgang Pauli, said about the prospect of quantizing the gravitational field \cite{pauli}: \begin{quotation} \noindent ``{\small ...Every theoretical possibility is a potential route to success...[however, in this field] only he who risks has a chance to succeed...}''\footnote{See also Feynman's quotation in the introductory paper to this volume.} \end{quotation} \noindent Abiding by Philip Anderson's dictum that `{\em more is different}', the plethora of (mathematical) approaches to QG is more than welcome (even if we coined for it the seemingly derogatory `{\em zoo}' in the prologue!), under the proviso that every now and then unifying efforts are made to patch together the mosaic of approaches to QG into a single---or at least a regular---pattern tapestry. This can be achieved for example by occasionally leaving the worm's eye-view---as it were, the `local', nitty-gritty problems and technical calculations of each individual approach---and by trying to attain a `global' conceptual, bird's eye-view of the field; one that at least tries to make `dictionary correspondences', in both conceptual and technical jargon, between different approaches. For Nature is economical, and so must be our theories of Her---if not in (mathematical) technicalities, at least conceptually. On the other hand, Paul Dirac, more than 70 years ago \cite{dirac3}, implored us to apply all of our existing mathematical arsenal, and even to invent and create new mathematics in order to tackle the outstanding theoretical physics problems of the last century---QG arguably being {\em the} central one that stubbornly resists (re)solution in our times:\footnote{Quote borrowed from a fairly recent paper by Ludwig Faddeev \cite{faddeev}.} \begin{quotation} \noindent ``{\small ...The steady progress of physics requires for its theoretical foundation a mathematics that gets continually more advanced. This is only natural and to be expected. What, however, was not expected by the scientific workers of the last century was the particular form that the line of advancement of the mathematics would take, namely, it was expected that the mathematics would get more complicated, but would rest on a permanent basis of axioms and definitions, while actually the modern physical developments have required a mathematics that continually shifts its foundation and gets more abstract...It seems likely that this process of increasing abstraction will continue in the future and that advance in physics is to be associated with a continual modification and generalization of the axioms at the base of mathematics rather than with logical development of any one mathematical scheme on a fixed foundation. There are at present fundamental problems in theoretical physics awaiting solution [...]\footnote{At this point Dirac mentions a couple of outstanding mathematical physics problems of his times, which are hereby omitted.} the solution of which problems will presumably require a more drastic revision of our fundamental concepts than any that have gone before. Quite likely these changes will be so great that it will be beyond the power of human intelligence to get the necessary new ideas by direct attempt to formulate the experimental data in mathematical terms. The theoretical worker in the future will therefore have to proceed in a more indirect way. 
{\em The most powerful method of advance that can be suggested at present is to employ all the resources of pure mathematics in attempts to perfect and generalise the mathematical formalism that forms the existing basis of theoretical physics, and after each success in this direction, to try to interpret the new mathematical features in terms of physical entities}\footnote{Our emphasis.}...}'' \end{quotation} \noindent At the same time, however, there is this nagging little voice at the back of every theoretical physicist's mind cautioning her about the {\em New Maths Version of Murphy's Law}, maintaining that \begin{quotation} \noindent {\em whenever there is a 50-50 chance that a new mathematical theory applies to physics successfully, 9 times out of 10 it turns out to fail},\footnote{A watered down version of what David Finkelstein has coined the `{\em mathetic fallacy}' in theoretical physics (private communication).} \end{quotation} \noindent notwithstanding Eugene Wigner's `{\em unreasonable effectiveness of mathematics}'. In turn, this further evokes forebodings of scepticism and fear, reminding her of Pauli's (in)famous remark that ``{\em this theory is so bad, it's not even wrong}''. Nevertheless, it is the main position of this author that such reservations and phobias have to be put aside in the dawn of the new millennium, for in the end they only present inertia to, and create an attitude of pessimism (invariably resulting in indolence) in, the development of theoretical physics. We have to be innovative, adventurous and unconventional, perhaps even iconoclastic,\footnote{See opening paper in this issue.} not only about our technical-mathematical machinery, but also about the conceptual and philosophical underpinnings of our fundamental theories of Nature---with QG in particular, since it is arguably the deepest of them all. Gerard 't Hooft put it succinctly in \cite{thooft}: \begin{quotation} \noindent ``{\small ...The problems of quantum gravity are much more than purely technical ones. They touch upon very essential philosophical issues...}'' \end{quotation} \noindent Thus, we should not unappreciatively pass by this unique opportunity that QG is offering us: to bring together Physics and Philosophy, and thus reinstate the luster of `{\em Naturphilosophie}' that theoretical physics seems to have lost in the last century, predominantly due to its focusing on technical (:mathematical) formalism, allowing important conceptual/interpretational issues to atrophy at the same time. Ultimately, we should not be afraid of making mistakes, or fear that our theories will fall short of describing Nature completely, because anyway, on the one hand the maths is our own free intellectual creation\footnote{Recall from the quotes given above Einstein referring to the (mathematical) concept of the (spacetime) continuum as a `{\em mode by which \underline{we} think}', as well as his warning us in general not to forget the `{\em terrestrial origin}' of various concepts, no matter how useful they may have been in the past.} (thus, we can take responsibility for its shortcomings and blemishes, and rectify them when necessary), while on the other, Physis is almost {\it de facto} wiser than us. This simply goes to show that theoretical physics is a never-ending quest, and thus that our theories are in a constant process of revision, refinement and extension. 
To close this epilogue the way we started it, as Faddeev maintains in \cite{faddeev} motivated by Dirac's remarks above, theoretical/mathematical physics cannot---in fact it {\em should not}---rely anymore on experiment for its progress. It should become more and more autonomous, more and more abstract, as well as versatile and wide ranging. Once again, the tried and tested age-old virtues of conceptual simplicity, mathematical economy and beauty---virtues that are trademarks in the celebrated works of such giants as Einstein and Dirac---can be called upon to guide us in our theoretical physics (ad)ventures through our presumed `subject': Physis.\footnote{Of course, if anything, {\em we} are the subjects of Nature, not the other way round. Hence the quotation marks.} And we can rest assured that these virtues shall safeguard us from `mathematically arbitrary' theory-making. \begin{quotation} \noindent After all, it is well known that when the results were due back from Arthur Eddington's 1919 solar eclipse expedition, in Berlin Max Planck could not go to sleep in anticipation and excitement about whether GR would be experimentally (:observationally) vindicated; or on the contrary, whether it would fail to deliver in the end. {\em Einstein on the other hand reportedly went to bed by eight o'clock}... \end{quotation} \section*{Acknowledgments} This author is indebted to the following people for numerous exchanges, feedback and critique, as well as for their moral encouragement and material support over the past half-decade, to his research program of applying ADG to classical and quantum gravity: Chris Isham, Jim Lambek, Tasos Mallios, Chris Mulvey, Steve Selesnick, Rafael Sorkin, John Stachel, Petros Wallden and Roman Zapatrin. He would also like to acknowledge financial support from the European Commission in the form of a European Reintegration Grant (ERG-CT-505432) held at the University of Athens (Greece).
1,108,101,563,488
arxiv
\section{Improved Generative Inpainting Network} \begin{figure*}[h] \begin{center} \includegraphics[width=1.\linewidth]{fig/framework.png} \end{center} \caption{Illustration of framework.} \label{fig:framework} \end{figure*} In this section, we introduce an improved generative image inpainting network based on the work of~\cite{iizuka2017globally} as our baseline model. In addition to the dilated convolutions and the global and local GANs proposed in~\cite{iizuka2017globally}, we propose several enhancements to the inpainting network architecture and training objectives. The overview of our improved framework is shown in Figure~\ref{fig:framework}. Remarkably, the improved model converges faster and more stably. For the Places2~\cite{zhou2017places} dataset, we dramatically reduce the training time for convergence from 11,520 GPU-hours (Nvidia K80) to 120 GPU-hours (Nvidia GTX 1080). Moreover, the image blending post-processing step in~\cite{iizuka2017globally} is no longer necessary. Thus, we use our improved version as the baseline. \textbf{Inpainting network} As mentioned in~\cite{iizuka2017globally}, the receptive field size of output neurons (a.k.a.\ spatial support) is especially important for inpainting large holes. Thus, Iizuka et al.\ \cite{iizuka2017globally} introduce dilated convolution. To further increase the receptive field size and stabilize training, we introduce another network which predicts coarse structures and colors as a rough initial result. It is trained explicitly with a reconstruction loss and jointly with gradients from deeper layers, which can be viewed as mimicking residual learning~\cite{he2016deep} or deep supervision~\cite{lee2015deeply}. The inpainting network is designed in a thin and deep scheme and thus has fewer parameters than the one in~\cite{iizuka2017globally}. Moreover, we use symmetric (or reflective) padding for all convolution layers. Batch normalization layers~\cite{ioffe2015batch} are removed. ELUs~\cite{clevert2015fast} are used as activation functions. We clip the values from the output layer instead of using a \(tanh\) or \(sigmoid\) function. Furthermore, we separate the global and local GAN instead of concatenating their features after the fully-connected layer. More details can be found in the supplementary materials. \textbf{Global and local Wasserstein GAN} We introduce two Wasserstein GAN critics for modeling global and local content consistency: a local one on the generated bounding box region and a global one on the entire completed image. Different from previous generative inpainting networks~\cite{iizuka2017globally, li2017generative, pathak2016context}, where the authors use DCGAN~\cite{radford2015unsupervised} for adversarial supervision, we formulate an improved WGAN~\cite{arjovsky2017wasserstein, gulrajani2017improved} with a modified gradient penalty term for image inpainting. To review, WGAN uses the \textit{Earth-Mover} (a.k.a.\ \textit{Wasserstein-1}) distance \(W(\mathbb{P}_r, \mathbb{P}_g)\) instead of the \textit{Total Variation}, \textit{Kullback-Leibler} or \textit{Jensen-Shannon} divergence as a measure of the distance between two distributions. The objective function for a WGAN is constructed by applying the \textit{Kantorovich-Rubinstein} duality to obtain: \[ \min_G\max_{D \in \mathcal{D}} E_{\mathbf{x} \sim \mathbb{P}_r}[D(\mathbf{x})] - E_{\tilde{\mathbf{x}} \sim \mathbb{P}_g}[D(\tilde{\mathbf{x}})],\] where \(\mathcal{D}\) is the set of 1-Lipschitz functions and \(\mathbb{P}_g\) is the model distribution implicitly defined by \(\tilde{\mathbf{x}} = G(\mathbf{z})\). 
\(\mathbf{z}\) is the input to the generator, which can be a vector of random noise from a uniform distribution or samples from a data set. To enforce the Lipschitz constraint on the critic, Arjovsky et al.~\cite{arjovsky2017wasserstein} proposed to clip the weights of the critic to lie within a compact space \([-c, c]\), where \(c\) is a constant value. Gulrajani et al.~\cite{gulrajani2017improved} proposed an improved version with a gradient penalty term \[\lambda E_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}(\lVert \nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}})\rVert_2 - 1)^2,\] where \(\mathbf{\hat x}\) is sampled from straight lines between points sampled from the distributions \(\mathbb{P}_g\) and \(\mathbb{P}_r\). The reason is that the gradient of \(D^*\) at all points \(\hat{\mathbf{x}} = (1 - t)\mathbf{x} + t\tilde{\mathbf{x}}\) on the straight line should point directly towards the current sample \(\tilde{\mathbf{x}}\), meaning \(\nabla_{\mathbf{\hat x}} D^*(\hat{\mathbf{x}}) = \frac{\tilde{\mathbf{x}}-\hat{\mathbf{x}}}{\lVert \tilde{\mathbf{x}}-\hat{\mathbf{x}}\rVert}\). For image inpainting, we only try to predict the hole regions, thus the gradient penalty should be applied only to pixels inside the holes, not to the conditional inputs (surrounding pixels). This can be implemented by multiplying the gradients with the input mask as follows: \[\lambda E_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}(\lVert \nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}}) \odot (\mathbf{1} - \mathbf{m})\rVert_2 - 1)^2,\] where \(m_{i,j} = 0\) marks a missing pixel and \(m_{i,j} = 1\) a known one. \(\lambda\) is set to 10 in all experiments. We use a weighted sum of a pixel-wise \(\ell_1\) loss (instead of the mean-square-error in~\cite{iizuka2017globally}) and the WGAN adversarial losses. Note that in primal space, the \textit{Wasserstein-1} distance in WGAN is based on the \(\ell_1\) ground distance: \[ W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\gamma \in \prod(\mathbb{P}_r, \mathbb{P}_g)} E_{(\mathbf{x},\mathbf{y}) \sim \gamma}[\lVert \mathbf{x} - \mathbf{y} \rVert],\] where \(\prod(\mathbb{P}_r, \mathbb{P}_g)\) denotes the set of all joint distributions \(\gamma(\mathbf{x}, \mathbf{y})\) whose marginals are respectively \(\mathbb{P}_r\) and \(\mathbb{P}_g\). Intuitively, the pixel-wise reconstruction loss directly regresses holes to the current ground truth image, while WGANs implicitly learn potentially correct images and train the generator with adversarial gradients. Both losses measure image distances with the pixel-wise \(\ell_1\) difference. \textbf{Spatially discounted reconstruction loss} To fill the hole in an image, there can exist many plausible solutions. Intuitively, pixels near hole boundaries have less label variation than ones in the center of the hole. In other words, if all pixels are treated equally by the reconstruction loss, the convolutional filters are trained to focus on predicting the hard-to-guess hole center, thus deteriorating boundary consistency. The same problem is also experienced in reinforcement learning. When long-term rewards have large variations during sampling, people use temporally discounted rewards over sampled trajectories~\cite{sutton1998reinforcement}. Similarly, we introduce a spatially discounted reconstruction loss using a simple pre-computed loss weighting mask \(\mathbf{M}\) controlled by one scalar parameter \(\gamma\). For each pixel in the hole, the loss weighting value is computed as \(\gamma^{l}\), where \(l\) is the distance to the nearest rectangle boundary. \(\gamma\) is set to 0.99 in all experiments. The problem is also addressed in~\cite{pathak2016context, yeh2016semantic}. The importance weighted context loss, proposed in~\cite{yeh2016semantic}, is a pixel-wise loss spatially weighted by the ratio of uncorrupted pixels within a fixed window size (\(7 \times 7\) pixels). Pathak et al.\ \cite{pathak2016context} predict a slightly larger patch with higher loss weighting (\(\times 10\)) in the overlapped region (7 pixels). For inpainting large holes, the proposed discounted loss makes more sense and introduces almost zero computational overhead. Experiments show that it slightly improves visual quality. We use the discounted \(\ell_1\) reconstruction loss as default. 
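To make the two loss components above concrete, the following is a minimal PyTorch-style sketch of the hole-masked gradient penalty and the pre-computed discount mask. It is an illustrative sketch rather than the authors' released code: the names \texttt{wgan\_gp\_hole} and \texttt{discount\_mask} and the \texttt{critic} callable are our own assumptions, and batching details are simplified. \begin{verbatim}
import torch

def wgan_gp_hole(critic, real, completed, mask, lam=10.0):
    # mask: (N, 1, H, W), 1 for known pixels, 0 inside the hole.
    t = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (t * real + (1 - t) * completed).detach().requires_grad_(True)
    grad = torch.autograd.grad(critic(x_hat).sum(), x_hat,
                               create_graph=True)[0]
    grad = grad * (1.0 - mask)  # penalize gradients only inside the hole
    norm = grad.flatten(1).norm(2, dim=1)
    return lam * ((norm - 1.0) ** 2).mean()

def discount_mask(h, w, gamma=0.99):
    # Weight gamma**l per hole pixel, where l is the distance to the
    # nearest boundary of the h x w rectangular hole.
    i = torch.arange(h, dtype=torch.float32)
    j = torch.arange(w, dtype=torch.float32)
    di = torch.minimum(i, (h - 1) - i)   # distance to top/bottom edges
    dj = torch.minimum(j, (w - 1) - j)   # distance to left/right edges
    return gamma ** torch.minimum(di[:, None], dj[None, :])
\end{verbatim} The discount mask is computed once per hole size and simply multiplies the per-pixel \(\ell_1\) differences before averaging.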
\section{Proposed Algorithm} \subsection{Contextual attention} \label{subsec:attention} \begin{figure}[t] \begin{center} \includegraphics[width=1.\linewidth]{fig/contextual_attention_layer.png} \end{center} \caption{Contextual attention layer.} \label{fig:contextual_attention_layer} \end{figure} Convolutional neural networks process local features layer by layer and are thus not effective for borrowing patches of interest, especially ones at distant spatial locations. To solve the problem, we consider an attention mechanism in deep models. We find that existing feed-forward spatial attention modules are very limited for the task of image inpainting. Thus, we propose a novel feed-forward contextual attention layer that can be trained in an end-to-end manner. In short, the contextual attention layer is designed with convolution for matching, softmax for comparison and deconvolution for pasting. We also introduce two additional convolutions to propagate attention scores horizontally and vertically, which encourage attending to coherent regions instead of individual pixels. Moreover, we introduce several choices to improve memory efficiency with almost no drop in performance. Finally, it is noteworthy that the contextual attention layer can take images of arbitrary resolution during testing. \textbf{Match and paste} We consider the problem where we have features of missing pixels (foreground) and surroundings (background). As shown in Figure~\ref{fig:contextual_attention_layer}, we first extract patches (\(3 \times 3\)) located in the background and reshape them as convolutional filters. To match missing patches \(\{f_{x, y}\}\) with surrounding patches \(\{b_{x',y'}\}\), we use the normalized inner product (cosine similarity) \[s_{x,y,x',y'} = \langle \frac{f_{x,y}}{||f_{x,y}||}, \frac{b_{x',y'}}{||b_{x',y'}||}\rangle,\] where \(s_{x,y,x',y'}\) is the matching score between the patch centered at background location \((x',y')\) and the foreground patch centered at \((x, y)\). We then normalize the matching score with a scaled softmax along the \(x'y'\)-dimension to get the attention score \(s^*_{x,y,x',y'} = \textit{softmax}_{x',y'} (\lambda s_{x,y,x',y'})\), where \(\lambda\) is a constant value. This can be efficiently implemented as convolution and channel-wise softmax as shown in Figure~\ref{fig:contextual_attention_layer}. Similarly, as the final step, we reuse the extracted patches \(\{b_{x',y'}\}\) as deconvolutional filters for differentiable patch pasting. Values of overlapped pixels are averaged. We note that several previous works~\cite{chen2016fast, li2016combining} utilize convolution to speed up the computation of nearest neighbor fields for style transfer tasks. Differently, those methods directly take the result with the maximum score, leaving the whole module non-differentiable. 
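The match, compare and paste steps above can be sketched in a few lines of PyTorch-style pseudocode. This is our own illustrative reading, not the authors' implementation, and it makes simplifying assumptions (batch size 1, stride 1; only the background filters are normalized for brevity, whereas normalizing the foreground patches as well yields the exact cosine similarity): \begin{verbatim}
import torch
import torch.nn.functional as F

def contextual_attention(fg, bg, ksize=3, scale=10.0):
    # fg, bg: (1, C, H, W) foreground / background feature maps.
    C = bg.size(1)
    # Extract ksize x ksize background patches as convolutional filters.
    patches = F.unfold(bg, ksize, padding=ksize // 2)       # (1, C*k*k, L)
    L = patches.size(-1)
    w = patches.transpose(1, 2).reshape(L, C, ksize, ksize)
    # Matching as convolution with (normalized) background filters.
    wn = w / w.flatten(1).norm(dim=1).clamp_min(1e-4).view(L, 1, 1, 1)
    scores = F.conv2d(fg, wn, padding=ksize // 2)           # (1, L, H, W)
    # Comparison as a scaled channel-wise softmax over bg locations.
    attn = F.softmax(scale * scores, dim=1)
    # Pasting as deconvolution with the raw patches as filters; dividing
    # by k*k averages the overlapping contributions (exact in the interior).
    return F.conv_transpose2d(attn, w, padding=ksize // 2) / ksize ** 2
\end{verbatim}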
We should also note that the contextual attention layer can be viewed as a kind of soft PatchMatch~\cite{barnes2009patchmatch} on a feature space learned with deep neural networks. \textbf{Attention propagation} We further introduce a coherency prior in the contextual attention layer which significantly improves inpainting quality. The idea of the coherency prior is that, by the nature of image data, a shift in a foreground patch likely corresponds to an equal shift in the matched background patch. That is, for example, \(s^*_{x, y, x', y'}\) usually has a value very close to that of \(s^*_{x+1, y, x'+1, y'}\). To encourage the resulting attention map to consist of coherent patches instead of individual pixels, we perform a left-right propagation followed by a top-down propagation with kernel size \(k\). That is: \[ \begin{split} {s^h}_{x, y, x', y'} = & \sum_{i \in \{-k, ..., k\}}s^*_{x+i, y, x'+i, y'}, \\ {s^v}_{x, y, x', y'} = & \sum_{j \in \{-k, ..., k\}}s^h_{x, y+j, x', y'+j}. \end{split} \] The propagation can be efficiently implemented as a convolution with a fixed constant kernel (i.e.\ an identity matrix) and proper corner paddings. \(k\) is set to 2 in all of our experiments. This property can significantly improve inpainting results in the forward path and provides richer gradients in the backward path. Note that it is also possible to learn a spatial propagation with a few convolutions and activations. With attention propagation, the resulting attention map often consists of coherent regions. \textbf{Memory efficiency} Assume that we have a feature map of resolution \(128 \times 128\) with a missing region of resolution \(64 \times 64\). Then the resulting number of convolutional filters is 12,288. This may cause too much memory overhead for GPUs. To overcome the issue, we introduce two options: 1) Extracting background patches with strides to reduce the number of filters. 2) Reducing the resolution of the foreground attention map for the convolution and then using background patches of raw resolution to paste with strides. \subsection{Unified inpainting network} \label{subsec:unified} \begin{figure}[t] \begin{center} \includegraphics[width=1.\linewidth]{fig/unified_attention_network.png} \end{center} \caption{Unified with attention.} \label{fig:unified_attention} \end{figure} To borrow background patches without losing the ability to hallucinate novel contents, we introduce two parallel encoders as shown in Figure~\ref{fig:unified_attention} based on Figure~\ref{fig:framework}. The bottom encoder specifically focuses on hallucinating contents with layer-by-layer (dilated) convolution, while the top one tries to attend to background features of interest. Output features from the two encoders are then aggregated and fed into a single decoder to obtain the final output. For training, given a \textit{raw image} \(\mathbf{x}\) of size \(H \times W\), we sample a boolean \textit{image mask} \(\mathbf{m}\) at a random location. The \textit{input image} \(\mathbf{z}\) is corrupted from the \textit{raw image} as \(\mathbf{z} = \mathbf{x} \odot \mathbf{m}\). The inpainting network \(G\) takes the channel-wise concatenation of the \textit{input image} and \textit{image mask} as input, and outputs the \textit{predicted image} \(\mathbf{x'} = G(\mathbf{z, m})\) with the same size as the input (i.e.\ \(H \times W\)). Pasting the masked region of \(\mathbf{x'}\) into the \textit{input image}, we get the \textit{completed image} \(\tilde{\mathbf{x}} = \mathbf{z} + \mathbf{x'} \odot \mathbf{(1-m)}\). Image values are linearly scaled from the range \([0, 255]\) to \([-1, 1]\) in all experiments. 
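A minimal sketch of this input construction and pasting step, under the same hypothetical PyTorch conventions as the sketches above (\texttt{generator} stands for the inpainting network \(G\)): \begin{verbatim}
import torch

def inpaint_step(x, mask, generator):
    # x: (N, 3, H, W) raw images in [-1, 1];
    # mask: (N, 1, H, W), 1 for known pixels, 0 inside the hole.
    z = x * mask                                   # corrupted input image
    x_pred = generator(torch.cat([z, mask], dim=1))
    x_completed = z + x_pred * (1.0 - mask)        # paste prediction into hole
    return x_pred, x_completed
\end{verbatim}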
To sample an \textit{image mask} with variable size, we first sample a mask \(\mathbf{m'}\) of size exactly \(h \times w\) at a random location. Then we sample another mask \(\mathbf{m}\) inside \(\mathbf{m'}\) with a random height in the range \([\frac{3}{4}h, h]\) and width in \([\frac{3}{4}w, w]\) at a random location as the final \textit{image mask}. Note that the input to the local GAN has to have a fixed size because of the final fully-connected layers. Thus, we take the bounding box crop of the \textit{completed image} \(\tilde{\mathbf{x}}\) given by mask \(\mathbf{m'}\) as the input to the local GAN, and use the \textit{completed image} \(\tilde{\mathbf{x}}\) as the input to the global GAN. Note that unlike the work~\cite{iizuka2017globally}, we do not sample another random mask to construct real data for the local GAN; instead we use the same mask \(\mathbf{m'}\) to crop the ground truth image as real data. We describe the training procedure in Algorithm~\ref{algo:algo}. \input{tex_figures/algo.tex} \section{Conclusion} We proposed a coarse-to-fine generative image inpainting framework and introduced our baseline model as well as the full model with a novel contextual attention module. We showed that the contextual attention module significantly improves image inpainting results by learning feature representations for explicitly matching and attending to relevant background patches. As future work, we plan to extend the method to very high-resolution inpainting applications using ideas similar to progressive growing of GANs~\cite{karras2017progressive}. The proposed inpainting framework and contextual attention module can also be applied to conditional image generation, image editing and computational photography tasks including image-based rendering, image super-resolution, guided editing and many others. {\small \bibliographystyle{ieee} \section{Introduction} Filling missing pixels of an image, often referred to as image inpainting or completion, is an important task in computer vision. It has many applications in photo editing, image-based rendering and computational photography~\cite{barnes2009patchmatch, levin2004seamless, newson2014video, park2017transformation, simakov2008summarizing, yeh2016semantic}. The core challenge of image inpainting lies in synthesizing visually realistic and semantically plausible pixels for the missing regions that are coherent with existing ones. Early works~\cite{barnes2009patchmatch, hays2007scene} attempted to solve the problem using ideas similar to texture synthesis~\cite{efros2001image, efros1999texture}, i.e.\ by matching and copying background patches into holes, starting from low resolution to high resolution or propagating from hole boundaries. These approaches work especially well in background inpainting tasks, and are widely deployed in practical applications~\cite{barnes2009patchmatch}. However, as they assume missing patches can be found somewhere in the background regions, they cannot hallucinate novel image contents for challenging cases where inpainting regions involve complex, non-repetitive structures (e.g.\ faces, objects). Moreover, these methods are not able to capture high-level semantics. 
Rapid progress in deep convolutional neural networks (CNN) and generative adversarial networks (GAN)~\cite{goodfellow2014generative} inspired recent works~\cite{iizuka2017globally, li2017generative, pathak2016context, yeh2016semantic} to formulate inpainting as a conditional image generation problem, where high-level recognition and low-level pixel synthesis are combined in a convolutional encoder-decoder network, jointly trained with adversarial networks to encourage coherency between generated and existing pixels. These works are shown to generate plausible new contents in highly structured images, such as faces, objects and scenes. Unfortunately, these CNN-based methods often create boundary artifacts, distorted structures and blurry textures inconsistent with surrounding areas. We found that this is likely due to the ineffectiveness of convolutional neural networks in modeling long-term correlations between distant contextual information and the hole regions. For example, to allow a pixel to be influenced by content 64 pixels away, it requires at least 6 layers of \(3 \times 3\) convolutions with dilation factor 2 or equivalent~\cite{iizuka2017globally, yu2015multi}. Nevertheless, a dilated convolution samples features from a regular and symmetric grid and thus may not be able to weigh the features of interest over the others. Note that a recent work~\cite{yang2016high} attempts to address the appearance discrepancy by optimizing texture similarities between generated patches and the matched patches in known regions. Although it improves the visual quality, this method is held back by the hundreds of gradient descent iterations it requires and costs minutes to process an image of resolution \(512 \times 512\) on GPUs. We present a unified feed-forward generative network with a novel contextual attention layer for image inpainting. Our proposed network consists of two stages. The first stage is a simple dilated convolutional network trained with a reconstruction loss to rough out the missing contents. The contextual attention is integrated in the second stage. The core idea of contextual attention is to use the features of known patches as convolutional filters to process the generated patches. It is designed and implemented with convolution for matching generated patches with known contextual patches, channel-wise softmax to weigh relevant patches and deconvolution to reconstruct the generated patches with contextual patches. The contextual attention module also has a spatial propagation layer to encourage spatial coherency of attention. In order to allow the network to hallucinate novel contents, we have another convolutional pathway in parallel with the contextual attention pathway. The two pathways are aggregated and fed into a single decoder to obtain the final output. The whole network is trained end to end with reconstruction losses and two Wasserstein GAN losses~\cite{arjovsky2017wasserstein, gulrajani2017improved}, where one critic looks at the global image while the other looks at the local patch of the missing region. Experiments on multiple datasets including faces, textures and natural images demonstrate that the proposed approach generates higher-quality inpainting results than existing ones. Example results are shown in Figure~\ref{fig:intro}. Our contributions are summarized as follows: \begin{itemize} \setlength\itemsep{.2em} \item We propose a novel contextual attention layer to explicitly attend on related feature patches at distant spatial locations. 
\item We introduce several techniques including inpainting network enhancements, global and local WGANs~\cite{gulrajani2017improved} and a spatially discounted reconstruction loss to improve the training stability and speed based on the current state-of-the-art generative image inpainting network~\cite{iizuka2017globally}. As a result, we are able to train the network in a week instead of two months. \item Our unified feed-forward generative network achieves high-quality inpainting results on a variety of challenging datasets including CelebA faces~\cite{liu2015faceattributes}, CelebA-HQ faces~\cite{karras2017progressive}, DTD textures~\cite{cimpoi2014describing}, ImageNet~\cite{russakovsky2015imagenet} and Places2~\cite{zhou2017places}. \end{itemize} \section{Related Work} \subsection{Image Inpainting} Existing works for image inpainting can be mainly divided into two groups. The first group represents traditional diffusion-based or patch-based methods with low-level features. The second group attempts to solve the inpainting problem by a learning-based approach, e.g.\ training deep convolutional neural networks to predict pixels for the missing regions. Traditional diffusion or patch-based approaches such as~\cite{ballester2001filling, bertalmio2000image, efros2001image, efros1999texture} typically use variational algorithms or patch similarity to propagate information from the background regions to the holes. These methods work well for stationary textures but are limited for non-stationary data such as natural images. Simakov et al.\ \cite{simakov2008summarizing} propose a bidirectional patch similarity-based scheme to better model non-stationary visual data for re-targeting and inpainting applications. However, dense computation of patch similarity~\cite{simakov2008summarizing} is a very expensive operation, which prohibits practical applications of such methods. In order to address the challenge, a fast nearest neighbor field algorithm called PatchMatch~\cite{barnes2009patchmatch} has been proposed, which has shown significant practical value for image editing applications including inpainting. Recently, deep learning and GAN-based approaches have emerged as a promising paradigm for image inpainting. Initial efforts~\cite{kohler2014mask, xu2014deep} train convolutional neural networks for denoising and inpainting of small regions. Context Encoders~\cite{pathak2016context} first train deep neural networks for inpainting large holes. The model is trained to complete the center \(64 \times 64\) region of a \(128 \times 128\) image, with both an \(\ell_2\) pixel-wise reconstruction loss and a generative adversarial loss as the objective function. More recently, Iizuka et al.\ \cite{iizuka2017globally} improve it by introducing both global and local discriminators as adversarial losses. The global discriminator assesses whether the completed image is coherent as a whole, while the local discriminator focuses on a small area centered at the generated region to enforce local consistency. In addition, Iizuka et al.\ \cite{iizuka2017globally} use dilated convolutions in the inpainting network to replace the channel-wise fully-connected layer adopted in Context Encoders; both techniques are proposed for increasing the receptive fields of output neurons. Meanwhile, there have been several studies focusing on generative face inpainting. Yeh et al.\ \cite{yeh2016semantic} search for the closest encoding of the corrupted image in a latent space and decode it to get a completed image. 
Li et al.\ \cite{li2017generative} introduce an additional face parsing loss for face completion. However, these methods typically require post-processing steps such as an image blending operation to enforce color coherency near the hole boundaries. Several works~\cite{snelgrove2017high, yang2016high} follow ideas from image stylization~\cite{chen2016fast, li2016combining} to formulate inpainting as an optimization problem. For example, Yang et al.\ \cite{yang2016high} propose a multi-scale neural patch synthesis approach based on joint optimization of image content and texture constraints, which not only preserves contextual structures but also produces high-frequency details by matching and adapting patches with the most similar mid-layer feature correlations of a deep classification network. This approach shows promising visual results but is very slow due to the optimization process. \subsection{Attention Modeling} There have been many studies on learning spatial attention in deep convolutional neural networks. Here, we choose to review a few representative ones related to the proposed contextual attention model. Jaderberg et al.~\cite{jaderberg2015spatial} first propose a parametric spatial attention module called the spatial transformer network (STN) for object classification tasks. The model has a localization module to predict the parameters of a global affine transformation to warp features. However, this model assumes a global transformation, so it is not suitable for modeling patch-wise attention. Zhou et al.~\cite{zhou2016view} introduce an appearance flow to predict offset vectors specifying which pixels in the input view should be moved to reconstruct the target view for novel view synthesis. This method is shown to be effective for matching related views of the same objects but is not effective in predicting a flow field from the background region to the hole, according to our experiments. Recently, Dai et al.\ \cite{dai2017deformable} and Jeon et al.\ \cite{jeon2017active} propose to learn spatially attentive or active convolutional kernels. These methods can potentially better leverage information to deform the convolutional kernel shape during training but may still be limited when we need to borrow exact features from the background. \section{Improved Generative Inpainting Network} \begin{figure*}[h] \begin{center} \includegraphics[width=1.\linewidth]{fig/framework.jpg} \end{center} \caption{Overview of our improved generative inpainting framework. The coarse network is trained with reconstruction loss explicitly, while the refinement network is trained with reconstruction loss, global and local WGAN-GP adversarial loss.} \label{fig:framework} \end{figure*} We first construct our baseline generative image inpainting network by reproducing and making several improvements to the recent state-of-the-art inpainting model~\cite{iizuka2017globally} which has shown promising visual results for inpainting images of faces, building facades and natural images. \textbf{Coarse-to-fine network architecture} The network architecture of our improved model is shown in Figure~\ref{fig:framework}. We follow the same input and output configurations as in~\cite{iizuka2017globally} for training and inference, i.e.\ the generator network takes an image with white pixels filled in the holes and a binary mask indicating the hole regions as input pairs, and outputs the final completed image. We pair the input with a corresponding binary mask to handle holes with variable sizes, shapes and locations. 
The input to the network is a \(256 \times 256\) image with a rectangular missing region sampled randomly during training, and the trained model can take an image of different sizes with multiple holes in it. In image inpainting tasks, the size of the receptive fields should be sufficiently large, and Iizuka et al.\ \cite{iizuka2017globally} adopt dilated convolution for that purpose. To further enlarge the receptive fields and stabilize training, we introduce a two-stage coarse-to-fine network architecture where the first network makes an initial coarse prediction, and the second network takes the coarse prediction as input and predicts refined results. The coarse network is trained with the reconstruction loss explicitly, while the refinement network is trained with the reconstruction as well as GAN losses. Intuitively, the refinement network sees a more complete scene than the original image with missing regions, so its encoder can learn a better feature representation than the coarse network. This two-stage network architecture is similar in spirit to residual learning~\cite{he2016deep} or deep supervision~\cite{lee2015deeply}. Also, our inpainting network is designed in a thin and deep scheme for efficiency purposes and has fewer parameters than the one in~\cite{iizuka2017globally}. In terms of layer implementations, we use mirror padding for all convolution layers and remove batch normalization layers~\cite{ioffe2015batch} (which we found deteriorate color coherence). Also, we use ELUs~\cite{clevert2015fast} as activation functions instead of the ReLU in~\cite{iizuka2017globally}, and clip the output filter values instead of using \(tanh\) or \(sigmoid\) functions. In addition, we found that separating the global and local feature representations for GAN training works better than the feature concatenation in~\cite{iizuka2017globally}. More details can be found in the supplementary materials. \textbf{Global and local Wasserstein GANs} \label{subsec:wgan} Different from previous generative inpainting networks~\cite{iizuka2017globally, li2017generative, pathak2016context} which rely on DCGAN~\cite{radford2015unsupervised} for adversarial supervision, we propose to use a modified version of WGAN-GP~\cite{arjovsky2017wasserstein, gulrajani2017improved}. We attach the WGAN-GP loss to both the global and local outputs of the second-stage refinement network to enforce global and local consistency, inspired by~\cite{iizuka2017globally}. The WGAN-GP loss is well known to outperform existing GAN losses for image generation tasks, and it works well when combined with the \(\ell_1\) reconstruction loss as they both use the \(\ell_1\) distance metric. Specifically, WGAN uses the \textit{Earth-Mover} (a.k.a.\ \textit{Wasserstein-1}) distance \(W(\mathbb{P}_r, \mathbb{P}_g)\) for comparing the generated and real data distributions. Its objective function is constructed by applying the \textit{Kantorovich-Rubinstein} duality: \[ \min_G\max_{D \in \mathcal{D}} E_{\mathbf{x} \sim \mathbb{P}_r}[D(\mathbf{x})] - E_{\tilde{\mathbf{x}} \sim \mathbb{P}_g}[D(\tilde{\mathbf{x}})],\] where \(\mathcal{D}\) is the set of 1-Lipschitz functions and \(\mathbb{P}_g\) is the model distribution implicitly defined by \(\tilde{\mathbf{x}} = G(\mathbf{z})\). \(\mathbf{z}\) is the input to the generator. 
Gulrajani et al.~\cite{gulrajani2017improved} propose an improved version of WGAN with a gradient penalty term \[\lambda E_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}(\lVert \nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}})\rVert_2 - 1)^2,\] where \(\hat{\mathbf{x}}\) is sampled from the straight line between points sampled from the distributions \(\mathbb{P}_g\) and \(\mathbb{P}_r\). The reason is that the gradient of \(D^*\) at all points \(\hat{\mathbf{x}} = (1 - t)\mathbf{x} + t\tilde{\mathbf{x}}\) on the straight line should point directly towards the current sample \(\tilde{\mathbf{x}}\), meaning \(\nabla_{\mathbf{\hat x}} D^*(\hat{\mathbf{x}}) = \frac{\tilde{\mathbf{x}}-\hat{\mathbf{x}}}{\lVert \tilde{\mathbf{x}}-\hat{\mathbf{x}}\rVert}\). For image inpainting, we only try to predict the hole regions, so the gradient penalty should be applied only to pixels inside the holes. This can be implemented by multiplying the gradients with the input mask \(\mathbf{m}\) as follows: \[\lambda E_{\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{\mathbf{x}}}}(\lVert \nabla_{\hat{\mathbf{x}}} D(\hat{\mathbf{x}}) \odot (\mathbf{1} - \mathbf{m})\rVert_2 - 1)^2,\] where the mask value is \(0\) for missing pixels and \(1\) elsewhere. \(\lambda\) is set to 10 in all experiments. We use a weighted sum of a pixel-wise \(\ell_1\) loss (instead of the mean-square-error in~\cite{iizuka2017globally}) and the WGAN adversarial losses. Note that in primal space, the \textit{Wasserstein-1} distance in WGAN is based on the \(\ell_1\) ground distance: \[ W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\gamma \in \prod(\mathbb{P}_r, \mathbb{P}_g)} E_{(\mathbf{x},\mathbf{y}) \sim \gamma}[\lVert \mathbf{x} - \mathbf{y} \rVert],\] where \(\prod(\mathbb{P}_r, \mathbb{P}_g)\) denotes the set of all joint distributions \(\gamma(\mathbf{x}, \mathbf{y})\) whose marginals are respectively \(\mathbb{P}_r\) and \(\mathbb{P}_g\). Intuitively, the pixel-wise reconstruction loss directly regresses holes to the current ground truth image, while WGANs implicitly learn to match potentially correct images and train the generator with adversarial gradients. As both losses measure pixel-wise \(\ell_1\) distances, the combined loss is easier to train and makes the optimization process more stable. \textbf{Spatially discounted reconstruction loss} Inpainting problems involve hallucination of pixels, so they can have many plausible solutions for any given context. In challenging cases, a plausible completed image can have patches or pixels that are very different from those in the original image. As we use the original image as the only ground truth to compute the reconstruction loss, strong enforcement of the reconstruction loss on those pixels may mislead the training of the convolutional network. Intuitively, missing pixels near the hole boundaries have much less ambiguity than pixels closer to the center of the hole. This is similar to an issue observed in reinforcement learning: when long-term rewards have large variations during sampling, temporally discounted rewards are used over sampled trajectories~\cite{sutton1998reinforcement}. Inspired by this, we introduce a spatially discounted reconstruction loss using a weight mask \(\mathbf{M}\). The weight of each pixel in the mask is computed as \(\gamma^{l}\), where \(l\) is the distance of the pixel to the nearest known pixel. \(\gamma\) is set to 0.99 in all experiments. Similar weighting ideas are also explored in~\cite{pathak2016context, yeh2016semantic}.
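The weight mask is cheap to precompute for each sampled hole. Below is a minimal sketch of one way to build it (names are ours; the text does not specify the distance metric, so a Euclidean distance transform is assumed here):
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def spatial_discount_mask(hole, gamma=0.99):
    """Per-pixel weights gamma**l for the l1 reconstruction loss.

    hole: boolean array, True inside the hole, False on known pixels.
    distance_transform_edt gives, for each True pixel, the Euclidean
    distance l to the nearest False (known) pixel; known pixels get l = 0.
    """
    l = distance_transform_edt(hole)
    return gamma ** l

# A 128x128 hole centered in a 256x256 image: weights decay towards the center.
hole = np.zeros((256, 256), dtype=bool)
hole[64:192, 64:192] = True
w = spatial_discount_mask(hole)
print(round(w[65, 128], 3), round(w[128, 128], 3))  # ~0.980 near edge, ~0.526 at center
\end{verbatim}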
The importance-weighted context loss proposed in~\cite{yeh2016semantic} is spatially weighted by the ratio of uncorrupted pixels within a fixed window (e.g.\ \(7 \times 7\)). Pathak et al.\ \cite{pathak2016context} predict a slightly larger patch with higher loss weighting (\(\times 10\)) in the border area. For inpainting large holes, the proposed discounted loss is more effective for improving the visual quality. We use the discounted \(\ell_1\) reconstruction loss in our implementation. With all the above improvements, our baseline generative inpainting model converges much faster than~\cite{iizuka2017globally} and produces more accurate inpainting results. For Places2~\cite{zhou2017places}, we reduce the training time from the 11,520 GPU-hours (K80) reported by~\cite{iizuka2017globally} to 120 GPU-hours (GTX 1080), an almost \(100\times\) speedup. Moreover, the post-processing step (image blending) of~\cite{iizuka2017globally} is no longer necessary. \section{Image Inpainting with Contextual Attention} Convolutional neural networks process image features with local convolutional kernels layer by layer and are thus not effective at borrowing features from distant spatial locations. To overcome this limitation, we consider the attention mechanism and introduce a novel contextual attention layer in the deep generative network. In this section, we first discuss details of the contextual attention layer, and then describe how we integrate it into our unified inpainting network. \subsection{Contextual Attention} \label{subsec:attention} \begin{figure}[t] \centering \includegraphics[width=1.\linewidth]{fig/contextual_attention_layer.jpg} \caption{Illustration of the contextual attention layer. First we use convolution to compute the matching scores of foreground patches with background patches (used as convolutional filters). Then we apply softmax to obtain an attention score for each pixel. Finally we reconstruct foreground patches from background patches by performing deconvolution on the attention scores. The contextual attention layer is differentiable and fully convolutional.} \label{fig:contextual_attention_layer} \end{figure} The contextual attention layer learns where to borrow or copy feature information from known background patches to generate missing patches. It is differentiable, and thus can be trained in deep models, and fully convolutional, which allows testing on arbitrary resolutions. \textbf{Match and attend} We consider the problem of matching features of missing pixels (foreground) to surroundings (background). As shown in Figure~\ref{fig:contextual_attention_layer}, we first extract patches (\(3 \times 3\)) from the background and reshape them as convolutional filters. To match foreground patches \(\{f_{x, y}\}\) with background ones \(\{b_{x',y'}\}\), we measure the normalized inner product (cosine similarity) \[s_{x,y,x',y'} = \langle \frac{f_{x,y}}{||f_{x,y}||}, \frac{b_{x',y'}}{||b_{x',y'}||}\rangle,\] where \(s_{x,y,x',y'}\) represents the similarity between the patch centered at background location \((x',y')\) and the one at foreground location \((x, y)\). Then we weight the similarities with a scaled softmax along the \(x'y'\)-dimension to get an attention score for each pixel: \(s^*_{x,y,x',y'} = \textit{softmax}_{x',y'} (\lambda s_{x,y,x',y'})\), where \(\lambda\) is a constant. This is efficiently implemented as convolution and channel-wise softmax. Finally, we reuse the extracted patches \(\{b_{x',y'}\}\) as deconvolutional filters to reconstruct the foreground. Values of overlapping pixels are averaged.
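The match-and-attend step maps directly onto standard convolution operators. Below is a minimal, single-image PyTorch sketch of the idea (our naming, not the paper's TensorFlow implementation); mask handling and the memory-saving options of the next paragraphs are omitted, and only the background patches are normalized, since the foreground patch norm merely rescales the per-pixel softmax temperature:
\begin{verbatim}
import torch
import torch.nn.functional as F

def contextual_attention(f, b, scale=10.0):
    """Minimal match-and-attend sketch.

    f: foreground features (1, C, H, W); b: background features (1, C, H, W).
    """
    _, C, H, W = b.shape
    # Extract 3x3 background patches and reshape them into conv filters.
    patches = F.unfold(b, kernel_size=3, padding=1)        # (1, C*9, H*W)
    w = patches.transpose(1, 2).reshape(H * W, C, 3, 3)    # (L, C, 3, 3)
    # Normalize each background patch to unit L2 norm.
    norms = w.reshape(H * W, -1).norm(dim=1).clamp_min(1e-4)
    w_normed = w / norms.view(-1, 1, 1, 1)
    # Matching as convolution: one score map per background patch.
    scores = F.conv2d(f, w_normed, padding=1)              # (1, L, H, W)
    attn = torch.softmax(scale * scores, dim=1)            # softmax over x', y'
    # Reconstruction as deconvolution with the (unnormalized) patches,
    # then average the overlapping contributions (9 per interior pixel).
    y = F.conv_transpose2d(attn, w, padding=1)             # (1, C, H, W)
    overlap = F.conv_transpose2d(torch.ones(1, 1, H, W),
                                 torch.ones(1, 1, 3, 3), padding=1)
    return y / overlap, attn

# Toy usage: attend a random "foreground" onto itself as background.
feat = torch.randn(1, 8, 32, 32)
out, attn_map = contextual_attention(feat, feat)
print(out.shape, attn_map.shape)  # (1, 8, 32, 32), (1, 1024, 32, 32)
\end{verbatim}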
\textbf{Attention propagation} We further encourage coherency of attention by propagation (fusion). The idea is that a shift in a foreground patch likely corresponds to an equal shift in the attended background patch. For example, \(s^*_{x, y, x', y'}\) usually has a value close to \(s^*_{x+1, y, x'+1, y'}\). To model and encourage coherency of the attention maps, we do a left-right propagation followed by a top-down propagation with kernel size \(k\). Taking left-right propagation as an example, we get the new attention score \[\hat{s}_{x, y, x', y'} = \sum_{i \in \{-k, ..., k\}}s^*_{x+i, y, x'+i, y'}.\] The propagation is efficiently implemented as convolution with identity matrices as kernels. Attention propagation significantly improves inpainting results in testing and enriches gradients in training. \textbf{Memory efficiency} If a \(64 \times 64\) region is missing in a \(128 \times 128\) feature map, the number of convolutional filters extracted from the background is 12,288. This may cause memory overhead for GPUs. To overcome this issue, we introduce two options: 1) extracting background patches with strides to reduce the number of filters, and 2) downscaling the resolution of foreground inputs before convolution and upscaling the attention map after propagation. \subsection{Unified Inpainting Network} \label{subsec:unified} \begin{figure}[t] \centering \includegraphics[width=1.\linewidth]{fig/unified_attention_network.jpg} \caption{Based on the coarse result from the first encoder-decoder network, two parallel encoders are introduced and then merged into a single decoder to get the inpainting result. To visualize the attention map, color indicates the relative location of the most attended background patch for each pixel in the foreground. For example, white (the center of the color coding map) means the pixel attends to itself, pink to the bottom-left, and green to the top-right.} \label{fig:unified_attention} \end{figure} To integrate the attention module, we introduce two parallel encoders, as shown in Figure~\ref{fig:unified_attention}, based on Figure~\ref{fig:framework}. The bottom encoder specifically focuses on hallucinating contents with layer-by-layer (dilated) convolution, while the top one attends to background features of interest. Output features from the two encoders are aggregated and fed into a single decoder to obtain the final output. To interpret contextual attention, we visualize it as shown in Figure~\ref{fig:unified_attention}. We use color to indicate the relative location of the most attended background patch for each foreground pixel. For example, white (the center of the color coding map) means the pixel attends to itself, pink to the bottom-left, and green to the top-right. The offset value is scaled differently for different images to best visualize the most interesting range. For training, given a raw image \(\mathbf{x}\), we sample a binary image mask \(\mathbf{m}\) at a random location. The input image \(\mathbf{z}\) is corrupted from the raw image as \(\mathbf{z} = \mathbf{x} \odot \mathbf{m}\). The inpainting network \(G\) takes the concatenation of \(\mathbf{z}\) and \(\mathbf{m}\) as input, and outputs the predicted image \(\mathbf{x'} = G(\mathbf{z, m})\) with the same size as the input. Pasting the masked region of \(\mathbf{x'}\) onto the input image, we get the inpainting output \(\tilde{\mathbf{x}} = \mathbf{z} + \mathbf{x'} \odot \mathbf{(1-m)}\). Image values of input and output are linearly scaled to \([-1, 1]\) in all experiments.
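A short sketch of this training-pair construction and output compositing, using the mask convention above (0 inside the hole, 1 elsewhere; names are ours):
\begin{verbatim}
import torch

def make_training_pair(x, top, left, h, w):
    """Corrupt a raw image batch x (B, C, H, W) with a rectangular hole.

    Returns the network input (z concatenated with m), plus z and m.
    """
    m = torch.ones_like(x[:, :1])                 # single-channel binary mask
    m[:, :, top:top + h, left:left + w] = 0.0
    z = x * m                                     # z = x * m (elementwise)
    return torch.cat([z, m], dim=1), z, m

def composite(z, m, x_pred):
    """Paste the predicted hole region onto the known pixels:
    x_tilde = z + x_pred * (1 - m)."""
    return z + x_pred * (1.0 - m)

# Toy usage: values scaled to [-1, 1], a 128x128 hole in a 256x256 image.
x = torch.rand(2, 3, 256, 256) * 2.0 - 1.0
inp, z, m = make_training_pair(x, top=64, left=64, h=128, w=128)
x_pred = torch.tanh(torch.randn_like(x))          # stand-in for G(z, m)
x_tilde = composite(z, m, x_pred)
\end{verbatim}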
The training procedure is shown in Algorithm~\ref{algo:algo}. \input{tex_figures/algo.tex} \section{Experiments} \label{sec:expr} We evaluate the proposed inpainting model on five datasets: Places2~\cite{zhou2017places}, CelebA faces~\cite{liu2015faceattributes}, CelebA-HQ faces~\cite{karras2017progressive}, DTD textures~\cite{cimpoi2014describing} and ImageNet~\cite{russakovsky2015imagenet}. \textbf{Qualitative comparisons} First, we show in Figure~\ref{fig:expr_siggraph} that our baseline model generates inpainting results comparable to the previous state of the art~\cite{iizuka2017globally}, comparing our output with the result copied from their main paper. Note that no post-processing step is performed for our baseline model, while image blending is applied to the result of~\cite{iizuka2017globally}. Next we use the most challenging Places2 dataset to evaluate our full model with contextual attention by comparing it to our baseline two-stage model, which is extended from the previous state of the art~\cite{iizuka2017globally}. For training, we use images of resolution \(256 \times 256\) with a largest hole size of \(128 \times 128\), as described in Section~\ref{subsec:unified}. Both methods are based on fully convolutional neural networks and thus can fill in multiple holes on images of different resolutions. Visual comparisons on a variety of complex scenes from the validation set are shown in Figure~\ref{fig:expr_main_results}. All test images are of size \(512 \times 680\) for consistency of testing. All the results reported are direct outputs from the trained models, without any post-processing. For each example, we also visualize the latent attention map for our model in the last column (the color coding is explained in Section~\ref{subsec:unified}). \input{tex_figures/siggraph_comp.tex} \input{tex_figures/main_results.tex} As shown in the figure, our full model with contextual attention can leverage the surrounding textures and structures and consequently generates more realistic results with far fewer artifacts than the baseline model. Visualizations of the attention maps reveal that our method is aware of contextual image structures and can adaptively borrow information from surrounding areas to help the synthesis and generation. In Figure~\ref{fig:expr_main_others}, we also show some example results and attention maps of our full model trained on CelebA, DTD and ImageNet. Due to space limitations, we include more results for these datasets in the supplementary material. \input{tex_figures/main_others.tex} \textbf{Quantitative comparisons} Like other image generation tasks, image inpainting lacks good quantitative evaluation metrics. The inception score~\cite{salimans2016improved} introduced for evaluating GAN models is not a good metric for evaluating image inpainting methods, as inpainting mostly focuses on background filling (e.g.\ the object removal case), not on the ability to generate a variety of object classes. Evaluation metrics in terms of reconstruction errors are also not perfect, as there are many plausible solutions that differ from the original image content. Nevertheless, we report our evaluation in terms of mean \(\ell_1\) error, mean \(\ell_2\) error, peak signal-to-noise ratio (PSNR) and total variation (TV) loss on the Places2 validation set for reference in Table~\ref{tab:quantitative}.
As shown in the table, learning-based methods perform better in terms of \(\ell_1\) and \(\ell_2\) errors and PSNR, while methods that directly copy raw image patches have lower total variation loss. \input{tex_figures/quantitative_table.tex} Our full model has a total of \textbf{2.9M} parameters, roughly half that of the model proposed in~\cite{iizuka2017globally}. Models are implemented in TensorFlow v1.3 with cuDNN v6.0 and CUDA v8.0, and run on an Intel Xeon E5-2697 v3 (2.60 GHz) CPU and a GTX 1080 Ti GPU. Our full model runs at \textbf{0.2} seconds per frame on the GPU and \textbf{1.5} seconds per frame on the CPU for images of resolution \(\mathbf{512 \times 512}\) on average. \subsection{Ablation study} \input{tex_figures/pixel_flow.tex} \textbf{Contextual attention \vs spatial transformer network and appearance flow} We investigate the effectiveness of contextual attention compared to other spatial attention modules, including appearance flow~\cite{zhou2016view} and the spatial transformer network~\cite{jaderberg2015spatial}, for image inpainting. For appearance flow~\cite{zhou2016view}, we train on the same framework except that the contextual attention layer is replaced with a convolution layer that directly predicts 2-D pixel offsets as attention. As shown in Figure~\ref{fig:expr_pixel_flow}, for a very different test image pair, appearance flow returns very similar attention maps, meaning that the network may be stuck in a bad local minimum. To improve the results of appearance flow, we also investigated ideas of multiple attention aggregation and patch-based attention. None of these ideas worked well enough to improve the inpainting results. We also show the results with the spatial transformer network~\cite{jaderberg2015spatial} as attention in our framework in Figure~\ref{fig:expr_pixel_flow}. As shown in the figure, STN-based attention does not work well for inpainting, as its global affine transformation is too coarse. \input{tex_figures/gan_comp_figure.tex} \textbf{Choice of the GAN loss for image inpainting} Our inpainting framework benefits greatly from the WGAN-GP loss, as validated by its learning curves and faster, more stable convergence behavior. The same model trained with DCGAN sometimes collapses to limited modes for the inpainting task, as shown in Figure~\ref{fig:expr_gan_comp}. We also experimented with LSGAN~\cite{mao2016least}, and the results were worse. \textbf{Essential reconstruction loss} We also tested whether we could drop the \(\ell_1\) reconstruction loss and rely purely on the adversarial loss (i.e.\ improved WGANs) to generate good results. To this end, we trained our inpainting model without the \(\ell_1\) reconstruction loss in the refinement network. Our conclusion is that the pixel-wise reconstruction loss, although it tends to make the result blurry, is an essential ingredient for image inpainting: it is helpful in capturing content structures and serves as a powerful regularization term for training GANs. \textbf{Perceptual loss, style loss and total variation loss} We did not find that perceptual loss (reconstruction loss on VGG features), style loss (squared Frobenius norm of the Gram matrix computed on VGG features)~\cite{johnson2016perceptual} or total variation (TV) loss brought noticeable improvements for image inpainting in our framework, so they are not used.
\section{Introduction} Measurements of the cosmic microwave background (CMB) polarization in search of a gravitational wave signal from the epoch of inflation are limited by a foreground of galactic dust \citep{keck_array_and_bicep2_collaborations_improved_2016}. Improving constraints on $r$, the tensor-to-scalar ratio, hinges on imaging galactic dust in the 200-300 GHz atmospheric window, where the brightness of dust relative to the CMB is enhanced compared to 95 GHz or 150 GHz \citep{kamionkowski_quest_2016}. In this paper we explore the thermal kinetic inductance detector (TKID) as a path to fill a 200-300 GHz focal plane, inspired by detector developments for x-ray spectroscopy \citep{ulbricht_highly_2015,quaranta_mitigation_2014,miceli_towards_2014,cecil_optimization_2015,lindeman_resonator-bolometer_2014}. TKIDs are bolometers whose thermometer exploits the temperature dependence of the kinetic inductance effect. As in a direct-absorber kinetic inductance detector (KID) \citep{day_broadband_2003,mccarrick_horn-coupled_2014,dober_optical_2015}, the resonant frequency of an LC resonator shifts in response to the quasiparticle density in a superconducting inductor. However, in a TKID, rather than photons directly breaking pairs, photons are absorbed on a suspended island shared with the inductor, and quasiparticles are produced thermally. Like KIDs, TKIDs can be frequency multiplexed by assigning each detector a different resonant frequency and weakly coupling the resonators to a shared readout transmission line. \\\indent The potential advantage of TKIDs is engineering freedom. In a KID, the functions of electromagnetic absorption, conduction of optical power out of the detector and to the bath, and low-frequency readout are all performed by the kinetic inductor, which must be simultaneously optimized for all three. In a TKID, these functions can be separated into a load resistor, a silicon nitride membrane, and a superconducting inductor, which can be independently optimized, at the cost of a many-layer fabrication process. Fortunately, we can leverage the many similarities this process has to existing transition edge sensor (TES) bolometer fabrication \citep{ade_antenna-coupled_2015}. \section{Device design} We fabricated a test chip to study the suitability of TKIDs for ground-based observations. Images of the mask and fabricated device are shown in Fig.~\ref{fig:mask_all}. The test chip contains 5 TKIDs: one with an unreleased bolometer, and four with bolometer leg lengths of 100 µm, 200 µm, 300 µm, and 400 µm. The bolometer geometry is based on the design used for the Keck/BICEP program, which uses six parallel legs to mechanically suspend and thermally isolate a silicon nitride island \citep{kuo_antenna-coupled_2008}. To facilitate step coverage, the mechanical substrate of the bolometer island, low-stress silicon nitride, is one third the thickness of that used for the Keck/BICEP TES bolometers, so we fabricated multiple leg lengths to measure the thermal conductivity (G) as a function of leg length for the thinner silicon nitride. The temperature-sensing inductor on the island is a 16 mm long, 1 µm wide meandering trace of 50 nm thick aluminum. On the island, niobium traces contact the inductor leads and run off along the island legs to an interdigitated resonator capacitor. The silicon nitride layer under and around the capacitor is removed so that the resonator capacitor is deposited directly on the silicon wafer.
To vary the resonant frequencies from 260 to 315 MHz, the number of fingers in the resonator capacitors is varied. \\ \indent The resonators are weakly coupled to a readout transmission line through a second, small interdigitated capacitor on one side of the resonator, and through the parallel plate capacitance of the resonator to the ground of the chip holder to complete the circuit. The parallel plate capacitance to ground acts on both sides of the resonator, and on the coupling capacitor side acts to weaken the coupling. This was not accounted for in the design, so the devices are more weakly coupled than intended, with coupling $Q_c = 80{,}000$--$140{,}000$ rather than the intended 20,000. \\\indent Rather than integrating an antenna directly into this chip, power is deposited onto the bolometer island via a resistive heater and a DC current source. This calibrates the leg thermal conductivity and detector noise directly in units of watts. On the tested chip, two released bolometers (100 µm and 400 µm legs) and the unreleased bolometer were fully functional. Of the remaining two bolometers, one resonator was broken and one heater was left disconnected from the cryostat wiring. \\\indent The devices were fabricated on a high-resistivity silicon substrate $\approx$500 µm thick. A 40 nm silicon dioxide and 300 nm low-stress silicon nitride film are deposited on top of the wafer, which form the membrane layer. A 50 nm thick e-beam evaporated aluminum film for the inductor is deposited on top of this and patterned with lift-off. The gold resistor layer is 200 nm thick, e-beam evaporated, and patterned with lift-off. Next, the silicon nitride and silicon dioxide are etched to expose silicon and define the island and the region where the capacitor will be deposited. The niobium layer for the capacitor and wiring is 400 nm thick and patterned with lift-off. Finally, holes in the island are drilled by deep reactive ion etching and the island is released with xenon difluoride. \begin{figure}[!tbp] \centering \begin{tabular}{ccc} \includegraphics[width=0.3 \textwidth]{mask_all.png} & \includegraphics[width=0.3 \textwidth]{mask_zoom1.png} & \includegraphics[width=0.3 \textwidth]{boloisland.png} \end{tabular} \caption{Left, layout of the TKID chip mask. The readout line runs vertically down the center of the chip. Five active resonators on the right side of the chip have heater resistors that run to wirebond pads on the right edge of the chip. The two devices in the lower left of the chip are resistance and $T_c$ test structures for the aluminum inductor layer. Middle, a zoomed view of one resonator of the TKID chip mask, showing the resonator capacitor on the left and the island on the right. Right, photograph of one bolometer island, showing the meandered aluminum inductor on the left and the gold heater resistor on a niobium washer on the right. Color banding around the bolometer release holes suggests the oxide etch stop has been consumed and up to $\approx$100 nm of the silicon nitride etched by the xenon difluoride.} \label{fig:mask_all} \end{figure} \section{Device Characterization} The film properties of the aluminum are measured via a test structure on one side of the chip consisting of niobium leads running to an aluminum inductor identical to the inductor on the bolometer islands. The transition temperature of the aluminum film is 1.32 K and the sheet resistance is 0.23 $\Omega/\Box$, which implies a thin-film Mattis-Bardeen kinetic inductance of $0.24~\mathrm{pH}/\Box$ \citep{zmuidzinas_superconducting_2012}.
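As a cross-check of the quoted value, in the low-temperature, low-frequency Mattis-Bardeen limit the kinetic sheet inductance is $L_s = \hbar R_s / (\pi \Delta)$ with $\Delta \approx 1.76\,k_B T_c$; a quick numeric sketch (our naming):
\begin{verbatim}
import scipy.constants as const

R_s = 0.23    # measured sheet resistance, ohm/square
T_c = 1.32    # measured transition temperature, K

delta = 1.76 * const.k * T_c                    # BCS gap energy, J
L_s = const.hbar * R_s / (const.pi * delta)     # kinetic inductance per square
print(f"L_s = {L_s * 1e12:.2f} pH/square")      # -> 0.24 pH/square
\end{verbatim}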
The niobium to aluminum contact has zero resistance up to the critical current of $\approx$100 µA. \\\indent To measure the bolometer thermal conductance to the bath, the resonator is first calibrated by sweeping the bath temperature while measuring the resonant frequency with a small readout excitation of -110 dBm (0.01 pW). Then the bath is returned to base temperature and the island heater power is swept while measuring the resonant frequency to infer the island temperature. The measured conductances are 26 pW/K and 64 pW/K for the 400 µm and 100 µm detectors, respectively, at an island temperature of 0.38 K. An operating temperature of 0.38 K is a compromise between quality factor ($Q_{i,MB} \approx 10^4$) and noise equivalent power (NEP) in an ideal aluminum TKID. \\\indent To measure the responsivity as a function of frequency of the 400 µm leg bolometer, a function generator supplied a DC offset and a sine wave chirp with a logarithmic sweep in frequency to the island heater. The frequency of the resonator was monitored using the technique described in Section \ref{sec:noise}. The power spectrum of the response is shown in Fig.~\ref{fig:timeconstant}. The response is a good fit to a one-pole model with a 3 dB frequency of 28.6 Hz, corresponding to a heat capacity of 0.14 pJ/K. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=2.0in]{timeconstant.png} \end{tabular} \caption{Measured frequency response of the 400 µm leg bolometer. Blue shows the measured resonator frequency power spectrum response to a log-swept chirp heater excitation, normalized to the power spectrum of the log sweep. Orange shows the one-pole model fit.} \label{fig:timeconstant} \end{figure} \section{TLS} The power and temperature dependence of the resonator quality factors show substantial loss consistent with the behavior of two-level systems (TLS). We follow standard methods to model the resonator and the temperature and power dependence of TLS \citep{mccarrick_horn-coupled_2014,gao_experimental_2008,von_schickfus_saturation_1977}. A model for $S_{21}$ of a resonator is fit to extract the resonator quality factor $Q_r$ and the complex coupling quality factor $Q_e$. The internal resonator quality factor is estimated from the difference of $Q_r$ and the real part of $Q_e$. The internal quality factors are then fit to a sum of contributions from TLS (Eq. \ref{eq:tls}), Mattis-Bardeen quasiparticles from the measured $T_c$, and a constant $Q_i$ for loss due to other sources. The power dependence is captured by the terms $P_g$, the generator power, and $P_c$, the generator power above which the TLS are saturated. \begin{equation} Q_{TLS} = Q_{TLS_0} \frac{\sqrt{ 1 + \frac{P_g}{P_c} }}{\tanh \left( \frac{ h f}{2 k T} \right) } \label{eq:tls} \end{equation} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=2.0in]{powervQ.png} \includegraphics[width=2.0in]{tempvQ.png} \end{tabular} \caption{Power and temperature dependence of $Q_i$. Power dependence is measured at 0.15 K, while temperature dependence is measured at -110 dBm. Points are data and lines are fits. The legend shows the fitted parameters. The blue curve with twice the $Q_i$ of the other devices is the unreleased device.} \label{fig:ptvq} \end{figure} As shown in Fig.~\ref{fig:ptvq}, $Q_i$ is strongly limited by a TLS-consistent loss in which $Q_i$ increases with power and temperature. The limiting performance in the unreleased device may be due to side-wall coating during the niobium lift-off step, visible as flags of metal or resist along the lines of the capacitor.
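A sketch of the fitted loss model (variable names and example parameters are ours; in the actual fit the Mattis-Bardeen term follows from the measured $T_c$, whereas here it is passed in as a constant for simplicity):
\begin{verbatim}
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
KB = 1.380649e-23    # Boltzmann constant, J/K

def q_tls(P_g, T, f, Q_tls0, P_c):
    """TLS quality factor, Eq. (1): saturates with power, rises with T."""
    return Q_tls0 * np.sqrt(1.0 + P_g / P_c) / np.tanh(H * f / (2.0 * KB * T))

def q_internal(P_g, T, f, Q_tls0, P_c, Q_mb, Q_other):
    """Total internal Q: the three loss channels add as 1/Q terms."""
    return 1.0 / (1.0 / q_tls(P_g, T, f, Q_tls0, P_c)
                  + 1.0 / Q_mb + 1.0 / Q_other)

# Illustrative power sweep at 0.15 K for a 300 MHz resonator.
P = np.logspace(-17, -13, 5)   # generator power, W
print(q_internal(P, 0.15, 300e6, Q_tls0=2e4, P_c=1e-15, Q_mb=1e6, Q_other=2e5))
\end{verbatim}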
Subsequent unreleased devices fabricated using etch-back to pattern the niobium capacitor show low-temperature, low-power $Q_i > 60{,}000$. \\\indent The released devices show much lower $Q_i$. We are currently considering two possibilities. One is chemical damage, as the long distance ($\approx$100 µm) between release hole and island edge necessitated a long exposure to xenon difluoride (15 pulses, 45 seconds per pulse, 3 torr). The island can be re-engineered to reduce this distance to 15 µm. A second possibility is mechanical damage due to stress on the island during and after the release process. \section{Noise} \label{sec:noise} To characterize the NEP, the resonator frequency is tracked by continuous network analysis. Once per millisecond, a chirp generated by a software radio sweeps in frequency across a 1 MHz bandwidth centered on the resonant frequency of one detector, from high to low frequency. A 1 ms chirp is slow compared to the time constant of the resonator ($\approx$10 µs) but fast compared to the time constant of the island ($\approx$5 ms). The ratio of the Fourier transforms of the received signal and the input chirp is proportional to the transfer function $S_{21}$. The position of the resonance in each transfer function sweep is fit to produce a resonator frequency timestream sampled at 1 kHz. Fluctuations in the gain and phase of the readout are removed by fitting a baseline of the transfer function away from the resonance. The wide bandwidth of the sweep, compared to a single-tone readout, allows the resonator to be tracked across large temperature changes corresponding to frequency shifts of many resonance widths. We find this a practical technique for reading out and characterizing single resonators, but for chirp rates slow compared to the resonator ring-down time, a time-multiplexing penalty makes it impractical for large numbers of resonators. An example of the continuous network analysis from the chirp readout scheme is shown in the left panel of Fig.~\ref{fig:chirps}. To measure the NEP of the detector, the resonator frequency is monitored while calibrating resonator frequency into power by applying a sine wave of known frequency and power to the island heater. The resulting noise power spectrum is shown in the right panel of Fig.~\ref{fig:chirps}, and reaches $\approx$200 aW/$\sqrt{Hz}$. This is a few times the single-mode photon noise under the South Pole atmosphere, and an order of magnitude larger than the phonon noise expected for the 0.38 K island temperature. Despite the large contribution to the detector $Q_i$ from TLS loss, the excess does not appear to originate from TLS, as it is highly correlated between different resonators. We suspect environmental effects such as pulse tube vibrations, the magnetic environment \citep{flanigan_magnetic_2016} and the infrared environment \citep{barends_minimizing_2011} are responsible; a study of those effects is ongoing.
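One network analysis per chirp amounts to a ratio of Fourier transforms followed by locating the resonance dip. A bare-bones numeric sketch with simulated data (sample rate, resonator parameters and the Lorentzian toy model are assumptions for illustration, not the instrument's actual values):
\begin{verbatim}
import numpy as np

fs = 2e6                                   # complex baseband sample rate, Hz
n = int(1e-3 * fs)                         # one 1 ms chirp
t = np.arange(n) / fs
f0, f1 = 0.5e6, -0.5e6                     # 1 MHz span, swept high to low
chirp = np.exp(2j * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t**2))

# Toy "received" signal: the chirp filtered by a Lorentzian resonance dip.
freqs = np.fft.fftfreq(n, 1 / fs)
Qr, fr, fc = 2e4, -100e3, 300e6            # resonator Q, dip offset, carrier
s21_true = 1 - 0.5 / (1 + 2j * Qr * (freqs - fr) / fc)
received = np.fft.ifft(np.fft.fft(chirp) * s21_true)

# One network analysis: ratio of Fourier transforms, then locate the dip.
s21_est = np.fft.fft(received) / np.fft.fft(chirp)
f_res = freqs[np.argmin(np.abs(s21_est))]
print(f"estimated resonance offset: {f_res / 1e3:.0f} kHz")   # -> -100 kHz
\end{verbatim}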
\begin{figure}[!tbp] \centering \begin{tabular}{cc} \includegraphics[width=0.5 \textwidth]{waterfall_abs.png} & \includegraphics[width=0.5 \textwidth]{chirp_nep1.png} \end{tabular} \caption{Left, waterfall plot of network analyses from the chirp readout. Each horizontal slice is a single network analysis, separated from the one below it by one millisecond. The TKID's heater is excited by a 30 Hz sine wave with a DC offset of 1.6 pW and an RMS amplitude of 0.3 pW, sufficient to move the resonance by several resonance widths. This is an exaggeration of the motion used to perform the NEP calibration, and demonstrates the capability to track the resonator across large changes in power. Right, NEP of the 400 µm leg TKID measured at several heater powers. The amplitude spectral density is not corrected for the thermal time constant. The large spike at 30 Hz is the NEP calibration tone. The neighboring spike at 35 Hz is due to electrical pick-up from the pulse tube valve stepper motor. } \label{fig:chirps} \end{figure} \begin{acknowledgements} This work was supported by JPL's Research and Technology Development Fund for projects R.17.223.057 and R.17.223.058. \end{acknowledgements} \pagebreak \bibliographystyle{unsrtnat}
\section{Introduction} Historically, the discovery of cooler main sequence stars has led to tension between the observations and the synthetic spectral energy distributions (SEDs) generated by model atmospheres. Major advances are made with every discovery, resolving most discrepancies until the next, cooler, type is found. Plane-parallel, radiative-convective atmospheres in local thermodynamic and hydrostatic equilibrium did not reproduce observations of M dwarfs until more complete linelists of molecular transitions for hydrides and oxides were calculated \citep[e.g.][]{Allard_1995, Cushing_2003, Cushing_2005, Tennyson_2007}. Discovery of the very red L dwarfs led to the recognition of condensation and settling as important processes in cool atmospheres \citep{Tsuji_1996, Ruiz_1997, Burrows_1999, Lodders_1999, Ackerman_2001, Woitke_2003, Morley_2012, Morley_2014}. Infrared observations provided evidence of additional non-equilibrium processes, with more CO absorption at $\lambda \approx 4.5~\mu$m, and less NH$_3$ at $\lambda \approx 1.5~\mu$m and $\lambda \approx 11~\mu$m, than would be present in an atmosphere in chemical equilibrium \citep[e.g.][]{Saumon_2000, Saumon_2006, Leggett_2007}. Vertical transport of gas in the atmospheres of the solar system giant planets produces non-equilibrium chemical abundances \citep{Fegley_1985, Noll_1997}, and this became recognised as an intrinsic feature of cool stellar and substellar atmospheres also. In the last decade, cold substellar objects have been discovered which have even more in common with the giant planets. Substellar objects, or brown dwarfs, have insufficient mass for stable fusion and they cool with time \citep[e.g.][]{Dantona_1985, Burrows_1993, Baraffe_1998, Saumon_2008, Phillips_2020}. These objects form the extended low-mass tail of the stellar mass function \citep[e.g.][]{Kirkpatrick_2019, Kirkpatrick_2020}, and brown dwarfs with masses as low as 4 Jupiter masses have been found in young clusters and associations \citep{Best_2017, Esplin_2017, Luhman_2020, Lodieu_2021}. Older, free-floating, cold, very low-mass objects have also been found; the most extreme example is the few-Gyr-old WISE J085510.83$-$071442.5, hereafter J0855, which is a 260~K, $\sim$5 Jupiter-mass object 2~pc from the Sun \citep{Luhman_2014, Luhman_2016, Leggett_2017}. The properties of giant planets and brown dwarfs overlap significantly \citep{Showman_2013, Morley_2014, Line_2015, Showman_2019}, and the difference between their formation mechanisms is an active research area \citep{Schlaufman2018, Nielsen_2019, Wagner_2019, Bowler_2020}. The coldest objects have SEDs that are currently difficult to reproduce, and resolving this problem is important: characterization of the cold field brown dwarfs is vital both for understanding the terminus of the mass function and for optimizing studies of exoplanets. We tackle this problem here. All but one of the known brown dwarf systems with effective temperature ($T_{\rm eff}$) $<$ 500~K were discovered by the mid-infrared all-sky survey executed by the {\it Wide-field Infrared Survey Explorer} \citep[WISE,][]{Wright_2010}. The additional cold brown dwarf, a distant companion to the white dwarf WD 0806$-$661 \citep{Luhman_2011}, was discovered in mid-infrared images taken by the Infrared Array Camera \citep[IRAC,][]{Fazio_2004} on board the {\it Spitzer Space Telescope} \citep{Werner_2004}.
Some of these have been resolved into close similar-mass binary systems \citep[e.g.][]{Liu_2011, Liu_2012, Dupuy_2015}, while others appear super-luminous (compared to models) but have not been resolved in high spatial-resolution imaging \citep{Beichman_2013, Opitz_2016}. Synthetic SEDs show that half of the energy emitted by a brown dwarf with $T_{\rm eff} < 600$~K is captured by the {\it WISE} W2 filter centered at $\lambda \approx 4.6~\mu$m (or the similar {\it Spitzer} [4.5] filter). In contrast, very little flux emerges through the W1 filter bandpass (or the {\it Spitzer} [3.6] filter), which includes the strong 3.3~$\mu$m CH$_4$ absorption \citep[e.g.][]{Leggett_2017}. Hence cold brown dwarfs can be identified by very red W1 $-$ W2 (or [3.6] $-$ [4.5]) colors. Currently $\sim 50$ brown dwarfs with $T_{\rm eff} \lesssim 450$~K, classified as Y dwarfs, are known \citep{Cushing_2011, Luhman_2011, Kirkpatrick_2012, Tinney_2012, Kirkpatrick_2013, Cushing_2014, Luhman_2014, Pinfield_2014b, Schneider_2015, Martin_2018, Marocco_2019, Bardalez_2020, Meisner_2020a, Meisner_2020b}. Based on spectral analyses of an early subset of these objects, \citet{Leggett_2017} found that most are relatively young, lower-gravity, and lower-mass objects ($\sim 1$--3 Gyr old and $\sim 6$ Jupiter masses), but there were also a few older, higher-gravity, and higher-mass objects ($\sim 6$ Gyr old and $\sim 14$ Jupiter masses); a range of metallicity was also indicated. It is likely that the larger sample follows a similar distribution in age and metallicity, as these values are typical of the low-mass solar neighborhood \citep{Dupuy_2017, Buder_2019}. In 2020, two new cold brown dwarf model grids were made available. One of these is the Sonora-Bobcat grid \footnote{\url{https://zenodo.org/record/1405206\#.XqoiBVNKiH4}} of solar- and non-solar-metallicity atmospheres, with the atmospheres in chemical equilibrium \citep[][and submitted]{Marley_2017}. The other is the ATMO 2020 grid \footnote{\url{http://opendata.erc-atmo.eu}} of solar-metallicity models, both in chemical equilibrium and out of equilibrium with weak and strong mixing \citep{Phillips_2020}. Also in 2020, new candidate $T_{\rm eff} <$ 400~K brown dwarfs were announced \citep{Bardalez_2020, Kirkpatrick_2020, Meisner_2020a, Meisner_2020b}, and new ground-based spectroscopic observations at $\lambda \approx 4.8~\mu$m were published \citep{Miles_2020}. A study of the cold, planet-like brown dwarfs that includes the mid-infrared and uses state-of-the-art model atmospheres is now possible. Such a study is timely, given the scheduled 2021 launch of the {\it James Webb Space Telescope} ({\it JWST}), for which such objects will be prime targets. We present new infrared photometric measurements of cold brown dwarfs in Section 2. In Section 3 we compare the observed colors of late-T and Y-type brown dwarfs to the synthetic colors generated by the new atmospheric models. We show that, while the models can reproduce much of the SED, large discrepancies remain. In Section 4 we describe possible missing physics in the current models, which impacts the pressure-temperature adiabatic profile of the atmospheres. We test adiabat-adjusted model atmospheres in Section 5 by comparing synthetic spectra and photometry to observations of seven brown dwarfs at wavelengths of 1 -- 20~$\mu$m. We show that a much improved fit can be obtained, and in Section 6 we use a grid of the adiabat-adjusted models to explore the properties of a larger sample of Y dwarfs.
Our conclusions are given in Section 7. In the Appendix we illustrate trends with temperature for {\it JWST} colors, provide a grid of colors generated by the adiabat-adjusted models, and give a compilation of the photometry used in this work. \bigskip \section{New Photometry} \smallskip \subsection{Image Processing} The DRAGONS software package \citep{Labrie_2019} was used to reduce all the new imaging data obtained at Gemini Observatory for this work. DRAGONS documentation is available at \url{https://dragons.readthedocs.io/en/stable/}. For Gemini's infrared cameras, DRAGONS performs these initial steps: the non-linearity correction is applied; counts are converted from data numbers to electrons; bad pixel masks are applied; and the read and Poisson noise are added to the FITS extension that carries the variance information. Multiple dark observations are stacked to create a master dark. A master flat is created from multiple lamps-on and lamps-off observations; the flat is normalized and thresholded for out-of-range values. Science data are divided by the appropriate flat field for the filter and read mode. The sky contribution is determined for each pointing using the images taken at other positions in the dither pattern. The sky is then subtracted from each science image. Point sources are detected in each image, and these are used to align and stack the data set for each object. Each sky-subtracted image in the stack is numerically scaled based on the background signal, by factors typically $< 5$\%, to produce a final image. For images obtained with the adaptive optics multi-detector imager GSAOI at Gemini South \citep{McGregor_2004}, an add-on package called Disco-Stu determines the astrometric transformations to perform the stacking and create the final image. We used simple aperture photometry to measure magnitudes from processed images. The processed images either came from our new Gemini observations or from data archives, as we describe below. We used circular apertures with annular sky regions positioned to avoid nearby sources. The size of the target aperture was typically small, with diameters of 6 to 10 native pixels, in order to reduce noise and exclude potential nearby sources. We corrected for any loss of flux through the aperture by determining aperture corrections using bright isolated point sources in the science target image. Zeropoints for the processed images were determined from calibrators in the image or observed separately, or from the FITS header in the case of archival data. Extinction corrections were not applied to the ground-based data because the near-infrared extinction is small\footnote{\url{https://www.gemini.edu/observing/telescopes-and-sites/sites}} and the targets were observed at airmasses $\lesssim 1.7$. \smallskip \subsection{Gemini Observatory J-band Photometry of Candidate Cold Brown Dwarfs} To examine the nature of the candidate late-type brown dwarfs identified by \citet{Marocco_2019} and \citet{Meisner_2020a, Meisner_2020b}, we obtained $J$-band imaging at Gemini Observatory using the Near-InfraRed Imager (NIRI) at Gemini North \citep{Hodapp_2003} and FLAMINGOS-2 at Gemini South \citep{Eikenberry_2006}. Table 1 gives target names and Gemini program identifications; the targets were selected as those accessible at the time of the Observatory's Proposal Calls. \setlength\tabcolsep{2pt} \begin{deluxetable*}{cccccccccccccrc}[t!]
\tabletypesize{\scriptsize} \tablecaption{New Near-Infrared Photometry and Estimates of $T_{\rm eff}$} \tablewidth{0pt} \tablehead{ \colhead{{\it WISE} Name} & \colhead{Disc.} & \colhead{Spec.} & \colhead{Type} & \colhead{Gemini} & \colhead{Obs. Date} & \colhead{Instrument} & \colhead{On-Source} & \multicolumn{5}{c}{Photometry, MKO mag} & \multicolumn{2}{c}{$T_{\rm eff}$ (K)}\\ \colhead{RA/Dec J} & \colhead{Ref.} & \colhead{Type} & \colhead{Ref.} & \colhead{Program ID} & \colhead{yyyymmdd} & \colhead{Name} & \colhead{Exp., hr} & \colhead{$Y$} & \colhead{$J$} & \colhead{$H$} & \colhead{$K_s$\tablenotemark{a}} & \colhead{$K$} & \colhead{Est.} & \colhead{Ref.} } \startdata 021243.55 & Me20a & Y1 & 1 & GS-2019B-DD-107 & 20191211 & FLAMINGOS-2 & 1.38 & & 22.70 & & & & 390 & 2\\ $+$053147.2 & & & & & & & & & $\pm$ 0.09 & & & & & \\ 030237.53 & Ti18 & Y0: & Ti18 & & 2013 & VIRCAM\tablenotemark{b} & & & 20.67 & & & & 460 & 2\\ $-$581740.3 & & & & & & & & & $\pm$ 0.23 & & & & & \\ 032109.59 & Me20a & Y0.5 & Me20a & GN-2020B-Q-321 & 20200930 & NIRI & 0.58 & & 21.30 & & & & 415 & 2\\ $+$693204.5 & & & & & & & & & $\pm$ 0.06 & & & & & \\ 033605.05 & Ma13b & Y0 & Ma18 & GN-2020B-ENG-1 & 20201001 & NIRI & 0.57 & 21.02\tablenotemark{c} & 21.26 & 21.59 & 21.4 & & 445 & 2\\ $-$014350.4 & & & & & & & & $\pm$ 0.11 & $\pm$ 0.14 & $\pm$ 0.31 & $\pm$ 0.5 & & & \\ 040235.55 & Me20a & Y1 & Me20a & GS-2021A-FT-205 & 20210322 & FLAMINGOS-2 & 0.8 & & 24.0 & & & & 370 & 2 \\ $-$265145.4 & & & & & & & & & $\pm$ 0.5 & & & & & \\ 050305.68 & Me20b & Y1 & Me20b & GS-2021A-FT-205 & 20210303 & FLAMINGOS-2 & 2.11 & & 22.54 & & & & 345 & 2 \\ $-$564834.0 & & & & & & & & & $\pm$ 0.09 & & & & & \\ 050615.56 & Me20b & T8 & PGpc & GS-2013B-Q-16 & 20131224 & FLAMINGOS-2 & 0.16 & & 20.31 & 20.89 & & & 600 & 1\\ $-$514521.3 & & & & & & & & & $\pm$ 0.05 & $\pm$ 0.14 & & & & \\ 064723.23 & Ki13 & Y1 & Ki13 & GS-2019B-Q-220 & 20121210, & GSAOI & 1.30 & & & & 23.03 & & 405 & 2\\ $-$623235.5 & & & & & 12, 13, 14 & & & & & & $\pm$ 0.15 & & & \\ 085938.95 & Me20a & Y0 & Me20a & GN-2020B-Q-321 & 20201225 & NIRI & 0.13 & & 21.39 & & & & 450 & 2\\ $+$534908.7 & & & & & & & & & $\pm$ 0.15 & & & & & \\ 092503.2 & Ki20 & T8 & 1 & & 2017 & VIRCAM\tablenotemark{d} & & & 18.29 & & & & 700 & 1 \\ $-$472013.8 & & & & & & & & & $\pm$ 0.05 & & & & & \\ 093852.89 & Me20a & Y0 & Me20a & GS-CAL20210429 & 20210429 & FLAMINGOS-2 & 0.44 & & 21.08 & 21.49 & 21.11 & & 455 & 2 \\ $+$063440.6 & & & & & & & & & $\pm$ 0.10 & $\pm$ 0.21 & $\pm$ 0.23 & & & \\ 094005.50 & Me20a & $\ge$Y1 & Me20a & GN-2020A-FT-205 & 20200310 & NIRI & 0.88 & & 21.88 & & & & 410 & 2\\ $+$523359.2 & & & & & & & & & $\pm$ 0.11 & & & & & \\ 125721.01 & Me20b & Y1 & Me20b & GN-2021A-FT-206 & 20210409 & NIRI & 1.72 & & 23.35 & & & & 390 & 2\\ $+$715349.3 & & & & & & & & & $\pm$ 0.20 & & & & & \\ 144606.62 & Me20a & $\ge$Y1 & Me20a & GS-2020A-FT-204 & 20200305 & FLAMINGOS-2 & 4.32 & & 23.20 & & & & 350 & 2\\ $-$231717.8 & & & & & & & & & $\pm$ 0.14 & & & & & \\ 193054.55 & Me20b & $\ge$Y1 & Me20b & GN-2020B-Q-321 & 20201001 & NIRI & 1.78 & & 22.54 & & & & 365 & 2 \\ $-$205949.4 & & & & & & & & & $\pm$ 0.13 & & & & & \\ 193518.58 & Ma19 & $\ge$Y1 & Me20a & GN-2020B-Q-321 & 20200823, & NIRI & 1.70 & & 23.93 & & & & 365\tablenotemark{e} & 2\\ $-$154620.3 & & & & & 20200929, 30 & & & & $\pm$ 0.33 & & & & & \\ 193656.08 & Me20a & Y0 & Me20a & GN-2020B-Q-321 & 20201001 & NIRI & 0.03 & & 20.16 & & & & 450 & 2\\ $+$040801.2 & & & & & & & & & $\pm$ 0.12 & & & & & \\ 200520.38 & Ma13a & sdT8 & Ma13a & GN-2021A-FT-206 & 20210511, & NIRI & 1.4
& 19.99\tablenotemark{f} & 19.54 & 19.55 & & 21.00 & 600 & 1 \\ $+$542433.9 & & & & & 20210517 & & & $\pm$ 0.07 & $\pm$ 0.07 & $\pm$ 0.03 & & $\pm$ 0.09 & & \\ 223022.60 & Me20a & $\ge$Y1 & Me20a & GN-2020B-Q-321 & 20201001, & NIRI & 1.30 & & 22.99 & & & & 395 & 2\\ $+$254907.5 & & & & & 05 & & & & $\pm$ 0.20 & & & & & \\ 224319.56 & Me20b & Y0 & Me20b & & 2013 & VIRCAM\tablenotemark{b} & & 21.16 & 21.14 & & & & 450 & 2 \\ $-$145857.3 & & & & & & & & $\pm$ 0.34 & $\pm$ 0.26 & & & & & \\ 224916.17 & Me20a & T9.5 & Me20a & GN-2020B-Q-321 & 20200917 & NIRI & 0.28 & & 21.89 & & & & 460 & 2\\ $+$371551.4 & & & & & & & & & $\pm$ 0.10 & & & & & \\ \enddata \tablenotetext{a}{\citet{Leggett_2015} measure $K - K_s = 0.4 \pm 0.1$ for a T8 and a T9 dwarf using FLAMINGOS-2, implying $K = 21.8 \pm 0.5$ for J033605.05$-$014350.4, $K = 23.43 \pm 0.18$ for J064723.23$-$623235.5, and $K = 21.51 \pm 0.25$ for J093852.89$+$063440.6.} \vskip -0.1in \tablenotetext{b}{Measured here using VISTA VHS imaging data.} \vskip -0.1in \tablenotetext{c}{In the native NIRI system $Y = 21.19 \pm 0.10$ for J033605.05$-$014350.4; we adopted ${Y}_{\mathrm{NIRI}}-{Y}_{\mathrm{MKO}}=0.17\pm 0.03$ as determined by \citet{Liu_2012} for late-T and Y dwarfs.} \vskip -0.1in \tablenotetext{d}{Measured here using VVVX ESO Public Survey imaging data.} \vskip -0.1in \tablenotetext{e}{Assuming the system is an equal-mass binary, see Section 6.3.} \vskip -0.1in \tablenotetext{f}{In the native NIRI system $Y = 20.03 \pm 0.05$ for J200520.38$+$542433.9; we synthesized ${Y}_{\mathrm{NIRI}}-{Y}_{\mathrm{MKO}}$ for this object using the observed $Y$-band spectrum from \citet{Mace_2013b} and the filter profiles for NIRI\footnote{\url{https://www.gemini.edu/instrumentation/niri/components\#Filters}} and MKO\footnote{\url{http://svo2.cab.inta-csic.es/svo/theory/fps3/index.php?mode=browse&gname=UKIRT&gname2=UKIDSS&asttype=}}.} \vskip -0.05in \tablerefs{(1) this work, type ($\pm \approx 0.5$) based on the type-color, and $T_{\rm eff}$ ($\pm \approx 50$~K) based on the $T_{\rm eff}$-color, relationships of \citet{Kirkpatrick_2019, Kirkpatrick_2020}; (2) this work, $T_{\rm eff}$ ($\pm \approx 25$~K) based on the $T_{\rm eff}$-color relationships determined in Section 6.2, with $T_{\rm eff}$ values rounded to 5~K; Ki13 -- \citet{Kirkpatrick_2013}; Ki20 -- \citet{Kirkpatrick_2020}; Ma13a -- \citet{Mace_2013b}; Ma13b -- \citet{Mace_2013a}; Ma18 -- \citet{Martin_2018}; Ma19 -- \cite{Marocco_2019}; Me20a,b -- \citet{Meisner_2020a, Meisner_2020b}; PGpc -- Pinfield, P. and Gromadzki, M. private communication 2014; Ti18 -- \citet{Tinney_2018}. } \end{deluxetable*} The $J$ filter is defined by the Mauna Kea Observatories photometric system \citep{Tokunaga_2002}. The camera pixel scales are $0\farcs 12$ for NIRI and $0\farcs 18$ for FLAMINGOS-2. Telescope dithers of 12 -- 15\arcsec\ were used, in the form of a 5- or 9-point grid. All nights were photometric and the targets were observed at airmasses of 1.1 -- 1.7. The delivered full width half maximum (FWHM) of the point spread function (PSF) was $0\farcs4$ to $1\farcs0$. The magnitude zeropoint was determined from UKIDSS or VISTA sky survey photometry \citep{Lawrence_2007, McMahon_2013, Sutherland_2015, Dye_2018} of stars in the field of view; typically four to eight such stars were available.
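The calibration bookkeeping is simple; below is a schematic NumPy sketch (hypothetical names, uniform circular apertures, with aperture corrections and error propagation omitted) of tying a frame's zeropoint to survey stars, as described above:
\begin{verbatim}
import numpy as np

def aperture_counts(img, x0, y0, r):
    """Background-subtracted counts in a circular aperture of radius r
    (pixels), with the sky estimated as the median in a surrounding annulus."""
    yy, xx = np.indices(img.shape)
    rr = np.hypot(xx - x0, yy - y0)
    sky = np.median(img[(rr >= 2 * r) & (rr < 3 * r)])
    return np.sum(img[rr <= r] - sky)

def zeropoint(img, calibrators, r=4.0):
    """calibrators: (x, y, catalog_mag) tuples for survey stars in the frame."""
    zps = [m + 2.5 * np.log10(aperture_counts(img, x, y, r))
           for x, y, m in calibrators]
    return np.mean(zps), np.std(zps)

# Target magnitude, given the zeropoint:
#   zp, zp_scatter = zeropoint(img, survey_stars)
#   mag = zp - 2.5 * np.log10(aperture_counts(img, x_t, y_t, r=4.0))
\end{verbatim}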
In the case of J094005.50$+$523359.2, hereafter J0940, only two survey stars were available, and the zeropoint was determined by averaging the value implied by those stars with a measurement of a UKIRT Faint Standard \citep{Leggett_2006} executed immediately after the one-hour science observation at a similar airmass (1.1, cf.\ 1.2 for the science); the three zeropoint measurements agreed to 10\%. Sky noise for these images was typically 5 -- 10\%, and usually dominated the uncertainty. Table 1 gives the final values. One of the targets, CWISEP J021243.55$+$053147.2, hereafter J0212, was identified by \citet{Meisner_2020a} as having very red [3.6] $-$ [4.5] colors but no significant motion, and it was therefore not listed in their table of Y dwarf candidates. Subsequently, \citet{Kirkpatrick_2020} determined a low-significance motion of $\mu_{\alpha} = -59.8 \pm 45.0$ and $\mu_{\delta} = 57.0 \pm 27.4$~mas yr$^{-1}$, as well as a poor-quality parallax of $24.7 \pm 16.3$~mas; \citet{Kirkpatrick_2020} suggest that J0212 is a background source. However, the extremely red $J -$ [4.5] color that we measure for this object, with very little flux at [3.6], implies that J0212 is cold and molecule-rich. The study of {\it WISE} colors by \citet{Nikutta_2014} shows that AGN and infrared-luminous galaxies can be very red in W1 $-$ W2; however, such objects are also red in W2 $-$ W3. If J0212 fell into such a category it would be detected in W3, which it is not. A more plausible solution is that the object is a binary and the actual parallax value is close to the upper limit of the current measurement; we show below that the luminosity of J0212 is then consistent with the observed $J -$ [4.5] and [3.6] $-$ [4.5] colors. We therefore suggest that J0212 is a binary Y dwarf at a distance of $\sim$24~pc.
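The distance estimate follows directly from the parallax upper limit, and an unresolved equal-luminosity binary is brighter than a single object by $2.5\log_{10}2 \approx 0.75$~mag; a one-line check of the numbers quoted above:
\begin{verbatim}
import numpy as np

plx, plx_err = 24.7, 16.3               # parallax and error, mas
d_upper = 1e3 / (plx + plx_err)         # pc, at the 1-sigma parallax upper limit
print(f"distance ~ {d_upper:.1f} pc")   # -> ~24.4 pc
print(f"equal-luminosity binary is {2.5 * np.log10(2):.2f} mag brighter")
\end{verbatim}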
\smallskip \subsection{Other New Near-Infrared Photometry} As part of a project to measure photometric transformations between the UKIDSS and VISTA sky surveys, the NIRI, and the FLAMINGOS-2 systems, a field containing the Y dwarf WISE J033605.05$-$014350.4 was observed at Gemini North at $YJHK_sK$ (the brown dwarf was not detected at $K$), and a field containing the Y dwarf CWISEP J093852.89$+$063440.6 was observed at Gemini South at $JHK_s$. The data were reduced in the manner described in the previous section, and the results are given in Table 1. Table 1 also lists the $JH$ magnitudes for WISEA J050615.56$-$514521.3, which was listed by \citet{Meisner_2020b} as a very late T dwarf candidate. This object was also targeted in the deep {\it WISE} search by \citet{Pinfield_2014a}; the unpublished photometry and spectral type (from their spectroscopy) are provided courtesy of a private communication with that team. We measured $YJHK$ magnitudes for the late-T subdwarf WISE J200520.38$+$542433.9, also known as Wolf 1130C \citep{Mace_2013b}, in order to have a set of near-infrared colors for a known very metal-poor object with [m/H] $\approx -0.75$ \citep{Kessel_2019}. The data were obtained using NIRI at Gemini North and were reduced in the manner described in the previous section. The results are given in Table 1. CWISE J092503.20$-$472013.8 was listed by \citet{Kirkpatrick_2020} as a candidate Y0 dwarf based on its motion and W1 $-$ W2 color (3.93 $\pm$ 0.38). We used VVVX ESO Public Survey data\footnote{\url{https://www.eso.org/sci/publications/announcements/sciann17186.html}} to determine the $J$ magnitude given in Table 1. The brown dwarf was not detected in the $K_s$ survey data. The $J -$ W2 color of the target (2.99 $\pm$ 0.06) provides an improved spectral type estimate of T8, based on Figures 13 and 14 of \citet{Kirkpatrick_2020}. In addition, we searched the UKIDSS and VISTA surveys' imaging data for detections of Y dwarfs without near-infrared photometry. We determined magnitudes for two Y0 dwarfs from the VISTA Hemisphere Survey \citep[VHS,][]{McMahon_2013}: WISEA J030237.53$-$581740.3 ($J$), and WISEA J224319.56$-$145857.3 ($Y$, $J$). The results are given in Table 1. Finally, to explore the known discrepancy between observations and models at $\lambda \approx 2~\mu$m \citep[e.g.][]{Leggett_2019}, we obtained $K$-band images of the Y1 dwarf WISE J064723.23$-$623235.5 \citep{Kirkpatrick_2013}, hereafter J0647. This object was chosen in order to better measure the discrepancy for the coldest Y dwarfs, where little $K$-band imaging is available. Because of the faintness of the target, we used the adaptive optics imager GSAOI \citep{McGregor_2004} at Gemini South. Table 1 gives the program identification and the dates on which J0647 was observed. The imager has a pixel scale of $0\farcs 02$. The nights were photometric and the delivered FWHM was $\sim 0\farcs 1$. Sixty-six 90~s observations were made, of which the 52 with better seeing ($\leq 0\farcs 095$) were used in the final image. J0647 was observed at an airmass of $\sim 1.2$, and the telescope was dithered by random 1 -- 4\arcsec\ offsets. Aperture photometry was carried out with apertures of diameter $0\farcs 12$ and $0\farcs 20$, which gave consistent results after the application of the aperture corrections. The magnitude zeropoint was determined using stars from the VISTA Hemisphere Survey \citep{McMahon_2013} which were in the GSAOI field of view. Table 1 gives our derived $K_s$ for J0647. \smallskip \subsection{Mid-Infrared Photometry} Our goal is to reproduce the SED of the coldest brown dwarfs over all wavelengths where significant flux is emitted. It is therefore important to include the mid-infrared region; furthermore, knowledge of the mid-infrared is crucial for planning observations with {\it JWST}. \begin{figure} \plottwo{Fig1a.png}{Fig1b.png} \caption{Examples of {\it WISE} images where faint brown dwarfs were previously not included in the ALLWISE catalog (left) or where the photometry was compromised by background sources (right). In the latter case, the smaller aperture used here allowed the brown dwarf to be better isolated, resulting in a W4 magnitude fainter than the catalog value by $\sim 2$ magnitudes.
} \end{figure} \setlength\tabcolsep{3pt} \begin{deluxetable*}{lcccRRRRRR} \tabletypesize{\footnotesize} \tablecaption{Revised and New {\it WISE} and {\it Spitzer} Photometry} \tablehead{ \colhead{{\it WISE} Name} & \colhead{Disc.} & \colhead{Spec.} & \colhead{Type} & \multicolumn{2}{c}{ALLWISE Catalog} & \multicolumn{4}{c}{This Work} \\ \colhead{RA/Dec J} & \colhead{Ref.} & \colhead{Type} & \colhead{Ref.} & \colhead{W3} & \colhead{W4} & \colhead{W3} & \colhead{W4} & \colhead{[3.6]} & \colhead{[4.5]} } \startdata 001449.96$+$795116.1 & Ba20 & T8 & Ba20 & & & 13.69 \pm 0.40 & & & \\ 002810.59$+$521853.1 & Me20b & T7.5 & Me20b & & & 13.95 \pm 0.43 & & & \\ 013217.78$-$581825.9 & Me20b & T9 & Me20b & & & 14.10 \pm 0.41 & & & \\ 014603.23$-$261908.7 & Me20b & T7.5 & Me20b & & & 13.63 \pm 0.34 & & & \\ 081117.81$-$805141.3 & Ma13b & T9.5 & Ma13b & 12.64 \pm 0.32 & 9.21 \pm 0.38 & & 11.09 \pm 0.65 & & \\ 085510.83$-$071442.5\tablenotemark{a} & Lu14 & $>$Y4 & Ki19 & 11.14 \pm 0.13 & & 11.51 \pm 0.06 & 10.56 \pm 0.50 & & \\ 085757.95$+$570847.5 & Ge02 & L8 & Ge02 & 10.32 \pm 0.06 & 8.64 \pm 0.35 & & 10.48 \pm 0.50 & & \\ 093735.63$+$293127.2 & Bu02 & T6pec & Bu06 & 10.70 \pm 0.10 & & & 10.36 \pm 0.34 & & \\ 105349.41$-$460241.2 & Me20b & T8.5 & Me20b & & & 14.13 \pm 0.40 & & & \\ 112106.36$-$623221.5 & Ki20 & T7 & 1 & & & & & 16.47 \pm 0.10 & 15.13 \pm 0.04 \\ 125721.01$+$715349.3 & Me20b & Y1 & Me20b & & & 13.55 \pm 0.33 & & & \\ 182831.08$+$265037.8 & Cu11 & $>$Y2 & Ki12 & 12.44 \pm 0.34 & & & 10.65 \pm 0.52 & & \\ 193054.55$-$205949.4 & Me20b & Y1 & Me20b & & & 14.44 \pm 0.58 & & & \\ 214025.23$-$332707.4 & Me20b & T8.5 & Me20b & & & 13.32 \pm 0.32 & & & \\ 225404.16$-$265257.5 & Me20b & T9.5 & Me20b & & & 13.29 \pm 0.29 & & & \\ \enddata \tablenotetext{a}{\citet{Wright_2014} and \citet{Kirkpatrick_2019} demonstrate that the first epoch of {\it WISE} observations of J0855 is significantly contaminated at W1 by background sources. The W3 and W4 images date to the same epoch, and the background sources will therefore be at the same location as J0855. \citet{Wright_2014} measure W1 $= 16.12$ and W1 $-$ W2 $=0.67 \pm 0.17$ for these sources from images where J0855 has moved away (post-cryo). \citet{Nikutta_2014} analyse {\it WISE} colors for large samples of Galactic sources; their Figure 6 (panel 3) shows that the W1 $-$ W2 color is likely to be on the bluer side of the \citet{Wright_2014} measurement, and the most likely values of W2 $-$ W3 and W3 $-$ W4 are $\sim 0.8$ and $\sim 1.0$ respectively. Hence the background sources are expected to have W3 $\approx$ 15 and W4 $\approx$ 14, and so are not likely to significantly contaminate the J0855 W3 and W4 values in the Table. The successful model fits we show in Section 5.2 support this conclusion.} \vskip -0.05in \tablerefs{ 1 -- this work; Ba20 -- \citet{Bardalez_2020}; Bu02 -- \citet{Burgasser_2002}; Bu06 -- \citet{Burgasser_2006}; Cu11 -- \citet{Cushing_2011}; Ge02 -- \citet{Geballe_2002}; Ki12 -- \citet{Kirkpatrick_2012}; Ki19 -- \citet{Kirkpatrick_2019}; Ki20 -- \citet{Kirkpatrick_2020}; Lu14 -- \citet{Luhman_2014}; Ma13b -- \citet{Mace_2013a}; Me20b -- \citet{Meisner_2020b}. } \end{deluxetable*} {\it WISE} catalog photometry \footnote{\url{https://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-dd}} of faint targets can be compromised by nearby objects, and fainter objects are sometimes omitted altogether.
The sensitivity limits for a signal-to-noise ratio (SNR) $= 5$ are $\sim$11.5 and 8.0 magnitudes for the W3 ($\lambda \sim 14~\mu$m) and W4 ($\lambda \sim 22~\mu$m) filters, respectively\footnote{\url{https://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2_3a.html}}. We examined the W3 and W4 images provided by the {\it WISE} Image Service\footnote{\url{https://irsa.ipac.caltech.edu/applications/wise/}} for the colder brown dwarfs, and determined new or revised values for the photometry based on this visual inspection. We also looked for W3 and W4 data for warmer brown dwarfs to determine color trends. We identified sources where a point source could be resolved by eye, at the correct location for the epoch of the W3 or W4 observation, allowing for the proper motion of the source. Figure 1 gives examples of sky regions where we obtained new or revised {\it WISE} magnitudes. We carried out aperture photometry on the {\it WISE} images using apertures of 3- or 5-pixel radii (4 or 7") and annular skies. These apertures are smaller than the predefined fitting radius used by the ALLWISE profile-fitting photometry routine, $r_{fit}$: $r_{fit} = 1.25 \times FWHM$ where $FWHM$ is 6" for bands 1 -- 3 and 12" for band 4\footnote{\url{https://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4_4c.html\#wpro}}. The smaller aperture reduced the noise contribution from the background and improved exclusion of nearby sources. Aperture corrections were measured using isolated and brighter stars in the field. Zeropoints are taken from the {\it WISE} image header. Table 2 gives our new W3 and W4 measurements, as well as the ALLWISE Source Catalog values. The uncertainties in the new measurements are due to background noise and are large in most cases, with SNRs of only 2 or 3. Nevertheless, significant differences exist between our values and those reported in the catalog (Table 2). These long-wavelength colors are useful for comparing to colors generated by current model atmospheres, and for planning {\it JWST} observations. CWISE J112106.36$-$623221.5 was listed by \citet{Kirkpatrick_2020} as a candidate Y0 dwarf, based on its motion, W2 detection and W1 non-detection. {\it Spitzer} imaging data are available for the source via AORs r42735360 and r23699712 at the {\it Spitzer} Heritage Archive\footnote{\url{https://sha.ipac.caltech.edu/applications/Spitzer/SHA/}}. We carried out aperture photometry on these images using apertures of 3-pixel radii ($1\farcs8$) and annular skies. Aperture corrections were measured using isolated and brighter stars in the field, and the counts calibrated photometrically according to the {\it Spitzer} IRAC Manual\footnote{\url{https://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/14/\#_Toc59022361}}. Table 2 gives the [3.6] and [4.5] magnitudes for the source, which was not detected at longer wavelengths in the earlier cryogenic observation. The two measurements of the source, taken four years apart, agree to within 20\% at [3.6] and 2\% at [4.5]. The [3.6] $-$ [4.5] color of the target (1.34 $\pm$ 0.11) provides an improved spectral type estimate of T7, based on Figure 14 of \citet{Kirkpatrick_2020}. \bigskip \section{Observed and Modelled Colors of T and Y Dwarfs} \begin{figure} \plotone{Fig2.pdf} \vskip -0.15in \caption{Color-color diagrams for late T and Y dwarfs. Black dots are photometry from the literature; blue dots are new data presented here.
Olive green lines are chemical equilibrium Sonora-Bobcat models, and yellow lines are chemical equilibrium ATMO 2020 sequences for a mass of 0.015~$M_{\odot}$ ($\log~g \approx 4.5$). Dark red lines are chemical non-equilibrium ATMO 2020 sequences for masses of 0.015~$M_{\odot}$ ($\log~g \approx 4.5$) and 0.005~$M_{\odot}$ ($\log~g \approx 4.0$). Line types indicate gravity and metallicity as in the legend. Approximate $T_{\rm eff}$ values along the top axis are from the ATMO 2020 non-equilibrium chemistry models. Circled points indicate seven dwarfs that we analyse in detail in Section 5, which are identified by short name in the bottom panel. Four color outliers are also identified: ULAS J141623.94$+$134836.30 (``S1416B''), WISE J111838.70$+$312537.9 (``W1118''), WISEA J215018.25$-$752039.7B (``W2150B''), and Wolf 1130C. W1118 is a distant companion to a quadruple system composed of F and G stars \citep{Wright_2013}. S1416B and W2150B are companions to L dwarfs \citep{Burningham_2010a, Faherty_2020}. Wolf 1130C is a companion to an sdM and white dwarf binary \citep{Mace_2013b}. W1118, S1416B and Wolf 1130C are members of metal-poor systems with [m/H] $= -0.3$, [m/H] $\approx -0.3$ and [m/H] $\approx -0.75$, respectively \citep{Wright_2013, Gonzales_2020, Kessel_2019}. } \end{figure} \begin{figure} \plotone{Fig3.pdf} \vskip -0.25in \caption{Color-magnitude diagrams for late T and Y dwarfs. Symbols and lines are as in Figure 2. Approximate $T_{\rm eff}$ values along the top middle axis are from the ATMO 2020 non-equilibrium chemistry models. Over-luminous Y dwarfs, which are possibly unresolved binaries, are identified: CWISEP J021243.55$+$053147.2, WISE J053516.80$-$750024.9, WISEPA J182831.08$+$265037.8, and CWISEP J193518.59$-$154620.3. In the lower panel, the metal-poor T dwarfs S1416B and Wolf 1130C are identified (see also Figure 2). } \end{figure} Models of brown dwarf atmospheres are typically characterized by a set of physical and chemical parameters. The most fundamental is the total energy output, or luminosity ($L$), which is defined by the Stefan-Boltzmann law as $L = \sigma T_{\rm eff}^4 \times 4\pi R^2$, where $\sigma$ is the Stefan-Boltzmann constant, $R$ is the radius of the object, and $T_{\rm eff}$ the effective temperature. Another important parameter is the surface gravity $g$, which is defined as $g = GM/R^2$ where $M$ is the mass and $G$ is the gravitational constant. The chemical composition of the atmosphere is usually described as the abundance of metals relative to hydrogen [m/H], normalized to the solar value. In addition, some models include cloud formation via a sedimentation parameter and a fractional cloud cover \citep[e.g.][]{Morley_2014}. Some models also represent vertical transport of gas (which results in disequilibrium chemical abundances) as a diffusive process, via the vertical eddy diffusivity parameter $K_{zz}$ \citep[cm$^2$ s$^{-1}$, e.g.][]{Saumon_2006}. The models we use here are parameterized by: $T_{\rm eff}$, $g$, [m/H] and $K_{zz}$. They are cloud-free and we discuss the possible impact of clouds later in this paper. Figures 2 and 3 show color-color and color-magnitude diagrams for late-T and Y-type brown dwarfs. Observed colors are plotted, as well as sequences from the Sonora-Bobcat models \footnote{\url{https://zenodo.org/record/1405206\#.XqoiBVNKiH4}} \citep[][and submitted]{Marley_2017} and the ATMO 2020 models \footnote{\url{http://opendata.erc-atmo.eu}} \citep{Phillips_2020}.
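Both defining relations are simple to evaluate. As a concrete illustration, the short Python sketch below computes $L$ and $\log g$ for a notional Jupiter-radius, 400~K, 10~$M_{\rm Jup}$ object; the input values are illustrative round numbers typical of cold brown dwarfs, not fitted parameters from this work.
\begin{verbatim}
import math

# Physical constants (cgs)
SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
G_GRAV   = 6.674e-8   # gravitational constant, cm^3 g^-1 s^-2
R_JUP    = 7.149e9    # Jupiter radius, cm
M_JUP    = 1.898e30   # Jupiter mass, g
L_SUN    = 3.828e33   # solar luminosity, erg s^-1

def luminosity(t_eff, radius=R_JUP):
    """L = 4 pi R^2 sigma T_eff^4, in erg s^-1."""
    return 4.0 * math.pi * radius ** 2 * SIGMA_SB * t_eff ** 4

def log_g(mass, radius=R_JUP):
    """log10 of the surface gravity g = G M / R^2 (cgs)."""
    return math.log10(G_GRAV * mass / radius ** 2)

# A notional 400 K, 1 R_Jup, 10 M_Jup Y dwarf:
print(f"L ~ {luminosity(400.0) / L_SUN:.1e} L_sun")  # ~2.4e-07
print(f"log g ~ {log_g(10.0 * M_JUP):.2f}")          # ~4.39
\end{verbatim}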
Figure 2 shows various colors plotted against $J -$ [4.5], as a proxy for $T_{\rm eff}$. Note, however, that $J -$ [4.5] is also sensitive to gravity, metallicity, mixing, and clouds (e.g. Figure 3 bottom panel). The photometry is taken from this work (Tables 1 and 2) and the literature \citep[][see also the photometry compilation in the Appendix]{Leggett_2017, Kirkpatrick_2019, Marocco_2019, Bardalez_2020, Faherty_2020, Kirkpatrick_2020, Marocco_2020, Meisner_2020a, Meisner_2020b}. Figure 3 shows color-magnitude diagrams for late T and Y dwarfs with measured trigonometric parallaxes. Parallaxes are taken from \citet{Leggett_2017, Martin_2018, Kirkpatrick_2019, Bardalez_2020, Kirkpatrick_2020, Marocco_2020}. The absolute [4.5] magnitude is shown as a function of the near-infrared color $J - H$, the mid-infrared color [3.6] $-$ [4.5], and the long-baseline color $J -$ [4.5]. The absolute [4.5] magnitude can be used as a proxy for luminosity because $\sim$half of the total energy is captured by this filter for cold brown dwarfs. Luminosity in turn is strongly correlated with $T_{\rm eff}$ through the Stefan-Boltzmann law, because the radius of a brown dwarf does not change significantly after around 0.3~Gyr \citep[][see also Section 5.5]{Burrows_1997}. Note, however, that the [4.5] flux is also sensitive to gravity, metallicity, and mixing (e.g. Figure 3 bottom panel). The new photometric measurements presented here (Tables 1 and 2) are represented by blue points in Figures 2 and 3. The new data support and build on the empirical sequence in each panel of Figure 2; the $K$-band datapoint for J0647 nicely fills in a gap in the $J - K$ sequence at $J -$ [4.5] $\approx 8$, and the new $J$-band data improve the definition of the tight $J -$ [4.5]:[3.6] $-$ [4.5] observational sequence. For the 400 -- 600~K brown dwarfs, the $J - K$ and [4.5] $-$ W3 colors appear to have a large degree of intrinsic scatter; we discuss this further in Section 6.2. \begin{figure*}[b] \vskip -1.1in \hskip -0.2in \gridline{\fig{Fig4a.pdf}{0.53\textwidth}{} \hskip -0.2in \fig{Fig4b.pdf}{0.53\textwidth}{} } \vskip -0.4in \caption{ Mid-infrared color-color diagram for M, L, T and Y dwarfs. Model sequences are cloud-free, with solar metallicity and $\log g = 4.5$. Olive green lines are Sonora-Bobcat chemical equilibrium sequences for 250 $\leq T_{\rm eff}$~K $\leq$ 2400, and dark red lines are ATMO 2020 non-equilibrium chemistry sequences for 330 $\leq T_{\rm eff}$~K $\leq$ 1280. The Sonora-Bobcat chemical equilibrium models reproduce the mid-infrared colors of late-M to late-L-type dwarfs, and the ATMO 2020 non-equilibrium chemistry models reproduce the colors of late-L to late-T dwarfs. An empirical by-eye sequence is shown which combines the two, and uses the observations of the Y dwarfs to anchor the red end of the sequence. } \end{figure*} Figures 2 and 3 show that the most recent models at the time of writing, the ATMO 2020 and Sonora-Bobcat models, generate very similar colors for the same parameters. That is, the chemical equilibrium solar-metallicity cloud-free ATMO 2020 and Sonora-Bobcat model sequences (yellow and olive green solid lines in the figures) are very similar. The models which include vigorous mixing (dark red lines) do a better job of reproducing the observed $J -$ [4.5]:$J - H$ and $J -$ [4.5]:[4.5] $-$ W3 sequences in Figure 2, and the $J - H$:$M_{[4.5]}$ and [3.6] $-$ [4.5]:$M_{[4.5]}$ sequences in Figure 3.
This is because mixing in these cool atmospheres has the net result of decreasing the NH$_3$ abundance and increasing N$_2$, and increasing CO at the expense of CH$_4$ \citep[e.g.][]{Noll_1997, Saumon_2006, Saumon_2007, Visscher_2011, Zahnle_2014, Leggett_2015, Tremblin_2015, Phillips_2020}. The $H$- and W3-bands brighten when the NH$_3$ absorption decreases, and [4.5] becomes fainter due to increased CO. For a representative 400~K brown dwarf with $\log g = 4.5$, going from the ATMO 2020 models with no mixing to those with strong mixing ($\log K_{zz} = 6$) changes the magnitudes by $\delta H = -0.7$, $\delta$W3 $= -0.2$, and $\delta$[4.5] $= +0.3$. However, although the non-equilibrium chemistry models reproduce much of the data in Figures 2 and 3, Figure 2 shows that all models diverge from the observed $J - K$ and [3.6] $-$ [4.5] colors for $T_{\rm eff} \lesssim 600$~K. Discrepancies between observations and synthetic colors are also apparent in the $J -$ [4.5]:$M_{[4.5]}$ plot in Figure 3. Figure 4 shows observed mid-infrared colors for M, L, T and Y dwarfs, which can be used to estimate 5 -- 20~$\mu$m colors of cool dwarfs, for example for {\it JWST} observations. If used for this purpose, the reader should note that the uncertainties are large and exposure estimates should therefore be conservative. We include a by-eye empirical sequence which can be used for interpolation. It is important to note that {\it chemical equilibrium models will underestimate the [4.5] $-$ W3 and [4.5] $-$ W4 colors of T and Y dwarfs by $\sim 1$~magnitude.} \bigskip \section{Modifications to Brown Dwarf Model Atmospheres} Given the discrepancies between observations and models for brown dwarfs with $T_{\rm eff} < 600$~K (Figures 2 and 3), we explored modifications to the model structure. We used the ATMO 2020 models which include strong mixing as the starting point, as overall they reproduce the observations better than the chemical equilibrium models. Energy transport in a cool dwarf atmosphere is predominantly convective, with radiative cooling becoming important high in the atmosphere where the pressure is too low for convection to be efficient. Convection is treated as an adiabatic process where pressure $P$ and temperature $T$ are related by $P^{(1-\gamma)}T^{\gamma} = \mathrm{constant}$. For an ideal gas, $\gamma$ is the ratio of specific heats at constant pressure and volume and, for a gas composed entirely of molecular hydrogen, $\gamma = 1.4$. The reader is referred to \citet{Marley_2015} and \citet{Zhang_2020} for reviews of the important processes in model atmospheres. One-dimensional models, such as the ATMO and Sonora-Bobcat models, represent the atmosphere by a $P-T$ profile which maps the cooling from the core out to the surface, and by a chemical abundance profile which maps the chemical changes that occur through the atmosphere as $P$ and $T$ change. The $P-T$ profile can be thought of as a slice through the atmosphere, where both temperature and pressure decrease with increasing altitude. Of course, an actual brown dwarf atmosphere is more complex. These objects rotate rapidly with periods of a few hours, similar to the solar system giant planets \citep{Zapatero_2006, Cushing_2016, Esplin_2016, Leggett_2016b, Scholz_2018, Vos_2020, Tannock_2021}. They also have a radius approximately equal to Jupiter's \citep[e.g.][]{Burrows_1997}. The atmospheres are turbulent, and are likely to have planetary-like features such as zones, spots and planetary-scale waves \citep{Apai_2017, Showman_2019}.
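Before turning to these dynamical complications, the role of $\gamma$ can be made concrete: along an adiabat $P^{(1-\gamma)}T^{\gamma} = \mathrm{constant}$, the temperature scales as $T \propto P^{(\gamma-1)/\gamma}$, so a smaller $\gamma$ flattens the $P-T$ profile. The brief Python illustration below uses an anchor point of 1000~K at 50~bar, an illustrative round number rather than a model output, to compare the standard $\gamma = 1.4$ adiabat with a reduced $\gamma = 1.25$ one of the kind explored later in this paper.
\begin{verbatim}
def adiabat_temperature(p, p0, t0, gamma):
    """T(P) along the adiabat P^(1-gamma) T^gamma = constant,
    i.e. T = T0 * (P/P0)^((gamma - 1)/gamma)."""
    return t0 * (p / p0) ** ((gamma - 1.0) / gamma)

p0, t0 = 50.0, 1000.0          # illustrative deep anchor: bar, K
for p in (50.0, 10.0, 1.0, 0.1):
    t_std = adiabat_temperature(p, p0, t0, gamma=1.40)
    t_low = adiabat_temperature(p, p0, t0, gamma=1.25)
    print(f"P = {p:5.1f} bar:  T = {t_std:6.1f} K (gamma=1.40)"
          f"  vs  {t_low:6.1f} K (gamma=1.25)")
\end{verbatim}
For a fixed deep anchor the reduced-$\gamma$ profile is warmer aloft; equivalently, for a fixed upper atmosphere it is cooler at depth. This is the sense of the tuned profiles shown later in Figure 7.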
\citet{Showman_2013} simulate the dynamics of a brown dwarf atmosphere and demonstrate that for a rotation period of a few hours, large-scale, organized horizontal wind speeds of tens of m s$^{-1}$ are plausible, and coherent vertical circulation moves air parcels over a scale height ($\sim 7$~km) in $\sim 10^5$~seconds. These motions translate into a diffusion parameter $K_{zz}\sim 10^6$~cm$^2$ s$^{-1}$, typical of the values used in the ATMO 2020 non-equilibrium chemistry models \citep[][their Figure 1]{Phillips_2020}. The coefficient is higher in the atmospheres of Jupiter and Saturn where $K_{zz}\approx 10^8$~cm$^2$ s$^{-1}$ \citep{Wang_2016}. The $\lambda \sim 5~\mu$m spectrum of the very cold brown dwarf J0855 is also best fit with a high mixing coefficient of $K_{zz}\approx 10^{8.5}$~cm$^2$ s$^{-1}$ \citep[][and Section 5.2]{Miles_2020}. \citet{Augustson_2019}, and references therein, describe how convection in a rotating stellar or planetary atmosphere can change the chemical composition and thermodynamic properties of the gas and therefore impact the differential rotation, opacity, and thermodynamic gradients of the atmosphere. The model developed by \citet{Augustson_2019} connects the rotation rate and vertical diffusion coefficient to the velocity of the gas motion, the divergence from adiabaticity, and characteristic scale lengths. The damping effect of rotation can decrease the size of the convection zone, leading to sharper thermodynamic and chemical gradients than would otherwise be present. Furthermore, both superadiabatic and subadiabatic temperature gradients can exist in the atmosphere. The atmospheres of the solar system giant planets are not perfectly adiabatic \citep[e.g.][]{Guillot_1994, Guillot_2005, Vazan_2020} and various mechanisms can produce a non-adiabatic cooling curve in giant planet and brown dwarf atmospheres. These include compositional changes such as those due to condensation \citep[e.g.][]{Robinson_2012}, or the CO $\Leftrightarrow$ CH$_4$ changes at the L- to T-type spectral transition \citep{Tremblin_2015, Tremblin_2019}. The upper atmosphere can be heated by a cloud deck, or by breaking gravity waves \citep[e.g.][]{Schubert_2003, O'Donoghue_2016}. Further evidence in support of non-adiabatic $P-T$ profiles in brown dwarf atmospheres comes from retrieval analyses. \citet{Line_2015, Line_2017} and \citet{Zalesky_2019} reproduce near-infrared observations of T and Y dwarfs with non-adiabatic $P-T$ curves, and \citet{Piette_2020} show that a parametric $P-T$ profile can be used to determine accurate atmospheric parameters from a high precision spectrum of a T dwarf. In summary, there is significant evidence that the $P-T$ curve of a brown dwarf atmosphere does not, and should not be expected to, follow the standard adiabat. In this work we treat the adiabatic parameter $\gamma$ as a variable, along with $T_{\rm eff}$, $g$, [m/H] and $K_{zz}$, and generate a small number of models to compare to observations of a sample of cold brown dwarfs. In the ATMO 2020 models, the initial value of $\gamma$ is determined for each atmospheric layer using the equation of state tables from \citet{Saumon_1995}; for our tuned models we force $\gamma$ to be constant in the upper atmosphere. The tuning process is described in the next section. The models are cloud-free, and clouds are not expected to be significant in the photospheres of 400 -- 600~K brown dwarfs \citep{Morley_2012, Morley_2014}.
However, for the warmest atmospheres in our sample there may be chloride and sulfide clouds in deep regions that can contribute flux at wavelengths where the atmosphere is clear. For the coldest objects, water clouds may form high in the atmosphere, and these would impact the SED at wavelengths where the atmosphere is opaque. We discuss this further in Section 5.3. The ATMO 2020 models we use here have a fixed potassium abundance. \citet{Phillips_2020} show that different treatments of potassium broadening produce large variations in the shape of the blue wing of the $Y$-band flux peak in brown dwarfs. Those authors note that an order of magnitude reduction in the K abundance improves the agreement between models and observations, and suggest that current modelling of the potassium chemistry, including its condensation into KCl, is slightly incorrect. In this work we adopt a K abundance of 4$\times 10^{-9}$ for the late-type T dwarf we use as a proof-of-concept, UGPS J072227.51$-$054031.2 (hereafter J0722). For the cooler Y dwarfs, we adopt a K abundance of $1\times 10^{-9}$. We return to the issue of potassium and the $Y$-band in Section 6.1. The analysis presented here is a first step towards including processes currently missing in all brown dwarf models. We simplify the complex three-dimensional turbulent atmospheres by parameterizing the $P-T$ profile in a one-dimensional model. We show below that this simple approach significantly improves the agreement with observations. \bigskip \section{Tuning the Pressure-Temperature Profile} \medskip \subsection{Proof of Concept: the 500~K Brown Dwarf UGPS J072227.51-054031.2} \begin{figure} \plotone{Fig5.pdf} \vskip -0.35in \caption{The black line is the flux-calibrated spectrum of UGPS J072227.51$-$054031.2 \citep{Lucas_2010, Leggett_2012, Miles_2020}. The black circles are the observed {\it Spitzer} [4.5] and {\it WISE} W3 photometric datapoints, and the dashed line indicates the width of the W3 filter, which peaks at $\lambda \sim 14~\mu$m. The colored lines are ATMO 2020 models with parameters given in the legends. Significant absorbers are identified at the top of panel (b). Also in panel (b), the small blue and black datapoints demonstrate the good agreement between the observed and tuned-model photometry in the mid-infrared. Panel (c) demonstrates the improvement in fit provided by the tuned model (note the linear $y$-axis). Panel (d) compares the standard and tuned ATMO 2020 non-equilibrium models. Note that the model fluxes are not scaled to match the data; rather, they are scaled by the measured distance and by the brown dwarf radius calculated by ATMO 2020 evolutionary models. } \end{figure} We use observations of the bright late-type T dwarf J0722 for our initial test. This brown dwarf has $T_{\rm eff} \approx 500$~K and has extensive observational data, including spectra at $\lambda \sim 3.5~\mu$m and $\lambda \sim 4.8~\mu$m \citep{Lucas_2010, Leggett_2012, Miles_2020}. Table 3 lists previous determinations of the atmospheric parameters of J0722. \citet{Leggett_2012} compare the observed near-infrared and $\lambda \sim 3.5~\mu$m spectra of J0722, and mid-infrared photometry, to chemical non-equilibrium cloud-free \citet{Saumon_2012} models. Constrained by luminosity, they find a range in the [$T_{\rm eff},\log g$] parameters of [492,3.5] to [550,5.0]. The mid-infrared observations pushed the parameter selection to the lower temperatures and gravities, while the near-infrared was better fit by the higher temperature and gravity solution.
\citet{Filippazzo_2015} and \citet{Dupuy_2013} also use luminosity-based arguments to determine the parameters given in Table 3, while \citet{Miles_2020} use the Sonora-Bobcat model grid and near- and mid-infrared photometry to constrain $T_{\rm eff}$, evolutionary models to constrain $g$, and the $4.8~\mu$m spectrum to constrain $K_{zz}$. The top panel of Figure 5 shows SEDs generated by standard models with parameters typical of those found for J0722. As found in earlier analyses \citep[e.g.][]{Leggett_2012, Miles_2020}, the fit is quite good, especially for the non-equilibrium chemistry models. However, the calculated $YJ$ fluxes are higher than observed and the $2 \lesssim \lambda~\mu$m $\lesssim 4$ flux is lower than observed. The direction of these offsets is consistent with the systematic discrepancies seen in the colors of the cooler brown dwarfs in Figures 2 and 3: \begin{itemize} \item the modelled $J - H$ and $J -$ [4.5] are too blue because $J$ (in particular, see Figure 5) is too bright, \item the modelled $J - K$ is much too blue because $J$ is too bright {\it and} $K$ too faint, \item and the modelled [3.6] $-$ [4.5] is too red because [3.6] is too faint. \end{itemize} In other words, {\it the discrepancies demonstrated in the top panel of Figure 5, for standard models, apply to all brown dwarfs with $T_{\rm eff} < 600$~K}. Panel (b) of Figure 5 shows a model we have tuned to better fit the observations. The tuning is done manually, iterating over ages of 100~Myr, 1~Gyr, 5~Gyr, and 10~Gyr, and metallicities of $-0.5$, 0 and $+0.3$ dex. The steps involved are: \begin{itemize} \item assume {\it a priori} that $\log K_{zz} = 7$ \item select $\log g$ and radius based on evolutionary models for the selected age \item select $T_{\rm eff}$ to reproduce the observed flux at [4.5] \item decrease $\gamma$ to reduce the $YJHK$ flux \item let $\gamma$ increase to the standard value at a depth in the atmosphere defined by pressure $P_{\gamma -max}$, and deeper, to increase the $YJ$ flux as necessary \item adjust $\log K_{zz}$ if the other adjustments have changed the [4.5] flux \end{itemize} The fits are also constrained by ensuring that the scaling used to transform the model surface flux to that detected at Earth, which depends on the distance to the dwarf and its radius, is consistent with the evolutionary models (this scaling is illustrated below). Once a reasonable fit is obtained, judged by eye for this preliminary analysis, selection between any similar-quality fits is done by choosing the fit that best agrees with the observed [3.6] and W3 photometry. \begin{figure} \gridline{ \fig{Fig6a.pdf}{0.5\textwidth}{} \fig{Fig6b.pdf}{0.5\textwidth}{} } \vskip -0.6in \gridline{ \fig{Fig6c.pdf}{0.5\textwidth}{} \fig{Fig6d.pdf}{0.5\textwidth}{} } \vskip -0.6in \gridline{ \fig{Fig6e.pdf}{0.5\textwidth}{} \fig{Fig6f.pdf}{0.5\textwidth}{} } \vskip -0.4in \caption{Comparison of the effects of varying the model parameters in our tuning process, for a representative $T_{\rm eff}=400$~K model. Each variation of the six parameters is displayed as a plot pair, with the near-infrared region in the upper plot and the mid-infrared in the lower. The grey line shows the SED for the model with parameters as in the legend. Red and blue lines show the SEDs generated by increasing or decreasing the parameter, respectively. The parameter that is being varied is shown in the upper panel. The spectra are normalized to a value of 10 at $\lambda = 4.98~\mu$m (a local flux maximum).
} \end{figure} \begin{figure} \gridline{ \fig{Fig7a.png}{0.47\textwidth}{} \fig{Fig7b.png}{0.47\textwidth}{} } \vskip -0.4in \gridline{ \fig{Fig7c.png}{0.47\textwidth}{} \fig{Fig7d.png}{0.47\textwidth}{} } \vskip -0.4in \gridline{ \fig{Fig7e.png}{0.47\textwidth}{} \fig{Fig7f.png}{0.47\textwidth}{} } \vskip -0.4in \gridline{ \fig{Fig7g.png}{0.47\textwidth}{} \fig{Fig7h.png}{0.47\textwidth}{} } \vskip -0.3in \caption{ The top left panel shows condensation curves for elements in equilibrium (see text for discussion). The other panels are pressure-temperature profiles (left and upper axes) and flux contribution functions (thick blue line, left and lower axes), for the brown dwarfs in our tuning sample. Blue $P-T$ profiles are tuned to fit the data by reducing the adiabat $\gamma$ at $P < P _{\gamma, \rm max}$; red lines have a standard radiative-convective profile. } \end{figure} \setlength\tabcolsep{3pt} \begin{deluxetable*}{crcccclcccccccc}[t] \tabletypesize{\scriptsize} \tablecaption{Atmospheric Parameters for the $P$-$T$ Tuned Brown Dwarf Sample} \tablehead{ \\ & & \multicolumn{5}{c}{Previous Work} & \multicolumn{6}{c}{This Work} & \multicolumn{2}{c}{Evol. Model} \\ {Name} & {$V_{tan}$} &{$T_{\rm eff}$} & {$\log g$} & {[m/H]} & {$\log K_{zz}$ } & {Ref.} & {$T_{\rm eff}$} & {$\log g$} & {[m/H]} & {$\log K_{zz}$} & {$\gamma$} & $\Delta\gamma$(P,T) & {$\sim$Mass} & {$\sim$Age} \\ & {km~s$^{-1}$} &{K} & {cm~s$^{-2}$} & & {cm$^2$s$^{-1}$} & & {K} & {cm~s$^{-2}$} & & {cm$^2$s$^{-1}$} & & bar, K & {$M_{Jup}$} & {Gyr} } \startdata {\it WISEA} & 16.8 & 310 -- 340 & 3.75 -- 4.25 & $\gg 0$ & 6.0 & Le17 & 325 & 4.0 & +0.3 & 6.0 & 1.30 & (15,860) & 5 & 1.0 \\ J035000.31 & $\pm$ 0.3 & 300 -- 350 &$\sim 5.00$ & & & Sc15 & & & & & & & \\ $-$565830.5 & & 294 -- 341 & 3.92 -- 4.47 & & & Du13\tablenotemark{a} & & & & & & & \\ \tableline UGPS & 18.9 & 522 -- 558 & 3.70 -- 4.40 & & 4.4 & Mi20 & 540 & 4.50 & 0.0 & 7.0 & 1.27 & \nodata & 15 & 1.5 \\ J072227.51 & $\pm$ 0.2 & 524 -- 614 & 4.15 -- 5.21 & $\sim$0 & & Fi15 & & & & & & & \\ $-$054031.2 & & 493 -- 551 & 4.38 -- 4.92 & & & Du13\tablenotemark{a} & & & & & & & \\ & & 490 -- 520 & 3.50 -- 4.50 & $\sim$0 & 5.5 & Le12 & & & & & & & \\ \tableline {\it WISE} & 88.0 & 249 -- 260 & 3.50 -- 4.50 & & 8.5 & Mi20 & 260 & 4.00 & 0.0 & 8.7 & 1.33 & (50,870) & 5 & 3.0 \\ J085510.83& $\pm$ 0.6 & 240 -- 260 & 3.50 -- 4.30 & & 6.0 & Le17 & & & & & & & \\ $-$071442.5 & & $\sim 240$ & $\sim 4.00$ & & & Lu16 & & & & & & & \\ \tableline {\it WISEPA} & 25.7 & 396 -- 434 & 4.30 -- 4.90 & & 6.0 & Mi20 & 375 & 4.50 & +0.3 & 6.0 & 1.27 & (12,760) & 12 & 3.0 \\ J154151.66 & $\pm$ 0.4 & 302 -- 474 & 3.72 -- 4.24 & & & Za19 & & & & & & & \\ $-$225025.2 & & 360 -- 390 & 4.25 -- 4.75 & $>0$ & 6.0 & Le17 & & & & & & & \\ & & $\approx$400 & 4.00 -- 4.50 & & & Sc15 & & & & & & & \\ & & 335 -- 367 & 4.03 -- 4.54 & & & Du13\tablenotemark{a} & & & & & & & \\ \tableline {\it WISEPA} & 48.6 & 310 -- 340 & 3.75 -- 4.25 & $\ll 0$ & 6.0 & Le17 & 375 & 4.0 & -0.5 & 7.0 & 1.20 & (7,640) & 5 & 0.5 \\ J182831.08 & $\pm$ 1.1 & 421 -- 470 & 4.24 -- 4.78 & & & Du13\tablenotemark{a} & & & & & & & \\ $+$265037.8AB\tablenotemark{b}\tablenotemark{c} & & & & & & & & & & & & \\ \tableline {\it WISEPC} & 33.6 & 471 - 522 & 4.40 -- 5.00 & & 5.3 & Mi20 & 475 & 4.25 & 0.0 & 7.0 & 1.20 &(7.5,820) & 8 & 0.5 \\ J205628.90 & $\pm$0.5 & 447 -- 523 & 4.64 -- 5.18 & & & Za19 & & & & & & & \\ $+$145953.3 && 410 -- 440 & 4.25 -- 4.75 & $>0$ & 6.0 & Le17 & & & & & & & \\ & & 400 -- 450 & 4.00 -- 4.50 & & & Sc15 & & & & & & 
& \\ & & 414 -- 460 & 4.23 -- 4.76 & & & Du13\tablenotemark{a} & & & & & & & \\ \tableline {\it WISEA} & 53.2 & 310 -- 340 & 4.27 -- 4.75 & $>0$ & 6.0 & Le17 & 350 & 4.00 & 0.0 & 7.0 & 1.25 & (10,740) & 5 & 0.5 \\ J220905.75 &$\pm$0.8 & 500 -- 550 & 4.00 -- 4.50 & & & Sc15\tablenotemark{d} & & & & & & & \\ $+$271143.6 & & & & & & & & & & & & & \\ \enddata \smallskip \tablecomments{ Excluding any systematic errors, we estimate the uncertainties in our derived parameters to be $\pm$20~K in $T_{\rm eff}$, $\pm$0.25~dex in $\log g$, $\pm$0.3 dex in [m/H], $\pm$1 dex in $\log K_{zz}$, $\pm$0.1 in $\gamma$, and $\pm$10~bar in $P_{\gamma -max}$ (Figure 6). These uncertainties lead to an uncertainty in mass and age of a factor of $\sim$2 and $\sim$3, respectively (Section 5.5).} \vskip -0.05in \tablenotetext{a}{The \citet{Dupuy_2013} $T_{\rm eff}$ and $\log g$ values quoted in the Table use the bolometric luminosities given in that paper combined with the more recent measurements of parallaxes used here.} \vskip -0.1in \tablenotetext{b}{J1828 could not be fit by us as a single star. The parameters given here and the fits shown in Figure 8 assume it is an equal-mass binary system.} \vskip -0.1in \tablenotetext{c}{The \citet{Dupuy_2013} higher temperature for J1828 is based on the assumption that it is a single object.} \vskip -0.1in \tablenotetext{d}{A value as high as 500~K for $T_{\rm eff}$ is not plausible for J2209, as also pointed out by \citet{Martin_2018}. We suspect the noisy near-infrared spectrum skewed the model fit by \citet{Schneider_2015}.} \vskip -0.05in \tablerefs{Du13 -- \citet{Dupuy_2013}, Fi15 -- \citet{Filippazzo_2015}, Le12 -- \citet{Leggett_2012}, Le17 -- \citet{Leggett_2017}, Lu16 -- \citet{Luhman_2016}, Mi20 -- \citet{Miles_2020}, Sc15 -- \citet{Schneider_2015}, Za19 -- \citet[][constrained]{Zalesky_2019}. Tangential velocities are from \citet{Kirkpatrick_2019}. } \end{deluxetable*} \setlength\tabcolsep{6pt} Figure 6 illustrates the sensitivity of the synthetic 0.9 -- 20~$\mu$m spectrum to the parameters (for these models, longer wavelengths of 20 -- 30~$\mu$m do not show significant sensitivity). The shape of the SED is very sensitive to temperature, and also to metallicity. $\gamma$ impacts the slope from the near- to the mid-infrared, as well as the depth of the strong absorption bands. Gravity signatures are more subtle, and somewhat degenerate with metallicity. However, gravity is also constrained by the flux scaling to Earth, via the mass-radius relationship used by the evolutionary models. We discuss this further in Section 5.5. The SED generated by the tuned model provides a significantly improved fit to observations of J0722. The agreement with the near-infrared spectrum and the $4 \leq \lambda~\mu$m $\leq 5$ spectrum is now excellent, instead of being a factor of $\sim 3$ discrepant. Also, the discrepancy at the bottom of the strong 3.3~$\mu$m CH$_4$ band is reduced to a factor of $\sim 2$ from a factor of $\sim 10$. Apart from the reduced adiabat, the other atmospheric parameters --- $T_{\rm eff}$, $g$, [m/H] and $K_{zz}$ --- are consistent with previous determinations (Table 3). Panel (d) of Figure 5 compares the spectra generated for J0722 by the standard and tuned non-equilibrium chemistry ATMO 2020 models. The difference in the near-infrared region is clear, as is that in the 2 -- 4~$\mu$m region. {\it JWST} spectra at 5 -- 9~$\mu$m, impossible to obtain from the ground, will provide an additional check on this approach.
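The flux-scaling constraint mentioned above is also easy to make explicit. In the minimal Python example below, the 1~$R_{\rm Jup}$ radius and 4~pc distance are round illustrative values rather than fitted quantities; the point is why the absolute flux level usefully constrains the radius, and hence the gravity, for a nearby dwarf with a precise parallax:
\begin{verbatim}
import math

PC_CM = 3.0857e18   # parsec, cm
R_JUP = 7.149e9     # Jupiter radius, cm

def dilution(radius_rjup, distance_pc):
    """Geometric factor (R/d)^2 that scales the model surface
    flux to the flux received at Earth."""
    return (radius_rjup * R_JUP / (distance_pc * PC_CM)) ** 2

# At a fixed distance of 4 pc, a 10% radius change shifts the
# received flux by ~20%, i.e. ~0.2 mag:
ratio = dilution(1.1, 4.0) / dilution(1.0, 4.0)
print(f"{ratio:.2f}x  ->  {2.5 * math.log10(ratio):.2f} mag")
\end{verbatim}
A mismatch at this level between the scaled model and the observed flux therefore signals a radius-gravity combination that is inconsistent with the evolutionary models.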
Figure 7, top right panel, shows the standard and tuned $P-T$ diagram for J0722, as well as the contribution function --- the pressure or atmospheric layer from which flux at a certain wavelength arises. Standard curves for a $T_{\rm eff}$ value equal to that determined from the fit, and a temperature 100~K cooler, are shown; these demonstrate that the tuned model has an interior (where the $\lambda \sim 1~\mu$m flux originates) similar to the cooler standard model, and an upper atmosphere similar to the warmer standard model. The fact that the 3.3~$\mu$m feature is still somewhat deeper than observed suggests that the revised $P-T$ profile may not be warm enough where this flux originates, in the upper atmosphere at pressures $\sim$ 0.1~bar. Interestingly, the need for upper-atmosphere heating has also been identified in retrieval analyses of L dwarf atmospheres \citep{Burningham_2017}. We discuss this further in Section 6. \medskip \subsection{Tuned-Model Fits to 250 -- 500~K Brown Dwarfs} \begin{figure}[b] \plotone{Fig8.pdf} \vskip -0.35in \caption{Adiabat-tuned fits for three $T_{\rm eff}\sim 400$~K brown dwarfs, identified in the legends. Solid black lines are observed spectra \citep[][and Cushing et al. 2021 submitted]{Cushing_2011, Leggett_2013, Schneider_2015, Miles_2020}, and the black points are observed photometric data, with vertical error bars where these are larger than the symbol. Uncertainties in the observed J2056 spectra are negligible; in the J1541 and J1828(AB) spectra they are 10 -- 20\% in regions where there is significant flux. Dashed black lines indicate the passbands of the broad W3 and W4 filters. Blue lines are synthetic spectra generated by the tuned models with parameters given in the legends, and blue points are the synthetic photometry. } \end{figure} \begin{figure} \plotone{Fig9.pdf} \vskip -0.35in \caption{Adiabat-tuned fits for three $T_{\rm eff}\sim 300$~K brown dwarfs, identified in the legends. Solid black lines are observed spectra \citep{Schneider_2015, Leggett_2016a, Morley_2018, Miles_2020}, and the black points are observed photometric data, with vertical error bars where these are larger than the symbol. Uncertainties in the observed near-infrared spectra, in regions where there is significant flux, are 20 -- 50\% for J2209 and J0350. Uncertainties in the observed spectra for J0855 are 10 -- 30\% for the $L$-band and 5 -- 20\% for the $M$-band.} \end{figure} We extended the approach described above to colder brown dwarfs. The sample consists of three cold brown dwarfs for which \citet{Miles_2020} provide $\lambda \approx 4.8~\mu$m spectra, because this region is sensitive to mixing of CH$_4$ and CO (Figures 5 and 6): J0855, {\it WISEPA} J154151.66$-$225025.2 (hereafter J1541), and {\it WISEPC} J205628.90$+$145953.3 (hereafter J2056). We added three other brown dwarfs, all of which have W3 photometry available, with $J -$ [4.5] colors between those of J2056 and J1541 and that of the extreme dwarf J0855 --- {\it WISE} J035000.31$-$565830.5 (hereafter J0350), {\it WISEPA} J182831.08$+$265037.8 (hereafter J1828), and {\it WISE} J220905.73$+$271143.9 (hereafter J2209). Figures 2 and 3 identify the target objects in the color-color diagrams, and Table 3 lists the six objects, with atmospheric parameters determined here and previously. We could not fit the absolute flux level of J1828 as a single object, but we did find a satisfactory fit assuming it is an equal-mass binary.
We refer to J1828 from here on as J1828(AB) to clarify that the estimated properties assume binarity. Figures 8 and 9 show the SEDs of the six Y dwarfs in our sample --- observational data as well as the best by-eye tuned model spectrum --- in order of decreasing $T_{\rm eff}$. The W4 photometric point is included for J1828(AB) and J0855 in the Figures; it was not used when judging fit quality as the uncertainty is large (Table 2), but the observed and modelled photometry agree within the uncertainties. Note the increasing dominance of the mid-infrared region and the steady reddening of the [3.6] $-$ [4.5] color with decreasing temperature in Figures 8 and 9. Note also the pronounced difference between J1541 and J1828(AB) in Figure 8, although they have the same $T_{\rm eff}$ --- the lower metallicity and $\gamma$ of J1828(AB) suppress the $YJHK$ flux and broaden the $Y$ band peak. These changes in the SEDs are also demonstrated in Figure 6. As $T_{\rm eff}$ drops to 260~K there is a loss of flux at $\lambda \sim 1~\mu$m. The fits shown in Figures 8 and 9 are generally very good across the entire SED. The height and width of the near-infrared flux peaks are well reproduced, with the exception of J1828(AB), where the $H$-band peak is a factor of $\sim$2 too bright, and the $Y$-band peak for J0350, where the model is a factor of three too faint. The $Y$-band discrepancy suggests that a large amount of flux at $\lambda \lesssim 1.0~\mu$m is missing from the models, as the red wing of the flux peak is well matched. Both these systems are challenging --- J1828(AB) is a very metal-poor likely-multiple system, and J0350 is a cold metal-rich brown dwarf. The model flux at $\lambda \approx 3.3~\mu$m is low, as also seen for J0722 in Figure 5 (although the agreement is improved by a factor of $\sim 5$ compared to standard-adiabat models). This leads to [3.6] magnitudes that are a few tenths to a magnitude fainter than observed. The spectrum of J0855 in Figure 9 suggests that the loss occurs only at the blue end of the $3.13 \lesssim \lambda ~\mu$m $\lesssim 3.92$ [3.6] filter bandpass. The mid-infrared fluxes are otherwise well matched. The coldest object, J0855, is very well matched --- the observed and synthetic photometry generated by the tuned model agree within the measurement uncertainties at all passbands apart from [3.6], and the 3.5 -- 4.1~$\mu$m and 4.5 -- 5.1~$\mu$m spectra are well reproduced (see also Section 5.4). The agreement between these tuned non-equilibrium chemistry models and observations is better than has been possible in the past. In previous efforts to fit the mid-infrared spectroscopy and photometry of J0855, \citet{Morley_2018} found that models with lower CH$_4$ abundances could adequately fit the data, including models with sub-solar metallicity and C/O ratios (see the low-metallicity sequence in the [3.6] $-$ [4.5] panel in Figure 2). Low-metallicity models that adequately match the mid-infrared photometry are too bright at near-infrared wavelengths, but a deep continuum opacity source (e.g. clouds) could readily decrease the near-infrared flux to match the observed photometry. Those authors found that upper-atmosphere heating could not be invoked to fit the observed properties, but did not explore changes to the deep adiabatic structure. In other recent work, the model comparisons to J0722, J2056, J1541 and J0855 by \citet[their Figure 3]{Miles_2020} show large discrepancies (factors of 2 -- 3) at most wavelengths.
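For reference, the over-luminosity implied by the equal-mass binary hypothesis adopted for J1828(AB) is easily quantified: an unresolved pair of identical components doubles the received flux, so the blended photometry is brighter than that of either component by
\begin{equation}
\Delta m = 2.5\log_{10}2 \approx 0.75~{\rm mag}.
\end{equation}
An offset of this sense is what places J1828 among the over-luminous, possibly unresolved, binaries flagged in Figure 3.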
Table 3 gives our derived model parameters and compares these to previously determined values. Excluding any systematic errors, we estimate the uncertainties in our derived atmospheric parameters, based on the full fit to the SED, to be $\pm$20~K in $T_{\rm eff}$, $\pm$0.25~dex in $\log g$, $\pm$0.3 dex in [m/H], $\pm$1 dex in $\log K_{zz}$, $\pm$0.1 in $\gamma$, and $\pm$10~bar in $P_{\gamma -max}$. This is based on the sensitivity of the SED to the parameters (Figure 6); gravity is constrained by both the SED and the mass-radius relationship of the ATMO 2020 evolutionary models \citep{Phillips_2020}. The absolute uncertainty in the parameter estimates is difficult to assess but is unlikely to be more than twice these values, given the agreement between the estimates for individual objects in Table 3, which were arrived at using different models and different methods. Furthermore, the ATMO 2020 evolutionary models have been tested against a small sample of brown dwarfs with dynamically determined masses, and the ages derived are appropriate for the solar neighborhood \citep{Dupuy_2017, Buder_2019}. The evolutionary models also produce cooling curves very similar to earlier models, while using a more recent equation of state for H--He mixtures \citep{Chabrier_2019}. The atmospheric parameters determined for J1541 and J2056 by \citet{Zalesky_2019} in Table 3 are of particular relevance, as those authors use a retrieval method to adjust the atmosphere properties in order to reproduce observations, somewhat similar to (but more complex than) our $P-T$ tuning technique \citep[see][]{Line_2015}. \citet{Zalesky_2019} constrain their fits using {\it HST} near-infrared spectra while we use longer-baseline observations, which allows us to probe the higher and cooler regions of the atmosphere (compare our Figure 7 to Zalesky et al. Figure 2). The shape of the profile we determine for J2056 is similar to that found by \citet{Zalesky_2019}, with the atmosphere cooler at deeper layers and warmer in the upper layers compared to the grid models. However, the differences between the tuned and standard temperatures are larger at deeper layers in our models; for example, at 100~bar we find $\delta T \approx 500$~K compared to 100~K for \citet{Zalesky_2019}. For J1541, the deviation of the shape of the profile from the standard model is larger in the Zalesky et al. analysis than in our analysis. \citet{Zalesky_2019} find that both the upper and lower regions of the atmosphere are warmer by $\sim$500~K, while we find that the deeper layers are cooler with only small differences from standard in the upper regions. Nevertheless, both analyses indicate that the $P-T$ profile deviates from the standard form, typically with cooler regions in the deeper layers of late-T and Y dwarf atmospheres, from which the near-infrared radiation emerges. The parameters determined by \citet{Miles_2020} are also of interest, as the 4.8~$\mu$m spectra presented by those authors provide a constraint on $K_{zz}$. Both this work and Miles et al. find a very high $K_{zz}$ for the extremely cold J0855. We are in agreement for J1541; however, Miles et al. find a lower value than ours for J0722 and J2056 (Table 3). We suggest that our estimates are more robust as they are based on broader wavelength coverage.
Our tuning sample of six Y dwarfs has a relatively small range in the photospheric adiabatic parameter $\gamma$ (typically 1.2 -- 1.3), and in the diffusion coefficient $\log K_{zz}$ (typically 6 -- 7), but some variation in these values for a larger sample would not be surprising. The global properties of a brown dwarf atmosphere are likely to vary with inclination to the line of sight. For example, models of turbulent convection in rapidly rotating atmospheres, including those of the solar system gas giants, calculate that $K_{zz}$ is latitude-dependent, decreasing from the equator to the poles \citep[e.g.][]{Flasar_1978, Visscher_2010, Wang_2016}. Measurements of variability are also likely to be inclination-dependent \citep{Vos_2017}. \medskip \subsection{Clouds, Chemical Changes and the Disruption of Convection in Y Dwarfs} Figure 7 shows the standard and modified $P-T$ profiles for the six Y dwarfs in our sample. Also shown is the contribution function, which indicates the pressure layer from which the near- to mid-infrared flux emerges in the tuned model. From the coldest to the warmest object, the $1~\mu$m light emerges from regions where $P \sim$ 10 -- 100~bar and temperatures are 900 -- 1500~K, while the $10~\mu$m light emerges from regions where $P \sim 1$~bar and temperatures are 250 -- 500~K. Where the atmosphere is more opaque, such as at $\lambda \sim 3$, 6 or 8~$\mu$m, the light emerges from high and cold regions where $P \sim 0.1$~bar and $T \sim$ 150 -- 350~K. \begin{figure*}[t] \gridline{\fig{Fig10a.pdf}{1.0\textwidth}{\vskip -0.4in (a)}} \vskip -0.4in \gridline{\fig{Fig10b.png}{0.5\textwidth}{\vskip -0.1in (b)} \fig{Fig10c.png}{0.5\textwidth}{\vskip -0.1in (c)} } \caption{The upper panel (a) shows observations (black line) and the tuned-adiabat model spectra (blue line) from Figures 8 and 9, for $3.5 \leq \lambda~\mu$m $\leq 5.5$. The lower panels (b) and (c) show ATMO 2020 opacity calculations for these wavelengths, for two representative values of $T_{\rm eff}$.} \end{figure*} The condensation curves in the top left panel of Figure 7 suggest that water clouds would be expected in the upper layers of the atmosphere of J0855, and possibly in the very upper atmosphere of J0350 and J2209 \citep[see also][their Figure 6]{Morley_2014}. These could produce the heating in the upper atmosphere needed to increase the model flux at $\lambda \approx 3.3~\mu$m, although this could also be accomplished by breaking gravity waves as is likely in the solar system giant planet atmospheres above the 1-bar pressure surface \citep[e.g.][]{Schubert_2003, O'Donoghue_2016}. The condensation curves also indicate that KCl and Na$_2$S clouds would be important in the regions where the near-infrared flux originates, for our sample, i.e. $P \sim$ 10~bar and $T \sim$ 1000~K \citep[see also][their Figure 4]{Morley_2012}. The 10~bar/1000~K level also corresponds to where nitrogen moves into the NH$_3$ form from N$_2$, in equilibrium conditions (Figure 7). It is interesting to note that our fits indicate that the $P-T$ curve reverts to the standard adiabat at pressures around 10~bar for the 325 -- 475~K Y dwarfs in our sample, and 50~bar for the 260~K J0855; all at temperatures of 750 -- 870~K (the very metal-poor J1828(AB) system appears to transition at a slightly cooler 640~K). This may indicate that convection in Y dwarf atmospheres is disrupted once the atmosphere cools to $\sim$800~K, by the change in nitrogen chemistry and/or the condensation of chlorides and sulfides.
We find that for the warmer T9 dwarf J0722 any increase in $\gamma$ occurs at higher pressures which are not sampled by the emergent SED, suggesting different physics is at play for T dwarfs. \medskip \subsection{$5~\mu$m Spectra of Brown Dwarfs and the Detection of Phosphine} Phosphine is a non-equilibrium species that is seen in the $5~\mu$m spectra of Saturn and Jupiter; it can be used to study both atmospheric dynamics and the effect of photochemistry on planetary atmospheres \citep[e.g.][]{Fletcher_2009}. PH$_3$ is not detected in ground-based spectra of J0855 and other cold brown dwarfs, although it is expected to be abundant \citep{Skemer_2016, Morley_2018, Miles_2020}. Because of the potential diagnostic value of the species, we explore what the new tuned models indicate for its detectability. Figure 10(a) shows 3.5 -- 5.5~$\mu$m spectra of the four brown dwarfs in our tuning sample with such data. We also show the derived adiabat-tuned fit for each object, which reproduces the observations well. Figures 10(b) and 10(c) show the opacity contributions from various species at these wavelengths, for representative temperatures. These opacity contributions are taken from the ATMO 2020 models with vertical mixing, which only consider the non-equilibrium abundances of the major carbon- and nitrogen-bearing species, and thus do not take into account the mixing of $\rm{PH_3}$ \citep{Phillips_2020}. The spectral regions that can be observed from the ground, the $L$ and $M$ bands, are dominated by CH$_4$ and CO absorption bands, respectively, for the 400~K and warmer brown dwarfs. For the 260~K J0855, H$_2$O becomes the dominant opacity source in the $M$-band. Although \citet[][their Figure 19]{Morley_2018} find that the red edge of the $L$-band and the blue edge of the $M$-band in J0855 should show PH$_3$ absorption, at the enhanced abundance brought about by mixing, these are difficult wavelengths to work at from ground-based observatories. We calculate that there is a strong feature due to PH$_3$ at 4.30~$\mu$m in the spectra of cold brown dwarfs, even when assuming $\rm{PH_3}$ is in chemical equilibrium. Hence {\it JWST} observations should finally confirm the presence of PH$_3$ in brown dwarf atmospheres. \medskip \subsection{Estimating Masses and Ages for the Six Y Dwarfs} \begin{figure} \vskip -1.5in \includegraphics[width = 7.5 in]{Fig11.pdf} \vskip -0.3in \caption{Evolutionary curves from ATMO 2020 models (\url{http://opendata.erc-atmo.eu}). Solid black lines are iso-mass sequences for objects with mass shown along the right axis. Evolution proceeds from left to right. Dashed lines are isochrones for the ages indicated, and dotted brown lines are lines of constant radii, for the values indicated. The locations of the six Y dwarfs in our tuning sample are shown by short name. } \end{figure} Figure 11 shows the evolution of cold brown dwarfs in a $T_{\rm eff}$:gravity diagram. The luminosity, or absolute brightness, of a brown dwarf, as measured at the Earth, is determined by $T_{\rm eff}$, radius and distance. The uncertainty in distance is very small for these nearby objects, and the SED is very sensitive to temperature (Figure 6), with the net result that the absolute flux level constrains radius to $\sim 10\%$ (Figures 8 and 9). Figure 11 shows that $\log g$ can then be constrained to $\pm 0.3$~dex, mass can be constrained to a factor of $\sim$2, and age to a factor of $\sim$3, for a notional 400~K brown dwarf.
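The mass estimate itself follows from inverting the surface gravity relation of Section 3. The short Python example below (cgs units; the round inputs are representative of Table 3 rather than new fits) makes the quoted factor-of-$\sim$2 explicit:
\begin{verbatim}
G_GRAV = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
R_JUP  = 7.149e9     # Jupiter radius, cm
M_JUP  = 1.898e30    # Jupiter mass, g

def mass_mjup(log_g_cgs, radius_rjup=1.0):
    """Invert g = G M / R^2 for the mass, in Jupiter masses."""
    return 10.0 ** log_g_cgs * (radius_rjup * R_JUP) ** 2 / G_GRAV / M_JUP

# For R ~ 1 R_Jup: log g = 4.0 gives ~4 M_Jup and log g = 4.5 gives
# ~13 M_Jup, so a +/-0.3 dex gravity uncertainty at fixed radius maps
# onto a factor of 10^0.3 ~ 2 in mass.
for logg in (4.0, 4.25, 4.5):
    print(f"log g = {logg:4.2f} -> M ~ {mass_mjup(logg):4.1f} M_Jup")
\end{verbatim}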
Table 3 gives the atmospheric and evolutionary parameters we derived here from the $T_{\rm eff}$ and gravity of each tuned-adiabat model fit. For our tuning sample of six Y dwarfs, the evolutionary models give ages of between approximately 0.5 and 3~Gyr (Table 3, see also Figure 11). These values agree, within the uncertainties, with what would be expected for a local sample --- 1 -- 3~Gyr \citep{Dupuy_2017, Buder_2019}. Weak support for relative youth is provided by the tangential velocities, which suggest thin disk membership \citep{Dupuy_2012} and so an age younger than 8~Gyr \citep{Kilic_2017}. The estimated masses for the six Y dwarfs in this cold sample are very low --- between 5 and 12 Jupiter masses. \bigskip \section{Application to the Larger Y Dwarf Sample} \subsection{Color Trends} \begin{figure} \centering \includegraphics[width=6in]{Fig12.pdf} \vskip -0.2in \caption{Color-color diagrams for late T and Y dwarfs. Symbols and lines are as in Figure 2, with the addition of modified-adiabat model sequences shown in blue. The model has $K_{zz} = 10^7$~cm$^2$s$^{-1}$ and $\gamma = 1.25$ at pressures of 15~bar and lower. Values of $T_{\rm eff}$ from this model are shown along the top axis. For the frequently used [3.6] $-$ [4.5] color diagnostic, the model deviates from observations for the coldest objects, and semi-empirical values of $T_{\rm eff}$ are shown in grey along the right axis (see Section 6.2). } \end{figure} \begin{figure} \plotone{Fig13.pdf} \vskip -0.3in \caption{Color-magnitude diagrams for late T and Y dwarfs. Sequences are as in Figure 12. Grey near-diagonal lines in the bottom panel indicate constant $T_{\rm eff}$, as labelled, for metallicities ranging from approximately $+0.3$ on the left, to $-0.5$ on the right. The location of the metal-poor envelope edge in the bottom panel is consistent with the low-metallicity Sonora-Bobcat models (Figure 3), and with the observed population. Our SED analysis of J1828 indicates that it is an equal-mass binary with [m/H] $\sim -0.5$ (Figure 8). J0212, J0535, and J1935 are also likely to be similar-mass binary systems. Notionally single Y dwarfs which are estimated to have $T_{\rm eff} \lesssim$ 400~K are identified in the legend by the first four digits of the {\it WISE} catalog Right Ascension, or their binary name in the case of the white dwarf companion. } \end{figure} To check how the modified-adiabat non-equilibrium chemistry models perform for a larger sample, Figures 12 and 13 repeat the color-color and color-magnitude diagrams of Figures 2 and 3, but this time they include a model sequence generated by a small grid of the $P-T$-modified models. For this grid we adopt $K_{zz} = 10^7$~cm$^2$s$^{-1}$, $\gamma = 1.25$ and $P_{\gamma -max} = 15$~bar. Colors are calculated for two gravities, $\log g = 4.0$ and 4.5, and two metallicities, [m/H] $=$ 0.0 and $+0.3$. A sequence generated by the standard non-equilibrium chemistry model is also shown for comparison. The top panel of Figure 12, $J -$ [4.5]:$Y - J$, shows that there is a systematic issue in the $Y$-band for the 325 -- 450~K brown dwarfs, as the models are fainter at $Y$ than observed, by a few tenths to one magnitude. The spectral fits in Figures 8 and 9 suggest that the problem is too little flux in the models at the blue wing of the $Y$-band, suggesting in turn that a more rigorous approach to the treatment of the strong $0.8~\mu$m K~I line is called for (see Section 4).
The models are likely to have issues with two important chemical changes at 325 -- 450~K, exploration of which is beyond the scope of this paper: collisions with H$_2$ affect the shape of the wings of the alkali resonance lines \citep{Allard_2016}, and neutral K gas transitions to KCl gas and then to KCl solid \citep[e.g.][]{Lodders_1999}. The fits to the other colors and magnitudes in Figures 12 and 13 are good to excellent. The agreement between the models and observations at $J - K$ and [3.6] $-$ [4.5] is greatly improved. The previous $\gtrsim 1$~magnitude discrepancy for these colors is now $\approx 0$ for $J - K$ and reduced to a few tenths of a magnitude for [3.6] $-$ [4.5]. In the color-magnitude diagram, Figure 13, the previous $\approx$ 0.4 mag discrepancy in $J - H$ is resolved, as is the $\approx$ 1.0 mag discrepancy in $J -$ [4.5]. \medskip \subsection{$T_{\rm eff}$ and Metallicity Estimates for Y Dwarfs} Figures 12 and 13 show that, as well as temperature, the metallicity and gravity of the atmosphere can impact the colors of a brown dwarf. We find that temperature and metallicity have the largest impact, as also indicated by the synthetic spectra shown in Figure 6. Figure 14 is a plot of $T_{\rm eff}$ against [3.6] $-$ [4.5], $J -$ [4.5], and $M_{[4.5]}$, which are currently the most commonly available photometric measurements for Y dwarfs. The relationships in Figure 14 are determined from the modified-adiabat model grid, which spans $250 \leq T_{\rm eff}$~K $\leq 500$. These models have $K_{zz} = 10^7$~cm$^2$s$^{-1}$, $\gamma = 1.25$ and $P_{\gamma -max} = 15$~bar. The relationship for the [3.6] $-$ [4.5] color includes an empirical correction based on observations of the Y dwarfs for which we do a full SED fit in Section 5.2. The Figure suggests that the $J -$ [4.5] color is particularly sensitive to metallicity. Table 4 gives polynomial fits to the solar metallicity relationships shown in Figure 14. We estimate the uncertainty in a color-derived $T_{\rm eff}$ to be $\pm 25$~K, based on the scatter seen when determining $T_{\rm eff}$ from different colors, and comparing the SED-determined $T_{\rm eff}$ to the color value. \begin{figure}[b] \vskip -1.0in \plotone{Fig14.pdf} \vskip -0.3in \caption{Synthetic colors from the modified-adiabat model grid, see text. Table 4 gives polynomial fits to the solar metallicity relationships shown in the Figure. } \end{figure} \begin{deluxetable*}{lccc}[t] \tabletypesize{\normalsize} \tablecaption{Polynomial Relationships\\for Estimating $T_{\rm eff}$ from Color} \tablehead{ \colhead{Color} & \colhead{$a_0$} & \colhead{$a_1$} & \colhead{$a_2$} } \startdata [3.6] $-$ [4.5]$^a$ & 850 & $-$166.7 & \\ $J -$ [4.5] & 816 & $-$81.64 & 2.9572 \\ $M_{[4.5]}$ & 5331 & $-$544.5 & 14.4990 \\ \enddata \tablecomments{$T_{\rm eff}$ is estimated using: ~ $T_{\rm eff} = a_0 + a_1\times{\rm Color} + a_2\times{\rm Color}^2$\\ Relationships are valid for $250 \leq T_{\rm eff}$~K $\leq 500$. Excluding any systematic errors, the uncertainty in $T_{\rm eff}$ is $\pm 25$~K. Solar metallicity is assumed; metal-rich objects will be cooler, and metal-poor objects warmer, for a given $J -$ [4.5] (see Table 5 and Figure 14).\\ $^a$ Semi-empirical.
} \end{deluxetable*} \begin{deluxetable*}{cccc}[b] \tabletypesize{\normalsize} \tablecaption{Estimate of Color Sensitivity to Metallicity and Gravity for $T_{\rm eff} = 400$~K} \tablehead{ \colhead{Color} & \multicolumn{2}{c}{ $\delta$ mag} & \colhead{Important} \\ \colhead{} & \colhead{$\delta\log g = +0.5$} & \colhead{$\delta$[m/H] $= +0.3$} & \colhead{Chemistry} } \startdata $\delta(J - H)$ & $-0.1$ & $-0.3$ & H$_2$ at $J$ \\ $\delta(J - K)$ & $-0.7$ & $+0.4$ & (stronger) H$_2$ at $K$ \\ $\delta(J -$ [4.5]) & $+0.1$ & $-1.1$ & H$_2$ at $J$, CO at [4.5]\\ $\delta$([3.6] $-$ [4.5]) & $-0.2$ & $+0.4$ & CH$_4$ at [3.6], CO at [4.5] \\ $\delta$([4.5] $-$ W3) & $-0.2$ & $+0.6$ & CO at [4.5], H$_2$ at W3 \\ \enddata \tablecomments{Generated by a $P - T$ modified-adiabat model with $K_{zz} = 10^7$~cm$^2$s$^{-1}$, $\gamma = 1.25$ and $P_{\gamma -max} = 15$~bar, except for the [3.6] $-$ [4.5] color which is empirical. See also Figure 5 for opacity identification and Figure 6 for SED sensitivity to gravity and metallicity.} \end{deluxetable*} \begin{figure}[t] \plotone{Fig15.pdf} \vskip -0.1in \caption{Color-color plots for estimating $T_{\rm eff}$ and metallicity for Y dwarfs. Blue lines are isotherms with metallicity ranging from approximately $+0.3$ to $-0.5$ from left to right. Green triangles, identified in the upper panel with the first four digits of the object's RA, correspond to candidate Y dwarfs with lower limits only on $J$, or no constraint on $J$ in the case of J2351. In the lower panel, the possible equal-mass binaries J0212, J0535, J1828, and J1935 are identified. Of the seven $J$-limit objects, J2351 is not in the lower panel as there is no parallax available. } \end{figure} Table 5 summarizes the dependencies of various colors on metallicity and gravity for a representative 400~K brown dwarf. All color changes are calculated by a modified-adiabat model, except for [3.6] $-$ [4.5] which is estimated from the two 375~K brown dwarfs analysed in Section 5, which differ in metallicity and gravity. By referencing the SED parameter dependence shown in Figure 6, and the opacity identifications shown in Figure 5, we find that there are two opacities which drive the pressure (gravity) and metallicity sensitivity in the models: the CO absorption at $\lambda \approx 4.6~\mu$m (Figure 6), and collision-induced H$_2$ opacity \citep[e.g.][]{Allard_1995, Burgasser_2002,Knapp_2004, Saumon_2012}. The H$_2$ opacity at these low temperatures has two broad peaks with similar absorption coefficients, at $\lambda \approx 2.2~\mu$m ($K$) and $\lambda \approx 11.1~\mu$m (W3); there is a weaker absorption peak at $\lambda \approx 1.2~\mu$m \citep[$YJ$, ][their Figure 1]{Saumon_2012}. 
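The $T_{\rm eff}$ estimates collected in Table 6 below rest on color relationships like those of Table 4, which are straightforward to apply in practice. The short Python example below encodes the Table 4 coefficients; the input colors are arbitrary illustrative values, not measurements of any particular object.
\begin{verbatim}
# Coefficients (a0, a1, a2) from Table 4: T = a0 + a1*c + a2*c^2,
# valid for 250 <= T_eff K <= 500, solar metallicity assumed.
TEFF_POLY = {
    "[3.6]-[4.5]": (850.0, -166.7, 0.0),      # semi-empirical
    "J-[4.5]":     (816.0, -81.64, 2.9572),
    "M_[4.5]":     (5331.0, -544.5, 14.4990),
}

def teff_from_color(name, c):
    a0, a1, a2 = TEFF_POLY[name]
    return a0 + a1 * c + a2 * c * c

# Arbitrary illustrative inputs (uncertainty ~ +/-25 K):
print(teff_from_color("[3.6]-[4.5]", 2.5))    # ~433 K
print(teff_from_color("J-[4.5]", 8.0))        # ~352 K
print(teff_from_color("M_[4.5]", 15.0))       # ~426 K
\end{verbatim}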
\setlength\tabcolsep{3pt} \begin{deluxetable*}{lcccrrclcccrr}[t] \tabletypesize{\scriptsize} \tablecaption{Estimates of $T_{\rm eff}$ and Metallicity for Candidate and Confirmed Y Dwarfs} \tablehead{ \colhead{{\it WISE} Name} & \colhead{Disc.} & \colhead{Spec.} & \colhead{Type} & \colhead{$T_{\rm eff}$ K} & \colhead{[m/H]} & & \colhead{{\it WISE} Name} & \colhead{Disc.} & \colhead{Spec.} & \colhead{Type} & \colhead{$T_{\rm eff}$ K} & \colhead{[m/H]} \\ & \colhead{Ref.} & \colhead{Type} & \colhead{Ref.} & & & & & \colhead{Ref.} & \colhead{Type} & \colhead{Ref.} & & } \startdata \\ 014656.66$+$423410.0B\tablenotemark{a} & Ki12 & Y0 & Du15 & 435 & $\sim 0$ & & 120604.38$+$840110.6 & Sc15 & Y0 & Sc15 & 475 & $\sim$0 \\ 021243.55$+$053147.2(AB)\tablenotemark{b}\tablenotemark{c} & Me20a & Y1 & 1 & 400 & & & 121756.91$+$162640.2B & Ki11 & Y0 & Le14 & 460 & $\sim$0 \\ 023842.60$-$133210.7 & Me20a & Y1 & Me20a & 400 & & & 125721.01$+$715349.3 & Me20b & Y1 & Me20b & 390 & $\sim$0 \\ 030237.53$-$581740.3 & Ti18 & Y0: & Ti18 & 460 & $> 0$ & & 135937.65$-$435226.9\tablenotemark{c} & Me20a & Y0 & Me20a & 455 & \\ 030449.03$-$270508.3 & Pi14b & Y0pec & Pi14a & 465 & $\sim$0 & & 140518.40$+$553421.4 & Cu11 & Y0.5 & Cu16 & 400 & $\sim$0 \\ 032109.59$+$693204.5 & Me20a & Y0.5 & Me20a & 415 & $> 0$ & & 144606.62$-$231717.8 & Me20a &Y1 & Me20a & 350 & $> 0$ \\ 033605.05$-$014350.4 & Ma13b & Y0 & Ma18 & 445 & $<$ 0 & & 154151.66$-$225025.2\tablenotemark{d} & Cu11 & Y1 & Sc15 & 375 & $+0.3$ \\ 035000.32$-$565830.2\tablenotemark{d} & Ki12 & Y1 & Ki12 & 325 & $+0.3$ & & 163940.86$-$684744.6 & Ti12 & Y0pec & Sc15 & 405 & $\sim$0 \\ 035934.06$-$540154.6 & Ki12 & Y0 & Ki12 & 475 & $< 0$ & & 173835.53$+$273258.9 & Cu11 & Y0 & Cu11 & 450 & $> 0$ \\ 040235.55$-$265145.4 & Me20a & Y1 & Me20a & 370 & $< 0$ & & 182831.08$+$265037.8(AB)\tablenotemark{d}\tablenotemark{e} & Cu11 & $\ge$Y2 & Ki12 & 375 & $-0.5$ \\ 041022.71$+$150248.5 & Cu11 & Y0 & Cu11 & 435 & $> 0$ & & 193054.55$-$205949.4 & Me20b & Y1 & Me20b & 365 & $\sim$0 \\ 050305.68$-$564834.0 & Me20b & Y1 & Me20b & 345 & $> 0$ & & 193518.59$-$154620.3(AB)\tablenotemark{e} & Me20a & Y1 & Me20a & 365 & $< 0$ \\ 053516.80$-$750024.9(AB)\tablenotemark{e} & Ki12 & $\ge$Y1: & Ki13 & 415 & $< 0$ & & 193656.08$+$040801.2 & Me20a & Y0 & Me20a & 450 & $\sim$0 \\ 063428.10$+$504925.9 & Me20a & Y0 & Me20a & 445 & & & 201146.45$-$481259.7 & Me20a & Y0 & Me20a & 465 & \\ 064723.23$-$623235.5 & Ki13 & Y1 & Ki13 & 405 & $< 0$ & & 205628.90$+$145953.3\tablenotemark{d} & Cu11 & Y0 & Cu11 & 475 & 0.0 \\ 071322.55$-$291751.9 & Ki12 & Y0 & Ki12 & 465 & $\sim$0 & & 220905.73$+$271143.9\tablenotemark{d} & Cu14 & Y0: & Cu14 & 350 & 0.0 \\ 073444.02$-$715744.0 & Ki12 & Y1 & Ki12 & 470 & $\sim$0 & & 222055.31$-$362817.4 & Ki12 & Y0 & Ki12 & 450 & $\sim$0 \\ 080714.68$-$661848.7 & Lu11 & Y1 & Ki19 & 415 & $< 0$ & & 223022.60$+$254907.5\tablenotemark{c} & Me20a & Y1 & Me20a & 395 & \\ 082507.35$+$280548.5 & Sc15 & Y0.5 & Sc15 & 380 & $\sim$0 & & 224319.56$-$145857.3 & Me20b & Y0 & Me20b & 450 & \\ 083011.95$+$283716.0 & Ba20 & Y1 & Ba20 & 335 & & & 225628.97$+$400227.3 & Me20a & Y1 & Me20a & 345 & $> 0$\\ 085510.83$-$071442.5\tablenotemark{d} & Lu14 & $\ge$Y4 & Ki19 & 260 & 0.0 & & 235120.62$-$700025.8 & Me20b & Y0.5 & 1 & 405 & \\ 085938.95$+$534908.7\tablenotemark{c} & Me20a & Y0 & Me20a & 450 & & & 235402.79$+$024014.1 & Sc15 & Y0 & Sc15 & 355 & $\sim$0 \\ 093852.89$+$063440.6\tablenotemark{c} & Me20a & Y0 & Me20a & 455 & & & 235547.99$+$380438.9 & Me20a & Y0 & Me20a & 480 & \\ 
094005.50$+$523359.2\tablenotemark{c} & Me20a & $\ge$Y1 & Me20a & 410 & & & 235644.78$-$481456.3 & Me20a & Y0.5 & Me20a & 425 & \\ 104756.81$+$545741.6 & Me20a & Y0 & Me20a & 400 & & & & & & & &\\ 114156.67$-$332635.5 & Ti14 & Y0 & Ti18 & 485 & $\sim$0 & & & & & & &\\ \enddata \tablenotetext{a}{No measured resolved $5~\mu$m photometry is published for the close binary. For this work we deconvolve the {\it Spitzer} photometry \citep{Kirkpatrick_2019} using spectral types of T9 and Y0 for the components \citep{Dupuy_2015}, and adopting $\delta$[3.6] $= 1.00 \pm 0.15$ and $\delta$[4.5] $= 0.70 \pm 0.10$ \citep[][their Figure 14]{Kirkpatrick_2020}. } \vskip -0.1in \tablenotetext{b}{$T_{\rm eff}$ is estimated from [3.6] $-$ [4.5] and $J -$ [4.5]; the value is consistent with the $M_{[4.5]}$-implied value if the system is an equal mass binary {\it and} the true parallax is close to the upper limit on the current uncertain measurement.} \vskip -0.1in \tablenotetext{c}{The $M_{[4.5]}$ magnitude was ignored in the estimate due to the large uncertainty in the distance modulus ($> 0.4$~mag).} \vskip -0.1in \tablenotetext{d}{The parameter estimates are based on the full SED fits described in Section 5.2.} \vskip -0.1in \tablenotetext{e}{The parameter estimates assume the system is an equal mass binary.} \vskip -0.1in \tablerefs{(1) this work, type ($\pm \approx 0.5$) based on the type-color relationship of \citet{Kirkpatrick_2019}; Ba20 -- \citet{Bardalez_2020}; Cu11 -- \citet{Cushing_2011}; Du15 -- \citet{Dupuy_2015}; Ki11, 12, 13, 19 -- \citet{Kirkpatrick_2011, Kirkpatrick_2012, Kirkpatrick_2013, Kirkpatrick_2019}; Le14 -- \citet{Leggett_2014}; Lu11, 14 -- \citet{Luhman_2011, Luhman_2014}; Ma13b -- \citet{Mace_2013b}; Ma18 -- \citet{Martin_2018}; Ma19 -- \citet{Marocco_2019}; Me20a,b -- \citet{Meisner_2020a, Meisner_2020b}; Pi14a -- \citet{Pinfield_2014b}; Sc15 -- \citet{Schneider_2015}; Ti12, 14, 18 -- \citet{Tinney_2012, Tinney_2014, Tinney_2018}. } \end{deluxetable*} Figure 15 shows late-T and Y dwarf candidates in color-color diagrams which take advantage of the metallicity sensitivity of the $J -$ [4.5] color, which becomes redder with decreasing metallicity. For warmer brown dwarfs, \citet[][their Figure 3]{Schneider_2020} show that metal-poor T-type (sub)dwarfs are also red in $J -$ [4.5] for a given W1 $-$ W2 color, which is similar to the [3.6] $-$ [4.5] color. Note that the observationally-defined metal-poor population edge, in the lower panels of Figures 13 and 15, is consistent with the location of the metal-poor chemical-equilibrium Sonora-Bobcat sequence shown in Figure 3. This is to be expected, as metal paucity reduces the size of the chemical changes brought about by mixing \citep[][their Figures 4 and 10]{Zahnle_2014}. The most commonly available photometric quantities for Y dwarfs are shown in Figure 15: [3.6] $-$ [4.5] and $M_{[4.5]}$, each as a function of $J -$ [4.5]. Observations, together with the modified-adiabat disequilibrium chemistry models (with an empirical correction to [3.6] $-$ [4.5]), show that $T_{\rm eff}$ and metallicity can be estimated for cold brown dwarfs using such a figure. Figure 15 includes all 50 currently known candidate Y dwarfs. Seven of these do not have a measurement of $J$. A lower limit of $J \gtrsim 24.6$ was determined for WISEA J083011.95$+$283716.0 by transforming the F125W limit given by \citet{Bardalez_2020} using the transformations of \citet{Leggett_2017}. 
Lower limits on $J$ were taken from \citet{Meisner_2020a, Meisner_2020b} for CWISEP J104756.81$+$545741.6 ($J \gtrsim 19.8$), and CWISEP J201146.45$-$481259.7 ($J \gtrsim 20.1$). For three other objects we determined limits from the UKIDSS and VISTA surveys' imaging data: CWISEP J023842.60$-$133210.7 ($J \gtrsim 23.0$), CWISEP J063428.10$+$504925.9 ($J \gtrsim 20.0$), and CWISEP J135937.65$-$435226.9 ($J \gtrsim 20.5$). No constraint on $J$ is currently available for WISEA J235120.62$-$700025.8. Table 6 lists the 50 Y dwarfs (or Y dwarf candidates) along with spectral type, $T_{\rm eff}$ and (where there is sufficient information) [m/H]. For six of the Y dwarfs, identified in the Table, we carried out a detailed atmospheric analysis in Section 5.2, and those values of $T_{\rm eff}$ and [m/H] are given in the Table, as well as in Table 3. The parameters for the other Y dwarfs are based on one to three colors, using the relationships given in Table 4. $T_{\rm eff}$ is determined from [3.6] $-$ [4.5], $J -$ [4.5] and $M_{[4.5]}$, with the $T_{\rm eff}$ values rounded to 5~K. The average of the color-implied $T_{\rm eff}$ values is adopted, unless all three estimates are available and the $J -$ [4.5] color is discrepant (suggesting a non-solar metallicity, Figures 14 and 15), in which case the two other values are averaged. \medskip \subsection{Super-Luminous Y Dwarfs and Binarity} There are four Y dwarfs for which the luminosity-implied $T_{\rm eff}$ is only consistent with the SED- or color-implied value if the dwarf is an unresolved multiple system: CWISEP J021243.55$+$053147.2, WISE J053516.80$-$750024.9, WISEPA J182831.08$+$265037.8, and CWISEP J193518.59$-$154620.3 (see Figures 3, 13, and 15). If these four objects are approximately-equal-mass binaries, the sample of 50 Y dwarf systems then includes the secondaries of three known resolved systems, plus these four unresolved binaries, for a notional binary fraction of 14\%. Table 7 summarises the properties of these confirmed and candidate binaries. The number of candidate binary systems is consistent with studies of the binary fraction of substellar objects --- for example \citet{Fontanive_2018} find a binary fraction of $8 \pm 6\%$ for T5 -- Y0 brown dwarfs at separations of 1.0 -- 1000 AU, with a mass ratio distribution peaking around unity. \setlength\tabcolsep{7pt} \begin{deluxetable*}{lccccc}[t] \tabletypesize{\normalsize} \tablecaption{Known and Candidate Binary Y Dwarfs} \tablehead{ \colhead{{\it WISE} (Other) Name} & \multicolumn{3}{c}{Separation} & \multicolumn{2}{c}{Mass Ratio}\\ & \colhead{arcsecond} & \colhead{AU\tablenotemark{a}} & \colhead{Ref.} & \colhead{Value} & \colhead{Ref.} } \startdata 014656.66$+$423410.0B & $0\farcs 09$ & 1.7 & Du15 & 0.9 & Du15 \\ 021243.55$+$053147.2(AB) & $< 0\farcs 36$ & $< 9$ & 1 & 1.0 & 1 \\ 053516.80$-$750024.9(AB) & $< 0\farcs 15$ & $< 2.2$ & Op16 & 1.0 & 1 \\ 080714$-$661848 (WD 0806$-$661B) & 130 & 2500 & Lu11 & 0.004\tablenotemark{b} & Lu11 \\ 121756.91$+$162640.2B & $0\farcs 76$ & 7.1 & Li12 & 0.7 & Le14 \\ 182831.08$+$265037.8(AB) & $< 0\farcs 05$ & $< 0.4$ & Be13 & 1.0 & 1 \\ 193518.59$-$154620.3(AB) & $< 0\farcs 35$ & $< 5$ & 1 & 1.0 & 1 \\ \enddata \tablenotetext{a}{Distance in AU calculated using parallaxes from \citet{Kirkpatrick_2020}. 
For J0212 the upper limit on the parallax is used, which is more consistent with the observed colors (Figures 3, 13, 15).} \tablenotetext{b}{The mass ratio uses the white dwarf progenitor mass.} \vskip -0.05in \tablerefs{ (1) this work; Be13 -- \citet{Beichman_2013}; Du15 -- \citet{Dupuy_2015}; Le14 -- \citet{Leggett_2014}; Li12 -- \citet{Liu_2012}; Lu11 -- \citet{Luhman_2011}; Op16 -- \citet{Opitz_2016}. } \end{deluxetable*} \bigskip \bigskip \section{Conclusions} The cold Y dwarfs are important laboratories for atmospheric dynamics because the regions from which the 1 -- 10~$\mu$m light emerges span a range in pressure of 3 orders of magnitude (Figure 7). They are also rapid rotators \citep{Cushing_2016, Esplin_2016, Leggett_2016b, Tannock_2021}. Under these conditions, small departures from standard radiative/convective equilibrium are natural and stable phenomena \citep[e.g.][]{Guillot_2005, Augustson_2019, Tremblin_2019, Zhang_2020}. In this work we show that a $\sim 10$\% reduction in the standard adiabat in the upper photosphere of Y dwarfs leads to cooler deep photospheres. This change yields significant and comprehensive improvements in the agreement between modelled and observed colors and spectra of brown dwarfs with $T_{\rm eff} < 600$~K (Figures 5, 12, 13). The modified-adiabat models with non-equilibrium chemistry that we outline here produce the best fit to date of the 1 -- 20~$\mu$m flux distribution of brown dwarfs cooler than 600~K (Figures 5, 8, 9). A summary of key results follows. \begin{itemize} \item New near-infrared photometry is presented for 4 late-T and 17 Y dwarfs (Table 1). \item New or revised mid-infrared photometry is presented for one L, 10 T, and 4 Y dwarfs (Table 2). \item Spectral type estimates are revised in Section 2 for three brown dwarfs, using the new photometry: \begin{itemize} \item CWISEP 021243.55$+$053147.2 from background source to likely binary Y dwarf system \item CWISE J092503.20$-$472013. from Y0 to T8 \item CWISE J112106.36$-$623221.5 from Y0 to T7 \end{itemize} \item We reconfirm that chemical abundances are not in equilibrium, due to vertical mixing (Figures 2, 3, 5). The decrease in NH$_3$ and increase in CO impact the flux at $H$, $K$, [4.5] and W3 by 30\% to a factor of two. Of particular importance for {\it JWST}, chemical equilibrium models will underestimate the [4.5] $-$ W3($\sim 14~\mu$m) and [4.5] $-$ W4($\sim 22~\mu$m) colors of T and Y dwarfs by $\sim 1$ magnitude (Figure 4). \item Current (2020) atmospheric models generate $J - K$ and [3.6] $-$ [4.5] colors that deviate from observations by a factor of $\sim 3$, for $T_{\rm eff} < 600$~K (Figure 2). \item As a first step towards including processes currently missing in all brown dwarf models, we parameterize the pressure-temperature atmospheric profile in the one-dimensional ATMO 2020 disequilibrium chemistry models, and explore fits to the SEDs of 7 brown dwarfs with $260 \lesssim T_{\rm eff}$~K $\lesssim 540$ (Section 5). A decrease in the adiabatic gradient at pressures of 10 -- 50 bar and temperatures $\sim 800$~K produces cooler deep atmospheres for a given $T_{\rm eff}$, and effectively reproduces observations at $1 \lesssim \lambda ~\mu$m $\lesssim 20$ (Figures 5, 8, 9). Discrepancies that remain are at the factor of $\sim 2$ level in the $Y$- and [3.6]-band for $T_{\rm eff} \lesssim 400$~K (Figure 12). Note that the discrepancy at [3.6] is reduced by a factor of $\sim 5$ compared to standard-adiabat models. 
\item Spectroscopy shows that the problems at $Y$ and [3.6] for the $T_{\rm eff} \lesssim 400$~K Y dwarfs occur at the blue side of the passbands. \begin{itemize} \item For $Y$, the issue is most likely to be deficiencies in modelling the red wing of the K~I resonant line \citep{Phillips_2020}. \item For [3.6], it appears that high in the atmosphere, at the $\sim 0.1$~bar pressures where this flux originates, the temperature needs to be higher. The heating could be caused by breaking gravity waves, as is likely in the solar system giant planet atmospheres above the 1-bar pressure surface \citep[e.g.][]{Schubert_2003, O'Donoghue_2016}. The $T_{\rm eff} \lesssim 350$~K Y dwarfs may have an upper atmosphere heated by water condensation (Figure 7). \end{itemize} \item The fact that the adiabat changes at temperatures around 800~K and pressures of 10 -- 50 bar, for the six Y dwarfs studied in detail, may indicate that convection is disrupted in Y dwarf atmospheres by a change in nitrogen chemistry and/or the condensation of chlorides and sulfides (Figure 7, top left panel). \item The atmospheric parameters combined with evolutionary models indicate that the six Y dwarfs have an age between 0.5 and 3~Gyr and masses of 5 -- 12 Jupiters (Table 3). \item We generate a limited grid of modified-adiabat disequilibrium chemistry models and provide relationships between $T_{\rm eff}$ and the commonly used photometric quantities [3.6] $-$ [4.5], $J -$ [4.5], and $M_{[4.5]}$ (Table 4). The models indicate that there are two opacities which drive the pressure (gravity) and metallicity sensitivity in the models: the CO absorption at $\lambda \approx 4.6~\mu$m (Figure 6), and collision-induced H$_2$ opacity with broad peaks at $\lambda \approx$ 1.2, 2.2 and 11.1~$\mu$m \citep[][their Figure 1]{Saumon_2012}. \item We show that the $J -$ [4.5] color is particularly sensitive to metallicity (Table 5), and that a diagram which plots [3.6] $-$ [4.5] and $M_{[4.5]}$ as a function of $J -$ [4.5] can be used to estimate $T_{\rm eff}$ and metallicity (Figure 15). We estimate these parameters for the 50 known candidate Y dwarfs (Table 6). \item We find that there are four super-luminous Y dwarfs which are likely to be unresolved binaries; together with the three known resolved binary Y dwarf components, this suggests a binary fraction of $\sim 14$\% for Y dwarfs (Table 7). Such a number is consistent with what is found for L and T dwarfs \citep[e.g.][]{Fontanive_2018}. \item The Appendix gives examples of temperature-sensitive {\it JWST} colors, tables of colors generated by the modified-adiabat model grid, and a compilation of the photometry used in this work. \end{itemize} \acknowledgments Supported by the international Gemini Observatory, a program of NSF’s NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. This work was enabled in part by observations made from the Gemini North telescope, located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. We are grateful for the privilege of observing the Universe from a place that is unique in both its astronomical quality and its cultural significance. 
This publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This work is based in part on observations made with the {\it Spitzer} Space Telescope, obtained from the NASA/IPAC Infrared Science Archive, both of which are operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with the National Aeronautics and Space Administration. C.V.M. acknowledges the support of National Science Foundation grant number 1910969. P.T. acknowledges support from the European Research Council under Grant Agreement ATMO 757858. \bigskip \bigskip {\bf We dedicate this work to France Allard and Adam Showman, both prematurely lost to astronomy in 2020. They leave a legacy of work that is vital to the understanding of low mass stars, brown dwarfs, exoplanets and the solar system.} \clearpage
\section{Introduction} \label{S-Introduction} In our previous studies of solar magnetogranulation based on 2D MHD simulations by Gadun \cite{Gadun00, Gadun01} we touched upon the very important problem of the distribution of magnetic fields in quiet photospheric regions. As the interest in the weak magnetic fields of the quiet Sun, their nature and structure, and in the interpretation of their observations continues to grow \cite{Gross96, Keller94, Lin99}, we return to the analysis of the numerical simulation and the synthesis of the Stokes profiles of the infrared line $\lambda$~1564.8~nm in the context of weak photospheric magnetic fields. Observations of magnetic fields in quiet regions on the Sun reveal very weak magnetic fluxes ($10^{-7}$--$10^{-8}$~Wb). The fine field structure cannot be resolved even at the highest spatial resolution, with a very high signal-to-noise ratio, and under good seeing. It is also difficult to calibrate the measurements correctly, i.e., to determine the true field strength. For the time being, the only way to study the small-scale structure of photospheric fields in detail is direct numerical simulation of magnetoconvection. Our high-resolution studies with a spatial step of 35 km \cite{Gadun99, Shem99}, made with the use of 2D MHD magnetogranulation models \cite{Gadun00}, demonstrated that these models are helpful in understanding various properties of photospheric magnetic fields and their interaction with convective motions on granular scales. These models, when used to study the magnetic field distribution in granules and intergranular lanes, will hopefully provide insight into various problems related to the structure and nature of magnetic fields in the photospheres of the Sun and stars. Among the numerous observational data on the magnetic fields of the quiet Sun, of special interest are the observations of IR lines, which turned out to be very helpful in measuring small-scale fields below 0.1~T. The results reveal a complex field structure in quiet regions with low flux densities. The magnetic fields of the quiet Sun seem to be strongly mixed in space -- weak, intermediate, and strong fluctuations as well as thin fluxtubes alternate with one another. Theoretical magnetoconvection simulation \cite{Emonet} also suggests that magnetic fields in quiet regions are structured on all scales, from the largest to the smallest. Their sizes can be much smaller than the present-day resolution threshold. In addition, theory as well as observations suggest that the magnetic fields are a mixture of fields of different polarities. Measurements of magnetic fluxes on the quiet Sun outside the supergranulation network \cite{Lites02} indicate that the disbalance of fluxes of different polarities there is smaller than in the network. Recent advances in the observations of weak fields have raised the questions of whether the weak fields are the remnants of a strong magnetic flux circulating at all times due to convection or are generated by a local dynamo mechanism, whether their dimensions and strengths can vary continuously down to very small values, and whether the data on the distribution of weak fields are sufficiently reliable. The distributions derived from the observations of Stokes profiles of spectral lines in the visible range and in the infrared turn out to be essentially different. 
The maximum of the relative distribution obtained from the $\lambda$~1564.8~nm line \cite{Collados01, Khom02, Lin95, Lin99} indicates that the majority of fields have strengths much lower than 100~mT, while the observations in the visible spectrum give a maximum of the strength distribution at about 100~mT \cite{Gross96, Lites02, Sanchez00, Socas02}, and such fields fill only a small part of the photosphere volume (about 1~percent) in quiet regions. According to the IR observations, this part is slightly greater, but it is still quite small compared to the part occupied by the very weak (0.4--4~mT) turbulent fields which cover the whole surface of the quiet Sun (they were recently detected through the use of the Hanle effect \cite{Stenflo98}). All these results obtained by various techniques can be reconciled if we assume a complex topology of magnetic fields in the photosphere, where weak and strong magnetic fields of different scales are mixed and entangled. At the same time it is not clear why the field strength distribution maximum in quiet photospheric regions points to kilogauss fields when lines in the visible spectrum are observed and to subkilogauss fields when IR lines are measured. The difference can be caused by the specifics of magnetic field measurements in the visible and IR lines, which differ strongly in their parameters. In this study, based on a time sequence of 2D MHD magnetogranulation models \cite{Gadun00, Gadun01}, we obtained a distribution of magnetic fields with a high numerical resolution in the solar photosphere outside active regions. We established a possible cause of the differences in the field distributions derived with the use of synthesized visible and IR lines. \section{Magnetic field strength distribution} We took a 30-min sequence of 2D MHD models described in detail in \cite{Gadun00, Gadun99, Gadun01}. The sequence contains 56 two-dimensional models with 112 columns in each. We extracted the field strength $B$ at several photospheric levels ($\log \tau_5 = 0, -0.5, -1, -1.5, -2, -3$) in every model column and built the field strength distribution based on the simulation data. Figure 1 shows the strength distributions at these levels (thick curves). At the next step we synthesized the Stokes profiles of the $\lambda$~1564.8~nm line for every column in every model by integrating the Unno-Rachkovskii equations for the polarized radiation transfer. Thus we obtained 6272 profiles for various statistical investigations of relative distributions of atmospheric parameters derived from the Stokes profiles of the $\lambda$~1564.8~nm line. Note that the choice of the IR $\lambda$~1564.8~nm line for our investigations was not accidental. The polarization properties of this line were discussed in detail in \cite{Sanchez00}. The authors of \cite{Sanchez00} assume that the observations of quiet regions on the Sun with magnetic elements with strengths between 50~mT and 150~mT will yield a field distribution dominated by fields of about 50~mT when $\lambda$~1564.8~nm is used and of about 100~mT when the visible line Fe I $\lambda$~630.2~nm is used. It is not difficult for us to verify these assumptions, since we have, on the one hand, a nonhomogeneous photosphere model with a known field distribution at various levels and, on the other hand, the Stokes profiles calculated with the same models, which can be used to find the field strength distribution. 
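A minimal sketch of this first step -- building the level-by-level field-strength histograms from the model cube -- is given below. The array layout and bin width are our assumptions, and synthetic values stand in for the actual simulation output.

\begin{verbatim}
import numpy as np

# Sketch: level-by-level field-strength histograms from a sequence of
# 2D MHD models.  B has shape (n_models, n_columns, n_levels):
# 56 snapshots x 112 columns x 6 optical-depth levels, so each level
# contributes 56*112 = 6272 samples.
rng = np.random.default_rng(0)
B = np.abs(rng.normal(25.0, 20.0, size=(56, 112, 6)))  # mT, placeholder

log_tau5 = [0.0, -0.5, -1.0, -1.5, -2.0, -3.0]
bins = np.arange(0.0, 160.0, 5.0)                      # 5 mT bins

for k, tau in enumerate(log_tau5):
    counts, edges = np.histogram(B[:, :, k].ravel(), bins=bins)
    rel = counts / counts.sum()          # relative distribution
    peak = edges[np.argmax(rel)]
    print(f"log tau_5 = {tau:+.1f}: distribution peak near {peak:.0f} mT")
\end{verbatim}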
The observations of IR line profiles are often reduced with the use of a simple method in which the field strength is determined from the distance between the positive and negative maxima (peaks) of V profiles. We demonstrated in \cite{Shem99} that this direct method gives the most accurate field strengths when applied to the $\lambda$~1564.8~nm line, and so we used it here. We also made similar calculations for the visible $\lambda$~630.2~nm line to be able to compare the results for two different lines. We omitted from consideration the abnormally shaped V profiles which have two or more zero-crossings and can thus introduce additional errors in the distributions. The total numbers of analyzed profiles were 4577 for $\lambda$~1564.8~nm and 3897 for $\lambda$~630.2~nm. Both field distributions thus obtained are plotted in Fig.~1. The distribution obtained with the visible line demonstrates complete disagreement with the true distribution. The distribution calculated with the IR line agrees best with the MHD models at the levels $\log \tau_5 = -0.5$ and $-1$ for fields above 50~mT. To specify the depth in the photosphere to which the field strength found from the $\lambda$~1564.8~nm line should be referred, we calculated the effective depths of V-peak formation. We used the contribution functions \cite{Shem92} to find the mean level in the nonhomogeneous photosphere where the effective absorption in the profiles of this line occurs. We calculated the profiles for two typical areas in the model -- one area covers the periphery and part of a strong fluxtube and the other covers the center of a granule with predominantly horizontal fields. We obtained virtually all types of profiles encountered in granules and intergranular lanes. Figure 2 shows these I and V profiles together with the profiles of the effective depth of line formation. This effective depth varies substantially from one model column to another because of steep gradients of thermodynamic parameters along the line of sight and the magnetic field and velocity field gradients. It is difficult, therefore, to determine exactly the mean depth of line formation to which the field strength distribution refers. Nevertheless, we may conclude from Fig.~2 that the effective layer of the formation of V profile peaks lies at the level $\log \tau_5 = -0.5$ for $\lambda$~1564.8~nm and at $\log \tau_5 = -1$ for $\lambda$~630.2~nm. \begin{figure} \centering \includegraphics[width=16. cm]{shem1.eps} \caption[]{Magnetic field strength histograms (thick line) plotted from the data obtained directly from the time sequence of 2D MHD magnetogranulation models at various levels $\log\tau_5$ indicated on the plots. Thin and dotted lines show histograms of magnetic field strength obtained from the distances between the V peaks of the lines $\lambda 1564.8$~nm and $\lambda 630.2$~nm, respectively. } \label{Fig1} \end{figure} So, the field distributions we obtained from the MHD magnetogranulation simulation and from the synthesis of the $\lambda$~1564.8~nm and $\lambda$~630.2~nm lines allow us to draw the following conclusions. 1) In areas outside active regions the majority of photospheric magnetic fields are weaker than 50~mT and the field distribution maximum lies near 25~mT, on the average. 
\begin{figure} \centering \includegraphics[width=16.5 cm]{shem2.eps} \caption[]{The Stokes I profiles (top panels), the Stokes V profiles (middle panels), and the effective depths of V profile formation (lower panels) for the lines $\lambda 1564.8$~nm and $\lambda 630.2$~nm calculated at the periphery of a fluxtube (solid line), its centre (dashed line), its edge (dotted line), and a granule centre (thick dash-and-dot line). } \label{Fig2} \end{figure} 2) The relative distribution of magnetic field strength changes noticeably at different levels in the photosphere -- a redistribution of fields with height takes place. The number of strong fields ($>100$~mT) decreases, and the distribution peak changes its position. At the photosphere base ($\log \tau_5=0$) the principal distribution maximum is indicative of the predominance of fields with strengths of about 25~mT. There is also a smaller peak near 100~mT and an even smaller one near 70~mT. At the level $\log \tau_5 = -0.5$ the fields become weaker, and the distribution has only one peak near 20~mT. At the levels $\log \tau_5 = -1$ and $-1.5$ the principal peak shifts toward weaker fields (about 10~mT), and a new peak of about the same height appears near 45~mT. This doubling of the principal peak is a manifestation of the canopy effect (strong fluxtubes expand with growing height). As a result, the contribution of the strong fields of fluxtubes to the total number of fields increases. In still higher layers the field in fluxtubes expands even more and becomes weaker as well, and the distribution of weak fields becomes nearly the same as at the photosphere base. So, the MHD simulation results substantiate the view that magnetic fields become weaker with growing height in the photosphere and that the fields are subjected to redistribution. 3) Photospheric magnetic fields outside the active regions are a mixture of weak, moderate, and strong fields. Their strength can vary continuously from the lowest values in inclined fields which fill the entire granule region to the greatest values in the fields compressed into thin vertical fluxtubes which are found in intergranular lanes. 4) A comparison between the true distribution (obtained from MHD models) and the calculated distribution (obtained from profile synthesis) clearly shows that the method in which field strength is determined directly from the distance between the V profile peaks of the $\lambda$~1564.8~nm line allows fields of 17~mT and greater to be measured. The number of measured fields is overestimated in the interval 20--50~mT, but for fields above 50~mT it agrees satisfactorily with actual numbers. It should be stressed that the number of fields with strengths of about 150~mT is estimated quite reliably by this method. 5) The field strength distribution derived from the $\lambda$~630.2~nm line suggests that this line is unsuitable for this method of field strength measurement. The distribution maximum corresponds to fields with $B \approx100$~mT, which is at variance with the actual distribution. We tried to find out why the distribution based on the $\lambda$~1564.8~nm line profiles differs significantly from the actual distributions for fields below 50~mT and why the $\lambda$~630.2~nm line is not suitable for such studies at all. With this in mind we analyzed in detail the cause of the variation in the V profile peak separation in nonhomogeneous models. 
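As a concrete illustration of the direct method used above: under hard-splitting conditions the V-profile peaks sit near the $\sigma$ components, so the peak separation is close to $2\Delta\lambda_B$ with $\Delta\lambda_B\,[\mbox{\AA}] = 4.67\cdot 10^{-13}\, g_{\rm eff}\, \lambda^2\,[\mbox{\AA}^2]\, B\,[\mbox{G}]$. The Python sketch below inverts this relation; the line parameters are the standard values for these Fe I lines, and the example separation is hypothetical.

\begin{verbatim}
# Sketch of the direct method: field strength from the separation of
# the blue and red V-profile peaks.  Under hard-splitting conditions
# the peak separation is ~ 2*dlambda_B, with
# dlambda_B[A] = 4.67e-13 * g_eff * lambda[A]^2 * B[G].

LINES = {                # (wavelength in Angstrom, effective Lande factor)
    "Fe I 1564.8 nm": (15648.0, 3.0),
    "Fe I 630.2 nm":  (6302.5, 2.5),
}

def b_from_peak_separation(line, sep_angstrom):
    """Field strength in mT from the V-peak separation in Angstrom."""
    lam, g_eff = LINES[line]
    b_gauss = sep_angstrom / (2.0 * 4.67e-13 * g_eff * lam ** 2)
    return 0.1 * b_gauss  # 1 mT = 10 G

# Hypothetical example: a 0.34 A peak separation in the IR line
print(b_from_peak_separation("Fe I 1564.8 nm", 0.34))   # ~ 50 mT
\end{verbatim}

Under soft-splitting conditions this inversion fails: the peak separation saturates at the Doppler width, which is precisely the effect quantified in the next section.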
\section{The reliability of the field strength \\ obtained from the 1564.8~nm and 630.2~nm lines} Below we give some quantitative estimates for the accuracy of the field strength determinations based on the $\lambda$~1564.8~nm and $\lambda$~630.2~nm lines. Recall that the sensitivity of a line to a given magnetic field depends on the width ratio $\Delta \lambda_B{/} \Delta \lambda_D$, where $\Delta \lambda_B$ is the Zeeman splitting and $\Delta \lambda_D$ is the Doppler line width in the absence of magnetic fields. These widths depend on wavelength, temperature, line saturation, nonstationary velocities and their gradients. The Zeeman width also depends on the Land\'{e} factor as well as on the magnitudes and gradients of field strength and magnetic vector inclination. Hence it follows that the quantity $\Delta \lambda_B{/} \Delta \lambda_D$ for a specific magnetic field and a specific velocity field is a function of temperature, and it grows approximately linearly with wavelength $\lambda$~and Land\'{e} factor $g_{\rm eff}$. The product $\lambda g_{\rm eff}$ is often taken as a measure of the sensitivity of spectral lines to magnetic field. In atmospheric regions with widely different magnetic properties, the trustworthiness of the field strength measured with the use of Zeeman splitting in a specific line depends on the magnetic sensitivity of the line (i.e., on $\lambda g_{\rm eff}$) as well as on the magnetic field parameters; in other words, it depends on the splitting conditions for the given line. The splitting conditions, which are illustrated in \cite{Rbin92, Solanki93, Solanki92, Stenflo87}, are commonly divided into three groups. Soft conditions correspond to $\Delta \lambda_B < \Delta \lambda_D$, intermediate conditions to $\Delta \lambda_B \approx \Delta \lambda_D$, and hard ones to $\Delta \lambda_B > \Delta \lambda_D$. Special test calculations were made to establish these conditions for the two above lines and for our model atmosphere. We calculated the line profiles for one of the MHD model columns, having replaced the depth-dependent model field strengths by constant ones which varied from their smallest value to the greatest one. We also assumed that the fields were longitudinal. We calculated the field strength $B_{br}$ from the measured distances between the blue and the red V profile peaks and compared it to the actual field strengths (solid curves in Fig.~3). One can see that the conditions of strong splitting occur at $B > 30$~mT for the $\lambda$~1564.8~nm line and at $B > 150$~mT for the $\lambda$~630.2~nm line. The splitting is weak for all fields with $B < 17$~mT for $\lambda$~1564.8~nm and $B < 60$~mT for $\lambda$~630.2~nm. Under soft conditions, the measured field strength is determined by the Doppler width only and does not depend on any B variations -- the threshold field strength is close to 20~mT for $\lambda$~1564.8 nm and 90~mT for $\lambda$~630.2~nm. In the case of intermediate conditions, the overestimate of field strengths is greater for weaker fields. We also examined the influence of the gradients of magnetic field, temperature, vertical velocity, and inclination on the splitting conditions that are established in each case. It turned out that the field gradient affects the profile shape more strongly than it affects the distance between the $|V|$ maxima. This distance is also only slightly affected by temperature and velocity gradients, but the field inclination exerts the greatest effect on the separation of V profile peaks. 
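These regime boundaries can be checked with a back-of-the-envelope estimate: the soft-to-intermediate transition occurs where $\Delta \lambda_B \approx \Delta \lambda_D$. The sketch below assumes a representative Doppler velocity of 1.6 km/s (thermal plus microturbulent; our assumption).

\begin{verbatim}
# Sketch: field strength at which the Zeeman splitting equals the
# Doppler width (dlambda_B = dlambda_D), i.e., the transition from
# soft to intermediate splitting.  A Doppler velocity of 1.6 km/s
# (thermal + microturbulent, our assumption) is used for both lines.

C_KMS = 2.9979e5    # speed of light, km/s
XI_KMS = 1.6        # assumed Doppler velocity, km/s

for name, lam, g_eff in [("1564.8 nm", 15648.0, 3.0),
                         ("630.2 nm", 6302.5, 2.5)]:
    dlam_D = lam * XI_KMS / C_KMS                      # Doppler width, A
    b_gauss = dlam_D / (4.67e-13 * g_eff * lam ** 2)   # dlam_B = dlam_D
    print(f"{name}: dlambda_D = {dlam_D:.3f} A, "
          f"threshold B ~ {b_gauss / 10:.0f} mT")
\end{verbatim}

The resulting thresholds, roughly 24~mT for $\lambda$~1564.8~nm and 73~mT for $\lambda$~630.2~nm, fall inside the intermediate intervals quoted above. Field inclination modifies these boundaries, as discussed next.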
As seen from Fig.~3, an increase of field inclination extends the limits of the intermediate condition interval towards stronger fields, and the hard conditions set in at $B\approx 40$~mT for $\gamma=30^{\circ}$ and $B \approx 50$~mT for $\gamma=75^{\circ}$ in the case of the $\lambda$~1564.8~nm line. This effect is even stronger for $\lambda$~630.2~nm. As a result the field strength is overestimated still further, and the deviation of the measured field strengths from the actual ones is greater in more horizontal and weaker fields. \begin{figure} \centering \includegraphics[width=15.5 cm]{shem3.eps} \caption[]{The field strengths $B_{br}$ derived from the distances between the V peaks of the synthesized profiles $\lambda 1564.8$~nm and $\lambda 630.2$~nm vs. the true (model) field strengths $B$ for magnetic vector inclinations of $0^\circ$ (solid line), $30^\circ$ (dash-and-dot line), and $75^\circ$ (dashed line). The dotted line marks equal field strengths. Vertical lines indicate the field strengths at which $\Delta \lambda_B {/} \Delta \lambda_D$ is equal to 1 (left line) and 2 (right line). } \label{Fig3} \end{figure} So, the direct method for the determination of $B$ from the distance between the $|$V$|$ profile maxima works better in the case of strong and longitudinal fields, and its accuracy deteriorates as the magnetic vector deviates from the vertical. For lines in the visible spectral range (like $\lambda$~630.2~nm) the measured field strength can be overestimated by 20--40~mT for strongly inclined fields. The line $\lambda$~630.2~nm can give trustworthy results only for strong-splitting conditions (fields above 150~mT) with inclinations less than $30^{\circ}$. In the case of weak-splitting conditions all measured fields will have a strength of about 100~mT. At the same time the effect of field inclination on measured strengths is insignificant for the $\lambda$~1564.8~nm line. The greatest overestimate which is possible under intermediate-splitting conditions is about 2--4~mT for strongly inclined fields. The field inclination effect for $\lambda$~1564.8~nm is 10 times weaker than for $\lambda$~630.2~nm. The accuracy of field strength measurements is directly reflected in the field distributions, especially those obtained for quiet regions, where inclined and weak fields dominate. In the case of soft conditions, when the magnetic sensitivity of lines is weak, weaker fields are measured as stronger ones (especially in the $\lambda$~630.2~nm line). As a result the field distribution is considerably distorted in the region below 150~mT (for $\lambda$~630.2~nm), while in the distribution derived with the use of the $\lambda$~1564.8~nm line the region of weak fields between 20 and 40~mT is substantially distorted and the fields below 17 mT are absent altogether. It is precisely these results that we obtained (see Fig.~1). Hence it follows that in the case of complete resolution (filling factor $\alpha=1$) the most trustworthy field distribution is that which was found from the distance between the V peaks of $\lambda$~1564.8~nm only under strong-field conditions, i.e., for strengths above 50~mT. The number of fields measured in the range 20--50~mT is overestimated: first, weak fields below 17~mT are measured as 20-mT fields and, second, the presence of inclined fields results in an overestimation by 2--4~mT. 
So, the main maximum of the field distribution found from the $\lambda$~1564.8~nm line for quiet regions on the Sun is expected to be slightly shifted toward stronger fields with respect to the actual distribution, and it should also be higher at the expense of the fields measured under weak-splitting conditions. It is most likely to be located at 30--40~mT. The field distribution beyond 50~mT up to the strongest kilogauss fields is not distorted. In the cases when the spatial resolution is not high (filling factor $\alpha < 1$), there is an additional profile broadening caused by the horizontal averaging of the profiles. This broadening also affects the shape of the distribution of measured fields, as judged from Fig.~4, which displays some distributions obtained from the $\lambda$~1564.8~nm line (thick curve), and directly from MHD models (dotted line). The greater the spatial averaging scale, the greater is the shift of the distribution maximum toward stronger fields (30--40~mT); in addition, another maximum appears at 50--60~mT, the number of fields above 60~mT noticeably decreases, and the distribution shows a steeper decline toward stronger fields. In Fig.~4 we also give for comparison the observation data from \cite{Khom02} acquired with a resolution of 0.5--1$^{\prime\prime}$. The best agreement can be seen in the second panel, where the profile averaging seems to be close to the angular resolution of observations. Satisfactory agreement between our calculations and observations from \cite{Khom02} argues for the reliability of our calculations and distributions of weak fields on the quiet Sun. As to the $\lambda$~630.2~nm line, the distances between its peaks give a completely false distribution. This line can be used only to measure longitudinal fields above 150~mT. All fields below 100~mT are registered as fields of 90--120~mT, and the resulting distribution has its maximum at $B=100$~mT. In actual practice this line is not used with the method discussed here. As a rule, it is used in inversion codes, in which a model atmosphere and a magnetic vector are found by comparing the observed and calculated V profiles. Our analysis shows that, at small field strengths and large magnetic vector inclinations, it is practically impossible to recognize the separation of the V peaks in weak profiles without additional analysis of the Q and U profiles. The inverse methods can give stronger longitudinal fields instead of weaker horizontal fields. It is not surprising, therefore, that the distributions obtained by inverse methods with the $\lambda$~630.2~nm line have their maxima, as a rule, at 50--100~mT. We believe that the inverse methods need to be tested with the 2D MHD or 3D MHD models. Only then may one judge the accuracy of the inversion. \section{Statistical properties of photospheric \\ magnetogranulation from the 1564.8~nm data} We consider the statistical relationships between the granule structure parameters derived from the synthesized $\lambda$~1564.8~nm line profiles. Figures~4a and 4b display the field strength $B_{br}$ histograms and $B_{br}$ averaged over equal intervals as a function of the line-of-sight velocity $V_z$ derived from the V profile zero-crossing shift. Note that negative velocities correspond to downward motion. The field increases with the downflow velocity, and this suggests that the magnetic field becomes stronger in the intergranular lanes, where downflows are concentrated. 
The same is illustrated by the relation in Fig.~4c, which points to an increase of field strength in darker photospheric regions where the contrast $I_c/{<}I_c{>}$ is less than unity. The field strength changes from $40\pm25$~mT to $90\pm40$~mT, on the average, when going from granules to intergranular lanes. Similar relations derived from observations \cite{Khom02} give 45~mT for granules and 69--80~mT for the lanes. The radial velocity $V_z$ histograms (Fig.~5a) give a mean velocity of 0.37~km/s, which suggests the predominance of intense downflows. These downflows are associated with intergranular lanes, as suggested by the close correlation between mean velocity and intensity contrast (Fig.~5c). A close relation between velocity and inclination should also be noted (Fig.~5b). Fields with greater inclinations are found mainly in regions of upflows, at sites with higher contrast, that is, at bright centers of granules. The asymmetry of radial velocity histograms and the correlation between velocity, inclination, and contrast bear out the fundamental property of magnetoconvection -- the asymmetry of upward and downward flows of matter with frozen-in magnetic fields. As a result the magnetic field structure and the scale of magnetic field variations are closely related to the granulation dimensions and structure. \begin{figure} \centering \includegraphics[width=16.cm]{shem4.eps} \caption[]{Magnetic field strength histograms (a) plotted with the use of the data from the synthesized $\lambda 1564.8$~nm line V profiles (thick line) and MHD models (dotted line) with different spatial averaging (35, 350, 700 km). The observational data \cite{Khom02} with a spatial resolution of 350--700~km are indicated by the thin line. Scatter plots of the field strength vs. radial velocity (b) and continuum intensity contrast (c). Solid lines are correlation curves, dashed lines are rms deviations; $r$ is the correlation coefficient. } \label{Fig4} \end{figure} \begin{figure} \centering \includegraphics[width=16. cm]{shem5.eps} \caption[]{Histograms of radial velocities (a) derived from zero-crossing shifts of the synthesized $\lambda 1564.8$~nm line V profiles. Scatter plots of the radial velocities vs. magnetic vector inclination $\gamma$ derived from the relation $\tan^2 \gamma = (Q^2 + U^2)^{1/2}/V^2$ (b), and continuum intensity contrast $I_c/{<}I_c{>}$ (c). Symbol and curve coding are as in Figure 4. } \label{Fig5} \end{figure} As all observation data contain the results of the analysis of the asymmetry of observed Stokes profiles, we also give the corresponding data obtained from the synthesis of the $\lambda$~1564.8~nm line V profiles in the form of histograms and scatter plots as well as the relations between the corresponding quantities averaged over equal intervals (Figs 6 and 7). To demonstrate the asymmetry, we took the standard parameter $\delta a = (a_b - a_r)/(a_b + a_r)$, which characterizes the amplitude asymmetry between the blue and red peaks of the V~profile. The sample-averaged asymmetry is positive and very small (Fig.~6a), but it grows with the scale of the spatial averaging of the profiles and approaches the observed values \cite{Khom02}. The $\lambda$~1564.8~nm line asymmetry can change by $\pm40$ percent, much less than in the $\lambda$~630.2~nm line. This is evidence for a lower sensitivity of the former line to temperature and velocity gradients and magnetic vector variations. 
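A minimal sketch of how $\delta a$, together with the analogous area asymmetry $\delta A$ discussed below, can be extracted from a sampled V profile is given here; the test profile and wavelength grid are synthetic.

\begin{verbatim}
import numpy as np

def v_asymmetries(wav, v):
    """Amplitude asymmetry delta_a = (a_b - a_r)/(a_b + a_r) and the
    analogous area asymmetry delta_A of a two-lobed Stokes V profile
    sampled on a uniform wavelength grid (blue wing first)."""
    i0 = np.where(np.diff(np.sign(v)) != 0)[0][0] + 1  # zero-crossing
    blue, red = np.abs(v[:i0]), np.abs(v[i0:])
    a_b, a_r = blue.max(), red.max()
    dw = wav[1] - wav[0]
    A_b, A_r = blue.sum() * dw, red.sum() * dw
    return (a_b - a_r) / (a_b + a_r), (A_b - A_r) / (A_b + A_r)

# Synthetic antisymmetric profile with a 10% stronger blue lobe
wav = np.linspace(-0.5, 0.5, 401)
v = (1.1 * np.exp(-((wav + 0.1) / 0.05) ** 2)
     - np.exp(-((wav - 0.1) / 0.05) ** 2))
print(v_asymmetries(wav, v))   # both asymmetries ~ +0.05
\end{verbatim}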
The area asymmetry $\delta A$ is measured in the same way as $\delta a$, and the relation between these quantities is demonstrated in Figs 6b,c. Figure 7 shows the correlation between the amplitude asymmetry $\delta a$ and other parameters. The smaller the field strength, the greater is the scatter of the asymmetry parameter (Fig.~7a). The predominance of positive asymmetry tends to increase as the field strength grows (Fig.~7b). There is no well-defined relation between the asymmetry and radial velocity (Figs 7c,d); we can only note that there are more profiles with positive asymmetry in upflow regions (negative velocities), while the number of profiles with negative asymmetry is slightly larger in downflow regions. This weak relationship breaks down as the spatial-averaging scale increases. \begin{figure} \centering \includegraphics[width=16. cm]{shem6.eps} \caption[]{Histograms of the V profile amplitude asymmetry of the synthesized $\lambda 1564.8$~nm lines (a); scatter plots of the V profile amplitude asymmetry vs. the V profile area asymmetry (b) and the corresponding correlation curves (c); solid lines are correlation curves, dotted lines are rms deviations, and $r$ is the correlation coefficient. } \label{Fig6} \end{figure} \begin{figure} \centering \includegraphics[width=16. cm]{shem7.eps} \caption[]{Scatter plots of the V profile amplitude asymmetry of the synthesized $\lambda 1564.8$~nm lines vs. magnetic field strength (a), radial velocity (c); corresponding correlation curves (b, d). Symbol and curve coding are as in Figure 6. } \label{Fig7} \end{figure} Note that Figures 4--7 also display the statistical relations for the parameters derived from the profiles averaged over 350-km and 700-km areas which correspond to resolutions of $0.5^{\prime\prime}$ and $1^{\prime\prime}$. The number of profiles drastically diminishes as the averaging scale grows, and the statistics are less accurate, although the effects of the horizontal averaging of profiles can still be seen in the figures. In some cases the results from the averaged profiles are in better agreement with observations. \section{Disbalance and distribution of magnetic flux} Disbalance of positive and negative magnetic fluxes is an important aspect of magnetic field distribution in the solar photosphere. It is defined as $\Delta F = (F^{+} + F^{-}){/}(|F^{+}| + |F^{-}|)$ \cite{Lites02}, i.e., it is not affected by unresolved fluxes, since the spatial resolution acts in the same way on both flux components. We used this formula to calculate the magnetic flux disbalance in the model region as a function of time. The magnetic flux was calculated every minute for the 2D MHD models from the whole 30-min 2D MHD model sequence as $F = \sum_{i = 1}^{N} B_i(\log \tau_5 = 0)\,\Delta x^2$, where $i$ is the model column number, $N=112$ is the number of columns in a 2D MHD model, and $\Delta x = 35$~km is the column width. The field strengths $B_i$ at $\log \tau_5 = 0$ were taken from the data of magnetoconvection simulation \cite{Gadun00}. Figure 8 shows the $\Delta F$ variation for 30 min. The smallest disbalance $\Delta F$ is 0.001 and the greatest one is 0.32. Time variations of $\Delta F$ display oscillatory behavior. Figure~9 (top panel) shows the magnetic flux histogram derived from the whole 30-min MHD model sequence with $\Delta x = 35$~km. The disbalance is $\Delta F=-0.017$, the averaged field strength (flux density) is $\overline{B}= 0.202$~mT, and the averaged unsigned field strength is $\overline{|B|}=32$~mT. 
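In code, the flux and its disbalance reduce to a few lines; synthetic field values stand in for the simulation output.

\begin{verbatim}
import numpy as np

# Sketch: magnetic flux and flux disbalance of one 2D MHD snapshot.
# F = sum_i B_i(log tau_5 = 0) * dx^2 over the N = 112 columns, and
# Delta_F = (F_plus + F_minus) / (|F_plus| + |F_minus|).
DX = 35e3  # column width in m (35 km)

def flux_and_disbalance(b_surface_tesla):
    f_cols = b_surface_tesla * DX ** 2        # per-column flux, Wb
    f_plus = f_cols[f_cols > 0].sum()
    f_minus = f_cols[f_cols < 0].sum()        # negative number
    return f_cols.sum(), (f_plus + f_minus) / (f_plus - f_minus)

# Synthetic snapshot: 112 columns, mixed polarities, ~30 mT spread
rng = np.random.default_rng(1)
b = rng.normal(0.0, 0.03, size=112)           # Tesla
F, dF = flux_and_disbalance(b)
print(f"F = {F:.2e} Wb, Delta_F = {dF:+.3f}")
\end{verbatim}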
Figure~9 also shows our results obtained with other spatial resolutions, $\Delta x = 350$~km (middle panel) and $\Delta x = 700$~km (lower panel). The shape of the flux distribution varies substantially, while the disbalance and mean field strength show insignificant variations. We also compare our results with the measurement data on the general magnetic field (GMF) of the Sun \cite{Kotov77}, which give the longitudinal field component for the Sun as a star. According to \cite{Kotov77}, the GMF strength characterizes the excess of the magnetic flux of one polarity over the flux of the opposite polarity per unit surface of the visible disk. Furthermore, the GMF is mainly controlled by the magnetic fluxes from vast areas which are not related to active regions. The contribution of active regions to the GMF is insignificant. The fields in such vast quiet areas were called background fields. The GMF strength (or the background fields) is 0.1--0.2~mT. We obtained $\overline{B}=-0.2$~mT, i.e., an absolute value equal to the upper limit of the observed field strengths. Although the agreement is quite good, it should be treated with caution for various reasons. The measured GMF critically depends on magnetograph sensitivity and noise level. A twofold increase in the sensitivity results in nearly the same increase in the measured magnetic flux. Besides, the use of new calibration techniques in magnetic measurements also increases the measurement result (by a factor of 2.4) because old techniques did not take into consideration various factors: the saturation of lines in strong fields, when the V signal is no longer proportional to magnetic flux, the weakening of lines in regions with strong magnetic fields, and a partial compensation of the flux because of the presence of opposite polarities in the resolved surface element. In general, measurements of the total solar flux with magnetographs, which commonly have low spatial resolution, are assumed to give underestimated fluxes. Present-day measurements of the photospheric magnetic fields with substantially higher accuracy \cite{Lites02, Martin88, Wang95} reveal their complex structure. The main components of these fields are network (N) and intranetwork (IN) fields. The N fields have strengths of about 100~mT and mixed polarities; they are mainly found at the corners of convective cells of supergranules and can form clusters. At solar minimum periods the network covers the solar surface. According to current measurements, the magnetic flux from network clusters is about $2\cdot 10^{10}$--$3\cdot 10^{11}$~Wb. \begin{figure} \centering \includegraphics[width=10. cm]{shem8.eps} \caption[]{Magnetic flux disbalance calculated every minute from the 30-min 2D MHD model sequence as a function of simulation time. } \label{Fig8} \end{figure} \begin{figure} \centering \includegraphics[width=8. cm]{shem9.eps} \caption[]{Magnetic flux distribution calculated with different spatial averaging (35, 350, 700~km) for the 30-min 2D MHD model sequence. Averaged field strengths are given in mT. } \label{Fig9} \end{figure} With high sensitivity of polarimetric measurements and high spatial resolution under good seeing, weak fluxes from IN fields can be measured separately from the N fields. The IN fields are found inside the supergranulation network as big and small fragments. They are weaker and more diffuse; their fluxes are a factor of $10^2$ smaller than the fluxes from N fields. 
The mobility of IN fragments is higher; they are freer to move from the center of a supergranule to its boundaries. They can merge, annihilate one another, or interact with N fields. Being weak and having mixed polarities, the IN fields cannot penetrate to the outer chromosphere and corona. Observations of quiet regions with a spatial resolution of about $0.5^{\prime\prime}$ \cite{Lites02, Wang95} revealed that the mean flux density varied there from 0.3 to 4~mT. The IN fields contribute 0.165~mT to the mean flux density, which corresponds to a total solar flux of $10^{15}$ Wb. Since the lifetime of IN fragments is much less than one day, a flux of about $10^{16}$~Wb emerges from the Sun and disappears in the form of IN fields in one day. This flux is close to the total flux from the whole Sun observed at the maximum of solar cycle 21. According to \cite{Wang95}, the disbalance of the IN field flux is 0.08 and the mean field strength is 0.59~mT, while the IN disbalance obtained in \cite{Lites02} varies from 0.03 to 0.48 depending on the fragment observed, and it is approximately a factor of three smaller than the N disbalance. Our magnetoconvection simulation gives a mean field strength of 0.2~mT and a flux disbalance of 0.02. This is less than the lower limit of the disbalance observed with high spatial resolution. The difference seems to be caused, on the one hand, by the fact that we use the results of 2D MHD simulation instead of 3D MHD simulation and, on the other hand, by insufficient accuracy of observations because of inadequate noise level and seeing conditions. It should be noted that the extent of disbalance is also of importance in identifying the sources of weak IN fields \cite{Lites02}. The IN fields differ from the N fields not only by their properties but by their nature as well. As their disbalance is smaller, they are assumed \cite{Lites02} to originate from the local dynamo and not from the global convection circulation associated with the network fields. The authors of \cite{Stein02} ruled out the local surface dynamo as a mechanism of the IN field generation. The energy of weak small-scale fields in the surface layers grows due to field concentration rather than through the dynamo action, and the concentration of fields, their stretching and twisting are caused by convective motions. A 3D magnetoconvection simulation \cite{Stein02} demonstrated that under the conditions of strong stratification and asymmetric convective flows on the Sun a local short-period recirculation can arise near the surface. This mechanism can constantly sustain weak magnetic fields inside granules, mesogranules, and supergranules on the quiet Sun. Such a surface recirculation was also obtained in the study \cite{Ploner01} based on the 2D magnetoconvection simulation \cite{Gadun00}. The local surface short-period recirculation is favorable to rapid mixing of the fields of opposite polarities on small scales, thus diminishing the flux disbalance in the intranetwork fields. Today, the question of the nature of the magnetic IN fields remains open. \section{Conclusion} We used 2D MHD magnetogranulation models \cite{Gadun00} with relatively high numerical resolution to investigate the magnetic field distribution in the solar photosphere outside the active regions and obtained the following results. 1. At the photosphere base the magnetic fields are predominantly weak, with strengths less than 50~mT. The strength distribution peak corresponds, on the average, to 25-mT fields. 
The distribution tail is rather long, extending to 150 mT. There is another small peak at 100--110~mT, suggesting that such strengths dominate the kilogauss fields in network fluxtubes. The photospheric field distribution varies considerably with height, and this is evidence of a redistribution of the fields due to their weakening, on the one hand, and the expansion of fluxtubes, which increases the number of strong fields, on the other hand. All these data suggest that photospheric magnetic fields are a mixture of various fields, from the weakest ones to the strongest kilogauss fields of fluxtubes. Their strength can vary almost continuously from the lowest values in inclined fields in granules to the highest ones in thin vertical fluxtubes found in intergranular lanes. Alternating flux polarities produce a polarity disbalance of about $-0.02$. 2. The direct measurement of magnetic field strength from the distance between the maxima in the $\lambda$~1564.8~nm line profiles, provided a high spatial resolution ($\leq 0.5^{\prime\prime}$) is available, is a very efficient and reliable tool for fields above 50~mT, but it fails for fields below 20~mT. All weak fields below 17 mT are measured as fields of 18--20~mT. In addition, the field strengths below 50 mT can be overestimated by 2--4~mT due to the effects of inclined fields which are ignored in this method. So, the relative distribution of fields above 50~mT acquired with the use of the $\lambda$~1564.8~nm line with sufficiently high spatial resolution is close to the actual distribution and can serve as a standard in testing other spectral lines with this method or in testing other methods. The number of fields of 20--50~mT is always overestimated, and the existence of fields below 20~mT in quiet solar regions remains unnoticed in these distributions. Even with the complete resolution of magnetic fields this method applied to the $\lambda$~1564.8~nm line, which is very sensitive to magnetic field, is unsuitable for measuring very weak fields below 20~mT. 3. The inverse methods in which the strengths of weak magnetic fields are determined from the Stokes V profiles of the visible lines like $\lambda$~630.2~nm also seem to overestimate the field strengths as compared to the methods based on the IR lines like $\lambda$~1564.8~nm. A possible reason is the weak magnetic sensitivity of $\lambda$~630.2~nm to fields below 150~mT, on the one hand, and its high sensitivity to magnetic vector inclination, on the other hand. It is difficult to separate the field strength effect from the field inclination effect on the V profile shape and the distance between profile peaks without making recourse to the Q and U profiles. The field inclination effect for the $\lambda$~630.2~nm line is nearly 10 times stronger than for the $\lambda$~1564.8~nm line. Therefore, the field distributions found with the use of the $\lambda$~630.2~nm line for quiet photospheric regions, where the fields are predominantly inclined, are less reliable, especially for subkilogauss fields, compared to the distributions found with the $\lambda$~1564.8~nm line. The high sensitivity of $\lambda$~630.2~nm to magnetic vector inclination can be successfully used for the diagnostics of this field parameter. {\bf Acknowledgements.} We wish to thank S. Solanki and E. Khomenko for making available the observation data on magnetic field distributions. This study was carried out within the framework of an international research program and was made possible in part by the INTAS Grant No. 0084.
\section{Introduction} Markov random fields (MRFs) are widely used in a variety of applications as models for high-dimensional data. The primary reasons are interpretability of the model, whereby edges between variables indicate direct interaction, and efficiency of carrying out inference tasks such as computation of marginals or posteriors. Both of these objectives are helped by \emph{sparsity} of the model: edges can more easily be assigned meaning if there are few of them, and each update step in inference algorithms such as belief propagation or the Gibbs sampler requires computation depending on the degrees of the nodes. (While each update or iteration can be carried out more efficiently in a sparse model, it is not clear how to compare the number of iterations needed. In general, carrying out inference tasks is computationally hard even in bounded-degree models~\cite{sly2010computational}.) This paper takes a first step towards understanding what properties of an MRF with many edges can be captured by a model with far fewer edges. We focus on the Ising model, the canonical binary pairwise graphical model. Originally introduced by statistical physicists to study phase transitions in magnetic materials~\cite{ising1925beitrag,brush1967history}, these distributions capture rich dependence structure and are widely used in a variety of applications including for modeling images, neural networks, voting data and biological networks~\cite{banerjee2008model,greig1989exact, schneidman2006weak}. The Ising model assigns to each configuration $x\in \{-1,+1\}^n$ probability $$ p(x)= \frac1Z \exp\big({\tfrac{1}{2}x^\intercal J x}\big)\,, $$ where $J\in \mathbb{R}^{n\times n}$ is a symmetric matrix of interactions and the partition function $Z$ normalizes the distribution. The support of the interaction matrix $J$ is represented by a graph $G_J=([n], E_J)$ with $\{i,j\}\in E_J$ if and only if $J_{ij}\neq 0$. The Curie-Weiss model at `inverse temperature' $\beta$ is the Ising model on the complete graph with all off-diagonal entries of the interaction matrix $J$ equal to~$\frac{\beta}{n}$. Sparsification of graphs~\cite{spielman2011spectral,batson2013spectral} has in recent years had a large impact in theoretical computer science. The notion of approximation in that literature is spectral: given a graph with Laplacian $L$, the objective is to find a sparser graph with Laplacian $M$ such that $x^\intercal L x \approx x^\intercal M x$ for all $x$. The Ising model sufficient statistic, $x^\intercal J x$, is thus approximately preserved by spectral graph sparsification, but it is not clear how this translates to any sort of notion of nearness of the \emph{distributions} of corresponding Ising models, because of their inherent non-linearity. In this paper we initiate the study of the interplay between spectral approximation of graphs and Ising models by showing that low-order moments of the Curie-Weiss model (Ising model on the complete graph with uniform edge-weights) are accurately represented by expander graphs (which are spectral approximations of the complete graph). As discussed in~\cite{bresler2016learning}, low-order moments capture the probabilistic content of a model relevant to the machine learning task of making predictions based on partial observations. Our main result shows that $k$th order moments in the Curie-Weiss model are approximated to average accuracy $k/\sqrt{d}$ by $d$-regular approximate Ramanujan graphs (and more generally to average accuracy $k\epsilon$ by $\epsilon$-expander graphs).
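As an illustrative aside (not part of the formal development), the spectral phenomenon behind this result is easy to observe numerically: a random $d$-regular graph is nearly Ramanujan, and its scaled adjacency matrix is close in spectral norm to that of the complete graph. The following minimal sketch assumes the Python libraries \texttt{numpy} and \texttt{networkx}; all parameter values are arbitrary choices made for the illustration.
\begin{verbatim}
# Illustrative check, assuming numpy and networkx are available.
import numpy as np
import networkx as nx

n, d, beta = 200, 10, 1.2
B = nx.to_numpy_array(nx.random_regular_graph(d, n, seed=0))  # adjacency of G_d
A = np.ones((n, n)) - np.eye(n)                               # adjacency of K_n

lam = np.sort(np.linalg.eigvalsh(B))      # lam[-1] = d (all-ones eigenvector)
eps = max(abs(lam[0]), abs(lam[-2])) / d  # smallest eps with |lambda_i| <= eps*d
print(eps, 2 * np.sqrt(d - 1) / d)        # eps is close to the Ramanujan value

# spectral-norm distance between the two interaction matrices compared below
print(np.linalg.norm(beta / n * A - beta / d * B, 2), beta * (eps + 1 / n))
\end{verbatim}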
\begin{theorem}[Informal version of Theorem~\ref{main_application}] The $k$th order moments of the Curie-Weiss model on $n$ nodes with inverse temperature $\beta$ are approximated to within average error $kC(\beta)/\sqrt{d}$ by an Ising model on a $d$-regular approximate Ramanujan graph. \end{theorem} We note that random regular graphs are known to be approximately Ramanujan with high probability. The proof is based on a coupling argument together with the abstract comparison technique developed in this paper; in order to deal with the low-temperature regime where Glauber dynamics mixes slowly, we use the restricted dynamics studied in~\cite{levin2010glauber}. A much weaker bound can be obtained via the Gibbs variational principle, and we outline that method in Section~\ref{s:naive}. The techniques developed in the paper are likely to be of independent interest because of their applicability to other models, but we do not pursue that here. We frame our basic goal as that of comparing the expectations of a Lipschitz function under two distributions, and to that end we prove a bound in terms of nearness of \emph{Markov kernels} with desired stationary distributions. Specifically, our main abstract result, Theorem~\ref{main_theorem}, is stated in terms of the Glauber dynamics for the two distributions. We prove this theorem in Section~\ref{s:abstractResult}. The technique is based on Stein's method, which we review briefly in Section~\ref{s:prelim} along with relevant background on the Glauber dynamics and the Poisson equation. For any distribution $\mu(\cdot)$ over $\{-1,1\}^n$, we denote by $\mu_i(\cdot|x^{(\sim i)})$ the conditional distribution of the $i$th coordinate when the value of every other coordinate (denoted by $x^{(\sim i)}$) is fixed. \begin{theorem}[Short version of Theorem~\ref{main_theorem}]\label{t:main_short} Let $\mu$ and $\nu$ be probability measures on $\Omega = \{-1,+1\}^n$. Let $P$ be the kernel of Glauber dynamics with respect to $\mu$. Let $f : \Omega \to \mathbb{R}$ be any function and let $h:\{-1,1\}^n\to \mathbb{R}$ be a solution to the Poisson equation $h - Ph = f - \mathbb{E}_\mu f$. Then \begin{equation} |\mathbb{E}_\mu f - \mathbb{E}_\nu f| \leq \mathbb{E}_{\nu}\Big(\frac{1}{n}\sum_{i=1}^{n} |\Delta_i(h)| \cdot \|\mu_i(\cdot|x^{(\sim i)})-\nu_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}}\Big)\,, \end{equation} where $\Delta_i(h)$ is the discrete derivative of $h$ along the coordinate $i$. \end{theorem} If $P$ is contractive and $f$ is Lipschitz, then we get a simplified bound, given in Theorem~\ref{main_theorem}. Aside from applying the technique to prove Theorem~\ref{main_application} on approximation of Ising moments, we state a result in Subsection~\ref{ss:Dobrushin} comparing functionals of an Ising model with a perturbed Ising model when one of them has sufficiently weak interactions (specifically, we require a condition similar to, though slightly weaker than, Dobrushin's uniqueness condition). \begin{remark} The same result as stated in Theorem~\ref{t:main_short}, with a similar proof, was discovered independently in~\cite{reinert2017}. Their main application is to compare exponential random graphs with Erd\H{o}s-R\'enyi random graphs, whereas we use it to compare Ising models to the Curie-Weiss model. For added transparency we have coordinated the submissions of our two papers. \end{remark} We briefly outline the rest of the paper. Section 2 reviews Stein's method, the Poisson equation, Glauber dynamics, and motivates our technique. 
Section 3 states and proves the main abstract result. Section~\ref{s:IsingApproximation} contains the application to Ising models with weak interactions and our result on approximation of moments of the Curie-Weiss model by those of Ising models on expanders. The proof of the former is in Section~\ref{s:contractProof} and of the latter in Sections~\ref{s:ProofIdeas} and~\ref{s:IsingProof}. We remark that several papers consider the problem of \emph{testing} various properties of an Ising model from samples, such as whether the variables are jointly independent, equal to a known Ising model, etc. \cite{daskalakis2016testing,daskalakis2017concentration,gheissari2017concentration}. The problem of testing between dense and sparse Ising models is studied in \cite{bresler2018optimal}. \section{Preliminaries} \label{s:prelim} \subsection{Stein's Method} Stein's method was first introduced by Charles Stein in his famous paper \cite{stein1972bound} to prove distributional convergence of sums of random variables to a normal random variable even in the presence of dependence. The method gives explicit Berry-Esseen-type bounds for various probability metrics. The method has since been used to prove distributional convergence to a number of distributions including the Poisson distribution \cite{chen1975poisson}, the exponential distribution \cite{chatterjee2011exponential,Fulman2013} and the $\beta$ distribution \cite{goldstein2013stein,dobler2015stein}. See~\cite{ross2011fundamentals} for a survey of Stein's method; we give a brief sketch. Consider a sequence of random variables $Y_n$ and a random variable $X$. Stein's method is a way to prove distributional convergence of $Y_n$ to $X$ with explicit upper bounds on an appropriate probability metric (Kolmogorov-Smirnov, total variation, Wasserstein, etc.). This involves the following steps: \begin{enumerate} \item Find a characterizing operator $\mathcal{A}$ for the distribution of $X$, which maps functions $h$ over the state space of $X$ to give another function $\mathcal{A}h$ such that $$\mathbb{E}[ \mathcal{A}(h)(X)] = 0 \,.$$ Additionally, if $\mathbb{E}\mathcal{A}(h)(Y) = 0$ for a large enough class of functions $h$, then $Y \stackrel{d}{=} X$. Therefore the operator $\mathcal{A}$ is called a `characterizing operator'. \item For an appropriate class of functions $\mathcal{F}$ (depending on the desired probability metric), one solves the Stein equation $$\mathcal{A}h = f - \mathbb{E}f(X)$$ for arbitrary $f \in \mathcal{F}$. \item By bounding $|\mathbb{E}f(Y_n) - \mathbb{E}f(X)|$ in terms of $\mathbb{E}\mathcal{A}(h)(Y_n)$, which is shown to tend to zero, it follows that $Y_n \stackrel{d}{\to} X$. \end{enumerate} The procedure above is often carried out via the method of exchangeable pairs (as done in Stein's original paper \cite{stein1972bound}; see also the survey~\cite{ross2011fundamentals} for details). An exchangeable pair $(Y_n, Y_n^{\prime})$ is constructed such that $Y_n^{\prime}$ is a small perturbation of $Y_n$ (which can be a step in some reversible Markov chain). Bounding the distance between $X$ and $Y_n$ then typically reduces to bounding how far $Y_n^{\prime}$ is from $Y_n$ in expectation. Since reversible Markov chains naturally give characterizing operators as well as `small perturbations', we formulate our problem along these lines. \subsection{Markov Chains and the Poisson Equation} In this paper, we only deal with finite state reversible and irreducible Markov chains.
Basic definitions and methods can be found in \cite{levin2009markov} and \cite{aldous2000reversible}. Henceforth, we use the notation in \cite{levin2009markov} for our exposition on Markov chains. Let $P$ be an irreducible Markov kernel and $\mu$ be its unique stationary distribution. We denote by $\mathbb{E}_\mu$ the expectation with respect to the measure $\mu$. It will be convenient to use functional analytic notation in tandem with probability theoretic notation for expectation, for instance replacing $\mathbb{E}g(X)$ for a variable $X\sim \mu$ by $\mathbb{E}_\mu g$. Given a function $f : \Omega \to \mathbb{R}$, we consider the following equation called the Poisson equation: \begin{equation} h - Ph = f - \mathbb{E}_{\mu}f \,. \label{poisson_equation} \end{equation} By definition of stationary distribution, $\mathbb{E}_{\mu}(h -Ph) = 0$. By uniqueness of the stationary distribution, it is clear that for any probability distribution $\eta$ over the same state space as $\mu$, $\mathbb{E}_{\eta}(h -Ph) = 0$ for all $h$ only if $\mu = \eta$. Therefore, we will use Equation~\eqref{poisson_equation} as the Stein equation and the operator $I-P$ as the characterizing operator for $\mu$. The Poisson equation was used in \cite{chatterjee2005concentration} to show sub-Gaussian concentration of Lipschitz functions of weakly dependent random variables using a variant of Stein's method. For the finite state, irreducible Markov chains we consider, solutions can be easily shown to exist in the following way: The Markov kernel $P$ can be written as a finite stochastic matrix and functions over the state space as column vectors. We denote the pseudo-inverse of the matrix $I-P$ by $(I-P)^\dagger$, and one can verify that $h = (I-P)^\dagger(f-\mathbb{E}_{\mu}f)$ is a solution to~\eqref{poisson_equation}. The solution to the Poisson equation is not unique: if $h(x)$ is a solution, then so is $h(x)+a$ for any $a \in \mathbb{R}$. We refer to the review article by Makowski and Schwartz in \cite{feinberg2012handbook} and references therein for material on solutions of the Poisson equation on finite state spaces. We call the solution $h$ given in the following lemma the \emph{principal solution} of the Poisson equation. See \cite{feinberg2012handbook} for the proof. \begin{lemma}\label{principal_solution} Let the sequence of random variables $(X_i)_{i=0}^{\infty}$ be a Markov chain with transition kernel $P$. Suppose that $P$ is a finite state irreducible Markov kernel with stationary distribution~$\mu$. Then the Poisson equation~\eqref{poisson_equation} has the following solution: $$h(x) = \sum_{t=0}^{\infty} \mathbb{E}\left[f(X_t) - \mathbb{E}_{\mu}f \mid X_0 = x\right] \ .$$ \end{lemma} \subsection{Glauber Dynamics and Contracting Markov Chains}\label{ss:glauber} Given $x \in \Omega= \{-1,+1\}^n$, let $x^{(\sim i)}$ be the values of $x$ except at the $i$th coordinate. For any probability measure $p(\cdot)$ over $\Omega$ such that $p(x^{(\sim i)}) > 0$, we let $p_i(\cdot|x^{(\sim i)})$ denote the conditional distribution of the $i$th coordinate given that the rest equal $x^{(\sim i)}$. We also denote by $x^{(i,+)}$ (and $x^{(i,-)}$) the vectors obtained by setting the $i$th coordinate of $x$ to be $1$ (and $-1$). For any real-valued function $f$ over $\Omega$, denote the discrete derivative over the $i$th coordinate by $\Delta_i(f) := f\left(x^{(i,+)}\right) - f\left(x^{(i,-)}\right)$.
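As an aside, Lemma~\ref{principal_solution} and the pseudo-inverse construction above can be checked numerically on a toy chain before specializing to the Glauber dynamics. The sketch below is purely illustrative (the kernel, the test function, and the use of Python's \texttt{numpy} are our own choices); the two solutions agree up to the constant shift noted above.
\begin{verbatim}
# Illustrative check of the principal solution, assuming numpy.
import numpy as np

rng = np.random.default_rng(0)
k = 5
P = rng.random((k, k)); P /= P.sum(axis=1, keepdims=True)  # irreducible kernel
f = rng.random(k)

w, V = np.linalg.eig(P.T)                     # stationary distribution mu
mu = np.real(V[:, np.argmin(np.abs(w - 1))]); mu /= mu.sum()

rhs = f - mu @ f
h_pinv = np.linalg.pinv(np.eye(k) - P) @ rhs  # pseudo-inverse solution

h_series = np.zeros(k); Pt = np.eye(k)        # principal solution:
for _ in range(2000):                         #   sum_t P^t (f - E_mu f)
    h_series += Pt @ rhs
    Pt = Pt @ P

# both solve (I - P)h = f - E_mu f; they can differ only by a constant
print(np.allclose(h_pinv - h_pinv.mean(), h_series - h_series.mean()))
\end{verbatim}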
Given a probability measure $p$ over a product space $\mathcal{X}^n$, the \emph{Glauber Dynamics} generated by $p(\cdot)$ is the following Markov chain: \begin{enumerate} \item Given current state $X \in \mathcal{X}^n$, pick $I \in [n]$ uniformly and independently. \item Pick the new state $X^{\prime}$ such that $(X^{\prime})^i = X^i$ for all $i\neq I$. \item The $I$th coordinate $(X^{\prime})^I$ is obtained by resampling according to the conditional distribution $p_I(\cdot|X^{(\sim I)})$. \end{enumerate} All the Glauber dynamics chains considered in this paper are irreducible, aperiodic, reversible and have the generating distribution as the unique stationary distribution. Denote the Hamming distance by $d_H(x,y) = \sum_{i=1}^n \mathds{1}_{x^i \neq y^i}$. Consider two Markov chains $(X_t)$ and $(Y_t)$ evolving according to the same Markov transition kernel $P$ and with different initial distributions. Let $\alpha \in [0,1)$. We call the Markov kernel $P$ $\alpha$-contractive (with respect to the Hamming metric) if there exists a coupling between the chains such that $\mathbb{E}[d_H(X_{t},Y_{t})|X_0 =x,Y_0 =y] \leq \alpha^t d_H(x,y)$ for all $t \in \mathbb{N}$. \section{The Abstract Result}\label{s:abstractResult} Given two real-valued random variables $W_1$ and $W_2$, the $1$-Wasserstein distance between their distributions is defined as $$d_W(W_1,W_2) = \sup_{g \in 1\text{-Lip}} \mathbb{E}g(W_1) - \mathbb{E}g(W_2)\,.$$ Here the supremum is over 1-Lipschitz functions $g:\mathbb{R} \to \mathbb{R}$. \begin{theorem}[The abstract result]\label{main_theorem} Let $\mu$ and $\nu$ be probability measures on $\Omega = \{-1,+1\}^n$ with Glauber dynamics kernels $P$ and $Q$, respectively. Additionally, let $P$ be irreducible. Let $f : \Omega \to \mathbb{R}$ be any function and let $h$ be a solution to the Poisson equation~\eqref{poisson_equation}. Then \begin{equation} |\mathbb{E}_\mu f - \mathbb{E}_\nu f| \leq \mathbb{E}_{\nu}\Big(\frac{1}{n}\sum_{i=1}^{n} |\Delta_i(h)| \cdot \|\mu_i(\cdot|x^{(\sim i)})-\nu_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}}\Big)\,. \label{main_bound} \end{equation} Furthermore, if $P$ is $\alpha$-contractive and the function $f$ is $L$-Lipschitz with respect to the Hamming metric, then \begin{equation} |\mathbb{E}_\mu f - \mathbb{E}_\nu f| \leq \frac{L}{(1-\alpha)}\mathbb{E}_{\nu}\Big(\frac{1}{n}\sum_{i=1}^{n} \|\mu_i(\cdot|x^{(\sim i)})-\nu_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}}\Big)\,. \label{contractive_chain_bound} \end{equation} If $Z_\mu \sim \mu$ and $Z_\nu \sim \nu$, then \begin{equation} d_W(f(Z_\mu),f(Z_\nu)) \leq \frac{L}{(1-\alpha)}\mathbb{E}_{\nu}\Big(\frac{1}{n}\sum_{i=1}^{n} \|\mu_i(\cdot|x^{(\sim i)})-\nu_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}}\Big)\,. \label{contractive_chain_wasserstein_bound} \end{equation} \end{theorem} \begin{proof} To begin, since $\nu$ is stationary for $Q$, $\mathbb{E}_\nu h = \mathbb{E}_\nu Qh$. Taking expectation with respect to $\nu$ in~\eqref{poisson_equation}, we get \begin{equation} \mathbb{E}_\nu (Q-P)h = \mathbb{E}_\nu f - \mathbb{E}_\mu f\,. \label{poisson_expectation} \end{equation} By definition of the Glauber dynamics, \begin{align} (Q-P)h &= \frac{1}{n}\sum_{i=1}^{n} \Big(h(x^{(i,+)})\nu_i(1|x^{(\sim i)}) + h(x^{(i,-)})\nu_i(-1|x^{(\sim i)})\nonumber \\ &\qquad \qquad \quad - h(x^{(i,+)})\mu_i(1|x^{(\sim i)}) - h(x^{(i,-)})\mu_i(-1|x^{(\sim i)})\Big) \nonumber\\ &= \frac{1}{n}\sum_{i=1}^{n} \Delta_i(h)\big(\nu_i(1|x^{(\sim i)}) - \mu_i(1|x^{(\sim i)}) \big) \,. 
\label{tensorisation_glauber} \end{align} Combining \eqref{poisson_expectation} and \eqref{tensorisation_glauber}, along with the triangle inequality, yields~\eqref{main_bound}. To prove~\eqref{contractive_chain_bound}, it is sufficient to show that if $f$ is $L$-Lipschitz and $P$ is $\alpha$-contractive, then $|\Delta_i(h)| \leq \frac{L}{1-\alpha}$. This we achieve using Lemma~\ref{principal_solution}. Let $(X_t)$, $(Y_t)$ be Markov chains evolving with respect to the kernel $P$, coupled such that they are $\alpha$-contractive. Then, \begin{align*} |\Delta_i(h)(x)| &= \biggr|\sum_{t=0}^{\infty} \mathbb{E}\left[f(X_t) - f(Y_t) \mid X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\right] \biggr|\\ &\leq \sum_{t=0}^{\infty} \mathbb{E}\left[L d_H(X_t,Y_t) \mid X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\right] \\ &\leq L\sum_{t=0}^{\infty} \alpha^t \\ &= \frac{L}{1-\alpha}\,. \end{align*} Let $g: \mathbb{R} \to \mathbb{R}$ be any 1-Lipschitz function. Let $h_g$ be the solution to the Poisson equation $h_g - Ph_g = g\circ f - \mathbb{E}_\mu (g\circ f)$. To prove Equation~\eqref{contractive_chain_wasserstein_bound}, it is sufficient (by definition of Wasserstein distance) to show that for any 1-Lipschitz function $g$, $|\Delta_i(h_g)| \leq \frac{L}{1-\alpha}$. By Lemma~\ref{principal_solution}, \begin{align*} |\Delta_i(h_g)(x)| &= \biggr|\sum_{t=0}^{\infty} \mathbb{E}\left[g\circ f(X_t) - g\circ f(Y_t) \mid X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\right] \biggr|\\ &\leq \sum_{t=0}^{\infty} \mathbb{E}\left[|f(X_t) - f(Y_t)| \mid X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\right]\\ &\leq \sum_{t=0}^{\infty} \mathbb{E}\left[L d_H(X_t,Y_t) \mid X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\right] \,. \end{align*} The bound from the previous display now gives the result. \end{proof} Roughly speaking, according to Theorem~\ref{main_theorem}, if $\frac{1}{n}\sum_{i=1}^{n}\|\mu_i(\cdot|x^{(\sim i)})-\nu_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}}$ is small and $|\Delta_i(h)|$ is not too large, then $\mathbb{E}_\mu f \approx \mathbb{E}_\nu f$. The quantity $|\Delta_i(h)|$ is assured to be small if $f$ is Lipschitz and the chain is contractive, and this gives us a bound on the Wasserstein distance. In our main application we deal with chains which are not contractive everywhere and we use the stronger bound~\eqref{main_bound} to obtain results similar to~\eqref{contractive_chain_bound} and~\eqref{contractive_chain_wasserstein_bound}. \section{Ising Model and Approximation Results} \label{s:IsingApproximation} \subsection{Ising model} \label{subsec:ising_model_def} We now consider the Ising model. The \emph{interaction matrix} $J$ is a real-valued symmetric $n \times n$ matrix with zeros on the diagonal. Define the Hamiltonian $\mathcal{H}_J: \{-1,1\}^n \to \mathbb{R}$ by $$\mathcal{H}_J(x) = \frac{1}{2}x^{\intercal}Jx\,.$$ Construct the graph $G_J = ([n],E_J)$ with $(i,j) \in E_J$ iff $J_{ij} \neq 0$. An Ising model over the graph $G_J$ with interaction matrix $J$ is the probability measure $\pi$ over $\{-1,1\}^n$ such that $\pi(x) \propto \exp{(\mathcal{H}_J(x))}$. We call the Ising model ferromagnetic if $J_{ij} \geq 0$ for all $i,j$. For any simple graph $G = ([n],E)$ there is associated a symmetric $n\times n$ adjacency matrix $\mathcal{A}(G) = (\mathcal{A}_{ij})$, where \begin{equation*} \mathcal{A}_{ij} = \begin{cases} 1 &\text{if $(i,j) \in E$} \\ 0 &\text{otherwise}\,. \end{cases} \end{equation*} Let $K_n$ be the complete graph over $n$ nodes; we will use $A$ to denote its adjacency matrix.
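For very small $n$, the moments of an Ising model can be computed exactly by enumerating all $2^n$ configurations, which is convenient for sanity-checking the approximation results below. The following sketch is illustrative only; the helper name and the use of \texttt{numpy} are our own choices.
\begin{verbatim}
# Illustrative brute-force pairwise moments; feasible only for small n.
import itertools
import numpy as np

def ising_pair_correlations(J):
    n = J.shape[0]
    X = np.array(list(itertools.product([-1, 1], repeat=n)))  # all 2^n states
    w = np.exp(0.5 * np.einsum('si,ij,sj->s', X, J, X))       # exp(H_J(x))
    p = w / w.sum()                                           # normalize to pi(x)
    return np.einsum('s,si,sj->ij', p, X, X)                  # E[x^i x^j]

n, beta = 8, 1.2
A = np.ones((n, n)) - np.eye(n)              # complete-graph adjacency matrix
rho = ising_pair_correlations(beta / n * A)  # uniform interactions beta/n
\end{verbatim}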
The \emph{Curie-Weiss model} at inverse temperature $\beta > 0$ is an Ising model with interaction matrix $\frac{\beta}{n}A$. It is known that the Curie-Weiss model undergoes a phase transition at $\beta = 1$~\cite{ellis2007entropy}. We henceforth denote by $\mu$ the Curie-Weiss model at inverse temperature $\beta$. We will compare Ising models on the complete graph to those on a $d$-regular graph $G_d = ([n],E_d)$ (i.e., every node has degree $d$). Let $B$ denote the adjacency matrix of $G_d$. Given inverse temperature $\beta$, we take $\nu$ to be the Ising model with interaction matrix $\frac{\beta}{d}B$. \subsection{Expander Graphs} We recall that $A$ is set to be the adjacency matrix of $K_n$. The all-ones vector $\mathbf{1}=[1,1,\dots,1]^{\intercal}$ is an eigenvector of $A$ with eigenvalue $n-1$. It is also an eigenvector of $B$ with eigenvalue $d$. $B$ has the following spectral decomposition, where the unit vectors $v_i$ are mutually orthogonal and orthogonal to $\mathbf{1}$: \begin{equation} B = \frac{d}{n}\mathbf{1}\mathbf{1}^{\intercal} + \sum_{i=2}^{n}\lambda_i v_iv_i^{\intercal}\,. \label{spectrum_1} \end{equation} Because of the degeneracy of the eigenspaces of $A$ (every vector orthogonal to $\mathbf{1}$ is an eigenvector of $A$ with eigenvalue $-1$), we can write: \begin{equation} A = \frac{n-1}{n}\mathbf{1}\mathbf{1}^{\intercal} - \sum_{i=2}^{n}v_iv_i^{\intercal}\,. \label{spectrum_2} \end{equation} Let $\epsilon \in (0,1)$. We call the graph $G_d$ an $\epsilon$-expander if the eigenvalues $\lambda_2,\dots,\lambda_n$ of its adjacency matrix $B$ satisfy $|\lambda_i| \leq \epsilon d$. Henceforth, we assume that $G_d$ is an $\epsilon$-expander. Then, from~\eqref{spectrum_1} and~\eqref{spectrum_2} we conclude that \begin{equation} \|\tfrac{\beta}{n}A-\tfrac{\beta}{d}B\|_2 \leq \beta (\epsilon + \tfrac{1}{n})\,. \label{norm_bound} \end{equation} Expanders have been extensively studied and used in a variety of applications. There are numerous explicit constructions for expander graphs. A famous result by Alon and Boppana \cite{nilli1991second} shows that $\epsilon \geq 2\frac{\sqrt{d-1}}{d} - o(1)$ for any $d$-regular graph, where the $o(1)$ term vanishes as $n \to \infty$. A family of $d$-regular graphs with an increasing number of nodes is called \emph{Ramanujan} if $\epsilon$ approaches $2\frac{\sqrt{d-1}}{d}$ asymptotically. A $d$-regular graph over $n$ nodes is said to be $\delta$-approximately Ramanujan if $\epsilon = 2\frac{\sqrt{d-1} + \delta}{d}$. \cite{friedman2008proof} shows that for every $\delta > 0$, a random $d$-regular graph is $\delta$-approximately Ramanujan with probability tending to $1$ as $n \to \infty$. Our main result in Subsection~\ref{ss:mainIsingMoments} is a bound on the difference of low-order moments of $\mu$ and $\nu$. Before discussing this, we warm up by applying our method to Ising models in the contracting regime. \subsection{Approximation of Ising Models under Dobrushin-like Condition} \label{ss:Dobrushin} In Theorem~\ref{t:contractIsing} below, we use the fact that Ising models contract when the interactions are weak enough to prove bounds on the Wasserstein distance between functionals of two Ising models. Given $x,y \in \Omega$, let $\Delta_{x,y}$ denote the column vector with elements $\frac{1}{2}|x_i - y_i| = \mathds{1}_{x_i \neq y_i}$. Let $|L|$ be the matrix with entries $(|L|)_{ij} = |L_{ij}|$ equal to the absolute values of entries of $L$. The Ising model with interaction matrix $L$ is then said to satisfy the \emph{Dobrushin-like condition} if $\|(|L|)\|_2 <1$. Essentially the same condition was used in \cite{hayes2006simple} and \cite{chatterjee2005concentration}.
This contrasts with the classical Dobrushin condition, which requires that $\|(|L|)\|_{\infty} < 1$~\cite{dobrushin1970prescribing,weitz2005combinatorial,georgii2011gibbs}. In both the Curie-Weiss model with interaction matrix $\frac \beta n A$ and the Ising model on a $d$-regular graph with interaction matrix $\frac \beta d B$, the Dobrushin-like condition as well as the classical Dobrushin condition are satisfied if and only if $\beta <1$. \begin{remark} We state these conditions in terms of the Ising interaction matrix, but in general they use the so-called dependence matrix. We briefly describe the connection. Given a measure $\pi$ over $\Omega$, the matrix $D = (d_{ij})$ is a dependence matrix for $\pi$ if for all $x,y \in \Omega$, $ \|\pi_i(\cdot| x^{(\sim i)}) - \pi_i(\cdot| y^{(\sim i)})\|_{\mathsf{TV}} \leq \sum_{j=1}^{n} d_{ij}\mathds{1}_{x^j \neq y^j}\,. $ The measure $\pi$ satisfies the Dobrushin condition with dependence matrix $D$ if $\|D\|_\infty < 1$. If $\pi$ is an Ising model with interaction matrix $J$, then $\pi_i(x_i =1|x^{(\sim i)}) = \frac12(1+\tanh{J_i^{\intercal}x})$ (here $J_i^{\intercal}$ is the $i$th row of $J$). Therefore, $ \|\pi_i(\cdot| x^{(\sim i)}) - \pi_i(\cdot| y^{(\sim i)})\|_{\mathsf{TV}} = \frac{1}{2} |\tanh{J_i^{\intercal}x} - \tanh{J_i^{\intercal}y}| \leq \sum_{j=1}^{n} |J_{ij}| \mathds{1}_{x^j \neq y^j} $ and we can consider $|J|$ as the dependence matrix. \end{remark} For $a\in (\mathbb{R}^+)^n$, let $f: \Omega \to \mathbb{R}$ be any function such that $\forall \ x,y \in \Omega$, $$ |f(x) - f(y)| \leq \sum_{i=1}^n a_i \mathds{1}_{x_i \neq y_i} = a^{\intercal}\Delta_{x,y}\,. $$ We call such a function $f$ an $a$-Lipschitz function. \begin{theorem}\label{t:contractIsing} Let $a \in (\mathbb{R}^{+})^n$ and let $f:\Omega \to \mathbb{R}$ be an $a$-Lipschitz function. If an interaction matrix $L$ (with corresponding Ising measure $\pi_L$) satisfies the Dobrushin-like condition, then for any other interaction matrix $M$ (with corresponding Ising measure $\pi_M$), $$|\mathbb{E}_{\pi_L} f - \mathbb{E}_{\pi_M} f| \leq \frac{\|a\|_2 \sqrt{n}}{2(1-\|(|L|)\|_2)} \|L-M\|_2\,.$$ \end{theorem} The proof, given in Section~\ref{s:contractProof}, uses ideas from Section~4.2 in~\cite{chatterjee2005concentration}, which proves results on concentration of Lipschitz functions of weakly dependent random variables. A simple consequence of this theorem is that when $\|(|L|)\|_2 < 1$, the Ising model is stable in the Wasserstein distance sense under small changes in inverse temperature. \begin{corollary} Let $M = (1+ \epsilon)L$. Then, for any $a$-Lipschitz function $f$, $$ |\mathbb{E}_{\pi_L}f - \mathbb{E}_{\pi_M}f| \leq \epsilon\|a\|_2 \sqrt{n}\frac{\|L\|_2}{2(1-\|(|L|)\|_2)} \,.$$ \end{corollary} If $f$ is $\tfrac{1}{n}$-Lipschitz in each coordinate then $\|a\|_2 = \frac{1}{\sqrt{n}}$ (typical statistics like magnetization fall into this category). We conclude that for such functions $$ |\mathbb{E}_{\pi_L}f - \mathbb{E}_{\pi_M}f| \leq \frac{\epsilon\|L\|_2}{2(1-\|(|L|)\|_2)} \,.$$ \subsection{Main Result on Approximation of Ising Model Moments} \label{ss:mainIsingMoments} Let $\rho_{ij} = \mathbb{E}_{\mu} x^ix^j$ and $\tilde{\rho}_{ij} = \mathbb{E}_{\nu} x^ix^j$ denote the pairwise correlations in the two Ising models $\mu$ and $\nu$.
It follows from Griffiths' inequality~\cite{griffiths1967correlations} for ferromagnetic Ising models that for any $i$ and $j$, $$ 0\leq \rho_{ij} \leq 1 \quad \text{and}\quad 0 \leq \tilde{\rho}_{ij} \leq 1\,.$$ If two Ising models have the same pairwise correlations for every $i,j \in [n]$, then they are identical. For an Ising model $\eta$ with interaction matrix $J$, it is also not hard to show that if there are no paths between nodes $i$ and $j$ in the graph $G_J$, then $x^i$ and $x^j$ are independent and $\mathbb{E}_{\eta} [x^ix^j] = 0$. We refer to \cite{wainwright2008graphical} for proofs of these statements. We conclude that ${{{n}\choose{2}}}^{-1} \sum_{ij} |\rho_{ij} - \tilde{\rho}_{ij}|$ defines a metric on the space of Ising models over $n$ nodes. For positive even integers $k$, we denote the $k$th order moments for $i_1,\dots,i_k \in [n]$ by $$\rho^{(k)}[i_1,\dots,i_k] = \mathbb{E}_\mu\Big(\prod_{s=1}^k x^{i_s}\Big)$$ and similarly for $\tilde \rho^{(k)}[i_1,\dots,i_k]$, but with $\mu$ replaced by $\nu$. For a set $R = \{i_1,\dots,i_k\}$, we write $\rho^{(k)}[R]$ in place of $\rho^{(k)}[i_1,\dots,i_k]$. (We consider only even $k$, since odd moments are zero for Ising models with no external field.) Using Theorem~\ref{main_theorem}, we show the following approximation result on nearness of moments of the Curie-Weiss model and those of the Ising model on a sequence of regular expanders. \begin{theorem} \label{main_application} Let $A$ be the adjacency matrix of the complete graph and let $B$ be the adjacency matrix of a $d$-regular $\epsilon$-expander, both on $n$ nodes. Let the inverse temperature $\beta > 1$ be fixed, and consider the Ising models with interaction matrices $\frac\beta n A$ and $\frac \beta d B$, with moments $\rho$ and $\tilde \rho$ as described above. There exist positive constants $\epsilon_0(\beta)$ and $C(\beta)$ depending only on $\beta$ such that if $\epsilon < \epsilon_0(\beta)$, then for any even positive integer $k < n$, $$\frac{1}{{{n}\choose{k}}} \sum_{\substack{R \subset [n] \\ |R| = k}}\big|\rho^{(k)}[R] - \tilde{\rho}^{(k)}[R]\big| \leq kC(\beta)\Big(\epsilon + \frac{1}{n}\Big)\,.$$ In particular, $$\frac{1}{{{n}\choose{2}}} \sum_{i j}|\rho_{ij} - \tilde{\rho}_{ij}| < 2C(\beta) \Big(\epsilon + \frac{1}{n}\Big)\,.$$ \end{theorem} For approximately Ramanujan graphs, $\epsilon = \Theta(\frac{1}{\sqrt{d}})$. By choosing a random $d$-regular graph, which is approximately Ramanujan with high probability, we can obtain arbitrarily accurate approximation of moments by choosing $d$ sufficiently large. If we care only about moments up to some fixed order $\bar k$, our result says that one can take any $d = \Omega(\bar k ^2)$ in order to obtain the desired approximation, completely independent of the size of the graph. The structure of the approximating graph $G_d$ is important. To see this, let the graph $G_d$ be the disjoint union of $\frac{n}{d}$ cliques each with $d$ nodes, a poor spectral sparsifier of the complete graph $K_n$. Consider the Ising model with interaction matrix $\frac{\beta}{d}\mathcal{A}(G_d)$. This graph is not an expander since it is not connected. If $i$ and $j$ are in different cliques, there is no path between $i$ and $j$ in $G_d$. Therefore, $\tilde{\rho}_{ij} = 0$. We conclude that only an $O(\frac{d}{n})$ fraction of the pairs $(i,j)$ have correlation $\tilde{\rho}_{ij} > 0$. Since $\beta > 1$, it follows by standard analysis for the Curie-Weiss model that $\rho_{ij} > c_1(\beta) >0$ (see \cite{ellis2007entropy}).
Therefore, \begin{align*} \frac{1}{{{n}\choose{2}}} \sum_{i j} |\rho_{ij} - \tilde{\rho}_{ij}| &\geq \frac{1}{{{n}\choose{2}}}\sum_{i j} (\rho_{ij} - \tilde{\rho}_{ij})\\ &\geq c_1(\beta) - O\Big(\frac{d}{n}\Big)\,. \end{align*} Here we have used the fact that $\tilde{\rho}_{ij} \leq 1$. It follows that if $\beta > 1$ and $d = o(n)$, then the left-hand side cannot be made arbitrarily small. The case $0 \leq\beta < 1$ is trivial in the sense that the average correlation is very small in both models and hence automatically well-matched. \begin{proposition} Consider the same setup as Theorem~\ref{main_application}, but with $0\leq \beta<1$. Then both $\sum_{i\neq j}{\rho_{ij}}=O(n)$ and $\sum_{i\neq j}{\tilde{\rho}_{ij}}=O(n)$, and hence $${{n}\choose{2}}^{-1} \sum_{j}\sum_{i< j}|\rho_{ij} - \tilde{\rho}_{ij}|\leq {{n}\choose{2}}^{-1} \sum_{j}\sum_{i< j}(\rho_{ij} + \tilde{\rho}_{ij} ) =O\left(\tfrac1n\right)\,.$$ \end{proposition} \begin{proof} To start, note that \begin{align} \sum_{i\neq j}{\rho_{ij}} &= \mathbb{E}_{\mu} \Big(\sum_{i=1}^n x^i\Big)^2 - n = \mathrm{var}_{\mu}\Big(\sum_{i=1}^n x^i\Big) -n \qquad \text{and} \label{eq:correlation_sum_1} \\ \sum_{i\neq j}{\tilde{\rho}_{ij}}&=\mathbb{E}_{\nu} \Big(\sum_{i=1}^n x^i\Big)^2 - n = \mathrm{var}_{\nu}\Big(\sum_{i=1}^n x^i\Big) -n \,. \label{eq:correlation_sum_2} \end{align} Thus, it suffices to show that the variances on the right-hand sides are $O(n)$. In the equations above, $\mathrm{var}_{\eta}(f)$ refers to the variance of $f$ with respect to the measure $\eta$. We bound the variance for the measure $\mu$; identical arguments can be used to bound the variance with respect to $\nu$. Whenever $\beta < 1$, from the proof of Theorem 15.1 in \cite{levin2009markov}, we conclude that Glauber dynamics for both these models is $\left(1-\frac{1- \beta}{n}\right)$-contractive. Let $(\lambda_i)_{i=1}^{|\Omega|}$ be the eigenvalues of $P$. We let $|\lambda| := \sup \{|\lambda_i|: \lambda_i \neq 1, 1\leq i\leq |\Omega|\}$. From Theorem 13.1 in \cite{levin2009markov}, it follows that the spectral gap satisfies $1-|\lambda|\geq \frac{1-\beta}{n}$. For any function $f : \Omega \to \mathbb{R}$, the Poincar\'e inequality for $P$ bounds the variance under the stationary measure as $\mathrm{var}_\mu(f)\leq \frac12(1-|\lambda|)^{-1} \mathcal{E}(f,f)$, where the Dirichlet form $\mathcal{E}(f,f):= \sum_{x,y\in \Omega}(f(x)-f(y))^2P(x,y)\mu(x)$ (see Subsection 13.3 in \cite{levin2009markov}). The Poincar\'e inequality then becomes \begin{align}\mathrm{var}_{\mu}(f)&\leq \frac{1}{2(1-|\lambda|)} \sum_{x,y\in \Omega}(f(x)-f(y))^2P(x,y)\mu(x) \nonumber\\ &\leq \frac{n}{2(1-\beta)}\sum_{x,y\in \Omega}(f(x)-f(y))^2P(x,y)\mu(x)\,. \label{eq:poincare_inequality} \end{align} Since $P$ is the Glauber dynamics, $P(x,y) >0$ only when $x$ and $y$ differ in at most one coordinate. When we take $f(x) = \sum_i x^i$, then $|f(x) -f(y)| \leq 2$ whenever $P(x,y)>0$. Plugging this into Equation~\eqref{eq:poincare_inequality} yields $$\mathrm{var}_{\mu}\Big(\sum_i x^i\Big)\leq \frac{2n}{1-\beta} = O(n)\,,$$ and similarly $\mathrm{var}_{\nu}\big(\sum_i x^i\big) = O(n)$. \end{proof} \section{Monotone Coupling and Proof of Theorem~\ref{t:contractIsing}} \label{s:contractProof} \subsection{Glauber Dynamics for Ising models and Monotone Coupling} \label{subsec:monotone_coupling} We specialize our previous discussion of the Glauber dynamics in Subsection~\ref{ss:glauber} to an Ising model with interaction matrix $J$. Let $J^{\intercal}_i$ denote the $i$th row of $J$.
Given the current state $x \in\Omega= \{-1,1\}^n$, the Glauber Dynamics produces the new state $x^{\prime}$ as follows: Choose $I \in [n]$ uniformly at random. Construct the next state $x^{\prime}$ as $(x^{\prime})^i= x^i$ for $i\neq I$ and set independently \begin{equation*} (x^{\prime})^I = \begin{cases} 1 &\text{ with probability $\tfrac{1}2+ \tfrac12\tanh{J^{\intercal}_Ix}$ } \\ -1 &\text{ with probability $\tfrac{1}2- \tfrac12\tanh{J^{\intercal}_Ix}$}\,. \end{cases} \end{equation*} We refer to \cite{levin2009markov} for an introduction to mixing of Glauber dynamics for the Ising model. This Markov chain has been studied extensively and it can be shown that it mixes in $O(n\log{n})$ time (and is contracting for the `monotone coupling' described below) for high temperature under the Dobrushin-Shlosman condition~\cite{aizenman1987rapid} and under the Dobrushin-like condition~\cite{hayes2006simple}. We now describe the monotone coupling used in the proof of Theorem~\ref{t:contractIsing}. Let $X_t$ and $Y_t$ be Glauber dynamics chains for the Ising model $\pi_J$ with interaction matrix $J$. Let $P^J$ denote the corresponding kernel. For both chains $X_t$ and $Y_t$, we choose the same random index $I$ and generate an independent random variable $u_t \sim \mathrm{unif}([0,1])$. Set $X_{t+1}^I$ (resp. $Y_{t+1}^I$) to $1$ iff $u_t \leq (\pi_J)_I\big(1\big| X_t^{(\sim I)}\big)$ (resp. $u_t \leq (\pi_J)_I\big(1\big| Y_t^{(\sim I)}\big)$ ). In the case when the entries of $J$ are all positive (i.e., ferromagnetic interactions), one can check that for the coupling above, if $X_0 \geq Y_0$ then $X_t \geq Y_t$ a.s. We note that since $J$ need not be ferromagnetic in the case considered in Theorem~\ref{t:contractIsing}, we cannot ensure that $X_t \geq Y_t$ a.s. if $X_0 \geq Y_0$. (Here $\geq$ is the entrywise partial order.) \subsection{Auxiliary Lemma} Before proceeding with the proof of Theorem~\ref{t:contractIsing}, we prove the following lemma that relates the quantity we wish to bound to the spectral norm of Ising interaction matrices. \begin{lemma}\label{spectral_bound} Let $f_1(x),\dots, f_n(x)$ be any real-valued functions over $\Omega$ and define the vector $v_f(x) = [f_1(x),\dots,f_n(x)]^{\intercal}$. Let $\pi_L$ and $\pi_M$ denote Ising models with interaction matrices $L$ and $M$ respectively. Then, $$\frac{1}{n}\sum_{i=1}^{n}|f_i(x)|\cdot\|(\pi_L)_i(\cdot|x^{(\sim i)})- (\pi_M)_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}} \leq \|L-M\|_2 \frac{\|v_f\|_2}{2\sqrt{n}}\,.$$ In particular, when $L = \frac{\beta}{n}A$ (i.e., $\pi_L(\cdot) = \mu(\cdot)$, the Curie-Weiss model at inverse temperature $\beta$) and $M = \frac{\beta}{d}B$ (i.e., $\pi_M(\cdot) = \nu(\cdot)$, where $\nu(\cdot)$ is the Ising model defined in Section~\ref{subsec:ising_model_def}), then $$\frac{1}{n}\sum_{i=1}^{n}|f_i(x)|\cdot\|\mu_i(\cdot|x^{(\sim i)})- \nu_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}} \leq \|\tfrac{\beta}{n}A - \tfrac{\beta}{d}B\|_2 \frac{\|v_f\|_2}{2\sqrt{n}}\,.$$ \end{lemma} \begin{proof} The proof follows from the $1$-Lipschitz property of the $\tanh(\cdot)$ function. Let $L^{\intercal}_i$ denote the $i$th row of $L$. We recall that $(\pi_L)_i(1|x^{(\sim i)}) = \frac{1}{2}(1+\tanh{L_i^{\intercal}x})$.
There exist $c_i(x) \in \{-1,1\}$ such that the following holds, where we use the notation $v_{cf}^{\intercal}(x) = [c_1(x)f_1(x),\dots,c_n(x)f_n(x)]$: \begin{align*} &\frac{1}{n}\sum_{i=1}^{n}|f_i(x)|\cdot\|(\pi_L)_i(\cdot|x^{(\sim i)})- (\pi_M)_i(\cdot|x^{(\sim i)})\|_{\mathsf{TV}} \\ &\qquad= \frac{1}{2n}\sum_{i=1}^{n}|f_i(x)|\cdot \big|\tanh{(L_i^{\intercal}x)} - \tanh{(M_i^{\intercal}x)}\big| \\ &\qquad\leq \frac{1}{2n} \sum_{i=1}^{n}|f_i(x)|\cdot\big|\big(L_i^{\intercal} - M_i^{\intercal}\big)x\big| \\ &\qquad= \frac{1}{2n} \sum_{i=1}^{n}c_i(x)f_i(x)\big(L_i^{\intercal} - M_i^{\intercal}\big)x \\ &\qquad= \frac{1}{2n} v_{cf}^{\intercal}(x)(L-M)x \\ &\qquad\leq \frac{1}{2n}\|x\|_2 \|L - M\|_2 \|v_{cf}\|_2\\ &\qquad= \|L-M\|_2 \frac{\|v_f\|_2}{2\sqrt{n}}\,.& \qedhere \end{align*} \end{proof} \subsection{Proof of Theorem~\ref{t:contractIsing}} Let $h$ be the solution of the Poisson equation $$h - P^L h = f - \mathbb{E}_{\pi_L}f\,.$$ In order to apply Theorem~\ref{main_theorem}, we bound the quantity \begin{align} |\Delta_i(h)(x)| &= \biggr\rvert\sum_{t=0}^{\infty}\mathbb{E}\Big[f(X_t) - f(Y_t)\Big| X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\Big]\biggr\rvert \notag \\ &\leq \sum_{t=0}^{\infty}\mathbb{E}\bigg[\sum_{j=1}^n a_j \mathds{1}_{X^j_t \neq Y^j_t}\biggr\rvert X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\bigg]\notag \\ \label{e:deltah} &= \sum_{t=0}^{\infty}\mathbb{E}\Big[ a^{\intercal}\Delta_{X_t, Y_t}\Big| X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\Big]\,. \end{align} The bound above holds for all couplings between $X_t$ and $Y_t$. We choose the monotone coupling as described in Section~\ref{subsec:monotone_coupling}. We recall that $(\pi_L)_i(1|x^{(\sim i)}) = \frac12(1+\tanh{L_i^{\intercal}x})$. From the definition of Glauber dynamics and monotone coupling it follows that $$\mathbb{E}\big[\mathds{1}_{X^i_t \neq Y_t^i}| X_{t-1}=x, Y_{t-1} = y\big] = (1-\tfrac{1}{n})\mathds{1}_{x_i \neq y_i} + \frac{1}{2n}|\tanh{L_i^{\intercal}x} - \tanh{L_i^{\intercal}y}|\,.$$ If $c$ is an $n$-dimensional column vector with positive entries, then \begin{align*} \mathbb{E}\big[ c^{\intercal}\Delta_{X_{t+1},Y_{t+1}}\big\rvert X_t = x, Y_t = y\big]&= \mathbb{E}\Big[\sum_{i=1}^n c_i \mathds{1}_{X^i_{t+1} \neq Y^i_{t+1}}\Big| X_t = x, Y_t = y\Big] \\ &\leq (1-\tfrac{1}{n})c^{\intercal}\Delta_{x,y} \\&\qquad+ \tfrac{1}{2n}\sum_{i=1}^{n}c_i |\tanh{L_i^{\intercal }x} - \tanh{L_i^{\intercal }y}| \\ &\leq c^{\intercal}\left[(1-\tfrac{1}{n})I+ \tfrac{1}{n}|L| \right] \Delta_{x,y}\\ &= c^{\intercal}G \Delta_{x,y} \,, \end{align*} where $G := (1-\tfrac{1}{n})I+ \tfrac{1}{n}|L| $. Clearly, $\|G\|_2 < 1$ and hence $\sum_{t=0}^{\infty} G^t = (I-G)^{-1}$. Using the tower property of conditional expectation to apply the above inequality recursively, we conclude that $$\mathbb{E}\Big[ c^{\intercal}\Delta_{X_{t+1},Y_{t+1}}\Big| X_0 = x^{(i,+)}, Y_0 = x^{(i,-)}\Big] \leq c^{\intercal}G^{t+1}\Delta_{x^{(i,+)},x^{(i,-)}} \,.$$ Plugging the equation above into~\eqref{e:deltah} gives \begin{align*} |\Delta_i(h)(x)| \leq a^{\intercal}\bigg[\sum_{t=0}^{\infty} G^t \bigg]\Delta_{x^{(i,+)},x^{(i,-)}} &= a^{\intercal} (I-G)^{-1}\Delta_{x^{(i,+)},x^{(i,-)}} \\ &= \big[a^{\intercal}(I-G)^{-1}\big]^i \,. \end{align*} Recall that $\theta := \|(|L|)\|_2 < 1$, which implies that \begin{align*} \sqrt{\sum_{i=1}^n \Delta_i(h)^2} \leq \sqrt{\sum_{i=1}^n \big(\left[a^{\intercal}(I-G)^{-1}\right]^i\big)^2} &= \|a^{\intercal}(I-G)^{-1}\|_2 \\ &\leq \|a\|_2 \|(I-G)^{-1}\|_2 \\ &\leq \|a\|_2 \frac{1}{1-\|G\|_2}\\ &\leq \|a\|_2 \frac{n}{1-\theta} \,.
\end{align*} We invoke Lemma~\ref{spectral_bound} and Theorem~\ref{main_theorem} to complete the proof: \begin{align*} |\mathbb{E}_{\pi_L} f - \mathbb{E}_{\pi_M} f| &\leq \mathbb{E}_{\pi_M}\sqrt{\sum_{i=1}^n \Delta_i(h)^2} \frac{\|L-M\|_2}{2\sqrt{n}} \\ &\leq \frac{\|a\|_2 \sqrt{n}}{2(1-\theta)} \|L-M\|_2\,. \end{align*} \section{Ideas in Proof of Theorem~\ref{main_application}} \label{s:ProofIdeas} \subsection{Overview} In this section we overview the main ideas behind the proof of Theorem~\ref{main_application}, which bounds the average difference between the $k$th order moments of the Curie-Weiss model $\mu$ and those of the $d$-regular Ising model $\nu$. Let $k$ be any even positive integer such that $k < n$. For every $R \subset [n]$ such that $|R| = k$, let $C_{R} \in \{-1,1\}$ and define the function $f_C :\Omega \to \mathbb{R}$ by \begin{equation}\label{e:f} f_C(x) = \frac{1}{2k{{n}\choose{k}}}\sum_{\substack{R \subset [n]\\ |R| = k}} C_{R}\prod_{i\in R}x^i\,. \end{equation} We suppress the subscript $C$ in $f_C$. Clearly, $f(x) = f(-x)$, i.e., $f$ is symmetric. Moreover, a calculation shows that $f$ is $\frac{1}{n}$-Lipschitz with respect to the Hamming metric. That is, for arbitrary $x,y \in \Omega$, $ |f(x)-f(y)| \leq \frac{1}{n}\sum_{i=1}^{n}\mathds{1}(x^i \neq y^i) $, which implies that $|f(x) - f(y)| \leq 1$ for any $x,y \in \Omega$. In Section~\ref{s:IsingProof} we will bound the quantity $|\mathbb{E}_\mu f - \mathbb{E}_\nu f|$ uniformly for any choice of $\{C_R\}$, which in turn relates the moments $\rho$ and $\tilde \rho$ (defined in Subsection~\ref{ss:mainIsingMoments}) since $$ \sup_{C_R}|\mathbb{E}_\mu f - \mathbb{E}_\nu f| = \frac{1}{2k{{n}\choose{k}}}\sum_{\substack{R \subset [n]\\ |R| = k}} \big|\rho^{(k)}[R] - \tilde{\rho}^{(k)}[R]\big|\,. $$ Let $P$ be the kernel of the Glauber dynamics of the Curie-Weiss model at inverse temperature $\beta > 1$. By Theorem~\ref{main_theorem}, bounding $|\mathbb{E}_\mu f - \mathbb{E}_\nu f|$ reduces to bounding $\mathbb{E}_{\nu}|\Delta_i(h)|$ for the specific function $h$ obtained by solving the Poisson equation $(I-P)h = f - \mathbb{E}_{\mu}f$. By Lemma~\ref{principal_solution}, we can write $h$ in terms of the expectation of a sum over time-steps for Glauber chains $X_t$ and $Y_t$ to obtain \begin{align}\label{e:h} {h}(x) - {h}(y) &= \mathbb{E}\bigg[\sum_{t=0}^{\infty}f({X}_t) - f({Y}_t)\biggr\rvert {X}_0=x , {Y}_0 =y\bigg] \end{align} from which we get $$ |\Delta_i(h)(x_0)| = \Big|\sum_{t=0}^{\infty}\mathbb{E}\Big[\big(f({X}_t) - f({Y}_t)\big)\Big|{X}_0=x_0^{(i,+)} , {Y}_0 =x_0^{(i,-)}\Big]\Big| \,. $$ By selecting $x_0\sim \nu$, this yields a method for bounding $\mathbb{E}_{\nu}|\Delta_i(h)|$ via coupling $X_t$ and $Y_t$. We now briefly overview the steps involved in bounding $\mathbb{E}_{\nu}|\Delta_i(h)|$. Let $m^{*}$ be the unique positive solution to $ s =\tanh{\beta s}$. \begin{enumerate} \item[Step 1:] For a good enough expander, $m(x):= \frac1n\sum_i x^i$ concentrates exponentially near $m^{*}$ and $-m^{*}$ under measure $\nu$. We show this in Lemma~\ref{expander_magnetisation_concentration}. The subsequent analysis is separated into two cases depending on whether or not $m(x_0)$ is close to $m^*$. \item[Step 2:] Theorem~\ref{main_theorem} requires specifying a Markov kernel; because the Glauber dynamics on the Curie-Weiss model mixes slowly when $\beta >1$, we instead use the \emph{restricted} (a.k.a. censored) Glauber dynamics, which restricts the Glauber dynamics to states with a majority of $+1$ coordinates and mixes quickly.
We justify this change with Lemma~\ref{symmetric_solution}. \item[Step 3:] Whenever $m(x)$ is not close to $m^{*}$, we show in Lemma~\ref{naive_bound} that $|\Delta_i(h)(x)|$ is at most polynomially large in $n$. This is achieved via coupling $X_t$ and $Y_t$ in \eqref{e:h} and makes use of fast mixing of the chain. \item[Step 4:] Whenever $m(x)$ is near enough to $m^{*}$, the restricted Glauber dynamics (and Glauber dynamics) for the Curie-Weiss model is contracting for a certain coupling. Using methods similar to the ones used in the proof of Theorem~\ref{t:contractIsing} in the contracting case, we conclude that $|\Delta_i(h)(x)|$ must be small if $m(x)$ is close to $m^*$. We show this in Section~\ref{s:coupling} via Lemmas~\ref{super_martingale_contraction},~\ref{contraction_bound_lemma} and Theorem~\ref{contraction_bound}. \item[Step 5:] Section~\ref{s:IsingProof} combines these statements to bound $\mathbb{E}_{\nu}|\Delta_i(h)|$ and prove Theorem~\ref{main_application}. \end{enumerate} \subsection{Concentration of Magnetization} Recall that $m^{*}$ is the largest solution to the equation $\tanh{\beta s} = s$. If $\beta \leq 1$, then $m^{*}=0$ and if $\beta >1$, then $m^{*} > 0$. Recall the magnetization $m(x) := \frac1{n}{\sum_{i=1}^{n}x^i}$. Whenever it is clear from context, we denote $m(x)$ by $m$. \begin{lemma}\label{expander_magnetisation_concentration} For every $\delta \in (0,1)$, there exist $c(\delta) >0$ and $\epsilon_0(\delta) > 0$ such that for all $\epsilon$-expanders $G_d$ with $\epsilon < \epsilon_0$, $$ \nu\big(\{|m-m^{*}| > \delta\}\cap \{|m+m^{*}| > \delta\}\big) \leq C_1(\beta)e^{-c(\delta)n}\,. $$ \end{lemma} The proof is essentially the same as the proof of concentration of magnetization in the Curie-Weiss model, but with a few variations. We defer the proof to the appendix. \subsection{Restricted Glauber Dynamics} Glauber dynamics for the Curie-Weiss model is well-understood and it can be shown to mix in $O(n\log{n})$ time when $\beta < 1$, $O(n^{\frac{3}{2}})$ time when $\beta = 1$, and takes exponentially long to mix when $\beta >1$ (see \cite{levin2010glauber} and references therein). The reason for exponentially slow mixing is that it takes exponential time for the chain to move from the positive phase to the negative phase and vice-versa. The Restricted Glauber Dynamics, described next, removes this barrier. Define $\Omega^{+} = \{x \in \Omega : \sum_{i}x^i \geq 0\}$. \cite{levin2010glauber} and \cite{ding2009censored} considered a censored/restricted version of Glauber dynamics for the Curie-Weiss model where the chain is restricted to the positive phase $\Omega^+$. Let $\hat{X}_t$ be an instance of restricted Glauber dynamics and let $X'$ be obtained from $\hat{X}_t$ via one step of normal Glauber dynamics. If $X' \in \Omega^{+}$, then the restricted Glauber dynamics updates to $\hat X_{t+1} = X'$. Otherwise $X^{\prime} \notin \Omega^+$ and we flip all the spins, setting $\hat{X}_{t+1} = -X'$. The restricted Glauber dynamics $\hat{X}_t$ with initial state $\hat{X}_0 \in \Omega^{+}$ can be obtained from the normal Glauber dynamics also in a slightly different way. Let $X_t$ be a Glauber dynamics chain with $X_0 = \hat{X}_0\in \Omega^{+}$, and let \begin{equation*} \hat{X}_t = \begin{cases} X_t & \text{if $X_t \in \Omega^{+}$} \\ -X_t & \text{if $X_t \notin \Omega^{+}$} \,. \end{cases} \end{equation*} Whenever we refer to restricted Glauber dynamics, we assume that it is generated as a function of the regular Glauber dynamics in this way.
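For concreteness, one update of the Curie-Weiss Glauber dynamics together with the reflection step that realizes the restricted chain can be sketched as follows; this is illustrative Python assuming \texttt{numpy}, and the function names are our own.
\begin{verbatim}
# Illustrative sketch of the restricted Glauber dynamics, assuming numpy.
import numpy as np

def glauber_step(x, beta, rng):
    """One Glauber update for the Curie-Weiss model; x is a +/-1 vector."""
    n = x.size
    i = rng.integers(n)                         # uniformly random site
    m_i = (x.sum() - x[i]) / n                  # magnetization excluding site i
    p_plus = 0.5 * (1.0 + np.tanh(beta * m_i))  # conditional probability of +1
    x = x.copy()
    x[i] = 1 if rng.random() <= p_plus else -1
    return x

def restricted_step(x_hat, beta, rng):
    """X' from one Glauber step; flip all spins if X' leaves Omega^+."""
    x_new = glauber_step(x_hat, beta, rng)
    return x_new if x_new.sum() >= 0 else -x_new
\end{verbatim}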
If $\mu$ is the stationary measure of the original Glauber dynamics, then the unique stationary measure for the restricted chain is $\mu^{+}$ over $\Omega^{+}$, given by \begin{equation} \mu^{+}(x) = \begin{cases} 2\mu(x) & \text{if $m(x) > 0$} \\ \mu(x) & \text{if $m(x) = 0$}\,. \end{cases} \end{equation} Similarly, we define $\nu^{+}$ over $\Omega^{+}$ by \begin{equation} \nu^{+}(x) = \begin{cases} 2\nu(x) &\text{if $m(x) > 0$} \\ \nu(x) &\text{if $m(x) = 0$}\,. \end{cases} \end{equation} It follows by symmetry that if $f : \Omega \to \mathbb{R}$ is any function such that $f(x) = f(-x)$, then $$\mathbb{E}_{\mu} f= \mathbb{E}_{\mu^{+}}f \quad \text{and}\quad \mathbb{E}_{\nu} f= \mathbb{E}_{\nu^{+}}f\,.$$ It was shown in \cite{levin2010glauber} that restricted Glauber dynamics for the Curie-Weiss model mixes in $O(n\log{n})$ time for all $\beta > 1$. \begin{theorem}[Theorem~5.3 in \cite{levin2010glauber}] \label{t:Levin} Let $\beta > 1$. There is a constant $c(\beta) > 0$ so that $t_{mix}(n) \leq c(\beta)n\log{n}$ for the Glauber dynamics restricted to $\Omega^{+}$. \end{theorem} \begin{remark} \label{rem:coupling_existence} It follows from the proof of the theorem above that there exists a coupling of the restricted Glauber dynamics such that the chains starting at any two distinct initial states will collide in expected time $c(\beta)n\log{n}$. More concretely, let $\hat{X}_t$ and $\hat{Y}_t$ be two instances of restricted Glauber dynamics such that $\hat{X}_0 = x \in \Omega^{+}$ and $\hat{Y}_0 = y \in \Omega^{+}$. Let $\tau_0 =\inf\{t: \hat{X}_t= \hat{Y}_t\}$. There exists a coupling between the chains such that $$\sup_{x,y \in \Omega^{+}}\mathbb{E}\big[\tau_0\big| \hat{X}_0=x, \hat{Y}_0 = y\big] \leq c(\beta)n\log{n}$$ and $\hat{X}_t = \hat{Y}_t$ a.s. for all $t \geq \tau_0$. \end{remark} \subsection{Solution to Poisson Equation for Restricted Dynamics} The next lemma follows easily from the definitions and we omit its proof. \begin{lemma}\label{symmetric_solution} Let $f: \Omega \to \mathbb{R}$ be a symmetric function, i.e., for every $x \in \Omega$, $f(x) = f(-x)$. Let $P$ be the kernel of the Glauber dynamics for the Curie-Weiss model at inverse temperature $\beta$ and let $\hat{P}$ be the kernel for the corresponding restricted Glauber dynamics over $\Omega^{+}$ with stationary measure $\mu^{+}$. Then, the Poisson equations \begin{enumerate} \item $h(x) - (Ph)(x) = f(x)-\mathbb{E}_{\mu}f$ \item $\hat{h}(x) - (\hat{P}\hat{h})(x) = f(x) - \mathbb{E}_{\mu^{+}}f$ \end{enumerate} have principal solutions $h$ and $\hat{h}$ such that $h(x) = \hat{h}(x)$ for every $x \in \Omega^{+}$ and $h(x) = \hat{h}(-x)$ for every $x \in \Omega\setminus \Omega^{+}$. In particular, $h$ is symmetric. \end{lemma} By Lemma~\ref{symmetric_solution}, it is sufficient to solve the Poisson equation, and to bound $\mathbb{E}_\nu |\Delta_i (h)|$, for the restricted Glauber dynamics. Based on Lemmas~\ref{principal_solution} and~\ref{symmetric_solution} we have the following naive bound on the solution of the Poisson equation. \begin{lemma} \label{naive_bound} Let $f: \Omega \to \mathbb{R}$ be a symmetric function such that for any $x,y \in \Omega$ it holds that $|f(x)-f(y)| \leq K$. Let $h$ be the solution to the Poisson equation $h - Ph = f - \mathbb{E}_{\mu}f$. Then, for any $x,y\in \Omega$, $|h(x) - h(y)| \leq KC(\beta)n\log{n}$. \end{lemma} \begin{proof} By Lemma~\ref{symmetric_solution}, $h$ is symmetric and we can without loss of generality assume that $x\in \Omega^+$.
Now, we may work with $\hat h$ instead, since \begin{equation} h(x) - h(y) = \begin{cases} \hat{h}(x) - \hat{h}(y) &\text{ if $y \in \Omega^{+}$} \\ \hat{h}(x) - \hat{h}(-y) &\text{ if $y \in \Omega\setminus \Omega^{+}$}\,. \end{cases} \end{equation} Let $x,y \in \Omega^{+}$ and start two restricted Glauber dynamics Markov chains for the Curie-Weiss model, $\hat{X}_t$ and $\hat{Y}_t$, with initial states $\hat{X}_0 = x$ and $\hat{Y}_0 = y$. Recall the definition $\tau_{0} = \inf\{t: \hat{X}_t = \hat{Y}_t\}$ from Remark~\ref{rem:coupling_existence}. We couple $\hat{X}_t$ and $\hat{Y}_t$ according to Remark~\ref{rem:coupling_existence} and use the bound for coupling time, $\mathbb{E}\left[\tau_0|\hat{X}_0 =x, \hat{Y}_0 = y\right] \leq C(\beta)n\log{n}$. By Lemma~\ref{principal_solution}, we can write $\hat h$ in terms of the expectation of a sum to obtain \begin{align*} \hat{h}(x) - \hat{h}(y) &= \mathbb{E}\left[\sum_{t=0}^{\infty}f(\hat{X}_t) - f(\hat{Y}_t)\biggr\rvert \hat{X}_0=x ,\hat{Y}_0 =y\right] \\ &\leq K\cdot \mathbb{E}\left[\sum_{t=0}^{\infty}\mathds{1}(\hat{X}_t\neq \hat{Y}_t)\biggr\rvert \hat{X}_0=x ,\hat{Y}_0 =y\right] \\ &= K\cdot \mathbb{E}\Big[\tau_{0}\Big| \hat{X}_0 = x, \hat{Y}_0 = y\Big] \\ &\leq KC(\beta)n\log{n}\,, \end{align*} completing the proof. \end{proof} The lemma above gives a rough bound of the form $|\Delta_i(h)(x)| \leq KC(\beta)n\log{n}$ for all $x\in \Omega$. In the next section we improve the bound for $x$ such that $m(x)$ is close to $m^{*}$ via a more delicate coupling argument. \section{Coupling Argument} \label{s:coupling} \subsection{Coupling for Improved Bound on $\Delta_i(h)$} For $x,y \in \Omega$, we write $x \geq y$ iff $x^i \geq y^i$ for every $i \in [n]$. We recall the monotone coupling from Subsection~\ref{subsec:monotone_coupling}. If the current states are $X$ and $Y$, we update the states to $X^{\prime}$ and $Y^{\prime}$ respectively as follows: we choose the same random index $I \sim \mathsf{unif} ([n])$. For all $j \neq I$, set $(X^{\prime})^j = X^j $ and $(Y^{\prime})^j = Y^j $. Generate an independent random variable $u_t \sim \mathrm{unif}([0,1])$. Set $(X^{\prime})^I$ (and $(Y^{\prime})^I$) to $1$ iff $u_t \leq \mu_I(1\rvert X^{(\sim I)})$ (and $u_t \leq \mu_I(1\rvert Y^{(\sim I)})$). For ferromagnetic Ising models when the update rule above is used, $X^{\prime} \geq Y^{\prime}$ almost surely if $X \geq Y$. We will shortly describe the coupling we use for the restricted Glauber dynamics, but we first need to record some useful properties of $g(s)=\tanh{\beta s}-s$ which follow from elementary calculus. \begin{lemma} \label{negative_slope_properties} Let $\beta>1$ and consider the function $g(s) = \tanh{\beta s} - s$ for $s \in [0,1]$. Denote by $m^*$ the strictly positive root of $g$. Then $g$ is concave, $g(0)=0$, $g^{\prime}(0) = \beta - 1 >0$, $g^{\prime}(m^{*}) := -\gamma^{*} < 0$, and also \begin{enumerate} \item For every $m > m^{*}$, $g^{\prime}(m) < -\gamma^{*}$ and \item There are $s_1,s_2 \in (0,1)$ with $s_1 < s_2 < m^{*}$ and $g^{\prime}(s_2)< g^{\prime}(s_1) < -\frac{1}{2}\gamma^{*}$\,. \end{enumerate} \end{lemma} We fix values $s_1$ and $s_2$ as given in the lemma (see Figure~\ref{fig:magnetization_profile} to understand the significance of the various quantities defined above). The scalar $s$ indexes the values of magnetization. The restricted Glauber dynamics for the Curie-Weiss model contracts whenever the magnetization value is in the red region -- i.e., where the slope of $g(s)$ is negative.
Lemma~\ref{expander_magnetisation_concentration} shows that under measure $\nu(\cdot)$, the magnetization concentrates in the blue region. \begin{figure} \centering \includegraphics[scale=0.3]{figure_for_paper.eps} \vspace{-5mm} \caption{$g(s)$ for $\beta = 1.2$} \label{fig:magnetization_profile} \vspace{-3mm} \end{figure} Let the set $S_n := \{-1,-1+\frac{2}{n},\dots,+1\}$ (that is, the set of all possible values of $m(x)$). For any $s \in [-1,1]$, define $\langle s\rangle := \sup \big(S_n\cap [-1,s]\big)$. \textbf{The Coupling:} Let $x_0 \in \Omega^{+}$ be an arbitrary point such that $m(x_0) \geq \frac{2}{n}$. Consider two restricted Glauber chains $\hat{X}_t$ and $\hat{Y}_t$ for the Curie-Weiss model, with stationary measure $\mu^{+}$, such that $\hat{X}_0 = x_0^{(i,+)} \in \Omega^{+}$ and $\hat{Y}_0 = x_0^{(i,-)} \in \Omega^{+}$. We define $\tau_1 = \inf\{t: m(\hat{Y}_t) = \langle s_1\rangle\}$ and use the following coupling between $\hat{X}_t$ and $\hat{Y}_t$: \begin{enumerate} \item If $m(x_0) \leq \langle s_2\rangle $, we couple them as in Remark~\ref{rem:coupling_existence}. \item If $m(x_0) > \langle s_2 \rangle$ and $t \leq \tau_1$, monotone couple $\hat{X}_t$ and $\hat{Y}_t$. If $\hat{X}_{\tau_1} = \hat{Y}_{\tau_1}$, couple them so that $\hat{X}_t= \hat{Y}_t$ for $t > \tau_1$. Since $\hat{X}_0 \geq \hat{Y}_0$, the monotone coupling ensures that $\hat{X}_t \geq \hat{Y}_t$ for $t \leq \tau_1$. \item If $\hat{X}_{\tau_1} \neq \hat{Y}_{\tau_1}$, then for $t> \tau_1$, we couple them as in Remark~\ref{rem:coupling_existence}. \end{enumerate} Suppose that $x_0 \in \Omega^{+}$ is such that $m(x_0) > \langle s_2 \rangle$. The coupling above is constructed to give a better bound on $|\Delta_i(h)(x_0)|$ than in Lemma~\ref{naive_bound}. The intuition behind it is that whenever $m(\hat{X}_t) \geq \langle s_1\rangle$ and $m(\hat{Y}_t) \geq \langle s_1\rangle$ (that is, when $t \leq \tau_1$), the chains are contracting under the monotone coupling. This is shown in Lemma~\ref{super_martingale_contraction} and used in Lemma~\ref{contraction_bound_lemma} to bound $|\Delta_i(h)(x_0)|$ in terms of $\rho^{K}+ \mathbb{P}(\tau_1 < K)$ (where $\rho = 1-\Theta(\tfrac{1}{n})$ is the contraction coefficient and $K$ is any integer). This proof is a generalization of the proof of Theorem~\ref{t:contractIsing}. To use this bound we need to show that $\mathbb{P}(\tau_1 < K)$ is small, i.e., the walk usually takes a long time to hit $\langle s_1\rangle$. This is shown in Lemma~\ref{lem:escape_time_bound} as a consequence of $m(\hat{Y}_t)$ being a birth-death process with positive drift when it is between $\langle s_1\rangle$ and $\langle s_2 \rangle$. Define \begin{equation} \tau_{\mathrm{coup}} = \begin{cases} 0 &\text{ if $\hat{X}_{\tau_1} = \hat{Y}_{\tau_1}$} \\ \inf\{t: \hat{X}_t = \hat{Y}_t\} - \tau_1 &\text{ otherwise}\,. \end{cases} \end{equation} \begin{lemma}\label{super_martingale_contraction} Let $x_0\in \Omega$ be such that $m(x_0) \geq s_2 + \frac{2}{n} $. Let $f$ be symmetric and $\frac{1}{n}$-Lipschitz in each coordinate. Let $\gamma^{*} >0$ be as in Lemma~\ref{negative_slope_properties}. Let the chains $\hat{Y}_t$ and $\hat{X}_t$ be as defined above, and let $\rho := \big(1-\frac{\gamma^{*}(n-1)}{2n^2}\big)$.
Then, the following hold: \begin{enumerate} \item $\mathbb{E}\Big[\big|f(\hat{X}_t) - f(\hat{Y}_t)\big|\mathds{1}_{t\leq \tau_1}\Big| \hat{X}_0 = x_0^{(i,+)}, \hat{Y}_0 = x_0^{(i,-)}\Big]\leq \frac{1}{n}\rho^t$\vspace{1mm} \item $\mathbb{P}(\hat{X}_{\tau_1} \neq \hat{Y}_{\tau_1}|\tau_1 \geq K) \leq \frac{\rho^{K}}{\mathbb{P}(\tau_1 \geq K)} $. \end{enumerate} \end{lemma} \begin{proof} Let $ 1 \leq t \leq \tau_1$. By the Lipschitz property of $f$ and monotone coupling between the chains, \begin{align} |f(\hat{X}_t) - f(\hat{Y}_t)| &\leq \frac{1}{n}\sum_{i=1}^{n}\mathds{1}(\hat{X}_t^{i}\neq \hat{Y}_t^{i}) \\ &= \frac{1}{2n}\sum_{i=1}^{n}|\hat{X}^{i}_t - \hat{Y}^{i}_t| \nonumber \\ &= \frac{1}{2n}\sum_{i=1}^{n}\big(\hat{X}^{i}_t - \hat{Y}^{i}_t\big) \nonumber \\ &= \frac{1}{2}\big(m(\hat{X}_t) - m(\hat{Y}_t)\big)\,. \label{lipschitz_property} \end{align} Let $m_i(x) := \frac{1}{n}\sum_{j \neq i}x^j$ so that $$ \mu_i(1|x^{(\sim i)}) = \frac{1}{2} + \frac{1}{2}\tanh(\beta m_i(x))\,. $$ Note that $\sum_{i=1}^n m_i(x) = (n-1)m(x)$. By monotonicity of the coupling and definition of $\tau_1$, $m_i(\hat{X}_{t-1}) \geq m_i(\hat{Y}_{t-1}) \geq s_1$ almost surely, and we assume in what follows that $x_{t-1}$ and $y_{t-1}$ satisfy $m_i(x_{t-1}) \geq m_i(y_{t-1}) \geq s_1$. Conditioning on whether or not an update occurs at a location in which $x_{t-1}$ and $y_{t-1}$ differ, we obtain \begin{align} &\mathbb{E}\Big[m(\hat{X}_t) - m(\hat{Y}_t)|\hat{X}_{t-1} = x_{t-1}, \hat{Y}_{t-1} = y_{t-1}\Big] \nonumber \\ &= m(x_{t-1})-m(y_{t-1}) - \frac{1}{n^2} \sum_{i=1}^n (x_{t-1}^i -y_{t-1}^i) \nonumber\\ &\quad+ \frac{1}{n^2}\sum_{i=1}^{n}\big(\tanh{(\beta m_i(x_{t-1})})-\tanh{(\beta m_i(y_{t-1}))}\big) \nonumber\\ &= m(x_{t-1})-m(y_{t-1}) -\frac{1}{n(n-1)}\sum_{i=1}^{n}\big(m_i(x_{t-1}) - m_i(y_{t-1})\big)\nonumber\\ &\quad+\frac{1}{n^2}\sum_{i=1}^{n}\big(\tanh(\beta m_i(x_{t-1}))-\tanh{(\beta m_i(y_{t-1}))}\big)\nonumber \\ &\leq m(x_{t-1})-m(y_{t-1}) +\frac{1}{n^2}\sum_{i=1}^n \big(g\big(m_i(x_{t-1})\big) - g\big(m_i(y_{t-1})\big)\big)\nonumber\\ &\leq m(x_{t-1}) - m(y_{t-1}) -\frac{\gamma^{*}}{2n^2} \sum_{i=1}^{n}\left(m_i(x_{t-1}) - m_i(y_{t-1})\right) \nonumber\\ &= \Big(1-\frac{\gamma^{*}(n-1)}{2n^2}\Big)\big(m(x_{t-1}) - m(y_{t-1})\big) \nonumber\\ &= \rho\big(m(x_{t-1}) - m(y_{t-1})\big)\,.\label{contraction_curie_weiss} \end{align} Here we have used the properties of $g$ stated in Lemma~\ref{negative_slope_properties}. Therefore, for $t \leq \tau_1$, $M_t = \rho^{-t}(m(\hat{X}_{t}) - m(\hat{Y}_{t}))$ is a positive super-martingale with respect to the filtration $\mathcal{F}_t = \sigma\big(\hat{X}_0,\hat{Y}_0,\hat{X}_1,\hat{Y}_1,\dots,\hat{X}_t,\hat{Y}_t\big)$ and $\tau_1$ is a stopping time. By the Optional Stopping Theorem, we conclude that \begin{align*} \frac{2}{n} &= \mathbb{E}[M_0]\\ &\geq \mathbb{E}\left[M_{t\wedge\tau_1}\right] \\ &\geq \mathbb{E}\big[\rho^{-t}(m(\hat{X}_t)- m(\hat{Y}_t)) \mathds{1}_{t \leq \tau_1}\big]\,. \end{align*} Thus, $\mathbb{E}\big[(m(\hat{X}_t) - m(\hat{Y}_t))\mathds{1}_{t \leq \tau_1}\big] \leq \frac{2\rho^{t}}{n}$. We use~\eqref{lipschitz_property} to complete the proof of the first part of the lemma.
Turning to the second part, using the fact that $\rho <1$ gives \begin{align} \frac{2}{n} &= \mathbb{E}[M_0]\nonumber \\ &\geq \mathbb{E}\left[M_{\tau_1}\right] \nonumber \\ &= \mathbb{E}\left[\rho^{-\tau_1}(m(\hat{X}_{\tau_1})-m(\hat{Y}_{\tau_1})) \right]\nonumber \\ &\geq \mathbb{E}\left[\rho^{-K}(m(\hat{X}_{\tau_1})-m(\hat{Y}_{\tau_1}))|\tau_1 \geq K\right]\mathbb{P}(\tau_1 \geq K) \label{expectation_bound}\,. \end{align} By monotone coupling, we know that $\hat{X}_{\tau_1} \neq \hat{Y}_{\tau_1}$ iff $m(\hat{X}_{\tau_1})-m(\hat{Y}_{\tau_1}) \geq \frac{2}{n}$. Therefore, using Markov's inequality and~\eqref{expectation_bound} we conclude that \begin{align*} \mathbb{P}(\hat{X}_{\tau_1} \neq \hat{Y}_{\tau_1}|\tau_1 \geq K) &= \mathbb{P}\Big(m(\hat{X}_{\tau_1})-m(\hat{Y}_{\tau_1}) \geq \frac{2}{n}\Big|\tau_1 \geq K\Big) \\ &\leq \frac{n\cdot \mathbb{E}\big[\big(m(\hat{X}_{\tau_1})-m(\hat{Y}_{\tau_1})\big)|\tau_1 \geq K\big]}{2} \\ &\leq \frac{\rho^{K}}{\mathbb{P}(\tau_1 \geq K)}\,.&\qedhere \end{align*} \end{proof} \begin{lemma}\label{contraction_bound_lemma} Let $x_0\in \Omega$ be such that $m(x_0) \geq s_2 + \frac{2}{n} $. Let $\hat{X}_t$, $\hat{Y}_t$, $f$, $\rho$ and $h$ be as defined above. Then for every $K \in \mathbb{N}$, $$|\Delta_i(h)(x_0)| \leq \frac{1}{n}\frac{1}{1-\rho} + C(\beta)n\log{n}\left[\rho^K + \mathbb{P}(\tau_1 < K) \right]\,.$$ \end{lemma} \begin{proof} For the sake of brevity, only in this proof, we implicitly assume the conditioning $\hat{X}_0 = x_0^{(i,+)}$ and $\hat{Y}_0 = x_0^{(i,-)}$ whenever the expectation operator is used. Expanding the principal solution to the Poisson equation yields \begin{align} |\Delta_i(h)(x_0)| &= \bigg|\sum_{t=0}^{\infty}\mathbb{E}\left[\big(f(\hat{X}_t) - f(\hat{Y}_t)\big)\right]\bigg| \nonumber \\ &\leq \sum_{t=0}^{\infty}\mathbb{E}\left[\big|f(\hat{X}_t)-f(\hat{Y}_t)\big|\right]\nonumber \\ &= \sum_{t=0}^{\infty}\mathbb{E}\left[\big|f(\hat{X}_t)-f(\hat{Y}_t)\big|(\mathds{1}_{t \leq \tau_1}+\mathds{1}_{t>\tau_1})\right]\nonumber\\ &\leq \sum_{t=0}^{\infty} \frac{\rho^t}{n} + \sum_{t=0}^{\infty}\mathbb{E}\left[\big|f(\hat{X}_t)-f(\hat{Y}_t)\big|\mathds{1}_{t>\tau_1} \right] \nonumber \\ &= \frac{1}{n}\frac{1}{1-\rho} + \sum_{t=0}^{\infty}\mathbb{E}\left[\big|f(\hat{X}_t)-f(\hat{Y}_t)\big|\mathds{1}_{t>\tau_1}\right]\,. \label{exponential_coupling} \end{align} Here we have used Lemma~\ref{super_martingale_contraction} in the second-to-last step. By definition of the coupling, if $\hat{X}_{\tau_1} = \hat{Y}_{\tau_1}$, then $f(\hat{X}_t) - f(\hat{Y}_t) = 0$ for all $t > \tau_1$. Further, $|f(\hat{X}_t)-f(\hat{Y}_t)| \leq \mathds{1}_{t \leq \tau_{\mathrm{coup}}+\tau_1}$ (since $|f(x) - f(y)| \leq 1$).
Given $K\in \mathbb{N}$, we conclude that \begin{align} &\sum_{t=0}^{\infty}\mathbb{E}\big[\big|f(\hat{X}_t)-f(\hat{Y}_t)\big|\mathds{1}_{t>\tau_1}\big] \nonumber \\&\leq \mathbb{E}\left[\tau_{\mathrm{coup}} \right] \nonumber \\ &= \sum_{x,y \in \Omega^{+}}\mathbb{E}\big[\tau_{\mathrm{coup}}\big| \hat{X}_{\tau_1}=x, \hat{Y}_{\tau_1}=y\big]\cdot \mathbb{P}(\hat{X}_{\tau_1} = x, \hat{Y}_{\tau_1} = y)\nonumber \\&\leq C(\beta)n\log{n}\sum_{x,y \in \Omega^{+}}\mathds{1}_{x\neq y}\mathbb{P}(\hat{X}_{\tau_1} = x, \hat{Y}_{\tau_1} = y)\nonumber \\ &= C(\beta)n\log{n} \mathbb{P}(\hat{X}_{\tau_1}\neq \hat{Y}_{\tau_1}) \nonumber \\ &= C(\beta)n\log{n}\mathbb{P}(\hat{X}_{\tau_1}\neq \hat{Y}_{\tau_1} | \tau_1 \geq K)\mathbb{P}(\tau_1 \geq K)\nonumber \\ &\quad + C(\beta)n\log{n}\mathbb{P}(\hat{X}_{\tau_1}\neq \hat{Y}_{\tau_1} | \tau_1 < K)\mathbb{P}(\tau_1 < K)\nonumber \\ &\leq C(\beta)n\log{n}\left[\mathbb{P}(\hat{X}_{\tau_1}\neq \hat{Y}_{\tau_1} | \tau_1 \geq K)\mathbb{P}(\tau_1 \geq K) + \mathbb{P}(\tau_1 < K)\right]\nonumber \\ &\leq \label{non_exponential_coupling} C(\beta)n\log{n}\left[\rho^K + \mathbb{P}(\tau_1 < K) \right]\,. \end{align} Here we have used Theorem~\ref{t:Levin} in the second inequality and Lemma~\ref{super_martingale_contraction} in the last inequality. By~\eqref{exponential_coupling} and~\eqref{non_exponential_coupling}, we conclude the result. \end{proof} Lemma~\ref{contraction_bound_lemma} bounds $|\Delta_i(h)|$ in terms of $\mathbb{P}(\tau_1 < K)$. We upper bound this probability in the following lemma. \begin{lemma} \label{lem:escape_time_bound} Let $x_0 \in \Omega$ be such that $m(x_0)\geq \langle s_2\rangle + \frac{2}{n}$. For every integer $K$, $$\mathbb{P}(\tau_1 < K) \leq K^2 \exp{\left(-c_1(\beta)n\right)}\,.$$ Here $c_1(\beta) >0$ is a constant that depends only on $\beta$. \end{lemma} The proof, which we defer to Appendix~\ref{subsec:escape_time_bound_proof}, is by coupling the magnetization chain to an appropriate birth-death chain and using hitting time results for birth-death chains. \begin{theorem}\label{contraction_bound} If $m(x_0) \geq \langle s_2\rangle + \frac{2}{n}$, then there are constants $c$ and $c'$ depending only on $\beta$ such that $$|\Delta_i(h)(x_0)| \leq \frac{4}{\gamma^{*}}\big(1+ c \cdot \exp(-c' n)\big)\,.$$ \end{theorem} \begin{proof} By Lemma~\ref{contraction_bound_lemma}, we have for every positive integer $K$, $$|\Delta_i(h)(x_0)| \leq \frac{1}{n}\frac{1}{1-\rho} + C(\beta)n\log{n}\left[\rho^K + \mathbb{P}(\tau_1 < K) \right]\,.$$ Clearly, for $n\geq 2$, $$\frac{1}{n}\frac{1}{1-\rho} = \frac{2n}{\gamma^{*}(n-1)} \leq \frac{4}{\gamma^{*}}\,. $$ By Lemma~\ref{lem:escape_time_bound}, $\mathbb{P}(\tau_1 < K) \leq K^2 \exp{(-c_1(\beta)n)} $. Taking $K = \lceil Cn^2\rceil$ for a sufficiently large constant $C$, both $\rho^K \leq \exp(-C\gamma^{*}(n-1)/2)$ and $K^2\exp(-c_1(\beta)n)$ are exponentially small in $n$, so the second term above is at most $\frac{4}{\gamma^{*}}\,c\exp(-c'n)$ for constants $c,c'$ depending only on $\beta$. \end{proof} We are now ready to prove Theorem~\ref{main_application}. \section{Proof of Theorem~\ref{main_application}} \label{s:IsingProof} We use all the notation developed in Section~\ref{s:ProofIdeas}. Let $h$ be the solution to the Poisson equation $(I-P)h = f - \mathbb{E}_{\mu}f$ with $f$ defined in \eqref{e:f} at the beginning of Section~\ref{s:ProofIdeas}. It follows from Theorem~\ref{main_theorem} and Lemma~\ref{spectral_bound} that \begin{equation} |\mathbb{E}_\mu f - \mathbb{E}_\nu f| \leq \|\tfrac{\beta}{n}A-\tfrac{\beta}{d}B\|_2 \mathbb{E}_{\nu}\frac{\|v_{\Delta(h)}\|_2}{2\sqrt{n}}\,, \label{main_bound2} \end{equation} where $v_{\Delta(h)} := (\Delta_1(h),\ldots,\Delta_n(h))^{\intercal}$.
By Jensen's inequality, \begin{align} \mathbb{E}_{\nu} \|v_{\Delta(h)}\|_2 &= \mathbb{E}_{\nu} \sqrt{\sum_{i=1}^n \Delta_i(h)^2} \leq \sqrt{\sum_{i=1}^n \mathbb{E}_{\nu}\Delta_i(h)^2}\,. \label{sqrt_jensen} \end{align} Now, using Lemmas~\ref{symmetric_solution} and~\ref{naive_bound} and Theorem~\ref{contraction_bound}, we conclude \begin{equation*} |\Delta_i(h)(x)| \leq \begin{cases} \frac{4}{\gamma^{*}}(1+o_n(1)) &\text{if $|m(x)| \geq \langle s_2\rangle + 2/n$}\\ C(\beta)n\log{n} &\text{otherwise}\,. \end{cases} \end{equation*} We take $0< \delta(\beta) < m^{*} - \langle s_2\rangle - \frac{2}{n}$ to depend only on $\beta$. By Lemma~\ref{expander_magnetisation_concentration}, there exists $\epsilon_0$ such that if $\epsilon < \epsilon_0$, then for some $c(\delta) > 0$ $$ \nu^{+}(|m-m^{*}| > \delta) \leq e^{-c(\delta)n}\,. $$ By Lemma~\ref{symmetric_solution}, $\Delta_i(h)^2$ is a symmetric function of $x$. Therefore, \begin{align} \mathbb{E}_{\nu}\Delta_i(h)^2 &= \mathbb{E}_{\nu^+}\Delta_i(h)^2 \nonumber \\ &\leq \frac{16}{(\gamma^{*})^2}(1+o(1)) + \nu^{+}(|m-m^{*}| > \delta) C(\beta)^2n^2\log^2{n}\nonumber \\ &\leq \frac{16}{(\gamma^{*})^2}(1+o(1)) + e^{-c(\delta)n} C(\beta)^2n^2\log^2{n}\nonumber \\ &= \frac{16}{(\gamma^{*})^2}(1+o(1))\,. \label{expectation_bound_1} \end{align} We note that by picking $C_{R} = \mathrm{sgn}(\rho^{(k)}[R] - \tilde{\rho}^{(k)}[R])$, we obtain that $$\frac{1}{{{n}\choose{k}}} \sum_{\substack{R \subset [n] \\ |R| = k}}|\rho^{(k)}[R] - \tilde{\rho}^{(k)}[R]| = 2k|\mathbb{E}_\mu f - \mathbb{E}_\nu f|\,.$$ The equation above along with~\eqref{sqrt_jensen},~\eqref{expectation_bound_1}, and~\eqref{main_bound2}, implies \begin{align*} \frac{1}{{{n}\choose{k}}} \sum_{\substack{R \subset [n] \\ |R| = k}}|\rho^{(k)}[R] - \tilde{\rho}^{(k)}[R]| &= 2k|\mathbb{E}_\mu f - \mathbb{E}_\nu f| \\ &\leq 2k\|\tfrac{\beta}{n}A-\tfrac{\beta}{d}B\|_2 \mathbb{E}_{\nu}\frac{\|v_{\Delta(h)}\|_2}{2\sqrt{n}} \\ &\leq \frac{k}{\sqrt{n}}\beta \left(\epsilon + \tfrac{1}{n}\right) \sqrt{\sum_{i}\mathbb{E}_{\nu}(\Delta_i(h))^2} \\ &\leq 4\frac{k\beta}{\gamma^{*}}(1+o_n(1))\left(\epsilon +\tfrac{1}{n}\right)\,, \end{align*} which completes the proof. \hfill \qed \section{Comparison to Naive Bounds} \label{s:naive} Using the symmetry inherent in the Curie-Weiss model, we sketch another method to obtain an inequality similar to (but much weaker than) the one in Theorem~\ref{main_application}. We don't give the proofs of the results below. All of them can be proved using definitions and standard techniques. Let $D_{\mathsf{SKL}}(\mu; \nu) = D_{\mathsf{KL}}(\mu\| \nu) + D_{\mathsf{KL}}(\nu\| \mu) $ denote the symmetric KL-divergence between measures $\mu$ and $\nu$. \begin{lemma} $D_{\mathsf{KL}} (\nu \| \mu)\leq D_{\mathsf{SKL}}(\mu;\nu) \leq n\|\tfrac{\beta}{n}A-\tfrac{\beta}{d}B\|_2 $ \label{KL_bound} \end{lemma} Let $X \sim \mu$ and $X^{\prime} \sim \mu$ be independent. Define $m_2(X,X^{\prime}) := \frac{1}{n} \sum_{i=1}^{n}X^i(X^{\prime})^i$\,. \begin{lemma} For the Curie-Weiss model at any fixed temperature, $$\log \mathbb{E}_\mu \exp{ \lambda(m^2-(m^{*})^2)} \leq O(\log{n}) + \frac{C_1(\beta) \lambda^{2}}{2n}$$ and $$\log\mathbb{E}_{\mu\otimes\mu}\exp{\lambda(m_2^2 -(m^*)^4)} \leq O(\log{n})+ \frac{C_2(\beta)}{2n}\lambda^2\,.$$ Here $C_1(\beta)$ and $C_2(\beta)$ are positive constants that depend only on $\beta$. \label{sub_gaussian} \end{lemma} Consider the set of probability distributions over $\Omega \times \Omega$, $\mathcal{S} = \{M: M \ll \mu\otimes \mu\}$.
Let $f: \Omega \times \Omega \to \mathbb{R}$ be defined by $f(x,x^{\prime}) = m_2(x,x^{\prime})^{2} - (m^*)^4$. By Gibbs' variational principle, $$\log{\mathbb{E}_{\mu\otimes \mu}\exp{\lambda f}} = \sup_{M \in \mathcal{S}} \lambda\mathbb{E}_M f - D_{\mathsf{KL}}(M\| \mu\otimes \mu)\,.$$ Taking $M = \nu \otimes \nu$ (whence $D_{\mathsf{KL}}(\nu \otimes \nu\| \mu\otimes \mu) = 2D_{\mathsf{KL}}(\nu \| \mu)$) and using Lemma~\ref{sub_gaussian}, we conclude that $$\lambda\mathbb{E}_{\nu \otimes \nu} f - 2D_{\mathsf{KL}}(\nu\|\mu) \leq C\log{n}+ \frac{C_2}{2n}\lambda^2\,.$$ Letting $\lambda = \frac{n\mathbb{E}_{\nu \otimes \nu} f}{C_2}$, we conclude that $$|\mathbb{E}_{\nu \otimes \nu} f | = O\left( \sqrt{\frac{\log{n}}{n}} +\sqrt{ \frac{D_{\mathsf{KL}}(\nu \| \mu)}{n}}\right)\,,$$ and taking $M = \mu \otimes \mu$ (whence $D_{\mathsf{KL}}(\mu \otimes \mu\| \mu\otimes \mu) = 0$) we conclude that $$|\mathbb{E}_{\mu \otimes \mu} f | = O\left(\sqrt{\frac{\log{n}}{n}}\right) \,.$$ Therefore, \begin{equation} |\mathbb{E}_{\mu \otimes \mu} f - \mathbb{E}_{\nu \otimes \nu} f | = O\left( \sqrt{\frac{\log{n}}{n}} +\sqrt{ \frac{D_{\mathsf{KL}}(\nu \| \mu)}{n}}\right)\,. \label{expectation_compare_1} \end{equation} By similar considerations, taking $g(x) = m^2 - (m^*)^2$, we conclude that \begin{equation} \left|\mathbb{E}_{\nu }[ g]- \mathbb{E}_{\mu} [g]\right| = O\left( \sqrt{\frac{\log{n}}{n}} +\sqrt{ \frac{D_{\mathsf{KL}}(\nu \| \mu)}{n}}\right)\,. \label{expectation_compare_2} \end{equation} For the Curie-Weiss model, by symmetry, $\rho_{ij} = \rho$ (the same for all $i\neq j$). Clearly, $$\mathbb{E}_\mu m^2 = \frac{1}{n} + \frac{2}{n^2} \sum_{i\neq j} \rho$$ $$\mathbb{E}_\nu m^2 = \frac{1}{n} + \frac{2}{n^2} \sum_{i\neq j} \tilde{\rho}_{ij}$$ $$\mathbb{E}_{\mu\otimes \mu} m_2^2 = \frac{1}{n} +\frac{2}{n^2} \sum_{i\neq j} \rho^2$$ $$\mathbb{E}_{\nu \otimes \nu} m_2^2 = \frac{1}{n} + \frac{2}{n^2} \sum_{i\neq j} \tilde{\rho}^2_{ij}\,.$$ Therefore, \begin{align*} \sum_{i\neq j} (\rho_{ij} -\tilde{\rho}_{ij})^2 &= \sum_{i\neq j} \big(\tilde{\rho}_{ij}^2 +\rho^2\big) - 2\rho\sum_{i\neq j} \tilde{\rho}_{ij} \\ &= \sum_{i\neq j} \big(\tilde{\rho}_{ij}^2+\rho^2\big) - 2\rho\left(\sum_{i\neq j} \rho + \frac{n^2}{2}(\mathbb{E}_\nu m^2 - \mathbb{E}_\mu m^2)\right) \\ &= \sum_{i \neq j} \big(\tilde{\rho}_{ij}^2 -\rho^2\big) - \rho n^2 (\mathbb{E}_\nu m^2 - \mathbb{E}_\mu m^2) \\ &= \frac{n^2}{2}\left(\mathbb{E}_{\nu \otimes \nu} m_2^2 - \mathbb{E}_{\mu \otimes \mu} m_2^2\right)-\rho n^2 (\mathbb{E}_\nu m^2 - \mathbb{E}_\mu m^2)\\ &\leq n^2 |\mathbb{E}_{\mu \otimes \mu} f - \mathbb{E}_{\nu \otimes \nu} f | + n^2|\mathbb{E}_\mu g - \mathbb{E}_\nu g|\,, \end{align*} where the last step uses $|\rho| \leq 1$. Using the inequality above,~\eqref{expectation_compare_1},~\eqref{expectation_compare_2} and Lemma~\ref{KL_bound}, we conclude that \begin{equation} \frac{1}{{{n}\choose{2}}}\sum_{i\neq j} (\rho_{ij} -\tilde{\rho}_{ij})^2 \leq O\left( \sqrt{\frac{\log{n}}{n}} +\sqrt{\|\tfrac{\beta}{n}A-\tfrac{\beta}{d}B\|_2 }\right)\,. \end{equation} When $\tfrac{\log{n}}{n} = O(\epsilon)$, the bound above reduces to: $$ \frac{1}{{{n}\choose{2}}}\sum_{i\neq j} (\rho_{ij} -\tilde{\rho}_{ij})^2 \leq O\left( \sqrt{\epsilon} \right)\,.$$ This is similar to the result in Theorem~\ref{main_application} but weaker by a fourth power. \section*{Acknowledgment} GB is grateful to Andrea Montanari and Devavrat Shah for discussions on related topics. Also we thank Gesine Reinert and Nathan Ross for exchanging manuscripts with us. \bibliographystyle{imsart-number}
\section{Introduction} \input{intro} \section{Method} \input{methodOverview} \section{Related work} \input{relatedWork} \section{Experiments} \input{experiments} \section{Conclusions} \input{conclusions} {\small \bibliographystyle{ieee}} \subsection{Probabilistic justification} \begin{figure} \centering \includegraphics[width=1.25in]{figures/graphicalModel} \caption{Independence assumptions as a graphical model \label{fig:gm}} \end{figure} The proposed loss function~\eqref{eq:ourLoss} arises from a natural probabilistic extension of~\eqref{eq:segmentationLoss} to the case where dense labels are unobserved. Specifically, we consider marginalizing~\eqref{eq:segmentationLoss} over the unobserved dense labels, given the observed sparse labels. This requires us to define $P(\ensuremath{y} \mid \ensuremath{\hat y}, \ensuremath{I})$. We submit that a natural way to do so is to introduce a new variable $B$ representing the image's semantic boundaries. This results in the graphical model proposed in Fig.~\ref{fig:gm}, representing the independence structure of $\ensuremath{y},\ensuremath{\hat y},\ensuremath{B},\ensuremath{I}$. It is then straightforward to show that~\eqref{eq:ourLoss} is equivalent to marginalizing~\eqref{eq:segmentationLoss} with respect to a certain distribution $P(\ensuremath{y} \mid \ensuremath{\hat y}, \ensuremath{I})$, after making a few assumptions.
First, the conditional independence structure depicted in Fig.~\ref{fig:gm} is assumed. $y(x)$ and $y(x')$ are assumed conditionally independent given $\ensuremath{\hat y}, \ensuremath{B},$ $\forall \ensuremath{x} \neq \ensuremath{x}' \in X$, which allows us to specify $P(\ensuremath{y} \mid \ensuremath{\hat y}, \ensuremath{B})$ in terms of marginal distributions and simplifies inference. Finally, $\ensuremath{B}$ is assumed to be a deterministic function of $\ensuremath{I}$, defined via parameters $\ensuremath{\phi}$. We note that the conditional independence assumptions made in Fig.~\ref{fig:gm} are significant. In particular, $\ensuremath{y}$ is assumed independent of $\ensuremath{I}$ given $\ensuremath{\hat y}$ and $\ensuremath{B}$. This essentially implies that there is at least one label for each connected component of the true label image, since knowing the underlying image usually does give us information as to the label of unlabeled connected components. In practice, strictly labeling every connected component is not necessary. However, training data that egregiously violates this assumption will likely yield poor results. \subsection{Random-walk inference} \label{sec:rw} Key to our method is the efficient computation of the random-walk hitting probabilities~\eqref{eq:pathLabelProb}. It is well-known that such probabilities can be computed efficiently via solving linear systems~\cite{grady2006random}. We briefly review this result here. The basic strategy is to compute the {\em partition function} $Z_{xl}$, which sums the right-hand-side of~\eqref{eq:pathProb} over all paths starting at $x$ and ending in a point labeled $l$. We can then derive a dynamic programming recursion expressing $Z_{xl}$ in terms of the same quantity at neighboring points $x'$. This recursion defines a set of sparse linear constraints on $Z$, which we can then solve using standard sparse solvers. We first define $\ensuremath{\Xi}_{xl} := \{\ensuremath{\xi} \in \ensuremath{\Xi} \mid \ensuremath{\xi}_0 = \ensuremath{x}, \, \ensuremath{\hat y}(\ensuremath{\xi}_{\ensuremath{\tau}(\ensuremath{\xi})}) = l\}$. $Z_{xl}$ is then defined as \begin{equation} Z_{\ensuremath{x} l} := \sum_{\ensuremath{\xi} \in \ensuremath{\Xi}_{\ensuremath{x} l}} \exp \left(- \sum_{t=0}^{\ensuremath{\tau}(\ensuremath{\xi})-1} \ensuremath{B}_{\ensuremath{\phi},\ensuremath{I}}(\ensuremath{\xi}_t) \right) \left(\tfrac{1}{4}\right)^{\tau(\xi)}. \end{equation} The first term in the inner sum can be factored out by introducing a new summation over the four nearest neighbors of $\ensuremath{x}$, denoted $x' \sim \ensuremath{x}$, easily yielding the recursion \begin{align} Z_{\ensuremath{x} l} & = \frac{1}{4} e^{- \ensuremath{B}_{\ensuremath{\phi},\ensuremath{I}}(\ensuremath{x})} \sum_{\substack{x' \sim \ensuremath{x}, \\ \ensuremath{\xi} \in \ensuremath{\Xi}_{x'l}}} \left(\tfrac{1}{4}\right)^{\tau(\xi)} e^{- \sum_{t=0}^{\ensuremath{\tau}(\ensuremath{\xi})-1} \ensuremath{B}_{\ensuremath{\phi},\ensuremath{I}}(\ensuremath{\xi}_t)} \notag\\ & = \frac{1}{4} e^{- \ensuremath{B}_{\ensuremath{\phi},\ensuremath{I}}(\ensuremath{x})} \sum_{x' \sim \ensuremath{x}} Z_{x'l}. \label{eq:zRec} \end{align} Boundary conditions must also be considered in order to fully constrain the solution. Paths exiting the image are assumed to have zero probability: hence, $Z_{\ensuremath{x} l} := 0, \, \forall \ensuremath{x} \notin \ensuremath{X}$.
Paths starting at a labeled point $\ensuremath{x} \in \ensuremath{\hat \mX}$ immediately terminate with probability 1; hence, $Z_{\ensuremath{x} l} := 1$ if $\ensuremath{\hat y}(\ensuremath{x}) = l$ and $Z_{\ensuremath{x} l} := 0$ otherwise, for all $\ensuremath{x} \in \ensuremath{\hat \mX}$. Solving this system yields a unique solution for $Z$, from which the desired probabilities are computed as \begin{equation} \ensuremath{P}_{\ensuremath{y}(\ensuremath{x}) \mid \ensuremath{\hat y}, \ensuremath{B}_{\ensuremath{\phi}, \ensuremath{I}}}(y') = \frac{Z_{\ensuremath{x} y'}}{\sum_{l \in \ensuremath{\mathcal L}} Z_{\ensuremath{x} l}}. \end{equation} \subsection{Random-walk backpropagation} \label{sec:rwDeriv} In order to apply backpropagation, we must ultimately compute the derivative of the loss with respect to a change in the boundary score prediction $B_{\ensuremath{\phi},\ensuremath{I}}$. Here, we focus on computing the derivative of the partition function $Z$ with respect to the boundary score $B$, the other steps being trivial. Since computing $Z$ amounts to solving a linear system, this turns out to be fairly simple. Let us write the constraints~\eqref{eq:zRec} in matrix form $A z = b$, such that $A$ is square, $z_i := Z_{\ensuremath{x}_i}$ (assigning a unique linear index $i$ to each $\ensuremath{x}_i \in \ensuremath{X}$, and temporarily omitting the dependence on $l$), and the $i$th rows of $A,b$ correspond to the constraints $C_i Z_{\ensuremath{x}_i} = \sum_{x' \sim \ensuremath{x}_i} Z_{x'}$, $Z_{\ensuremath{x}_i} = 0$, or $Z_{\ensuremath{x}_i} = 1$ (as appropriate), where $C_i := 4\exp (\ensuremath{B}_{\ensuremath{\phi},\ensuremath{I}}(\ensuremath{x}_i) )$ (the factor $4$ coming from the $\tfrac{1}{4}$ in~\eqref{eq:zRec}). Let us consider the effect of adding a small variation $\epsilon V$ to $A$, and then re-solving the system. It can be shown that \begin{equation} (A+\epsilon V)^{-1} b = z - \epsilon A^{-1} V z + O(\epsilon^2). \end{equation} Substituting the first-order dependence on $V$ into a Taylor expansion of the loss $L$ yields: \begin{align} L((A + \epsilon V)^{-1} b) & = L(z) - \left< \dod{L}{z}, \epsilon A^{-1} V z \right> + O(\epsilon^2) \notag\\ & = L(z) - \left< A^{-1 \intercal} \dod{L}{z} z^\intercal, \epsilon V \right> + O(\epsilon^2) \notag. \end{align} A first-order variation of $C_i$ corresponds to $V_i = -\delta_{ii}$, which implies that \begin{equation} \dod{L}{C_i} = \left( A^{\intercal -1} \dod{L}{z} \right)_i z_i. \end{equation} In summary, this implies that computing the loss derivatives with respect to the boundary score can be implemented efficiently by solving the sparse adjoint system $A^{\intercal} u = \od{L}{z}$ and multiplying the result pointwise by the partition function $z$ (so that $\od{L}{C_i} = u_i z_i$), which in turn allows us to efficiently incorporate sparse label propagation as a function of boundary prediction into an arbitrary deep-learning framework.
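As a concrete illustration, the following self-contained Python sketch (our own toy code, not from the paper; the helper names \texttt{rw\_hitting} and \texttt{rw\_backprop} are hypothetical, and a 4-connected grid with SciPy sparse solves is assumed) assembles the constraints of Section~\ref{sec:rw}, with interior rows implementing~\eqref{eq:zRec} directly, and the adjoint solve of Section~\ref{sec:rwDeriv}.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def rw_hitting(B, labels):
    """B: (H, W) boundary scores; labels: (H, W) ints, -1 = unlabeled.
    Returns A, the partition functions Z (H*W, L), and probabilities P."""
    H, W = B.shape
    N = H * W
    idx = lambda r, c: r * W + c
    rows, cols, vals = [], [], []
    for r in range(H):
        for c in range(W):
            i = idx(r, c)
            if labels[r, c] >= 0:        # seed row: Z_{x l} = [l = label(x)]
                rows.append(i); cols.append(i); vals.append(1.0)
                continue
            # interior row: 4 exp(B(x)) Z_x - sum_{x' ~ x} Z_{x'} = 0
            rows.append(i); cols.append(i); vals.append(4.0 * np.exp(B[r, c]))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < H and 0 <= cc < W:  # off-image: Z = 0 (drop term)
                    rows.append(i); cols.append(idx(rr, cc)); vals.append(-1.0)
    A = sp.csc_matrix((vals, (rows, cols)), shape=(N, N))
    L = labels.max() + 1
    Z = np.zeros((N, L))
    for l in range(L):
        b = (labels.ravel() == l).astype(float)  # right-hand side for label l
        Z[:, l] = spla.spsolve(A, b)
    P = Z / Z.sum(axis=1, keepdims=True)
    return A, Z, P.reshape(H, W, L)

def rw_backprop(A, z, dL_dz):
    """Adjoint solve: dL/dC_i = (A^{-T} dL/dz)_i * z_i, one label at a time."""
    u = spla.spsolve(A.T.tocsc(), dL_dz)
    return u * z
\end{verbatim}
A single sparse factorization of $A$ could be shared across the $|\ensuremath{\mathcal L}|$ right-hand sides and the adjoint solve, which is consistent with the efficiency argument above.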
\subsection{Uncertainty-weighting the loss} \label{sec:weights} An advantage of our method over prior work is that the random-walk method produces a distribution over dense labelings $\ensuremath{P}_{\ensuremath{y} \mid \ensuremath{\hat y}, \ensuremath{B}_{\ensuremath{\phi}, \ensuremath{I}}}$ given sparse labels, as opposed to a MAP estimate. These uncertainty estimates can be used to down-weight the loss in areas where the inferred labels may be incorrect, as illustrated in Fig.~\ref{fig:lossWeights}. In this example, the boundary predictor failed to correctly predict parts of object boundaries. In the vicinity of these gaps, the label distribution is uncertain, and the MAP estimate is incorrect. However, we can mitigate the problem by down-weighting the loss proportionally to the uncertainty estimate. More concretely, we actually minimize the following modification of the loss~\eqref{eq:ourLoss}: \begin{align} \sum_{\ensuremath{x} \in \ensuremath{X}} w(x) \textrm{KL}\kldx{\ensuremath{P}_{\ensuremath{y}(\ensuremath{x}) \mid \ensuremath{\hat y}, \ensuremath{B}_{\ensuremath{\phi}, \ensuremath{I}}}} {\ensuremath{Q}_{\ensuremath{\theta}, \ensuremath{I}}(\ensuremath{x})} + H(\ensuremath{P}_{\ensuremath{y}(\ensuremath{x}) \mid \ensuremath{\hat y}, \ensuremath{B}_{\ensuremath{\phi}, \ensuremath{I}}}), \label{eq:loss} \end{align} where we define $w(x) := \exp (- \alpha H(\ensuremath{P}_{\ensuremath{y}(\ensuremath{x}) \mid \ensuremath{\hat y}, \ensuremath{B}_{\ensuremath{\phi}, \ensuremath{I}}}))$, for some fixed parameter $\alpha$. This loss reduces to~\eqref{eq:ourLoss} for the case $w(x) = 1$. Although the KL component of the loss can be avoided by increasing the prediction entropy, the explicit entropy regularization term prevents trivial solutions of very large entropy everywhere. \begin{figure} \centering \includegraphics[width=2.5in]{figures/lossWeights} \caption{Visualization of loss uncertainty weights (blue = low weight, red = high weight) \label{fig:lossWeights}} \end{figure} \section{Errata} \begin{itemize} \item In Sec. 2.1, line 275 should read ``It is then straightforward to show that (2) is equivalent to marginalizing (1) \ldots''. \end{itemize} \subsection{Random-walk backpropagation} There are minor sign errors in Section 2.3. Equation 9 should read: \begin{equation} (A+\epsilon V)^{-1} b = z - \epsilon A^{-1} V z + O(\epsilon^2). \end{equation} The equation after Eq. 9 should read: \begin{align} L((A + \epsilon V)^{-1} b) & = L(z) - \left< \dod{L}{z}, \epsilon A^{-1} V z \right> + O(\epsilon^2) \notag\\ & = L(z) - \left< A^{-1 \intercal} \dod{L}{z} z^\intercal, \epsilon V \right> + O(\epsilon^2) \notag. \end{align} The next line should read: ``A first-order variation of $C_i$ corresponds to $V_i = -\delta_{ii}$.'' \section{Additional results} \subsection{Additional quantitative results for CONTEXT} Table~\ref{tab:context} shows additional results for the CONTEXT dataset. In particular, results are added for training on the sparse-loss baseline as well as for the optimal labeling of superpixels consistent with the scribble annotations. As expected, the model trained with the sparse loss (i.e., Eq (1) evaluated only at the labeled locations) exhibited severe overfitting. Also in line with our expectations, we observed that the accuracy of the optimal labeling of superpixels consistent with the scribble annotations (SPCON) decreased relative to the VOC 2012 dataset. We hypothesize this is due partly to the fact that the finer classes of CONTEXT require finer boundaries (we used the same superpixel segmentation parameters as we used for VOC 2012). We also note that the training-set segmentations inferred by \ourMethod{} (the row labeled {\em \ourMethod{} training $P_{\ensuremath{y} \mid \ensuremath{\hat y}, \ensuremath{B}}$, 0\% abstain}) were more accurate than those of SPCON ($75.5\%$ MIOU vs. $70.2\%$). This constitutes strong evidence for the necessity of learning boundaries (as in \ourMethod{}) as opposed to using superpixel segmentations for propagating sparse labels.
\begin{table} \centering \begin{tabular}{lll} Method & MIOU & w/ CRF\\\hline\hline \ourMethod{} $Q_{\ensuremath{\theta}, \ensuremath{I}}$ & 36.1 & 37.4 \\ \ourMethod{} training $P_{\ensuremath{y} \mid \ensuremath{\hat y}, B}$, 0\% abstain & 75.5 & . \\\hline Sparse-loss baseline & 26.6 & . \\ Train on dense ground truth & 31.7 & 32.4 \\ ScribbleSup (reported) & . & 36.1 \\\hline Opt. consistent s-p. labels (SPCON) & 70.2 & . \end{tabular} \caption{Results on CONTEXT validation set} \label{tab:context} \end{table} \subsection{Additional qualitative results} Expanded versions of the qualitative results figures are given in Fig.~\ref{fig:vocLarge} and Fig.~\ref{fig:contextLarge}. \begin{figure*} \centering \includegraphics[width=6in]{figures/vocValLarge} \caption{Validation set results on VOC 2012 dataset \label{fig:vocLarge}} \end{figure*} \begin{figure*} \centering \includegraphics[width=6in]{figures/contextTrainLarge} \caption{Training results on CONTEXT dataset \label{fig:contextLarge}} \end{figure*} \section{Derivations} \subsection{Probabilistic justification of loss} In Sec. 2.1, it is claimed that Eq. 2 follows from marginalizing Eq. 1 over the unobserved labels, given certain assumptions. This is proved here. Marginalizing the objective in Eq. 1 over $y$ is expressed as: \begin{align} \sum_{\tilde \ensuremath{y}, \tilde \ensuremath{B}} P_{\ensuremath{y}, \ensuremath{B} \mid \hat \ensuremath{y}, \ensuremath{I}}(\tilde \ensuremath{y}, \tilde \ensuremath{B}) \sum_{\ensuremath{x} \in \ensuremath{X}} H(\delta_{\tilde \ensuremath{y}(\ensuremath{x})} \,, \ensuremath{Q}_{\ensuremath{\theta}, \ensuremath{I}}(\ensuremath{x})). \end{align} Using the conditional independence assumptions and the assumption that $\ensuremath{B}$ is a deterministic function of $\ensuremath{I}$ yields \begin{align} \sum_{\tilde \ensuremath{y}} P_{\ensuremath{y} \mid \hat \ensuremath{y}, \ensuremath{B}_\ensuremath{I}}(\tilde \ensuremath{y}) \sum_{\ensuremath{x} \in \ensuremath{X}} H(\delta_{\tilde \ensuremath{y}(\ensuremath{x})} \,, \ensuremath{Q}_{\ensuremath{\theta}, \ensuremath{I}}(\ensuremath{x})), \end{align} where $\ensuremath{B}_\ensuremath{I}$ denotes the boundaries as a function of $\ensuremath{I}$. Assuming that $y(x)$ and $y(x')$ are conditionally independent given $\ensuremath{\hat y}, \ensuremath{B}$, $\forall \ensuremath{x} \neq \ensuremath{x}' \in \ensuremath{X}$ allows us to represent $P_{\ensuremath{y} \mid \ensuremath{\hat y}, \ensuremath{B}_{\ensuremath{I}}}$ as a product of factors: \begin{align} \sum_{\tilde \ensuremath{y}} \left( \prod_{\ensuremath{x}' \in \ensuremath{X}} P_{\ensuremath{y}(\ensuremath{x}') \mid \hat \ensuremath{y}, \ensuremath{B}_\ensuremath{I}}(\tilde \ensuremath{y}(\ensuremath{x}')) \right) \sum_{\ensuremath{x} \in \ensuremath{X}} H(\delta_{\tilde \ensuremath{y}(\ensuremath{x})} \,, \ensuremath{Q}_{\ensuremath{\theta}, \ensuremath{I}}(\ensuremath{x})). \end{align} Factorizing out sums equal to one results in \begin{align} \sum_{\ensuremath{x} \in \ensuremath{X}} \sum_{\tilde y(x)} P_{\ensuremath{y}(\ensuremath{x}) \mid \ensuremath{\hat y}, \ensuremath{B}_{\ensuremath{I}}}(\tilde \ensuremath{y}(\ensuremath{x})) H(\delta_{\tilde \ensuremath{y}(\ensuremath{x})} \,, \ensuremath{Q}_{\ensuremath{\theta}, \ensuremath{I}}(\ensuremath{x})). 
\end{align} Expanding the definition of cross-entropy yields the desired expression: \begin{align} \sum_{\ensuremath{x} \in \ensuremath{X}} H(P_{\ensuremath{y}(\ensuremath{x}) \mid \ensuremath{\hat y}, \ensuremath{B}_\ensuremath{I}}, \, \ensuremath{Q}_{\ensuremath{\theta}, \ensuremath{I}}(\ensuremath{x})) \end{align} \subsection{Fr\'{e}chet derivative of matrix inverse} This result is used in the derivation of the derivative of the random-walk partition function. An intuitive derivation is provided here. Suppose $Ax=b$. We wish to find an expansion linear in $\epsilon V$ for $\tilde x := (A + \epsilon V)^{-1}b$, assuming the inverse exists. \begin{align} (A + \epsilon V) \tilde x & = b\\ A \tilde x & = b - \epsilon V \tilde x\\ \tilde x & = A^{-1}b - \epsilon A^{-1} V \tilde x\\ & = A^{-1}b - \epsilon A^{-1} V (A^{-1} b - O(\epsilon))\\ & = A^{-1}b - \epsilon A^{-1} V A^{-1} b + O(\epsilon^2) \end{align} where the second-to-last line follows from recursive expansion of the same expression. \end{document}
\section{Introduction} \label{sec:intro} The theory of combinatorial limits is an evolving area of combinatorics. The most developed is the theory of graph limits, which is covered in detail in a recent monograph by Lov\'asz~\cite{Lov12}. Further results concerning many other combinatorial structures exist, e.g.\ for permutations~\cite{GleGKK15,HopKMRS13,HopKMS11,KraP13,ChaKNPSV20,KenKRW20,Kur22} or for partial orders~\cite{HlaMPP15,Jan11}. In the case of graphs, limits of dense graphs~\cite{BorCLSSV06,BorCLSV08,BorCLSV12,LovS06,LovS10}, also see~\cite{CorR20a,CorR20} for a general theory of limits of dense combinatorial structures, and limits of sparse graphs~\cite{AldL07,BenS01,BolR11,BorCG17,Ele07,HatLS14} evolved to a large extent independently. A notion of first order convergence was introduced by Ne\v set\v ril and Ossona de Mendez~\cite{NesO16,NesO20} as an attempt to unify convergence notions in the dense and sparse regimes. This general notion can be applied in the setting of any relational structures, see e.g.~\cite{HosNO17} for results on limits of mappings or~\cite{KarKLM17} on matroids. Informally speaking, a sequence of relational structures is \emph{first order convergent} if for any first order property, the density of $\ell$-tuples of the elements having this property converges; a formal definition is given in Subsection~\ref{subsec:fo}. Every first order convergent sequence of dense graphs is convergent in the sense of dense graph convergence from~\cite{BorCLSV08,BorCLSV12}, and every first order convergent sequence of graphs with bounded degree is convergent in the sense of Benjamini-Schramm convergence as defined in~\cite{BenS01}. A first order convergent sequence of graphs can be associated with an analytic limit object, which is referred to as a \emph{modeling limit} (see Subsection~\ref{subsec:fo} for a formal definition). However, not every first order convergent sequence of graphs has a modeling limit~\cite{NesO20} and establishing the existence of a modeling limit for first order convergent sequences of graphs is an important problem in relation to first order convergence of graphs: a modeling limit of a first order convergent sequence of dense graphs yields a graphon, the standard limit object for convergent sequences of dense graphs, and a modeling limit of a first order convergent sequence of sparse graphs that satisfies the strong finitary mass transport principle (see Subsection~\ref{subsec:fo} for the definition of the principle) yields a graphing, the standard limit object for convergent sequence of sparse graphs. Ne\v set\v ril and Ossona de Mendez~\cite{NesO20} conjectured that every first order convergent sequence of graphs from a nowhere-dense class of graphs has a modeling limit. Nowhere-dense classes of graphs include many sparse classes of graphs, in particular, classes of graphs with bounded degree and minor closed classes of graphs; see~\cite{NesO08a,NesO08b,NesO08c,NesO11,NesO12book} for further details and many applications. The existence of modeling limits for convergent sequences of graphs from a monotone nowhere-dense class of graphs was proven in~\cite{NesO19}. \begin{theorem}[Ne\v set\v ril and Ossona de Mendez~\cite{NesO19}] \label{thm:general} Let $\CC$ be a monotone class of graphs. Every first order convergent sequence of graphs from $\CC$ has a modeling limit if and only if $\CC$ is nowhere-dense. 
\end{theorem} Theorem~\ref{thm:general} gives little control over the measures of vertex subsets in a modeling limit that would naturally have the same size in finite graphs, e.g., two sets joined by a perfect matching. The strong finitary mass transport principle, vaguely speaking, translates such natural constraints on the sizes of vertex subsets to measures of the corresponding vertex subsets in a modeling limit. We refer to Subsection~\ref{subsec:fo} for further details. Ne\v set\v ril and Ossona de Mendez~\cite{NesO19} conjectured that Theorem~\ref{thm:general} can be strengthened by adding the condition that the modeling limits satisfy the strong finitary mass transport principle. \begin{conjecture}[{Ne\v set\v ril and Ossona de Mendez~\cite[Conjecture 6.1]{NesO19}}] Let $\CC$ be a nowhere-dense monotone class of graphs. Every first order convergent sequence of graphs from $\CC$ has a modeling limit that satisfies the strong finitary mass transport principle. \end{conjecture} The existence of modeling limits satisfying the strong finitary mass transport principle is known for first order convergent sequences of trees of bounded depth and more generally sequences of graphs with bounded tree-depth~\cite{NesO20}, sequences of trees~\cite{NesO16} and sequences of graphs with bounded path-width~\cite{GajHKKKOOT17}, which can be interpreted in plane trees. Our main result (Theorem~\ref{thm:main}) establishes the existence of modeling limits satisfying the strong finitary mass transport principle for sequences of graphs with bounded tree-width. \newcounter{storemaintheorem} \setcounter{storemaintheorem}{\thetheorem} \begin{theorem} \label{thm:main} Let $k$ be a positive integer. Every first order convergent sequence of graphs with tree-width at most $k$ has a modeling limit satisfying the strong finitary mass transport principle. \end{theorem} While it may seem at first sight that a proof of Theorem~\ref{thm:main} could be an easy combination of a proof of the existence of modeling limits satisfying the strong finitary mass transport principle for trees from~\cite{NesO16} and for graphs with bounded path-width~\cite{GajHKKKOOT17}, this is actually not the case. In fact, the argument in~\cite{GajHKKKOOT17} is based on an interpretation of graphs with bounded path-width in plane trees, i.e., the results in both~\cite{GajHKKKOOT17} and~\cite{NesO16} on the existence of modeling limits satisfying the strong finitary mass transport principle do not go significantly beyond the class of trees. We have not been able to find a first order interpretation of graphs with bounded tree-width in (plane) trees, and we believe that this is related to the possibly complex structure of vertex cuts in such graphs, which needs to be addressed using a more general approach. Specifically, the proof of Theorem~\ref{thm:main} is based on constructing modeling limits of rooted $k$-trees, whose orientation essentially encodes the universal weak coloring orders studied in relation to sparse classes of graphs~\cite{NesO12book}, so the proof may in principle be amenable to an extension to graph classes with bounded expansion. We remark that our arguments can be easily adapted to show the existence of modeling limits of first order convergent sequences of graphs with bounded tree-width that are residual, which could yield an alternative proof of Theorem~\ref{thm:main} when combined with the framework described in~\cite[Theorem 1]{NesO16}.
The proof of Theorem~\ref{thm:main}, similarly to the proof of the existence of modeling limits of plane trees in~\cite{GajHKKKOOT17}, has two steps: the decomposition step, focused on distilling first order properties of graphs in the sequence, and the composition step, focused on constructing a modeling limit consistent with the identified first order properties. These two steps also appear implicitly in~\cite{NesO16,NesO20}; in particular, the decomposition step is strongly related to the comb structure results presented in~\cite{NesO16,NesO20}. The arguments of the decomposition step of the proof of Theorem~\ref{thm:main} are analogous to those used in~\cite[Subsection 3.1]{GajHKKKOOT17}. The composition step, however, requires a conceptual extension of the techniques used for modeling limits of trees, as we had to deal with vertex separations of sizes larger than one. This was achieved by a careful analysis of different types of paths arising in the orientation corresponding to a weak coloring order. This analysis allows defining the edge set of a modeling limit in a measurable and consistent way for vertex separations of sizes larger than one. The paper is organized as follows. In Section~\ref{sec:notation}, we introduce the notation used in the paper, in particular, notions related to graphs with bounded tree-width, first order convergence, and model theory. In Section~\ref{sec:decompose}, we overview the decomposition step from~\cite{GajHKKKOOT17} and phrase the presented results in the context of graphs with bounded tree-width. The core of the paper is Section~\ref{sec:compose}, where we construct modeling limits of edge-colored rooted $k$-trees that satisfy the strong finitary mass transport principle. This construction is then used in Section~\ref{sec:main} to prove Theorem~\ref{thm:main}. \section{Notation} \label{sec:notation} We first introduce notation specific to this paper. For the notions not defined here, we refer the reader to \cite{Die10} and~\cite{EbbF05} for the graph theory terminology and the model theory terminology, respectively. Some of the less common general notation that we use here includes the following. The set of the first $k$ positive integers is denoted by $[k]$ and $\NN^*$ denotes the set $\NN\cup\{\infty\}$. If $x$ is a real number and $z$ is a positive real, we write $x\mod z$ for the unique real $x'\in [0,z)$ such that $x=x'+kz$ for some $k\in\ZZ$. Finally, if $G$ is a graph, then $|G|$ stands for the number of vertices of $G$. A graph has tree-width at most $k$ if and only if it is a subgraph of a $k$-tree. A \emph{$k$-tree} can be defined recursively as follows: each complete graph with at most $k$ vertices is a $k$-tree, and if $G$ is a $k$-tree, then any graph obtained from $G$ by adding a vertex adjacent to $k$ vertices forming a complete subgraph is also a $k$-tree. We next define a notion of a \emph{rooted $k$-tree}; the definition is also recursive. Any transitive tournament with at most $k$ vertices is a rooted $k$-tree, and if $G$ is a rooted $k$-tree and $v_1,\ldots,v_k$ are vertices that form a transitive tournament, then the graph obtained from $G$ by adding a new vertex $v$ and adding an edge directed from $v$ to $v_i$ for every $i\in [k]$ is also a rooted $k$-tree. Observe that every rooted $k$-tree is an acyclic orientation of a $k$-tree (the converse need not be true).
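To make the recursive definition concrete, the following minimal Python sketch (our own illustration; the helper name \texttt{random\_rooted\_ktree} is hypothetical and the sketch is not used in any proof, and $n\ge k$ is assumed) grows a random rooted $k$-tree, storing for each added vertex the ordered tuple of vertices it points to; in the terminology introduced next, the entry at index $i-1$ of this tuple is the $i$-parent of the vertex.
\begin{verbatim}
import random

def random_rooted_ktree(n, k, seed=0):
    """Grow a rooted k-tree on vertices 0..n-1.  Vertices 0..k-1 form the
    initial transitive tournament 0 -> 1 -> ... -> k-1; every later vertex
    points to a k-clique that induces a transitive tournament."""
    rng = random.Random(seed)
    cliques = [tuple(range(k))]   # tuples ordered along their directed path
    parent = {}
    for v in range(k, n):
        base = rng.choice(cliques)
        parent[v] = base          # base[i-1] is the i-parent of v (an i-edge)
        # each k-subset of {v} + base containing v again induces a transitive
        # tournament, with v first since v has an edge to every other vertex
        for d in range(k):
            cliques.append((v,) + base[:d] + base[d + 1:])
    return parent

print(random_rooted_ktree(6, 2))  # e.g. {2: (0, 1), 3: (2, 1), ...}
\end{verbatim}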
In the setting above, assume that $v_1\cdots v_k$ is a directed path; we say that the vertex $v_i$, $i\in [k]$, is the \emph{$i$-parent} of the vertex $v$ and the vertex $v$ is an \emph{$i$-child} of $v_i$. We extend this notation to the initial tournament by setting the out-neighbors of each vertex to be its $1$-parent, $2$-parent, etc.\ in a way that a vertex with $\ell$ out-neighbors has an $i$-parent for every $i\in [\ell]$ and there is an edge from its $i$-parent to its $i'$-parent for every $i<i'$. An edge from $v$ to its $i$-parent is referred to as an \emph{$i$-edge}. Hence, every edge is an $i$-edge for some $i\in [k]$. To simplify our exposition, we say that $e$ is an \emph{$A$-edge} for $A\subseteq [k]$ if $e$ is an $i$-edge for some $i\in A$. Finally, when $i$ is not important for our considerations, we may just say that $v'$ is a \emph{parent} of $v$, and $v$ is a \emph{child} of $v'$, if there is a directed edge from $v$ to $v'$, i.e., when $v'$ is an $i$-parent of $v$ for some $i\in [k]$. We state several simple properties of rooted $k$-trees in the form of propositions to be able to refer to them later in our exposition. \begin{proposition} \label{prop:tw} The tree-width of a graph $G$ is the minimum $k$ for which there exists an orientation of $G$ that is a spanning subgraph of a rooted $k$-tree. \end{proposition} \begin{proposition} \label{prop:parent} Let $G$ be a rooted $k$-tree, $v$ a vertex of $G$, and $w$ and $w'$ the $i$-parent and $i'$-parent of $v$ for some $i<i'$. Then the vertex $w'$ is an $i''$-parent of $w$ for some $i''\le i'$. \end{proposition} In our exposition, we will consider rooted $k$-trees with edges colored with two colors, which we will refer to as \emph{$2$-edge-colored $k$-trees}. Hence, each edge of a $2$-edge-colored $k$-tree is an $i$-edge for some $i\in [k]$ and also has one of the two colors. In the proof of our main result, given a convergent sequence of graphs $(G_n)_{n\in\NN}$ with tree-width $k$, we construct modeling limits for a convergent sequence of $2$-edge-colored $k$-trees such that the graphs $G_n$ are spanning subgraphs of the $2$-edge-colored $k$-trees. The $2$-coloring of the edges of the $2$-edge-colored $k$-trees encodes which edges of the $k$-trees of the sequence are also edges of the graphs $G_n$. \subsection{First order convergence} \label{subsec:fo} We now formally define the notion of first order convergence. This notion can be used for all relational structures and beyond, e.g., matroids~\cite{KarKLM17}; however, for simplicity, we limit our exposition to graphs, which may (but need not) be directed and edge-colored. In particular, in the case of $2$-edge-colored $k$-trees, we consider relational structures with $k+2$ binary relations such that $k$ of them encode the relation between vertices and their $i$-parents, $i\in [k]$, and two binary relations encode the edge-coloring with two colors. If $\psi$ is a first order formula with $\ell$ free variables and $G$ is a (finite) graph, then the \emph{Stone pairing} $\langle \psi,G\rangle$ is the probability that a uniformly chosen $\ell$-tuple of vertices of $G$ satisfies $\psi$. A sequence $(G_n)_{n\in\NN}$ of graphs is \emph{first order convergent} if the limit $\lim\limits_{n\to\infty}\langle \psi,G_n\rangle$ exists for every first order formula $\psi$. Every sequence of graphs has a first order convergent subsequence, see e.g.~\cite{NesO20,NesO12,NesO16a}.
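As a toy illustration of the Stone pairing (our own example, not taken from the paper), the following Python snippet computes $\langle\psi,G\rangle$ by brute force for the formula $\psi(x_1,x_2)$ stating that $x_1$ and $x_2$ are adjacent: on the path with three vertices, $4$ of the $9$ ordered pairs satisfy $\psi$, so $\langle\psi,G\rangle = 4/9$.
\begin{verbatim}
from itertools import product

def stone_pairing_adjacency(adj):
    """adj: dict mapping a vertex to its set of neighbors.  Returns the
    probability that a uniformly chosen ordered pair (u, v) is an edge."""
    V = list(adj)
    sat = sum(1 for u, v in product(V, V) if v in adj[u])
    return sat / len(V) ** 2

path3 = {0: {1}, 1: {0, 2}, 2: {1}}    # the path on three vertices
print(stone_pairing_adjacency(path3))  # 0.444... = 4/9
\end{verbatim}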
A \emph{modeling} $M$ is a (finite or infinite) graph with a standard Borel space used as its vertex set equipped with a probability measure such that for every first order formula $\psi$ with $\ell$ free variables, the set of all $\ell$-tuples of vertices of $M$ satisfying $\psi$ is measurable in the product measure. In analogy to the graph case, the \emph{Stone pairing} $\langle \psi,M\rangle$ is the probability that a randomly chosen $\ell$-tuple of vertices satisfies $\psi$, i.e., $\langle \psi,M\rangle$ is the measure of the set containing $\ell$-tuples of vertices $v_1,\ldots,v_\ell$ such that $M\models\psi(v_1,\ldots,v_{\ell})$. If a finite graph is viewed as a modeling with a uniform discrete probability measure on its vertex set, then the Stone pairings for the graph and the modeling obtained in this way coincide. A modeling $M$ is a \emph{modeling limit} of a first order convergent sequence $(G_n)_{n\in\NN}$ if $$\lim_{n\to\infty}\langle \psi,G_n\rangle=\langle\psi,M\rangle$$ for every first order formula $\psi$. Every modeling limit $M$ of a first order convergent sequence of graphs satisfies the \emph{finitary mass transport principle}. This means that for any two given first order formulas $\psi$ and $\psi'$, each with one free variable, such that every vertex $v$ satisfying $\psi(v)$ has at least $a$ neighbors satisfying $\psi'$ and every vertex $v$ satisfying $\psi'(v)$ has at most $b$ neighbors satisfying $\psi$, it holds that \[a\langle\psi,M\rangle\le b\langle\psi',M\rangle\,\mbox{.}\] For further details, we refer the reader to~\cite{NesO16}. A stronger variant of this principle, known as the \emph{strong finitary mass transport principle}, requires that the following holds for any measurable subsets $A$ and $B$ of the vertices of $M$: if each vertex of $A$ has at least $a$ neighbors in $B$ and each vertex of $B$ has at most $b$ neighbors in $A$, then \[a\mu(A)\le b\mu(B)\,\mbox{,}\] where $\mu$ is the probability measure of $M$. Note that the assertion of the finitary mass transport principle requires this inequality to hold only for first order definable subsets of vertices. The strong finitary mass transport principle is satisfied by any finite graph when viewed as a modeling, but it need not hold for modelings in general; for example, the modeling with the vertex set $[0,1]$ and the edge set formed by a perfect matching between the Cantor set (or any other uncountable set of measure 0) and its complement does not satisfy the strong finitary mass transport principle (taking $A$ to be the complement of the Cantor set, $B$ the Cantor set and $a=b=1$ would force $1=\mu(A)\le\mu(B)=0$). In particular, the existence of a modeling limit of a first order convergent sequence of graphs does not a~priori imply the existence of a modeling limit satisfying the strong finitary mass transport principle, and indeed the modeling limits constructed in~\cite{NesO19} do not satisfy the strong finitary mass transport principle in general. The importance of the strong finitary mass transport principle comes from its relation to graphings, which are limit representations of Benjamini-Schramm convergent sequences of bounded degree graphs, as a modeling limit of a first order convergent sequence of bounded degree graphs must satisfy the strong finitary mass transport principle in order to be a graphing, see~\cite[Subsection~3.2]{NesO20}.
We remark that this space is homeomorphic to the Stone space of $\mathrm{FO}_1^{\mathrm{local}}$ studied in~\cite{NesO16,NesO19,NesO20} (the formal definition of $\mathrm{FO}_1^{\mathrm{local}}$ is below). Hintikka sentences are maximally expressive sentences with a certain quantifier depth, and, informally speaking, Hintikka chains form a local variant of this notion with a single free variable. Consider a signature that includes the signature of graphs and is finite except that it may contain countably many unary relational symbols. In the exposition of our main argument, the signature will consist of $k+2$ binary relational symbols for $i$-edges, $i\in [k]$, and the $2$-edge-coloring, and countably many unary relation symbols $U_i$, $i\in\NN$, which will always contain at most one vertex in our setting and which will be used to pinpoint some special vertices in graphs that we consider. A first order formula $\psi$ where each quantifier is restricted to the neighbors of one of the vertices is said to be \emph{local}, i.e., it only contains quantifiers of the form $\forall_z x$ or $\exists_z x$, where $x$ is required to be a neighbor of $z$. For example, we can express that a vertex $z$ has a neighbor of degree exactly one with the local formula $\psi(z)\equiv\exists_z x\;\forall_x y\; y=z$. Note that a formula is local if and only if its satisfaction depends on a finite neighborhood of its free variables. The quantifier depth of a local formula is defined in the usual way. A formula is a \emph{$d$-formula} if its quantifier depth is at most $d$ and it does not involve any unary predicates $U_i$ with $i> d$; a $d$-formula with no free variables is referred to as a \emph{$d$-sentence}. The same argument as in the textbook case of first order sentences yields that there are only finitely many non-equivalent local $d$-formulas with one free variable for every $d$. Let $\mathrm{FO}_1^{\mathrm{local}}$ be a maximal set of non-equivalent local formulas with one free variable, i.e., a set containing one representative from each equivalence class of local formulas with one free variable. If $d$ is an integer, the \emph{$d$-Hintikka type} of a vertex $v$ of a (not necessarily finite) graph $G$ is the set of all $d$-formulas $\psi\in\mathrm{FO}_1^{\mathrm{local}}$ such that $G\models\psi(v)$. We can visualize relations of different Hintikka types by an infinite rooted tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$: the non-root vertices of the tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$ are associated with all $d$-Hintikka types for $d\in\NN$, every vertex associated with a $1$-Hintikka type is adjacent to the root, and a vertex associated with a $d$-Hintikka type for $d\ge 2$ is adjacent to the unique vertex associated with the $(d-1)$-Hintikka type containing formulas equivalent to the $(d-1)$-formulas contained in the $d$-Hintikka type. Observe that every vertex of the infinite rooted tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$ has a finite degree. A formula $\psi\in\mathrm{FO}_1^{\mathrm{local}}$ with quantifier depth at most $d$ is called a \emph{$d$-Hintikka formula} if there exist a (not necessarily finite) graph $G$ and a vertex $v$ of $G$ such that $\psi$ is equivalent to the conjunction of the formulas in the $d$-Hintikka type of the vertex $v$ of the graph $G$ (note that $\psi$ must actually be equivalent to one of the formulas in the $d$-Hintikka type of $v$). Fix a $d$-Hintikka formula $\psi$.
Observe that if $v$ is a vertex of a graph $G$ and $v'$ is a vertex of a graph $G'$ such that $G\models\psi(v)$ and $G'\models\psi(v')$, then the $d$-Hintikka types of $v$ and $v'$ are the same. So, we can speak of the \emph{$d$-Hintikka type} of $\psi$; it consists of all $d$-formulas $\psi'$ such that if $G\models\psi(v)$, then $G\models\psi'(v)$ for any graph $G$ and any vertex $v$ of $G$. A \emph{Hintikka chain} is a sequence $(\psi_d)_{d\in\NN}$ such that $\psi_d$ is a $d$-Hintikka formula and the $d$-Hintikka type of $\psi_d$ contains $\psi_1,\ldots,\psi_{d-1}$. In the infinite rooted tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$, Hintikka chains are in one-to-one correspondence with infinite paths from the root: if $(\psi_d)_{d\in\NN}$ is a Hintikka chain, then the path is formed by the root and the vertices associated with the $d$-Hintikka types of the formulas $\psi_d$. In particular, for every $d$-Hintikka formula $\psi_d$, there are only finitely many $(d+1)$-Hintikka formulas $\psi_{d+1}$ such that the $(d+1)$-Hintikka type of $\psi_{d+1}$ contains $\psi_d$. Following the standard terminology related to infinite rooted trees, we refer to infinite paths from the root in the tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$ as \emph{rays}, and when we speak of a \emph{subtree} of $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$, we mean a subgraph formed by a vertex and all its descendants. Note that if $G$ is a finite graph and $v$ a vertex of $G$, then the Hintikka chain $(\psi_d)_{d\in\NN}$ such that $G\models\psi_d(v)$ satisfies that there exists $d_0\in\NN$ such that $\psi_d=\psi_{d_0}$ for all $d\ge d_0$. However, this is not true for infinite graphs $G$. If $\psi$ is a $d$-Hintikka formula, the set $\{(\psi_i)_{i\in\NN}:\psi_d=\psi\}$ of Hintikka chains is called \emph{basic}. Observe that basic sets of Hintikka chains correspond to the sets of rays in $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$ that lead to the same subtree of the tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$. The set of Hintikka chains (formed by Hintikka formulas with a fixed signature) can be equipped with the topology with the base formed by basic sets; basic sets are clopen in this topology. This defines a Polish space on the set of Hintikka chains, which is homeomorphic to the Stone space of $\mathrm{FO}_1^{\mathrm{local}}$ studied in~\cite{NesO20,NesO12}. Also note that the topology that we have just defined is the same as the natural topology on the space of rays of an infinite rooted tree. Observe that for every formula $\psi\in\mathrm{FO}_1^{\mathrm{local}}$, say $\psi$ is a $d$-formula, the set of all Hintikka chains $(\psi_i)_{i\in\NN}$ such that the $d$-Hintikka type of $\psi_{d}$ contains $\psi$ is a finite union of basic sets. In particular, the set of all such Hintikka chains is clopen in the just defined topology.
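Although we do not need it later, it may help the reader to note that this topology is induced by a standard ultrametric on rays: for distinct Hintikka chains $\Psi=(\psi_d)_{d\in\NN}$ and $\Psi'=(\psi'_d)_{d\in\NN}$, set \[\mathrm{dist}(\Psi,\Psi')=2^{-\min\{d\in\NN\,:\,\psi_d\not=\psi'_d\}}\,\mbox{.}\] A sequence of Hintikka chains then converges if and only if its entries eventually stabilize coordinatewise, and the basic set determined by a $d$-Hintikka formula on a chain $\Psi$ is exactly the ball $\{\Psi':\mathrm{dist}(\Psi,\Psi')\le 2^{-(d+1)}\}$. Moreover, since every vertex of the tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$ has finite degree, the space of Hintikka chains is compact by K\"onig's Lemma.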
\section{Decomposition step} \label{sec:decompose} Our main argument consists of two steps, which we refer to, in analogy with~\cite{GajHKKKOOT17}, as the decomposition step and the composition step. In this section, we present the former, which is analogous to that for plane trees given in~\cite[Section 3.1]{GajHKKKOOT17}; this step is also closely related to comb structure results presented in~\cite{NesO20,NesO16}. As the decomposition step follows \cite[Section 3.1]{GajHKKKOOT17} closely and the arguments are analogous, we present the main ideas and refer the reader to~\cite[Section 3.1]{GajHKKKOOT17} for further details. We begin by recalling the following structural result from~\cite{NesO20}. \begin{theorem} \label{thm:major} Let $\CC$ be a nowhere-dense class of graphs. For every $d\in\NN$ and $\varepsilon>0$, there exists $N\in\NN$ such that every graph $G\in\CC$ contains a set $S$ of at most $N$ vertices such that the $d$-neighborhood of every vertex of $G\setminus S$ contains at most $\varepsilon |G|$ vertices. \end{theorem} We will enhance the signature of graphs by countably many unary relations $U_i$, $i\in\NN$; these relations will identify vertices described in Theorem~\ref{thm:major}, whose removal makes the considered sequence of graphs residual in the sense of~\cite{NesO16,NesO19}. In particular, we will require that no vertex satisfies more than one of the relations $U_i$, $i\in\NN$, and that each relation is satisfied by at most one vertex. To simplify our notation, we will write $c_i$ for the vertex satisfying $U_i$ if it exists. Following the terminology used in~\cite{NesO19}, we will refer to graphs with unary relations $U_i$, $i\in\NN$, as \emph{marked graphs}. A sequence of marked graphs $(G_n)_{n\in\NN}$ is {\em null-partitioned} if the following two conditions hold (the definition is analogous to that for plane trees in~\cite{GajHKKKOOT17}): \begin{itemize} \item for every $k\in\NN$, there exists $n_0\in\NN$ such that there exist vertices satisfying $U_1,\ldots,U_k$ in $G_n$ for every $n\ge n_0$, i.e., the vertices $c_1,\ldots,c_k$ are well-defined in $G_n$ for every $n\ge n_0$, and \item for every $\varepsilon>0$, there exist integers $n_0$ and $k_0$ such that the size of each component of $G_n\setminus\{c_1,\ldots,c_{k_0}\}$ is at most $\varepsilon |G_n|$ for every $n\ge n_0$; in particular, there exist vertices satisfying $U_1,\ldots,U_{k_0}$ in $G_n$ for every $n\ge n_0$. \end{itemize} The following lemma, which is a counterpart of~\cite[Lemma 4]{GajHKKKOOT17} and whose proof follows the same lines, readily follows from Theorem~\ref{thm:major}; it corresponds to the existence of marked quasi-residual sequences used in~\cite{NesO19}. \begin{lemma} \label{lm:unary} Let $\CC$ be a nowhere-dense monotone class of graphs. Let $(G_n)_{n\in\NN}$ be a first order convergent sequence of graphs $G_n\in\CC$ such that the orders of $G_n$ tend to infinity. There exists a first order convergent null-partitioned sequence $(G'_n)_{n\in\NN}$ obtained from a subsequence of $(G_n)_{n\in\NN}$ by interpreting the unary relational symbols $U_i$, $i\in\NN$. \end{lemma} First order properties are closely linked with Ehrenfeucht-Fra{\"\i}ss\'e games~\cite{EbbF05}. It is well-known that two structures satisfy the same first order sentences with quantifier depth $d$ if and only if the duplicator has a winning strategy for the $d$-round Ehrenfeucht-Fra{\"\i}ss\'e game played on the two structures, in our setting, on two $2$-edge-colored rooted marked $k$-trees. The $d$-round Ehrenfeucht-Fra{\"\i}ss\'e game is played by two players, called the spoiler and the duplicator. In the $i$-th round, the spoiler chooses a vertex of one of the $k$-trees (the spoiler can choose a different $k$-tree in different rounds) and places the $i$-th pebble on that vertex. The duplicator responds by placing the $i$-th pebble on a vertex of the other $k$-tree.
At the end of the game, the duplicator wins if the mapping between the subforests induced by the pebbled vertices that maps the vertex with the $i$-th pebble in the first $k$-tree to the vertex with the $i$-th pebble in the other $k$-tree is an isomorphism preserving the $j$-parent relations, $j\in [k]$, the unary relations $U_i$, $i\in\NN$, and the edge colors. Two $2$-edge-colored rooted marked $k$-trees $G$ and $G'$ satisfy the same $d$-sentences if and only if the duplicator has a winning strategy for the $d$-round Ehrenfeucht-Fra{\"\i}ss\'e game played on the two $2$-edge-colored rooted marked $k$-trees $G$ and $G'$ with all unary relations $U_i$, $i>d$, set to be empty. The following theorem is the counterpart of~\cite[Theorem 6]{GajHKKKOOT17}. The theorem can be proven using an argument analogous to that used to prove \cite[Lemma 5]{GajHKKKOOT17}, which is based on the link between $d$-sentences and the $d$-round Ehrenfeucht-Fra{\"\i}ss\'e games presented above. However, in our (simpler) setting not involving ordered edge incidences, the statement of the theorem for each $d\in\NN$ directly follows from Hanf's Theorem applied to the relational structures obtained by removing the unary relations $U_i$ with $i>d$. \begin{theorem} \label{thm:hanf} For every integer $d$, there exist integers $D$ and $\Gamma$ such that the following holds for any two (not necessarily finite) $2$-edge-colored rooted marked $k$-trees $G$ and $G'$: if, for each $D$-Hintikka type, the $k$-trees $G$ and $G'$ have the same number of vertices of this type or the number of vertices of this type is at least $\Gamma$ in both $G$ and $G'$, then the sets of $d$-sentences satisfied by $G$ and $G'$ are the same. \end{theorem} Fix a first order convergent null-partitioned sequence of $2$-edge-colored rooted marked $k$-trees $(G_n)_{n\in\NN}$. We associate the sequence with two functions, $\nu:\mathrm{FO}_1^{\mathrm{local}}\to\NN^*$ and $\mu:\mathrm{FO}_1^{\mathrm{local}}\to [0,1]$, which we will refer to as the \emph{discrete Stone measure} and the \emph{Stone measure} of the sequence (strictly speaking, $\nu$ and $\mu$ are not measures as they are not defined on a $\sigma$-algebra; however, each can be extended to a measure on the $\sigma$-algebra formed by Borel sets of Hintikka chains as argued in the next paragraph). If $\psi\in\mathrm{FO}_1^{\mathrm{local}}$, then $\nu(\psi)$ is the limit of the number of vertices $u$ such that $G_n\models\psi(u)$, and $\mu(\psi)$ is the limit of $\langle\psi,G_n\rangle$. In analogy with the finite case, if $M$ is a modeling, we define its \emph{discrete Stone measure} and its \emph{Stone measure} by setting $\nu(\psi)=|\{u\mbox{ s.t. }M\models\psi(u)\}|$ and $\mu(\psi)=\langle\psi,M\rangle$ for $\psi\in\mathrm{FO}_1^{\mathrm{local}}$. As we have already mentioned, the Stone measure $\mu$ defined in the previous paragraph yields a measure on the $\sigma$-algebra formed by Borel sets of Hintikka chains. This follows from~\cite[Theorem 8]{NesO20} but we sketch a short self-contained argument. Let $\RR$ be the ring formed by finite unions of basic sets of Hintikka chains; observe that $\RR$ consists of finite unions of disjoint basic sets of Hintikka chains, which correspond to finite unions of disjoint subtrees of the infinite rooted tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$ introduced in Subsection~\ref{subsec:Hintikka}. Let $X\in\RR$ be the union of disjoint basic sets corresponding to Hintikka formulas $\psi^1,\ldots,\psi^m$ and define $\mu_{\RR}(X)=\mu(\psi^1)+\cdots+\mu(\psi^m)$.
Clearly, the mapping $\mu_{\RR}$ is additive. Since every countable family of non-empty pairwise disjoint sets from $\RR$ whose union belongs to $\RR$ must be finite (this is implied by K\"onig's Lemma applied to the infinite rooted tree $\TT_{\mathrm{FO}_1^{\mathrm{local}}}$), the mapping $\mu_{\RR}$ is a premeasure. By Carath\'eodory's Extension Theorem, the premeasure $\mu_{\RR}$ extends to a measure on the $\sigma$-algebra formed by Borel sets of Hintikka chains. We will also use $\mu$ to denote this measure, which is uniquely determined by the Stone measure $\mu$; we believe that no confusion can arise by doing so. The following lemma relates first order convergent sequences of $2$-edge-colored rooted marked $k$-trees and their modeling limits. The proof is analogous to that of~\cite[Lemma 7]{GajHKKKOOT17}, and it can also be derived from~\cite[Theorem 36]{NesO16}. \begin{lemma} \label{lm:modeling} Let $(G_n)_{n\in\NN}$ be a first order convergent null-partitioned sequence of $2$-edge-colored rooted marked $k$-trees with increasing orders and let $\nu$ and $\mu$ be its discrete Stone measure and Stone measure, respectively. If $M$ is a $2$-edge-colored rooted $k$-tree modeling such that \begin{itemize} \item the discrete Stone measure of $M$ is $\nu$, \item the Stone measure of $M$ is $\mu$, \item for every $i\in\NN$, there exists exactly one vertex $c_i$ of $M$ that satisfies the unary relation $U_i$, \item the $r$-neighborhood in $M\setminus\{c_i,i\in\NN\}$ of each vertex in $M\setminus\{c_i,i\in\NN\}$ has zero measure for every $r\in\NN$, \end{itemize} then $M$ is a modeling limit of $(G_n)_{n\in\NN}$. \end{lemma} \section{Composition step} \label{sec:compose} The following theorem is the core result of this paper. \begin{theorem} \label{thm:compose} Fix a positive integer $k$. Let $(G_n)_{n\in\NN}$ be a first order convergent null-partitioned sequence of $2$-edge-colored rooted marked $k$-trees and let $\nu$ and $\mu$ be its discrete Stone measure and Stone measure, respectively. Then there exists a modeling limit $M$ such that \begin{itemize} \item the discrete Stone measure of $M$ is $\nu$, \item the Stone measure of $M$ is $\mu$, \item for every $i\in\NN$, there exists exactly one vertex $c_i$ of $M$ that satisfies the unary relation $U_i$, \item the $r$-neighborhood of each vertex in $M\setminus\{c_i,i\in\NN\}$ has zero measure for every $r\in\NN$, \item the modeling $M$ satisfies the strong finitary mass transport principle, and \item the modeling $M$ also satisfies the strong finitary mass transport principle when all edges of one of the two colors are removed. \end{itemize} \end{theorem} \begin{proof} We start by extending the mapping $\nu$ to Hintikka chains: if $\Psi=(\psi_i)_{i\in\NN}$ is a Hintikka chain, then \[\nu(\Psi)=\lim_{i\to\infty}\nu(\psi_i)\,\mbox{.}\] Note that the support of the measure $\mu$ is the set containing all Hintikka chains $(\psi_i)_{i\in\NN}$ such that the measure of the basic set of Hintikka chains corresponding to $\psi_i$ is positive for every $i\in\NN$. We show that the support of the measure $\mu$ is a subset of $\nu^{-1}(\infty)$ (in general, the support of $\mu$ need not be equal to $\nu^{-1}(\infty)$). Assume to the contrary that $\nu(\Psi)=k'\in\NN$ for some $\Psi$ in the support of $\mu$. Then there exists a $d$-Hintikka formula $\psi_d$ such that $\nu(\psi_d)=k'$. Hence, there exists $n_0$ such that every $G_n$, $n\ge n_0$, contains exactly $k'$ vertices $u_1,\dots,u_{k'}$ such that $G_n\models\psi_d(u_i)$ for $i\in[k']$.
Since the sequence $(G_n)_{n\in\NN}$ is null-partitioned (in particular, the orders of $(G_n)_{n\in\NN}$ tend to infinity), it follows that $\mu(\psi_d)=0$. Consequently, the measure of the basic set of Hintikka chains corresponding to $\psi_d$ is zero and $\Psi$ is not in the support of $\mu$. The proof is next split into several steps, each of which begins with a brief title. \textbf{Vertex set.} The vertex set of the modeling $M$ that we construct consists of two sets: $V_f$ contains all pairs $(\Psi,i)$ where $\Psi$ is a Hintikka chain such that $\nu(\Psi)\in\NN$ and $i\in [\nu(\Psi)]$, and $V_{\infty}=\nu^{-1}(\infty)\times [0,1)^{2k+1}$. Note that the set $V_f$ is countable. Also observe that for every $i\in\NN$, the set $V_f$ contains the unique vertex satisfying $U_i$. A Hintikka chain encodes many properties of a vertex. In particular, it uniquely determines the Hintikka chain of the $i$-parent of the vertex for every $i\in [k]$, which we will refer to as the \emph{$i$-parent Hintikka chain}. The Hintikka chain also determines whether the vertex is one of the vertices satisfying $U_i$, $i\in\NN$, and whether it is contained in the initial tournament (initial stands here for initial in the recursive definition of a rooted $k$-tree). Furthermore, the Hintikka chain also determines the number of children that satisfy a particular local first order formula and, more generally, that have a certain Hintikka chain. Finally, the Hintikka chain describes the existence or the non-existence of finite paths consisting only of $A$-edges for arbitrary $A\subseteq [k]$ between the parents of the vertex. An $i$-edge $e$ contained in such a path is said to be \emph{$m$-finitary} for $m\in\NN$ if the head of $e$ is the head of exactly $m$ $i$-edges whose tails have the same Hintikka chain as the tail of $e$; if $e$ is $m$-finitary for some $m$, we say that $e$ is \emph{finitary}; otherwise, i.e., if $e$ is not $m$-finitary for any $m$, then $e$ is \emph{infinitary}. Note that whether $e$ is $m$-finitary for some $m\in\NN$ or infinitary is implied by the Hintikka chain. In a slightly informal way, we will be speaking about all these properties as the properties of the Hintikka chain. \textbf{Unary relations.} We now continue the construction of the modeling $M$ by defining the unary relations $U_i$, $i\in\NN$. For each $i\in\NN$, there is exactly one Hintikka chain $\Psi$ with $\nu(\Psi)>0$ such that a vertex with the Hintikka chain $\Psi$ satisfies $U_i$. It holds that $\nu(\Psi)=1$ for such a Hintikka chain $\Psi$, and the vertex $(\Psi,1)\in V_f$ will be the unique vertex of $M$ satisfying the unary relation $U_i$. In what follows, we write $c_i$ for this vertex of $M$. \textbf{Edges.} We next define the edges of the modeling limit $M$ by describing the edges leading from each vertex of $M$ to its parents. To do so, we fix for every $d\in [2k]$ a continuous measure preserving bijection $\zeta^d:[0,1)\to [0,1)^d$; for $x\in [0,1)$ and $i\in [d]$, we write $\zeta^d_i(x)$ for the $i$-th coordinate of $\zeta^d(x)$. We first consider the vertices contained in $V_f$. Let $(\Psi,m)\in V_f$ (note that such a vertex may be one of the vertices $c_i$, $i\in\NN$). For every $i\in [k]$ such that $\Psi$ implies the existence of an $i$-parent, we proceed as follows. Let $\Psi'$ be the $i$-parent Hintikka chain of $\Psi$; observe that $\Psi'\not=\Psi$ as a rooted $k$-tree is acyclic.
The definition of the discrete Stone measure and the first order convergence of $(G_n)_{n\in\NN}$ imply that for every $d\in\NN$, there exists $n_0\in\NN$ such that every graph $G_n$, $n\ge n_0$, has exactly $\nu(\Psi)$ vertices satisfying the $d$-th formula of the Hintikka chain $\Psi$. Hence, there exist $n'_0\in\NN$, $d_0\in\NN$ and $D\in\NN$ such that every graph $G_n$, $n\ge n'_0$, has exactly $\nu(\Psi)$ vertices satisfying the $d_0$-th formula of the Hintikka chain $\Psi$ and every $i$-parent of such a vertex is the $i$-parent of exactly $D$ vertices satisfying the $d_0$-th formula of the Hintikka chain $\Psi$. It follows that $\nu(\Psi)=D\nu(\Psi')$; in particular, $\nu(\Psi')$ is a non-zero integer which divides $\nu(\Psi)$. In the modeling limit $M$, we set the $i$-parent of the vertex $(\Psi,m)$ to be the vertex $(\Psi',m')$ where $m'=\lceil m\nu(\Psi')/\nu(\Psi) \rceil$, and the color of the edge from $(\Psi,m)$ to $(\Psi',m')$ to be the one determined by the Hintikka chain $\Psi$. This determines all edges between the vertices of $V_f$, which include all edges leading from the vertices of $V_f$. Next, consider a vertex $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)\in V_{\infty}$ and $i\in [k]$. Since there are at most $k$ vertices without an $i$-parent (these are the vertices of the initial tournament in the definition of a rooted $k$-tree) and $\nu(\Psi)=\infty$, the Hintikka chain $\Psi$ determines that the vertex described by $\Psi$ has an $i'$-parent for every $i'\in [k]$, in particular, it has an $i$-parent. Let $\Psi'$ be the Hintikka chain of the $i$-parent. We consider all finite paths from $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to its $i$-parent whose existence is implied by $\Psi$; we will say that a path (whose existence is implied by $\Psi$) from the tail to the head of an edge $e$ is a \emph{detour} of $e$ if \begin{itemize} \item $e$ is a finitary $m$-edge and the path is formed by $[m-1]$-edges only, or \item $e$ is an infinitary $m$-edge and all infinitary edges of the path are $[m-1]$-edges. \end{itemize} An edge with no detour is said to be \emph{important}. Observe that if $e$ is a finitary edge, then any detour of $e$ is formed by finitary edges only (if a detour contained an infinitary edge, then there would be infinitely many paths from vertices with the same Hintikka chain as the tail of $e$ to the head of $e$, and since such paths start at different vertices, $e$ would be infinitary). It follows that $\Psi$ implies the existence of a path from the vertex $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to its $i$-parent that is formed by important edges only. Indeed, replacing an $i'$-edge $e$ that is not important with a detour of $e$ either decreases the number of infinitary $\{i',\ldots,k\}$-edges, or preserves the number of infinitary $\{i',\ldots,k\}$-edges while decreasing the number of all $\{i',\ldots,k\}$-edges; hence, the process of replacing edges that are not important with corresponding detours as long as possible always terminates. Fix a path $P$ from $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to its $i$-parent formed by important edges only; note that $P$ is a type of a path rather than an actual path. Let $\ell$ be the length of the path $P$. Set $\Psi^0$ to be the Hintikka chain $\Psi$ and set $\Psi^j$ for $j=1,\ldots,\ell$ to be the Hintikka chain of the head of the $j$-th edge of the path $P$. Let $\ell_\infty$ be the largest $j$ such that $\nu(\Psi^j)=\infty$ (possibly $\ell_\infty=0$ or $\ell_\infty=\ell$).
We next identify the vertices $(\Psi^j,n^j_0,h^j_1,n^j_1,\ldots,h^j_k,n^j_k)$ for $j=1,\ldots,\ell_{\infty}$, which are actual vertices in the modeling limit $M$; the vertex $(\Psi^j,n^j_0,h^j_1,n^j_1,\ldots,h^j_k,n^j_k)$ should be the head of the $j$-th important edge on the actual path from the vertex $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to its $i$-parent in the modeling limit $M$ that corresponds to $P$. So, we define the $(2k+1)$-tuples $(n^j_0,h^j_1,n^j_1,\ldots,h^j_k,n^j_k)$ for $j=0,\ldots,\ell_{\infty}$ recursively as follows. We first set $(n^0_0,h^0_1,n^0_1,\ldots,h^0_k,n^0_k)$ to $(n_0,h_1,n_1,\ldots,h_k,n_k)$. If the $j$-th edge of the path $P$ is an infinitary $i'$-edge, we set \begin{align*} (n^j_0,h^j_1,n^j_1,\ldots,h^j_k,n^j_k)=(&\zeta_1^{2i'}(n^{j-1}_{i'}),\ldots,\zeta^{2i'}_{2i'-1}(n^{j-1}_{i'}),\\ &h^{j-1}_{i'}+\sqrt{2}\mod 1,\zeta_{2i'}^{2i'}(n^{j-1}_{i'}),\\ &h^{j-1}_{i'+1},n^{j-1}_{i'+1},\ldots,h^{j-1}_k,n^{j-1}_k). \end{align*} If the $j$-th edge of the path $P$ is an $m$-finitary $i'$-edge and the edge from the head of the $j$-th edge to its $i''$-parent for every $i''=i'+1,\ldots,k$ is infinitary or has a detour, we set \begin{align*} (n^j_0,h^j_1,n^j_1,\ldots,h^j_k,n^j_k)=(&m\cdot n^{j-1}_0\mod 1,h^{j-1}_1,n^{j-1}_1,\ldots,h^{j-1}_{i'-1},n^{j-1}_{i'-1},\\ &h^{j-1}_{i'}+\sqrt{2}\mod 1,n^{j-1}_{i'},\\ &h^{j-1}_{i'+1},n^{j-1}_{i'+1},\ldots,h^{j-1}_k,n^{j-1}_k). \end{align*} Otherwise, i.e., when the $j$-th edge of the path $P$ is an $m$-finitary $i'$-edge and there exists $i''>i'$ such that the edge from the head of the $j$-th edge to its $i''$-parent is finitary and has no detour, we set \begin{align*} (n^j_0,h^j_1,n^j_1,\ldots,h^j_k,n^j_k)=(m\cdot n^{j-1}_0\mod 1,h^{j-1}_1,n^{j-1}_1,\ldots,h^{j-1}_k,n^{j-1}_k). \end{align*} If $\ell=\ell_\infty$, we set the $i$-parent of $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to be $(\Psi',n^\ell_0,h^\ell_1,n^\ell_1,\ldots,h^\ell_k,n^\ell_k)$ and the color of the edge from $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to the $i$-parent to be the one determined by $\Psi$. If $\ell>\ell_\infty$, we set the $i$-parent of $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to be $\left(\Psi',1+\lfloor\nu(\Psi')\cdot n^{\ell_\infty}_k\rfloor\right)$; the color of the edge from $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to the $i$-parent is again determined by $\Psi$. Note that in the latter case we can think of the $j$-th vertex for $j>\ell_\infty$ as being $\left(\Psi^j,1+\lfloor\nu(\Psi^j)\cdot n^{\ell_\infty}_k\rfloor\right)$, which is consistent with the definition of edges among the vertices of $V_f$. This concludes the definition of the edge set of the modeling limit $M$. \textbf{Well-defined.} We next verify that the definition of the $i$-parent of a vertex does not depend on the choice of the path $P$. We prove that if $P$ and $P'$ are two paths from a vertex $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ to its $i$-parent that are formed by important edges only, then they yield the same definition of the $i$-parent; the proof proceeds by induction on the sum of the lengths of $P$ and $P'$. The base of the induction is formed by the case when the sum of the lengths of $P$ and $P'$ is equal to two, i.e., the paths $P$ and $P'$ are actually the same edge, and so the claim holds. We now present the induction step. Fix a vertex $v=(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ and two paths $P$ and $P'$ from the vertex $v$ to its $i$-parent, which we denote by $u$, that are both formed by important edges only.
Let $w$ and $w'$ be the heads of the first edges of $P$ and $P'$, respectively, and let $j$ and $j'$ be such that the first edges of $P$ and $P'$ are a $j$-edge and a $j'$-edge, respectively. See Figure~\ref{fig:welldefined} for the notation. By symmetry, we may assume that $j\le j'$. Note that $w$ is the $j$-parent of $v$, $w'$ is its $j'$-parent, and $u$ is the $i'$-parent of $w$ for some $i'\leq i$; let $\Psi_w$ and $\Psi_{w'}$ be the Hintikka chains of $w$ and $w'$, respectively. We first deal with the case that $w=w'$. Consider the paths $Q$ and $Q'$ obtained from $P$ and $P'$ by removing their first edges, respectively. If $\nu(\Psi_w)=\infty$, the induction assumption implies that the paths $Q$ and $Q'$ yield the same definition of the $i'$-parent of $w$. If $\nu(\Psi_w)\not=\infty$, the definition of edges between the vertices of $V_f$ implies that the paths $Q$ and $Q'$ yield the same definition of the $i'$-parent of $w$. In either case, the paths $P$ and $P'$ yield the same definition of the $i$-parent of $v$. \begin{figure} \begin{center} \epsfbox{folim-tw-1.mps} \end{center} \caption{Notation used in the proof of Theorem~\ref{thm:compose} when establishing that the edges of $M$ are well-defined.} \label{fig:welldefined} \end{figure} We next deal with the case that $w\not=w'$. By Proposition~\ref{prop:parent}, $w'$ is the $j''$-parent of $w$ for some $j''\le j'$. Since the first edge of $P'$ is important, it holds that $j'=j''$ (otherwise, $vww'$ is a detour of the edge $vw'$) and the edge $ww'$ is important (otherwise, the edge $vw$ together with a detour of the edge $ww'$ is a detour of the edge $vw'$). Let $Q$ be the path $P$ with the first edge removed and $Q'$ be the path $P'$ with the first edge replaced with $ww'$. If $\nu(\Psi_w)\not=\infty$, then $\nu(\Psi_{w'})\not=\infty$ and the definition of the edges from $V_{\infty}$ to $V_f$ yields that $w=(\Psi_w,1+\lfloor\nu(\Psi_w)n_k\rfloor)$ and $w'=(\Psi_{w'},1+\lfloor\nu(\Psi_{w'})n_k\rfloor)$. The definition of edges between the vertices of $V_f$ implies that the paths $Q$ and $Q'$ yield the same definition of the $i'$-parent of $w$. If $\nu(\Psi_w)=\infty$, the paths $Q$ and $Q'$ yield the same definition of the $i'$-parent of $w$ by induction as they are both formed by important edges only. To finish the proof of the induction step, we need to establish that the paths $vw'$ and $vww'$ yield the same definition of the $j'$-parent of $v$. We first consider the case that $\nu(\Psi_{w'})\not=\infty$. It follows that both edges $vw'$ and $ww'$ are important infinitary $j'$-edges, which in turn implies that the last coordinates of $v$ and $w$ are the same. Hence, $w'$ is $(\Psi_{w'},1+\lfloor\nu(\Psi_{w'})n_k\rfloor)$ when defined using either of the paths $vw'$ and $vww'$. In the rest of the argument, we assume that both $\nu(\Psi_w)$ and $\nu(\Psi_{w'})$ are equal to $\infty$. If the edge $vw'$ is infinitary, then the edge $ww'$ is also infinitary (otherwise, $vww'$ is a detour of $vw'$). As both edges $vw'$ and $ww'$ are important infinitary $j'$-edges and $vw$ is a $j$-edge for $j<j'$, the vertex $w'$ is the same when defined using either of the paths $vw'$ and $vww'$. If the edge $vw'$ is $m$-finitary, then the edges $vw$ and $ww'$ are both finitary; moreover, if $vw$ is $m_1$-finitary and $ww'$ is $m_2$-finitary, then $m=m_1m_2$. Hence, the vertex $w$ is $(\Psi_w,n_0\cdot m_1\mod 1,h_1,n_1,\ldots,h_k,n_k)$ as the edge $vw$ is a $j$-edge and the edge $ww'$ is an important $j'$-edge for $j'>j$.
If there exists an edge from $w'$ to its $\widehat{j}$-parent for some $\widehat{j}>j'$ that is finitary and has no detour, then the vertex $w'$ is $(\Psi_{w'},n_0\cdot m\mod 1,h_1,n_1,\ldots,h_k,n_k)$. Otherwise, the vertex $w'$ is $(\Psi_{w'},n_0\cdot m\mod 1,h_1,\ldots,n_{j'-1},h_{j'}+\sqrt{2}\mod 1,n_{j'},\ldots,n_k)$. Hence, the vertex $w'$ is the same when defined using either of the paths $vw'$ and $vww'$ in all cases. Since the edges of $M$ are well-defined, it follows that the number of $i$-children with a given Hintikka chain of every vertex $v$ of $M$ is the number determined by the Hintikka chain of $v$. \textbf{Edge consistency.} We next verify the following property, which we refer to as \emph{edge consistency}: for every vertex $w$, if $w'$ and $w''$ are the $i'$-parent and the $i''$-parent of $w$, $i'<i''$, respectively, and $w''$ is the $i$-parent of $w'$ according to the Hintikka chain of $w$, then $w''$ is the $i$-parent of $w'$ in $M$. Note that this property is not automatically satisfied as the definition of the edges from $w'$ to its parents is independent of the definition of the edges from $w$ to its parents. If $w\in V_f$, then the edge consistency straightforwardly follows from the definition of edges among the vertices of $V_f$. If $w\in V_{\infty}$, consider a path $P'$ from $w$ to $w'$ and a path $P''$ from $w'$ to $w''$ formed by important edges only. Note that the concatenation of the paths $P'$ and $P''$ can be used to define the $i''$-parent of $w$. If $w'\in V_{\infty}$, then the path $P'$ can be used to define the $i'$-parent $w'$ of $w$ and the path $P''$ can be used to define the $i$-parent $w''$ of $w'$, and so the edge consistency holds. If $w'\in V_f$, then the path $P'$ can be used to define the $i'$-parent $w'$ of $w$ and the definition of edges among the vertices of $V_f$ implies that $w''$ is the $i$-parent of $w'$. Hence, the edge consistency holds in this case, too. \textbf{Acyclicity.} We next argue that the modeling $M$ is acyclic, i.e., it does not contain a finite directed cycle. This is straightforward to verify for vertices in $V_f$. The Hintikka chain of any vertex reachable (following the orientation of the edges) from a vertex $(\Psi,m)\in V_f$ can never be $\Psi$; otherwise, the Hintikka chain $\Psi$ would imply the existence of a reachable vertex whose Hintikka chain is also $\Psi$, which would imply the existence of a vertex reachable from that vertex whose Hintikka chain is also $\Psi$, etc., and so the discrete Stone measure of $\Psi$ would have to be infinite. We conclude that if $M$ has a directed cycle, then it is composed of vertices of $V_{\infty}$ only. Observe that if two vertices $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ and $(\Psi',n'_0,h'_1,n'_1,\ldots,h'_k,n'_k)$ are joined by an edge, then either $h_j=h'_j$ and $n_j=n'_j$ for all $j\in [k]$, or there exists $j\in [k]$ such that $h'_j=h_j+\sqrt{2}\mod 1$ and $h'_{j'}=h_{j'}$ for $j'=j+1,\ldots,k$. Hence, if $M$ has a cycle formed by vertices of $V_{\infty}$, then the cycle is formed by vertices $(\Psi^1,n^1_0,h_1,n_1,\ldots,h_k,n_k),\ldots,(\Psi^\ell,n^\ell_0,h_1,n_1,\ldots,h_k,n_k)$, i.e., the last $2k$ coordinates of all the vertices of the cycle are the same. However, this is only possible if every edge $e$ of the cycle satisfies the following: if $e$ is an $i$-edge, the head of $e$ has an $i'$-parent for some $i'>i$ such that the edge from the head of $e$ to the $i'$-parent is finitary and important.
Let $i_0$ be the maximum of such $i'$ taken over all edges of the cycle. Consider a vertex $v$ of the cycle such that the edge to its $i_0$-parent is finitary and important, and let $e$ be the edge of the cycle leading from $v$. By the choice of $i_0$, the edge $e$ is an $i$-edge for some $i<i_0$ (otherwise, its head would have an $i'$-parent for $i'>i_0$ such that the edge to the $i'$-parent is finitary and important). Since the edge from $v$ to its $i_0$-parent is important, the $i_0$-parent of $v$ is also the $i_0$-parent of the head of $e$: it is the $i'$-parent for some $i'\le i_0$ by Proposition~\ref{prop:parent}, and $i'=i_0$ since otherwise there would be a detour for the edge from $v$ to its $i_0$-parent. We conclude that there exists a vertex $w$ such that $w$ is the $i_0$-parent of every vertex of the cycle and the edge from each vertex of the cycle to $w$ is finitary (and important). Let $\Psi$ be the Hintikka chain of any vertex of the cycle. The construction of the modeling $M$ implies that $w$ has an $i_0$-child with the Hintikka chain $\Psi$ and there is a finite path to another $i_0$-child of $w$ with the Hintikka chain $\Psi$; it follows that $w$ has infinitely many $i_0$-children with the Hintikka chain $\Psi$, and so the edges from such $i_0$-children to $w$ cannot be finitary. We conclude that $M$ has no directed cycle composed of vertices of $V_{\infty}$ only. \textbf{Measure and its properties.} We next define a probability measure on $V_f\cup V_{\infty}$, which is denoted by $\mu_M$, as follows. We start by defining a topology on $V_f\cup V_{\infty}$: a subset $X\subseteq V_f\cup V_{\infty}$ is open if the set $X\cap V_{\infty}$ is open in the product topology of the product of the space of Hintikka chains and $[0,1)^{2k+1}$. In particular, every subset of the countable set $V_f$ is open. This also determines the $\sigma$-algebra on $M$, which is formed by all Borel sets. Finally, the measure of a set $X\subseteq V_f\cup V_{\infty}$ is the measure of the set $X\cap V_{\infty}$ given by the product measure determined by the measure $\mu$ and the uniform (Borel) measure on $[0,1)^{2k+1}$. We next verify that $M$ is a modeling limit, i.e., all first order definable subsets of $M^\ell$ for every $\ell\in\NN$ are measurable, and that it has the properties given in the statement of the theorem. The edge consistency of $M$ implies that, for every Hintikka chain $\Psi=(\psi_d)_{d\in\NN}$, the vertices $u$ of $M$ such that the $d$-Hintikka type of $u$ is $\psi_d$ for all $d\in\NN$ are exactly the vertices whose first coordinate is $\Psi$, i.e., the vertices $(\Psi,i)\in V_f$ if $\nu(\Psi)\in\NN$ and the vertices $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)\in V_{\infty}$ if $\nu(\Psi)=\infty$. If $\psi$ is a local first order formula with a single free variable, then the set of the Hintikka chains $\Psi$ of vertices $v$ of $M$ such that $M\models\psi(v)$ is a finite union of basic sets of Hintikka chains. This implies that the subset of the vertices $v$ of $M$ such that $M\models\psi(v)$ is open and so measurable. We also obtain that the discrete Stone measure and the Stone measure of the modeling $M$ are $\nu$ and $\mu$, respectively. We now argue, using the way that $M$ was constructed, that all first order definable subsets of $M^\ell$, $\ell\in\NN$, are measurable in the product measure and that the modeling $M$ satisfies the strong finitary mass transport principle. This argument is analogous to the corresponding argument in the proof of~\cite[Lemma 40]{NesO16} but we include it for completeness.
For $i\in [k]$, let $g_i$ be the (partial) mapping from $V_f\cup V_{\infty}$ to $V_f\cup V_{\infty}$ that maps $x\in V_f\cup V_{\infty}$ to the $i$-parent of $x$, and let $g^0_i$ be the (partial) mapping from $V_f\cup V_{\infty}$ to $V_f\cup V_{\infty}$ that maps $x\in V_f\cup V_{\infty}$ to the $i$-parent of $x$ unless the $i$-parent of $x$ is contained in $V_{\infty}$ and the edge joining $x$ to the $i$-parent is not important. In other words, $g^0_i$ is defined for those $x\in V_f\cup V_{\infty}$ such that either the $i$-parent of $x$ is contained in $V_f$, or the $i$-parent of $x$ is contained in $V_{\infty}$ and the edge joining $x$ to the $i$-parent is important. Let $g'_i$ for $i\in [k]$ be the mapping from Hintikka chains to Hintikka chains such that $g'_i(\Psi)$ is the Hintikka chain of the $i$-parent of a vertex described by $\Psi$. Observe that the mapping $g'_i$ is continuous on the space of Hintikka chains with the topology defined in Subsection~\ref{subsec:Hintikka}. Since the mappings $f(x):=x+\sqrt{2}\mod 1$, $f_i(x):=ix\mod 1$, $i\in\NN$, $g'_i$, $i\in [k]$, and $\zeta^d$, $d\in [2k]$, are continuous (in the case of $f$ and $f_i$, when viewed as functions from $\mathbb{R}/\ZZ$ to $\mathbb{R}/\ZZ$), the definition of the edges of $M$ yields that each of the mappings $g^0_i$, $i\in [k]$, is continuous. Since $\mu$ is the Stone measure of a first order convergent sequence of rooted $2$-edge-colored $k$-trees, the mapping $g'_i$ also satisfies the following: if $Y$ is a measurable subset of Hintikka chains such that each element of $Y$ has exactly $d$ $i$-children, then $\mu((g'_i)^{-1}(Y))=d\mu(Y)$. Using that the mappings $f$, $f_i$ and $\zeta^d$ are measure preserving, we obtain the following: if $X$ is a measurable subset of the domain of $g^0_i$ and $Y$ is a measurable subset of the codomain of $g^0_i$ such that each vertex of $Y$ has exactly $d$ $i$-children contained in $X$, then $\mu_M((g^0_i)^{-1}(Y)\cap X)=d\mu_M(Y)$. We next show that each of the mappings $g_i$, $i\in [k]$, is also continuous and has a measure preserving-like property similar to that of $g^0_i$ given at the end of the previous paragraph. Fix $i\in [k]$, and let $V_i$ be the domain of $g^0_i$. We next consider directed paths $P$ comprised of two or more edges; similarly as when we defined the edges of $M$, we think of a path $P$ as a type of a path rather than an actual path in a modeling; note that $P$ bears the information which of its edges are $1$-edges, $2$-edges, etc. Let $\preceq$ be the lexicographic order on paths $P$; in particular, every path is preceded by all paths of shorter lengths. Let $V_{i,P}$ be the set containing those $x\in (V_f\cup V_{\infty})\setminus V_i$ such that $M$ contains a path from $x$ to its $i$-parent formed by important edges only that matches the types of edges in $P$, but $M$ contains no path from $x$ to its $i$-parent formed by important edges that matches the types of edges in any path $P'\prec P$. Since every subset of $M$ that can be defined by a local first order formula with a single free variable is measurable, the set $V_{i,P}$ is measurable for every $i\in [k]$ and every $P$. Observe that the mapping $g_i$ restricted to $V_{i,P}$ is a composition of the mappings $g^0_j$, $j\in [k]$; in particular, the restriction of $g_i$ to $V_{i,P}$ is continuous and has the property given at the end of the previous paragraph.
Since the sets $V_i$ and $V_{i,P}$ partition the domain of $g_i$, the mapping $g_i$ is continuous and satisfies the following: if $X$ and $Y$ are measurable subsets of $V_f\cup V_{\infty}$ such that each vertex of $Y$ has exactly $d$ $i$-children contained in $X$, then $\mu_M(g_i^{-1}(Y)\cap X)=d\mu_M(Y)$. We refer to this property of the mapping $g_i$ as being \emph{measure semipreserving}. We now establish that every first order definable subset of $M^\ell$ is measurable. We have already argued that every subset of $M$ defined by a local first order formula $\psi$ is open. Since the mapping $g_i$ is continuous, the subset of $M^2$ given by the $i$-th child-parent relation is measurable for every $i\in [k]$. Theorem~\ref{thm:hanf} yields that whether an $\ell$-tuple of vertices of $M$ satisfies a particular formula with $\ell$ free variables depends on the types of the vertices and the configuration formed by them (the subgraph induced by their sufficiently large neighborhoods), which can be described by a finite encoding using the child-parent relations in a measurable way. It follows that every first order definable subset of $M^{\ell}$ is measurable for every $\ell\in\NN$, i.e., $M$ is a modeling limit. \textbf{Strong finitary mass transport principle.} We now verify that the modeling $M$ satisfies the strong finitary mass transport principle, i.e., we show that any two measurable subsets $A$ and $B$ of $M$ such that each vertex of $A$ has at least $a$ neighbors in $B$ and each vertex of $B$ has at most $b$ neighbors in $A$ satisfy $a\mu_M(A)\le b\mu_M(B)$. Fix such subsets $A$ and $B$ and the corresponding integers $a$ and $b$. Since the measure of $V_f$ is zero, we can assume that $A\subseteq V_{\infty}$. Similarly, since the measure of the set of vertices that are joined to some of their children by an infinitary edge is zero, we can also assume that all edges directed from $B$ to $A$ are finitary. For $i\in [k]$, let $V'_i$ be the subset of $V_i$ formed by those $x\in V_i$ such that the edge from $x$ to the $i$-parent of $x$ is infinitary. In particular, $V'_i$ contains all $x\in V_i$ such that the $i$-parent of $x$ is contained in $V_f$. Similarly, we define $V'_{i,P}$ for a path $P$ with two or more edges to be the subset of $V_{i,P}$ formed by those $x\in V_{i,P}$ such that the path from $x$ to its $i$-parent described by $P$ contains an infinitary edge. Note that $g^0_i$ has the following property for every integer $m\in\NN$: if sets $X\subseteq V'_i$ and $Y\subseteq V_f\cup V_{\infty}$ satisfy that each vertex $y\in Y$ has at most $m$ $i$-children in $X$, then the set $X\cap (g^0_i)^{-1}(Y)$ has measure zero. Since the restriction of the mapping $g_i$ to $V_{i,P}$ is a composition of the mappings $g^0_j$, $j\in [k]$, the following holds for every subset $X\subseteq V'_{i,P}$ and $Y\subseteq V_f\cup V_{\infty}$: if there exists an integer $m\in\NN$ such that each vertex of $Y$ has at most $m$ $i$-children in $X$, then the set $X\cap g_i^{-1}(Y)$ has zero measure. Finally, since the sets $V_i$ and $V_{i,P}$ partition the domain of $g_i$ and the edge from $x$ to its $i$-parent is infinitary if and only if $x$ is contained in $V'_i$ or in $V'_{i,P}$ for some $P$, we conclude that the following holds for every integer $m\in\NN$: if $X$ is a subset of the domain of $g_i$ such that the edge from each vertex of $X$ to its $i$-parent is infinitary and $Y$ is a subset of $V_f\cup V_{\infty}$ such that each vertex of $Y$ has at most $m$ $i$-children in $X$, then the set $X\cap g_i^{-1}(Y)$ has zero measure.
For every $i\in [k]$, let $A_i$ be the set of all $x\in A$ such that the edge from $x$ to its $i$-parent is infinitary. Since each vertex of $B$ has at most $b$ $i$-children in $A$, the set $A_i\cap g_i^{-1}(B)$ has zero measure. Hence, we can assume that $A_i=\emptyset$ for every $i\in [k]$ (as the elements of each set $A_i$ can be removed from $A$ without altering its measure). This yields that all edges of $M$ directed from the set $A$ to the set $B$ are finitary. Since all edges directed from the set $A$ to the set $B$ are finitary and $A\subseteq V_{\infty}$, no edge between $A$ and $B$ is incident with a vertex of $B\cap V_f$. Hence, we can assume that $B\subseteq V_{\infty}$ in what follows. Recall that we have already argued that we can assume that all edges of $M$ directed from the set $B$ to the set $A$ are finitary. Since both sets $A$ and $B$ are subsets of $V_{\infty}$, all edges between $A$ and $B$ (in either direction) are finitary and each of the mappings $g_i$, $i\in [k]$, is measure semipreserving, it follows that $a\mu_M(A)\le b\mu_M(B)$. Hence, the modeling $M$ satisfies the strong finitary mass transport principle. We remark that an analogous argument yields that the strong finitary mass transport principle is preserved if all edges of $M$ that have one of the two colors are removed, as the restriction of each mapping $g_i$ to the vertices joined to the $i$-parent by an edge of the non-removed color is also measure semipreserving. \textbf{Residuality.} It now remains to verify the fourth property from the statement of the theorem. Suppose that there exist a vertex $(\Psi,i)\in V_f$ and $r\in\NN$ such that the $r$-neighborhood of $(\Psi,i)$ in $M\setminus\{c_i,i\in\NN\}$ has positive measure, say $\varepsilon>0$. Let $\Psi=(\psi_j)_{j\in\NN}$. Since $\nu(\Psi)$ is finite, there exist $\psi_j$ and $m\in\NN$ such that $\nu(\psi_j)=m$. By the definition of a null-partitioned sequence of graphs, there exist integers $n_0$ and $k_0$ such that the $r$-neighborhood of each vertex in $G_n\setminus\{c_1,\ldots,c_{k_0}\}$, $n\ge n_0$, contains at most $\varepsilon |G_n|/(2m)$ vertices. In particular, this holds for the $m$ vertices satisfying $\psi_j$. This implies that the Stone measure of the set of vertices joined to one of the $m$ vertices satisfying $\psi_j$ by a (not necessarily directed) path of length at most $r$ that avoids $c_1,\ldots,c_{k_0}$ does not exceed $\varepsilon/2$ (note that this property is first order expressible and therefore captured by the Stone measure). Hence, the $r$-neighborhood of $(\Psi,i)$ in $M\setminus\{c_i,i\in\NN\}$ can have measure at most $\varepsilon/2$, contrary to our assumption that its measure is $\varepsilon$. Next, suppose that there exist a vertex $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)\in V_\infty$ and an integer $r$ such that the $r$-neighborhood of $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ in $M\setminus\{c_i,i\in\NN\}$ has positive measure. The $r$-neighborhood of $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ contains only vertices of $V_\infty$ (otherwise, $V_f$ would contain a vertex whose $2r$-neighborhood has positive measure). However, the definition of the modeling $M$ implies that the penultimate coordinates of all the vertices in the $r$-neighborhood of $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ are $h_k+\sqrt{2}\,i\mod 1$ for $i=-r,\ldots,r$ (here, we use that the $r$-neighborhood does not contain a vertex from $V_f$).
Since the set of all vertices of $V_{\infty}$ with the penultimate coordinate equal to $h_k+\sqrt{2}\,i\mod 1$ for some $i=-r,\ldots,r$ has measure zero (because the set of possible values of this coordinate is finite and hence has measure zero in $[0,1)$), the $r$-neighborhood of $(\Psi,n_0,h_1,n_1,\ldots,h_k,n_k)$ in $M\setminus\{c_i,i\in\NN\}$ also has measure zero. \end{proof} \section{Graphs with bounded tree-width} \label{sec:main} Theorem~\ref{thm:compose} almost readily yields our main result. \setcounter{theorem}{\thestoremaintheorem} \begin{theorem} Let $k$ be a positive integer. Every first order convergent sequence of graphs with tree-width at most $k$ has a modeling limit satisfying the strong finitary mass transport principle. \end{theorem} \begin{proof} Let $(H_n)_{n\in\NN}$ be a first order convergent sequence of graphs with tree-width at most $k$. As every graph with tree-width at most $k$ is a subgraph of a rooted $k$-tree (by Proposition~\ref{prop:tw}), for every $n\in\NN$, there exists a rooted $k$-tree $H'_n$ such that $H_n$ is a spanning subgraph of $H'_n$. Color the edges of $H'_n$ with two colors representing whether the edge is also an edge of $H_n$ or not. Hence, $(H'_n)_{n\in\NN}$ is a sequence of $2$-edge-colored rooted $k$-trees. Let $(G_n)_{n\in\NN}$ be a subsequence of $(H'_n)_{n\in\NN}$ that is first order convergent. By Lemma~\ref{lm:unary}, there exists a first order convergent null-partitioned sequence $(G'_n)_{n\in\NN}$ obtained from a subsequence of $(G_n)_{n\in\NN}$ by interpreting the unary relational symbols $U_i$, $i\in\NN$. Lemma~\ref{lm:modeling} and Theorem~\ref{thm:compose} yield that the sequence $(G'_n)_{n\in\NN}$ has a modeling limit $M_G$ satisfying the properties given in Theorem~\ref{thm:compose}. Let $M_H$ be the modeling obtained from $M_G$ by removing all relations except the binary relation encoding the edge color that represents the edges of $H'_n$ that are also in $H_n$. The modeling $M_H$ is a modeling limit of the sequence $(H_n)_{n\in\NN}$ and it satisfies the strong finitary mass transport principle by the last point in the statement of Theorem~\ref{thm:compose}. \end{proof} \section*{Acknowledgement} The authors would like to thank both anonymous reviewers for their detailed and insightful comments, which have helped to significantly improve the presentation of the arguments given in the paper. \bibliographystyle{bibstyle}
\section{Introduction} \label{sec:Intro} In this paper we present an asymptotic formula for the eigenvalue counting function of the Schr\"odinger operator $-\Delta +V$ for unbounded potentials $V$ on several types of unbounded fractal spaces. Such an asymptotic formula is often attributed to Niels Bohr in the Euclidean setting. We identify a set of sufficient conditions for Bohr's formula to hold on locally self-similar metric measure spaces which admit a cellular decomposition, and then verify these conditions for fractafolds \cites{StrFractafold,StrTep} and fractal fields \cite{HamKumFields} based on nested fractals. In particular, we are able to partially answer a question of Fan, Khandker, and Strichartz \cite{StrichartzSHO} regarding the spectral asymptotics of the harmonic oscillator potential on the infinite blow-up of a Sierpinski gasket (abbreviated $SG$). All these results have analogues in the classical theory of 1D Sturm-Liouville operators (see \cite{ReedSimonVol4}). The deep analogy between nested fractals (the typical representative being $SG$) and the real line $\mathbb{R}_+^1 = [0,\infty)$ is related to the fact that all of them are \emph{finitely ramified}. (A set is said to be \emph{finitely ramified} if it can be divided into several disconnected subsets upon removing a finite number of points from the set. For $\mathbb{R}_+^1$ it suffices to remove one point; for $SG$, two points.) Let us recall several known results from the spectral theory of the 1D Schr{\"o}dinger operator \begin{equation} H\psi = -\psi'' + V(x)\psi, \qquad x\geq 0 \end{equation} with boundary condition at $x=0$ of either Dirichlet type, $\psi(0)=0$, or Neumann type, $\psi'(0)=0$. \begin{enumerate}[label=\Roman*.] \item \label{iBohr} Assume that $V(x) \to +\infty$ as $|x|\to +\infty$. Then, by a classical result of H. Weyl, the spectrum of $H$ in $L^2([0,\infty), dx)$ is discrete and, under some technical conditions, \begin{equation} N(\lambda,V) := \#\{\lambda_i(H)\leq \lambda\} \sim \frac{1}{\pi}\int_0^\infty \sqrt{(\lambda-V(x))_+} \,dx. \end{equation} This is known as N. Bohr's formula, see \cites{LevitanSargsjan,KS,HoltMolchanov}. For instance, for the harmonic oscillator $V(x)=x^2$ the right-hand side equals $\frac{1}{\pi}\int_0^{\sqrt{\lambda}}\sqrt{\lambda-x^2}\,dx = \frac{\lambda}{4}$, in agreement with the Dirichlet eigenvalues $\lambda_k = 4k+3$, $k\geq 0$. \item \label{iII} Assume that $V(x)$ is compactly supported, or (a weaker assumption) vanishes fast enough (see below). Put $V(x) = V_+(x) - V_-(x)$, where $V_+=\max(0,V)$ and $V_-=\max(0,-V)$, and \begin{equation}N_-(V):=\#\{\lambda_i \leq 0\} \leq N_-(-V_-(\cdot)).\end{equation} As a result, the estimation of $N_-(V)$ can be reduced to the case of negative potentials (potential wells). We use the notation $N_-(V)$ assuming here that $V(x) = -V_-(x)\leq 0$. The following estimates of $N_-(V)$ are popular in applications (see \cite{ReedSimonVol4}): \begin{enumerate} \item\label{iBa}(Bargmann) \begin{equation} N_-(V) \leq 1+\int_0^\infty x |V(x)|\,dx. \end{equation} \item\label{iBb}(Calogero) If $|V(x)|$ is decreasing in $x$, then \begin{equation} N_-(V) \leq c_0 \int_0^\infty \sqrt{|V(x)|}\,dx. \end{equation} The Calogero estimate has the correct scaling in the following sense. \item\label{iBc} Consider the operator \begin{equation} H_\sigma \psi = -\psi'' + \sigma V_0(x) \psi,\qquad x\geq 0 \quad (\text{plus boundary condition}). \end{equation} Then as $\sigma\to\infty$, \begin{equation} N_-(\sigma V_0) \sim c_1 \sigma^{1/2} \int_0^\infty \sqrt{|V_0(x)|}\,dx. \end{equation} This is the so-called \emph{quasiclassical asymptotics}.
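As a simple illustration of the quasiclassical asymptotics (the particular well is chosen only for concreteness), take $V_0(x)=-e^{-x}$. Then \begin{equation} \int_0^\infty \sqrt{|V_0(x)|}\,dx = \int_0^\infty e^{-x/2}\,dx = 2, \end{equation} so $N_-(\sigma V_0)\sim 2c_1\sigma^{1/2}$ as $\sigma\to\infty$, while the Calogero estimate already guarantees that $N_-(\sigma V_0)$ is finite for each fixed $\sigma$.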
It is an important problem to find an estimate for $N_-(V)$ which has the correct scaling in $\mathbb{R}^d$, $d\geq 2$, \emph{i.e.,} for any $\sigma$, \begin{equation} N_-(\sigma V_0) \leq \sigma^{d/2} \Phi(V_0). \qquad (\text{Cwikel-Lieb-Rozenblum}) \end{equation} For $d\geq 3$ this is the CLR estimate \begin{equation} N_-(V) \leq c_d \int\limits_{\mathbb{R}^d} |V(x)|^{d/2}\,dx. \end{equation} For $d=2$ the recent results by Grigor'yan and Nadirashvili \cite{GriNad1} and Shargorodsky \cite{Shargorodsky} give the desirable (though not simple) estimate. The paper \cite{GriNad2} contains the justification of the physical conjecture by Madau and Wu on $N_-(V)$ for 2D operators. The case $d=1$ was studied in the relatively recent papers by K. Naimark, G. Rozenblum, M. Solomyak et al (see \cites{NaimarkSolomyak,RozenblumSolomyak} and references therein). \end{enumerate} \end{enumerate} In this paper we address item \ref{iBohr} above in detail. Items \ref{iBa}, \ref{iBb}, \ref{iBc} will be the subject of future work. \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{InfiniteBlowupSG.pdf} \caption{Part of an infinite blow-up of $SG(2)$, which is Type (i) of the fractafold considered in \S\ref{sec:fractafold}.} \label{fig:InfiniteBlowup} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{LadderFractafold.pdf} \caption{The ladder periodic fractafold based on $SG(2)$, which is Type (ii) of the fractafold considered in \S\ref{sec:fractafold}.} \label{fig:LadderFractafold} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{HexagonalFractafold.pdf} \caption{The hexagonal periodic fractafold based on $SG(2)$, which is Type (ii) of the fractafold considered in \S\ref{sec:fractafold}.} \label{fig:HexagonalFractafold} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{TriangularFractalField.pdf} \caption{The triangular lattice finitely ramified fractal field based on $SG(2)$, which is considered in \S\ref{fr-fi}.} \label{fig:TriFractalfield} \end{figure} Our main objective is to consider, instead of the Euclidean space, a \emph{fractafold}, which according to Strichartz \cite{StrFractafold} is defined as ``a space that is locally modeled on a specified fractal, the fractal equivalent of a manifold.'' The first instance of a fractafold is the infinite Sierpinski gasket (Figure \ref{fig:InfiniteBlowup}). As shown by Barlow and Perkins \cite{BP}, the heat kernel $p_t(x,y)$ on the infinite Sierpinski gasket satisfies a sub-Gaussian estimate with respect to the Euclidean metric $d(\cdot,\cdot)$: $$ p_t(x,y)\asymp c_1 t^{-d_s/2}\exp\left( - c_2 \left(\frac{d(x,y)^{d_w}}{t}\right)^{1/(d_w-1)} \right),$$ where $d_s = 2\log3/\log 5$ and $d_w = \log 5/\log2 > 2$. Here $\asymp$ means that there are upper and lower estimates, but the constants $c_1,c_2$ in them may be different. We would like to note, however, that the heat kernel is not immediately relevant for spectral analysis, partially because its form is complicated, but mostly because the domain of the Laplacian is not an algebra under multiplication \cite{BST}.
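We note in passing (as a consistency check rather than a new fact) that setting $x=y$ in the sub-Gaussian estimate gives the on-diagonal behavior $$ p_t(x,x)\asymp c_1 t^{-d_s/2}, $$ which is the heat-kernel manifestation of the spectral dimension $d_s = 2\log 3/\log 5 \approx 1.365$ that will reappear in the Weyl-type eigenvalue asymptotics below.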
Other typical examples of fractafolds that we consider, see \cite{StrTep} and Section \ref{sec:examples} for details, are shown in Figures \ref{fig:LadderFractafold}, \ref{fig:HexagonalFractafold}, and \ref{fig:TriFractalfield}. For background concerning spectral analysis on fractafolds, see \cites{s1,s2,eigen,i1,i2,i3,RT,RST,StrichartzSHO,o1,OSt,OS-CT,r1,StrichartzFractalsInTheLarge,T,q}. Existence of gaps in the spectrum is investigated in \cites{g1,g2,g3,Hare}. The wave equation on fractals is discussed in \cites{s0,w1,w2,w3,w4}. Physics applications, and spectral zeta functions in particular, are given in \cites{ph1,ph-b,A,Dunne,Ben,Tzeta,dgv}. \section{Main results} \label{sec:mainresults} \subsection{Spectral asymptotics of $-\Delta+V$} \label{sec:setup} In all the examples to follow, $K$ is a compact set in $\mathbb{R}^d$ endowed with a Borel probability measure $\mu$ and a ``well-defined boundary'' $\partial K$ which has $\mu$-measure zero. We shall assume that there exists a well-defined self-adjoint Laplacian operator $-\Delta^\wedge$ (resp. $-\Delta^\vee$) on $L^2(K,\mu)$ satisfying the Dirichlet (resp. Neumann) condition on $\partial K$. Note that $\partial K$ might not coincide with the boundary of $K$ in the topological sense. We assume (as is well known in examples) that both $-\Delta^\wedge$ and $-\Delta^\vee$ have compact resolvents, and hence have pure point spectra. It then makes sense to introduce the eigenvalue counting function \begin{equation} \label{eq:ECF} N^{\rm b}(K,\mu, \lambda) := \#\left\{ \lambda_i(-\Delta^{\rm b})\leq \lambda\right\},\quad {\rm b}\in\{\wedge,\vee\}. \end{equation} \begin{assumption} \label{ass:existspecdim} There exists a positive constant $d_s$ such that \begin{equation} 0<\varliminf_{\lambda\to\infty} \lambda^{-d_s/2} N^{\rm b}(K,\mu,\lambda) \leq \varlimsup_{\lambda\to\infty} \lambda^{-d_s/2} N^{\rm b}(K,\mu,\lambda) <\infty, \end{equation} where ${\rm b} \in \{\wedge,\vee\}$. \end{assumption} A stronger condition than Assumption \ref{ass:existspecdim} is \begin{assumption}[Weyl asymptotics of the bare Laplacian] \label{ass:Weyl} There exist a positive constant $d_s$ and a right-continuous with left limits (c\`{a}dl\`{a}g), $T$-periodic function $G:\mathbb{R} \to \mathbb{R}_+$ satisfying \begin{enumerate}[label=(G\arabic*)] \item $0<\inf G \leq \sup G <\infty$. \item $G$ is independent of the boundary condition ${\rm b}\in \{\wedge,\vee\}$. \end{enumerate} such that as $\lambda\to\infty$, \begin{equation} \label{eq:Weyl} N^{\rm b}(K,\mu,\lambda) = \lambda^{d_s/2} \left[G\left(\frac{1}{2}\log\lambda\right)+ R^{\rm b}(\lambda)\right], \end{equation} where $R^{\rm b}(\lambda)$ denotes a remainder term of order $o(1)$. \end{assumption} \begin{remark} The parameter $d_s$ is often identified with the \emph{spectral dimension} of the bare Laplacian $-\Delta$ on $L^2(K,\mu)$. If $K$ is a domain in $\mathbb{R}^d$ with a nice boundary, and $\mu$ is the Lebesgue measure, then $d_s =d$ and $G$ is the explicit constant $(2\pi)^{-d}\mu(B)\mu(K)$, where $B$ is the unit ball in $\mathbb{R}^d$. In the simplest instance $K=[0,1]$ with the Dirichlet condition, the eigenvalues are $\pi^2 k^2$, $k\geq 1$, so $N^{\wedge}(K,\mu,\lambda) = \lfloor \sqrt{\lambda}/\pi \rfloor$ and (\ref{eq:Weyl}) holds with $d_s=1$ and the constant function $G\equiv 1/\pi$. However, there are classes of fractals $K$ for which (\ref{eq:Weyl}) holds with $G$ being possibly nonconstant. In many examples, the leading-order term in $R^{\rm b}(\lambda)$ gives information about the boundary of the domain. For a Euclidean domain in $\mathbb{R}^d$ with nice boundary, the leading-order term of $R^{\rm b}(\lambda)$ scales with $\lambda^{-1/2}$, and the sign of this term is negative (resp. positive) if ${\rm b}=\wedge$ (resp. if ${\rm b}=\vee$) \cites{BrCa,FlVa,Iv,LaPo}. \end{remark} We now consider an unbounded space $K_\infty$ which admits a cellular decomposition into copies of $K$. Formally, let $K_\infty := \cup_\alpha K_\alpha$, where \begin{itemize} \item Each $K_\alpha$ is isometric to $K$ via the map $\phi_\alpha: K\to K_\alpha$.
\item We identify $\partial K_\alpha :=\phi_\alpha(\partial K)$ to be the boundary of $K_\alpha$, and $K_\alpha^\circ:=K_\alpha \backslash \partial K_\alpha$ the interior of $K_\alpha$. \item (Cells adjoin only on the boundary.) For all $\alpha \neq \alpha'$, $\left(K_\alpha \cap K_{\alpha'}\right) =\left(\partial K_\alpha \cap \partial K_{\alpha'}\right)$. \end{itemize} Let $\mu_\alpha:= \mu \circ \phi_\alpha^{-1}$ be the push-forward measure of $\mu$ onto $K_\alpha$. For any $\alpha \neq \alpha'$, it is straightforward to define the ``glued'' measure $\mu_{\alpha,\alpha'}$ on $K_\alpha \cup K_{\alpha'}$ in the natural way: \begin{equation} \forall B \in \mathcal{B}(K_\alpha \cup K_{\alpha'}) : \qquad \mu_{\alpha,\alpha'}(B) = \mu_\alpha(B\cap K_\alpha) + \mu_{\alpha'}(B \cap K_{\alpha'}). \end{equation} By extension we define the measure $\mu_\infty$ on $K_\infty$. \begin{proposition}[Decoupling of $L^2$] \label{prop:decoupling} For all $\alpha \neq \alpha'$ we have $K^\circ_\alpha \cap K^\circ_{\alpha'} =\emptyset$ and $L^2(K^\circ_\alpha \cup K^\circ_{\alpha'}, \mu_{\alpha,\alpha'}) = L^2(K^\circ_\alpha, \mu_\alpha) \oplus L^2(K^\circ_{\alpha'}, \mu_{\alpha'})$. \end{proposition} Proposition \ref{prop:decoupling} allows one to decouple the Laplacian on the glued measure space into the direct sum of the Laplacians on the individual components (see \cite{ReedSimonVol4}*{Proposition XIII.15.3}): \begin{equation} \Delta^{\rm b}_{K_\alpha\cup K_{\alpha'}}: = \Delta^{\rm b}_{K_\alpha} \oplus \Delta^{\rm b}_{K_{\alpha'}}, \end{equation} from which it follows that \begin{equation} N^{\rm b}(K_\alpha\cup K_{\alpha'}, \mu_{\alpha,\alpha'},\lambda) = N^{\rm b}(K_\alpha, \mu_\alpha,\lambda) + N^{\rm b}(K_{\alpha'}, \mu_{\alpha'}, \lambda). \end{equation} By extension we have that \begin{equation} N^{\rm b}(K_\infty,\mu_\infty,\lambda) = \sum_\alpha N^{\rm b}(K_\alpha,\mu_\alpha,\lambda). \end{equation} For later purposes we also fix a metric $d: K_\infty\times K_\infty \to [0,\infty)$ and an origin $0\in K_\infty$. In proving our main results, the metric $d$ does not play a major role. However for practical applications, such as determining the spectral dimension of the Schr\"odinger operator, one needs to understand the interplay between the metric $d$ and the measure $\mu_\infty$; see Remark \ref{rem:pot} and Section \ref{sec:examples}. Let the potential $V$ be a nonnegative, locally bounded measurable function on $K_\infty$. (In general, $V$ can be a real-valued, locally bounded measurable function which is bounded below. By adding a suitable constant to $V$ one retrieves the case of a nonnegative potential.) \begin{assumption} \label{ass:sa} There exists a self-adjoint Laplacian $-\Delta$ on $L^2(K_\infty,\mu_\infty)$ [equivalently, a local regular Dirichlet form $(\widetilde{\mathcal{E}},\widetilde{\mathcal{F}})$ on $L^2(K_\infty, \mu_\infty)$], and the potential satisfies $V(x) \to +\infty$ as $d(0,x)\to+\infty$. \end{assumption} \begin{proposition} \label{prop:pp} Under Assumption \ref{ass:sa}, the Schr\"odinger operator $(-\Delta +V)$, regarded as a sum of quadratic forms, is self-adjoint on $L^2(K_\infty,\mu_\infty)$, and has pure point spectrum. \end{proposition} \begin{proof} The proof uses the min-max principle as stated in \cite{ReedSimonVol4}*{Theorem XIII.2}, and then follows the proof of \cite{ReedSimonVol4}*{Theorem XIII.16}: since $V(x)\to+\infty$, the min-max values tend to $+\infty$, so the spectrum is purely discrete.
\end{proof} By virtue of Proposition \ref{prop:pp}, we can define the eigenvalue counting function for $(-\Delta+V)$ on $K_\infty$: \begin{equation} N(K_\infty, \mu_\infty,V,\lambda) := \#\left\{\lambda_i\left(-\Delta+V\right)\leq \lambda\right\}. \end{equation} We are interested in the asymptotics of $N(K_\infty,\mu_\infty,V,\lambda)$ as $\lambda\to\infty$. In order to state the precise results, we will impose some mild conditions on the potential $V$. Given a potential $V$ on $K_\infty$, let $V^\wedge$ (resp. $V^\vee$) be the function which is constant on each cell $K_\alpha$ and takes the value $\sup_{x\in K_\alpha} V(x)$ (resp. $\inf_{x\in K_\alpha} V(x)$) on $K_\alpha$. We introduce the associated distribution functions \begin{eqnarray} F^\wedge(V,\lambda) &:=& \mu_\infty\left( \{x\in K_\infty: V^\wedge(x) \leq \lambda\}\right),\\ F^\vee(V,\lambda) &:=& \mu_\infty\left( \{x\in K_\infty: V^\vee(x) \leq \lambda\}\right). \end{eqnarray} Note that $F^\wedge(V,\lambda) \leq F^\vee(V,\lambda)$. \begin{assumption} \label{ass:V-s} There exists a constant $C>0$ such that $F^\vee(V, 2\lambda) \leq C F^\wedge(V,\lambda)$ for all sufficiently large $\lambda$. \end{assumption} Note that this assumption implies that both $F^\vee(V,\cdot)$ and $F^\wedge(V,\cdot)$ have the doubling property: there exist $C^\vee, C^\wedge>0$ such that \begin{equation} F^\vee(V,2\lambda) \leq C^\vee F^\vee(V,\lambda) \quad\text{and}\quad F^\wedge(V,2\lambda) \leq C^\wedge F^\wedge(V,\lambda) \end{equation} for all sufficiently large $\lambda$. \begin{assumption} \label{ass:V} The potential $V$ on $K_\infty$ satisfies \begin{equation} \frac{F^\vee(V,\lambda)}{F^\wedge(V,\lambda)} = 1+o(1) \quad \text{as}~\lambda\to\infty. \end{equation} \end{assumption} \begin{remark}\label{rem:pot} To understand Assumption \ref{ass:V-s} or \ref{ass:V}, it helps to keep the following example in mind. Let $(K_\infty, \mu_\infty, d)$ be a metric measure space which admits a cellular decomposition into copies of the compact metric measure space $(K,\mu,d)$. Let ${\rm diam}_d(K)$ be the diameter of $K$ in the $d$-metric. Further suppose that $\mu_\infty$ is Ahlfors-regular: there exist positive constants $c_1$, $c_2$, and $\alpha$ such that \begin{equation} c_1 r^\alpha \leq \mu_\infty(B_d(x,r)) \leq c_2 r^\alpha \end{equation} for all $x\in K_\infty$ and sufficiently large $r>0$. As for the potential $V$, assume that there exist $\beta>1$ and $\gamma \in (0,1]$ such that \begin{equation}\label{eq:Vdb} c_3 d(0,x)^\beta \leq V(x) \leq c_4 d(0,x)^\beta, \end{equation} \begin{equation} \label{eq:Vreg} \frac{ |V(x)-V(y)|}{d(x,y)^\gamma} \leq c_5 [\max(d(0,x), d(0,y))]^{\beta-\gamma} \end{equation} for all $x,y \in K_\infty$. By a direct calculation one can verify that (\ref{eq:Vdb}) implies \begin{equation} c_6 \lambda^{\alpha/\beta}\leq F^{\rm b}(V,\lambda) \leq c_7 \lambda^{\alpha/\beta}, \end{equation} so that Assumption \ref{ass:V-s} is satisfied. Meanwhile, (\ref{eq:Vreg}) implies \begin{equation} V^\wedge(x)-V^\vee(x) \leq c_8 [{\rm diam}_d(K)]^\gamma d(0,x)^{\beta-\gamma} . \end{equation} Thus the oscillation of $V$ over a cell at distance $r$ from the origin is $O(r^{\beta-\gamma})=o(r^\beta)$, and (\ref{eq:Vdb}) and (\ref{eq:Vreg}) together imply Assumption \ref{ass:V}. \end{remark} Our main results are the following.
\begin{theorem}[Existence of spectral dimension] \label{thm:specdim} Under Assumptions \ref{ass:existspecdim}, \ref{ass:sa}, and \ref{ass:V-s}, we have that \begin{equation} 0<\varliminf_{\lambda\to\infty} \frac{N(K_\infty, \mu_\infty, V,\lambda)}{\lambda^{d_s/2} F(V,\lambda)} \leq \varlimsup_{\lambda\to\infty} \frac{N(K_\infty,\mu_\infty,V,\lambda)}{\lambda^{d_s/2}F(V,\lambda)} <\infty, \end{equation} where $F(V,\lambda) := \mu_\infty\left(\{x\in K_\infty: V(x)\leq \lambda\} \right)$. In particular, if $F(V,\lambda) = \Theta(\lambda^\beta)$ as $\lambda\to\infty$, then $d_s(V) = d_s + 2\beta$ is the effective spectral dimension of the Schr\"odinger operator $(-\Delta +V)$. \end{theorem} \begin{theorem}[Bohr's formula] \label{thm:Bohrmain} Under Assumptions \ref{ass:Weyl}, \ref{ass:sa}, and \ref{ass:V}, \begin{equation} \lim_{\lambda\to\infty} \frac{N(K_\infty,\mu_\infty,V,\lambda)}{g(V,\lambda)}=1, \end{equation} where \begin{equation} g(V,\lambda) := \int\limits_{K_\infty} \left[\left(\lambda-V(x)\right)_+ \right]^{d_s/2}G\left(\frac{1}{2}\log(\lambda-V(x))_+\right)\, \mu_\infty(dx), \end{equation} and $(f)_+ = \max\{f,0\}$. \end{theorem} In what follows we shall refer to $g$ as ``Bohr's asymptotic function.'' The proof of Theorem \ref{thm:Bohrmain}, discussed in Section \ref{sec:Bohr}, utilizes Dirichlet-Neumann bracketing on the eigenvalue counting function and on Bohr's asymptotic function. This is a relatively standard technique which is explained in the mathematical physics literature; see \emph{e.g.} \cite{ReedSimonVol4}*{\S XIII}. The novelty of our approach is to restate the sufficient condition on the potential $V$ in terms of its distribution function, which allows us to extend the classical Bohr's formula to a wider class of settings, such as unbounded fractal spaces. \subsection{Laplace transform version} There are also analogs of Theorems \ref{thm:specdim} and \ref{thm:Bohrmain} for the Laplace-Stieltjes transform of the eigenvalue counting function \begin{equation} \mathcal{L}(K_\infty,\mu_\infty,V,t) := {\rm Tr}_{K_\infty}\{e^{-t(-\Delta+V)}\} = \int_0^\infty e^{-\lambda t}\,N(K_\infty,\mu_\infty,V,d\lambda) . \end{equation} When $V=0$ this is the trace of the heat semigroup associated with the bare Laplacian $-\Delta$. More generally, it can be regarded as the trace of the Feynman-Kac semigroup associated to the Markov process driven by $-\Delta$, subject to killing with rate $V(x)$ at $x\in K_\infty$. The reason for stating these analogous versions is that for certain compact metric measure spaces, it is not known whether an explicit Weyl asymptotic formula for the bare Laplacian (Assumption \ref{ass:Weyl}) holds. However, it may be the case that an asymptotic formula for the \emph{heat kernel trace} (in some literature it is also called the \emph{partition function}) \begin{equation} \mathcal{L}(K,\mu,t) := {\rm Tr}\{e^{t\Delta}\} = \int\limits_K p_t(x,x) \,\mu(dx) \end{equation} exists in the $t\downarrow 0$ limit. Here $p_t(x,y) ~(t>0,~x,y\in K)$ is the heat kernel associated to the Markov semigroup $e^{t\Delta}$ generated by the self-adjoint Laplacian $-\Delta$ on $L^2(K,\mu)$. To be more precise, we denote by $\mathcal{L}^{\rm b}(K,\mu,t)$ the heat kernel trace of the Laplacian $-\Delta^{\rm b}$ on $L^2(K,\mu)$ with boundary condition ${\rm b} \in \{\wedge,\vee\}$.
Then \begin{equation} \mathcal{L}^{\rm b}(K,\mu, t) = \int_0^\infty e^{-\lambda t} N^{\rm b}(K,\mu,d\lambda) = \int_K \, p_t^{\rm b}(x,x)\,\mu(dx), \end{equation} where $N^{\rm b}(K,\mu,\lambda)$ is as in (\ref{eq:ECF}), and $p_t^{\rm b}(x,y)$ is the heat kernel associated with the infinitesimal generator $-\Delta^{\rm b}$. \begin{assumption}[Existence of the spectral dimension for the bare Laplacian] \label{ass:SpecHKT} There exists a positive constant $d_s$ such that \begin{equation} 0< \varliminf_{t\downarrow 0} t^{d_s/2}\mathcal{L}^{\rm b}(K,\mu,t) \leq \varlimsup_{t\downarrow 0} t^{d_s/2}\mathcal{L}^{\rm b}(K,\mu,t) <\infty \end{equation} for ${\rm b} \in \{\wedge,\vee\}$. \end{assumption} \begin{theorem} \label{thm:specdimHKT} Under Assumptions \ref{ass:sa}, \ref{ass:V-s}, and \ref{ass:SpecHKT}, we have that \begin{equation} 0 < \varliminf_{t\downarrow 0} \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}(V,t)} \leq \varlimsup_{t\downarrow 0} \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}(V,t)} <\infty, \end{equation} where \begin{equation} \mathcal{F}(V,t) = \int_{K_\infty} \, e^{-t V(x)}\,\mu_\infty(dx). \end{equation} In particular, if $F(V,\lambda) := \mu_\infty\left(\{x\in K_\infty: V(x)\leq \lambda\} \right)=\Theta(\lambda^\beta)$ as $\lambda\to\infty$, then $d_s(V)=d_s+2\beta$ is the spectral dimension for the Schr\"odinger operator $(-\Delta+V)$. \end{theorem} \begin{assumption}[Weak Weyl asymptotics for the bare Laplacian] \label{ass:WeylHKT} There exist a positive constant $d_s$ and a continuous function $H:\mathbb{R}_+\to\mathbb{R}_+$, independent of the boundary condition ${\rm b}\in \{\wedge,\vee\}$ and with $0 < \inf H \leq \sup H < \infty$, such that as $t\downarrow 0$, \begin{equation} \mathcal{L}^{\rm b}(K,\mu, t) = t^{-d_s/2}\left[H(t) + \rho^{\rm b}(t)\right], \end{equation} where the remainder term $\rho^{\rm b}(t)$ is of order $o(1)$ as $t\downarrow 0$. \end{assumption} \begin{theorem}[Laplace transform version of Bohr's formula] \label{thm:LaplaceBohr} Under Assumptions \ref{ass:sa}, \ref{ass:WeylHKT}, and \ref{ass:V}, we have that \begin{equation} \label{eq:LaplaceBohr} \lim_{t\downarrow 0} \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2}H(t)\mathcal{F}(V,t)}=1. \end{equation} \end{theorem} Note that (\ref{eq:LaplaceBohr}) can also be interpreted as the asymptotic factorization of the trace of the Feynman-Kac semigroup: \begin{equation} \lim_{t\downarrow 0} \frac{{\rm Tr}_{K_\infty}\{e^{-t(-\Delta+V)}\}}{ {\rm Tr}_K\{e^{t\Delta}\} \cdot {\rm Tr}_{K_\infty}\{e^{-tV}\}}=1. \end{equation} \begin{remark} We make a few comments concerning the connections between Assumption \ref{ass:existspecdim}/\ref{ass:Weyl} and Assumption \ref{ass:SpecHKT}/\ref{ass:WeylHKT}. \begin{enumerate}[label=(\roman*)] \item Assumption \ref{ass:existspecdim} is equivalent to Assumption \ref{ass:SpecHKT}. \item Assumption \ref{ass:Weyl} implies Assumption \ref{ass:WeylHKT}. However, the reverse implication is possibly not true, since the classical technique of Tauberian theorems may not be applicable in this context. \item In order to prove Bohr's formula (Theorem \ref{thm:Bohrmain}), we impose in Assumption \ref{ass:Weyl} that the function $G$ be a periodic function. This is natural in light of the fractal examples we are interested in. However, to prove the Laplace transform version of Bohr's formula (Theorem \ref{thm:LaplaceBohr}), one does not need to assume log-periodicity in Assumption \ref{ass:WeylHKT}.
This leads to the question of whether one could relax the periodicity of $G$ and still be able to prove the original Bohr's formula in greater generality (we do not address this question in the present work). \end{enumerate} \end{remark} \subsection{Application of the main results} To illustrate how our main results can be used, we now describe the ``harmonic oscillator'' problem on the Sierpinski gasket which was investigated in \cite{StrichartzSHO}. For discussions of more general unbounded potentials on other fractal-like spaces, see Section \ref{sec:examples}. \begin{example}[Harmonic oscillator on the infinite blow-up of the Sierpinski gasket] Let $K$ be the Sierpinski gasket ($SG$). To construct $SG$, we first fix the three vertices $\{p_1, p_2, p_3\}$ of an equilateral triangle in $\mathbb{R}^2$, and then introduce the contraction maps $\Psi_j: \mathbb{R}^2 \to \mathbb{R}^2$, $\Psi_j(x) = \frac{1}{2}(x-p_j) + p_j$, $j=1,2,3$. Then $SG$ is the unique nonempty compact fixed point $K$ of the iterated function system consisting of the $\Psi_j$: $K = \cup_{j=1}^3 \Psi_j(K)$. Let $w=w_1 w_2 \cdots w_m$ be a word of length $|w|=m$ where each letter $w_j \in \{1,2,3\}$, and define the map $\Psi_w = \Psi_{w_1} \circ \cdots \circ \Psi_{w_m}$. We endow $SG$ with the uniform self-similar measure $\nu$ with $\nu(\Psi_w K) = 3^{-|w|}$. The theory of Kigami \cite{Kigami} allows us to define the standard Laplacian on $L^2(SG, \nu)$ with either Dirichlet or Neumann condition on the boundary $\partial(SG)=\{p_1,p_2,p_3\}$. Moreover, Kigami and Lapidus \cite{KigamiLapidus} proved that the eigenvalue counting function for the standard Laplacian satisfies \begin{equation} N^{\rm b}(SG,\nu,\lambda) = \lambda^{d_s/2} \left[G\left(\frac{1}{2}\log\lambda\right)+o(1)\right] \qquad ({\rm b} \in \{\wedge,\vee\}), \end{equation} where $d_s= 2\log3/\log 5$, and $G$ is a c\`{a}dl\`{a}g periodic function with period $\frac{1}{2}\log 5$ which contains discontinuities. Thus Assumption \ref{ass:Weyl} is satisfied. Next, for each infinite word $w=w_1 w_2 \cdots$ which is not eventually constant, define \begin{equation} SG_\infty^w := \bigcup_{m=0}^\infty \left(\Psi_{w_1}^{-1}\circ \cdots \circ \Psi_{w_m}^{-1}\right)(SG) \end{equation} to be the \emph{infinite blow-up} of $SG$ associated with the word $w$. This is an unbounded fractal space in which every point $x\in SG_\infty^w$ has a neighborhood homeomorphic to a neighborhood in $SG$, and thus is a fractal analog of a manifold, called a \emph{fractafold} by Strichartz \cite{StrichartzFractalsInTheLarge}. Properties of the Laplacian on $SG_\infty^w$ are discussed in \cite{StrFractafold}. Here we point out that by construction, $SG_\infty^w$ admits a cellular decomposition into copies of $SG$ which intersect on the boundary only. Thus the measure $\nu$ on $SG$ can be readily extended to the measure $\nu_\infty$ on $SG_\infty^w$. In \cite{StrichartzSHO} Fan, Khandker, and Strichartz studied the spectral problem of a harmonic oscillator potential $V$ on a class of infinite blow-ups of $SG$. They defined $V$ to be a solution to $-\Delta V=-1$ on $SG_\infty^w$ which grows unboundedly as $d(0,x)\to\infty$ and attains a minimum at some vertex $x_0\in SG_\infty^w$. (The first condition is a suitable replacement for $V(x)=\frac{1}{2}|x|^2$, which is available only in the Euclidean setting.) Note that this implies that $V(x)$ grows at infinity at rate comparable to a positive power of $ R(x_0,x)$, where $R(\cdot,\cdot)$ is the effective resistance metric on $SG_\infty^w$. This verifies Assumption \ref{ass:V-s}.
However, we cannot verify Assumption \ref{ass:V} for general words $w$. The paper \cite{StrichartzSHO} also contains information about the spectral dimension, which depends on the particular blow-up of $SG$. Through a mix of computations and numerical simulations, the authors of \cite{StrichartzSHO} were able to find properties of the low-lying eigenfunctions, as well as the asymptotic growth rate of the eigenvalue counting function of $-\Delta+V$ \cite{StrichartzSHO}*{Theorem 8-1 and Eq. (8.18)}: \begin{equation} \label{SGHarmscaling} c\lambda^{d_s} \leq N(SG_\infty^w, \nu_\infty, V,\lambda) \leq C \lambda^{d_s}. \end{equation} Among the open questions posed in \cite{StrichartzSHO}*{Problem 8-3 and Conjecture 8-4} is finding the asymptotic ``Weyl ratio'' $\lambda^{-d_s(V)/2}N(K_\infty,\mu_\infty,V,\lambda)$ of the eigenvalue counting function. Here we can provide an indirect answer. Whenever Assumptions \ref{ass:Weyl}, \ref{ass:sa}, and \ref{ass:V} are satisfied, Bohr's formula (Theorem \ref{thm:Bohrmain}) says that as $\lambda\to\infty$, \begin{equation}\label{eq:SGBohr} N(SG_\infty^w,\nu_\infty,V,\lambda) =(1+o(1)) \int_{SG_\infty^w} \, \left[\left(\lambda-V(x)\right)_+\right]^{d_s/2} G\left(\frac{1}{2}\log(\lambda-V(x))_+\right)\,d\nu_\infty(x). \end{equation} This in some sense answers the Weyl ratio question, in spite of the non-explicit nature of the integral on the right-hand side. \end{example} The rest of this paper is organized as follows. In Section \ref{sec:Bohr} we describe the tools needed to establish Bohr's formula in the setting of an unbounded space which admits a cellular decomposition according to the setup in Section \ref{sec:setup}. In Section \ref{sec:Bohrpot} we show how to restate the general sufficient condition for Bohr's formula in terms of distribution functions of $V^\vee$ and $V^\wedge$, and also give a ``weak'' version of Bohr's formula. We also show how the addition of an unbounded potential leads to the absence of gaps in the spectrum of $-\Delta +V$. This is of independent interest since the spectrum of the bare Laplacian on certain fractals (\emph{e.g.} the Sierpinski gasket) has gaps. In Section \ref{sec:LaplaceBohr} we establish the Laplace transform version of Bohr's formula. Finally, in Section \ref{sec:examples}, we discuss applications of our main results to various unbounded potentials on several types of unbounded fractal spaces. \section{The general Bohr's formula}\label{sec:Bohr} In this section and the next section, Assumptions \ref{ass:Weyl} and \ref{ass:sa} are in force. \subsection{Bohr's asymptotic functions} Let $-\Delta^\wedge$ (resp. $-\Delta^\vee$) be the Laplacian on $L^2(K_\infty,\mu_\infty)$ with Dirichlet (resp. Neumann) conditions on the gluing boundary $\cup_\alpha \partial K_\alpha$. For each potential $V$, let $V^\wedge$ (resp. $V^\vee$) be the piecewise constant function which takes value $\sup_{x\in K_\alpha} V(x)$ (resp. $\inf_{x\in K_\alpha} V(x)$) on $K_\alpha$. Thanks to Proposition \ref{prop:pp}, one can introduce the eigenvalue counting functions \begin{eqnarray} N(K_\infty, \mu_\infty,V,\lambda) &:=& \#\left\{\lambda_i \left(-\Delta+V\right)\leq \lambda\right\},\\ N^\wedge(K_\infty, \mu_\infty,V,\lambda) &:=& \#\left\{\lambda_i \left(-\Delta^\wedge+V^\wedge\right)\leq \lambda\right\},\\ N^\vee(K_\infty, \mu_\infty,V,\lambda) &:=& \#\left\{\lambda_i \left(-\Delta^\vee+V^\vee\right)\leq \lambda\right\}.
\end{eqnarray} Note that since $(-\Delta^\vee + V^\vee) \leq (-\Delta +V) \leq (-\Delta^\wedge+ V^\wedge)$ in the sense of quadratic forms, \begin{equation} \label{eq:DNbracket} N^\wedge(K_\infty, \mu_\infty,V,\lambda)\leq N(K_\infty, \mu_\infty,V,\lambda) \leq N^\vee(K_\infty, \mu_\infty,V,\lambda). \end{equation} We shall show that under some mild additional conditions on $V$, $N(K_\infty, \mu_\infty, V,\lambda)$ is asymptotically comparable to ``Bohr's asymptotic function'' \begin{equation} \label{eq:g} g(V,\lambda) := \int\limits_{K_\infty} \left[\left(\lambda-V(x)\right)_+ \right]^{d_s/2}G\left(\frac{1}{2}\log(\lambda-V(x))_+\right)\, d\mu_\infty(x), \end{equation} where $(f)_+:= \max\{f,0\}$, and $G$ is as in Assumption \ref{ass:Weyl}. In order to quantify the error of this approximation, we introduce the functions \begin{equation} \label{eq:gb} g^{\rm b}(V,\lambda) := \int\limits_{K_\infty} \left[\left(\lambda-V^{\rm b}(x)\right)_+ \right]^{d_s/2}G\left(\frac{1}{2}\log(\lambda-V^{\rm b}(x))_+\right)\, d\mu_\infty(x) \end{equation} and \begin{equation} \label{eq:Rb} \mathcal{R}^{\rm b}(V,\lambda) := \int\limits_{K_\infty} \left[(\lambda-V^{\rm b}(x))_+\right]^{d_s/2} R^{\rm b}\left((\lambda-V^{\rm b}(x))_+\right)\,d\mu_\infty(x) \end{equation} for ${\rm b}\in\{\wedge, \vee\}$, where $R^{\rm b}$ is the remainder term which appeared in Assumption \ref{ass:Weyl}. Observe that since $V^{\rm b}(x)$ is constant on cells, the right-hand side expressions in (\ref{eq:gb}) and (\ref{eq:Rb}) are really discrete sums: \begin{eqnarray} \label{eq:gb2} g^{\rm b}(V,\lambda) &=&\sum_{\{\alpha: \left.V^{\rm b}\right|_{K_\alpha}\leq \lambda\}}\left[\lambda-\left.V^{\rm b}\right|_{K_\alpha}\right]^{d_s/2} G\left(\frac{1}{2}\log\left(\lambda-\left.V^{\rm b}\right|_{K_\alpha}\right)\right),\\ \label{eq:Rb2} \mathcal{R}^{\rm b}(V,\lambda) &=&\sum_{\{\alpha: \left.V^{\rm b}\right|_{K_\alpha}\leq \lambda\}}\left[\lambda-\left.V^{\rm b}\right|_{K_\alpha}\right]^{d_s/2} R^{\rm b}\left(\lambda-\left.V^{\rm b}\right|_{K_\alpha}\right). \end{eqnarray} Moreover, by Proposition \ref{prop:decoupling}, $K_\infty$ decouples into the various $K_\alpha$ according to the Dirichlet or Neumann boundary condition, so \begin{equation} \label{eq:Nbreakdown} N^{\rm b}(K_\infty,\mu_\infty,V,\lambda) = \sum_{\{\alpha: \left.V^{\rm b}\right|_{K_\alpha} \leq \lambda\}} N^{\rm b}\left(K_\alpha,\,\mu_\alpha,\,\lambda-\left.V^{\rm b}\right|_{K_\alpha}\right). \end{equation} Combining (\ref{eq:Weyl}), (\ref{eq:gb2}), (\ref{eq:Rb2}), and (\ref{eq:Nbreakdown}), we obtain \begin{equation} \label{eq:NgR} N^{\rm b}(K_\infty, \mu_\infty, V,\lambda)=g^{\rm b}(V,\lambda) + \mathcal{R}^{\rm b}(V,\lambda) . \end{equation} \subsection{Monotonicity of Bohr's asymptotic functions} A key monotonicity result we need is \begin{proposition} \label{prop:monotoneg} Fix a potential $V$. Then each of the functions $\lambda \mapsto g(V,\lambda)$, $\lambda\mapsto g^\wedge(V,\lambda)$, and $\lambda\mapsto g^\vee(V,\lambda)$ is monotone nondecreasing for all $\lambda>0$. Moreover, $g^\wedge(V,\lambda) \leq g(V,\lambda) \leq g^\vee(V,\lambda)$. \end{proposition} This follows from the monotonicity of the integrand of the $g$ function. \begin{proposition} \label{prop:monotone} The function $W(\lambda)=\lambda^{d_s/2} G\left(\frac{1}{2}\log\lambda\right)$ is nondecreasing. \end{proposition} \begin{remark} This proposition implies, in particular, that $G$ has a c\`{a}dl\`{a}g version.
Although the result is very simple, we could not find it in the literature. \end{remark} \begin{proof}[Proof of Proposition \ref{prop:monotone}] If $W$ were not nondecreasing, there would exist $\lambda_2>\lambda_1>0$ such that $W(\lambda_2)-W(\lambda_1)=-\delta<0$. Since $G$ is $T$-periodic, $W(e^{2kT}\lambda)=e^{kT d_s}W(\lambda)$ for every positive integer $k$, and hence, by \eqref{eq:Weyl}, $$N^{\rm b}(K,\mu,e^{2kT}\lambda_2)-N^{\rm b}(K,\mu,e^{2kT}\lambda_1)=e^{kT d_s}\left(-\delta+o(1)\right)<0$$ for all sufficiently large $k$, which contradicts the monotonicity of $N^{\rm b}(K,\mu,\lambda)$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:monotoneg}] Fix a potential $V$. For each $\lambda>0$ and $x\in K_\infty$, put \begin{equation} W(\lambda, V, x) = ((\lambda- V(x))_+)^{d_s/2} G\left(\frac{1}{2}\log((\lambda-V(x))_+)\right) \end{equation} and \begin{equation} W^{\rm b}(\lambda,V,x) =((\lambda- V^{\rm b}(x))_+)^{d_s/2} G\left(\frac{1}{2}\log((\lambda-V^{\rm b}(x))_+)\right). \end{equation} Observe that $W(\lambda, V, x)=W((\lambda-V(x))_+)$ and $W^{\rm b}(\lambda, V,x) = W((\lambda-V^{\rm b}(x))_+)$. Using Proposition \ref{prop:monotone} we deduce the following two consequences. First, $\lambda\mapsto W(\lambda,V,x)$ is nonnegative and monotone nondecreasing for each $x$. Since $g(V,\lambda)$ is the integral of $W(\lambda,V,x)$ over $x$ with respect to $\mu_\infty$, it follows that $\lambda\mapsto g(V,\lambda)$ is also monotone nondecreasing. The monotonicity of $\lambda\mapsto g^{\rm b}(V,\lambda)$ is proved in exactly the same way. Second, the monotonicity of $W(\lambda)$ implies that $W^\wedge(\lambda,V,x)\leq W(\lambda,V,x) \leq W^\vee(\lambda,V,x)$ for each $x$, and upon integration over $x$ we get $g^\wedge(V,\lambda)\leq g(V,\lambda) \leq g^\vee(V,\lambda)$. \end{proof} \subsection{Bohr's asymptotics via Dirichlet-Neumann bracketing} We have all the necessary pieces to state the error of approximating $N(K_\infty,\mu_\infty, V,\lambda)$ by $g(V,\lambda)$. \begin{theorem}[Error estimate in Bohr's approximation] \label{thm:BohrError} Under Assumptions \ref{ass:Weyl} and \ref{ass:sa}, we have \begin{equation} \label{eq:bohrcomp} \left|\frac{N(K_\infty,\mu_\infty,V,\lambda)}{g(V,\lambda)} -1\right| \leq \max_{{\rm b}\in\{\wedge,\vee\}} \left|\frac{g^{\tilde{\rm b}}(V,\lambda)}{g^{\rm b}(V,\lambda)} -1+ \frac{\mathcal{R}^{\tilde{\rm b}}(V,\lambda)}{g^{\rm b}(V,\lambda)}\right|, \end{equation} where $\tilde{\rm b} = \wedge$ (resp. $\tilde{\rm b}=\vee$) if ${\rm b}=\vee$ (resp. if ${\rm b}=\wedge$). \end{theorem} \begin{proof} From (\ref{eq:DNbracket}) we have \begin{equation} N^\wedge(K_\infty, \mu_\infty,V,\lambda)\leq N(K_\infty, \mu_\infty,V,\lambda) \leq N^\vee(K_\infty, \mu_\infty,V,\lambda). \end{equation} Meanwhile by Proposition \ref{prop:monotoneg}, \begin{equation} g^\wedge(V,\lambda) \leq g(V,\lambda) \leq g^\vee(V,\lambda). \end{equation} Therefore \begin{equation} \label{eq:DNineq} \frac{N^\wedge(K_\infty, \mu_\infty,V,\lambda)}{g^\vee(V,\lambda)} \leq \frac{N(K_\infty, \mu_\infty,V,\lambda)}{g(V,\lambda)} \leq \frac{N^\vee(K_\infty, \mu_\infty,V,\lambda)}{g^\wedge(V,\lambda)}. \end{equation} Subtract $1$ from every term in the inequality (\ref{eq:DNineq}), and then use (\ref{eq:NgR}) to replace $N^{\rm b}(K_\infty,\mu_\infty,V,\lambda)$ with $g^{\rm b}(V,\lambda) + \mathcal{R}^{\rm b}(V,\lambda)$. Finally, the absolute value of the middle term of the inequality is estimated by the maximum of the absolute values of the two outer terms. \end{proof} Having established the main error estimate, Theorem \ref{thm:BohrError}, we can now give an abstract condition on $V$ for which Bohr's formula holds.
\begin{assumption} \label{ass:pot} The potential $V$ on $K_\infty$ satisfies \begin{equation} \frac{g^\vee(V,\lambda)}{g^\wedge(V,\lambda)} = 1+o(1) \quad \text{as}~\lambda\to\infty. \end{equation} \end{assumption} \begin{theorem}[Strong Bohr's formula] \label{thm:Bohr} Under Assumptions \ref{ass:Weyl}, \ref{ass:sa}, and \ref{ass:pot}, we have \begin{equation} \label{eq:strongBohr} \lim_{\lambda\to\infty} \frac{N(K_\infty,\mu_\infty,V,\lambda)}{g(V,\lambda)} =1. \end{equation} \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:Bohr}] Observe that Assumptions \ref{ass:Weyl} and \ref{ass:pot} together imply that the error term stated in Theorem \ref{thm:BohrError} is $o(1)$. \end{proof} \section{Connection between Bohr's formula and the distribution function of the potential} \label{sec:Bohrpot} Assumption \ref{ass:pot} can be too abstract for applications dealing with fractal spaces. We now explain how this assumption can be restated in terms of distribution functions of $V$: \begin{equation} F(V,\lambda) := \mu_\infty(\{x \in K_\infty: V(x)\leq \lambda\}) \quad \text{and}\quad F^{\rm b}(V,\lambda) := \mu_\infty(\{x\in K_\infty: V^{\rm b}(x) \leq \lambda\}). \end{equation} \begin{lemma} We have that \begin{equation} g(V,\lambda) = \int_0^{W(\lambda)} \,F(V,\lambda-W^{-1}(t))\,dt \quad \text{and} \quad g^{\rm b}(V,\lambda) = \int_0^{W(\lambda)} \, F^{\rm b}(V,\lambda-W^{-1}(t))\,dt, \end{equation} where \begin{equation} W^{-1}(t) = \inf\{\lambda \geq 0: W(\lambda) \geq t\} \end{equation} is the generalized inverse of $W(\lambda) = \lambda^{d_s/2} G(\frac{1}{2}\log \lambda)$. \end{lemma} \begin{proof} We start with a fundamental identity in measure theory. For any nonnegative function $f$ on a $\sigma$-finite measure space $(X,m)$, Fubini's theorem tells us that \begin{equation} \int_X \, f(x) \,m(dx) = \int_0^\infty \, m(\{x\in X: f(x) \geq t\})\,dt. \end{equation} Applying this identity to $g(V,\lambda)$ we find \begin{equation} \label{eq:gmeas} g(V,\lambda) = \int_{K_\infty} W((\lambda-V(x))_+) \, d\mu_\infty(x) = \int_0^\infty\, \mu_\infty(\{x\in K_\infty: W((\lambda-V(x))_+) \geq t\})\, dt. \end{equation} Since $W$ is monotone nondecreasing (Proposition \ref{prop:monotone}), it has a well-defined generalized inverse $W^{-1}$, which satisfies \begin{equation} W(\lambda) \geq t \Longleftrightarrow \lambda \geq W^{-1}(t). \end{equation} So the right-hand term in (\ref{eq:gmeas}) can be further rewritten as \begin{equation} \int_0^\infty \, \mu_\infty\left(\{x\in K_\infty: (\lambda-V(x))_+ \geq W^{-1}(t)\}\right)\,dt. \end{equation} Now by assumption $V$ is a nonnegative potential, so $(\lambda-V(x))_+ \leq \lambda$; hence the set in the integrand is nonempty only if $W^{-1}(t) \leq \lambda$, or equivalently, $t \leq W(\lambda)$. This truncates the integral at $W(\lambda)$, and we get \begin{equation} g(V,\lambda)=\int_0^{W(\lambda)} \, \mu_\infty\left(\{x\in K_\infty: V(x) \leq \lambda-W^{-1}(t)\}\right)\,dt = \int_0^{W(\lambda)} \, F(V, \lambda-W^{-1}(t))\,dt. \end{equation} The proof for $g^{\rm b}(V,\lambda)$ is identical.
\end{proof} Observe that for $\lambda \leq \lambda'$, \begin{eqnarray} \nonumber g^\vee(V,\lambda) - g^\wedge(V,\lambda') &=& \int_0^{W(\lambda)} \, \left[F^\vee(V,\lambda-W^{-1}(t))- F^\wedge(V,\lambda'-W^{-1}(t))\right]\,dt\\ &&- \int_{W(\lambda)}^{W(\lambda')} \, F^\wedge(V,\lambda'-W^{-1}(t))\,dt, \label{eq:gdiff1} \end{eqnarray} and \begin{eqnarray} \nonumber g^\vee(V,\lambda') - g^\wedge(V,\lambda) &=& \int_0^{W(\lambda)} \, \left[F^\vee(V,\lambda'-W^{-1}(t))- F^\wedge(V,\lambda-W^{-1}(t))\right]\,dt\\ &&+ \int_{W(\lambda)}^{W(\lambda')} \, F^\vee(V,\lambda'-W^{-1}(t))\,dt. \label{eq:gdiff2} \end{eqnarray} These identities suggest that if the difference of the distribution functions $F^\vee(V,\lambda)-F^\wedge(V,\lambda)$ can be controlled, then one can control the difference $g^\vee(V,\lambda)-g^\wedge(V,\lambda)$. Indeed we have \begin{proposition} \label{prop:gdiffmeas} Assumption \ref{ass:V} implies Assumption \ref{ass:pot}. Therefore, the strong Bohr's formula (Theorem \ref{thm:Bohr}) holds under Assumptions \ref{ass:Weyl}, \ref{ass:sa}, and \ref{ass:V}. \end{proposition} \begin{proof} Let $h(V,\lambda) = \frac{F^\vee(V,\lambda)}{F^\wedge(V,\lambda)}-1 \geq 0$. Then \begin{eqnarray} 0 &\leq & g^\vee(V,\lambda) - g^\wedge(V,\lambda) \\ &=& \int_0^{W(\lambda)} \, h(V,\lambda-W^{-1}(t))\, F^\wedge(V,\lambda-W^{-1}(t))\,dt\\ &\leq&\left( \sup_{0\leq t \leq W(\lambda)} h(V,\lambda-W^{-1}(t))\right) \int_0^{W(\lambda)} \, F^\wedge(V,\lambda-W^{-1}(t))\,dt\\ &=& \left(\sup_{0\leq s\leq \lambda} h(V,s) \right) g^\wedge(V,\lambda). \end{eqnarray} Assumption \ref{ass:V} implies that $\sup_{0\leq s\leq \lambda} h(V,s) = o(1)$ as $\lambda\to\infty$, so we obtain Assumption \ref{ass:pot}. \end{proof} \subsection{A weak version of Bohr's formula} Motivated by \cites{StrichartzSHO,OSt,OS-CT}, we also give a weak version of Bohr's formula as follows. \begin{theorem}[Weak Bohr's formula] \label{thm:Bohr-w} Let $\lambda^*=\lambda^*(\lambda)>\lambda$ satisfy $\lambda^*-\lambda=o(\lambda)$ as $\lambda\to\infty$, and suppose that \begin{equation} \label{eq:Fasymp} \frac{F^\vee(V,\lambda)}{F^\wedge(V,\lambda^*)} = 1+o(1) \quad\text{and}\quad \frac{F^\wedge(V,\lambda)}{F^\vee(V,\lambda^*)} = 1+o(1). \end{equation} Then, under Assumptions \ref{ass:Weyl} and \ref{ass:sa}, we have \begin{equation} \label{eq:Bohr-w} \lim_{\lambda\to\infty} \frac{N(K_\infty,\mu_\infty,V,\lambda)}{g(V,\lambda^*)} =1. \end{equation} \end{theorem} The statement of Theorem \ref{thm:Bohr-w} is reminiscent of comparing two nondecreasing jump functions with closely spaced jumps: when the jumps asymptotically coincide, the difference of the corresponding measures tends to zero in the sense of weak convergence. \begin{proof} By mimicking the proof of Theorem \ref{thm:BohrError} we get \begin{equation} \left|\frac{N(K_\infty,\mu_\infty,V,\lambda)}{g(V,\lambda^*)} -1\right| \leq \max_{{\rm b}\in\{\wedge,\vee\}} \left|\frac{g^{\tilde{\rm b}}(V,\lambda)}{g^{\rm b}(V,\lambda^*)} -1+ \frac{\mathcal{R}^{\tilde{\rm b}}(V,\lambda)}{g^{\rm b}(V,\lambda^*)}\right|. \end{equation} Since $\lambda^* -\lambda = o(\lambda)$ as $\lambda\to\infty$, the ratio $\mathcal{R}^{\tilde{\rm b}}(V,\lambda)/g^{\rm b}(V,\lambda^*)$ is $o(1)$. So the key estimate is to show that $g^{\tilde{\rm b}}(V,\lambda)/g^{\rm b}(V,\lambda^*) = 1+o(1)$ for \emph{both} ${\rm b}=\wedge$ \emph{and} ${\rm b}=\vee$.
(This contrasts with the case $\lambda^*=\lambda$ treated in Proposition \ref{prop:gdiffmeas}, where a one-sided bound suffices because $g^\vee(V,\lambda) - g^\wedge(V,\lambda) \geq 0$.) From (\ref{eq:gdiff1}) we find \begin{eqnarray} |g^\vee(V,\lambda)-g^\wedge(V,\lambda^*)| &\leq& W(\lambda) \left(\sup_{0\leq s \leq \lambda} [F^\vee(V,s) - F^\wedge(V,s+\lambda^*-\lambda)]\right)\\ && + [W(\lambda^*)-W(\lambda)]\left(\sup_{0\leq s \leq \lambda^*-\lambda} F^\wedge(V,s)\right). \end{eqnarray} According to the first condition in (\ref{eq:Fasymp}), $\sup_{0\leq s \leq \lambda} [F^\vee(V,s) - F^\wedge(V,s+\lambda^*-\lambda)] = o(F^\vee(V,\lambda))$ and $\sup_{0\leq s \leq \lambda^*-\lambda} F^\wedge(V,s) = o(F^\wedge(V,\lambda^*))$. This implies that the absolute value on the RHS of the analog of (\ref{eq:bohrcomp}) displayed above is $o(1)$ for ${\rm b} =\wedge$. Similarly, the second condition in (\ref{eq:Fasymp}) implies that the corresponding absolute value is $o(1)$ for ${\rm b}= \vee$ as well. \end{proof} \section{Laplace transform (heat kernel trace) version of Bohr's formula} \label{sec:LaplaceBohr} In this section we impose Assumption \ref{ass:sa} and either of Assumptions \ref{ass:SpecHKT} and \ref{ass:WeylHKT}, and prove Theorems \ref{thm:specdimHKT} and \ref{thm:LaplaceBohr}. Let us introduce the traces \begin{eqnarray} \mathcal{L}(K_\infty,\mu_\infty,V,t) &:=& {\rm Tr}_{K_\infty}\{e^{-t(-\Delta+V)}\},\\ \mathcal{L}^\wedge(K_\infty,\mu_\infty,V,t) &:=& {\rm Tr}_{K_\infty}\{e^{-t(-\Delta^\wedge+V^\wedge)}\},\\ \mathcal{L}^\vee(K_\infty,\mu_\infty,V,t) &:=& {\rm Tr}_{K_\infty}\{e^{-t(-\Delta^\vee+V^\vee)}\}. \end{eqnarray} Observe that $ \mathcal{L}^\wedge(K_\infty,\mu_\infty,V,t) \leq\mathcal{L}(K_\infty,\mu_\infty,V,t) \leq \mathcal{L}^\vee(K_\infty,\mu_\infty,V,t) $. Since $L^2(K_\infty,\mu_\infty) = \bigoplus_\alpha L^2(K_\alpha,\mu_\alpha)$, it follows that \begin{equation} \label{eq:HKTsum} \mathcal{L}^{\rm b}(K_\infty,\mu_\infty, V, t) = \sum_\alpha \mathcal{L}^{\rm b}(K_\alpha, \mu_\alpha, V, t),\quad {\rm b} \in \{\wedge, \vee\}, \end{equation} where \begin{equation} \mathcal{L}^{\rm b}(K_\alpha, \mu_\alpha, V,t) = {\rm Tr}_{K_\alpha}\left\{e^{-t(-\Delta^{\rm b}+V^{\rm b})}\right\}= \mathcal{L}^{\rm b}(K_\alpha, \mu_\alpha, t) \cdot \exp\left(-t \left.V^{\rm b}\right|_{K_\alpha}\right). \end{equation} Let \begin{equation} \mathcal{F}(V,t) :=\int\limits_{K_\infty} e^{-tV(x)}\, \mu_\infty(dx). \end{equation} Similarly define \begin{equation} \mathcal{F}^{\rm b}(V,t) := \int\limits_{K_\infty} e^{-tV^{\rm b}(x)}\, \mu_\infty(dx) \end{equation} for ${\rm b}\in \{\wedge, \vee\}$. Observe that $\mathcal{F}^\wedge(V,t) \leq \mathcal{F}(V,t) \leq \mathcal{F}^\vee(V,t)$, and that Assumption \ref{ass:sa} ensures that $\mathcal{F}(V,t)$ and $\mathcal{F}^{\rm b}(V,t)$ are finite for $t>0$. \begin{proof}[Proof of Theorem \ref{thm:specdimHKT}] Let us first note that \begin{equation} \label{eq:LF} \frac{\mathcal{L}^\wedge(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}^\vee(V,t)}\leq \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}(V,t)} \leq \frac{\mathcal{L}^\vee(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}^\wedge(V,t)}.
\end{equation} By (\ref{eq:HKTsum}), \begin{align} \mathcal{L}^{\rm b}(K_\infty,\mu_\infty,V,t) &= \sum_\alpha \mathcal{L}^{\rm b}(K_\alpha, \mu_\alpha, V,t) \\ &= \sum_\alpha \mathcal{L}^{\rm b}(K_\alpha, \mu_\alpha, t) \cdot \exp\left(-t \left.V^{\rm b}\right|_{K_\alpha}\right) \\ &= \sum_\alpha \mathcal{L}^{\rm b}(K_\alpha, \mu_\alpha, t) \cdot \int\limits_{K_\alpha} e^{-t V^{\rm b}(x)} \, \mu_\alpha(dx). \label{eq:Lb} \end{align} Under Assumption \ref{ass:SpecHKT}, there exist positive constants $C_1$ and $C_2$ such that for all sufficiently small $t$, \begin{eqnarray} t^{d_s/2} \mathcal{L}^\vee(K_\infty,\mu_\infty,V,t) &\leq& C_1\sum_\alpha \int_{K_\alpha} \,e^{-tV^\vee(x)}\,\mu_\alpha (dx) = C_1 \mathcal{F}^\vee(V,t), \\ t^{d_s/2} \mathcal{L}^\wedge(K_\infty,\mu_\infty,V,t) &\geq& C_2 \sum_\alpha \int_{K_\alpha}\, e^{-tV^\wedge(x)} \, \mu_\alpha(dx) = C_2 \mathcal{F}^\wedge(V,t). \end{eqnarray} Meanwhile, by Fubini's theorem and by the nonnegativity of $V$, we have \begin{eqnarray} \mathcal{F}^{\rm b}(V,t) &=& \int_0^\infty \mu_\infty\left(\{x\in K_\infty: e^{-tV^{\rm b}(x)} \geq s\} \right)\,ds\\ &=& \int_{-\infty}^\infty \, \mu_\infty\left(\{x\in K_\infty: e^{-tV^{\rm b}(x)} \geq e^{-t\lambda}\} \right) te^{-t\lambda} \,d\lambda \\ &=& \int_0^\infty \, \mu_\infty\left(\{x \in K_\infty: V^{\rm b}(x) \leq \lambda\}\right) te^{-t\lambda} \, d\lambda\\ &=& \int_0^\infty \, F^{\rm b}(V,\lambda) t e^{-t\lambda}\,d\lambda. \end{eqnarray} Hence under Assumption \ref{ass:V-s}, there exists $\lambda_0>0$ such that \begin{eqnarray} \label{eFVt} \mathcal{F}^\vee(V,t) &=& \int_0^\infty F^\vee(V,\lambda) t e^{-t\lambda}\,d\lambda\\ &=& \int_0^{\lambda_0} F^\vee(V,\lambda)t e^{-t\lambda}\,d\lambda + \int_{\lambda_0}^\infty F^\vee(V,\lambda)te^{-t\lambda}\,d\lambda\\ &\leq& F^\vee(V,\lambda_0) \int_0^{\lambda_0} t e^{-t\lambda}\,d\lambda + C \int_{\lambda_0}^\infty F^\wedge\left(V,\frac{\lambda}{2}\right) t e^{-t\lambda} \, d\lambda\\ &=& F^\vee(V,\lambda_0) \left(1-e^{-t\lambda_0}\right) + C \int_{\lambda_0/2}^\infty \, F^\wedge(V,\lambda) \cdot 2te^{-2t\lambda}\,d\lambda\\ &\leq& F^\vee(V,\lambda_0) \left(1-e^{-t\lambda_0}\right) + C \mathcal{F}^\wedge(V,2t). \end{eqnarray} Therefore \begin{equation} \frac{\mathcal{F}^\vee(V,t)}{\mathcal{F}^\wedge(V,t)} \leq \frac{\mathcal{F}^\vee(V,t)}{\mathcal{F}^\wedge(V,2t)} \leq C+ F^\vee(V,\lambda_0) \frac{1-e^{-t\lambda_0}}{\mathcal{F}^\wedge(V,2t)}. \end{equation} Since $\lim_{t\downarrow 0} (1-e^{-t\lambda_0})=0$ and $t\mapsto \mathcal{F}^\wedge(V,2t)$ is monotone decreasing, it follows that \begin{equation} \varlimsup_{t\downarrow 0} \frac{\mathcal{F}^\vee(V,t)}{\mathcal{F}^\wedge(V,t)} \leq C+F^\vee(V,\lambda_0) \varlimsup_{t\downarrow 0} \frac{1-e^{-t\lambda_0}}{\mathcal{F}^\wedge(V,2t)} = C. \end{equation} Putting everything together we find \begin{eqnarray} \varlimsup_{t\downarrow 0} \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}(V,t)} &\leq& \left(\varlimsup_{t\downarrow 0} \frac{t^{d_s/2} \mathcal{L}^\vee(K_\infty,\mu_\infty,V,t)}{\mathcal{F}^\vee(V,t)}\right) \left( \varlimsup_{t\downarrow 0} \frac{\mathcal{F}^\vee(V,t)}{\mathcal{F}^\wedge(V,t)}\right), \\ \varliminf_{t\downarrow 0} \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}(V,t)} &\geq& \left(\varliminf_{t\downarrow 0} \frac{t^{d_s/2} \mathcal{L}^\wedge(K_\infty,\mu_\infty,V,t)}{\mathcal{F}^\wedge(V,t)}\right) \left( \varliminf_{t\downarrow 0} \frac{\mathcal{F}^\wedge(V,t)}{\mathcal{F}^\vee(V,t)}\right).
\end{eqnarray} Thus \begin{equation} \label{eq:LFfinal} C_2 C^{-1}\leq \varliminf_{t\downarrow 0} \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}(V,t)}\leq \varlimsup_{t\downarrow 0} \frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2} \mathcal{F}(V,t)} \leq C_1 C. \end{equation} Finally, regarding the spectral dimension of $-\Delta+V$, we note that $F(V,\lambda)=\Theta(\lambda^\beta)_{\lambda\to\infty}$ is equivalent to $\mathcal{F}(V,t) = \Theta(t^{-\beta})_{t\downarrow 0}$, an easy consequence of the Laplace transform. Thus according to (\ref{eq:LFfinal}), $\mathcal{L}(K_\infty,\mu_\infty,V,t) \asymp t^{-(d_s+2\beta)/2}$ as $t\downarrow 0$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:LaplaceBohr}] Combining (\ref{eq:Lb}) with Assumption \ref{ass:WeylHKT} we obtain \begin{equation} \label{eq:LF2} \mathcal{L}^{\rm b}(K_\infty,\mu_\infty,V,t) = t^{-d_s/2}\left[ H(t)+\rho^{\rm b}(t)\right] \mathcal{F}^{\rm b}(V,t) \end{equation} which, together with (\ref{eq:LF}) and after some manipulation, implies \begin{equation} \label{eq:LBerror} \left|\frac{\mathcal{L}(K_\infty,\mu_\infty,V,t)}{t^{-d_s/2}H(t) \mathcal{F}(V,t)}-1\right| \leq \max_{{\rm b}\in\{\wedge,\vee\}} \left| \left(1+\frac{\rho^{\tilde{\rm b}}(t)}{H(t)}\right) \frac{\mathcal{F}^{\tilde{\rm b}}(V,t)}{\mathcal{F}^{\rm b}(V,t)} -1\right|. \end{equation} Next, by Assumption \ref{ass:V} and \eqref{eFVt}, \begin{eqnarray} \mathcal{F}^\vee(V,t) &=& \mathcal{F}^\wedge(V,t) + \int_0^\infty \, \left(F^\vee(V,\lambda) -F^\wedge(V,\lambda)\right) \,te^{-t\lambda}\,d\lambda\\ &=& \mathcal{F}^\wedge(V,t) + o\left(\mathcal{F}^\wedge(V,t)\right)_{t\downarrow 0} \end{eqnarray} because for any $\delta>0$ there is $\lambda_\delta>0$ such that $ F^\vee(V,\lambda) -F^\wedge(V,\lambda) <\delta F^\vee(V,\lambda) $ when $\lambda>\lambda_\delta$. Thus $\frac{\mathcal{F}^\vee(V,t) }{\mathcal{F}^\wedge(V,t)} = 1+o(1)$ as $t\downarrow 0$. Hence (\ref{eq:LBerror}) implies (\ref{eq:LaplaceBohr}). \end{proof} \section{Examples} \label{sec:examples} In this section we provide several examples, in both classical and fractal settings, in which the existence of the spectral dimension of $-\Delta+V$ can be proved and, moreover, Bohr's formula holds. \subsection{Euclidean spaces} One would be remiss not to mention the most classical setting, which is the Schr\"odinger operator $-\Delta+V$ on $\mathbb{R}^d$, where $\Delta =\sum_{i=1}^d (\partial^2/\partial x_i^2)$ and $V$ is an unbounded potential. See \emph{e.g.} \cite{ReedSimonVol4}*{\S XIII.15}. The key idea is to partition $\mathbb{R}^d$ (the unbounded space $K_\infty$) into cubes of side $1$ (the cells $K_\alpha$). Then by applying the machinery outlined in the previous section, one arrives at the following well-known result: if $V(x)=\Theta(|x|^\beta)$ as $|x|\to \infty$, then Bohr's formula holds, and the spectral dimension of this Schr\"odinger operator is $d(1+2/\beta)$. In dimension $1$ Bohr's formula can be established for logarithmically diverging potentials. The proof method involves solving a Sturm-Liouville ODE, which appears rather particular to one-dimensional settings, and may be difficult to generalize to higher dimensions. We refer the reader to \cites{NaimarkSolomyak, HoltMolchanov} for more details. \subsection{Infinite fractafolds based on nested fractals} \label{sec:fractafold} Nested fractals were first introduced by Lindstr\o m \cite{Lindstrom}. The typical examples to keep in mind are the Sierpinski gaskets $SG(n)$, where $n$ denotes the length scale of the subdivision.
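As a quick worked illustration (we record here the standard values for the harmonic structure on the gasket, stated ahead of Proposition \ref{prop:ECFspecKigami} below): for $K=SG(2)$ with the uniform self-similar measure, one has $N=3$ cells with measure weights $\mu_i=1/3$ and resistance scaling factors $r_i=3/5$, so that $\gamma_i=\sqrt{r_i\mu_i}=5^{-1/2}$, and the equation $\sum_{i=1}^{3}\gamma_i^d=3\cdot 5^{-d/2}=1$ gives
\begin{equation*}
d_s=\frac{2\log 3}{\log 5}\approx 1.365.
\end{equation*}
Moreover $\log\gamma_i=-\frac{1}{2}\log 5$ for each $i$, so this is the lattice case with period $T=\frac{1}{2}\log 5$, in agreement with the period of the function $G$ in the Kigami--Lapidus asymptotics quoted in Section \ref{sec:mainresults}.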
There are also higher-dimensional analogs of $SG$. On nested fractals, and more generally post-critically finite (p.c.f.) fractals, one can define a notion of the Laplacian (or a Brownian motion). See \emph{e.g.} \cite{BarlowStFlour}*{\S 2$\sim$\S4}, \cite{Kigami}*{Chapters 2$\sim$3}, \cite{StrBook}*{Chapters 1$\sim$2} for the relevant definitions and results. We will need just one result on the spectral asymptotics of the Laplacian on p.c.f. fractals with regular harmonic structure. \begin{proposition}[\cite{KigamiLapidus}*{Theorem 2.4},~\cite{Kigami}*{Theorem 4.1.5}] \label{prop:ECFspecKigami} Let $K$ be a p.c.f. fractal, and $\mu$ be a self-similar measure on $K$ with weights $(\mu_i)_{i=1}^N$. Assume that $\mu_i r_i <1$ for all $i\in \{1,2,\cdots, N\}$, where $(r_i)_{i=1}^N$ are the resistance scaling factors of the harmonic structure on $K$. Let $d_s$ be the unique number $d$ which satisfies $\sum_{i=1}^N \gamma_i^d=1$, where $\gamma_i = \sqrt{r_i \mu_i}$. Let $N^\wedge(K,\mu,\lambda)$ (resp. $N^\vee(K,\mu,\lambda)$) be the eigenvalue counting function for the Laplacian on $L^2(K,\mu)$ with Dirichlet (resp. Neumann) boundary condition. Then for ${\rm b}\in \{\wedge,\vee\}$, \begin{equation} 0<\varliminf_{\lambda \to\infty} \lambda^{-d_s/2} N^{\rm b}(K,\mu,\lambda) \leq \varlimsup_{\lambda\to\infty} \lambda^{-d_s/2} N^{\rm b}(K,\mu,\lambda) <\infty. \end{equation} Moreover: \begin{enumerate}[label=(\alph*)] \item Non-lattice case: If $\sum_{i=1}^N \mathbb{Z}\log \gamma_i$ is a dense subgroup of $\mathbb{R}$, then the limit $$\lim_{\lambda\to\infty} \lambda^{-d_s/2} N^{\rm b}(K,\mu,\lambda)$$ exists, and is independent of the boundary conditions. \item Lattice case: If $\sum_{i=1}^N \mathbb{Z} \log \gamma_i$ is a discrete subgroup of $\mathbb{R}$, let $T>0$ be its generator. Then as $\lambda\to\infty$, \begin{equation} N^{\rm b}(K,\mu,\lambda) = \left[G\left(\frac{\log \lambda}{2}\right) + o(1)\right] \lambda^{d_s/2}, \end{equation} where $G$ is a right-continuous, $T$-periodic function with $0< \inf G \leq \sup G <\infty$, and is independent of the boundary conditions. \end{enumerate} \end{proposition} We remark that the proof of Proposition \ref{prop:ECFspecKigami} relies upon Feller's renewal theorem~\cite{FellerVol2}. Our goal is to state Bohr's formula for the Schr\"odinger operator on a class of unbounded fractal spaces. As mentioned in Section \ref{sec:Intro}, one such candidate is a fractafold based on a nested fractal. We shall consider two types: \begin{enumerate}[label=(\roman*)] \item The infinite blow-ups of a nested fractal in $\mathbb{R}^d$, $d\geq 2$. (See Figure \ref{fig:InfiniteBlowup}). \item Infinite \textbf{periodic fractafolds} $K_\infty$ based on the planar Sierpinski gasket $K=SG(n)$, equipped with a metric $R$. (In practice, $R$ is taken to be the resistance metric, but the results to follow do not depend explicitly on the specifics of $R$.) The examples we will consider can be constructed by first defining an infinite ``cell graph'' $\Gamma$, and then replacing each vertex of $\Gamma$ by a copy of $K$, and gluing the $K_\alpha$ in a consistent way. With this construction the metric $R$ on $K$ extends to a metric $R$ on $K_\infty$ in the obvious way. For instance, one can construct the ladder periodic fractafold (Figure \ref{fig:LadderFractafold}) and the hexagonal periodic fractafold (Figure \ref{fig:HexagonalFractafold}). \end{enumerate} To establish Bohr's formula, we will need information about the measure growth of balls in $K_\infty$.
For the infinite blow-ups of a nested fractal, it is straightforward to verify that for all $x\in K_\infty$ and $r>0$, \begin{equation} \label{eq:blowupmeas} c r^{d_{f,R}} \leq \mu_\infty(B_R(x,r)) \leq C r^{d_{f,R}}, \end{equation} where $d_{f,R}$ is the Hausdorff dimension of $K$ with respect to the metric $R$ on $K$. For the periodic fractafolds a slightly different analysis is needed. Let $d_\Gamma$ be the graph metric of the cell graph $\Gamma$, and $B_{d_\Gamma}(z,r) := \{y\in \Gamma : d_\Gamma(z,y) \leq r\}$ be the ball of radius $r$ centered at $z$ in $\Gamma$. Since $K_\infty$ is constructed by replacing each vertex of $\Gamma$ by a copy of $K$, we can estimate the volume growth of balls in $K_\infty$ using the cardinality of balls in $\Gamma$. \begin{proposition} \label{prop:cover} Let $D(K) := {\rm diam}_R(K)$. For all $x\in K_\infty$ and all $r> 2D(K)$, \begin{equation} \label{eq:cover} \#B_{d_\Gamma}\left(\psi(x),~r-2D(K)\right) \leq \mu_\infty(B_R(x,r)) \leq \#B_{d_\Gamma}\left(\psi(x), ~r+2D(K)\right), \end{equation} where $\#$ denotes cardinality (recall that each cell has $\mu$-measure $1$), and $\psi(x)$ is the vertex in $\Gamma$ which is replaced by the cell $K_\alpha \ni x$ in the periodic fractafold construction. \end{proposition} \begin{proof} Let $\eta(r) := r/D(K)>2$. Then $B_R(x,r) = B_R(x,\eta(r) D(K))$ and \begin{equation} B_R(y,(\lfloor\eta(r)\rfloor-1) D(K)) \subseteq B_R(x,\eta(r) D(K)) \subseteq B_R(y,(\lceil\eta(r)\rceil+1) D(K)) \end{equation} for any $y$ which lies in the same cell $K_\alpha$ as $x$. Here $\lfloor \alpha \rfloor$ (resp. $\lceil \alpha \rceil$) denotes the largest integer less than or equal to $\alpha$ (resp. the smallest integer greater than or equal to $\alpha$). It is then straightforward to show that for such $y$, $B_R(y,(\lceil\eta(r)\rceil+1)D(K))$ is covered by the union of all cells $K_\alpha$ which are at most distance $\lceil\eta(r)\rceil+1$ from $\psi(y)$ in the graph metric. Since each cell has $\mu$-measure $1$, the $\mu$-measure of the cover is equal to the cardinality of $B_{d_\Gamma}(\psi(x),\lceil \eta(r)\rceil+1)$. The upper bound in (\ref{eq:cover}) follows by overestimating $\lceil\eta(r)\rceil+1$ by $\eta(r)+2$. The proof of the lower bound is similar. \end{proof} We can now state the main result of this subsection. \begin{proposition} \label{prop:SGV} On the infinite blow-up of a nested fractal (resp. the ladder periodic fractafold based on $SG(n)$, the hexagonal periodic fractafold based on $SG(n)$), Bohr's formula holds for potentials of the form $V(x) \sim R(0,x)^\beta$ for any $\beta>0$. In particular, the spectral dimension of $-\Delta+ V$ is $d_s(V) = d_s+ 2 (d_h/\beta)$, where $d_h$ equals the Hausdorff dimension of the nested fractal with respect to the metric $R$ (resp. $1$, $2$). \end{proposition} \begin{proof} Since each $K_\alpha$ which makes up the cellular decomposition of $K_\infty$ is isometric to the same nested fractal $K$, by Proposition \ref{prop:ECFspecKigami} we have that Assumption \ref{ass:Weyl} holds. Because the cells $K_\alpha$ intersect at boundary points in a natural way, the Dirichlet form $(\mathcal{E}, \mathcal{F})$ corresponding to the Laplacian $-\Delta$ on $L^2(K_\infty,\mu_\infty)$ can be built up as a sum of the constituent Dirichlet forms on $L^2(K_\alpha,\mu_\alpha)$. Hence one can show self-adjointness of $-\Delta$ in the sense of quadratic forms. Since the potential $V(x)$ grows unboundedly as $d(0,x)\to +\infty$, Assumption \ref{ass:sa} is verified, and Proposition \ref{prop:pp} implies that $(-\Delta+V)$ has pure point spectrum.
To verify Assumption \ref{ass:V-s}, one can confirm that there exist constants $c$ and $C$ such that for all $x\in K_\infty$ and all sufficiently large $r>0$, \begin{equation} \label{eq:meas} cr^{d_h}\leq \mu_\infty(B_R(x,r)) \leq Cr^{d_h}. \end{equation} For the infinite blow-up (\ref{eq:meas}) follows from (\ref{eq:blowupmeas}) with $d_h = d_{h,R}$. As for the periodic fractafolds, note that the corresponding cell graphs $\Gamma$ satisfy \begin{equation} | B_\Gamma(z,r) |\asymp r^{d_{h,\Gamma}} \quad \text{for all}~z \in \Gamma~\text{and}~r>0, \end{equation} where $d_{h,\Gamma}$ equals $1$ (resp. $2$) in the case of the ladder fractafold (resp. the hexagonal fractafold). Combining this with Proposition \ref{prop:cover} we get (\ref{eq:meas}) with $d_h=d_{h,\Gamma}$. In all cases we then find \begin{equation} F(\lambda) = \mu_\infty(\{x: V(x)<\lambda\}) \simeq \mu_\infty(B_R(0,\lambda^{1/\beta})) \simeq \lambda^{d_h/\beta}, \end{equation} and the same asymptotics holds for $F^\wedge(\lambda)$ and $F^\vee(\lambda)$. Finally, to see that Assumption \ref{ass:V} holds, we use the inequality \begin{equation} [V^\wedge(x) - V^\vee(x)] \leq [R(0,x)+1]^\beta - [R(0,x)-1]^\beta \leq C_\beta [R(0,x)+1]^{\beta-1}, \end{equation} where $C_\beta$ is an explicit constant depending on $\beta$ only. Observe that the RHS is uniformly bounded from above by a constant multiple of $\lambda^{1-\beta^{-1}}$ for all $x$ in the set $\{x:V^\vee(x)\leq \lambda\}$. Consequently $F^\vee(V,\lambda)\leq F^\wedge(V,\lambda+C'\lambda^{1-\beta^{-1}})$ for a suitable constant $C'>0$, which yields Assumption \ref{ass:V}. \end{proof} \subsection{Infinite fractal fields based on nested fractals}\label{fr-fi} There is another notion of an unbounded space based on compact fractals, known as a \textbf{fractal field}. The name originated from Hambly and Kumagai \cite{HamKumFields}, who were interested in studying fractal-penetrating Brownian motions. Fractal fields differ from the fractafolds of the previous subsection in that we do not require neighborhoods of (junction) points in $K_\infty$ to be homeomorphic to $K$. First consider the triangular lattice finitely ramified Sierpinski fractal field introduced in \cite{StrTep}*{\S 6}, see Figure \ref{fig:TriFractalfield}. Notice that this fractal field admits a cellular decomposition whereby cells adjoin at boundary vertices of $SG(n)$. As a result, the proof strategy of Proposition \ref{prop:SGV} applies in this setting. \begin{proposition} On the triangular lattice finitely ramified fractal field based on $SG(n)$, Bohr's formula holds for potentials of the form $V(x) \sim R(0,x)^\beta$ for any $\beta>0$. In particular, the spectral dimension of $-\Delta+ V$ is $d_s(V) = d_s+ (4/\beta)$. \end{proposition} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{SGDoubleLadder.pdf} \caption{The double-ladder fractal field based on $SG(2)$.} \label{fig:doubleladder} \end{figure} Next we consider the double-ladder fractal field based on $SG(2)$, see Figure \ref{fig:doubleladder}. An important difference here is that pairs of $SG(2)$ cells may adjoin either at a point or along a boundary segment, which makes this space infinitely ramified. In order to analyze this example using our methods, one needs to understand the eigenvalue problem for the Laplacian on $SG(2)_\Omega:=SG(2) \setminus \partial\Omega$, where $\partial\Omega$ consists of the top vertex and the bottom edge of $SG(2)$. This was investigated by Qiu \cite{Qiu}, whose result we quote below.
\begin{proposition}[\cite{Qiu}*{Theorem 3.10}] \label{prop:Qiu} Let $N^{\rm b}_\Omega(\lambda)$ be the eigenvalue counting function for the Laplacian on $SG(2)_\Omega$ with boundary condition ${\rm b} \in \{\wedge, \vee\}$ on the top vertex and the bottom edge of $SG(2)$. Then there exists a c\`{a}dl\`{a}g $\log 5$-periodic function $G:\mathbb{R}\to\mathbb{R}$, with $0<\inf G <\sup G <\infty$ and independent of ${\rm b}$, such that \begin{align} N^{\rm b}_\Omega(\lambda)= G(\log \lambda) \lambda^{\log 3/\log 5} + O\left(\lambda^{\log 2/\log 5} \log\lambda\right) \end{align} as $\lambda\to\infty$. \end{proposition} Using this result we can show the validity of Bohr's formula in this setting. \begin{proposition} On the double-ladder fractal field based on $SG(2)$, Bohr's formula holds for potentials of the form $V(x) \sim R(0,x)^\beta$ for any $\beta>0$. \end{proposition} \begin{proof} The one nontrivial assumption to check is Assumption \ref{ass:Weyl}, which is furnished by Proposition \ref{prop:Qiu}. The other two assumptions, \ref{ass:sa} and \ref{ass:V}, are verified easily. The result then follows from Theorem \ref{thm:Bohrmain}. \end{proof} There are some obvious extensions of the double-ladder fractal field example, which we leave to the reader. An interesting open problem is to study the applicability of Bohr's formula to the original fractal field (or gasket tiling) in \cite{HamKumFields}, shown in Figure \ref{fig:SGFractalField}. We note that heat kernel estimates are established on this fractal field \cite{HamKumFields}*{Theorem 1.1}. However, to the best of the authors' knowledge, there is no corresponding Weyl asymptotic (or heat kernel trace asymptotic) estimate which is sharp to an $o(1)$ remainder. In particular, the fact that the $SG$ cells adjoin along edges rather than at points makes the analysis more delicate. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{SGFractalField.pdf} \caption{The $SG(2)$ fractal field (or gasket tiling) considered in \cite{HamKumFields}.} \label{fig:SGFractalField} \end{figure} \subsection{Infinite Sierpinski carpets} Let $F\subset \mathbb{R}^d~(d\geq 2)$ be a generalized Sierpinski carpet in the sense of \cite{BarlowBass,BBKT}, and let $F_n$ be its $n$th-level approximation. Following \cite{BarlowBass}, we call $\tilde{F}=\bigcup_{n\in \mathbb{N}_0} \ell^n F_n$ the \emph{pre-carpet}, and $F_\infty = \bigcup_{n\in \mathbb{N}_0} \ell^n F$ the \emph{infinite carpet}. The difference between the two is that $\tilde{F}$ is tiled by unit squares and has nonzero Lebesgue measure, whereas $F_\infty$ is tiled by copies of the same Sierpinski carpet $F$ and has zero Lebesgue measure. In both cases, we adopt the Euclidean metric $|\cdot|$ and regard $(K_\infty, \mu_\infty, |\cdot|)$ as the metric measure space, which has volume growth \begin{equation} c_1 r^{d_f} \leq \mu_\infty(B(x,r)) \leq c_2 r^{d_f} \qquad (x\in K_\infty, r>0), \end{equation} where $d_f=(\log m/\log \ell)$ is the Hausdorff dimension of the carpet $F$ with respect to the Euclidean metric ($\ell$ being the length scale and $m$ the number of subcells of $F$). \begin{proposition} Bohr's formula holds on the pre-carpet $\tilde{F}$ with potential $V(x) \sim |x|^\beta$ for any $\beta>0$. In particular, the spectral dimension of $(-\Delta+V)$ on $\tilde{F}$ is $d+2(d_f/\beta)$, where $d$ is the dimension of the ambient space $\mathbb{R}^d$ in which $\tilde{F}$ lies. \end{proposition} The case of the infinite carpet is more nuanced.
The case of the infinite carpet is more nuanced. Hambly \cite{Hamblyspec} and Kajino \cite{Kajinospec} proved that the heat kernel trace of the bare Laplacian on $F$ satisfies Assumption \ref{ass:WeylHKT}, with $H$ a continuous periodic function of $\log t$ (though it is NOT known whether $H$ is non-constant). Kajino \cite{Kajinospec2} further derived the asymptotics of the heat kernel trace to \emph{all} orders of the boundary terms. Note that their results imply that the eigenvalue counting function satisfies the asymptotics $c_1 \lambda^{d_s/2} \leq N^{\rm b}(F,\mu,\lambda) \leq c_2\lambda^{d_s/2}$, but do NOT necessarily imply the sharper estimate, Assumption \ref{ass:Weyl}. As mentioned earlier, this is because the classical techniques of Tauberian theorems cannot be applied here. \begin{proposition} The Laplace transform version of Bohr's formula holds on the infinite carpet $F_\infty$ with potential $V(x) \sim |x|^\beta$ for any $\beta>0$. In particular, the spectral dimension of $(-\Delta+V)$ on $F_\infty$ is $d_s + 2(d_f/\beta)$, where $d_s$ is the spectral dimension of the bare Laplacian on $F$. \end{proposition} \begin{proof} By \cite{Hamblyspec}*{Theorem 1.1} and \cite{Kajinospec}*{Theorem 1.2}, Assumption \ref{ass:WeylHKT} is satisfied on the constituent Sierpinski carpet $F$. In fact, \cite{Kajinospec2}*{Theorem 4.10} provides a sharper result of the form \begin{equation} \mathcal{L}^{\rm b}(F,\mu,t) = t^{-d_s/2} H(-\log t) + \sum_{k=1}^d t^{-d_k/d_w} G^{\rm b}_k(-\log t) + O\left(\exp(-c t^{-\frac{1}{d_w-1}})\right) \end{equation} as $t\downarrow 0$, where $H$ and the $G_k^{\rm b}$ are continuous periodic functions, $d_k$ is the Minkowski dimension of $F \cap \{x=(x_1,\cdots, x_d)\in \mathbb{R}^d: x_1=\cdots = x_k =0\}$, and $d_s$ and $d_w$ are respectively the spectral dimension and the walk dimension of $F$. We turn our attention next to the potential term $\mathcal{F}^{\rm b}(V,t)$. It is direct to verify that for any $\beta>0$, \begin{eqnarray} \int\limits_{K_\infty} e^{-t|x|^\beta} \, d\mu_\infty(x) &\leq& \int_0^\infty e^{-t\lambda} \, \frac{d\mu_\infty(\{x: |x|^\beta<\lambda\})}{d\lambda} \,d\lambda \\ &=& \int_0^\infty t e^{-t\lambda} \mu_\infty(B(0,\lambda^{1/\beta})) \, d\lambda \\ &\leq& c_2 t \int_0^\infty e^{-t\lambda} \lambda^{d_f/\beta}\, d\lambda \leq C_2(d_f, \beta) t^{-d_f/\beta}, \end{eqnarray} and similarly \begin{equation} \int\limits_{K_\infty} e^{-t|x|^\beta} \, d\mu_\infty(x) \geq C_1(d_f,\beta) t^{-d_f/\beta}. \end{equation} Using the inequality $e^s \geq 1+s$ for $s \in \mathbb{R}$, we find \begin{eqnarray*} \left|e^{-t|x-y|^\beta}-e^{-t|x-z|^\beta}\right| &\leq& \max\left(e^{-t|x-y|^\beta},e^{-t|x-z|^\beta} \right)\cdot t\left||x-y|^\beta-|x-z|^\beta\right|\\ &\leq& C_\beta \cdot t \cdot \max\left(e^{-t|x-y|^\beta},e^{-t|x-z|^\beta} \right)\cdot |y-z|. \end{eqnarray*} It follows that as $t\downarrow 0$, \begin{equation} \mathcal{F}^\vee(V,t)-\mathcal{F}^\wedge(V,t) \leq C \cdot O(t \mathcal{F}(V,t)) = o(\mathcal{F}(V,t)), \end{equation} leading to the error estimate \begin{equation} \left|\frac{\mathcal{L}(F_\infty,\mu_\infty,V,t)}{t^{-d_s/2} H(-\log t) \mathcal{F}(V,t)}-1\right| = O\left(t^{(d_0-d_1)/d_w}\right) \end{equation} as $t\downarrow 0$. The Laplace transform version of Bohr's formula then follows. \end{proof} \subsubsection*{Acknowledgements} We thank Luke Rogers for providing many constructive comments regarding this work.
\begin{bibdiv} \begin{biblist} \bib{A}{article}{ author={Akkermans, Eric}, title={Statistical mechanics and quantum fields on fractals}, conference={ title={Fractal geometry and dynamical systems in pure and applied mathematics. II. Fractals in applied mathematics}, }, book={ series={Contemp. Math.}, volume={601}, publisher={Amer. Math. Soc., Providence, RI}, }, date={2013}, pages={1--21}, review={\MR{3203824}}, doi={10.1090/conm/601/11962}, } \bib{s2}{article}{ author={Allan, Adam}, author={Barany, Michael}, author={Strichartz, Robert S.}, title={Spectral operators on the Sierpinski gasket. I}, journal={Complex Var. Elliptic Equ.}, volume={54}, date={2009}, number={6}, pages={521--543}, issn={1747-6933}, review={\MR{2537254 (2010f:28011)}}, doi={10.1080/17476930802272978}, } \bib{w2}{article}{ author={U. Andrews}, author={G. Bonik}, author={J. P. Chen}, author={R. W. Martin}, author={A. Teplyaev}, title={Wave equation on one-dimensional fractals with spectral decimation and the complex dynamics of polynomials}, journal={arXiv:1505.05855}, date={2015}, } \bib{eigen}{article}{ author={Bajorin, N.}, author={Chen, T.}, author={Dagan, A.}, author={Emmons, C.}, author={Hussein, M.}, author={Khalil, M.}, author={Mody, P.}, author={Steinhurst, B.}, author={Teplyaev, A.}, title={Vibration modes of $3n$-gaskets and other fractals}, journal={J. Phys. A}, volume={41}, date={2008}, number={1}, pages={015101, 21}, issn={1751-8113}, review={\MR{2450694 (2010a:28008)}}, doi={10.1088/1751-8113/41/1/015101}, } \bib{BarlowStFlour}{article}{ author={Barlow, Martin T.}, title={Diffusions on fractals}, conference={ title={Lectures on probability theory and statistics}, address={Saint-Flour}, date={1995}, }, book={ series={Lecture Notes in Math.}, volume={1690}, publisher={Springer, Berlin}, }, date={1998}, pages={1--121}, review={\MR{1668115 (2000a:60148)}}, doi={10.1007/BFb0092537}, } \bib{BarlowBass}{article}{ author={Barlow, Martin T.}, author={Bass, Richard F.}, title={Brownian motion and harmonic analysis on Sierpinski carpets}, journal={Canad. J. Math.}, volume={51}, date={1999}, number={4}, pages={673--744}, issn={0008-414X}, review={\MR{1701339 (2000i:60083)}}, doi={10.4153/CJM-1999-031-4}, } \bib{BBKT}{article}{ author={Barlow, Martin T.}, author={Bass, Richard F.}, author={Kumagai, Takashi}, author={Teplyaev, Alexander}, title={Uniqueness of Brownian motion on Sierpi\'nski carpets}, journal={J. Eur. Math. Soc. (JEMS)}, volume={12}, date={2010}, number={3}, pages={655--701}, issn={1435-9855}, review={\MR{2639315 (2011i:60146)}}, } \bib{BP}{article}{ author={Barlow, Martin T.}, author={Perkins, Edwin A.}, title={Brownian motion on the Sierpi\'nski gasket}, journal={Probab. Theory Related Fields}, volume={79}, date={1988}, number={4}, pages={543--623}, issn={0178-8051}, review={\MR{966175 (89g:60241)}}, doi={10.1007/BF00318785}, } \bib{g3}{article}{ author={Begue, Matthew}, author={Kelleher, Daniel J.}, author={Nelson, Aaron}, author={Panzo, Hugo}, author={Pellico, Ryan}, author={Teplyaev, Alexander}, title={Random walks on barycentric subdivisions and the Strichartz hexacarpet}, journal={Exp. Math.}, volume={21}, date={2012}, number={4}, pages={402--417}, issn={1058-6458}, review={\MR{3004256}}, } \bib{ph-b}{article}{ author={Bellissard, J.}, title={Renormalization group analysis and quasicrystals}, conference={ title={Ideas and methods in quantum and statistical physics}, address={Oslo}, date={1988}, }, book={ publisher={Cambridge Univ. 
Press, Cambridge}, }, date={1992}, pages={118--148}, review={\MR{1190523 (93k:81045)}}, } \bib{BST}{article}{ author={Ben-Bassat, Oren}, author={Strichartz, Robert S.}, author={Teplyaev, Alexander}, title={What is not in the domain of the Laplacian on Sierpinski gasket type fractals}, journal={J. Funct. Anal.}, volume={166}, date={1999}, number={2}, pages={197--217}, issn={0022-1236}, review={\MR{1707752 (2001e:31016)}}, doi={10.1006/jfan.1999.3431}, } \bib{BrCa}{article}{ author={Brossard, Jean}, author={Carmona, Ren{\'e}}, title={Can one hear the dimension of a fractal?}, journal={Comm. Math. Phys.}, volume={104}, date={1986}, number={1}, pages={103--122}, issn={0010-3616}, review={\MR{834484 (87h:58218)}}, } \bib{w3}{article}{ author={J. F.-C. Chan}, author={S.-M. Nga}, author={A. Teplyaev}, title={One-dimensional wave equations defined by fractal Laplacians}, journal={to appear in J. d'Analyse M., arXiv:1406.0207}, date={2015}, } \bib{s0}{article}{ author={Coletta, Kevin}, author={Dias, Kealey}, author={Strichartz, Robert S.}, title={Numerical analysis on the Sierpinski gasket, with applications to Schr\"odinger equations, wave equation, and Gibbs' phenomenon}, journal={Fractals}, volume={12}, date={2004}, number={4}, pages={413--449}, issn={0218-348X}, review={\MR{2109985 (2005k:65245)}}, doi={10.1142/S0218348X04002689}, } \bib{s1}{article}{ author={Constantin, Sarah}, author={Strichartz, Robert S.}, author={Wheeler, Miles}, title={Analysis of the Laplacian and spectral operators on the Vicsek set}, journal={Commun. Pure Appl. Anal.}, volume={10}, date={2011}, number={1}, pages={1--44}, issn={1534-0392}, review={\MR{2746525 (2012b:28012)}}, doi={10.3934/cpaa.2011.10.1}, } \bib{dgv}{article}{ author={Derfel, Gregory}, author={Grabner, Peter J.}, author={Vogl, Fritz}, title={Laplace operators on fractals and related functional equations}, journal={J. Phys. A}, volume={45}, date={2012}, number={46}, pages={463001, 34}, issn={1751-8113}, review={\MR{2993415}}, doi={10.1088/1751-8113/45/46/463001}, } \bib{g1}{article}{ author={Drenning, Shawn}, author={Strichartz, Robert S.}, title={Spectral decimation on Hambly's homogeneous hierarchical gaskets}, journal={Illinois J. Math.}, volume={53}, date={2009}, number={3}, pages={915--937 (2010)}, issn={0019-2082}, review={\MR{2727362 (2012b:28015)}}, } \bib{Dunne}{article}{ author={Dunne, Gerald V.}, title={Heat kernels and zeta functions on fractals}, journal={J. Phys. A}, volume={45}, date={2012}, number={37}, pages={374016, 22}, issn={1751-8113}, review={\MR{2970533}}, doi={10.1088/1751-8113/45/37/374016}, } \bib{StrichartzSHO}{article}{ author={Fan, Edward}, author={Khandker, Zuhair}, author={Strichartz, Robert S.}, title={Harmonic oscillators on infinite Sierpinski gaskets}, journal={Comm. Math. Phys.}, volume={287}, date={2009}, number={1}, pages={351--382}, issn={0010-3616}, review={\MR{2480752 (2011f:35059)}}, doi={10.1007/s00220-008-0633-z}, } \bib{FellerVol2}{book}{ author={Feller, William}, title={An introduction to probability theory and its applications. Vol. II. }, series={Second edition}, publisher={John Wiley \& Sons, Inc., New York-London-Sydney}, date={1971}, pages={xxiv+669}, review={\MR{0270403 (42 \#5292)}}, } \bib{FlVa}{article}{ author={Fleckinger-Pell{\'e}, Jacqueline}, author={Vassiliev, Dmitri G.}, title={An example of a two-term asymptotics for the ``counting function'' of a fractal drum}, journal={Trans. Amer. Math. 
Soc.}, volume={337}, date={1993}, number={1}, pages={99--116}, issn={0002-9947}, review={\MR{1176086 (93g:58147)}}, doi={10.2307/2154311}, } \bib{GriNad1}{article}{ author={Grigor'yan, A.}, author={Nadirashvili, N.}, title={Negative eigenvalues of two-dimensional Schr{\"o}dinger operators}, journal={arXiv:1112.4986}, date={2014}, } \bib{GriNad2}{article}{ author={Grigor'yan, Alexander}, author={Nadirashvili, Nikolai}, title={Negative Eigenvalues of Two-Dimensional Schr\"odinger Operators}, journal={Arch. Ration. Mech. Anal.}, volume={217}, date={2015}, number={3}, pages={975--1028}, issn={0003-9527}, review={\MR{3356993}}, doi={10.1007/s00205-015-0848-z}, } \bib{Hamblyspec}{article}{ author={Hambly, B. M.}, title={Asymptotics for functions associated with heat flow on the Sierpinski carpet}, journal={Canad. J. Math.}, volume={63}, date={2011}, number={1}, pages={153--180}, issn={0008-414X}, review={\MR{2779136 (2012f:35548)}}, doi={10.4153/CJM-2010-079-7}, } \bib{HamKumFields}{article}{ author={Hambly, B. M.}, author={Kumagai, T.}, title={Diffusion processes on fractal fields: heat kernel estimates and large deviations}, journal={Probab. Theory Related Fields}, volume={127}, date={2003}, number={3}, pages={305--352}, issn={0178-8051}, review={\MR{2018919 (2004k:60219)}}, doi={10.1007/s00440-003-0284-0}, } \bib{Hare}{article}{ author={Hare, Kathryn E.}, author={Steinhurst, Benjamin A.}, author={Teplyaev, Alexander}, author={Zhou, Denglin}, title={Disconnected Julia sets and gaps in the spectrum of Laplacians on symmetric finitely ramified fractals}, journal={Math. Res. Lett.}, volume={19}, date={2012}, number={3}, pages={537--553}, issn={1073-2780}, review={\MR{2998138}}, doi={10.4310/MRL.2012.v19.n3.a3}, } \bib{ph1}{article}{ author={Hinz, Michael}, author={Teplyaev, Alexander}, title={Dirac and magnetic Schr\"odinger operators on fractals}, journal={J. Funct. Anal.}, volume={265}, date={2013}, number={11}, pages={2830--2854}, issn={0022-1236}, review={\MR{3096991}}, doi={10.1016/j.jfa.2013.07.021}, } \bib{HoltMolchanov}{article}{ author={Holt, J.}, author={Molchanov, S.}, title={On the Bohr formula for the one-dimensional Schr{\"o}dinger operator with increasing potential}, journal={Appl. Anal.}, volume={84}, date={2005}, number={6}, pages={555--569}, issn={0003-6811}, review={\MR{2151668 (2006g:34201)}}, doi={10.1080/00036810500047899}, } \bib{i1}{article}{ author={Ionescu, Marius}, author={Rogers, Luke G.}, title={Complex powers of the Laplacian on affine nested fractals as Calder\'on-Zygmund operators}, journal={Commun. Pure Appl. Anal.}, volume={13}, date={2014}, number={6}, pages={2155--2175}, issn={1534-0392}, review={\MR{3248383}}, doi={10.3934/cpaa.2014.13.2155}, } \bib{i2}{article}{ author={Ionescu, Marius}, author={Rogers, Luke G.}, author={Strichartz, Robert S.}, title={Pseudo-differential operators on fractals and other metric measure spaces}, journal={Rev. Mat. Iberoam.}, volume={29}, date={2013}, number={4}, pages={1159--1190}, issn={0213-2230}, review={\MR{3148599}}, doi={10.4171/RMI/752}, } \bib{i3}{article}{ author={Ionescu, Marius}, author={Pearse, Erin P. J.}, author={Rogers, Luke G.}, author={Ruan, Huo-Jun}, author={Strichartz, Robert S.}, title={The resolvent kernel for PCF self-similar fractals}, journal={Trans. Amer. Math. Soc.}, volume={362}, date={2010}, number={8}, pages={4451--4479}, issn={0002-9947}, review={\MR{2608413 (2011b:28017)}}, doi={10.1090/S0002-9947-10-05098-1}, } \bib{Iv}{article}{ author={Ivri{\u\i}, V. 
Ja.}, title={The second term of the spectral asymptotics for a Laplace-Beltrami operator on manifolds with boundary}, language={Russian, English translation: Functional Anal. Appl. {\bf14} (1980), 98--106.}, journal={Funktsional. Anal. i Prilozhen.}, volume={14}, date={1980}, number={2}, pages={25--34}, issn={0374-1990}, review={\MR{575202 (82m:58057)}}, } \bib{KigamiLapidus}{article}{ author={Kigami, Jun}, author={Lapidus, Michel L.}, title={Weyl's problem for the spectral distribution of Laplacians on p.c.f.\ self-similar fractals}, journal={Comm. Math. Phys.}, volume={158}, date={1993}, number={1}, pages={93--125}, issn={0010-3616}, review={\MR{1243717 (94m:58225)}}, } \bib{KS}{book}{ author={Kostju{\v{c}}enko, A. G.}, author={Sargsyan, I. S.}, title={Raspredelenie sobstvennykh znachenii}, language={Russian}, note={Samosopryazhennye obyknovennye differentsialnye operatory. [Selfadjoint ordinary differential operators]}, publisher={``Nauka'', Moscow}, date={1979}, pages={400}, review={\MR{560900 (81j:34034)}}, } \bib{LaPo}{article}{ author={Lapidus, Michel L.}, author={Pomerance, Carl}, title={Counterexamples to the modified Weyl-Berry conjecture on fractal drums}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={119}, date={1996}, number={1}, pages={167--178}, issn={0305-0041}, review={\MR{1356166 (96h:58175)}}, doi={10.1017/S0305004100074053}, } \bib{LevitanSargsjan}{book}{ author={Levitan, B. M.}, author={Sargsyan, I. S.}, title={Introduction to spectral theory: selfadjoint ordinary differential operators}, note={Translated from the Russian by Amiel Feinstein; Translations of Mathematical Monographs, Vol. 39}, publisher={American Mathematical Society, Providence, R.I.}, date={1975}, pages={xi+525}, review={\MR{0369797 (51 \#6026)}}, } \bib{NaimarkSolomyak}{article}{ author={Naimark, K.}, author={Solomyak, M.}, title={Regular and pathological eigenvalue behavior for the equation $-\lambda u''=Vu$ on the semiaxis}, journal={J. Funct. Anal.}, volume={151}, date={1997}, number={2}, pages={504--530}, issn={0022-1236}, review={\MR{1491550 (99b:34039)}}, doi={10.1006/jfan.1997.3149}, } \bib{Kajinospec}{article}{ author={Kajino, Naotaka}, title={Spectral asymptotics for Laplacians on self-similar sets}, journal={J. Funct. Anal.}, volume={258}, date={2010}, number={4}, pages={1310--1360}, issn={0022-1236}, review={\MR{2565841 (2011j:31010)}}, doi={10.1016/j.jfa.2009.11.001}, } \bib{Kajinospec2}{article}{ author={Kajino, Naotaka}, title={Log-periodic asymptotic expansion of the spectral partition function for self-similar sets}, journal={Comm. Math. Phys.}, volume={328}, date={2014}, number={3}, pages={1341--1370}, issn={0010-3616}, review={\MR{3201226}}, doi={10.1007/s00220-014-1922-3}, } \bib{Kigami}{book}{ author={Kigami, Jun}, title={Analysis on fractals}, series={Cambridge Tracts in Mathematics}, volume={143}, publisher={Cambridge University Press, Cambridge}, date={2001}, pages={viii+226}, isbn={0-521-79321-1}, review={\MR{1840042 (2002c:28015)}}, doi={10.1017/CBO9780511470943}, } \bib{w4}{article}{ author={Kusuoka, Shigeo}, author={Zhou, Xian Yin}, title={Waves on fractal-like manifolds and effective energy propagation}, journal={Probab. Theory Related Fields}, volume={110}, date={1998}, number={4}, pages={473--495}, issn={0178-8051}, review={\MR{1626955 (99i:58148)}}, doi={10.1007/s004400050156}, } \bib{Lindstrom}{article}{ author={Lindstr{\o}m, Tom}, title={Brownian motion on nested fractals}, journal={Mem. Amer. Math. 
Soc.}, volume={83}, date={1990}, number={420}, pages={iv+128}, issn={0065-9266}, review={\MR{988082 (90k:60157)}}, doi={10.1090/memo/0420}, } \bib{o1}{article}{ author={Okoudjou, Kasso A.}, author={Rogers, Luke G.}, author={Strichartz, Robert S.}, title={Szeg\"o limit theorems on the Sierpi\'nski gasket}, journal={J. Fourier Anal. Appl.}, volume={16}, date={2010}, number={3}, pages={434--447}, issn={1069-5869}, review={\MR{2643590 (2011c:35380)}}, doi={10.1007/s00041-009-9102-0}, } \bib{OSt}{article}{ author={Okoudjou, Kasso A.}, author={Strichartz, Robert S.}, title={Weak uncertainty principles on fractals}, journal={J. Fourier Anal. Appl.}, volume={11}, date={2005}, number={3}, pages={315--331}, issn={1069-5869}, review={\MR{2167172 (2006f:28011)}}, doi={10.1007/s00041-005-4032-y}, } \bib{OS-CT}{article}{ author={Okoudjou, Kasso A.}, author={Saloff-Coste, Laurent}, author={Teplyaev, Alexander}, title={Weak uncertainty principle for fractals, graphs and metric measure spaces}, journal={Trans. Amer. Math. Soc.}, volume={360}, date={2008}, number={7}, pages={3857--3873}, issn={0002-9947}, review={\MR{2386249 (2008k:42121)}}, doi={10.1090/S0002-9947-08-04472-3}, } \bib{Qiu}{article}{ author={Qiu, Hua}, title={Exact spectrum of the Laplacian on a domain in the Sierpinski gasket}, journal={arXiv:1206.1381v2}, date={2012}, } \bib{q}{article}{ author={Quint, J.-F.}, title={Harmonic analysis on the Pascal graph}, journal={J. Funct. Anal.}, volume={256}, date={2009}, number={10}, pages={3409--3460}, issn={0022-1236}, review={\MR{2504530 (2010e:37053)}}, doi={10.1016/j.jfa.2009.01.011}, } \bib{ReedSimonVol4}{book}{ author={Reed, Michael}, author={Simon, Barry}, title={Methods of modern mathematical physics. IV. Analysis of operators}, publisher={Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London}, date={1978}, pages={xv+396}, isbn={0-12-585004-2}, review={\MR{0493421 (58 \#12429c)}}, } \bib{r1}{article}{ author={Rogers, Luke G.}, title={Estimates for the resolvent kernel of the Laplacian on p.c.f. self-similar fractals and blowups}, journal={Trans. Amer. Math. Soc.}, volume={364}, date={2012}, number={3}, pages={1633--1685}, issn={0002-9947}, review={\MR{2869187}}, doi={10.1090/S0002-9947-2011-05551-0}, } \bib{RT}{article}{ author={Rogers, Luke G.}, author={Teplyaev, Alexander}, title={Laplacians on the basilica Julia sets}, journal={Commun. Pure Appl. Anal.}, volume={9}, date={2010}, number={1}, pages={211--231}, issn={1534-0392}, review={\MR{2556753 (2011c:28024)}}, doi={10.3934/cpaa.2010.9.211}, } \bib{RST}{article}{ author={Rogers, Luke G.}, author={Strichartz, Robert S.}, author={Teplyaev, Alexander}, title={Smooth bumps, a Borel theorem and partitions of smooth functions on P.C.F.\ fractals}, journal={Trans. Amer. Math. Soc.}, volume={361}, date={2009}, number={4}, pages={1765--1790}, issn={0002-9947}, review={\MR{2465816 (2010f:28020)}}, doi={10.1090/S0002-9947-08-04772-7}, } \bib{RozenblumSolomyak}{article}{ author={Rozenblum, G.}, author={Solomyak, M.}, title={On spectral estimates for the Schr\"odinger operators in global dimension 2}, journal={Algebra i Analiz}, volume={25}, date={2013}, number={3}, pages={185--199}, issn={0234-0852}, translation={ journal={St. Petersburg Math. 
J.}, volume={25}, date={2014}, number={3}, pages={495--505}, issn={1061-0022}, }, review={\MR{3184603}}, doi={10.1090/S1061-0022-2014-01301-5}, } \bib{Shargorodsky}{article}{ author={Shargorodsky, Eugene}, title={On negative eigenvalues of two-dimensional Schr\"odinger operators}, journal={Proc. Lond. Math. Soc. (3)}, volume={108}, date={2014}, number={2}, pages={441--483}, issn={0024-6115}, review={\MR{3166359}}, doi={10.1112/plms/pdt036}, } \bib{Ben}{article}{ author={Steinhurst, Benjamin A.}, author={Teplyaev, Alexander}, title={Existence of a meromorphic extension of spectral zeta functions on fractals}, journal={Lett. Math. Phys.}, volume={103}, date={2013}, number={12}, pages={1377--1388}, issn={0377-9017}, review={\MR{3117253}}, doi={10.1007/s11005-013-0649-y}, } \bib{StrFractafold}{article}{ author={Strichartz, Robert S.}, title={Fractafolds based on the Sierpi\'nski gasket and their spectra}, journal={Trans. Amer. Math. Soc.}, volume={355}, date={2003}, number={10}, pages={4019--4043 (electronic)}, issn={0002-9947}, review={\MR{1990573 (2004b:28013)}}, doi={10.1090/S0002-9947-03-03171-4}, } \bib{StrichartzFractalsInTheLarge}{article}{ author={Strichartz, Robert S.}, title={Fractals in the large}, journal={Canad. J. Math.}, volume={50}, date={1998}, number={3}, pages={638--657}, issn={0008-414X}, review={\MR{1629847 (99f:28015)}}, doi={10.4153/CJM-1998-036-5}, } \bib{g2}{article}{ author={Strichartz, Robert S.}, title={Laplacians on fractals with spectral gaps have nicer Fourier series}, journal={Math. Res. Lett.}, volume={12}, date={2005}, number={2-3}, pages={269--274}, issn={1073-2780}, review={\MR{2150883 (2006e:28013)}}, doi={10.4310/MRL.2005.v12.n2.a12}, } \bib{StrBook}{book}{ author={Strichartz, Robert S.}, title={Differential equations on fractals}, note={A tutorial}, publisher={Princeton University Press, Princeton, NJ}, date={2006}, pages={xvi+169}, isbn={978-0-691-12731-6}, isbn={0-691-12731-X}, review={\MR{2246975 (2007f:35003)}}, } \bib{w1}{article}{ author={Strichartz, Robert S.}, title={Waves are recurrent on noncompact fractals}, journal={J. Fourier Anal. Appl.}, volume={16}, date={2010}, number={1}, pages={148--154}, issn={1069-5869}, review={\MR{2587585 (2011a:35540)}}, doi={10.1007/s00041-009-9103-z}, } \bib{StrTep}{article}{ author={Strichartz, Robert S.}, author={Teplyaev, Alexander}, title={Spectral analysis on infinite Sierpi\'nski fractafolds}, journal={J. Anal. Math.}, volume={116}, date={2012}, pages={255--297}, issn={0021-7670}, review={\MR{2892621}}, doi={10.1007/s11854-012-0007-5}, } \bib{T}{article}{ author={Teplyaev, Alexander}, title={Spectral analysis on infinite Sierpi\'nski gaskets}, journal={J. Funct. Anal.}, volume={159}, date={1998}, number={2}, pages={537--567}, issn={0022-1236}, review={\MR{1658094 (99j:35153)}}, doi={10.1006/jfan.1998.3297}, } \bib{Tzeta}{article}{ author={Teplyaev, Alexander}, title={Spectral zeta functions of fractals and the complex dynamics of polynomials}, journal={Trans. Amer. Math. Soc.}, volume={359}, date={2007}, number={9}, pages={4339--4358 (electronic)}, issn={0002-9947}, review={\MR{2309188 (2008j:11119)}}, doi={10.1090/S0002-9947-07-04150-5}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} According to classical theory, black holes do not emit particles. In 1974, however, there was a paradigm shift in black hole physics when Stephen Hawking, using the techniques of quantum field theory in curved space time, discovered that black holes can emit particles \cite{haw1, haw2}. Since then, much attention has been paid to the study of Hawking radiation of black holes. Bekenstein \cite{bek1, bek2} and Bardeen et al. \cite{bar} developed black hole thermodynamics and showed that the black hole entropy is proportional to the area of the event horizon. Damour and Ruffini \cite{dam} proposed a method for studying Hawking radiation using the tortoise coordinate transformation, where the gravitational field is taken to be independent of time. The Dirac equation and Maxwell's electromagnetic field equations expressed in the Newman-Penrose formalism, as well as the Klein-Gordon equation, are transformed into a standard wave equation at the event horizon by using the tortoise coordinate transformation and the Newman-Penrose spin coefficients. Separating the wave equation, the incoming and the outgoing wave equations are derived, and the corresponding surface gravities and Hawking temperatures at the event horizons of black holes are obtained. Following this method, many authors have obtained interesting results in \cite{wu1,wu2,ibo, lanj, ibuab, ibung}. Further, various methods have been proposed for the study of Hawking radiation as a tunneling process caused by quantum fluctuations across the event horizon of the black hole. According to Kraus and Wilczek \cite{kra1, kra2} and Parikh and Wilczek \cite{par}, the potential barrier is created by the outgoing particle itself, and the imaginary part of the radial action is subsequently derived in accordance with the semiclassical approximation. Zhang and Zhao \cite{zha1,zha2,zha3} further extended this method to a charged rotating black hole and showed that the spectrum is no longer purely thermal. Angheben et al. \cite{ang} put forward a slightly different method as an extension of Padmanabhan's works \cite{sri}. In this method, Hawking radiation is investigated as tunneling for the extremal rotating black hole using the relativistic Hamilton-Jacobi equation, the WKB approximation and the Feynman prescription, without considering the back reaction of the emitted particles. Hawking radiation in different types of black holes using the Hamilton-Jacobi equation has also been discussed in \cite{yan,wan,chen,ibu1}. Kerner and Mann \cite{ker} investigated the Hawking radiation of black holes by applying the Dirac equation, Pauli sigma matrices, the Feynman prescription and the WKB approximation. Choosing appropriate gamma matrices from the line element and inserting a suitable wave function into the Dirac equation, the action of the radiant particle is obtained. Refs. \cite{ban1, ban2} discussed the Hawking temperature as a tunneling process by applying the Hamilton-Jacobi equation beyond the semiclassical approximation. Using the first law of black hole mechanics, a correction to the Bekenstein-Hawking area law containing a logarithmic term is obtained. Applying their method, many interesting results have been derived in \cite{maj,ibu2,akh2,dou}. Kruglov \cite{kru1, kru2} investigated the tunneling of vector boson particles by applying the WKB approximation, the Feynman prescription and the Hamilton-Jacobi ansatz to the Proca equation. Using the Proca equation, the tunneling of vector boson particles in more complicated black holes has also been discussed in \cite{ger,sak1,sak2,ibu3,gonz,ovg1}. Ref.
\cite{ovg2} investigated the Hawking temperatures for different black holes by applying the topological method. Akhmedova et al. \cite{akh1} investigated Hawking radiation using a quasi-classical calculation and showed that such calculations are associated with a temporal contribution, for which no analogue exists in the quantum mechanical tunneling problem. Quantum gravity theories such as string theory and loop quantum gravity theory, as well as Gedanken experiments \cite{tow, ven}, indicate the existence of a minimum observable length at the Planck scale, which leads to the generalised uncertainty principle (GUP). Based on the GUP, the tunneling of scalar particles and fermions near the event horizon of black holes has been studied in \cite{ablu,ibu4,kene, waji}. It is believed that Lorentz symmetry, which is the foundation of general relativity, may be broken at high energies. Therefore, many researchers \cite{liu1,zha4,liu2,liu3} have proposed gravity models on the basis of Lorentz symmetry violation theory. Refs. \cite{cas, nas} have introduced ether-like vectors $u^{\alpha}$ for the study of Lorentz symmetry violation of the Dirac equation in flat space time. Refs. \cite{hor,lin,kos1,kos2,blu,jac,onik} have studied the modified Hawking radiation and entropy of black holes by using the Hamilton-Jacobi equation with Lorentz violation theory in curved space time. The objective of this paper is to study the modified Hawking temperatures of scalar particles crossing the event horizons of the Riemann space time and the BTZ black hole by considering the Lorentz violation theory in curved space time. The paper also intends to study the modified Bekenstein-Hawking entropy of the Schwarzschild-de Sitter black hole. For this purpose, we utilise the modified Hamilton-Jacobi equation in Lorentz violation theory. This paper is organised as follows. In Section 2, the modified Hamilton-Jacobi equation is revisited in Lorentz violation theory. In Section 3, the Hawking temperature of the Riemann space time is modified by using the modified Hamilton-Jacobi equation induced by Lorentz violation theory in curved space time. The modified Hawking temperatures for the uncharged and charged non-rotating BTZ black holes are studied by using the modified Hamilton-Jacobi equation in Section 4 and Section 5 respectively. In Section 6, the change in Bekenstein-Hawking entropy is studied by using the modified Hamilton-Jacobi equation. In Section 7, the discussion and conclusion are given. \section{Revisiting the modified Hamilton-Jacobi equation } According to high energy quantum gravity theories, Lorentz invariance is expected to be corrected at the Planck scale. Following Ref. \cite{gom}, the Lagrangian $\mathcal{L}$ in the Lorentz violating equation of scalar particles with mass $m$ in the presence of ether-like vectors $u^\alpha$ is \begin{eqnarray} \mathcal{L}=\frac{1}{2}[\partial_\mu \psi \partial^\mu\psi+\lambda(u^\alpha\partial_\alpha \psi)^2+m^2\psi^2], \end{eqnarray} where $\lambda$ is a proportionality constant and the ether-like vectors $u^\alpha$ satisfy \begin{eqnarray} u^\alpha u_\alpha={\rm const}. \end{eqnarray} Therefore, in accordance with the principle of least action, the corrected scalar field equation in flat space time is \begin{eqnarray} \partial_\mu \partial^\mu \psi +\lambda u^\alpha u^\beta \partial_\alpha \partial_\beta \psi +m^2\psi=0.
\end{eqnarray} Further, the action of scalar particles based on the Lorentz violating scalar field theory is \begin{eqnarray} \mathbf{S}=\int d^4x\sqrt{-g}\frac{1}{2}[\partial_\mu \psi \partial^\mu\psi+\lambda(u^\alpha\partial_\alpha \psi)^2+m^2\psi^2]. \end{eqnarray} The condition given in Eq. (2) holds true for the vector $u^\alpha$ in the above equation. Taking into account the electromagnetic potential, the action of scalar particles is \begin{eqnarray} \mathbf{S}&=&\int d^4x\sqrt{-g}\frac{1}{2}\{(\partial_\mu +ieA_\mu)\psi(\partial^\mu-ieg^{\mu \nu}A_\nu)\psi\cr && +\lambda u^\alpha(\partial_\alpha+ieA_\alpha)\psi\, u^\beta(\partial_\beta-ieA_\beta)\psi+m^2\psi^2\}. \end{eqnarray} The scalar field equation under Lorentz violation in curved space time can be written from the above equation as \begin{eqnarray} -\frac{1}{\sqrt{-g}}(\partial_\mu-ieA_\mu)[\sqrt{-g}(g^{\mu\nu}+\lambda u^\mu u^\nu)(\partial_\nu-ieA_\nu)\psi]+m^2\psi=0, \end{eqnarray} where $A_\mu$ is the electromagnetic potential of the black hole and $e$ is the charge of the particle. Eq. (6) is the modified form of the Klein-Gordon equation in curved space time induced by Lorentz violation theory. Setting the proportionality constant $\lambda$ to zero, we recover the original Klein-Gordon equation coupled to an electromagnetic potential. In order to obtain the modified Hamilton-Jacobi equation in curved space time, we can write the wave function $\psi$ of Eq. (6) as \begin{eqnarray} \psi=\psi_0 e^{\frac{i}{\hbar}S}, \end{eqnarray} where $S(t,r,\theta,\phi)$ is the Hamilton principal function of the scalar particles and $\hbar$ is the reduced Planck constant. Substituting Eq. (7) in Eq. (6) and neglecting the higher order terms in $\hbar$ in accordance with the semiclassical approximation, we get \begin{eqnarray} (g^{\mu\nu}+\lambda u^\mu u^\nu)(\partial_\mu S-eA_\mu)(\partial_\nu S-eA_\nu)+m^2=0. \end{eqnarray} Eq. (8) is the modified form of the Hamilton-Jacobi equation of scalar field particles in curved space time induced by Lorentz violation. This will be used to investigate the modified Hawking temperatures and the change in Bekenstein-Hawking entropy in the later sections. \section{Hawking temperature of Riemann space time for scalar particle} The line element of the static Riemann space time, as given in Ref. \cite{ch1}, is \begin{eqnarray} ds^2&=&-a^2dt^2+b^2dx^2+c^2dy^2+d^2dz^2, \end{eqnarray} where $a,b,c$ and $d$ are functions of $(x,y,z)$, and the event horizon of Eq. (9) is located at $x=\xi$. The contravariant and covariant components of the Riemann space time, as given in Ref. \cite{ch2}, can be written as \begin{eqnarray} g_{00}&=&-q^2(x-\xi)=-a^2 ,\cr g^{11}&=&p^2(x,y,z)(x-\xi)=\frac{1}{b^2},\cr g^{22}&=&\theta,\cr g^{33}&=&\phi, \end{eqnarray} where $q^2, p^2, \theta$ and $\phi$ are arbitrary functions which are nonzero and nonsingular at the event horizon. According to Ref. \cite{ch2}, the surface gravity and the Hawking temperature near the event horizon of the Riemann space time are calculated as \begin{eqnarray} \kappa=\lim_{g_{00} \to 0}\frac{1}{2}\sqrt{-\frac{g^{11}}{g_{00}}}\frac{\partial g_{00}}{\partial x}=\frac{1}{2}p(\xi)q(\xi) \end{eqnarray} and \begin{eqnarray} T_0=\frac{p(\xi)q(\xi)}{4\pi} \end{eqnarray} respectively.
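As an illustrative check (ours, not drawn from Refs. \cite{ch1, ch2}), the Schwarzschild solution can be cast in the form of Eqs. (9) and (10) by taking $x=r$ and $\xi=2M_s$, where $M_s$ denotes the Schwarzschild mass: \begin{eqnarray*} g_{00}=-\left(1-\frac{2M_s}{r}\right)=-\frac{1}{r}(r-2M_s), \qquad g^{11}=\frac{1}{r}(r-2M_s), \end{eqnarray*} so that $q^2=p^2=\frac{1}{r}$. Eqs. (11) and (12) then give $\kappa=\frac{1}{2}p(\xi)q(\xi)=\frac{1}{4M_s}$ and $T_0=\frac{1}{8\pi M_s}$, the familiar Schwarzschild surface gravity and Hawking temperature.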
To study the modified Hawking temperature, the ether-like vectors $u^\alpha$ are constructed as \begin{eqnarray} u^t&=&\frac{c_t}{\sqrt{-g_{tt}}}=\frac{c_t}{a}, \cr u^x&=&\frac{c_x}{\sqrt{g_{xx}}}=\frac{c_x}{b}, \cr u^y&=&\frac{c_y}{\sqrt{g_{yy}}}=\frac{c_y}{c}, \cr u^z&=&\frac{c_z}{\sqrt{g_{zz}}}=\frac{c_z}{d}, \end{eqnarray} where $c_t,c_x, c_y$ and $c_z$ are arbitrary constants and $u^\alpha$ satisfies the following property \begin{eqnarray} u^\alpha u_\alpha=-c_t^2+c_x^2+c_y^2+c_z^2={\rm constant}. \end{eqnarray} Using Eqs. (9) and (13) in Eq. (8), we get \begin{eqnarray} &&-\frac{1}{a^2}(1-\lambda c_t^2)\left(\frac{\partial S}{\partial t}\right)^2+\frac{1}{b^2}(1+\lambda c_x^2)\left( \frac{\partial S}{\partial x}\right)^2+\frac{1}{c^2}(1+\lambda c_y^2)\left( \frac{\partial S}{\partial y}\right)^2\cr && +\frac{1}{d^2}(1+\lambda c_z^2)\left( \frac{\partial S}{\partial z}\right)^2+\left(\frac{2\lambda c_t c_x}{ab}\frac{\partial S}{\partial t}+\frac{2\lambda c_x c_y}{bc}\frac{\partial S}{\partial y}+\frac{2\lambda c_x c_z}{bd}\frac{\partial S}{\partial z}\right)\frac{\partial S}{\partial x}\cr &&+ \frac{2\lambda c_t c_z}{ad}\frac{\partial S}{\partial t}\frac{\partial S}{\partial z}+\frac{2\lambda c_t c_y}{ac}\frac{\partial S}{\partial t}\frac{\partial S}{\partial y} +\frac{2\lambda c_y c_z}{cd}\frac{\partial S}{\partial y}\frac{\partial S}{\partial z} + m^2=0. \end{eqnarray} To study the modified Hawking temperature of the Riemann space time in the Lorentz violation theory, the action $S$ can be written as \begin{eqnarray} S=-\omega t+R(x)+W(y,z)+\epsilon, \end{eqnarray} where $\omega$ is the energy of the emitted scalar particle and $\epsilon$ is a complex constant. Using Eq. (16) in Eq. (15), we obtain a quadratic equation in $\frac{\partial R}{\partial x}$ as \begin{eqnarray} A_1\left(\frac{\partial R}{\partial x}\right)^2+B_1\frac{\partial R}{\partial x}+ C_1= 0, \end{eqnarray} where \begin{eqnarray} A_1&=&p^2(x-\xi)(1+\lambda c_x^2), \cr B_1&=&-\frac{2p\lambda c_t c_x \omega }{q} + 2\lambda c_x c_y \sqrt{\theta}\sqrt{p^2(x-\xi)}\frac{\partial W}{\partial y}+ 2\lambda c_x c_z \sqrt{p^2(x-\xi)}\sqrt{\phi}\frac{\partial W}{\partial z}, \cr C_1&=& \theta \left(\frac{\partial W}{\partial y}\right)^2+\lambda \theta c_y^2\left(\frac{\partial W}{\partial y}\right)^2-\frac{2\lambda c_t c_y \omega \sqrt{\theta}}{\sqrt{q^2(x-\xi)}}\frac{\partial W}{\partial y}+2\lambda c_y c_z \sqrt{\theta}\sqrt{\phi}\frac{\partial W}{\partial y}\frac{\partial W}{\partial z} \cr && -\frac{1}{q^2(x-\xi)}\omega^2+\phi \left(\frac{\partial W}{\partial z}\right)^2+\frac{\lambda c_t^2}{q^2(x-\xi)}\omega^2-\frac{2\lambda c_t c_z \omega\sqrt{\phi}}{\sqrt{q^2(x-\xi)}}\frac{\partial W}{\partial z} \cr && +\lambda c_z^2 \phi \left(\frac{\partial W}{\partial z}\right)^2+m^2. \end{eqnarray} From Eq. (17), we obtain the two roots as \begin{eqnarray} {\rm R}_\pm=\int \frac{\omega \left(\lambda c_t c_x \pm \sqrt{1-\lambda c_t^2+\lambda c_x^2}\right)}{pq(x-\xi)(1+\lambda c_x^2)} dx, \end{eqnarray} where $R_+$ and $R_-$ correspond to scalar particles moving away from and approaching the black hole respectively. Completing the above integral using the Feynman prescription, we get \begin{eqnarray} {\rm Im}R_\pm =\frac{ \pi\omega\left(\lambda c_t c_x \pm \sqrt{1-\lambda c_t^2 +\lambda c_x^2}\right) }{p(\xi)q(\xi)(1+\lambda c_x^2)}. \end{eqnarray}
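For clarity, we spell out the contour argument behind this step (our elaboration): the integrand in Eq. (19) has a simple pole at the horizon $x=\xi$, which the Feynman prescription skirts through the replacement $x-\xi\rightarrow x-\xi-i\epsilon$, so that \begin{eqnarray*} \frac{1}{x-\xi-i\epsilon}={\rm P}\frac{1}{x-\xi}+i\pi\delta(x-\xi), \end{eqnarray*} where ${\rm P}$ denotes the principal value. Only the $\delta$-function term contributes to the imaginary part of the integral, and carrying out the integration over it yields Eq. (20).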
The probabilities of the ingoing and outgoing particles across the event horizon of the black hole are given by \begin{eqnarray} {\rm Prob(emission)}=\exp (-2{\rm Im}S)=\exp[-2({\rm Im}R_+ + {\rm Im}\epsilon)] \end{eqnarray} and \begin{eqnarray} {\rm Prob(absorption)}=\exp(-2{\rm Im}S)=\exp[-2({\rm Im}R_-+{\rm Im}\epsilon)]. \end{eqnarray} Since the ingoing particle has a $100\%$ chance of entering the black hole in accordance with the semiclassical approximation, it follows that ${\rm Im}\,\epsilon=-{\rm Im}\, R_-$. The tunneling probability of the scalar particles at the event horizon $x=\xi$ of the Riemann space time is \begin{eqnarray} \Gamma=\frac{\rm {Prob(emission)}}{\rm {Prob(absorption)}}=\exp\left( -\frac{ 4\pi\omega\sqrt{1-\lambda c_t^2 +\lambda c_x^2}}{p(\xi)q(\xi)(1+\lambda c_x^2 )}\right). \end{eqnarray} The modified Hawking temperature is given by \begin{eqnarray} T=\frac{p(\xi)q(\xi)}{4\pi \beta}=\frac{T_0}{\beta}, \end{eqnarray} where \begin{eqnarray} \beta=\frac{\sqrt{1-\lambda c_t^2 +\lambda c_x^2}}{1+\lambda c_x^2}. \end{eqnarray} Thus, we obtain the modified Hawking temperature of the Riemann space time due to the Lorentz violation theory in curved space time. Here, $T_0$ is the original Hawking temperature of the Riemann space time. The modified Hawking temperature near the event horizon $x=\xi$ of the Riemann space time increases or decreases if $0<\beta<1$ or $1<\beta<\infty$ respectively. If $\beta$ tends to 1, the actual Hawking temperature is recovered. It is observed that this modified Hawking temperature depends not only on the Lorentz violation parameter $\lambda$ but also on the ether-like vectors $u^\alpha$. \section{Tunneling of uncharged non-rotating BTZ black hole } The line element of the non-rotating uncharged BTZ black hole in spherical coordinates, as given by Ref. \cite{far}, is \begin{eqnarray} ds^2= Z(r)dt^2-Z^{-1}(r)dr^2-r^2d\phi^2, \end{eqnarray} where \begin{eqnarray} Z(r)=-M+\frac{r^2}{l^2}. \end{eqnarray} Eq. (26) has a coordinate singularity at $Z(r)=0$, which locates the event horizon $r_h$, so that the black hole mass $M$ is \begin{eqnarray} M=\frac{r_h^2}{l^2}. \end{eqnarray} According to Ref. \cite{far}, the Hawking temperature of the BTZ black hole is given as \begin{eqnarray} {\rm T_0'}=\frac{\sqrt{M}}{2\pi l}. \end{eqnarray} To study the modified Hawking temperature, the ether-like vectors $u^\alpha$ are constructed as \begin{eqnarray} u^t&=&\frac{c_t}{\sqrt{g_{tt}}}=\frac{c_t}{\sqrt{Z(r)}}, \cr u^r&=&\frac{c_r}{\sqrt{-g_{rr}}}=c_r\sqrt{Z(r)},\cr u^\phi&=&\frac{c_\phi}{\sqrt{-g_{\phi \phi}}}=\frac{c_\phi}{r}, \end{eqnarray} where $c_t,c_r$ and $c_\phi$ are arbitrary constants and $u^\alpha$ satisfies the following property \begin{eqnarray} u^\alpha u_\alpha=c_t^2-c_r^2-c_\phi^2={\rm constant}. \end{eqnarray} Using Eqs. (26) and (30) in Eq. (8), we obtain \begin{eqnarray} &&Z(r)(\lambda c_r^2-1)\left(\frac{\partial S}{\partial r}\right)^2+\frac{1}{Z(r)}(1+\lambda c_t^2)\left( \frac{\partial S}{\partial t}\right)^2+\frac{1}{r^2}(\lambda c_\phi^2-1)\left(\frac{\partial S}{\partial\phi}\right)^2+\cr && 2\lambda c_t c_r \frac{\partial S}{\partial t}\frac{\partial S}{\partial r}+2\lambda c_r c_\phi \frac{\sqrt{Z(r)}}{r}\frac{\partial S}{\partial \phi}\frac{\partial S}{\partial r}+2\lambda c_t c_\phi \frac{1}{\sqrt{Z(r)}r}\frac{\partial S}{\partial t}\frac{\partial S}{\partial \phi}+m^2=0. \end{eqnarray}
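As a consistency check (ours), setting $\lambda=0$ in Eq. (32) gives \begin{eqnarray*} \frac{1}{Z(r)}\left(\frac{\partial S}{\partial t}\right)^2-Z(r)\left(\frac{\partial S}{\partial r}\right)^2-\frac{1}{r^2}\left(\frac{\partial S}{\partial\phi}\right)^2+m^2=0, \end{eqnarray*} which is the standard Hamilton-Jacobi equation in the background (26); near the horizon its radial solution behaves as $\frac{\partial S}{\partial r}\simeq \pm\frac{\omega}{Z(r)}$, and repeating the steps below with this solution reproduces the original temperature (29).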
Since Eq. (32) contains the variables $r$, $t$ and $\phi$, the action $S$ can be written as \begin{equation} S=-\omega t+R(r)+j\phi+\alpha, \end{equation} where $\omega$, $j$ and $\alpha$ are the energy of the emitted particle, the angular momentum and a complex constant respectively. Using Eq. (33) in Eq. (32), we obtain a quadratic equation in $\frac{\partial R}{\partial r}$ as \begin{eqnarray} A_2\left(\frac{\partial R}{\partial r}\right)^2+B_2\frac{\partial R}{\partial r}+ C_2 = 0, \end{eqnarray} where \begin{eqnarray} A_2&=&Z(r)(\lambda c_r^2-1), \cr B_2&=&-2\lambda c_t c_r \omega + 2\lambda c_r c_\phi \frac{\sqrt{Z(r)}}{r}j,\cr C_2&=& \frac{1}{Z(r)}(1+\lambda c_t^2) \omega^2 + \frac{1}{r^2}(\lambda c_\phi^2-1)j^2-\frac{2\lambda c_t c_\phi \omega j}{\sqrt{Z(r)}r}+m^2. \end{eqnarray} The two roots of Eq. (34) are given by \begin{eqnarray} {\rm R}_\pm=\int \frac{\omega \left(\lambda c_t c_r \pm \sqrt{1+\lambda c_t^2-\lambda c_r^2}\right)}{Z(r)(\lambda c_r^2-1)} dr, \end{eqnarray} where $R_+$ and $R_-$ correspond to scalar particles moving away from and approaching the black hole respectively. Solving the above integral, we have \begin{eqnarray} {\rm Im}R_\pm= \frac{\pi \omega l \left(\lambda c_t c_r\pm \sqrt{1+\lambda c_t^2-\lambda c_r^2}\right)}{2\sqrt{M}(\lambda c_r^2-1)}. \end{eqnarray} The tunneling probability of scalar particles across the black hole event horizon is \begin{eqnarray} \Gamma={\rm \frac{Prob(emission)}{Prob(absorption)}}=\exp \left[ -\frac{2\pi \omega l \sqrt{1+\lambda c_t^2-\lambda c_r^2}}{\sqrt{M}(\lambda c_r^2-1)}\right]. \end{eqnarray} The modified Hawking temperature of the uncharged non-rotating BTZ black hole is calculated as \begin{eqnarray} {\rm T}=\frac{\sqrt{M}}{2\pi l \gamma}=\frac{T_0'}{\gamma}, \end{eqnarray} where \begin{eqnarray} \gamma=\frac{\sqrt{1+\lambda c_t^2-\lambda c_r^2}}{\lambda c_r^2-1}. \end{eqnarray} Thus, we obtain the modified Hawking temperature due to the Lorentz violation theory in curved space time. Here, $T_0'$ is the original Hawking temperature of the uncharged non-rotating BTZ black hole. The modified Hawking temperature of the non-rotating BTZ black hole increases or decreases if $0<\gamma<1$ or $1<\gamma<\infty$ respectively. If $\gamma$ tends to 1, the Lorentz violation is cancelled and the actual Hawking temperature is recovered. It is observed that this modified Hawking temperature depends not only on the Lorentz violation parameter $\lambda$ but also on the ether-like vectors $u^\alpha$. \section{Tunneling of charged non-rotating BTZ black hole} The line element of a charged non-rotating BTZ black hole with mass $M$ and charge $Q$ is given in Refs. \cite{ban, mar, clem} as \begin{eqnarray} ds^2= Y(r)dt^2-\frac{1}{Y(r)}dr^2-r^2d\phi^2, \end{eqnarray} where \begin{eqnarray} Y(r)=\left(\frac{r^2}{l^2}-M-\frac{Q^2}{2} \ln \left(\frac{r}{l}\right)\right). \end{eqnarray} The electromagnetic potential of the BTZ black hole is \begin{eqnarray} A_\mu=(A_t,0,0), \end{eqnarray} where $A_t=-Q\ln\left(\frac{r_+}{l}\right)$. The locations of the horizons are given by \begin{eqnarray} \frac{r_\pm^2}{l^2}-M-\frac{Q^2}{2}\ln\frac{r_\pm}{l}=0, \end{eqnarray} where $r_+$ and $r_-$ denote the event and Cauchy horizons respectively. According to Ref. \cite{ejaz}, the Hawking temperature of the charged non-rotating BTZ black hole is given as \begin{eqnarray} T_h=\frac{1}{4\pi}\left(\frac{2r_+}{l^2}-\frac{Q^2}{2r_+}\right). \end{eqnarray}
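This expression can also be read off directly (our remark) from the surface gravity of the metric (41): \begin{eqnarray*} T_h=\frac{Y'(r_+)}{4\pi}, \qquad Y'(r)=\frac{2r}{l^2}-\frac{Q^2}{2r}, \end{eqnarray*} so that evaluating $Y'$ at the event horizon $r_+$ reproduces Eq. (45).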
The heat capacity of the charged non-rotating BTZ black hole is calculated as \begin{eqnarray} C_h&=&T_h\left(\frac{\partial S}{\partial T_h}\right)\cr &=&\frac{8\pi^2l^2T_h}{1+\frac{Q^2l^2}{4r_+^2}}. \end{eqnarray} Near the outer horizon, $Y(r)$ can be approximated as \begin{eqnarray} Y(r)\simeq\left(\frac{2r_+}{l^2}-\frac{Q^2}{2r_+}\right)(r-r_+). \end{eqnarray} To study the modified Hawking temperature, the ether-like vectors $u^\alpha$ are constructed as \begin{eqnarray} u^t&=&\frac{c_a}{\sqrt{g_{tt}}}=\frac{c_a}{\sqrt{Y(r)}} , \cr u^r&=&\frac{c_b}{\sqrt{-g_{rr}}}=c_b \sqrt{Y(r)},\cr u^\phi&=&\frac{c_c}{\sqrt{-g_{\phi \phi}}}=\frac{c_c}{r}, \end{eqnarray} where $c_a$, $c_b$ and $c_c$ are arbitrary constants and $u^\alpha$ satisfies the following property \begin{eqnarray} u^\alpha u_\alpha=c_a^2-c_b^2-c_c^2={\rm constant}. \end{eqnarray} Using Eqs. (41) and (48) in Eq. (8), we obtain \small \begin{eqnarray} &&Y(r)(\lambda c_b^2-1)\left(\frac{\partial S}{\partial r}\right)^2+\frac{1}{Y(r)}(1+\lambda c_a^2)\left(eA_t- \frac{\partial S}{\partial t}\right)^2+\frac{1}{r^2}(\lambda c_c^2-1)\left(\frac{\partial S}{\partial\phi}\right)^2+m^2+ \cr && \left(2\lambda c_a c_b \frac{\partial S}{\partial t} +2\lambda c_b c_c \frac{\sqrt{Y(r)}}{r}\frac{\partial S}{\partial \phi}-2\lambda c_a c_b eA_t\right)\frac{\partial S}{\partial r}+2\lambda c_a c_c \frac{1}{\sqrt{Y(r)}r}\frac{\partial S}{\partial t}\frac{\partial S}{\partial \phi}=0. \end{eqnarray} \normalsize To obtain the modified Hawking temperature and heat capacity, the action $S$ in Eq. (50) can be written as \begin{equation} S=-\omega t+R(r)+j\phi+\delta, \end{equation} where $\omega$ and $j$ denote the energy and angular momentum of the emitted particle, and $\delta$ is a complex constant. Using Eq. (51) in Eq. (50), a quadratic equation in $\frac{\partial R}{\partial r}$ is obtained as \begin{eqnarray} A_3\left(\frac{\partial R}{\partial r}\right)^2+B_3\frac{\partial R}{\partial r}+ C_3 = 0, \end{eqnarray} where \begin{eqnarray} A_3&=&Y(r)(\lambda c_b^2-1), \cr B_3&=&-2\lambda c_a c_b \omega + \frac{2\lambda c_b c_c \sqrt{Y(r)}j}{r}-2\lambda c_a c_b eA_t,\cr C_3&=& \frac{1}{Y(r)}(1+\lambda c_a^2) (\omega+eA_t)^2 + \frac{1}{r^2}(\lambda c_c^2-1)j^2-\frac{2\lambda c_a c_c \omega j}{\sqrt{Y(r)}r}+m^2. \end{eqnarray} Solving Eq. (52), the two roots obtained are given by \begin{eqnarray} {\rm R}_\pm=\int\frac{\lambda c_a c_b (\omega+eA_t) \pm \sqrt{(\omega+eA_t)^2(1+\lambda c_a^2-\lambda c_b^2)}}{Y(r)(\lambda c_b^2-1)}dr, \end{eqnarray} where $R_+$ and $R_-$ denote scalar particles moving away from and approaching the black hole respectively. Solving the above integral, we obtain \begin{eqnarray} {\rm Im}R_\pm=\frac{\pi (\omega+eA_t) \left(\lambda c_a c_b\pm \sqrt{1+\lambda c_a^2-\lambda c_b^2}\right)}{\left(\frac{2r_+}{l^2}-\frac{Q^2}{2r_+}\right)(\lambda c_b^2-1)}. \end{eqnarray} The tunneling probability of the scalar particle across the black hole event horizon is \begin{eqnarray} \Gamma={\rm \frac{Prob(emission)}{Prob(absorption)}}=\exp\left[-\frac{4\pi(\omega+eA_t) \sqrt{1+\lambda c_a^2-\lambda c_b^2}}{\left(\frac{2r_+}{l^2}-\frac{Q^2}{2r_+}\right)(\lambda c_b^2-1)}\right]. \end{eqnarray} The modified Hawking temperature is given by \begin{eqnarray} {\rm T}&=&\frac{1}{4\pi\zeta}\left(\frac{2r_+}{l^2}-\frac{Q^2}{2r_+}\right) \cr &=&\frac{T_h}{\zeta}, \end{eqnarray} where \begin{eqnarray} \zeta=\frac{ \sqrt{1+\lambda c_a^2-\lambda c_b^2}}{\lambda c_b^2-1}.
\end{eqnarray} Thus, we obtain the modified Hawking temperature of the charged non-rotating BTZ black hole due to the Lorentz violation theory in curved space time. Here, $T_h$ is the original Hawking temperature of the charged non-rotating BTZ black hole. If $0<\zeta<1$ or $1<\zeta<\infty$, the modified Hawking temperature near the event horizon of the charged non-rotating BTZ black hole increases or decreases respectively. If $\zeta$ tends to 1, the original value of the Hawking temperature is recovered. The modified heat capacity across the event horizon of the charged non-rotating BTZ black hole is calculated as \begin{eqnarray} C_M&=&\frac{\partial M}{\partial T}\cr &=&\zeta\,\frac{8\pi^2 l^2 T_h}{1+\frac{Q^2 l^2}{4r_+^2}} \cr &=&\zeta C_h, \end{eqnarray} where $C_h$ denotes the original heat capacity of the charged non-rotating BTZ black hole. Thus, we obtain the modified heat capacity due to the Lorentz violation theory in curved space time. If $0<\zeta<1$ or $1<\zeta<\infty$, the modified heat capacity of the charged non-rotating BTZ black hole decreases or increases respectively. If $\zeta$ tends to 1, the original heat capacity is recovered. It is observed that the modified Hawking temperature and heat capacity depend not only on the Lorentz violation parameter $\lambda$ but also on the ether-like vectors $u^\alpha$. \section{Tunneling of Schwarzschild-de Sitter black hole} The line element of the Schwarzschild-de Sitter (SdS) black hole in spherical polar coordinates is given by Ref. \cite{ati} as \begin{eqnarray} ds^2&=&-\frac{\Delta}{r^2} dt^2+\frac{r^2}{\Delta} dr^2+r^2(d\theta^2 + \sin^2\theta d\phi^2), \end{eqnarray} where \begin{eqnarray} \Delta=\left(r^2-2mr-\frac{r^4}{l^2}\right). \end{eqnarray} The equation $\Delta=0$ has two positive real roots $r_h$ and $r_c$ for $0<\eta=\frac{m^2}{l^2}<\frac{1}{27}$. The horizons of the SdS black hole, namely the event horizon and the cosmological horizon, are respectively located at \begin{eqnarray} &&r_h=\frac{2m}{\sqrt{3\eta}}\cos\frac{\pi+\psi}{3}, \cr && r_c=\frac{2m}{\sqrt{3\eta}}\cos\frac{\pi-\psi}{3}, \end{eqnarray} where \begin{eqnarray} \psi=\cos^{-1}(3\sqrt{3\eta}). \end{eqnarray} When $\eta$ takes the value $\frac{1}{27}$, the two horizons $r_c$ and $r_h$ coincide. For small $\eta$, we expand $r_h$ as \begin{eqnarray} r_h=2m\left(1+\frac{4m^2}{l^2}+...\right). \end{eqnarray} To study the modified entropy of the black hole, the ether-like vectors $u^\alpha$ are constructed as \begin{eqnarray} u^t&=&\frac{c_t}{\sqrt{-g_{tt}}}=\frac{ c_t r}{\sqrt{\Delta}}, \cr u^r&=&\frac{c_r}{\sqrt{g_{rr}}}=\frac{c_r \sqrt{\Delta}}{r}, \cr u^\theta&=&\frac{c_\theta}{\sqrt{g_{\theta \theta}}}=\frac{c_\theta}{r} ,\cr u^\phi&=&\frac{c_\phi}{\sqrt{g_{\phi \phi}}}=\frac{c_\phi}{r\sin\theta}, \end{eqnarray} where $c_t, c_r, c_\theta$ and $c_\phi$ are arbitrary constants and $u^\alpha$ satisfies the following property \begin{eqnarray} u^\alpha u_\alpha=-c_t^2+c_r^2+c_\theta^2+c_\phi^2={\rm constant}. \end{eqnarray} Using Eqs. (60) and (65) in Eq.
(8), we obtain \begin{eqnarray} &&-\frac{r^2}{\Delta}(1-\lambda c_t^2)\left(\frac{\partial S}{\partial t}\right)^2+\frac{\Delta}{r^2}(1+\lambda c_r^2)\left(\frac{\partial S}{\partial r}\right)^2+\frac{1}{r^2}(1+\lambda c_\theta^2)\left(\frac{\partial S}{\partial \theta}\right)^2\cr &&+\frac{1}{r^2 \sin^2\theta}(1+\lambda c_\phi^2)\left(\frac{\partial S}{\partial \phi}\right)^2+ \left(2\lambda c_t c_r \frac{\partial S}{\partial t}+\frac{2\lambda c_r c_\theta\sqrt{\Delta}}{r^2}\frac{\partial S}{\partial \theta}+\frac{2\lambda c_r c_\phi\sqrt{\Delta}}{r^2 \sin\theta}\frac{\partial S}{\partial \phi}\right)\frac{\partial S}{\partial r}\cr && +\frac{2\lambda c_\theta c_\phi}{r^2\sin\theta}\frac{\partial S}{\partial \theta}\frac{\partial S}{\partial \phi}+\frac{2\lambda c_t c_\theta}{\sqrt{\Delta}}\frac{\partial S}{\partial t}\frac{\partial S}{\partial \theta}+\frac{2\lambda c_t c_\phi}{\sqrt{\Delta} \sin\theta}\frac{\partial S}{\partial t}\frac{\partial S}{\partial \phi}+m^2=0. \end{eqnarray} To derive the modified entropy of the SdS black hole, the action $S$ in Eq. (67) can be written as \begin{equation} S=-\omega t+W(r,\theta)+j\phi, \end{equation} where $W(r,\theta)$, $\omega$ and $j$ are the generalised momentum, the energy of the emitted particle and the angular momentum respectively. Using Eq. (68) in Eq. (67), we obtain a quadratic equation in $\frac{\partial W}{\partial r}$ as \begin{eqnarray} A_4\left(\frac{\partial W}{\partial r}\right)^2+B_4\frac{\partial W}{\partial r}+ C_4 = 0, \end{eqnarray} where \begin{eqnarray} A_4&=&\frac{\Delta}{r^2}\left(1+\lambda c_r^2 \right),\cr B_4&=&2\lambda c_r c_\theta \frac{\sqrt{\Delta}}{r^2}\left(\frac{\partial W}{\partial \theta}\right)-2\lambda c_t c_r \omega +2\lambda c_r c_\phi j \frac{\sqrt{\Delta}}{r^2\sin\theta},\cr C_4&=& \left(\frac{1}{r^2}+\frac{\lambda c_\theta^2}{r^2}\right)\left(\frac{\partial W}{\partial\theta}\right)^2+\left(\frac{2\lambda c_\theta c_\phi j}{r^2 \sin\theta}-\frac{2\lambda c_t c_\theta \omega}{\sqrt{\Delta}}\right)\left(\frac{\partial W}{\partial \theta}\right) +\frac{\omega^2 \lambda c_t^2 r^2}{\Delta}-\frac{\omega^2 r^2}{\Delta} \cr && +\frac{j^2}{r^2\sin^2\theta}-\frac{2\lambda c_t c_\phi \omega j}{\sqrt{\Delta}\sin\theta}+\frac{\lambda c_\phi^2 j^2}{r^2\sin^2\theta}+m^2. \end{eqnarray} Solving Eq. (69), the two roots having physical meaning are given as \begin{eqnarray} {\rm W}_\pm =\int \frac{\lambda c_t c_r \omega \pm \sqrt{\omega^2(1+\lambda c_r^2-\lambda c_t^2)}}{\frac{\Delta}{r^2}(1+\lambda c_r^2)}dr, \end{eqnarray} where $W_+$ and $W_-$ correspond to the outgoing and ingoing particle respectively. Solving the above integral for the outgoing particle using the Feynman prescription, we get \begin{eqnarray} {\rm Im}W_+&=& \frac{ \pi r_h^2\omega\left(\lambda c_t c_r + \sqrt{1+\lambda c_r^2-\lambda c_t^2}\right)}{\Delta_{,r}(r_h)(1+\lambda c_r^2)} \cr &=&\frac{\pi r_h^2\omega\left(\lambda c_t c_r +\sqrt{1+\lambda c_r^2-\lambda c_t^2}\right)}{\left(r_h-m-2\frac{r_h^3}{l^2}\right)(1+\lambda c_r^2)}. \end{eqnarray} Using Eq. (64) in the above equation, we get \begin{eqnarray} {\rm Im}W_+=\frac{4\pi m^2\left(1+\frac{4m^2}{l^2}+...\right)^2\left(\lambda c_t c_r + \sqrt{1+\lambda c_r^2-\lambda c_t^2}\right)}{\left[2m\left(1+\frac{4m^2}{l^2}+...\right)-m-\frac{2}{l^2}\left\{2m\left(1+\frac{4m^2}{l^2}+...\right)\right\}^3\right](1+\lambda c_r^2)}\,\omega . \end{eqnarray} There is a change in the mass of the SdS black hole from $m$ to $m-\omega$ when a particle with energy $\omega$ tunnels out.
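Before incorporating this back reaction, it is instructive to examine the Schwarzschild limit (our check): letting $l\rightarrow\infty$ at fixed $m$ in Eq. (73) gives \begin{eqnarray*} {\rm Im}W_+ \rightarrow \frac{4\pi m^2\left(\lambda c_t c_r + \sqrt{1+\lambda c_r^2-\lambda c_t^2}\right)}{m\,(1+\lambda c_r^2)}\,\omega=4\pi \sigma m\omega, \end{eqnarray*} where $\sigma$ abbreviates the combination defined in Eq. (75) below. The tunneling probability is then $\Gamma\sim\exp(-2{\rm Im}W_+)=\exp(-8\pi \sigma m\omega)$, corresponding to the temperature $T=\frac{1}{8\pi m \sigma}$, i.e. the Schwarzschild Hawking temperature $\frac{1}{8\pi m}$ rescaled by the Lorentz violation factor $\sigma$, in line with the pattern found in the previous sections.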
Since the SdS black hole is non-rotating, the angular velocity of the particle at the horizon is zero and therefore the angular momentum vanishes. Considering the self-gravitational interaction, we can compute the imaginary part of the action from Eq. (73) in the integral form as \begin{eqnarray} {\rm Im}W_+ =\pi\sigma\int\limits_{0}^\omega{\frac{4m^2\left(1+\frac{4m^2}{l^2}+...\right)^2}{2m\left(1+\frac{4m^2}{l^2}+...\right)-m-\frac{2}{l^2}\left\{2m\left(1+\frac{4m^2}{l^2}+...\right)\right\}^3}d\omega'}, \end{eqnarray} where \begin{eqnarray} \sigma=\frac{\lambda c_t c_r + \sqrt{1+\lambda c_r^2-\lambda c_t^2}}{1+\lambda c_r^2}. \end{eqnarray} Replacing $m$ by $m-\omega'$ in Eq. (74), we get \begin{eqnarray} {\rm Im}W_+ =-\pi\sigma\int\limits_{m}^{m-\omega}{\frac{4(m-\omega')^2X^2}{2(m-\omega')X-(m-\omega')-\frac{2}{l^2}\left\{2(m-\omega')X\right\}^3}d(m-\omega')}, \end{eqnarray} where \begin{eqnarray} X=1+\frac{4(m-\omega')^2}{l^2}+... . \end{eqnarray} Neglecting the terms $(m-\omega)^j$ for $j\geq5$ and solving the above integral, we get \begin{eqnarray} {\rm Im}W_+=-\frac{\pi\sigma}{2}\left[ 4(m-\omega)^2\left(1+\frac{8(m-\omega)^2}{l^2}\right)-4m^2\left(1+\frac{8m^2}{l^2}\right)\right]. \end{eqnarray} The tunneling probability for the black hole is \begin{eqnarray} \Gamma\sim \exp(-2 {\rm Im}W_+)&=&\exp\left\{\pi\sigma \left[ 4(m-\omega)^2\left(1+\frac{8(m-\omega)^2}{l^2}\right)-4m^2\left(1+\frac{8m^2}{l^2}\right)\right]\right\}\cr &=&\exp\left\{\pi\sigma(r_f^2-r_i^2)\right\}\cr &=&\exp\left[\sigma(\Delta S_{BH})\right], \end{eqnarray} where $r_i=2m\left(1+\frac{4m^2}{l^2}\right)$ and $r_f=2(m-\omega)\left(1+\frac{4(m-\omega)^2}{l^2}\right)$ are the locations of the event horizon before and after the emission of the particle, and $\Delta S_{BH}=[S_{BH}(m-\omega)-S_{BH}(m)]$ denotes the change in Bekenstein-Hawking entropy of the SdS black hole in the absence of Lorentz violation. The change in Bekenstein-Hawking entropy of the SdS black hole in the Lorentz violation theory is calculated as \begin{eqnarray} \sigma \Delta S_{BH}=\sigma[S_{BH}(m-\omega)-S_{BH}(m)]. \end{eqnarray} If $0<\sigma<1$ or $1<\sigma<\infty$, the modified change in entropy of the SdS black hole decreases or increases respectively. If $\sigma$ tends to 1, the Lorentz violation is eliminated and the actual change in Bekenstein-Hawking entropy is recovered. It is observed that this modified Bekenstein-Hawking entropy depends not only on the Lorentz violation parameter $\lambda$ but also on the ether-like vectors $u^\alpha$. \section{Discussion and Conclusion} The Hawking temperatures of the Riemann space time and of the charged and uncharged non-rotating BTZ black holes are modified in the Lorentz violation theory as $\frac{T_0}{\beta}$, $\frac{T_0'}{\gamma}$ and $\frac{T_h}{\zeta}$ respectively. The modified Hawking temperatures increase or decrease near the event horizons of the black holes if $0<\beta, \gamma, \zeta<1$ or $1<\beta, \gamma, \zeta < \infty$ respectively. In addition to the modified Hawking temperature, the heat capacity of the charged non-rotating BTZ black hole is also modified as $\zeta C_h$. The modified heat capacity increases or decreases as $1<\zeta<\infty$ or $0<\zeta<1$. Figure 1 shows the graph of the original Hawking temperature and the modified Hawking temperature of a charged non-rotating BTZ black hole for the parameters $Q=0.3$, $l=1$, $c_a=0.9$, $c_b=1.2$ and $\lambda=1$. Here, it is observed that the original temperature ($T_h$) and the modified Hawking temperature ($T$) are zero at $r_+= 0.15$.
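These numbers can be verified directly (our arithmetic): from Eq. (45), $T_h$ vanishes when $\frac{2r_+}{l^2}=\frac{Q^2}{2r_+}$, i.e. at \begin{eqnarray*} r_+=\frac{Ql}{2}=\frac{0.3\times 1}{2}=0.15, \end{eqnarray*} while Eq. (58) gives $\zeta=\frac{\sqrt{1+0.81-1.44}}{1.44-1}\approx 1.38>1$ for the chosen parameters, so that $T=T_h/\zeta$ indeed lies below $T_h$ wherever $T_h>0$.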
\section{Discussion and Conclusion} The Hawking temperatures of Riemann space time and of the charged and uncharged non-rotating BTZ black holes are modified in the Lorentz violation theory as $\frac{T_0}{\beta}$, $\frac{T_0'}{\gamma}$ and $\frac{T_h}{\zeta}$, respectively. The modified Hawking temperatures increase or decrease near the event horizons of the black holes if $0<\beta, \gamma, \zeta<1$ or $1<\beta, \gamma, \zeta < \infty$, respectively. In addition to the modified Hawking temperature, the heat capacity of the charged non-rotating BTZ black hole is also modified as $\zeta C_h$. The modified heat capacity increases or decreases as $1<\zeta<\infty$ or $0<\zeta<1$, respectively. Figure 1 shows the original Hawking temperature and the modified Hawking temperature of a charged non-rotating BTZ black hole for the parameters $Q=0.3$, $l=1$, $c_a=0.9$, $c_b=1.2$ and $\lambda=1$. It is observed that the original temperature ($T_0$) and the modified Hawking temperature ($T$) both vanish at $r_+= 0.15$. When $0.15<r_+<\infty$, the original Hawking temperature $(T_0)$ is greater than the modified Hawking temperature $(T)$ and both are positive. When $0<r_+<0.15$, then $T_0<T$ and both are negative for the above set of parameters. Correspondingly, the original heat capacity ($C_h$) and the modified heat capacity ($C_M$) of the charged non-rotating BTZ black hole are illustrated in Figure 2 for the same set of parameters. The original and modified heat capacities also vanish at $r_+ = 0.15$. If $0.15<r_+<\infty$, the heat capacities are positive and $C_M>C_h$. If $0<r_+<0.15$, the heat capacities are negative and $C_M<C_h$. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{hawking.png} \caption{Plot of the Hawking temperature and the modified Hawking temperature as a function of the event horizon radius $r_+$ of a charged non-rotating BTZ black hole with $Q=0.3$, $l=1$, $c_a=0.9$, $c_b=1.2$ and $\lambda=1$.} \label{fig:hawking} \end{figure} \newpage \begin{figure} \centering \includegraphics[width=0.8\textwidth]{heat.png} \caption{Plot of the heat capacity and the modified heat capacity as a function of the event horizon radius $r_+$ of a charged non-rotating BTZ black hole with $Q=0.3$, $l=1$, $c_a=0.9$, $c_b=1.2$ and $\lambda=1$.} \label{fig:heat} \end{figure} The change in the Bekenstein-Hawking entropy of the SdS black hole is also studied using the Hamilton-Jacobi equation with Lorentz violation theory in curved space time. The modified Bekenstein-Hawking entropy increases or decreases depending on whether $\sigma\in(1,\infty)$ or $\sigma\in(0,1)$, respectively. In this paper, we have investigated the tunneling of scalar particles across the event horizons of Riemann space time, the BTZ black hole and the SdS black hole by using the modified Hamilton-Jacobi equation with Lorentz violation theory in curved space time. The Hawking temperatures across the event horizons of Riemann space time and of the charged and uncharged non-rotating BTZ black holes are modified. The Bekenstein-Hawking entropy of the SdS black hole is also modified by the Lorentz violation theory. The modified Hawking temperatures and entropy depend on the values of $\beta$, $\gamma$, $\zeta$ and $\sigma$, which in turn depend on the Lorentz violation parameter $\lambda$ and the ether-like vectors $u^\alpha$. In all the above cases, when the terms $\beta$, $\gamma$ and $\zeta$ tend to 1, the Lorentz violation is removed and the original Hawking temperatures of Riemann space time and the BTZ black holes are recovered. Similarly, the original Bekenstein-Hawking entropy of the SdS black hole is recovered when $\sigma$ tends to 1. It is worth mentioning that the modified Hawking temperatures and entropy depend not only on the value of the Lorentz violation parameter $\lambda$ but also on the ether-like vectors $u^\alpha$. \section*{Acknowledgements} The first author acknowledges the Council of Scientific and Industrial Research (CSIR), New Delhi, for providing financial support.
\section{Introduction} It has been known since the 1960s that there is an infinite-dimensional symmetry group underlying asymptotically flat spacetimes known as the BMS group \cite{bms, sachs}. The role of the BMS group in quantum theory was elucidated in a series of remarkable papers by Ashtekar et al. \cite{aaprl,AS,aajmp}. In \cite{aaprl} the radiative modes of the full non-linear gravitational field were isolated and equipped with a symplectic structure, thus paving the way for (asymptotic) quantization of gravity. In \cite{AS}, it was shown that the BMS group is a dynamical symmetry group of the radiative phase space and the corresponding Hamiltonians were obtained. The reasons behind the enlargement of the translation subgroup (of the Poincar\'e group) to supertranslations were clarified in \cite{aajmp}, where it was shown that the space of `vacuum configurations' (i.e. points in phase space for which the fluxes of all BMS momenta vanish identically) is in one-to-one correspondence with supertranslations (modulo translations). This in turn led to the first detailed relation between the BMS supertranslations and the infrared issues in quantum gravity \cite{aaoxf,aabook}. In particular, it clarified the need to use coherent states which lead to an S-matrix free of infrared divergences \cite{kf,akhoury}. In recent months there has been a renewed interest in analyzing these symmetries in the context of the quantum gravity S-matrix. There are two reasons for this resurgence. The first is a series of fascinating papers by Strominger et al. \cite{strominger1,strominger2,st} in which a precise relationship between the Ward identities associated with supertranslation symmetries and Weinberg's soft graviton theorem \cite{weinberg} was unravelled. The second is an extremely interesting proposal by Barnich and Troessaert \cite{barnich1,barnich2,barnich3} that this symmetry can be naturally extended to include the Virasoro group, which in turn may shed new light on the duality between quantum gravity in the bulk and conformal field theory on the boundary. In the literature this group is referred to as the extended BMS group. The two ideas mentioned above converged in \cite{virasoro}, where it was shown that the Ward identities associated to precisely such Virasoro symmetries follow from the so-called subleading soft theorem for gravitons. This theorem, conjectured by Strominger, was proved at tree level in the so-called holomorphic soft limit in \cite{cs}, where its validity was also checked in a number of examples. A more general proof of the theorem was later given in \cite{plefka,bern}. See \cite{gj,naculich,white,sterman,beneke} for earlier works on soft graviton amplitudes and \cite{bal1,bal2,schwab,afkami,zlotnikov,rojas,skinner,mason} for an incomplete list of recent related works. However, as noted in \cite{virasoro}, whereas for the supertranslation symmetries the Ward identities are in fact equivalent to Weinberg's soft graviton theorem, such an equivalence could not be established as far as the Virasoro symmetries and the subleading theorem were concerned. Motivated by the need to establish such an equivalence, in this paper we propose a different extension of the BMS group. Instead of extending the global conformal symmetries to the Virasoro symmetries as in \cite{barnich1}, we extend them to smooth vector fields on the sphere. We refer to this group as the generalized BMS group and denote it by $\mathbf{G}$. 
We show that $\mathbf{G}$ is the semi-direct product of supertranslations with smooth diffeomorphisms of the conformal sphere ($\textrm{Diff}(S^{2})$) and that it preserves the space of asymptotically flat solutions to Einstein's equations. However, contrary to the BMS group, it does not preserve the leading order \emph{kinematical} metric components; for instance, it generates arbitrary diffeomorphisms of the conformal sphere at infinity. We define charges associated to this symmetry ($\textrm{Diff}(S^{2})$) in the radiative phase space of the gravitational field. Our definition of these charges is motivated by the charges one obtains for the extended BMS symmetry. Although this definition is ad hoc and not derived from a systematic analysis, we show that its associated ``Ward identities'' are in one-to-one correspondence with the subleading soft graviton theorem. The analysis performed here is rather similar in spirit to the recent work by Lysov, Pasterski and Strominger for massless QED \cite{lowsym}. Exactly as in that case, our charges do not form a closed algebra. We leave the interpretation of this non-closure for future investigations. The outline of this paper is as follows. In section \ref{sec2} we define $\mathbf{G}$ and show that it preserves asymptotic flatness. We show that $\mathbf{G}$ can be characterized as the group of diffeomorphisms that preserve null infinity and are asymptotically volume preserving. In section \ref{sec3} we review the radiative phase space formulation of Ashtekar and show how the action of the extended BMS is Hamiltonian\footnote{Modulo certain subtleties related to the IR sector.}. We emphasize the need to use the radiative phase space framework carefully since, as illustrated in appendix \ref{PBapp}, the weak non-degeneracy of the symplectic structure implies that certain seemingly natural Poisson bracket relations are ill-defined and their use can lead to incorrect results.\\ Just as the BMS group can be defined purely in terms of structures available at null infinity without referring to the spacetime, we present $\mathbf{G}$ from the perspective of null infinity in section \ref{sec4}. In this section we also present our prescription for the Hamiltonian action of the generators of $\mathbf{G}$ on the radiative phase space of gravity. In section \ref{sec5} we analyze the ``Ward identities'' associated to this prescription and show their equivalence with the subleading soft graviton theorem. \section{Spacetime picture} \label{sec2} \subsection{Proposal for a generalization of the BMS group} Let us for concreteness focus on future null infinity ${\cal I}^{+}$. Following \cite{strominger2} we refer to the algebra of asymptotic symmetries at ${\cal I}^{+}$ as $\textrm{BMS}^{+}$. In the original derivation of the BMS algebra, through an interplay between fall-off conditions and Einstein's equations, one arrives at the following form of asymptotically flat metrics (we take expressions from \cite{strominger2,barnich2}): \begin{multline} ds^2 = (1 + O(r^{-1})) du^2 - (2+ O(r^{-2}))d u d r + \\ (r^2 q_{AB} +r C_{AB}(u,\hat{x}) +O(1)) dx^A d x^B +O(1) dx^A du .\label{bmsfalloff} \end{multline} Here $x^A$ are coordinates on the 2-sphere, $q_{AB}$ is the round unit sphere metric (whose covariant derivative we denote by $D_A$) and $\hat{x}$ denotes points on the sphere. $C_{AB}$ is trace-free and unconstrained by Einstein's equations, whereas the remaining metric components are determined by $C_{AB}$ through Einstein's equations. $C_{AB}$ is referred to as the free gravitational radiative data. 
$\textrm{BMS}^{+}$ is defined as the algebra of vector fields which preserve the fall-offs (\ref{bmsfalloff}). It is generated by vector fields of the (asymptotic) form, \begin{equation} \xi_+^{a}\partial_{a}\ =\ V_+^{A} \partial_A + u \alpha_{+} \partial_u - r \alpha_{+} \partial_r + f_{+} \partial_u + \lambda_+^a \partial_a , \label{bmsxi} \end{equation} where $V_+^{A}$ is a conformal Killing vector field (CKV) of the sphere, $\alpha_{+} = (D_A V_+^{A})/2$ and $f_{+}=f_{+}(\hat{x})$ is any smooth function on the sphere. $\lambda_+^a \partial_a$ denotes the next terms in the $1/r$ expansion \cite{barnich2}: \begin{equation} \lambda_+^a \partial_a = -\frac{u}{r} D^A\alpha \, \partial_A +\frac{u}{2} D_B D^B \alpha \, \partial_r + \ldots \end{equation} One can similarly define the algebra $\textrm{BMS}^{-}$ of asymptotic symmetries associated to past null infinity ${\cal I}^{-}$. In \cite{strominger2} Strominger introduced the remarkable notion of $\textrm{BMS}^{0} \subset \textrm{BMS}^{+}\times\textrm{BMS}^{-}$, which he argued to be a symmetry of the quantum gravity S-matrix. This group maps incoming scattering data, characterized by fields on ${\cal I}^{-}$, to outgoing scattering data, characterized by fields on ${\cal I}^{+}$, while conserving total energy. Identifying the null generators of ${\cal I}^{+}$ and ${\cal I}^{-}$ through ${\cal I}^{+}|_{u=-\infty}\ =\ {\cal I}^{-}|_{v=+ \infty}\ = i^{0}$, the group is defined by the conditions \cite{strominger2}: \begin{equation} V_+^{A}(\hat{x})\ =\ V_-^{A}(\hat{x}), \quad f_{+}(\hat{x})\ =\ f_{-}(\hat{x}) \label{bmsnot}. \end{equation} We now consider the scenario where, in $\xi_+^a$ given in (\ref{bmsxi}), $V_+^{A}$ is not a CKV. A simple computation reveals that under the diffeomorphisms generated by such vector fields, the metric coefficients whose fall-offs are violated are \begin{equation} \begin{array}{lll} {\cal L}_{\xi_{+}}g_{AB}\ =\ O(r^{2})\\ \vspace*{0.1in} {\cal L}_{\xi_{+}}g_{u u} = O(1). \end{array} \end{equation} Thus, relaxing the CKV condition forces us to consider metrics where the $O(r^{2})$ part of $g_{AB}$ is not necessarily the round metric and such that $\ g_{uu}\ =\ O(1)$. We are thus led to consider metrics of the form:\footnote{The form (\ref{gralfalloff}) is of the type of metrics considered in \cite{barnich2} except that we take $q_{AB}$ to be $u$-independent and we do not require $q_{AB}$ to be a conformal rescaling of the unit round metric.} \begin{equation} ds^2 = O(1) du^2 - (2+O(r^{-2}) )d u d r + (r^2 q_{AB} +O(r)) dx^A d x^B +O(1) dx^A du , \label{gralfalloff} \end{equation} with $q_{AB}$ no longer the standard metric on $S^{2}$. We can now ask if these spacetimes with more general fall-offs of the metric coefficients are asymptotically flat. As shown in \cite{aabook} the answer is in the affirmative. This can be most easily seen from the conformal description of asymptotic flatness. In this description, asymptotic flatness is captured by the existence of a conformal factor $\Omega$ such that $\Omega^2 ds^2$ has a well defined limit at null infinity and satisfies a number of properties. It can be shown that such spacetimes admit coordinates in a neighborhood of null infinity for which the metric fall-offs include those of the form (\ref{gralfalloff}), with $\Omega \sim 1/r $ \cite{aabook,aaunpublished}. We refer to this group of asymptotic symmetries at future null infinity as the generalized $\textrm{BMS}^{+}$ group and denote it by $\mathbf{G}^{+}$. 
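The violated orders can be made explicit in a simple example. The following sympy sketch (using the flat leading-order metric and sample axisymmetric vector fields, which are illustrative choices) exhibits the $O(r^{2})$ piece of ${\cal L}_{\xi}g_{AB}$ as the conformal Killing operator acting on $V^{A}$, vanishing for a CKV and nonvanishing otherwise:
\begin{verbatim}
# Sketch: for the leading-order metric du^2 - 2 du dr + r^2 q_AB dx^A dx^B
# and xi = V^A d_A + u*alpha*d_u - r*alpha*d_r, the theta-theta component
# of Lie_xi g is r^2 (L_V q - 2 alpha q)_AB: zero iff V^A is a CKV.
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi')
x = (u, r, th, ph)
g = sp.Matrix(sp.diag(1, 0, r**2, r**2*sp.sin(th)**2))
g[0, 1] = g[1, 0] = -1

def lie_g(xi):
    return sp.Matrix(4, 4, lambda a, b:
        sum(xi[c]*sp.diff(g[a, b], x[c]) + g[c, b]*sp.diff(xi[c], x[a])
            + g[a, c]*sp.diff(xi[c], x[b]) for c in range(4)))

def xi_of(Vth):                       # axisymmetric V^A = Vth * d_theta
    alpha = sp.diff(sp.sin(th)*Vth, th)/(2*sp.sin(th))
    return [u*alpha, -r*alpha, Vth, 0]

for Vth in (sp.sin(th), sp.sin(th)*sp.cos(th)):   # CKV vs. non-CKV sample
    print(sp.simplify(lie_g(xi_of(Vth))[2, 2]/r**2))  # -> 0, -sin(theta)**2
\end{verbatim}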
$\mathbf{G}^{+}$ is a semi-direct product of supertranslations and $\textrm{Diff}(S^{2})$, with supertranslations being a normal Abelian subgroup, exactly as in the case of the BMS group. One can similarly define a corresponding group associated to ${\cal I}^{-}$ and we refer to it as $\mathbf{G}^{-}$. Following the strategy used for the BMS \cite{strominger2} and extended BMS \cite{virasoro} cases, we define the subgroup $\mathbf{G}^0$ of $\mathbf{G}^{+}\times\mathbf{G}^{-}$ by the identification (\ref{bmsnot}) for generators of $\mathbf{G}^{+}$ and $\mathbf{G}^{-}$. It then follows that $\mathbf{G}^0$ reduces to $\textrm{BMS}^{0}$ when $V^A$ is a CKV. From now on we drop the labels $+,-$ and parametrize the generalized BMS vector fields by $(V^{A}, f)$. \subsection{Characterization of $\mathbf{G}$} We now ask if there is any geometrical characterization of the generalized BMS vector fields. Recall that BMS vector fields can be characterized as asymptotic Killing vector fields: \begin{equation} \nabla_{(a} \xi_{b)} \to 0 \quad \text{as} \quad r \to \infty. \end{equation} Whereas generalized BMS vector fields clearly do not satisfy this condition, it turns out that they are asymptotically divergence-free: \begin{equation} \nabla_a \xi^a \to 0 \quad \text{as} \quad r \to \infty. \label{asymvolzero} \end{equation} Indeed, a simple calculation shows: \begin{eqnarray} \nabla_a \xi^a & = & \partial_{u}\xi^{u}\ +\ D_{A}V^{A}\ +\ \frac{1}{r^{2}}\partial_{r}(r^{2}\xi^{r})\ +\ O(r^{-1}) \\ & = & \alpha\ +\ 2\alpha\ -\ 3\alpha + O(r^{-1})\ \\ & = & \ O(r^{-1}). \end{eqnarray} We now show the converse, namely that generalized BMS vector fields are characterized by (\ref{asymvolzero}) and the preservation of the fall-offs (\ref{gralfalloff}). A general vector field preserving $\mathcal{I}$ has the following form as $r \to \infty$: \begin{equation} \xi^a = \mathring{\xi}^A(u,\hat{x}) \partial_A + \mathring{\xi}^u(u,\hat{x}) \partial_u + r \mathring{\xi}^r(u,\hat{x}) \partial_r + \ldots \label{gralxi} \end{equation} where the dots indicate terms of the form: $O(r^{-1}) \partial_A + O(r^{-1}) \partial_u + O(1) \partial_r$. We only focus on the leading terms in the $1/r$ expansion. Subleading terms are determined by requiring preservation of the fall-offs (\ref{gralfalloff}), and their form depends on the specific metric coefficients in (\ref{gralfalloff}). \\ Eq. (\ref{asymvolzero}) gives: \begin{equation} \nabla_a \xi^a = O(r^{-1}) \iff D_A \mathring{\xi}^A + \partial_u \mathring{\xi}^u + 3 \mathring{\xi}^r =0\label{volzero} . \end{equation} The components of (\ref{gralfalloff}) leading to restrictions on the leading part of (\ref{gralxi}) are: \begin{equation} \mathcal{L}_{\xi}g_{ur}= O(r^{-1}) \iff \partial_u \mathring{\xi}^u + \mathring{\xi}^r =0 \label{guu} \end{equation} \begin{equation} \mathcal{L}_{\xi}g_{uA}= O(r), \; \mathcal{L}_{\xi}g_{uu}= O(1) \iff \partial_u \mathring{\xi}^A=0, \; \partial_u \mathring{\xi}^r=0 .\label{uindep} \end{equation} It is easy to verify that the most general solution to Eqns. (\ref{volzero}), (\ref{guu}), (\ref{uindep}) is given by: \begin{eqnarray} \mathring{\xi}^A(u,\hat{x}) & = & V^A(\hat{x}) \\ \mathring{\xi}^u(u,\hat{x}) & = & u \alpha(\hat{x}) + f(\hat{x}) \\ \mathring{\xi}^r(u,\hat{x}) & = & - \alpha(\hat{x}) \end{eqnarray} with $V^A(\hat{x})$ and $f(\hat{x})$ undetermined and $ \alpha = (D_A V^A)/2$ as before. Thus, we recover the leading term of (\ref{bmsxi}) with the CKV condition on $V^A$ dropped. 
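The divergence condition can also be checked directly in coordinates. A small sympy sketch (with a sample smooth, non-CKV $V^{A}$, an illustrative choice) confirms that $\nabla_a\xi^a$ vanishes at leading order for the solution above:
\begin{verbatim}
# Sketch: div(xi) = (1/sqrt|g|) d_a(sqrt|g| xi^a) at leading order, for
# xi = V^A d_A + u*alpha*d_u - r*alpha*d_r with a smooth non-CKV V^A;
# the alpha, 2*alpha and -3*alpha contributions cancel as in the text.
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi')
sqrtg = r**2*sp.sin(th)              # leading-order volume density
Vth = sp.sin(th)*sp.cos(th)          # sample smooth, non-CKV vector field
alpha = sp.diff(sp.sin(th)*Vth, th)/(2*sp.sin(th))
xi = {u: u*alpha, r: -r*alpha, th: Vth, ph: 0}
div = sum(sp.diff(sqrtg*xi[c], c) for c in (u, r, th, ph))/sqrtg
print(sp.simplify(div))              # -> 0
\end{verbatim}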
These are precisely the proposed generalized BMS vector fields. The preservation of the fall-offs (\ref{gralfalloff}) for the remaining metric components imposes conditions on the subleading terms of $\xi^a$ indicated by the dots in (\ref{gralxi}). \subsection{Difficulties in extracting a map on radiative data}\label{sec2.3} We recall that BMS vector fields have a well defined action on the unconstrained radiative data characterized by $C_{AB}$. For $\xi^a$ as in (\ref{bmsxi}) with $f=0$ the action is given by \cite{strominger2}: \begin{equation} \delta_V C_{AB}= \mathcal{L}_V C_{AB} - \alpha C_{AB} + \alpha u \partial_u C_{AB} . \end{equation} Although generalized BMS vector fields map an asymptotically flat spacetime to another one, they do not induce any obvious map on the free radiative data. As they change the zeroth order structure, the linear-in-$r$ coefficients of $g_{AB}$ do not represent all the free data. In order to bring out the differences with the BMS case, consider the action of a generalized BMS vector field on the $g_{AB}$ metric components (again we consider the case $f=0$ and $g_{ab}$ as in (\ref{bmsfalloff})): \begin{equation}\label{deltavcab} \mathcal{L}_{\xi}g_{AB} = r^2 ( \mathcal{L}_V q_{AB} - 2 \alpha q_{AB}) + r ( \mathcal{L}_V C_{AB} - \alpha C_{AB} + \alpha u \partial_u C_{AB}) + u r s_{AB} , \end{equation} where \begin{equation}\label{softjuly28} s_{AB}= -2 D_A D_B \alpha + D_C D^C \alpha \, q_{AB}. \end{equation} Since the zeroth order structure changes, the action of generalized BMS encodes the physical transformations (i.e. the change in the radiative data) as well as ``gauge transformations'' induced by the change in the zeroth order structure. It is not clear to us how to extract the gauge-invariant change in the News tensor from this action. This point will be important in defining the action of the generalized BMS operators in quantum theory. We return to this issue in section \ref{sec4C}. \section{Radiative phase space}\label{sec3} \subsection{Review of Ashtekar formulation} \label{sec3.1} In this section we recall Ashtekar's description of the radiative phase space of gravity following mostly references \cite{AS,aabook}. We will only present the end result of the description, and encourage the reader to look at \cite{aaprl,AS,aabook} for its motivation from the spacetime perspective, as well as reference \cite{am} for its relation with the ADM phase space. The idea is to start with $\mathcal{I}$ (which will eventually stand for either future or past null infinity) as an abstract 3-manifold, topologically $S^2 \times \mathbb{R}$, ruled by preferred directions or `rays' so that there is a canonical projection $\mathcal{I} \to \hat{\mathcal{I}} \sim S^2$ with $\hat{\mathcal{I}}$ the space of rays. Next, one endows $\mathcal{I}$ with a `universal structure' which plays the role of a kinematical arena. This universal structure is given by an equivalence class of pairs $(q_{ab},n^a)$ where $n^a$ is a vector field tangent to the rays and $q_{ab}$ a $(0,+,+)$ degenerate metric that is given by the pull-back of a 2-metric $\hat{q}_{ab}$ on $\hat{\mathcal{I}}$, so that $q_{ab}n^b=0$ and $\mathcal{L}_n q_{ab}=0$. Each pair is referred to as a `frame'. The equivalence is given by: \begin{equation} (q_{ab},n^a) \sim (\omega^2 q_{ab},\omega^{-1} n^a) ,\quad \forall \omega: \mathcal{I} \to \mathbb{R} : \mathcal{L}_n \omega =0, \label{equivframes} \end{equation} and the corresponding equivalence class $[(q_{ab},n^a)]$ gives the `universal structure'. 
The BMS group discussed in the previous section arises in this context as the group of diffeomorphisms of $\mathcal{I}$ that preserve this universal structure \cite{aabook}. We now describe the dynamical degrees of freedom and the associated phase space. The description uses a fixed `frame' $(q_{ab},n^a) \in [(q_{ab},n^a)]$, so that, strictly speaking, one arrives at a family of phase spaces parametrized by the frames $(q_{ab},n^a) \in [(q_{ab},n^a)]$. One then shows that there exists a natural isomorphism between the different phase spaces associated to the different frames. Below we present the phase space associated to a given frame. The isomorphism, crucial for the implementation of boosts in phase space, is described in appendix \ref{boostapp}. A derivative operator $D_a$ on $\mathcal{I}$ is said to be compatible with a frame $(q_{ab},n^a)$ if it satisfies: \begin{equation} D_c q_{ab}=0, \quad D_a n^b =0, \quad 2 D_{(a} V_{b)}= \mathcal{L}_{V} q_{ab} \; \; {\rm if } \; \; V_a n^a=0, \label{compatibleD} \end{equation} where the Lie derivative is along any vector $V^a$ satisfying $V_a= q_{ab}V^b$. Introduce the following equivalence relation on derivative operators satisfying (\ref{compatibleD}):\footnote{This equivalence relation is unrelated to the one in (\ref{equivframes}). From the spacetime perspective, (\ref{equivframes}) arises from the different values the conformal factor $\Omega$ can take at $\mathcal{I}$, whereas (\ref{compatibleD}) arises from the different values the derivative of the conformal factor (along directions off $\mathcal{I}$) can take at $\mathcal{I}$.} \begin{equation} D'_a \sim D_a \quad {\rm if } \quad D'_a k_b = D_a k_b + f n^c k_c q_{ab} \label{equivalenceD} \end{equation} for some function $f$. The phase space, denoted by $\Gamma$, is the space of equivalence classes $[D_a]$ of (torsion-free) derivative operators satisfying (\ref{compatibleD}). A parametrization of this space is obtained as follows. Fix a derivative $\mathring{D}_a$ satisfying (\ref{compatibleD}). It can be shown that any other derivative $D_a$ satisfying (\ref{compatibleD}) is given by: \begin{equation} D_a k_b =\mathring{D}_a k_b+ (\Sigma_{ab} n^c)k_c ,\label{defSigma} \end{equation} where $\Sigma_{ab}$ is symmetric and satisfies $\Sigma_{ab} n^b=0$. Such tensors parametrize the space of connections $D_a$ compatible with $(q_{ab},n^a)$. From (\ref{equivalenceD}) it follows that \begin{equation} \sigma_{ab} := \Sigma_{ab}-\frac{1}{2} q_{ab} q^{cd}\Sigma_{cd} = ((D_a - \mathring{D}_a) k_b )^\text{TF} \quad \text{for any } k_b \, : \, n^b k_b = 1 , \label{defsigma} \end{equation} can be used to parametrize the space of equivalence classes $[D_a]$. We recall that $q^{ab}$ is defined up to $v^{(a}n^{b)}$ so that the trace-free symbol `TF' is only well defined on tensors annihilated by $n^a$. In terms of this parametrization the symplectic structure reads: \begin{equation} \Omega(\sigma_1 , \sigma_2) = \int d^3 V q^{ac} q^{bd} ( \sigma^1_{ab} \mathcal{L}_n \sigma^2_{cd} - \sigma^2_{ab} \mathcal{L}_n \sigma^1_{cd}) \label{sympstr}, \end{equation} where $d^3V = \epsilon_{abc}$, with $\epsilon_{ab} =\epsilon_{abc}n^c$ the area form of $q_{ab}$. Let us now make contact with the spacetime picture of section \ref{sec2}. For concreteness we focus on future null infinity. For spacetime metrics as in (\ref{bmsfalloff}), $\mathcal{I}$ is described by the coordinates $(u,x^A)$ with $n^a\partial_a = \partial_u$ and $q_{ab} d x^a d x^b= q_{AB} d x^A d x^B$. 
One can verify that the nonzero components of $\sigma_{ab}$ are: $\sigma_{AB}= (1/2) C_{AB}$. The News tensor is then given by\footnote{Our convention for the News tensor, taken from \cite{AS}, differs by a sign from that used in \cite{st,virasoro}.} \begin{equation} N_{AB}(u,\hat{x})= -2 \dot{\sigma}_{AB}(u,\hat{x}), \label{news} \end{equation} where $\dot{\sigma}_{AB} \equiv \mathcal{L}_n \sigma_{AB} \equiv \partial_u \sigma_{AB}$. We conclude by describing the fall-offs of the radiative phase space. In $(u,x^A)$ coordinates they are given by \cite{AS}: \begin{equation} \sigma_{AB}(u,\hat{x}) = \sigma^{\pm}_{AB}(\hat{x}) + O(u^{-\epsilon}) \quad \text{as} \quad u \to \pm \infty, \label{falloffs} \end{equation} where $\epsilon>0$ and the limiting values $\sigma^{\pm}_{AB}(\hat{x})$ are kept unspecified (but smooth in $\hat{x}$). These fall-offs ensure the convergence of the integral defining the symplectic structure (\ref{sympstr}).\footnote{The fall-offs used by Strominger based on the analysis of CK spaces correspond to $\epsilon=1/2$ \cite{strominger2}. It thus seems that the range $0 < \epsilon < 1/2$ is not relevant for gravitational scattering. We nevertheless keep $\epsilon$ general as all we need in our analysis is $\epsilon>0$.} \subsubsection{Poisson bracket subtleties} \label{pbssec} We comment on a subtlety associated to the Poisson brackets that was noticed in \cite{st}. From the radiative phase space perspective the symplectic form (\ref{sympstr}) is the fundamental structure, whereas Poisson brackets are derived quantities. We recall that in this approach the Hamiltonian vector field (HVF) $X_F$ of a phase space function $F$ is defined as the solution to the equation \begin{equation} \Omega( X_F, \cdot ) = d F, \label{defhvf} \end{equation} and that, given two phase space functions $F$ and $G$ admitting HVFs, their Poisson bracket is defined by $\{ F, G \} := \Omega(X_F,X_G) = X_G(F)$. In \cite{AS} it is shown that $\Omega$ is weakly non-degenerate, that is, $\Omega$ considered as a map from $T \Gamma$ to $T^* \Gamma$ is injective but not necessarily surjective. Thus, there is no guarantee that one can always solve Eq. (\ref{defhvf}) (but if there is a solution, it is unique). As discussed in appendix \ref{PBapp}, an example of a function not admitting a HVF is given by $F[\sigma]:= \int_{\mathcal{I}} d^3 V F^{AB}(u,\hat{x}) \sigma_{AB}(u,\hat{x})$ with $\int_{-\infty}^\infty du F^{AB}(u,\hat{x}) \neq 0$. In particular, one cannot define PBs between $\sigma_{AB}(u,\hat{x}) $ and $\sigma_{AB}(u',\hat{x}')$. Fortunately, these `undefined PBs' are nowhere needed in the analysis. \subsection{(Extended) BMS action on $\Gamma$} \label{sec4B} Let $D_a$ be a connection as in (\ref{compatibleD}) with $[D_a]$ the corresponding element in the radiative phase space. Under the action of a BMS vector field $\xi^a$ the connection changes by $\delta_\xi D_a= [\mathcal{L}_\xi,D_a]$. If $\xi^a$ preserves the frame (the case of supertranslations and rotations), the transformed connection $D'_a \approx D_a +\delta_\xi D_a$ is compatible with the frame and one can directly read off the phase space action from $\delta_\xi D_a$. For boosts, however, the transformed connection is compatible with the frame $(q'_{ab}, n'^a) \approx ((1+ 2\alpha) q_{ab}, (1-\alpha) n^a)$. One thus needs to use the isomorphism between the phase spaces associated to the different frames in order to obtain the phase space action. 
The resulting action reads (see appendix \ref{boostapp} for its derivation): \begin{equation}\label{boost1} \begin{array}{lll} (X_{\xi})_{ab} = ([\mathcal{L}_\xi,D_a]k_{b} +2 k_{(a} \partial_{b)} \alpha)^{\text{TF}}, \end{array} \end{equation} where $k_{a}$ is any covector satisfying $n^{a} k_{a}\ =\ 1$. In $(u,x^A)$ coordinates, for a `pure rotation/boost' vector field \begin{equation} \xi^a \partial_a = V^A \partial_A + u \alpha \partial_u, \label{boostrot} \end{equation} the expression takes the form: \begin{equation} (X_{V})_{AB} = \mathcal{L}_V \sigma_{AB} - \alpha \sigma_{AB} +u \alpha \dot{\sigma}_{AB} - u (D_A D_B \alpha )^{\text{TF}} .\label{XxiAB} \end{equation} Following \cite{st,virasoro}, we refer to the piece linear in $\sigma$ as the `hard term' and the $\sigma$-independent, linear-in-$u$ piece as the `soft term'. The soft term appears to violate the fall-offs (\ref{falloffs}). However, the CKV nature of $V^{A}$ implies that $(D_A D_B\alpha)^{\text{TF}}$ vanishes. The above analysis goes through if we replace $V^a$ by a local CKV so that $\xi^a$ represents a generator of the extended BMS group. In this case, however, the soft term does not vanish. In $(z,\bar{z})$ coordinates the action takes the form: \begin{equation} (X_{V})_{zz} = \mathcal{L}_V \sigma_{zz} - \alpha \sigma_{zz} +u \alpha \dot{\sigma}_{zz} - \frac{u}{2} D^3_z V^z ,\label{Xxiext} \end{equation} where we used the fact that $D_{z}D_{z} (D_{\bar{z}}V^{\bar{z}})=0$ for a local CKV. A similar expression holds for the $\bar{z}\bar{z}$ component. In quantum theory, the action (\ref{Xxiext}) is generated by the charge $Q=Q_H+Q_S$ given in Eq. (5.10) of \cite{virasoro}. \subsection{Mode functions} In this section we describe the classical functions in the radiative phase space that correspond to the standard creation/annihilation operators of gravitons in quantum theory. These are essentially given by the $zz$ and $\bar{z} \bar{z}$ components of the Fourier transform of $\sigma_{AB}$, \begin{equation} \sigma_{AB}(\omega,\hat{x}) := \int_{-\infty}^\infty \sigma_{AB}(u,\hat{x}) e^{i \omega u} du \label{sigmaw}. \end{equation} As long as $\omega \neq 0$, (\ref{sigmaw}) admits a HVF\footnote{In a distributional sense; strictly speaking one should integrate (\ref{sigmaw}) with a smearing function in $(\omega,\hat{x})$ with support outside $\omega =0$.} and hence we can find their PBs. The non-vanishing PBs are found to be:\footnote{In the present subsection as well as in section \ref{sec5}, $\gamma_{z \bar{z}} \equiv q_{z \bar{z}} = \sqrt{\gamma} = 2 (1 + z \bar{z})^{-2}$.} \begin{equation} \{ \sigma_{zz}(\omega,z,\bar{z}), \sigma_{\bar{z} \bar{z}}(\omega',z',\bar{z}') \} = \frac{\pi}{i \omega} \sqrt{\gamma} \; \delta(\omega+\omega') \delta^{(2)}(z-z') .\label{pbmode} \end{equation} For later purposes, we note that the relation of the mode functions (\ref{sigmaw}) with the Fourier transform of the News tensor (\ref{news}) is given by: \begin{equation} \sigma_{AB}(\omega,\hat{x}) = (2 i \omega)^{-1} N_{AB}(\omega,\hat{x}). \label{sigmanewsw} \end{equation} Following sections 5 of \cite{st} and 5.3 of \cite{virasoro} (see also \cite{am,frolov,dp1}), we can find the relation of (\ref{sigmaw}) with the creation/annihilation functions from standard perturbative gravity. Following \cite{virasoro} we take coordinates on past null infinity that are antipodally related to those of future null infinity. In that case the expressions relating `in' quantities take the same form as the expressions relating `out' quantities. 
The following discussion thus applies to either case. The `annihilation function' $a_\pm(\omega,\hat{x})$, $\omega >0$, of a helicity $\pm 2$ graviton is found to be given by: \begin{equation} a_+(\omega,\hat{x}) = \frac{4 \pi i }{\sqrt{\gamma}} \sigma_{zz}(\omega,\hat{x}), \; \quad a_-(\omega,\hat{x}) = \frac{4 \pi i }{\sqrt{\gamma}} \sigma_{\bar{z}\bar{z}}(\omega,\hat{x}). \label{asigma1} \end{equation} Since $\overline{\sigma_{zz}(\omega)}=\sigma_{\bar{z} \bar{z}}(-\omega)$, the relations for the `creation functions' have the opposite relation between helicity and holomorphic components: \begin{equation} a_+(\omega,\hat{x})^\dagger = -\frac{4 \pi i }{\sqrt{\gamma}} \sigma_{\bar{z} \bar{z}}(-\omega,\hat{x}), \; \quad a_-(\omega,\hat{x})^\dagger = -\frac{4 \pi i }{\sqrt{\gamma}} \sigma_{z z}(-\omega,\hat{x}) ,\label{asigma2} \end{equation} where in the present classical context the dagger just means complex conjugation. The Poisson bracket (\ref{pbmode}) implies \begin{equation} \{ a_h (\omega,\hat{x}), a_{h'}(\omega',\hat{x}')^\dagger \} = \frac{2 (2 \pi )^3}{i \omega \sqrt{\gamma}} \delta_{h h'} \delta(\omega-\omega') \delta(\hat{x},\hat{x}') , \label{PBaa} \end{equation} and corresponds to the Poisson brackets the functions have from the perspective of perturbative gravity: $\{a^h_{\vec{p}}, (a^{h'}_{\vec{q}})^\dagger \} = -i 2 E_{\vec{p}} \, \delta_{h h'} (2 \pi)^3 \delta^{(3)}(\vec{p}-\vec{q})$, with $\vec{p}= \omega \hat{x}$ and $\vec{q}= \omega' \hat{x}'$. \subsection{Action of BMS on mode functions} \label{sec4D} The action of BMS on the mode functions can be obtained by taking the Fourier transform of (\ref{boost1}). Here we are interested in rotations and boosts, so we focus attention on the action of a BMS vector field of the form (\ref{boostrot}). Taking the Fourier transform of (\ref{XxiAB}) one finds: \begin{equation} X_V (\sigma_{AB}(\omega,\hat{x}))= \mathcal{L}_{V} \sigma_{AB}(\omega,\hat{x}) - 2 \alpha \sigma_{AB}(\omega,\hat{x}) - \alpha \omega \partial_\omega \sigma_{AB}(\omega,\hat{x}). \label{boost2} \end{equation} From (\ref{asigma1}), (\ref{asigma2}) one can verify that the corresponding action on the creation/annihilation functions is given by the differential operator: \begin{equation} J^h_V := V^z \partial_z + V^{\bar{z}} \partial_{\bar{z}}- \frac{1}{2} (D_z V^z + D_{\bar{z}} V^{\bar{z}})\omega \partial_\omega + \frac{h}{2} ( \partial_z V^z - \partial_{\bar{z}} V^{\bar{z}}) , \label{JVh} \end{equation} according to \begin{equation} X_V (a_h(\omega,z,\bar{z}) ) = J^h_V a_h(\omega,z,\bar{z}) \ ; \ \quad X_V (a_h(\omega,z,\bar{z})^\dagger ) = J^{-h}_V a_h(\omega,z,\bar{z})^\dagger . \label{Xah} \end{equation} In quantum theory, $J^h_V$ represents the total angular momentum of a helicity $h=\pm 2$ graviton. \section{Generalized BMS and radiative phase space} \label{sec4} \subsection{Intrinsic characterization of generalized BMS group} From the perspective of null infinity, the proposed generalized BMS vector fields $\xi^a$ are given by supertranslations and vector fields of the form (\ref{boostrot}) with the CKV condition on $V^A$ dropped. The dropping of the CKV condition implies that $\xi^a$ does not preserve the universal structure $[(q_{ab},n^a)]$ described in section \ref{sec3.1}. It is natural to ask whether there is any other geometrical structure that is kept invariant under the action of generalized BMS. 
As we now show, such a geometrical structure is given by equivalence classes of pairs $[(\epsilon_{abc},n^a)]$ with $n^a$ as before, $\epsilon_{abc}$ the volume form satisfying $\mathcal{L}_n \epsilon_{abc}=0$, and the equivalence relation given by \begin{equation} (\epsilon_{abc},n^a) \sim (\omega^3 \epsilon_{abc},\omega^{-1} n^a) ,\quad \forall \omega: \mathcal{I} \to \mathbb{R} : \mathcal{L}_n \omega =0. \label{equiveps} \end{equation} First, we notice that any generalized BMS vector field still satisfies $\mathcal{L}_{\xi}n^a= - \alpha n^a$, whereas its action on the volume form is \cite{AS,aabook}: \begin{equation} \begin{array}{lll} {\cal L}_{\xi}\epsilon_{abc}\ =\ 3\alpha\epsilon_{abc}, \end{array} \end{equation} hence it keeps the pair $(\epsilon_{abc},n^a)$ in the same equivalence class (\ref{equiveps}). Conversely, one can verify that the group of symmetries of $[(\epsilon_{abc},n^a)]$ is given by the generalized BMS group. This can be shown along the same lines as the proof given for the BMS case \cite{aabook}. One finds that supertranslations are again a normal subgroup, and the quotient group is now the group of diffeomorphisms of the sphere. \subsection{An example: action on the radiative phase space of a massless scalar field} \label{sec5B} One example of a radiative phase space where the underlying kinematical structure is provided by the equivalence class of pairs $[(\epsilon_{abc}, n^{a})]$ is that of a massless scalar field \cite{AS}. As we show below, in this case it is indeed true that the generalized BMS group has a symplectic action. The symplectic structure of the radiative phase space $\Gamma_\phi$ of a massless scalar field $\phi$ is given by \cite{AS}: \begin{equation} \Omega_\phi(\phi_1,\phi_2)=\int d^3V (\phi_1 \mathcal{L}_n \phi_2 - \phi_2 \mathcal{L}_n \phi_1). \label{sympstrphi} \end{equation} The symplectic structure (\ref{sympstrphi}) is defined in terms of the pair $(\epsilon_{abc}, n^a)$ and there is a canonical isomorphism between different choices of pairs in the class (\ref{equiveps}) given by \cite{AS}: \begin{equation} (\epsilon_{abc}, n^a) \to (\omega^3 \epsilon_{abc},\omega^{-1} n^a), \quad \phi \to \omega^{-1} \phi . \label{isomphi} \end{equation} The action of a generalized BMS vector field $\xi^a$ on $\Gamma_\phi$ can be obtained as in the BMS case for gravity discussed in section \ref{sec4B} and appendix \ref{boostapp}: first compute the variation of $\phi$ under $\xi^a$ and then use the canonical isomorphism (\ref{isomphi}) to express the `transformed field' in the original `frame'. The result is: \begin{equation} X^\phi_\xi = \mathcal{L}_\xi \phi + \alpha \phi \label{xiphi}. \end{equation} The form (\ref{xiphi}) is the same as the one given in \cite{AS} for the action of BMS. It is easy to verify that (\ref{xiphi}) is symplectic and that $[X_{\xi}, X_{\xi'}]=X_{[\xi,\xi']}$.
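The closure rests on the identity $\alpha_{[V,W]} = V(\alpha_W) - W(\alpha_V)$ for the half-divergence $\alpha_V = (D_A V^A)/2$, which holds for the divergence of a commutator of vector fields on any Riemannian manifold. A minimal sympy sketch on the round sphere (the polynomial sample fields are illustrative choices; the check is local and does not rely on their global smoothness):
\begin{verbatim}
# Sketch: on the round sphere (gamma = 2/(1+z*zb)^2) the half-divergence
# alpha_V = (D_A V^A)/2 obeys alpha_[V,W] = V(alpha_W) - W(alpha_V),
# which is what makes X_xi = L_xi + alpha close on the scalar phase space.
import sympy as sp

z, zb = sp.symbols('z zbar')
gamma = 2/(1 + z*zb)**2

def alpha(Vz, Vzb):      # half the covariant divergence of V
    return (sp.diff(gamma*Vz, z) + sp.diff(gamma*Vzb, zb))/(2*gamma)

def act(Vz, Vzb, f):     # directional derivative V(f)
    return Vz*sp.diff(f, z) + Vzb*sp.diff(f, zb)

Vz, Vzb = z**2*zb, sp.Integer(0)         # sample non-CKV vector fields
Wz, Wzb = sp.Integer(0), z*zb**2
Bz  = act(Vz, Vzb, Wz)  - act(Wz, Wzb, Vz)     # [V,W]^z
Bzb = act(Vz, Vzb, Wzb) - act(Wz, Wzb, Vzb)    # [V,W]^zbar
lhs = alpha(Bz, Bzb)
rhs = act(Vz, Vzb, alpha(Wz, Wzb)) - act(Wz, Wzb, alpha(Vz, Vzb))
print(sp.simplify(lhs - rhs))            # -> 0
\end{verbatim}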
\subsection{The case of gravitational radiative phase space} \label{sec4C} Since generalized BMS does not preserve the universal structure $[(q_{ab},n^a)]$, and there is no (known to us) natural isomorphism between the various universal structures that generalized BMS can map to (namely those compatible with $[(\epsilon_{abc},n^a)]$), we lack a geometrical framework from which we can attempt to derive an action of generalized BMS on the radiative phase space of gravity. Thus, the strategy followed in sections \ref{sec4B} and \ref{sec5B} is not available. This problem is the phase-space counterpart of the issue discussed in section \ref{sec2.3}: as generalized BMS vector fields change the leading order metric at ${\cal I}$, it is not clear how to deduce an action of $\mathbf{G}$ on the free data.\\ We shall limit ourselves to presenting an ad hoc HVF $X_\xi$. The interest in this proposal lies in the fact that the associated ``Ward identities'' will be shown to be in one-to-one correspondence with the Cachazo-Strominger (CS) soft theorem.\\ There are, however, two shortcomings of our proposal which we hope to address in future investigations.\\ \noindent{(1)} The HVFs do not represent an action of generalized BMS since in general $[X_{\xi},X_{\xi'}] \neq X_{[\xi,\xi']}$.\footnote{The situation is thus analogous to the recently proposed symmetries for massless QED that follow from the subleading soft photon theorem \cite{lowsym}.} \\ \noindent{(2)} The HVFs do not respect the fall-off behaviour of the radiative data and hence, strictly speaking, are not well defined on the entire phase space. (This infrared divergence is also present when the underlying vector fields are local CKVs.) Our definition of the HVF is exactly the same as in (\ref{Xxiext}), where $V^{A}$ is an arbitrary (smooth) vector field on the sphere and $\alpha = (D_A V^A)/2$. It is a sum of a hard and a soft term: \begin{equation} X_V = X^{\rm hard}_V + X^{\rm soft}_V, \end{equation} where \begin{equation} (X^{\rm hard}_V)_{zz} = \mathcal{L}_V \sigma_{zz} - \alpha \sigma_{zz} +u \alpha \dot{\sigma}_{zz} \label{Xhom} \ ; \ \quad (X^{\rm soft}_V)_{zz} := -\frac{u}{2} D^3_{z} V^z , \end{equation} and the corresponding $z \to \bar{z}$ expressions for $(X_V )_{\bar{z} \bar{z}}$. It can be seen that $X^{\rm hard}_V$ preserves the fall-offs (\ref{falloffs}). Further, as shown in appendix \ref{sympapp}, it is symplectic: \begin{equation} \Omega(X^{\rm hard}_{V}(\sigma_1),\sigma_2) + \Omega(\sigma_1,X^{\rm hard}_{V}(\sigma_2)) =0 \quad \forall \; \sigma_1, \sigma_2 \in \Gamma. \label{svf} \end{equation} Being linear in $\sigma$, its Hamiltonian can be found by: \begin{equation} H^{\rm hard}_V(\sigma) := \frac{1}{2}\Omega(X^{\rm hard}_V(\sigma),\sigma), \end{equation} which leads to the same expression as the Hamiltonian for boosts (with the CKV condition on $V^A$ dropped). Unless $D^3_{z} V^z =D^3_{\bar{z}} V^{\bar{z}}=0$, $X^{\rm soft}_V$ diverges linearly in $u$ and hence is not well defined on $\Gamma$. At a formal level, however, $X^{\rm soft}_V$ is symplectic since it is just a c-number vector field. We can make sense of the `would-be' Hamiltonian on the subspace $\Gamma_0 \subset \Gamma$ given by: \begin{equation} \Gamma_0 := \{ \sigma_{AB} \in \Gamma : \sigma_{AB}(u,\hat{x}) = \sigma^{\pm}_{AB}(\hat{x}) + O(u^{-1-\epsilon}) \quad \text{as } \; u \to \pm \infty \} . \end{equation} For $\sigma \in \Gamma_0$ we define: \begin{equation} H^{\rm soft}_V(\sigma) := \Omega( X^{\rm soft}_V,\sigma) = - \int d^3 V u \; (\dot{\sigma}^{zz} D^3_z V^z+ \dot{\sigma}^{\bar{z}\bar{z}} D^3_{\bar{z}} V^{\bar{z}}). \label{Hinhom} \end{equation} \begin{comment} It will be useful for later purposes to express $H^{\rm soft}(\sigma)$ in terms of the Fourier transform of the News tensor (\ref{news}), \begin{equation} N_{AB}(\omega,\hat{x})= \int_{-\infty}^\infty N_{AB}(u,\hat{x}) e^{i \omega u} du. 
\label{newsw} \end{equation} We then have that $N_{AB}(\omega=0,\hat{x}) = -2 [\sigma_{AB}(\hat{x})]$ and \begin{equation} H^{\rm soft}_V(\sigma) = i \partial_{\omega} \oint d^2 V( N^{zz} D_z^3 V^z +N^{\bar{z} \bar{z}}D_{\bar{z}}^3 V^{\bar{z}} ) |_{\omega=0}. \label{Hinhom2} \end{equation} \end{comment} Finally, for $\sigma \in \Gamma_0$ the total Hamiltonian is defined by \begin{equation} H_V(\sigma) := H^{\rm hard}_V(\sigma) + H^{\rm soft}_V(\sigma). \end{equation} We will use these expressions to define the hard and soft operators in quantum theory. In \cite{virasoro} $X_{V}$ is derived directly from the action of $V^{A}$ on $C_{AB}$ as given in Eq. (\ref{deltavcab}). If we followed this prescription here, it would lead to an expression for $X_{V}$ different from the one given above. However, as the action of $\xi^a \partial_a = V^A \partial_A + u \alpha \partial_u$ changes the leading order metric at ${\cal I}$, this procedure is not applicable in this case. \subsection{Action of generalized BMS generators on mode functions} For $\omega \neq 0$, the action of $X_V$ on the mode functions $\sigma_{AB}(\omega,\hat{x})$ is fully determined by the term $X^{\rm hard}_V$. By taking the Fourier transform of (\ref{Xhom}) we arrive at the analogue of equation (\ref{boost2}) (with an additional `trace-free' symbol on the Lie derivative term). The corresponding action on the functions $a_{\pm}(\omega,\hat{x})$ is given by the same equations as in the boost/rotation case, (\ref{Xah}), with the CKV condition on $V^A$ dropped. We thus find: \begin{equation} \{ a_h(\omega,z,\bar{z}), H_V \} = J^h_V \, a_h(\omega,z,\bar{z}), \quad \{ a_h(\omega,z,\bar{z})^\dagger, H_V \} = J^{-h}_V \, a_h(\omega,z,\bar{z})^\dagger ,\label{PBaV} \end{equation} with $J^h_V$ the same differential operator given in Eq. (\ref{JVh}): \begin{equation} J^h_V = V^z \partial_z + V^{\bar{z}} \partial_{\bar{z}}- \frac{1}{2} (D_z V^z + D_{\bar{z}} V^{\bar{z}})\omega \partial_\omega + \frac{h}{2} ( \partial_z V^z - \partial_{\bar{z}} V^{\bar{z}}) . \label{JVh2} \end{equation} The non-closure of the HVFs $X_V$ manifests itself in a particularly simple form through the non-closure of the commutator of the operators $J^h_V$ for general smooth vector fields. A simple calculation reveals: \begin{equation} [J^h_{V}, J^h_{W}] a_h = J^h_{[V,W]} \, a_h + h \, (\partial_{\bar{z}} V^{z} \, \partial_z W^{\bar{z}} - \partial_z V^{\bar{z}}\, \partial_{\bar{z}}W^{z}) \, a_h. \end{equation} Thus, the `non-closure' is proportional to the helicity. This is in accordance with the discussion of section \ref{sec5B}: the action of generalized BMS on the mode functions of a massless scalar field lacks the helicity contribution and the non-closure term is absent there.
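The commutator formula can be verified symbolically. A minimal sympy sketch, implementing $J^h_V$ of Eq. (\ref{JVh2}) on the round sphere with sample (non-CKV) vector fields, which are illustrative choices:
\begin{verbatim}
# Sketch: implements J^h_V of Eq. (JVh2) and checks
# [J^h_V, J^h_W] a = J^h_[V,W] a + h*(d_zb V^z d_z W^zb - d_z V^zb d_zb W^z) a
# on a test function a(omega, z, zbar).
import sympy as sp

w, z, zb, h = sp.symbols('omega z zbar h')
gamma = 2/(1 + z*zb)**2
a = sp.Function('a')(w, z, zb)

def J(Vz, Vzb, f):
    alpha = (sp.diff(gamma*Vz, z) + sp.diff(gamma*Vzb, zb))/(2*gamma)
    return (Vz*sp.diff(f, z) + Vzb*sp.diff(f, zb) - alpha*w*sp.diff(f, w)
            + (h/2)*(sp.diff(Vz, z) - sp.diff(Vzb, zb))*f)

Vz, Vzb = z**2*zb, sp.Integer(0)     # sample smooth, non-CKV vector fields
Wz, Wzb = sp.Integer(0), z*zb**2
Bz  = (Vz*sp.diff(Wz, z) + Vzb*sp.diff(Wz, zb)
       - Wz*sp.diff(Vz, z) - Wzb*sp.diff(Vz, zb))       # [V,W]^z
Bzb = (Vz*sp.diff(Wzb, z) + Vzb*sp.diff(Wzb, zb)
       - Wz*sp.diff(Vzb, z) - Wzb*sp.diff(Vzb, zb))     # [V,W]^zbar
comm  = J(Vz, Vzb, J(Wz, Wzb, a)) - J(Wz, Wzb, J(Vz, Vzb, a))
extra = h*(sp.diff(Vz, zb)*sp.diff(Wzb, z)
           - sp.diff(Vzb, z)*sp.diff(Wz, zb))*a
print(sp.simplify(comm - J(Bz, Bzb, a) - extra))        # -> 0
\end{verbatim}
For the chosen samples the obstruction is $h\,z^2\bar{z}^2\,a$, nonvanishing precisely because $h \neq 0$.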
\section{Generalized BMS and subleading soft theorem} \label{sec5} In this section we show the equivalence between the CS soft theorem and generalized BMS symmetries. After summarizing the content of the soft theorem in section \ref{sec4A}, in section \ref{sec4A1} we propose the Ward identities for smooth vector fields belonging to the generalized BMS algebra. Although our derivation is simply a repeat of the derivation given in \cite{virasoro}, we express it in a slightly different form which facilitates the proof of the equivalence.\\ We then argue, in section \ref{sec4B1}, that the derivation of the Ward identities associated to the CS soft theorem as given in \cite{virasoro} goes through for smooth vector fields on the sphere. In section \ref{sec5.4} we show that using the Ward identities for the generalized BMS algebra, we can obtain the CS soft theorem. This derivation parallels the derivation for the case of supertranslations as mentioned in \cite{st}. We conclude in section \ref{sec5.5} with a comparison of this equivalence with the equivalence between the Ward identities for supertranslations and Weinberg's soft graviton theorem.\\ In the following we work with the Fock space $\mathcal{H}^{\rm out}$ generated by the standard creation/annihilation operators with nontrivial commutators given by $i$ times the PBs (\ref{PBaa}): \begin{equation} [ a^{\rm out}_h (\omega,\hat{x}), a^{\rm out}_{h'} (\omega',\hat{x}')^\dagger ] = \frac{2 (2 \pi )^3}{ \omega \sqrt{\gamma}} \delta_{h h'} \delta(\omega-\omega') \delta(\hat{x},\hat{x}') , \end{equation} and with the analogous Fock space $\mathcal{H}^{\rm in}$. The nature of the present section is rather formal. In particular, we do not construct the operator associated to $H_V$ but rather assume that (i) it is normal ordered so that its action on the vacuum is determined by the soft term; (ii) its commutator with creation/annihilation operators is given by $i$ times the PBs (\ref{PBaV}). Below we consider `in' and `out' states of the form: \begin{equation} \langle {\rm out} | := \langle 0 | \prod_{i=1}^{n_{\rm out}} a^{\rm out}_{h_i}(E^{\rm out}_i,\kh^{\rm out}_i) \; , \; \quad | {\rm in} \rangle := \prod_{i=1}^{n_{\rm in}} a^{\rm in}_{h_i}(E^{\rm in}_i,\kh^{\rm in}_i)^\dagger | 0 \rangle \label{inout}. \end{equation} The subleading soft operator which acts on asymptotic Fock states can be read off from Eq. (\ref{Hinhom}) and it precisely matches the operator $Q_{S}^{\rm out}$ given in \cite{virasoro}: \begin{equation} \begin{array}{lll} (H^{\rm soft}_V)^{\rm out}\ =\ \frac{1}{2}\int_{{\cal I}^{+}}du\, d^{2}z\, {D}_{z}^{3}V^{z}N^{z}_{\bar{z}}\ =\ Q_{\textrm{S}}^{\textrm{out}}\ . \end{array} \end{equation} \subsection{CS soft theorem} \label{sec4A} In this section we summarize the content of the CS soft theorem. We express the soft factor in terms of a vector field on the sphere appearing in Eq. (6.6) of \cite{virasoro}. This will facilitate the discussions in the subsequent sections. The CS subleading soft theorem for an outgoing soft graviton of helicity $h_s$ and momentum $q^\mu$ parametrized by $(\omega,z_s,\bar{z}_s)$ can be written as \cite{virasoro}:\footnote{The subsequent analysis can be easily extended to the case of incoming soft gravitons.} \begin{equation} \lim_{\omega \to 0^+} (1+ \omega \partial_\omega) \langle {\rm out} | a^{\rm out}_{h_s} (\omega,z_s,\bar{z}_s) S | {\rm in} \rangle = \sum_{i} S^{(1) \, h_s}_i \langle {\rm out} | S | {\rm in} \rangle , \label{CSthm} \end{equation} where the sum runs over all ingoing and outgoing particles. For an outgoing particle of momentum $k^\mu$ and helicity $h$ the soft factor is given by \cite{cs}: \begin{equation} S^{(1) \, h_s}_{(k,h)} = (q \cdot k)^{-1} \epsilon^{h_s}_{\mu \nu}(q) k^\mu q_\rho J^{\rho \nu}, \label{S1} \end{equation} where $\epsilon^{h_s}_{\mu \nu}(q)= \epsilon^{h_s}_\mu(q) \epsilon^{h_s}_\nu(q)$ is the polarization tensor of the soft graviton and $J^{\rho \nu}$ is the total angular momentum of the $(k,h)$ particle. Following Strominger and collaborators, we seek to express (\ref{S1}) in holomorphic coordinates. Let $(E,z,\bar{z})$ be the parametrization of the 4-momentum $k^\mu$. As discussed in section \ref{sec4D}, the total angular momentum can be expressed in terms of the differential operator $J^h_V$ given in Eq. 
(\ref{JVh}). The six CKVs corresponding to the $(\mu,\nu)$ components are: \begin{equation} V^A_{i 0} := D^A \hat{k}_i \, , \quad V^A_{i j} := \hat{k}_i D^A \hat{k}_j - \hat{k}_j D^A \hat{k}_i , \quad i,j=1,2,3, \; A=z,\bar{z}, \end{equation} so that, \begin{equation} J_{\mu \nu} = J^{h}_{V_{\mu \nu}}, \quad \mu,\nu=0,1,2,3 , \end{equation} where $\hat{k}$ is the unit direction on the sphere parametrized by $(z,\bar{z})$. For the polarization tensor we follow \cite{st,virasoro} and take: \begin{equation} \epsilon^+(q)^\mu = \frac{1}{\sqrt{2}}(\bar{z}_s,1,-i,-\bar{z}_s) , \quad \epsilon^-(q)^\mu = \frac{1}{\sqrt{2}}(z_s,1,i,-z_s) .\label{polvec} \end{equation} Notice that (\ref{S1}) takes the form of a function of $(z,\bar{z})$ times a linear combination of boosts and rotations (with coefficients depending on $z_s,\bar{z}_s$ and $h_s$). Thus, all $(z,\bar{z})$-independent factors multiplying $J^{\rho \nu}$ can be realized as linear combinations of CKVs. For instance: \begin{equation} \epsilon^{+}_{\nu}(q) q_\rho J^{\rho \nu} = J^{h}_{\epsilon^{+}_{\nu}(q) q_\rho V^{\rho \nu}}. \end{equation} Taking this into account, (\ref{S1}) can be written as: \begin{equation} S^{(1) \, +}_{(k,h)} = (z-z_s)^{-1} J^{h}_{(\bar{z}-\bar{z}_s)^2\partial_{\bar{z}} } \, , \quad S^{(1) \, -}_{(k,h)} = (\bar{z}-\bar{z}_s)^{-1} J^{h}_{(z-z_s)^2 \partial_{z}}. \label{S12} \end{equation} We finally show that (\ref{S12}) can in fact be written in terms of the vector fields, \begin{equation} K_{(+, \, z_s, \bar{z}_s)} := (z-z_s)^{-1}(\bar{z}-\bar{z}_s)^2\partial_{\bar{z}} \, , \quad K_{(-, \, z_s, \bar{z}_s)} := (\bar{z}-\bar{z}_s)^{-1} (z-z_s)^2 \partial_{z} , \label{defK} \end{equation} according to: \begin{equation} S^{(1) \, +}_{(k,h)} = J^{h}_{K_{(+, \, z_s, \bar{z}_s)}} \, , \quad S^{(1) \, -}_{(k,h)} = J^{h}_{K_{(-, \, z_s, \bar{z}_s)}}. \label{S13} \end{equation} Let us discuss the `$-$' case, the `$+$' one being analogous. From the definition (\ref{JVh2}) of $J^h_V$ one can verify: \begin{eqnarray} J^{h}_{(\bar{z}-\bar{z}_s)^{-1} (z-z_s)^2 \partial_{z}}= (\bar{z}-\bar{z}_s)^{-1} J^{h}_{(z-z_s)^2 \partial_{z}} + \frac{1}{2}( -E \partial_ E + h) (z_s-z)^2 \partial_z \frac{1}{(\bar{z}-\bar{z}_s)}. \end{eqnarray} The second term is proportional to $(z_s-z)^2 \delta^{(2)}(z,z_s)$. As long as (\ref{CSthm}) is understood as a distribution to be smeared against a smooth function on the sphere, this term vanishes and we obtain (\ref{S13}). \subsection{Proposed Ward identities} \label{sec4A1} In this section we motivate a proposal for the ``Ward identities''.\footnote{The quotation marks are placed to remind us that the proposed charges do not yield a representation of the generalized BMS algebra on the radiative phase space.} This proposal is a straightforward generalization of the Ward identities proposed for the local CKVs associated to the extended BMS algebra. We repeat the derivation here in the interest of pedagogy and for introducing notation for later use.\\ Consider the analogue of the Virasoro symmetry proposed in \cite{virasoro}, but with $V^A$ a smooth vector field on the sphere rather than a local CKV: \begin{equation} H^{\rm out}_V S= S H^{\rm in}_V \label{HxiS} . \end{equation} The evaluation of (\ref{HxiS}) between the states (\ref{inout}) is obtained by using the commutators (see Eq. 
(\ref{PBaV})): \begin{equation} [a^{\rm out}_h(\omega,\hat{x}),H^{\rm out}_V] = i J^{\; h}_V a^{\rm out}_h(\omega,\hat{x}) , \quad [ a^{\rm in}_h(\omega,\hat{x})^\dagger,H^{\rm in}_V ] = i J^{-h}_V a^{\rm in}_h(\omega,\hat{x})^\dagger , \label{commaR} \end{equation} together with the action of $H^{\rm in (out)}_V$ on the in (out) vacuum. This action is determined by the soft part of $H^{\rm in (out)}_V$ (\ref{Hinhom}). Following \cite{virasoro}, we express (\ref{Hinhom}) in terms of the Fourier transform of the News tensor so that (the prescription for the $\omega \to 0$ limit is described below): \begin{multline} H^{\rm in}_V | 0 \rangle =\\ \hspace*{0.1in}-\frac{i}{2}\lim_{\omega \to 0} \partial_{\omega} \oint d^2 V\left( N^{\rm in}_{z z}(\omega,\hat{x}) D^{z} D^{z} D_{\bar{z}} V^{\bar{z}} + N^{\rm in}_{\bar{z}\bar{z}}(\omega,\hat{x}) D^{\bar{z}} D^{\bar{z}} D_z V^z\right)| 0 \rangle , \\ \label{Hinvac} \end{multline} and a similar expression holds for $\langle 0 | H^{\rm out}_V$. The matrix element of (\ref{HxiS}) between the `in' and `out' states then implies: \begin{multline} \frac{1}{2}\lim_{\omega \to 0} \partial_\omega \oint d^2 V D^{z} D^{z} D_{\bar{z}} V^{\bar{z}}\\ \hspace*{0.5in}\left( \langle {\rm out} | N^{\rm out}_{z z}(\omega,\hat{x}) S | {\rm in} \rangle - \langle {\rm out} | S N^{\rm in}_{z z}(\omega,\hat{x}) | {\rm in} \rangle \right) + z \leftrightarrow \bar{z} = \\ \sum_i J^{h_i}_{V_i} \langle {\rm out} | S | {\rm in} \rangle. \label{HxiSme} \end{multline} The sum runs over all `in' and `out' particles, with the convention that for an `in' particle one takes $J^{h_i}_{V_i} = J^{-h^{\rm in}_i}_{V_i}$ according to (\ref{commaR}). We now focus on the LHS of (\ref{HxiSme}). First we need to specify how the $\omega \to 0$ limit is taken. We take $\omega \to 0^+$ in (\ref{HxiSme}) so that only the `out' term survives. This prescription is slightly different from the one given in \cite{virasoro}. However, it leads to the same form of the Ward identities as given in \cite{virasoro}.\footnote{For supertranslations, this prescription also leads to the same Ward identities as in \cite{st}.} With this prescription, and using Eqns. (\ref{sigmanewsw}), (\ref{asigma1}), the LHS of (\ref{HxiSme}) takes the form: \begin{equation} \text{LHS} = \frac{1}{4 \pi} \lim_{\omega \to 0^+} (1+ \omega \partial_\omega) \int d^2 z ( D^3_{\bar{z}} V^{\bar{z}} \langle {\rm out} | a^{\rm out}_+(\omega,\hat{x}) S | {\rm in} \rangle + D^3_{z} V^z \langle {\rm out} | a^{\rm out}_-(\omega,\hat{x}) S | {\rm in} \rangle ) \label{lhsward} \end{equation} where we used $\sqrt{\gamma}\sqrt{\gamma}\gamma^{z \bar{z}}\gamma^{z \bar{z}}=1$. Substituting Eq. (\ref{lhsward}) in Eq. (\ref{HxiSme}) we obtain the proposed identities. They take precisely the same form as the Virasoro Ward identities of \cite{virasoro}: \begin{multline} \frac{1}{4 \pi} \lim_{\omega \to 0^+} (1+ \omega \partial_\omega)\\ \hspace*{0.4in}\int d^2 z ( D^3_{\bar{z}} V^{\bar{z}} \langle {\rm out}| a^{\rm out}_+(\omega,\hat{x}) S | {\rm in} \rangle + D^3_{z} V^z \langle {\rm out}| a^{\rm out}_-(\omega,\hat{x}) S | {\rm in} \rangle ) = \\ \sum_i J^{h_i}_{V_i} \langle {\rm out} | S | {\rm in} \rangle. 
\label{wardid} \end{multline} \subsection{From CS theorem to generalized BMS symmetries}\label{sec4B1} The purpose of this section is to show that, remarkably enough, the derivation of the Virasoro Ward identities given in \cite{virasoro} does not make use of the CKV property of the vector fields in question, so that the identities hold for an arbitrary smooth vector field on the sphere.\footnote{In fact, due to their singular nature, it is not clear to us how the derivation works for local CKVs.} From Eqns. (\ref{CSthm}), (\ref{S13}), the CS theorem can be written as: \begin{equation} \lim_{\omega \to 0^+} (1+ \omega \partial_\omega) \langle {\rm out} | a^{\rm out}_{h_s} (\omega,z,\bar{z}) S | {\rm in} \rangle = \sum_{i} J^{h_i}_{K^i_{(h_s, \, z, \bar{z})}} \langle {\rm out} | S | {\rm in} \rangle . \label{CSthm2} \end{equation} Let $V^A \partial_A$ be any smooth vector field on the sphere. In the following we work with the $V^z \partial_z$ and $V^{\bar{z}} \partial_{\bar{z}}$ components separately. Multiplying the LHS of Eq. (\ref{CSthm2}) with $h_s=-2$ by $(4\pi)^{-1} D^3_z V^z $ and integrating over $(z,\bar{z})$, we obtain the LHS of the proposed Ward identity (\ref{wardid}) for the vector $V^z \partial_z$. The same operation on the RHS of (\ref{CSthm2}) is given by: \begin{equation} (4 \pi)^{-1} \sum_{i} \int d^2 z D^3_z V^z J^{h_i}_{K^i_{(-, \, z, \bar{z})}} \langle {\rm out} | S | {\rm in} \rangle = \sum_i J^{h_i}_{W_i}\langle {\rm out} | S | {\rm in} \rangle \label{SW} \end{equation} where \begin{equation} W_i := (4 \pi)^{-1} \int d^2 z D^3_z V^z {K^i_{(-, \, z, \bar{z})}} . \label{Wi} \end{equation} In order to integrate by parts in (\ref{Wi}), we need to specify the tensor index structure of $K^i_{(-, \, z, \bar{z})}$ with respect to the $(z,\bar{z})$ coordinates. This tensor structure is given by $a^{\rm out}_- (\omega,z,\bar{z}) \sim \sigma_{\bar{z}\bar{z}}/\sqrt{\gamma}$ due to Eqns. (\ref{asigma1}), (\ref{CSthm2}). Following \cite{virasoro}, this is captured by $\hat{\epsilon}_{\bar{z} \bar{z}} := \sqrt{\gamma}$. We thus obtain (to avoid confusion we set $K^i_- \equiv K^i_{(-, \, z, \bar{z})}$): \begin{multline} \int d^2 z D^3_z V^z K^{i}_{-} = \int d^2 z \sqrt{\gamma} \gamma^{z \bar{z}}\gamma^{z \bar{z}} D_z D_z (D_z V^z) (\hat{\epsilon}_{\bar{z} \bar{z}} K^{i}_{-}) = \\ - \int d^2 z \sqrt{\gamma} V^z D_z D^{\bar{z}} D^{\bar{z}} (\hat{\epsilon}_{\bar{z} \bar{z}} K^{i}_{-}) = 4 \pi V^{z}(z_i,\bar{z}_i) \partial_{z_i} , \end{multline} where in the last equation we used an identity given in Eq. (6.7) of \cite{virasoro}: \begin{equation} \gamma^{z \bar{z}} D^3_z( \hat{\epsilon}_{\bar{z} \bar{z}} K^{i}_{-}) = -4 \pi \delta^{(2)}(z-z_i) \partial_{z_i}. \end{equation} Using this result back in (\ref{SW}) we recover the RHS of the proposed Ward identity (\ref{wardid}) for the vector $V^z \partial_z$. A similar discussion applies for $h_s=+2$ and the vector $V^{\bar{z}} \partial_{\bar{z}}$. Adding the two results one obtains the Ward identity (\ref{wardid}) for the vector field $V^A \partial_A$. \subsection{From Ward identity to soft theorem} \label{sec5.4} The CS theorem can be recovered as the Ward identity associated to the vector fields (\ref{defK}).\footnote{As in the case of supertranslations, this derivation requires a choice of a non-smooth (in the present case $C^{1}$) vector field. 
It is understood that this is due to the use of sharp momentum eigenstates.} For the case of an outgoing negative helicity soft graviton with direction $(z_s, \bar{z}_s)$, we choose $V^A$ in (\ref{wardid}) to be \begin{equation} V^A \partial_A = K_{(-, \, z_s, \bar{z}_s)} = (\bar{z}-\bar{z}_s)^{-1}(z-z_s)^2\partial_{z}. \end{equation} One can verify that \begin{equation} D^3_{z} K_{(-, \, z_s, \bar{z}_s)}^{z} = 4 \pi \delta^{2}(z-z_s). \label{D3K} \end{equation} Using (\ref{D3K}) in (\ref{wardid}) we recover the CS theorem (\ref{CSthm2}) for $h_s=-2$. A similar discussion applies for a positive helicity soft graviton.\\ \subsection{Comparison with supertranslation case} \label{sec5.5} We now note the following subtlety regarding this equivalence. Recall that Weinberg's soft graviton theorem is equivalent to the Ward identities associated to the supertranslation symmetries \cite{st}. As supertranslations are parametrized by a single function, it is rather surprising that the associated Ward identities can give rise to the soft graviton theorem for both positive and negative helicity soft particles. That this is possible is due to a so-called global constraint which underlies the definition of CK spaces. On future null infinity, it is given by: \begin{equation} [D_{z}^{2}C_{\bar{z}\zb}\ -\ D_{\bar{z}}^{2}C_{zz}]_{{\cal I}^{+}_{\pm}}\ =\ 0 . \end{equation} It can be re-written in terms of the zero mode of the News tensor as \begin{equation} D_{z}^{2}N_{\bar{z}\zb}^{\textrm{out}}(\omega=0,\hat{x})\ =\ D_{\bar{z}}^{2}N_{zz}^{\textrm{out}}(\omega=0,\hat{x}). \end{equation} This constraint ensures that the operator insertions due to positive and negative helicity soft gravitons are equivalent to each other. (For more details we refer the reader to \cite{strominger2}.) This is consistent with the remarkable structure of Weinberg's soft term, which does not depend on the angular momenta of the external particles. However, this constraint does not imply that the operator insertions associated to ``subleading" soft positive helicity gravitons (i.e. when the leading order pole is projected out from the insertion) are equivalent to those of negative helicity gravitons. This is consistent with the fact that the subleading theorem is equivalent to the Ward identity associated to vector fields on the sphere which are parametrized by two independent functions. This is in turn reflected in the structure of the subleading CS soft term, which depends on the angular momenta of the scattering particles. \section{Outlook} Motivated by the desire to understand the subleading soft graviton theorem as arising from Ward identities associated to asymptotic symmetries, we considered a generalization of the BMS group \emph{distinct} from the one proposed in \cite{barnich1}. We showed that $\mathbf{G}$, which is essentially obtained by dropping a single condition from the definition of the BMS group (namely, that the vector fields defined on the conformal sphere be CKVs), is a semi-direct product of supertranslations and diffeomorphisms of the conformal sphere, $\mathbf{G} = \textrm{ST} \rtimes \textrm{Diff}(S^{2})$. We argued that $\mathbf{G}$ acts as a symmetry group on the space of all asymptotically flat geometries which are in a suitable neighborhood of Minkowski space-time. 
\\ Associated to the vector fields which generate $\textrm{Diff}(S^{2}) = \mathbf{G}/\textrm{{ST}}$, we proposed a definition of the flux in the radiative phase space of Ashtekar, motivated by the definition of the corresponding flux for the Virasoro symmetries. The reason why we have not been able to derive this flux expression from first principles (as one can do for any vector field belonging to the extended BMS group) can be most easily understood as follows.\footnote{For pedagogy we restrict our attention to future null infinity.} \\ In the case of Virasoro symmetries, the derivation of the flux in the radiative phase space is based upon the action of extended BMS vector fields on the free data quantified by $C_{AB}$ \cite{virasoro}. $C_{AB}$ is the free (radiative) data in the sense that it is unconstrained and that all the other dynamical metric components in the neighborhood of null infinity are determined from Einstein's equations using $C_{AB}$. However, what constitutes the free data is ``frame dependent" in the sense that it depends on the chosen `kinematical', leading order metric at null infinity. As the extended BMS group preserves the leading order metric at ${\cal I}^{+}$, it maps given radiative data into different radiative data. Because $\mathbf{G}$ changes the leading order structure of the metric components, we have been unable to derive the action of its proposed flux from first principles. However, we think that its appeal lies in the fact that the related Ward identities are equivalent to the subleading soft graviton theorem.\\ Yet another unresolved issue with $H_{V}$ (as is also the case for the new class of asymptotic symmetries proposed for massless QED \cite{lowsym}) is that the fluxes associated to $\mathbf{G}$ do not form a closed algebra. It is conceivable that this is due to the fact that the radiative phase space of Ashtekar is based upon the existence of a fixed kinematical structure (namely, the conformal metric on the sphere and the null vector field $n^{a}$), which is in turn tied to the existence of a fixed space-time metric at leading order in $r$. This expectation is borne out by the fact that in the case of a massless scalar field, where the radiative phase space does not refer to the entire conformal metric but only to the volume form, these symmetries do indeed form a closed algebra.\footnote{Note that if this expectation turned out to be true, then both of the issues mentioned above are two sides of the same coin.}\\ In light of the above, there appear to be certain natural directions in which a systematic derivation of the fluxes associated to $\mathbf{G}$ (such that they form a closed algebra) could be obtained, namely by weakening the dependence of the radiative phase space on the universal structure. A detailed implementation of this idea is currently under investigation.\\ In summary, our proposal for $\mathbf{G}$ as a group of asymptotic symmetries for low energy gravitational scattering processes is at best a tentative one. However, due to its relationship with the subleading soft theorem, we believe that further investigation of the above mentioned issues is warranted.\\ \noindent {\bf Acknowledgements}\\ We are indebted to Abhay Ashtekar for stimulating discussions and suggestions. We are grateful to A. P. Balachandran and Sachindeo Vaidya for insightful discussions on asymptotic symmetries in gauge theories. 
We would also like to thank the participants of the workshop ``Asymptotia" held at the Chennai Mathematical Institute for many discussions related to the BMS group. We thank Burkhard Schwab and an anonymous referee for their comments on the manuscript. AL is supported by a Ramanujan Fellowship of the Department of Science and Technology.
\section{Introduction} A number of researchers have shown that ear recognition is a viable alternative to more common biometrics such as fingerprint, face and iris~\cite{Chang2003,Kumar2012,Yan2005}. The ear is stable over time, less invasive to capture, and does not require as much control during image acquisition as other biometrics. It is also reasonable to assert that the ear raises fewer privacy issues than the face. Traditionally, ear recognition research has been performed on ear images captured in an ideal setting, where all ears are captured in the same position, with identical lighting and identical resolution. With the advances in computer vision and pattern recognition techniques, ear recognition research is shifting to a more challenging scenario whereby ear images are acquired from real world (unconstrained) settings~\cite{Emersic2017,Emersic2017b}. It is more difficult to recognize ears in the wild; in this paper we use "ears in the wild" and "unconstrained ears" interchangeably. Figure~\ref{fig1} illustrates the difficulty of recognizing individuals through ear images in the wild. \begin{figure}[!ht] \centering \includegraphics[height=1.7in]{figs/fig1a.png}\hfill \includegraphics[height=1.7in]{figs/fig1b.png}\hfill \includegraphics[height=1.7in]{figs/fig1c.png}\hfill \includegraphics[height=1.7in]{figs/fig1d.png}\hfill \includegraphics[height=1.7in]{figs/fig1e.png} \caption{Example of a challenging task for ear recognition in an unconstrained setting: given five images of four different subjects, can you tell which pair of images belongs to the same person?} \label{fig1} \end{figure} This example mostly illustrates the problem of pose variation, but many other factors may affect the recognition performance: different acquisition devices, low resolution, illumination variations, occlusions caused by hair and head accessories, earrings, ear plugs and so on. To overcome these challenges, ear recognition has to achieve good results for non-cooperative subjects. This will make ear biometric recognition very useful for practical purposes, like video surveillance and continuous authentication. In order to carry out the task of recognizing humans through their ears, a common sequence of steps is followed (see Figure~\ref{fig_diagram}): \begin{figure}[!ht] \centering \includegraphics[width=12.0cm]{figs/diagram.png} \caption{Diagram of our ear recognition framework. Given an unconstrained image, ears are cropped using ground truth annotations. Landmark detection is performed to obtain the information required to normalize pose and scale variations. Normalized images are described by different feature extractors and matched through distance metrics. Scores are fused and a recognition decision is made.} \label{fig_diagram} \end{figure} \begin{description} \item[\bf Acquisition step:]~captures a digital biometric sample using an appropriate sensor. For all of our experiments we use images from five publicly available databases (Section~\ref{sec_databases}). \item[\bf Localization step:]~locates the biometric information and separates it from irrelevant parts of the acquired sample. The images we used were either already cropped or the ground truth location of the ears was provided; thus we do not perform the localization step. However, successful approaches that perform ear detection in the wild can be found in the literature~\cite{Zhang2017,Emersic2017c}. 
\item[\bf Normalization step:]~reshapes the input sample to a standard format to reduce unwanted variations. We use a landmark detector based on Convolutional Neural Networks (CNN)~\cite{Krizhevsky2012} to locate a set of 55 landmarks (Section~\ref{sec_landmark}), which are then employed to translate, rotate and scale the input image to a standard configuration (Section~\ref{sec_normalization}). \item[\bf Feature description step:]~selects discriminant features from a normalized sample and usually reduces its dimensionality. We use a state-of-the-art CNN architecture for face recognition for the task of ear recognition in the wild (Section~\ref{sec_learned}), as well as different traditional ear description approaches (Sections~\ref{sec_pca}~and~\ref{sec_handcrafted}). \item[\bf Recognition step:]~compares descriptors and decides whether they belong to the same person or not. All images are compared to each other using the descriptor's distance metric. All scores are normalized using min-max normalization~\cite{Jain2005}, and score level fusion~\cite{Kittler1998} is then used to combine the results of different descriptors and inform the decision (Section~\ref{sec_fusion}). \end{description} After implementing our framework and evaluating the results, we reached the following contributions: \begin{itemize} \item we designed and developed a two-stage CNN-based landmark detector that achieves accurate results even in the presence of variations not seen in the training data (Section~\ref{sec_landmark}). We use the detector to automatically normalize images and immediately observed a boost in the recognition rate; \\ \item we devised a CNN-based ear descriptor, built upon a state-of-the-art face recognition architecture, that outperformed other state-of-the-art CNN-based ear recognition works; \\ \item we showed that handcrafted and learned descriptors are complementary, and thus a considerable increase in performance can be reached when both are fused. \end{itemize} \section{Databases} \label{sec_databases} Many factors can affect the performance of ear recognition, and some sets of ear images are easier than others. Therefore, it is a good idea for researchers to test on multiple image datasets when feasible, progressing from ideal images to unconstrained images that are more difficult to recognize. In this work we use five different databases to train and evaluate our ear recognition framework: the Indian Institute of Technology Delhi Ear Database (IIT), the West Pomeranian University of Technology Ear Database (WPUTE), the Annotated Web Ears database (AWE), the In-the-wild Ear Database (ITWE) and the Unconstrained Ear Recognition Challenge database (UERC). More details about each of them are given in the subsequent sections. \subsection{Indian Institute of Technology Delhi Ear Database} The IIT database~\cite{Kumar2012} was released in two different formats, a raw version and a normalized version. We use the raw version for our experiments. It contains 493 images with size $272\times204$ from 125 different subjects. Each image shows a small region around the left ear and was collected in an indoor environment with a well-controlled acquisition setup, which makes this database a suitable benchmark for a nearly ideal ear recognition scenario. Figure~\ref{fig_iit} shows some raw images provided by the IIT database. 
\begin{figure}[!ht] \centering \subfloat[]{\label{fig_iit}\includegraphics[height=0.84in]{figs/iit1.png}\includegraphics[height=0.84in]{figs/iit2.png}\includegraphics[height=0.84in]{figs/iit3.png}} \\ \subfloat[]{\label{fig_wpute}\includegraphics[height=1.1in]{figs/wp1.png}\includegraphics[height=1.1in]{figs/wp4.png}\includegraphics[height=1.1in]{figs/wp5.png}\includegraphics[height=1.1in]{figs/wp6.png}} \\ \subfloat[]{\label{fig_awe}\includegraphics[height=1.44in]{figs/awe1.png}\includegraphics[height=1.44in]{figs/awe2.png}\includegraphics[height=1.44in]{figs/awe3.png}\includegraphics[height=1.44in]{figs/awe4.png}} \\ \subfloat[]{\label{fig_itwe}\includegraphics[height=1.2in]{figs/itwe1.png}\includegraphics[height=1.2in]{figs/itwe2.png}\includegraphics[height=1.2in]{figs/itwe3.png}\includegraphics[height=1.2in]{figs/itwe4.png}} \caption{Example of images from (a) IIT, (b) WPUTE, (c) AWE and (d) ITWE databases. While all IIT images have a similar controlled disposition ({\it i.e.} pose, resolution, illumination), the other three databases are unconstrained and present different challenges for recognition: occlusions caused by hair and head accessories, ear plugs, earrings, variations in illumination and pose. The AWE and ITWE images have the most variation in terms of resolution and age. The WPUTE images of a given subject, on the other hand, were mostly acquired in a single session and all have the same size.} \label{fig_databases} \end{figure} \subsection{West Pomeranian University of Technology Ear Database} The WPUTE database~\cite{Frejlichowski2010} was originally created to evaluate the performance of ear recognition in the wild. It contains images that state-of-the-art ear recognition approaches could not handle at that time. The images reflect the challenges associated with ear recognition, such as occlusions caused by hair, earrings, and ear plugs. The database also provides images with variations in gender, ethnicity, pose, illumination and acquisition sensor. However, because the vast majority of images from the same person were acquired in a single session, intraclass variation is minimal. Thus, although the preprocessing step was heavily affected by these variations, some of the variations could in fact benefit the recognition task ({\it e.g.} a person wearing the same earring in all acquisitions). This database provides 3348 images with size $380\times500$ from 474 different subjects ({\it i.e.} each subject has at least 4 images) showing a small region around the ear. However, 1388 of them are duplicates, which may have inflated the reported accuracy of some works in the literature~\cite{Zhou2017}, and we also found 6 images that were mistakenly labeled as left ears while they actually were right ears. After removing duplicates and fixing labels, 1960 images are available for use, 982 from left ears and 978 from right ears. Some examples of WPUTE images are shown in Figure~\ref{fig_wpute}. \subsection{Annotated Web Ears database} The AWE database~\cite{Emersic2017} contains 1000 images from 100 different subjects ({\it i.e.} 10 images per subject) which were collected from searches for public figures on the Internet. Image size varies from $15\times29$ to $473\times1022$ pixels, with size $83\times160$ on average. Ears were tightly cropped, so the proportion of background pixels is the smallest among all databases used in this work. All variations present in the WPUTE database are also present in the AWE database, in a more intense form. 
Although it labels ears as left and right, with 520 and 480 images respectively, the images may have been inadvertently flipped horizontally before being released on the Internet, so it is possible that there are some noisy labels. Some of the challenges encountered in the AWE database are exemplified in Figure~\ref{fig_awe}. \subsection{In-the-wild Ear Database} The ITWE database~\cite{Zhou2017} is divided into two sets, Collection A and Collection B. Collection A was collected using Google image search and contains 605 images without identity reference, but with 55 manually annotated landmarks. The position of these landmarks can be observed in Figure~\ref{fig_diagram}. This collection was randomly split into a training set with 500 images and a test set with 105 images. It is suitable for training ear detection and normalization approaches, but not for recognition purposes. For this reason, Collection B was created for recognition evaluation and contains 2058 images from 231 different subjects taken from three public databases for face recognition in the wild: VGG-Face~\cite{Parkhi2015}, LFW~\cite{Huang2007} and Helen Dataset~\cite{Le2012}. Bounding boxes for each ear were obtained by a detector based on histograms of oriented gradients (HOG)~\cite{Dalal2005} which was trained on images from Collection A, and these box coordinates were released together with this collection. Images in both Collection A and Collection B include cluttered backgrounds ({\it e.g.} face, body parts, scenario) and vary considerably in size and ear resolution. Variations in ear images of the ITWE database are comparable to the AWE database ones, but there is no differentiation between left and right ears ({\it i.e.} the ITWE images are horizontally flipped so that all have the same orientation), which is a problem for recognizing people with asymmetric ears ({\it i.e.} about 10\% of people according to Yan~and~Bowyer~\cite{Yan2005}). In addition, we were able to find many mislabeled samples. We did not fix any of them, for comparison purposes. Some examples of ITWE images are presented in Figure~\ref{fig_itwe}. \subsection{Unconstrained Ear Recognition Challenge database} The UERC database~\cite{Emersic2017b} is an extension of the AWE database and was built for competition purposes. The major change is the number of images and subjects, which increased considerably. The database is divided into two parts, with 2304 images from 166 subjects for training and 9500 images from 3540 subjects for testing. The subjects designated for training have at least 10 images, while subjects in the test set may have only one image. A portion of the subjects in training and testing ({\it i.e.} 150 and 180 subjects, respectively) has exactly 10 images. Ears may be left or right oriented, but ground truth annotations of the orientation are only available for training images. \subsection{Discussion} The five sets of images we use have different levels of difficulty, allowing us to conduct a fair test, evaluate the performance of our framework implementation, and compare our results to the state of the art. While the IIT ear images are not unconstrained, they can be used to detect overfitting to the wild scenarios ({\it i.e.} using images from easier databases should always result in higher accuracy), a problem that was already observed in works that recognize faces in the wild~\cite{Dahia2017}. 
Although all the remaining databases are unconstrained, based on their descriptive characteristics, we conclude that WPUTE and UERC are respectively the least and the most challenging unconstrained image sets, while AWE and ITWE have a similar difficulty level. \section{Landmark detection} \label{sec_landmark} Even with the recent emergence of deep learning methods for biometric recognition in uncontrolled scenarios, normalization is still necessary to achieve better results. For instance, a landmark-based orientation and scale normalization is a standard procedure in state-of-the-art face recognition works~\cite{Wen2016,Wu2015}. With this in mind, we pursued a similar path for the ear recognition problem by investigating the use of CNNs for the landmark detection task. To this end, we use images and annotations provided in Collection A of the ITWE database for CNN training and accuracy evaluation. As only 500 images are available for training, we performed different data augmentation operations in order to avoid overfitting and increase the network's generalization power. For each training image, we use principal component analysis (PCA)~\cite{Abdi2010} on the 2D coordinates of the annotated landmarks to obtain the upright orientation of the ear ({\it i.e.} we assume it corresponds to the direction of the first component). Then, we create multiple images by rotating the upright ear from $-45^\circ$ to $45^\circ$ with steps of $3^\circ$. Each ear is also transformed by a random scale change of up to 20\% of the original ear size in both axes, as well as a random translation of up to 20\% of the original ear size in each axis. After applying all these modifications, images were rescaled to 96$\times$96 pixels and we ended up with 15500 training images. The architecture of our network is based on a design that is now common, even for landmark detection~\cite{Sun2013}: alternating convolution and max pooling layers in the beginning, followed by a sequence of fully-connected layers. We use rectified linear units in convolution and fully-connected layers to train models from scratch. We also added dropout after all max pooling layers and the first fully-connected layer to avoid overfitting the training data. A complete description of our architecture is presented in Table~\ref{cnn-landmarks}. It was implemented using TensorFlow, and the optimization to minimize the mean squared error in the output was carried out by Nesterov's momentum algorithm~\cite{Sutskever2013} for 2000 epochs. \begin{table}[!h] \caption{Network architecture for landmark detection in ear images. 
It receives as input a grayscale image with 96$\times$96 pixels and outputs a 110-dimensional vector representing 2D coordinates for 55 predefined landmarks.\label{cnn-landmarks}}{ \centering \small \begin{tabular}{c|c|c|c|c|c|c} \textbf{\#} & \textbf{Type} & \textbf{Input} & \textbf{Filter} & \textbf{Stride} & \textbf{Drop} & \textbf{Output} \\ \hline 1 & Conv/Relu & 96$\times$96$\times$1 & 3$\times$3$\times$1$\times$32 & 1 & & 96$\times$96$\times$32 \\ 2 & MaxPool & 96$\times$96$\times$32 & 2$\times$2 & 2 & 10\% & 48$\times$48$\times$32 \\ 3 & Conv/Relu & 48$\times$48$\times$32 & 2$\times$2$\times$32$\times$64 & 1 & & 48$\times$48$\times$64 \\ 4 & MaxPool & 48$\times$48$\times$64 & 2$\times$2 & 2 & 20\% & 24$\times$24$\times$64 \\ 5 & Conv/Relu & 24$\times$24$\times$64 & 2$\times$2$\times$64$\times$128 & 1 & & 24$\times$24$\times$128 \\ 6 & MaxPool & 24$\times$24$\times$128 & 2$\times$2 & 2 & 30\% & 12$\times$12$\times$128 \\ & Flattening & 12$\times$12$\times$128 & & & & 18432 \\ 7 & Fc/Relu & 18432 & & & 50\% & 1000 \\ 8 & Fc/Relu & 1000 & & & & 1000 \\ 9 & Fc & 1000 & & & & 110 \end{tabular} }{} \end{table} Although this network achieved a good accuracy considering the level of variations in unconstrained scenarios, we evaluated a two-stage solution, whereby the first network is used to create an easier landmark detection scenario by reducing scale and translation variations, and the second network is used to generate the 2D coordinates for landmarks. To this end, we use the coordinates obtained by the network described above to refine the center and the orientation of an ear using PCA, and then feed the rectified image to a second network that was trained in a more controlled scenario. The second network has the same architecture and optimization procedure as the first one; the only difference is the training data, which uses less variation in the augmentation process. Rotations are performed from $-15^\circ$ to $15^\circ$ with steps of $1^\circ$, and random scale and translation changes are limited to up to 10\% of the original ear size. \section{Geometric normalization} \label{sec_normalization} After landmark detection, we normalize the ears by applying PCA on the retrieved landmarks. We use the first component as the orientation of the ear and the center of the oriented bounding box as the center of the ear. We then interpolate a $128\times128$ image with these parameters, considering that the distance between the center of the ear and the top of the image is equal to two times the square root of the first eigenvalue in the original image. However, as ears in the wild may present significant pose variations, this also results in width variations that may affect the recognition performance, as shown in Figures~\ref{fig_norm1}~and~\ref{fig_norm2}. Thus, we use different sampling rates in the $x$ and $y$ directions, in a way that the distance between the center of the ear and one side of the image is equal to two times the square root of the second eigenvalue in the original image. This way, the width and the height of the normalized ear are approximately the same, as may be seen in Figures~\ref{fig_norm3}~and~\ref{fig_norm4}, and image variations caused by pose become less intense. 
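As an illustration, a minimal sketch of this landmark-based normalization is given below. It assumes the detected landmarks are available as an array of $(x,y)$ coordinates; function and variable names are ours, and the actual implementation may differ in details such as the sign convention of the principal axes.
\begin{verbatim}
import cv2
import numpy as np

def normalize_ear(image, landmarks, out_size=128):
    """Warp an ear image to a canonical pose via PCA on its landmarks.

    `landmarks` is an (N, 2) array of (x, y) points, e.g. the 55
    landmarks returned by the detector.
    """
    center = landmarks.mean(axis=0)
    centered = landmarks - center
    # Eigen-decomposition of the 2x2 landmark covariance matrix;
    # eigh returns ascending eigenvalues, so index 1 is the first
    # (major) component and index 0 the second (minor) component.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    major, minor = eigvecs[:, 1], eigvecs[:, 0]
    half_h = 2.0 * np.sqrt(eigvals[1])  # center-to-top distance
    half_w = 2.0 * np.sqrt(eigvals[0])  # center-to-side distance
    # Map the major axis to the vertical and the minor axis to the
    # horizontal, with independent sampling rates per axis so the
    # normalized ear has approximately equal width and height.
    # (The eigenvector signs may need fixing to avoid flipped ears.)
    src = np.float32([center,
                      center - major * half_h,   # towards the top
                      center + minor * half_w])  # towards one side
    dst = np.float32([[out_size / 2, out_size / 2],
                      [out_size / 2, 0],
                      [out_size, out_size / 2]])
    M = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(image, M, (out_size, out_size))
\end{verbatim}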
\begin{figure}[!ht] \centering \subfloat[]{\label{fig_norm1}\includegraphics[width=1.15in]{figs/norm1.png}}\hfill \subfloat[]{\label{fig_norm2}\includegraphics[width=1.15in]{figs/norm2.png}}\hfill \subfloat[]{\label{fig_norm3}\includegraphics[width=1.15in]{figs/norm3.png}}\hfill \subfloat[]{\label{fig_norm4}\includegraphics[width=1.15in]{figs/norm4.png}} \caption{Normalization results (a)-(b) with and (c)-(d) without the same sampling rate in both axes for two ear images of the same person with pose variations. Different sampling rates are employed to reduce the difference in the width of the ear caused by pose variations. While (b) is 25\% wider than (a), the difference in width between (c) and (d) is negligible.} \label{fig_norm} \end{figure} \section{Description and matching} \label{sec_recognition} We evaluate three different description and matching schemes based on 1) holistic image features, 2) handcrafted features and 3) learned features. We then investigate if fusing some of them can achieve a higher accuracy. More details are given in the following sections. \subsection{Holistic features} \label{sec_pca} PCA was one of the first methods applied to the ear recognition problem~\cite{Chang2003}, as it provides a holistic description of the sample images while reducing the dimensionality of the data. However, even the pioneering works using PCA already reported a performance drop caused by variations in pose and illumination, and such variations are much more intense in recent uncontrolled databases. We used a PCA implementation available in the Face Identification Evaluation System~\cite{Yambor2002} as a baseline approach, and its feature vectors were matched through the Mahalanobis distance. The first 20 eigenvectors were dropped to avoid illumination and background variations, and we kept 60\% of the eigenvectors in our PCA descriptor. \subsection{Handcrafted features} \label{sec_handcrafted} As holistic features are strongly affected by different variations, specialists designed different feature extraction approaches, known as handcrafted features, seeking to overcome some of these problems. Emersic~\emph{et~al.}~\cite{Emersic2017,Emersic2017b} released a toolbox that allows the extraction of the best performing state-of-the-art handcrafted features for ear recognition: local binary patterns (LBP), binarized statistical image features (BSIF), local phase quantization features (LPQ), rotation invariant LPQs (RILPQ), patterns of oriented edge magnitudes (POEM), HOG, dense scale-invariant feature transform (DSIFT) and Gabor wavelets. All descriptors were extracted using the default parameters of the toolbox. For matching, as in Emersic~\emph{et~al.}'s work~\cite{Emersic2017}, we compared histogram-based descriptors using the chi-square distance and Gabor descriptors using the cosine distance. \subsection{Learned features} \label{sec_learned} Considering that the performance of handcrafted descriptors degrades when using uncontrolled ear images~\cite{Emersic2017}, we employed CNNs so that we could boost performance and learn, directly from the images, how to describe them in a more discriminative and concise way. Our CNN follows a state-of-the-art architecture employed for face recognition in the wild~\cite{Wen2016}, and we trained it from scratch for the problem of ear recognition in the wild. 
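As a concrete illustration, a minimal TensorFlow/Keras sketch of such a network is given below, following the layer configuration of Table~\ref{cnn-recognition} below. Layer naming is ours, and we interpret the table's two flattening branches as the outputs of the last pooling and last convolution layers (both $8\times8\times256$); the actual implementation may differ in such details.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_descriptor_net():
    """Descriptor network sketch: 128x128 grayscale in, 512-d feature out."""
    x_in = layers.Input(shape=(128, 128, 1))
    x = layers.Conv2D(128, 3, padding='same', activation='relu')(x_in)
    x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.1)(x)
    x = layers.Conv2D(128, 3, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.3)(x)
    x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
    pooled = layers.MaxPooling2D(2)(x)                       # 8x8x256
    conv = layers.Conv2D(256, 3, padding='same',
                         activation='relu')(pooled)          # 8x8x256
    # Concatenate the flattened outputs of the last pooling and the
    # last convolution layer (16384 + 16384 = 32768 features), then
    # project to the 512-dimensional descriptor (no activation).
    merged = layers.Concatenate()([layers.Flatten()(pooled),
                                   layers.Flatten()(conv)])
    feat = layers.Dense(512)(merged)
    return tf.keras.Model(x_in, feat)
\end{verbatim}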
We present a complete description of the chosen CNN architecture as well as specific layer configurations in Table~\ref{cnn-recognition}. This network was also implemented using TensorFlow, and the Adam optimization algorithm~\cite{Kingma2014} was used to minimize the weighted sum of softmax and center losses. As in Wen~\emph{et~al.}'s~work~\cite{Wen2016}, we set the center loss weight to $0.003$. \begin{table}[!h] \caption{Network architecture for feature extraction in ear images. It receives as input a grayscale image with 128$\times$128 pixels and outputs a 512-dimensional vector containing a discriminative feature representation of the input image.\label{cnn-recognition}}{ \footnotesize \begin{tabular}{c|c|c|c|c|c|c} \textbf{\#} & \textbf{Type} & \textbf{Input} & \textbf{Filter} & \textbf{Stride} & \textbf{Drop} & \textbf{Output} \\ \hline 1 & Conv/Relu & 128$\times$128$\times$1 & 3$\times$3$\times$1$\times$128 & 1 & & 128$\times$128$\times$128 \\ 2 & Conv/Relu & 128$\times$128$\times$128 & 3$\times$3$\times$128$\times$128 & 1 & & 128$\times$128$\times$128 \\ 3 & MaxPool & 128$\times$128$\times$128 & 2$\times$2 & 2 & 10\% & 64$\times$64$\times$128 \\ 4 & Conv/Relu & 64$\times$64$\times$128 & 3$\times$3$\times$128$\times$128 & 1 & & 64$\times$64$\times$128 \\ 5 & MaxPool & 64$\times$64$\times$128 & 2$\times$2 & 2 & 20\% & 32$\times$32$\times$128 \\ 6 & Conv/Relu & 32$\times$32$\times$128 & 3$\times$3$\times$128$\times$256 & 1 & & 32$\times$32$\times$256 \\ 7 & MaxPool & 32$\times$32$\times$256 & 2$\times$2 & 2 & 30\% & 16$\times$16$\times$256 \\ 8 & Conv/Relu & 16$\times$16$\times$256 & 3$\times$3$\times$256$\times$256 & 1 & & 16$\times$16$\times$256 \\ 9 & MaxPool & 16$\times$16$\times$256 & 2$\times$2 & 2 & & 8$\times$8$\times$256 \\ 10 & Conv/Relu & 8$\times$8$\times$256 & 3$\times$3$\times$256$\times$256 & 1 & & 8$\times$8$\times$256 \\ & Flattening 8 & 8$\times$8$\times$256 & & & & 16384 \\ & Flattening 9 & 8$\times$8$\times$256 & & & & 16384 \\ & Concat 8\&9 & 16384/16384 & & & & 32768 \\ 11 & Fc & 32768 & & & & 512 \\ \end{tabular} }{} \end{table} This CNN outputs 512-dimensional descriptors that can be matched through the cosine distance, making the entire processing time ({\it i.e.} description and matching) comparable to that of the handcrafted descriptors presented in Section~\ref{sec_handcrafted}. For a given training set, the network optimization was performed in batches of 128 images for 1000 epochs using softmax loss only, and then the weighted sum of softmax and center losses was used until convergence was reached ({\it i.e.} no improvement after 50 epochs). As in Section~\ref{sec_landmark}, we performed data augmentation operations to increase the number of training images: random rotation between $-10^\circ$ and $10^\circ$; random crop with 85\% to 100\% of the original image size; random contrast change increasing or decreasing the range of pixel intensities by up to 50\%. To help understand what kind of patterns are being encoded by the CNN, we provide a visual analysis of some of the learned filters in Figure~\ref{fig_cnnvis}. 
\begin{figure}[!ht] \centering \includegraphics[width=3.9cm]{figs/a1.png}\hfill\includegraphics[width=3.9cm]{figs/c1.png}\hfill\includegraphics[width=3.9cm]{figs/b1.png}\\ \vspace{0.5cm} \hfill1st MaxPool\hfill~~~~~\hfill1st MaxPool\hfill~~~~~\hfill1st MaxPool\hfill~\\ \vspace{0.3cm} \includegraphics[width=3.9cm]{figs/a2.png}\hfill\includegraphics[width=3.9cm]{figs/c2.png}\hfill\includegraphics[width=3.9cm]{figs/b2.png}\\ \vspace{0.5cm} \hfill2nd MaxPool\hfill~~~~~\hfill2nd MaxPool\hfill~~~~~\hfill2nd MaxPool\hfill~\\ \vspace{0.3cm} \includegraphics[width=3.9cm]{figs/a3.png}\hfill\includegraphics[width=3.9cm]{figs/c3.png}\hfill\includegraphics[width=3.9cm]{figs/b3.png}\\ \vspace{0.5cm} \hfill3rd MaxPool\hfill~~~~~\hfill3rd MaxPool\hfill~~~~~\hfill3rd MaxPool\hfill~\\ \vspace{0.3cm} \includegraphics[width=3.9cm]{figs/a4.png}\hfill\includegraphics[width=3.9cm]{figs/c4.png}\hfill\includegraphics[width=3.9cm]{figs/b4.png}\\ \vspace{0.5cm} \hfill4th MaxPool\hfill~~~~~\hfill4th MaxPool\hfill~~~~~\hfill4th MaxPool\hfill~\\ \vspace{0.3cm} \includegraphics[width=3.9cm]{figs/a5.png}\hfill\includegraphics[width=3.9cm]{figs/c5.png}\hfill\includegraphics[width=3.9cm]{figs/b5.png}\\ \vspace{0.1cm} \hfill(a)\hfill~~~~~\hfill(b)\hfill~~~~~\hfill(c)\hfill~\\ \vspace{0.2cm} \caption{Visualization of the activations for three selected filters in each max pooling layer of our CNN for three different test images, (a)-(b) the first two from the same person and (c) the third one from a different subject. In our interpretation, the first column of activations in all of them illustrates the perception of presence/absence of earrings. As may be observed, even for images from the same subject (a) and (b), the first column has more intense activations in the bottom part of the image when there is an earring in the image. The second and third columns for each image were used to show different concepts learned at different depths of the network. The first max pooling layer usually contains low level features, such as vertical and horizontal edges. In the second layer, we start encountering more complex concepts, such as helix contours and borders from specific ear parts. As we go deeper in the network, it becomes harder to interpret the meaning of the features, although we can always find some activations concentrated in specific parts of the ear, such as concha contours in the third layer and internal ear parts in the fourth layer.} \label{fig_cnnvis} \end{figure} \section{Score fusion} \label{sec_fusion} There are different kinds of multimodal systems that address problems associated with single modality systems~\cite{Jain2004}, but a multimodal system based on multiple matchers is the most adequate one for wild scenarios. The reason is that it is not always possible to have multiple biometric traits ({\it e.g.} face and ear), multiple units of a biometric trait ({\it e.g.} thumb and index fingerprints) or multiple samples of the same biometric trait ({\it e.g.} face in video), but we can always apply multiple matching techniques to a single biometric sample. In order to fuse matchers based on the descriptors previously presented, we evaluated different fusion schemes at score level, such as the sum, min, max and product rules, and ended up using the sum rule~\cite{Kittler1998}, as it achieved the best results in our experiments. Before fusion, score normalization is carried out considering an identification scenario, where the only scores available at any single time are the ones between the probe and all gallery images. 
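A minimal sketch of this probe-wise normalization followed by sum-rule fusion is given below, assuming each matcher produces a vector of distances between the probe and all gallery images; names and example values are ours.
\begin{verbatim}
import numpy as np

def min_max_normalize(scores):
    """Min-max normalize a probe's score vector against the gallery."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo)  # assumes hi > lo

def sum_rule_fusion(score_vectors):
    """Fuse the normalized scores of several matchers by the sum rule."""
    return np.sum([min_max_normalize(s) for s in score_vectors], axis=0)

# Example: distances of one probe to a 4-image gallery under two matchers.
cnn_scores = np.array([0.31, 0.77, 0.12, 0.55])  # e.g. cosine distances
hog_scores = np.array([41.0, 95.0, 17.0, 60.0])  # e.g. chi-square distances
fused = sum_rule_fusion([cnn_scores, hog_scores])
print(fused.argmin())  # closest gallery entry (scores are distances)
\end{verbatim}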
In this scheme, for each probe we find the minimum and maximum score values over the gallery and then perform a min-max normalization~\cite{Jain2005}. \section{Experimental results} We designed our experiments to validate each module of our recognition framework. Thus, in the following sections we present separate results for landmark detection, geometric normalization, CNN-based description and descriptor fusion. We also compare our results to the state of the art when possible. \subsection{Landmark detection results} Zhou~and~Zaferiou~\cite{Zhou2017} evaluated different variations of Active Appearance Models (AAM) using the test set from ITWE's Collection A. Their best result was achieved by training a holistic AAM based on SIFT features. As initialization for their landmark detector, they used a HOG-based ear detector. They computed the cumulative error distribution using the test set of the same database, where the error for an image is the normalized point-to-point error with respect to the diagonal of the bounding box of the ground truth annotations. Their best result is shown as a line with solid squares in Figure~\ref{fig_landmark}. \begin{figure}[!ht] \centering \includegraphics[width=12.0cm]{figs/landmark.png} \caption{Cumulative error distribution for landmark detection using the proposed approach and Zhou~and~Zaferiou's approach~\cite{Zhou2017}.} \label{fig_landmark} \end{figure} We performed the same evaluation for the proposed landmark detector in four different scenarios. In the first one, the ground truth annotations were used to obtain the center and the size of the ear. This reflects the performance of our method in a perfect scenario, in which the ear's location and size are reliably retrieved by an ear detector. Then, to simulate scenarios in which the ear detector does not perform that well, we added random variations of up to 20\%, 30\% and 40\% of the ear size to the ground truth values of the first scenario. Results for these four scenarios are also shown in Figure~\ref{fig_landmark}. As may be observed, the two-stage landmark detector performs slightly better than the single-stage one when using up to 20\% of variation, and there is no significant difference in performance between ground truth initialization and an initialization with 20\% of variation. This is expected, as this amount of variation was taken into account during the training stage. For larger variations that are unknown to the training, a single-stage landmark detector can have a considerable drop in performance, but our two-stage solution does not; it performs at least as well as the state of the art~\cite{Zhou2017}. \subsection{Normalization results} Since there is no normalization ground truth, we evaluated the benefits of the normalization process described in Section~\ref{sec_normalization} by checking the difference in recognition performance with and without normalization for different handcrafted descriptors. To this end, we normalized all images from the AWE database and followed the same protocol proposed by Emersic~\emph{et~al.}~\cite{Emersic2017} that was released in their toolbox. We used the development set that contains 60\% of the AWE images. We performed a 10-fold cross-validation, and report the mean accuracy and standard deviation in Table~\ref{table-handcrafted}. It is worth noting that even images that were not correctly normalized were still used in all recognition experiments, and that all folds are exactly the same in our and Emersic~\emph{et~al.}'s works. 
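For concreteness, the per-fold rank-1 accuracy reported below can be computed from an all-versus-all distance matrix as in the following sketch; \texttt{fold\_distances} and \texttt{fold\_labels} are hypothetical names for the per-fold distance matrices and identity labels.
\begin{verbatim}
import numpy as np

def rank1_accuracy(distances, labels):
    """Rank-1 identification accuracy from an all-versus-all
    distance matrix: each sample is a probe against all others, and a
    hit is scored when its nearest neighbor shares its identity."""
    labels = np.asarray(labels)
    d = distances.astype(float).copy()
    np.fill_diagonal(d, np.inf)   # a probe must not match itself
    nearest = d.argmin(axis=1)
    return np.mean(labels[nearest] == labels)

# Mean and standard deviation over the 10 cross-validation folds.
accs = [rank1_accuracy(d, l) for d, l in zip(fold_distances, fold_labels)]
print(f"Rank-1: {np.mean(accs):.3f} +/- {np.std(accs):.3f}")
\end{verbatim}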
Table~\ref{table-handcrafted} shows Emersic~\emph{et~al.}'s results, our reproduction of their experiments and our results using normalized images in terms of Rank-1 and Equal Error Rate (EER). It is possible to observe that our results without normalization are very similar to the ones reported by Emersic~\emph{et~al.}, showing that our reproduction of their experiments was successful. Our results with normalization obtained higher Rank-1 recognition rates for all features and a lower EER in most cases. These results illustrate that an effective normalization approach can help improve performance when using description techniques that are not necessarily robust to wild ear variations. \begin{table}[!h] \caption{Rank-1 and EER results for the AWE database as reported by Emersic~\emph{et~al.}~\cite{Emersic2017} and as in our reproduction of their experiments using images with (norm) and without (raw) normalization.\label{table-handcrafted}}{ \footnotesize \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\textbf{Emersic~\emph{et~al.}'s work}} & \multicolumn{2}{c|}{\textbf{This work (raw)}} & \multicolumn{2}{c|}{\textbf{This work (norm)}} \\ \hline \textbf{Method} & \textbf{Rank-1} & \textbf{EER} & \textbf{Rank-1} & \textbf{EER} & \textbf{Rank-1} & \textbf{EER} \\ \hline LBP & 43.5$\pm$7.1 & 32.0$\pm$7.4 & 43.5$\pm$7.1 & 32.3$\pm$2.2 & \bf 50.5$\pm$6.8 & \bf 29.8$\pm$2.2 \\ BSIF & 48.4$\pm$6.8 & 30.0$\pm$9.6 & 48.4$\pm$6.4 & \bf 30.2$\pm$2.9 & \bf 53.1$\pm$7.8 & 30.8$\pm$2.7 \\ LPQ & 42.8$\pm$7.1 & 30.0$\pm$7.4 & 42.6$\pm$7.0 & 31.7$\pm$2.7 & \bf 47.8$\pm$8.9 & \bf 29.7$\pm$3.4 \\ RILPQ & 43.3$\pm$9.4 & 34.0$\pm$6.4 & 43.5$\pm$9.2 & 34.3$\pm$3.2 & \bf 46.4$\pm$8.4 & \bf 33.8$\pm$4.3 \\ POEM & 49.6$\pm$6.8 & 29.1$\pm$9.1 & 49.6$\pm$6.8 & \bf 29.8$\pm$2.6 & \bf 54.3$\pm$7.8 & 30.6$\pm$4.3 \\ HOG & 43.9$\pm$7.9 & 31.9$\pm$7.8 & 48.1$\pm$8.8 & 30.5$\pm$2.1 & \bf 57.1$\pm$8.1 & \bf 26.8$\pm$3.6 \\ DSIFT & 43.4$\pm$8.6 & 30.9$\pm$8.4 & 42.2$\pm$9.0 & \bf 33.1$\pm$2.8 & \bf 45.8$\pm$10.3 & 33.2$\pm$3.1 \\ Gabor & 39.8$\pm$7.1 & 32.1$\pm$8.1 & 39.8$\pm$7.1 & 31.7$\pm$3.8 & \bf 42.6$\pm$6.2 & \bf 30.7$\pm$3.4 \\ \hline \end{tabular} }{} \end{table} \subsection{CNN description results} \label{sec-rescnn} In order to learn features for the problem of recognizing ears in the wild, we divided each of the IIT, WPUTE, AWE and ITWE databases into two sets, one for training and one for testing. The division was conducted in a subject-independent way ({\it i.e.} no subject has images in both training and testing sets) by taking the first half of the subjects, rounded up, and using their images for training, and using the remaining ones for testing. After the automatic normalization process, images in databases with both left and right ears were flipped in a way that all ears had the same orientation. Finally, each image in the training set was transformed into a set of 20 modified images during the data augmentation stage, and we trained five different descriptors: one for the training set of each chosen database, and one using all these training sets combined. We evaluated the EER performance of these five descriptors by performing an all-versus-all comparison on all available testing sets, and the results are presented in Table~\ref{table-crossmatching}. As may be observed, the best performance for all unconstrained testing sets was obtained by the descriptor learned using all training sets, followed by the descriptor learned using the training set from the same database. 
This shows that every database has different types of variations that tend to be overrepresented in models learned from a single database. When all databases are combined, the model benefits from both a wider training set ({\it i.e.} more subjects) and less database overfitting. The models do not appear to be overfitting the unconstrained images, as the EER for the IIT test set is about 2\% for all models. \begin{table}[!h] \caption{EER results for all testing sets using descriptors learned from each database or from all databases combined. Each row represents a different CNN descriptor and each column shows the accuracy for a specific database.\label{table-crossmatching}}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{\textbf{TEST}} \\ \hline \textbf{TRAIN} & IIT & WPUTE & WPUTE$^*$ & AWE & AWE$^*$ & ITWE \\ \hline IIT & \bf 1.76\% & 29.62\% & 25.85\% & 35.29\% & 33.56\% & 35.47\% \\ WPUTE & 2.12\% & 15.95\% & \bf 9.40\% & 29.87\% & 28.33\% & 29.46\% \\ AWE & 2.12\% & 25.03\% & 20.04\% & 26.53\% & 23.52\% & 25.68\% \\ ITWE & 2.37\% & 23.50\% & 18.93\% & 27.51\% & 25.30\% & 22.09\% \\ ALL & 2.59\% & \bf 15.17\% & 9.59\% & \bf 25.42\% & \bf 22.93\% & \bf 19.68\% \\ \hline \end{tabular} }{\\$^*$ As WPUTE and AWE distinguish left and right ears, we also show results considering only genuine matchings between ear images from the same side of the head.} \end{table} We also show in Table~\ref{table-crossmatching} that knowing whether the image is of the left or the right ear is helpful during the recognition process. If we only consider genuine matchings as the matchings between ear images from the same side of the head, the EER is reduced by about 4-6\% for the WPUTE database and by approximately 2-3\% for the AWE database in all tests. These results corroborate the findings of Yan~and~Bowyer~\cite{Yan2005} regarding ear asymmetry, but in an uncontrolled scenario. However, it is not always possible to have this information, so we did not consider ear asymmetry in the following experiments and classified matchings between different ears of the same person as genuine. Zhou~and~Zaferiou~\cite{Zhou2017} used transfer learning in order to employ CNN descriptors previously trained in a different domain~\cite{Simonyan2014} for ear recognition. To this end, they evaluated both Support Vector Machines (SVM) and Linear Discriminant Analysis (LDA) for matching those descriptors, and achieved about 30\% EER for the ITWE database. Their testing/training proportion was 80\%/20\%, and the division was not made in a subject-independent manner. Even though we considered a more difficult scenario, with a 50\%/50\% testing/training subject-independent split, we still achieve a considerably lower EER in all cases where a true unconstrained database was used for training ({\it i.e.} AWE, ITWE and ALL), as may be observed in Table~\ref{table-crossmatching}. These results show that even when only a small number of images is available for the ear domain, it may be worthwhile to train a domain-specific CNN. \subsection{Fusion results} The first round of fusion experiments was performed using the testing set of the AWE database, since this was the most challenging one in our previous experiment. We evaluate the fusion of all possible pairs of features, including all holistic, handcrafted and learned features presented in Section~\ref{sec_recognition}. The chosen CNN model was the one with the best result in Table~\ref{table-crossmatching} (ALL). 
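For reference, the EER values reported in these experiments can be obtained from the genuine (same-subject) and impostor (different-subject) distance scores of an all-versus-all comparison; a minimal sketch, with function names of our choosing:
\begin{verbatim}
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER from genuine and impostor distance scores (lower = more
    similar): sweep candidate thresholds and return the operating
    point where false rejection and false acceptance rates cross."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    frr = np.array([np.mean(genuine > t) for t in thresholds])
    far = np.array([np.mean(impostor <= t) for t in thresholds])
    i = np.argmin(np.abs(frr - far))  # closest FRR/FAR crossing
    return (frr[i] + far[i]) / 2.0
\end{verbatim}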
Table~\ref{table-awe} shows individual results for each feature, as well as the top fusion results in terms of Rank-1, Rank-5, Area Under Curve (AUC) and EER. As may be observed, although learned features and top handcrafted features perform equally well individually for Rank-1 and Rank-5, fusion results are dominated by CNN combinations. We believe this is caused by a larger correlation among handcrafted features, which usually have a similar design inspiration that was exploited in slightly different ways by different experts ({\it e.g.} quantize gradients, encode neighbors). Thus, the CNN is probably learning something complementary to the experts' knowledge, which is corroborated by the fact that nearly all combinations between the CNN and one handcrafted feature perform better than all combinations between two handcrafted features. \begin{table}[!h] \caption{Individual and fusion results for all descriptors in Section~\ref{sec_recognition} using the AWE database. Individual results were grouped by descriptor type, and handcrafted features were grouped into two categories, the first one (Handcrafted I) for methods based on neighborhood encoding and the second one (Handcrafted II) for methods based on gradient orientations.\label{table-awe}}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Type} & \textbf{Descriptors} & \textbf{Rank-1} & \textbf{Rank-5} & \textbf{AUC} & \textbf{EER} \\ \hline Holistic & PCA & 43.0\% & 64.4\% & 0.866 & 37.87\% \\ \hline Handcrafted I & POEM & 65.2\% & 85.0\% & 0.948 & 31.68\% \\ & BSIF & 63.8\% & 83.4\% & 0.939 & 30.82\% \\ & LBP & 62.0\% & 82.6\% & 0.939 & 30.00\% \\ & LPQ & 59.6\% & 84.2\% & 0.942 & 30.52\% \\ & RILPQ & 55.0\% & 79.2\% & 0.926 & 34.07\% \\ \hline Handcrafted II & HOG & 64.2\% & 86.2\% & 0.955 & 29.33\% \\ & DSIFT & 57.8\% & 78.4\% & 0.916 & 32.99\% \\ & GABOR & 50.2\% & 75.6\% & 0.911 & 32.56\% \\ \hline Learned & CNN & 64.2\% & 86.2\% & 0.957 & \bf 22.89\% \\ \hline Sum fusion & CNN+HOG & \bf 75.6\% & \bf 90.6\% & \bf 0.972 & \bf 22.87\% \\ & CNN+POEM & 75.4\% & 90.4\% & 0.968 & 24.29\% \\ & CNN+LPQ & 72.8\% & 88.6\% & 0.966 & 23.61\% \\ & CNN+RILPQ & 72.0\% & \bf 90.6\% & 0.962 & 25.11\% \\ & HOG+BSIF & 70.8\% & 88.6\% & 0.963 & 28.34\% \\ & CNN+BSIF & 70.2\% & 89.8\% & 0.963 & 24.18\% \\ & CNN+LBP & 70.0\% & 89.4\% & 0.964 & 23.53\% \\ & HOG+RILPQ & 70.0\% & 86.4\% & 0.957 & 29.89\% \\ & CNN+GABOR & 69.4\% & 88.6\% & 0.963 & 24.56\% \\ & HOG+LPQ & 69.0\% & 86.4\% & 0.960 & 28.67\% \\ \hline \end{tabular} }{} \end{table} In our second round of fusion experiments, we reproduced two experiments proposed by Emersic~\emph{et~al.}~\cite{Emersic2017b} to evaluate challenge participants through the UERC database, one to evaluate the overall performance and the other to evaluate the scalability of the recognition approaches. To this end, we normalize the UERC training images and use them to learn a sixth CNN descriptor ({\it i.e.} data augmentation was used to balance the classes in a way that each subject ended up with 200 images). As UERC test images do not have the same orientation and ground truth annotations are not provided, we also trained a simple side classifier by changing the output size of the network presented in Table~\ref{cnn-landmarks} to two classes (left and right) and then training it on the UERC training images using softmax loss and the Adam optimization algorithm. Because images of this database are already cropped, the entire testing process was fully automatic. 
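A minimal Keras sketch of such a side classifier, reusing the convolutional stack of Table~\ref{cnn-landmarks} with a two-class softmax head, is given below; naming is ours and the original implementation may differ in details.
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers

def build_side_classifier():
    """Left/right ear classifier: Table 1's conv stack, 2-way head."""
    x_in = layers.Input(shape=(96, 96, 1))
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x_in)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.1)(x)
    x = layers.Conv2D(64, 2, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.2)(x)
    x = layers.Conv2D(128, 2, padding='same', activation='relu')(x)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Dropout(0.3)(x)
    x = layers.Flatten()(x)                      # 12*12*128 = 18432
    x = layers.Dense(1000, activation='relu')(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(1000, activation='relu')(x)
    out = layers.Dense(2, activation='softmax')(x)  # left vs. right
    model = tf.keras.Model(x_in, out)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
\end{verbatim}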
For the overall performance evaluation, only the first 1,800 test images from 180 subjects are used in an all-versus-all comparison. In this experiment, we only use the CNN and three handcrafted features: HOG, POEM and LBP. HOG and POEM obtained the best fusion results with the CNN in Table~\ref{table-awe}, and LBP was a baseline approach for participants of the UERC challenge~\cite{Emersic2017b}. Table~\ref{table-uerc} shows individual results for each feature, the fusion results using the sum rule, and the best results reported in the UERC challenge. As may be observed, our normalization resulted in a considerable boost in the performance of handcrafted descriptors: we achieve more than 20\% improvement in Rank-1 when comparing our LBP result to its baseline version without normalization. Again, individual ranking performances of learned and handcrafted features were similar, but the CNN fusion pairings stood out. Our performance was higher than that of all participants of the challenge except the University of Colorado Colorado Springs (UCCS), whose results we have not verified, as they appear to be using test images for training. CMC curves for the best performing works are presented in Figure~\ref{fig_fusion1}. \begin{table}[!h] \caption{Individual and fusion results for CNN, HOG, POEM and LBP in the overall performance evaluation through the UERC protocol, as well as the top scoring participants of the UERC challenge.\label{table-uerc}}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Type} & \textbf{Descriptors} & \textbf{Rank-1} & \textbf{Rank-5} & \textbf{AUC} & \textbf{EER} \\ \hline Handcrafted I & POEM & 36.83\% & 58.44\% & 0.907 & 36.17\% \\ & LBP & 35.00\% & 55.11\% & 0.897 & 35.81\% \\ \hline Handcrafted II & HOG & 39.78\% & 60.56\% & 0.916 & 35.51\% \\ \hline Learned & CNN & 36.94\% & 60.56\% & 0.930 & \bf 26.77\% \\ \hline Sum fusion & CNN+HOG & \bf 49.06\% & \bf 69.94\% & \bf 0.951 & 27.84\% \\ & CNN+POEM & 47.28\% & \bf 70.00\% & 0.948 & 28.21\% \\ & CNN+LBP & 45.22\% & 67.44\% & 0.946 & 28.05\% \\ & HOG+POEM & 43.06\% & 64.33\% & 0.926 & 35.14\% \\ & HOG+LBP & 41.22\% & 60.89\% & 0.919 & 35.11\% \\ & POEM+LBP & 38.56\% & 59.00\% & 0.911 & 35.39\% \\ \hline Literature & UCCS~\cite{Emersic2017b} & \it 90.4\%$^*$ & \it 100.0\%$^*$ & \it 0.994$^*$ & \\ & IAU~\cite{Emersic2017b} & 38.5\% & 63.2\% & 0.940 & \\ & ITU-II~\cite{Emersic2017b} & 27.3\% & 48.3\% & 0.877 & \\ & LBP-baseline~\cite{Emersic2017b} & 14.3\% & 28.6\% & 0.759 & \\ \hline \end{tabular} }{\\$^*$ These results still require verification.} \end{table} For the scalability evaluation, we match all images from subjects with at least two images to all other test images, totaling 7,442$\times$9,499 matching pairs. This experiment increases the number of subjects to 3,540 and also adds many images with poor quality, considerably affecting the performance of the evaluated approaches. In Table~\ref{table-uerc2} we show results for CNN, HOG and POEM, for all possible pairwise fusions among them, and for the best performing approaches in the UERC challenge. We can see that the combination of CNN and HOG was again the best performing method for lower ranks, and that these results show our approach to be the most scalable unconstrained ear recognition approach. CMC curves for the best performing works are presented in Figure~\ref{fig_fusion2} and show how well our approach performs for lower ranks, outperforming all other works by at least 10\% in most ranks before Rank-300. 
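For completeness, the Rank-N values and CMC curves used in these comparisons can be derived from a probe-versus-gallery distance matrix as sketched below (our naming; the sketch assumes every probe has at least one mate in the gallery):
\begin{verbatim}
import numpy as np

def cmc_curve(distances, probe_labels, gallery_labels):
    """Cumulative Match Characteristic from a probe x gallery
    distance matrix: cmc[n-1] is the fraction of probes whose true
    identity appears among their n closest gallery entries."""
    probe_labels = np.asarray(probe_labels)
    gallery_labels = np.asarray(gallery_labels)
    order = np.argsort(distances, axis=1)   # closest gallery first
    ranked = gallery_labels[order]          # labels sorted per probe
    hits = ranked == probe_labels[:, None]
    first_hit = hits.argmax(axis=1)         # rank of first correct match
    counts = np.bincount(first_hit, minlength=distances.shape[1])
    return np.cumsum(counts) / len(probe_labels)

# Rank-1 and Rank-5 as reported in the tables:
# cmc = cmc_curve(D, probes, gallery); rank1, rank5 = cmc[0], cmc[4]
\end{verbatim}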
\begin{table}[!h] \caption{Individual and fusion results for CNN, HOG and POEM in the scalability evaluation through the UERC protocol, as well as the top scoring participants of the UERC challenge.\label{table-uerc2}}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Type} & \textbf{Descriptors} & \textbf{Rank-1} & \textbf{Rank-5} & \textbf{AUC} & \textbf{EER} \\ \hline Handcrafted I & POEM & 17.98\% & 28.48\% & 0.851 & 40.38\% \\ \hline Handcrafted II & HOG & 18.50\% & 28.78\% & 0.851 & 40.80\% \\ \hline Learned & CNN & 17.13\% & 28.73\% & 0.873 & 35.92\% \\ \hline Sum fusion & CNN+HOG & \bf 24.17\% & \bf 36.43\% & 0.881 & 36.24\% \\ & CNN+POEM & 23.02\% & 35.70\% & \bf 0.882 & \bf 35.82\% \\ & HOG+POEM & 20.57\% & 32.12\% & 0.856 & 40.26\% \\ \hline Literature & UCCS~\cite{Emersic2017b} & \it 22.3\%$^*$ & \it 26.3\%$^*$ & \it 0.858$^*$ & \\ & IAU~\cite{Emersic2017b} & 8.4\% & 14.2\% & 0.810 & \\ & ITU-II~\cite{Emersic2017b} & 6.9\% & 12.8\% & 0.844 & \\ \hline \end{tabular} }{\\$^*$ These results still require verification.} \end{table} \begin{figure}[!ht] \centering \subfloat[]{\label{fig_fusion1}\includegraphics[width=10.3cm]{figs/uerc1.png}}\hfill \subfloat[]{\label{fig_fusion2}\includegraphics[width=10.3cm]{figs/uerc2.png}} \caption{CMC curves for all participants of the UERC challenge plus our best fusion results obtained by combining CNN and HOG considering the (a) overall performance evaluation and (b) scalability evaluation protocols.} \label{fig_fusion} \end{figure} Table~\ref{table-uerc2} also shows that our CNN outperformed the other two top-scoring CNN-based approaches, proposed by researchers from the Islamic Azad University (IAU) and the Istanbul Technical University (ITU-II)~\cite{Emersic2017b}. Similarly to Zhou~and~Zaferiou~\cite{Zhou2017}, IAU and ITU-II employed transfer learning approaches on a network from a different domain~\cite{Simonyan2014} and were not able to achieve results as high as our domain-specific CNN. \subsection{Discussion} Unconstrained ear recognition is a very challenging problem, and recent efforts to provide unconstrained ear image data for research are helpful. Early databases such as IIT and WPUTE consist of images captured by researchers rather than images from the wild, and they do not have much intraclass and interclass variation. The initial wild databases collected from the Internet, such as AWE and ITWE, still lack interclass variability due to their small number of subjects. The UERC database is a vast repository with thousands of subjects and images, with both intraclass and interclass variations; it is the most challenging ear dataset that we are aware of. Although initial ear recognition works consistently used ear alignment before recognition~\cite{Chang2003,Kumar2012}, research on unconstrained ear recognition has mostly focused on finding robust features~\cite{Emersic2017}. Even among the UERC participants, only the Imperial College London (ICL) used an alignment step, although they used an AAM-based solution~\cite{Zhou2017} that may not be as successful for wild images as recent techniques such as CNNs (see Figure~\ref{fig_landmark}). Nevertheless, we attribute a big part of the success of our results to the normalization step. It considerably increased the performance of traditional methods, such as the handcrafted features in Tables~\ref{table-handcrafted}~and~\ref{table-uerc}, and also helped the deep learning process by letting it focus on what matters most for the recognition task. 
Our CNN descriptors were comparable to the best handcrafted descriptors in terms of Rank-$N$ results, but they performed better in terms of EER in all experiments, meaning that they were more accurate for verification purposes. In addition, our performance compared favorably to the best performing participants of the UERC challenge, as shown in Table~\ref{table-uerc2}, and to other state-of-the-art work in Section~\ref{sec-rescnn}. Two factors may have helped to achieve these results: we trained CNNs from scratch specifically for our problem domain, and we used a discriminative learning technique based on the center loss proposed by Wen~\emph{et~al.}~\cite{Wen2016}. Finally, as learned and handcrafted features achieved similar ranking results on our normalized images, we combined them through score fusion in pursuit of better performance, and found that the combination of our CNN descriptors and handcrafted descriptors achieved considerably better results in all experiments. No combination of two handcrafted features came close to the top scores, which may be explained by the fact that handcrafted features are highly correlated due to their similar design. CNN descriptors, on the other hand, do not follow an expert's design and are probably learning discriminative information that is complementary to most handcrafted descriptors, as may be observed in Tables~\ref{table-awe},~\ref{table-uerc}~and~\ref{table-uerc2}. \section{Conclusion} Unconstrained ear recognition is a very challenging problem. To address this challenge, we provide a framework that combines handcrafted features and CNN descriptors. We test our framework using the most challenging publicly available ear database that we are aware of. Our results are considerably better than those of recently published works and are less impacted by database scale. We gained valuable lessons that can further enhance unconstrained ear recognition research: \begin{itemize} \item handcrafted features are not dead; this is consistent with findings in action recognition works~\cite{Zhu2016};\\ \item handcrafted features and CNN descriptors are complementary;\\ \item normalization is critical and enhances the recognition performance of handcrafted features;\\ \item CNN descriptors combined with any of the aforementioned handcrafted descriptors improve recognition. \end{itemize} There is still a lot to be learned in order to address the issues of unconstrained ear recognition. Our work demonstrates that combining CNN and handcrafted features is a good starting point. \bibliographystyle{unsrt}
\section{Introduction} Let $(A,{\mathscr A},\mu)$ and $(B,{\mathscr B},\nu)$ be probability spaces, ${\mathscr F}$ a sub-$\sigma$-algebra of the product $\sigma$-algebra ${\mathscr A}\times {\mathscr B}$ in $A\times B$, and $X$ a Banach space. For $1\le p,q\le \infty$ we define $L_{\mathscr F}^p(\mu; L^q(\nu;X))$ to be the subspace of $L^p(\mu; L^q(\nu;X))$ consisting of those functions which have a strongly ${\mathscr F}$-measurable representative. It is easy to see (e.g., by using \cite[Corollary 1.7]{HNVW}) that $$ L_{\mathscr F}^p(\mu; L^q(\nu;X))= L^p(\mu; L^q(\nu;X)) \cap L_{\mathscr F}^1( \mu\times \nu;X).$$ Furthermore, $L_{\mathscr F}^p(\mu; L^q(\nu;X))$ is closed in $L^p(\mu; L^q(\nu;X))$. Indeed, if $f_n\to f$ in $L^p(\mu; L^q(\nu;X))$ with each $f_n$ in $L_{\mathscr F}^p(\mu; L^q(\nu;X))$, then also $f_n\to f$ in $L^1(\mu\times \nu;X)$, and therefore $f\in L_{\mathscr F}^1(\mu\times\nu;X)$. The reader is referred to \cite{DU, HNVW} for the basic theory of the Lebesgue-Bochner spaces and conditional expectations in these spaces. These references also contain some standard results concerning the Radon-Nikod\'ym property that will be needed later on. The aim of this paper is to provide a necessary and sufficient condition for the conditional expectation ${\mathbb E}(\cdot|{\mathscr F})$ to restrict to a bounded linear operator on $L^p(\mu;L^q(\nu;X))$ when $1<p,q<\infty$. We also show that ${\mathbb E}(\cdot|{\mathscr F})$ need not be contractive. An example is given which shows that this result does not extend to the pair $p=\infty$, $q=2$. Characterisations of conditional expectation operators on general classes of Banach function spaces $E$ (and their vector-valued counterparts) have been given by various authors (see, e.g., \cite{DHP} and the references therein), but these works usually {\em assume} that a bounded operator $T: E\to E$ is given and investigate under what circumstances it is a conditional expectation operator. We have not been able to find any paper addressing the problem of establishing sufficient conditions for conditional expectation operators to act boundedly in concrete Banach function spaces such as the mixed-norm $L^p(L^q)$-spaces investigated here. \section{Results} Throughout this section, $(A,{\mathscr A},\mu)$ and $(B,{\mathscr B},\nu)$ are probability spaces. If $1\le p,q\le \infty$, their conjugates $1\le p',q'\le \infty$ are defined by $\frac1p+\frac1{p'} =1$ and $ \frac1q+\frac1{q'} =1$. It is clear that every $f\in L_{{\mathscr F}}^p(\mu; L^q(\nu))$ induces a functional $\phi_f\in (L_{{\mathscr F}}^{p'}(\mu;L^{q'}\!(\nu)))^*$ in a canonical way, and the resulting mapping $f\mapsto \phi_f$ is contractive. The first main result of this note reads as follows. \begin{theorem}\label{thm1} Let $1 <p\le\infty$ and $1<q\le \infty$. If $f\mapsto \phi_f$ establishes an isomorphism of Banach spaces $$L_{\mathscr F}^p(\mu; L^q(\nu)) \simeq (L_{{\mathscr F}}^{p'}(\mu;L^{q'}\!(\nu)))^* ,$$ then for any Banach space $X$ the conditional expectation operator ${\mathbb E}(\cdot|{\mathscr F})$ on $L^1(\mu\times\nu;X)$ restricts to a bounded projection on $L^p(\mu; L^q(\nu;X))$. \end{theorem} \begin{proof} We will show that ${\mathbb E}(f|{\mathscr F})\in L^p(\mu; L^q(\nu;X))$ for all $f\in L^p(\mu; L^q(\nu;X))$. A standard closed graph argument then gives the boundedness of ${\mathbb E}(\cdot|{\mathscr F})$ as an operator in $L^p(\mu; L^q(\nu;X))$.
Since $\Vert {\mathbb E}(f|{\mathscr F})\Vert_X \le {\mathbb E} (\Vert f\Vert_X | {\mathscr F})$ $\mu\times\nu$-almost everywhere, it suffices to prove that ${\mathbb E}(g|{\mathscr F})\in L^p(\mu; L^q(\nu))$ for all $g\in L^p(\mu; L^q(\nu))$. To prove the latter, consider the inclusion mapping $$I: L_{\mathscr F}^{p'}(\mu; L^{q'}\!(\nu)) \to L^{p'}\!(\mu;L^{q'}\!(\nu)).$$ Every $g\in L^p(\mu; L^q(\nu))$ defines an element of $(L^{p'}\!(\mu; L^{q'}\!(\nu)))^*$ in the natural way and we have, for all $F\in {\mathscr F}$, $$ \langle {{\bf 1}}_F, I^* g\rangle = \langle I {{\bf 1}}_F,g\rangle = \int_{F} g \,{\rm d}\mu\times \nu .$$ The implicit use of Fubini's theorem to rewrite the double integral over $A$ and $B$ as an integral over $A\times B$ in the second equality is justified by non-negativity, writing $g = g^+ - g^-$ and considering these functions separately. On the other hand, viewing $g$ and ${{\bf 1}}_F$ as elements of $L^1(\mu\times\nu)$ and $L^\infty(\mu\times\nu)$ respectively, we have $$ \int_{F} g \,{\rm d}\mu\times \nu = \int_{F} {\mathbb E}(g|{\mathscr F}) \,{\rm d}\mu\times \nu = \langle {{\bf 1}}_F, {\mathbb E}(g|{\mathscr F}) \rangle.$$ We conclude that $\langle {{\bf 1}}_F, I^* g\rangle = \langle {\mathbb E}(g|{\mathscr F}),{{\bf 1}}_F \rangle$, where on the left the duality is between $L^{p'}\!(\mu; L^{q'}\!(\nu))$ and its dual, and on the right between $L^1(\mu\times \nu)$ and $L^\infty(\mu\times\nu)$. Passing to linear combinations of indicators, it follows that $$ \sup_\phi|\langle \phi, I^* g\rangle| = \sup_\phi|\langle {\mathbb E}(g|{\mathscr F}), \phi \rangle| = \Vert {\mathbb E}(g|{\mathscr F})\Vert_1 < \infty, $$ where both suprema run over the simple functions $\phi$ in $L^\infty_{\mathscr F}(\mu\times\nu)$ of norm $\le 1$. Denoting their closure by $L^\infty_{0,{\mathscr F}}(\mu\times\nu)$, it follows that $I^* g$ defines an element of $(L^\infty_{0,{\mathscr F}}(\mu\times\nu))^*$. This identification is one-to-one: for if $\langle \phi, I^* g\rangle = 0$ for all simple ${\mathscr F}$-measurable functions $\phi$, then $\langle \phi, I^* g\rangle =0$ for all $\phi\in L_{{\mathscr F}}^{{ p'}}\!(\mu; L^{{ q'}}\!(\nu))$, noting that the simple ${\mathscr F}$-measurable functions are dense in $L_{{\mathscr F}}^{{ p'}}\!(\mu; L^{{ q'}}\!(\nu))$ (here we use that ${ p'}$ and ${ q'}$ are finite). As an element of $(L^\infty_{0,{\mathscr F}}(\mu\times\nu))^*$, $I^* g$ equals the function ${\mathbb E}(g|{\mathscr F})$, viewed as an element in the same space. Since the embedding of $ L_{\mathscr F}^1(\mu\times\nu)$ into $(L^\infty_{0,{\mathscr F}}(\mu\times\nu))^*$ is isometric, it follows that $I^* g = {\mathbb E}(g|{\mathscr F}) \in L_{\mathscr F}^1(\mu\times\nu)$. Since $I^* g \in (L_{{\mathscr F}}^{p'}(\mu;L^{q'}\!(\nu)))^* $, by the assumption of the theorem we may identify $I^* g$ with a function in $L^p(\mu;L^q(\nu))$. We conclude that ${\mathbb E}(g|{\mathscr F}) = I^* g\in L_{\mathscr F}^{p}(\mu; L^{q}(\nu))$. \end{proof} { If we make a stronger assumption, more can be said: \begin{theorem}\label{thm2} Suppose that $1<p,q<\infty$ and let $X$ be a non-zero Banach space. 
Then the following assertions are equivalent: \begin{enumerate} \item[\rm(1)] the conditional expectation operator ${\mathbb E}(\cdot|{\mathscr F})$ restricts to a bounded projection on the space $L^p(\mu; L^q(\nu;X))$; \item[\rm(2)] the conditional expectation operator ${\mathbb E}(\cdot|{\mathscr F})$ restricts to a bounded projection on the space $L^{p'}\!(\mu; L^{q'}\!(\nu;X))$; \item[\rm(3)] $f\mapsto \phi_f$ induces an isomorphism of Banach spaces $$L_{\mathscr F}^p(\mu; L^q(\nu)) \simeq (L_{{\mathscr F}}^{p'}(\mu;L^{q'}\!(\nu)))^* .$$ \end{enumerate} \end{theorem} \begin{remark} In \cite{LYZ} it is shown that condition (3) is satisfied if \begin{equation}\label{LYZ} \hbox{ $I \times {\mathbb E}_\nu$ maps $L^1_{\mathscr F}(\mu\times\nu)$ into itself.} \end{equation} Here ${\mathbb E}_\nu$ denotes the bounded operator on $L^1(\nu)$ defined by $$ {\mathbb E}_\nu f := ({\mathbb E}_\nu f){{\bf 1}},$$ with ${\mathbb E}_\nu f = \int f\,{\rm d} \nu$. \end{remark} The proof of Theorem \ref{thm2} is based on the following elementary lemma. \begin{lemma}\label{lem:proj} Let $P$ be a bounded projection on a Banach space $X$. Let $X_0 = {\mathsf{R}}(P)$, $X_1 = {\mathsf{N}}(P)$, $Y_0 = {\mathsf{R}}(P^*)$ and $Y_1 = {\mathsf{N}}(P^*)$, so that we have direct sum decompositions $X = X_0\oplus X_1$ and $X^* = Y_0\oplus Y_1$. Then we have natural isomorphisms of Banach spaces $X_0^* = Y_0$ and $X_1^* = Y_1$. \end{lemma} \begin{proof}[Proof of Theorem \ref{thm2}] We have already proved (3)$\Rightarrow$(1). For proving (1)$\Rightarrow$(2)$\Rightarrow$(3) there is no loss of generality in assuming that $X$ is the scalar field, for instance by observing that the proof of \cite[Theorem 2.1.3]{HNVW} also works for mixed $L^p(L^q)$-spaces. \smallskip (1)$\Rightarrow$(2):\ The assumption (1) implies that $L_{\mathscr F}^{p}(\mu;L^{q}(\nu))$ is the range of the bounded projection ${\mathbb E}(\cdot|{\mathscr F})$ in $L^{p}(\mu;L^{q}(\nu))$. Moreover, $\langle {\mathbb E}(f|{\mathscr F}),g\rangle = \langle f,{\mathbb E}(g|{\mathscr F})\rangle$ for all $f\in L^p(\mu;L^q(\nu))$ and $g\in L^{p'}\!(\mu;L^{q'}\!(\nu))$, since this is true for $f$ and $g$ in the (dense) intersections of these spaces with $L^2(\mu\times \nu)$. It follows that the conditional expectation ${\mathbb E}(\cdot|{\mathscr F})$ is bounded on $L^{p'}\!(\mu;L^{q'}\!(\nu)) =(L^{p}(\mu;L^{q}(\nu)) )^* $ and equals $({\mathbb E}(\cdot|{\mathscr F}))^*$. Clearly it is a projection and its range equals $L_{\mathscr F}^{p'}\!(\mu;L^{q'}\!(\nu))$. \smallskip (2)$\Rightarrow$(3):\ This implication follows from Lemma \ref{lem:proj}. \end{proof} Inspection of the proof of Theorem \ref{thm1} shows that if for all $f\in L_{\mathscr F}^{p}(\mu; L^{q}(\nu))$ we have $\Vert f\Vert_{L_{\mathscr F}^p(\mu;L^q(\nu))} = \Vert f\Vert_{(L_{\mathscr F}^{p'}(\mu; L^{q'}\!(\nu)))^*}$, then ${\mathbb E}(\cdot|{\mathscr F})$ is contractive on $L^p(\mu;L^q(\nu))$. The next example, due to Qiu \cite{Qiu}, shows that the conditional expectation, when it is bounded, may fail to be contractive. \begin{example}\label{ex:Qiu} Let $A = B = \{0,1\}$ with ${\mathscr A} = {\mathscr B} = \{\emptyset, \{0\}, \{1\},\{0,1\}\}$ and $\mu = \nu$ the measure on $\{0,1\}$ that gives each point mass $\frac12$, and let ${\mathscr F}$ be the $\sigma$-algebra generated by the three sets $\{(0,1)\}$, $\{(1,1)\}$, $\{(0,0),(1,0)\}$. If we think of $B$ as describing discrete `time', then ${\mathscr F}$ is the progressive $\sigma$-algebra corresponding to the filtration $({\mathscr F}_t)_{t\in\{0,1\}}$ in $A$ given by ${\mathscr F}_0 = \{\emptyset,\{0,1\}\}$ and ${\mathscr F}_1 = \{\emptyset, \{0\}, \{1\},\{0,1\}\}$. Let $f: A\times B \to {\mathbb R}$ be defined by $$ f(0,0) = 0, \quad f(1,0) = 1, \quad f(0,1) = 1, \quad f(1,1)=0. $$ Then $$ {\mathbb E}(f|{\mathscr F})(0,0) = \frac12, \quad {\mathbb E}(f|{\mathscr F})(1,0) = \frac12, \quad {\mathbb E}(f|{\mathscr F})(0,1) = 1, \quad {\mathbb E}(f|{\mathscr F})(1,1)=0. $$ Indeed, on the atom $\{(0,0),(1,0)\}$, which consists of two points of equal mass, ${\mathbb E}(f|{\mathscr F})$ equals the average $\frac12(f(0,0)+f(1,0)) = \frac12$, while on the singleton atoms it coincides with $f$. Hence in this example we have \begin{equation*} \begin{aligned} \Vert f\Vert_{L^p(\mu;L^2(\nu))} & = \Big[\Big(\frac12\Big)^{p/2} + \Big(\frac12\Big)^{p/2}\Big]^{1/p}, \\ \Vert {\mathbb E}(f|{\mathscr F})\Vert_{L^p(\mu;L^2(\nu))} & = \Big[\Big(\frac18\Big)^{p/2} + \Big(\frac58\Big)^{p/2}\Big]^{1/p}. \end{aligned} \end{equation*} Consequently, for large enough $p$ the conditional expectation fails to be contractive in $L^p(\mu;L^2(\nu))$: as $p\to\infty$ the ratio of the two norms tends to $\sqrt{5/8}\big/\sqrt{1/2} = \frac12\sqrt{5}>1$. \end{example}
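Since the underlying probability space is finite, the example is easily verified numerically. The following sketch (ours, for illustration only) computes ${\mathbb E}(f|{\mathscr F})$ by averaging over the atoms of ${\mathscr F}$ and compares the two mixed norms:

\begin{verbatim}
import numpy as np

# The four points of A x B (each of mass 1/4) and the atoms of F.
atoms = [[(0, 0), (1, 0)], [(0, 1)], [(1, 1)]]
f = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 1.0, (1, 1): 0.0}

# Conditional expectation: average f over the atom containing each point.
Ef = {pt: np.mean([f[q] for q in atom]) for atom in atoms for pt in atom}

def mixed_norm(g, p):
    # L^p(mu; L^2(nu)) norm: inner L^2 in the second coordinate,
    # outer L^p in the first, both with uniform weights 1/2.
    inner = [np.sqrt(0.5 * g[(a, 0)]**2 + 0.5 * g[(a, 1)]**2) for a in (0, 1)]
    return (0.5 * inner[0]**p + 0.5 * inner[1]**p) ** (1.0 / p)

for p in (2, 4, 16):
    print(p, mixed_norm(Ef, p) / mixed_norm(f, p))  # exceeds 1 for p = 16
\end{verbatim}

Running this for increasing $p$ shows the ratio crossing $1$, in line with the limit $\frac12\sqrt{5}$ computed above.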
We continue with two examples showing that the conditional expectation operator on $L^1(\mu\times\nu)$ may fail to restrict to a bounded operator on $L^p(\mu;L^q(\nu))$. The first was communicated to us by Gilles Pisier. \begin{example}\label{ex:Pisier} Let $(A,{\mathscr A}, \mu)$ and $(B,{\mathscr B}, \nu)$ be probability spaces and let $(C,{\mathscr C},\P) = (A,{\mathscr A}, \mu)\times (B,{\mathscr B}, \nu)$ be their product. Consider the infinite product $(C,{\mathscr C},\P)^{\mathbb N} = (C^{\mathbb N}, {\mathscr C}^{\mathbb N},\P^{\mathbb N})$; up to an obvious identification it coincides with $ (A^{\mathbb N},{\mathscr A}^{\mathbb N}, \mu^{\mathbb N})\times (B^{\mathbb N},{\mathscr B}^{\mathbb N}, \nu^{\mathbb N})$. Consider the sub-$\sigma$-algebra ${\mathscr F}^{\mathbb N}$ of ${\mathscr A}^{\mathbb N}\times{\mathscr B}^{\mathbb N} = {\mathscr C}^{\mathbb N}$, where ${\mathscr F}\subseteq {\mathscr A}\times{\mathscr B} = {\mathscr C}$ is a given sub-$\sigma$-algebra. Let $T:= {\mathbb E}(\cdot|{\mathscr F})$ and $T^{\mathbb N} := {\mathbb E}(\cdot|{\mathscr F}^{\mathbb N})$ be the conditional expectation operators on $L^1(\mu\times \nu)$ and $L^1(\mu^{\mathbb N}\times\nu^{\mathbb N})$, respectively. For a function $f\in L^\infty(\mu^{\mathbb N}\times \nu^{\mathbb N})$ of the form $f = f_1\otimes\cdots\otimes f_N\otimes {{\bf 1}}\otimes{{\bf 1}}\otimes \hdots$ with $f_n\in L^1(\mu\times \nu)$ for all $n=1,\dots,N$, we have $$ T^{\mathbb N} f = Tf_1 \otimes\cdots\otimes T f_N\otimes {{\bf 1}}\otimes{{\bf 1}}\otimes \hdots$$ By an elementary computation, $$ \Vert f\Vert_{L^p(\mu^{\mathbb N};L^q(\nu^{\mathbb N}))} = \prod_{n=1}^N \Vert f_n \Vert_{L^p(\mu;L^q(\nu))}$$ and $$\Vert T^{\mathbb N} f\Vert_{L^p(\mu^{\mathbb N};L^q(\nu^{\mathbb N}))} = \prod_{n=1}^N \Vert Tf_n \Vert_{L^p(\mu;L^q(\nu))}.$$ This being true for every $N\ge 1$, we see that $T^{\mathbb N}$ is bounded if and only if $T$ is contractive. Example \ref{ex:Qiu}, however, shows that the latter need not always be the case. \end{example} The second example is due to Tuomas Hyt\"onen: \begin{example}\label{ex:Tuomas} Let ${\mathscr B}$ be the Borel $\sigma$-algebra of $[0, 1)$. For $A \in {\mathscr B} \times{\mathscr B}$, let $$\widetilde A := \{(y, x) : (x, y) \in A\}$$ and let $${\mathscr F} := \{A \in {\mathscr B} \times{\mathscr B} :\, \widetilde A= A\}$$ be the symmetric sub-$\sigma$-algebra of the product $\sigma$-algebra.
Then ${\mathbb E}(\cdot|{\mathscr F} )$ does not restrict to a bounded operator on $L^p(L^q):= L^p(0, 1; L^q (0, 1))$ when $p \not = q$. To see this let $\widetilde f(x, y) := f (y, x)$. One checks that $$ {\mathbb E}(f |{\mathscr F} ) = \frac12(f + \widetilde f) \ge \frac12 \widetilde f$$ if $f \ge 0$. In particular, ${\mathbb E}(\phi \otimes \psi |{\mathscr F} ) \ge \frac12 \psi \otimes \phi$ if $\phi , \psi \ge 0$. Now let $\phi \in L^p(0, 1)$, $\psi \in L^q (0, 1)$ be positive functions such that only one of them is in $L^{p\vee q} (0, 1)$. If $f = \phi \otimes \psi$, then $$\Vert f \Vert_{L^p (L^q )} = \Vert \phi \Vert_{L^p} \Vert \psi \Vert_{L^q} < \infty$$ but $$\Vert {\mathbb E}(f |{\mathscr F} )\Vert_{ L^p (L^q )}\ge \frac12 \Vert \psi \otimes \phi \Vert_{ L^p(L^q )} = \frac12\Vert \psi \Vert_{ L^p }\Vert \phi \Vert_{ L^q} .$$ If $p> q$, then $\Vert \psi \Vert_{ L^p} = \infty$, and if $p < q$, then $\Vert \phi \Vert _{L^q} = \infty$, so that in either case $\Vert {\mathbb E}(f |{\mathscr F} )\Vert_{ L^p (L^q ) }= \infty$. \end{example} Let us check that \eqref{LYZ} fails in the above examples. As in Example \ref{ex:Qiu} let $A = B = \{0,1\}$ with ${\mathscr A} = {\mathscr B} = \{\emptyset, \{0\}, \{1\},\{0,1\}\}$, $\mu = \nu$ the measure on $\{0,1\}$ that gives each point mass $\frac12$, and ${\mathscr F}$ the $\sigma$-algebra generated by the three sets $\{(0,1)\}$, $\{(1,1)\}$, $\{(0,0),(1,0)\}$. Let $f: A\times B \to {\mathbb R}$ be defined by $$ f(0,0) = 1, \quad f(1,0) = 1, \quad f(0,1) = 0, \quad f(1,1)=1. $$ This function is ${\mathscr F}$-measurable, but $(I\otimes {\mathbb E}_\nu)f$ is not: $$(I\otimes {\mathbb E}_\nu)f(0,0) = \frac12, \ \ (I\otimes {\mathbb E}_\nu)f(1,0) = 1, \ \ (I\otimes {\mathbb E}_\nu)f(0,1) = \frac12, \ \ (I\otimes {\mathbb E}_\nu)f(1,1) = 1. $$ Thus \eqref{LYZ} fails in Example \ref{ex:Qiu}. It is clear that if we start from this example, \eqref{LYZ} also fails in Example \ref{ex:Pisier}. In Example \ref{ex:Tuomas}, \eqref{LYZ} fails as well, for an obvious reason: $(I\otimes {\mathbb E}_\nu)f$ depends on the first variable only, and a function of the first variable alone is symmetric only if it is (essentially) constant. An interesting example where condition \eqref{LYZ} is satisfied is the case when $A =[0,1]$ is the unit interval, $B = \Omega$ a probability space, and ${\mathscr F} = {\mathscr P}$ the progressive $\sigma$-algebra in $[0,1]\times \Omega$. From Theorem \ref{thm1} we therefore obtain the following result: \begin{corollary}\label{cor:LYZ} For all $1<p,q<\infty$ and all Banach spaces $X$, the conditional expectation with respect to the progressive $\sigma$-algebra on $[0,1]\times\Omega$ is bounded on $L^p(0,1;L^q(\Omega;X))$. \end{corollary} This quoted result of \cite{LYZ} plays an important role in the study of well-posedness and control problems for stochastic partial differential equations. For example, in \cite{Lu2}, it is used to show the well-posedness of stochastic Schr\"odinger equations with non-homogeneous boundary conditions in the sense of transposition solutions, in \cite{Lu1} it is applied to obtain a relationship between null controllability of stochastic heat equations, and in \cite{LYZ} it is used to establish a Pontryagin-type maximum principle for controlled stochastic evolution equations with non-convex control domain. As a consequence of (a special case of) \cite[Theorem A.3]{DY} we obtain that the assumptions of Theorem \ref{thm1} are also satisfied for the progressive $\sigma$-algebra ${\mathscr F} = {\mathscr P}$ if we replace $L^p(0,1;L^q(\Omega;X))$ by $L^p(\Omega;L^q(0,1;X))$. The quoted theorem is stated in terms of the predictable $\sigma$-algebra $\mathscr{G}$.
However, since every progressively measurable set $P\in\mathscr{P}$ is of the form $P = G \Delta N$ with $G\in \mathscr{G}$ and $N$ a null set in the product $\sigma$-algebra $\mathscr{F}\times \mathscr{B}([0,1])$ (see \cite[Lemma 3.5]{CW}), we have $L_\mathscr{G}^p(\Omega;L^q(0,1;X)) = L_\mathscr{P}^p(\Omega;L^q(0,1;X))$. Therefore, \cite[Theorem A.3]{DY} remains true if we replace the predictable $\sigma$-algebra by the progressive $\sigma$-algebra and we obtain the following result: \begin{corollary} For all $1<p,q<\infty$ and all Banach spaces $X$, the conditional expectation with respect to the progressive $\sigma$-algebra on $\Omega\times[0,1]$ is bounded on $L^p(\Omega;L^q(0,1;X))$. \end{corollary} \begin{proof} In the scalar-valued case we apply \cite[Theorem A.3]{DY} (with $J$ a singleton). The vector-valued case then follows from the observation, already made in the proof of Theorem \ref{thm2}, that \cite[Theorem 2.1.3]{HNVW} also holds for mixed $L^p(L^q)$-spaces. \end{proof} Our final example shows that condition (3) in Theorem \ref{thm2} fails for the pair $p=\infty$, $q=2$ even when $X$ is the scalar field. \begin{example} Let $\{\mathscr{F}_t\}_{t\in[0,1]}$ be the filtration generated by a one-dimensional standard Brownian motion $\{W(t)\}_{t\in[0,1]}$ defined on a probability space $(\Omega,\mathscr{F},{\mathbb{P}})$. Let $\mathscr{P}$ be the associated progressive $\sigma$-algebra on $\Omega\times [0,1]$. We will show that \begin{equation*} L^\infty_\mathscr{P}(\Omega;L^2(0,1)) \subsetneq (L^1_\mathscr{P}(\Omega;L^2(0,1)))^* \end{equation*} in the sense that the former is contained isometrically as a {\em proper} closed subspace of the latter. For $v\in L^1_\mathscr{P}(\Omega;L^2(0,1))$ consider the solution $x$ to the following problem: \begin{equation}\label{eq2} \left\{\begin{aligned} \,{\rm d} x(t) & =v(t)\,{\rm d} W(t), \quad t\in [0,1],\\ x(0) & =0. \end{aligned} \right. \end{equation} By the classical well-posedness theory of SDEs (e.g. \cite[Chapter V, Section 3]{Protter}), $x\in L^1_\mathscr{P}(\Omega;C([0,1]))$ and \begin{equation}\label{eq3} \Vert x\Vert_{L^1_\mathscr{P}(\Omega;C([0,1]))}\leq C\Vert v\Vert _{L^1_\mathscr{P}(\Omega;L^2(0,1))} \end{equation} for some constant $C$ independent of $v$. Let $\xi\in L^{\infty}_{\mathscr{F}_1}(\Omega)$. Define a linear functional $L$ on $L^1_\mathscr{P}(\Omega;L^2(0,1))$ as follows: $$ L(v):={\mathbb{E}}(\xi x(1)). $$ By \eqref{eq3}, $L$ is bounded. Suppose now, for a contradiction, that $(L^1_\mathscr{P}(\Omega;L^2(0,1)))^* = L^{\infty}_\mathscr{P}(\Omega;L^2(0,1))$ with equivalent norms. Then there is an $f\in L^{\infty}_\mathscr{P}(\Omega;L^2(0,1))$ such that \begin{equation}\label{eq4} L(v)={\mathbb{E}}\int_0^1 f(t)v(t)\,{\rm d} t \end{equation} for all $v\in L^1_\mathscr{P}(\Omega;L^2(0,1))$. On the other hand, by the martingale representation theorem there is a $g\in L_\mathscr{P}^2(\Omega;L^2(0,1))$ such that \begin{equation}\label{eq:MRT} \xi = {\mathbb{E}}(\xi) + \int_0^1 g(t)\,{\rm d} W(t). \end{equation} Take now $v\in L_\mathscr{P}^2(\Omega;L^2(0,1))$ in \eqref{eq2}. Then by It\^o's formula, \begin{equation}\label{eq5} {\mathbb{E}}(\xi x(1))= {\mathbb{E}}\int_0^1 g(t)v(t)\,{\rm d} t. \end{equation} Since \eqref{eq4} and \eqref{eq5} hold for all $v\in L_\mathscr{P}^2(\Omega;L^2(0,1))$, it follows that $f=g$ for almost all $(t,\omega)\in (0,1)\times\Omega$. Hence, $g\in L^{\infty}_\mathscr{P}(\Omega;L^2(0,1))$. This leads to a contradiction, since it would imply that the isometry from $\{\xi\in L_{{\mathscr F}_1}^2(\Omega): \, {\mathbb E} \xi = 0\}$ into $L_{\mathscr{P}}^2(\Omega;L^2(0,1))$ given by \eqref{eq:MRT} sends $\{\xi\in L_{{\mathscr F}_1}^\infty(\Omega):\, {\mathbb E} \xi = 0\}$ into $L^{\infty}_\mathscr{P}(\Omega;L^2(0,1))$. This is known to be false (see, e.g., \cite[Lemma A.1]{Fd}). \end{example} It would be interesting to determine an explicit representation for the dual of $L_\mathscr{P}^1(\Omega;L^2(0,1))$. \begin{remark} In \cite{LYZ}, the authors proved that $(L^1_\mathscr{P}(0,1;L^2(\Omega)))^* = L^\infty_\mathscr{P}(0,1;L^2(\Omega)).$ It seems that this result cannot be obtained by the method in this paper. \end{remark} {\em Acknowledgment} -- The authors thank Gilles Pisier for pointing out an error in an earlier version of the paper and communicating to us Example \ref{ex:Pisier} and Tuomas Hyt\"onen for showing us Example \ref{ex:Tuomas}.
\section{Combined JUNO and ORCA analysis} \label{sec:ana} \subsection{Combination strategy} \label{sec:ana:combination} As described in Secs.~\ref{sec:juno} and \ref{sec:orca}, the detectors involved in this combined analysis work under very different conditions. This is true both in terms of their detection techniques and backgrounds, and of the sources and energies of neutrinos relevant for each individual analysis. The only parameters common to both experiments are the neutrino oscillation parameters, which are the core of the present study. It is also important to note at this point that not all parameters used to describe standard neutrino oscillations have an impact on the results of this analysis. On one hand, JUNO has no sensitivity to either $\theta_{23}$ or $\delta_{\text{CP}}$ as this experiment measures $\bar{\nu}_e \rightarrow \bar{\nu}_e$ oscillations which do not depend on those parameters~\cite{An:2015jdp}. On the other hand, ORCA has negligible sensitivity to $\theta_{12}$ and $\Delta m^2_{21}$ as the measured $\nu_\mu + \bar{\nu}_\mu$ oscillations happen at a much smaller $L/E$ than the one required for the development of oscillations with a frequency given by $\Delta m^2_{21}$~\cite{Adrian-Martinez:2016fdl}. The four oscillation parameters that each impact only a single experiment ($\theta_{12}$, $\Delta m^2_{21}$, $\theta_{23}$ and $\delta_{\text{CP}}$) are accounted for implicitly in the $\chi^2$ function computation for JUNO and ORCA following the prescription outlined below, while the remaining two oscillation parameters, $\Delta m^2_{31}$ and $\theta_{13}$, have to be considered explicitly in the joint analysis. In JUNO, for every value of $\Delta m^2_{31}$ and $\theta_{13}$, the $\chi^2$ function is minimized using a grid with 61 uniformly spaced values of $\sin^2 \theta_{12}$ between 0.30225 and 0.31775. The value of $\Delta m^2_{21}$ is kept fixed at its assumed true value given that JUNO will be able to determine this parameter quickly. Studies have shown that the difference between profiling this parameter and keeping it fixed is smaller than about 0.1 units of $\chi^2$, which is negligible in the joint analysis. In ORCA, for every value of $\Delta m^2_{31}$ and $\theta_{13}$, the $\theta_{12}$ and $\Delta m^2_{21}$ values are kept fixed to their assumed true values given that ORCA has little sensitivity to those parameters, while $\theta_{23}$ and $\delta_{\text{CP}}$ are minimized without constraints. This minimization is performed twice, with the initial value of $\theta_{23}$ located in a different octant for each minimization, and only the smallest value is kept as the global minimum of the $\chi^2$ for ORCA. This is done to ensure that the minimizer is not trapped in a possible local minimum. In order to combine the separate JUNO and ORCA analyses, their obtained $\chi^2$ values at a fixed test value of $\Delta m^2_{31}$ and $\theta_{13}$ are calculated and summed. The true values of the oscillation parameters considered in this study are the best-fit values from Ref.~\cite{Esteban:2018azc} obtained ``with SK data'', unless it is explicitly stated otherwise. For added clarity, those parameters are explicitly shown in Tab.~\ref{tab:ana:true_osc}. Given that neither JUNO nor ORCA is as sensitive to $\theta_{13}$ as current reactor neutrino experiments~\cite{Adey:2018zwh,Bak:2018ydk,DoubleChooz:2019qbj}, a prior on that parameter was added to the combined $\chi^2$ from Ref.~\cite{Esteban:2018azc}.
The full expression used is shown in Eq.~\eqref{eq:ana:combined_chi2} where $\Delta m^2_{31}$ and $\theta_{13}$ are the tested values of those oscillation parameters, and the last term corresponds to the added prior with $\sin^2\theta_{13}^{\text{GF}}$ being the current global best fit for $\sin^2 \theta_{13}$ and $\sigma_{\sin^2 \theta_{13}^{\text{GF}}}$ its uncertainty. \begin{equation} \chi^2 \! \left(\Delta m^2_{31}, \theta_{13}\right) = \chi^2_{\text{JUNO}} \! \! \left(\Delta m^2_{31}, \theta_{13}\right) + \chi^2_{\text{ORCA}} \! \! \left(\Delta m^2_{31}, \theta_{13}\right) + \frac{\left(\sin^2 \theta_{13} - \sin^2 \theta_{13}^{\text{GF}}\right)^2}{\sigma^2_{\sin^2 \theta_{13}^{\text{GF}}}}. \label{eq:ana:combined_chi2} \end{equation} For each set of true parameters studied, the combined $\chi^2$ from Eq.~\eqref{eq:ana:combined_chi2} is calculated for each NMO in a $101 \times 21$ grid in the $\left(\Delta m^2_{31}, \sin^2 \theta_{13}\right)$ space, called the $\chi^2$ map, centered around the assumed true values of the oscillation parameters and spanning uniformly $\pm 10\%$ around the central value in $\Delta m^2_{31}$ and $\pm 6\%$ in $\sin^2 \theta_{13}$. More explicitly, when assuming true normal ordering with the best-fit values from Ref.~\cite{Esteban:2018azc}, the tested values of $\Delta m^2_{31}$ in the grid will run from ${-2.78080 \times 10^{-3}}$~eV$^2$ to ${-2.27520 \times 10^{-3}}$~eV$^2$ and from ${2.27520 \times 10^{-3}}$~eV$^2$ to ${2.78080 \times 10^{-3}}$~eV$^2$ with a step of ${0.00506 \times 10^{-3}}$~eV$^2$, and those of $\sin^2 \theta_{13}$ in the grid will be from $0.0210278$ to $0.0237122$ with a step of $0.0001342$. It is worth noting that when the true value of the oscillation parameters is changed, as in Sec.~\ref{sec:sens:orca}, or when assuming inverted ordering, the grid described above is shifted so that its central value corresponds to the true oscillation parameters. Using the $\chi^2$ map above, for each set of true oscillation parameters tested, the NMO sensitivity is determined by calculating the $\overline{\Delta \chi^2} = \chi^2_{WO} - \chi^2_{TO}$, where $\chi^2_{WO}$ ($\chi^2_{TO}$) is the minimum value of $\chi^2$ in the $\chi^2$ map in the wrong (true) ordering region of the map. The $\overline{\Delta \chi^2}$ is then converted into a median sensitivity $S(\sigma) = \sqrt{\overline{\Delta \chi^2}}$~\cite{Wilks:1938dza}. The same procedure is also used separately for ORCA and JUNO to obtain the corresponding non-combined sensitivities, computed for each experiment alone. The $\overline{\Delta \chi^2}$ notation is used above rather than $\Delta \chi^2$ to highlight the fact that an Asimov approximation is used throughout this paper; therefore, the median sensitivity is always the quantity calculated.
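Schematically, the procedure amounts to the following sketch (an illustration with placeholder names, not the analysis code; the two experiments' $\chi^2$ maps are assumed to be precomputed on the same grid, and true NO is assumed):

\begin{verbatim}
import numpy as np

def combined_nmo_sensitivity(chi2_juno, chi2_orca,
                             s13_grid, s13_gf, sigma_gf):
    # chi2_juno, chi2_orca: dicts mapping "NO"/"IO" to 101 x 21 chi^2
    # maps on the same (dm2_31, sin^2 theta_13) grid; true NO assumed.
    prior = ((s13_grid - s13_gf) / sigma_gf) ** 2    # theta_13 prior term
    combined = {o: chi2_juno[o] + chi2_orca[o] + prior
                for o in ("NO", "IO")}
    dchi2 = combined["IO"].min() - combined["NO"].min()  # chi2_WO - chi2_TO
    return np.sqrt(dchi2)                            # median sensitivity [sigma]
\end{verbatim}

In the ``simple sum'', by contrast, each experiment's $\overline{\Delta \chi^2}$ is minimized independently before adding, which discards the tension between the two $\Delta m^2_{31}$ best fits that drives the synergy discussed below.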
\begin{table}[t!] \begin{center} \caption{Global best-fit values for the oscillation parameters (from Ref.~\cite{Esteban:2018azc}), assumed as the true values in this analysis. Uncertainties are shown only for the parameter on which a prior based on the global best fit is used.} \label{tab:ana:true_osc} \begin{tabular}{l|cc} Parameter & Normal Ordering & Inverted Ordering \\ \hline $\sin^2 \theta_{23}$ & 0.563 & 0.565 \\ $\sin^2 \theta_{13}$ & 0.02237$^{+0.00066}_{-0.00065}$ & 0.02259$\pm$0.00065 \\ $\Delta m^2_{31}$ & $2.528 \times 10^{-3}$~eV$^2$ & $-2.435 \times 10^{-3}$~eV$^2$ \\ $\delta_\text{CP}$ & 221$^\circ$ & 282$^\circ$ \\ $\sin^2 \theta_{12}$ & \multicolumn{2}{|c}{0.310} \\ $\Delta m^2_{21}$ & \multicolumn{2}{|c}{$7.39 \times 10^{-5}$~eV$^2$} \\ \end{tabular} \end{center} \end{table} \subsection{Results} \label{sec:results} Fig.~\ref{fig:Dm31Scan} depicts the profile $\overline{\Delta \chi^{2}}$ scan over the test values of $\Delta m_{31}^2$ with 6 years of JUNO and ORCA data taking. The four profiles correspond to true normal (top) and inverted (bottom) orderings while fitting the true or wrong ordering. Since the Asimov dataset is used, when the true ordering is assumed in the fit both experiments obtain their best fit at the true $\Delta m_{31}^2$ and their $\overline{\Delta \chi^{2}}$ minima are zero. However, when the wrong ordering is assumed in the fit, the minima of $\overline{\Delta \chi^{2}}$ are no longer at zero and the sensitivity to the NMO is obtained from this difference. After 6 years, JUNO will be able to exclude the wrong ordering with a significance of $\sim 2.3\sigma$ for either NMO. On the other hand, ORCA is expected to reach a significance of more than $6\sigma$ ($3\sigma$) for true NO (IO). \begin{figure}[tbh] \flushright \includegraphics[width=0.95\linewidth]{Legend_Figs/Legend_systematicsORCA.png} \\ \vspace*{0.2cm} \includegraphics[width=0.49\linewidth]{ORCA_Figs/Dm31scan_NO_IO_ORCAsyst_v2_JP.pdf} \includegraphics[width=0.49\linewidth]{ORCA_Figs/Dm31scan_NO_NO_ORCAsyst_v2_JP.pdf}\\ \includegraphics[width=0.49\linewidth]{ORCA_Figs/Dm31scan_IO_IO_ORCAsyst_v2_JP.pdf} \includegraphics[width=0.49\linewidth]{ORCA_Figs/Dm31scan_IO_NO_ORCAsyst_v2_JP.pdf} \caption{$\overline{\Delta \chi ^2}$ profile for only JUNO (red), only ORCA (blue), and the combination of JUNO and ORCA (green) as a function of test values of $\Delta m^2_{31}$ for 6 years of data taking assuming baseline (solid) or optimistic (dashed) systematics. } \label{fig:Dm31Scan} \end{figure} Fig.~\ref{fig:Dm31Scan} also shows how the combination of JUNO and ORCA would exceed the NMO sensitivity of each experiment alone. The key advantage of the combination comes from the tension in the $\Delta m_{31}^2$ best fits of the two experiments when assuming the wrong ordering. This tension arises from the fact that each experiment observes neutrino oscillations starting from a different neutrino flavor ($\bar\nu_e$ for JUNO, $\nu_\mu + \bar\nu_\mu$ for ORCA). Due to this difference, the effective oscillation frequency will be a different combination of the various $\Delta m^2_{ij}$ for each experiment~\cite{Nunokawa:2005nx,deGouvea:2005hk}. Since the combination requires a single resulting $\Delta m_{31}^2$ best fit, this tension, together with the strong constraints on $\Delta m_{31}^2$ from both experiments, and particularly from JUNO, provides the synergy effect in which the combined $\overline{\Delta \chi^2}$ minimum is enhanced to a higher value than simply adding the $\overline{\Delta \chi^2}$ minima from each experiment. This latter scenario, in which the median sensitivity can be obtained as the square root of the sum, will be referred to as ``simple sum'' in the following discussion.
It is shown only to highlight the benefit from doing the combination between JUNO and ORCA properly. In Tab.~\ref{table:NMO6years}, the NMO sensitivities after 6 years of collected data are presented for the combination, each experiment standalone, and the ``simple sum'' of their sensitivities. The combination reaches $8\sigma$ for true NO and $5\sigma$ for true IO. This combined sensitivity exceeds the ``simple sum'' case, which only obtains $7\sigma$ for true NO and $4\sigma$ for true IO. More importantly, a $5\sigma$ significance is achieved for both NMO scenarios within 6 years of combined analysis while each experiment alone, or the ``simple sum'' of sensitivities, cannot achieve the same performance, sometimes even at significantly longer timescales. \begin{table} \centering \caption{Asimov median sensitivity to NMO after 6 years of data taking for each experiment alone, the ``simple sum'', and the combination of the two experiments, assuming the baseline scenario for systematics.} \label{table:NMO6years} \begin{tabular}{c|c|c|c|c} True NMO & JUNO, 8 cores&ORCA &Simple Sum&Combination\\\hline NO & $2.3\sigma$ & $6.5\sigma$ & $6.9\sigma$ &$7.8\sigma$\\ IO & $2.4\sigma$ & $3.6\sigma$ & $4.3\sigma$ &$5.1\sigma$ \\ \end{tabular} \end{table} The time evolution of the NMO sensitivity for JUNO, ORCA, and their corresponding combination is presented in Fig.~\ref{fig:NMO_Time} assuming that the two experiments start at the same time. JUNO alone would need 6--10 years of operation to reach 3$\sigma$ of NMO sensitivity. ORCA has the capability to reach a $5\sigma$ significance after 3~years in the case of true NO. However, it would take more than 10~years of exposure to reach $5\sigma$ sensitivity in the case of IO. Due to the synergy effect discussed above, the combination would help significantly to reduce the time needed to reach a $5\sigma$ NMO sensitivity when compared to ORCA, especially if the neutrino mass ordering is inverted. With the combined analysis, a $5\sigma$ significance can be obtained within 2 (6) years in the case of true NO (IO) respectively. \begin{figure}[t] \flushright \includegraphics[width=0.95\linewidth]{Legend_Figs/Legend_systematicsORCA.png} \\ \vspace*{0.2cm} \includegraphics[width=0.49\linewidth]{ORCA_Figs/JUNO-8cores_ORCA_NO_Time_v4_JP.pdf} \includegraphics[width=0.49\linewidth]{ORCA_Figs/JUNO-8cores_ORCA_IO_Time_v4_JP.pdf} \caption{NMO sensitivity as a function of time for only JUNO (red), only ORCA (blue), and the combination of JUNO and ORCA (green), assuming baseline (solid) or optimistic (dashed) systematics.} \label{fig:NMO_Time} \end{figure} As discussed in Sec.~\ref{ORCAanalysis}, the ORCA analysis is also performed using a set of systematics similar to those of Ref.~\cite{Bezerra:2019dao}, as a cross-check for an optimistic approach. Fig.~\ref{fig:Dm31Scan} and Fig.~\ref{fig:NMO_Time} show that both the optimistic and the baseline systematics give a very similar $\Delta \chi^2$ minimum value and thus yield the same NMO sensitivity for the ORCA-only analysis. However, the optimistic approach provides a much tighter constraint on $\Delta m_{31}^2$, as shown in Fig.~\ref{fig:Dm31Scan}, which causes the combination to reach sensitivities that are 1--2$\sigma$ higher than in the case of the baseline scenario. This comes from the difference in the implementation of the energy scale systematics. 
The energy scale uncertainty applied to the detector response (baseline) is more strongly correlated with $\Delta m_{31}^2$ than the one applied to the unoscillated flux (optimistic). \section{Conclusions} \label{sec:end} This paper presents an evaluation of the sensitivity to the neutrino mass ordering achieved by a combined analysis of the JUNO and KM3NeT/ORCA experiments. It is worth pointing out explicitly that in all cases the combined analysis is more powerful than simply adding the sensitivities of the two experiments. As discussed above, this is due to the tension that arises in the $\Delta m^2_{31}$ measurement between JUNO and ORCA when the wrong neutrino mass ordering is assumed. The results show that this combination significantly reduces the time required to reach a $5\sigma$ determination of the neutrino mass ordering for any value of the oscillation parameters. In all cases, a $5\sigma$ measurement can be obtained within 6~years for the combined analysis, while it could take more than 10~years using only ORCA data, depending on the true ordering. The gain in time is larger in cases where ORCA alone would require a longer time to reach a $5\sigma$ sensitivity due to the uncertainty on the $\theta_{23}$ value. In the favorable case of true normal ordering and $\theta_{23}$ in the second octant, a $5\sigma$ NMO determination would be feasible after less than 2~years of data taking with the combined analysis. In this favorable scenario, which also corresponds to the current global best-fit value, the neutrino mass ordering would be determined at least a year ahead of what can be done using only ORCA data. The boost in the NMO sensitivity obtained by combining JUNO and ORCA presented in this study is in line with what has been presented by previous studies considering the combination of JUNO with the IceCube Upgrade or with PINGU in Refs.~\cite{Blennow:2013vta,Bezerra:2019dao}. However, given the differences between PINGU and ORCA, it is important to confirm the result also for the combination of JUNO and ORCA. Of particular interest is the different treatment of the energy scale systematics between this and previous studies. This uncertainty directly impacts the $\Delta m^2_{31}$ determination with ORCA and thus also the combined result. As shown in this paper, changing the treatment of this systematic uncertainty from an optimistic to a more realistic scenario may significantly affect the power of the combination of JUNO and ORCA. Nevertheless, a $5\sigma$ determination of the neutrino mass ordering can be effectively reached even in the ORCA baseline scenario for systematics. Because the gain in time to reach the determination of the neutrino mass ordering in the combination of JUNO and ORCA does not come exclusively from each experiment's own ability to determine the neutrino mass ordering, the combination is sensitive to systematic uncertainties and detector effects in a different way than either experiment is independently. For instance, even if the JUNO energy resolution is critical for the measurement of the neutrino mass ordering using only JUNO data, it has only a small impact on the combined analysis. Conversely, changing the ORCA systematics between the optimistic and baseline scenarios has a small impact on the power of ORCA alone to determine the neutrino mass ordering; however, it has a larger impact on the combined analysis.
These differences arise from the fact that the combination depends strongly on the measurement of $\Delta m^2_{31}$ by each experiment rather than simply on their measurements of the neutrino mass ordering directly. In the cases where the gain in time to reach $5\sigma$ is only a few years, the JUNO-ORCA combination remains particularly interesting, as it provides a largely independent validation of the result obtained by the ORCA experiment alone, with a different dependence on the systematic uncertainties. \section{Introduction} \label{sec:intro} The discovery of neutrino flavor oscillations, implying that neutrinos are massive particles, is so far one of the few observational hints towards physics beyond the Standard Model. As such, it has potentially far-reaching implications in many aspects of fundamental physics and cosmology, from the matter-antimatter asymmetry in the Universe to the naturalness problem of elementary particles (see {\it e.g.} Refs.~\cite{Gonzalez-Garcia:2002bkq,Fukugita,Mohapatra}). Since the first conclusive observations of neutrino oscillations at the turn of the century~\cite{Fukuda:1998mi,Ahmad:2002jz,Eguchi:2002dm}, a variety of experiments targeting solar, reactor, atmospheric and accelerator neutrinos have achieved an increasingly precise determination of the parameters of the neutrino flavor mixing matrix~\cite{Zyla:2020zbs,Esteban:2020cvm,deSalas:2020pgw,Capozzi:2017ipn}. Despite this tremendous progress, some fundamental properties of neutrinos are yet to be determined, such as their absolute masses, whether they are Majorana particles and therefore their own antiparticles, the existence and strength of CP-violation in the neutrino sector, and the ordering of the masses ($m_1$, $m_2$ and $m_3$) of the neutrino mass eigenstates (respectively, $\nu_1$, $\nu_2$ and $\nu_3$): either normal ordering (NO, $m_1 < m_2 < m_3$) or inverted ordering (IO, $m_3 < m_1 < m_2$). This last question is a prime experimental goal because its determination would have direct consequences on, {\it e.g.}, the measurement of leptonic CP violation in future long baseline experiments~\cite{Barger:2001yr} and the interpretation of results from planned experiments searching for neutrino-less double-beta decay to establish the Dirac vs.\@{} Majorana nature of neutrinos~\cite{Dolinski:2019nrj}. The measurement of the neutrino mass ordering (NMO) is on the agenda of several ongoing neutrino experiments in the GeV energy domain that probe long-baseline \mbox{$\nu_\mu$ -- $\nu_e$} oscillations in Earth matter. Such experiments are sensitive to the atmospheric mass splitting $\Delta m^2_{31}$~\cite{deSalas:2018bym}. However, none of these experiments alone, either accelerator-based (such as T2K~\cite{Abe:2019ffx} or NO$\nu$A~\cite{Acero:2019ksn}) or using atmospheric neutrinos (such as Super-Kamiokande~\cite{Abe:2017aap} or IceCube~\cite{Aartsen:2014yll}), has the capability to unambiguously resolve the NMO (in other words, the sign of $\Delta m^2_{31}$) within the next few years. Even combining all available data, including those from reactor experiments, into global neutrino oscillation fits has so far yielded only a mild preference for normal ordering, which has faded away again since the inclusion of the latest results of T2K~\cite{Abe:2021gky} and NO$\nu$A~\cite{Kolupaeva:2020pug}.
Considering that a high-confidence ($>5\sigma$) determination of the NMO with the next-generation accelerator experiments DUNE~\cite{Acciarri:2015uup}, T2HK~\cite{Abe:2015zbg} and T2HKK~\cite{Abe:2016ero} is only envisaged for 2030 or beyond, alternative paths to the NMO measurement are being pursued on a shorter timescale. JUNO~\cite{An:2015jdp,Djurcic:2015vqa,Abusleme:2021zrw} and KM3NeT/ORCA~\cite{Adrian-Martinez:2016fdl} are the two next-generation neutrino detectors aiming at addressing the NMO measurement within this decade. ORCA, the low-energy branch of the KM3NeT network of water Cherenkov neutrino telescopes, will determine the NMO by probing Earth matter effects on the atmospheric neutrino oscillations in the GeV energy range. JUNO is a medium-baseline ($\sim 53$~km) reactor neutrino experiment that is sensitive to the NMO through the interplay between the fast oscillations driven by $\Delta m^2_{31}$ and $\Delta m^2_{32}$ in the $\bar{\nu}_e$ disappearance channel, where matter effects play only a small role~\cite{Li:2016txk}. As reactor $\bar{\nu}_e$ disappearance measurement is not affected by CP violation~\cite{An:2015jdp}, the JUNO measurement will be independent of the unknown CP violating phase $\delta_\mathrm{CP}$. Both detectors are currently under construction and target completion within the first half of this decade. The JUNO detector is planned to be completed in 2022, while ORCA is foreseen to be deployed incrementally until 2025, with 6 out of the total 115 detection lines already installed and taking data~\cite{ICRC_ORCA}. Combining the data from the two experiments is not only motivated by their almost simultaneous timelines, but also by the gain in sensitivity that arises from the complementarity of their approaches to the measurement of the NMO. This boost essentially comes from the expected tension between ORCA and JUNO in the best fit of $\vert\Delta m^2_{31}\vert$ from the $\bar\nu_e$ and $\nu_\mu$ disappearance channels when the assumed ordering is wrong. Provided that the measurement uncertainties in each experiment are small enough, the wrong mass ordering can be excluded with a high confidence level from the combination of the two datasets, even if the intrinsic NMO sensitivity of each experiment would not reach that level. This effect was first pointed out in relation with accelerator neutrino experiments ~\cite{ Nunokawa:2005nx, deGouvea:2005hk}, then reassessed in the context of the combination of a reactor experiment (Daya Bay II, now evolved into JUNO) and an atmospheric neutrino experiment (PINGU~\cite{TheIceCube-Gen2:2016cap}, a proposed low-energy extension of the IceCube neutrino telescope), showing that a strong boost in NMO sensitivity can indeed be reached with a combined fit~\cite{Blennow:2013vta}. The potential of this method was further explored in a combined study with JUNO and PINGU using detailed simulation tools for both experiments~\cite{Bezerra:2019dao}, leading to the same conclusion. In this paper, a complete study of the combined sensitivity of JUNO and ORCA to the NMO is presented, based on the same theoretical approach and using up-to-date detector configurations and expected performances. The main features and detection principles of the JUNO experiment are described in Sec.~\ref{sec:juno}, along with the standalone $\chi^2$ analysis used to determine the JUNO-only NMO sensitivity. 
The same is done for ORCA in Sec.~\ref{sec:orca}, based on the updated detector configuration and the simulation and reconstruction tools used for the latest NMO sensitivity study, following Ref.~\cite{Aiello:2021jfn}. In this case, special attention is paid to the treatment of systematic uncertainties, in particular with the introduction of a systematic error on the measured energy scale at the detector level (not considered in Ref.~\cite{Bezerra:2019dao} for PINGU). That systematic effect is shown to degrade the precision of the $\Delta m^2_{31}$ measurement of ORCA alone, thereby affecting the sensitivity of the combined study. The JUNO/ORCA combined $\chi^2$ analysis is presented in Sec.~\ref{sec:ana} for the baseline reactor configuration of JUNO, and the most realistic systematics treatment adopted for ORCA. The enhanced sensitivity achieved with the combined $\chi^2$ analysis over the simple sum of individual $\chi^2$ is also demonstrated. Sec.~\ref{sec:sens} presents further sensitivity studies exploring the impact of the energy resolution in JUNO and of changing the number of reactor cores towards a more optimistic scenario. The stability of the combined performance against the true values of $\theta_{23}$ and $\Delta m^2_{31}$ is also addressed. The conclusions drawn from these results are presented in Sec.~\ref{sec:end}. \section{Jiangmen Underground Neutrino Observatory (JUNO)} \label{sec:juno} The Jiangmen Underground Neutrino Observatory~\cite{An:2015jdp,Djurcic:2015vqa,Abusleme:2021zrw} (JUNO) is a multipurpose experiment being built in the south of China. Among its goals is the determination of the NMO via the precise measurement of $\bar\nu_e$ from the Yangjiang and the Taishan Nuclear Power Plants (NPP) located 53~km away from the detector. The JUNO detector is divided into three parts: the Central Detector, the Water Cherenkov Detector and the Top Tracker. The Central Detector is composed of 20~kton of liquid scintillator placed in a 35.4~m diameter acrylic sphere. Around this acrylic sphere, about 18k 20-inch and 26k 3-inch photomultiplier tubes (PMTs) monitor the liquid scintillator volume to detect neutrino interactions occurring inside, in particular the Inverse Beta Decay (IBD) interactions produced by $\bar\nu_e$ from the NPPs. The IBD interactions are detected in the JUNO Central Detector via the prompt detection of the scintillation light of the positron produced in the interaction and its subsequent annihilation, along with the delayed detection of a 2.2~MeV gamma-ray produced in the neutron capture on hydrogen and subsequent de-excitation of the deuteron. Due to the kinematics of the IBD, most of the available energy of the incident $\bar\nu_e$ is transferred to the positron. Therefore, a good energy resolution for the visible energy of the prompt signal is critical for a precise measurement of neutrino oscillations, as will be discussed later. The measured visible energy is smaller than the incident $\bar\nu_e$ energy by about 0.8~MeV, due to the mass difference between the initial and final particles ($-1.8$~MeV) and to the light emitted in the positron annihilation ($+1.0$~MeV).
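Explicitly, neglecting the small neutron recoil, the standard IBD kinematics behind these numbers reads
\[
E_{\rm vis} \simeq E_{\bar\nu_e} - \left(m_n - m_p + m_e\right) + 2 m_e \simeq E_{\bar\nu_e} - 1.80~\text{MeV} + 1.02~\text{MeV} \simeq E_{\bar\nu_e} - 0.78~\text{MeV},
\]
where the positron kinetic energy carries the neutrino energy above threshold and the $2 m_e$ term accounts for the annihilation gamma-rays.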
The acrylic sphere is placed in the center of the Water Cherenkov Detector, a cylindrical ultra-pure water pool (44~m height, 43.5~m diameter), which serves to shield the Central Detector from external radioactivity and to provide a veto for atmospheric muons and for muon-induced background such as cosmogenic nuclei and fast neutrons. The Water Cherenkov Detector and the Top Tracker, located on top of it to precisely track atmospheric muons, compose the Veto System of JUNO. In addition to the JUNO detector described above, the project also includes the Taishan Antineutrino Observatory (JUNO-TAO) detector~\cite{Abusleme:2020bzt}. This detector will be installed at a distance of 30~m from one of the Taishan reactors to determine the reactor $\bar\nu_e$ spectrum with a better energy resolution than JUNO, effectively reducing the impact of possible unknown substructures in the reactor neutrino spectra~\cite{Dwyer:2014eka} on the measurement of neutrino oscillations. The precise distance of JUNO to each reactor core of the Yangjiang and Taishan NPPs, provided in Ref.~\cite{An:2015jdp}, is used in this study rather than just the distance to the NPP complex. In addition to these, the NPPs of Daya-Bay at 215~km and Huizhou at 265~km will also contribute to the total number of detected reactor neutrinos. However, given the much larger distance, the oscillation pattern will not be the same and these neutrinos are part of JUNO's intrinsic background. The Yangjiang NPP is already fully operational, with 6 reactor cores and a total of 17.4~GW of thermal power, as is the Daya-Bay NPP, with a similar total thermal power. The Taishan NPP already has 2 of the 4 initially foreseen reactor cores in operation, totaling a thermal power of 9.2~GW. At present, it is unknown if the remaining 2 reactor cores, which would bring an additional 9.2~GW of thermal power, will be built. The Huizhou NPP is under construction and is expected to be ready by about 2025~\cite{Abusleme:2021zrw} with 17.4~GW thermal power. \subsection{Modeling JUNO for this study} \label{sec:juno:modeling} For this study, the performance of JUNO closely follows that provided in Ref.~\cite{An:2015jdp}. In particular, a 73\% IBD detection efficiency and an energy resolution of $3\%/\sqrt{E/\text{MeV}}$ are assumed in the Central Detector, which contains $1.5 \times 10^{33}$ target protons. Given that the energy resolution is critical for the JUNO sensitivity, the impact of a $\pm 0.5\%/\sqrt{E/\text{MeV}}$ change in energy resolution is discussed in Sec.~\ref{sec:sens:juno}. As in Ref.~\cite{An:2015jdp}, the nominal running time of the experiment is considered to be 1000~effective days every 3~years. The observed $\bar\nu_e$ spectrum in JUNO will be produced by the interplay of the spectrum emitted by the NPPs, the IBD cross-section~\cite{Vogel:1999zy}, and the neutrino oscillations to be measured. To determine the spectrum produced by the NPPs, the ILL\footnote{Institut Laue-Langevin} $\bar\nu_e$ spectra~\cite{VonFeilitzsch:1982jw,Schreckenbach:1985ep,Hahn:1989zr} are used, given that their flux normalization is in better agreement with previous data~\cite{Adey:2018qct}. The fine structure of the spectrum will be precisely determined independently using JUNO-TAO, as mentioned above, and is therefore not explicitly included in the spectrum shape. The spectrum is calculated assuming a fission fragment content of $\ce{^{235}U} : \ce{^{239}Pu} :\ce{^{238}U} : \ce{^{241}Pu} = 0.564 : 0.304 : 0.076 : 0.056$, similar to that of Daya-Bay~\cite{An:2013zwz}, and using the fission energies for each of these isotopes from Ref.~\cite{Kopeikin:2004cn}.
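Schematically, the predicted prompt spectrum is assembled as flux $\times$ cross-section $\times$ survival probability, followed by detector smearing. The sketch below illustrates this chain; it is ours, with the \verb|flux|, \verb|xsec| and \verb|p_ee| callables standing in for the ILL spectra, the cross-section of Ref.~\cite{Vogel:1999zy} and the oscillated survival probability, and the energy grid assumed to start above the 1.8~MeV IBD threshold.

\begin{verbatim}
import numpy as np

def predicted_prompt_spectrum(e_nu, flux, xsec, p_ee, a_res=0.03):
    # e_nu: uniform grid of neutrino energies [MeV].
    rate = flux(e_nu) * xsec(e_nu) * p_ee(e_nu)   # unsmeared IBD rate
    e_vis = e_nu - 0.78                           # prompt visible energy
    sigma = a_res * np.sqrt(e_vis)                # 3%/sqrt(E) resolution
    # Fold with a Gaussian response to obtain the reconstructed spectrum.
    smeared = np.zeros_like(rate)
    for r, ev, s in zip(rate, e_vis, sigma):
        smeared += r * np.exp(-0.5 * ((e_vis - ev) / s) ** 2) \
            / (np.sqrt(2 * np.pi) * s)
    return smeared * (e_vis[1] - e_vis[0])        # bin-width normalization
\end{verbatim}

In the full analysis this is done per reactor core, weighting each contribution by its thermal power and baseline.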
In addition to reactor neutrinos, the IBD event selection contains some background events. The backgrounds considered in this analysis are taken from Ref.~\cite{An:2015jdp}, in terms of their rate, shape, and uncertainties. The three dominant components of this background are cosmogenic events, geo-neutrinos, and accidental coincidences mainly from radioactivity, with expected rates of about 1.6, 1.1, and 0.9~events per day, respectively. For reference, the Daya-Bay and Huizhou NPPs are expected to yield a total of 4.6~events per day in JUNO, while the Taishan and Yangjiang NPPs are expected to produce a total of 54.3~events per day, assuming the normal-ordering world best-fit oscillation parameters~\cite{Esteban:2018azc} and considering all 4~Taishan NPP reactors operational. A notable difference from Ref.~\cite{An:2015jdp} is our conservative choice to consider, as baseline, only the 2 existing reactors in the Taishan NPP (referred to as ``JUNO 8~cores'' hereafter) rather than the foreseen 4 reactors (referred to as ``JUNO 10~cores'' hereafter). This reduces the total number of expected signal neutrinos by about 25\% in this study in comparison to Ref.~\cite{An:2015jdp}. For completeness, the JUNO official baseline with a total of 4 Taishan reactors is also considered and discussed in Sec.~\ref{sec:sens:juno}. Although not yet completed, the Huizhou NPP is considered to be active for the whole duration of JUNO in both cases, even though JUNO may start taking data before its completion. This is again a conservative assumption given that the Huizhou NPP $\bar\nu_e$ are an intrinsic background to the neutrino oscillation measurements in JUNO. The expected event distribution as a function of the visible energy is shown in Fig.~\ref{fig:juno:event_distribution}, for 6~years of data with 8~cores and assuming the best-fit oscillation parameters from Ref.~\cite{Esteban:2018azc} for normal ordering. The events corresponding to the expected remaining non-reactor neutrino background are also highlighted in the plot. They are concentrated in the lower energy part of the measured spectrum where the energy resolution of JUNO is not sufficient to see the rapid oscillation pattern. \begin{figure} \begin{center} \includegraphics[width=.7\textwidth]{JUNO_Figs/JUNO_evt_distribution.pdf} \caption{Expected event distribution for 6~years of data with JUNO 8~cores as a function of the visible energy of the prompt signal. The current world best-fit~\cite{Esteban:2018azc} oscillation parameters for normal ordering are assumed. The shaded region corresponds to the non-reactor neutrino background events. } \label{fig:juno:event_distribution} \end{center} \end{figure} \subsection{Sensitivity analysis} In JUNO, the measurement of the neutrino oscillation parameters and, in particular, of the neutrino mass ordering, is done by fitting the measured positron energy spectrum. As shown in Fig.~\ref{fig:juno:event_distribution}, this spectrum exhibits two notable features. The first is a slow oscillatory behavior due to $\theta_{12}$ and $\Delta m^2_{21}$, which causes a large deficit in the number of events over the whole energy range shown in Fig.~\ref{fig:juno:event_distribution} and has a minimum at about 2.2~MeV. The second is a rapid oscillatory behavior due to $\theta_{13}$ and $\Delta m^2_{31}$ that becomes visible in the figure above about 2~MeV, and of which several periods are shown. If inverted ordering were assumed instead of normal ordering in Fig.~\ref{fig:juno:event_distribution}, the positions of the rapid oscillation maxima and minima would change, as the oscillation frequencies producing the pattern would be different~\cite{An:2015jdp}. This is the signature that JUNO will use to measure the NMO.
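Both features, and the NMO signature itself, are captured by the standard three-flavor vacuum survival probability, quoted here for reference with $\Delta_{ij} \equiv \Delta m^2_{ij} L / 4E$:
\[
P_{\bar\nu_e \to \bar\nu_e} = 1 - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21} - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31} + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right).
\]
The NMO information resides in the interference between the $\Delta_{31}$ and $\Delta_{32}$ terms, whose frequencies are tied together by the sign-dependent relation $\Delta m^2_{31} = \Delta m^2_{32} + \Delta m^2_{21}$.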
If inverted ordering is assumed rather than normal ordering in Fig.~\ref{fig:juno:event_distribution}, the position of the rapid oscillation maxima and minima would change, as the oscillation frequencies producing the pattern would be different~\cite{An:2015jdp}. This is the signature that JUNO will use to measure the NMO. The sensitivity of JUNO to determine the NMO is calculated using the $\chi^2$ difference between the data being fitted under the two ordering hypotheses. This strategy is also applied for the combined analysis. While this procedure will be further detailed in Sec.~\ref{sec:ana}, it is useful to describe already at this point the $\chi^2$ function used in JUNO. It is also worth pointing out here that in this study no statistical fluctuations are added to any of the simulated samples, and the expected statistical uncertainties at various timelines are taken into account in the computed $\chi^2$ values. This approach is commonly referred to as the ``Asimov'' approach~\cite{Cowan:2010js}, and the ``Asimov'' term will be used in this paper to highlight that this approximation is being used. For this analysis the measured JUNO visible energy spectrum is divided into $n=207$~bins of 0.03~MeV between 1.00~MeV and 7.21~MeV. The JUNO $\chi^2$ function used in this analysis has the following form: \begin{equation} \chi^2 = \bm{\Delta} \times \bm{\mathcal{M}}^{-1} \times \bm{\Delta}^T. \label{eq:juno:chi2} \end{equation} In this expression, $\bm{\Delta}$ is a $1 \times n$ matrix whose content is the difference between the observed and expected rates. $\bm{\Delta}$ is defined as $\bm{\Delta} = \bm{D} - (\bm{S} + \bm{B})$, where $\bm{D}$, $\bm{S}$, and $\bm{B}$ correspond, respectively, to the data, the signal prediction for a given set of oscillation parameters, and the expected background. $\bm{D}$ is given by the Asimov sample for the true value of the oscillation parameters, while $\bm{S}$ and $\bm{B}$ correspond to the expected signal given the test oscillation parameters and the background. $\bm{\Delta}^T$ is the transpose of $\bm{\Delta}$. The matrix $\bm{\mathcal{M}}$, whose inverse appears in Eq.~\eqref{eq:juno:chi2}, is an $n \times n$ covariance matrix. This matrix is calculated as $\bm{\mathcal{M}} = \bm{\mathcal{M}}_{stat} + \bm{\mathcal{M}}_{S} + \bm{\mathcal{M}}_{B}$. $\bm{\mathcal{M}}_{stat}$ corresponds to the statistical uncertainty in each bin from the total expected number of events ($\bm{S} + \bm{B}$). $\bm{\mathcal{M}}_{S}$ and $\bm{\mathcal{M}}_{B}$ correspond, respectively, to the covariance matrices of the signal and background as described in Ref.~\cite{An:2015jdp}. This JUNO-only analysis was validated by comparing the NMO sensitivity with previous results in Refs.~\cite{An:2015jdp,Bezerra:2019dao}. Cross-checks have been performed using the same set of oscillation parameters and reactor cores as in those references, showing agreement within 0.1--0.5 $\chi^2$ units. \section{Oscillation Research with Cosmics in the Abyss (ORCA)} \label{sec:orca} The KM3NeT Collaboration is currently building a set of next-generation water Cherenkov telescopes in the depths of the Mediterranean Sea~\cite{Adrian-Martinez:2016fdl}. Two three-dimensional arrays of PMTs will be deployed: ARCA and ORCA (for \underline{A}stroparticle and \underline{O}scillation \underline{R}esearch with \underline{C}osmics in the \underline{A}byss, respectively). ARCA is a gigaton-scale detector which will mainly search for astrophysical neutrinos in the \mbox{TeV--PeV} energy range.
ORCA, subject of this study, is a denser and smaller array (Mton-scale) optimized for oscillation physics with atmospheric neutrinos at energies above 1~GeV. The ORCA detector will contain 115 detection units, each of them being a vertical line about 200~m long, anchored to the seabed and supporting 18 digital optical modules. These modules are glass spheres that contain 31~PMTs and related electronics. The array of detection units is arranged in a cylindrical shape with an average radius of 115~m and an average distance between the lines of 20~m. On each detection unit, the vertical spacing between the optical modules is around 9~m. The total instrumented volume of ORCA covers about $6.7 \times 10^6$ m$^3$, corresponding to 7.0 Mtons of seawater. The ORCA PMTs detect the Cherenkov light induced by charged particles originating in neutrino interactions in and around the detector. Such detectable interactions occur mainly through charged current (CC) and neutral current (NC) deep inelastic scattering processes off nucleons in water molecules~\cite{Formaggio:2013kya}. The pattern and timing of the digitized PMT output, or hits, recorded by the digital optical modules are used to identify neutrino events and reconstruct their energy and angular direction. The topologies of neutrino-induced events in the energy range of interest for ORCA can be separated into two broad classes. If the final state includes a sufficiently energetic muon, a track-like signature will be produced. This is the case for CC interactions of $\nu_\mu/\overline{\nu}_{\mu}$, and of $\nu_\tau/\overline{\nu}_\tau$ with subsequent muonic decay of the tau lepton. Shower-like events correspond to all other interaction channels, where only hadronic and electromagnetic showers are produced. This includes CC interactions of $\nu_e/\overline{\nu}_e$ and of $\nu_\tau/\overline{\nu}_\tau$ with non-muonic decays, as well as NC interactions of all flavors. \subsection{Modeling the ORCA detector for this study} The analysis presented here is based on a detailed Monte Carlo (MC) simulation of the ORCA detector response to atmospheric neutrinos. The generation of neutrino interactions in seawater in the energy range \mbox{1--100~GeV} is performed using gSeaGen~\cite{Aiello:2020kch}, a GENIE~\cite{Andreopoulos:2009rq}-based software developed within the KM3NeT Collaboration. All secondary particles are tracked with the package KM3Sim~\cite{Tsirigotis:2011zza}, based on GEANT4~\cite{Agostinelli:2002hh}, which simulates and propagates the photons induced by the Cherenkov effect, accounting also for light absorption and scattering, and records the signals reaching the PMTs. The optical background due to radioactive decays of $^{40}$K naturally present in seawater is simulated by adding uncorrelated (single-PMT) and correlated (inter-PMT) random noise hits, based on the rates measured with the first deployed detection units~\cite{Ageron:2019ksi}. The background of atmospheric muons is also simulated. The PMT response, readout, and triggering are simulated using KM3NeT custom software packages. The resulting trigger rate is about 54~kHz for noise events, 50~kHz for atmospheric muons, and about 8~mHz for atmospheric neutrinos. The total simulated sample includes more than 15~years of atmospheric neutrinos, 1.4~days of noise events, and 14~days of atmospheric muons, which is sufficient to probe the background contamination at the percent level~\cite{Aiello:2021jfn}.
The MC neutrino sample and event selection adopted in this study are identical to those used in the latest ORCA NMO sensitivity study and are extensively described in Ref.~\cite{Aiello:2021jfn}. From the detected signals, the energy and direction of the events are reconstructed using dedicated algorithms developed for shower-like~\cite{Hofestaedt} and track-like~\cite{Quinn} event topologies. A set of preselection cuts is applied, requiring that events are well reconstructed, with an up-going direction (corresponding to a reconstructed zenith angle $\theta_\mathrm{reco} >90^\circ$), and satisfy certain containment criteria within the detector instrumented volume. Events that pass the preselection cuts are then processed by a classification algorithm based on random decision forests for the determination of event topologies (or particle identification --- PID) and background rejection. Based on a single score scale $\eta$ from 0 to 1 provided by the classifier, the sample is then divided into 3 classes of events, also called PID classes: tracks ($0.7 < \eta \leq 1$), intermediate events ($0.3<\eta\leq 0.7$), and showers ($0\leq\eta\leq 0.3$). The two extreme PID classes provide a higher purity level for the genuine track-like and shower-like events. As discussed in Ref.~\cite{Aiello:2021jfn}, the selection and background suppression cuts are sufficient to reduce pure noise events to a negligible rate and result in an atmospheric muon background contamination of only 3\%. The impact of such contamination is expected to be insignificant, and atmospheric muons are not included in this study. The analysis relies on the computation of the expected energy and zenith angle $(E,\theta)$ distributions of atmospheric neutrino events for each PID class. Such distributions are obtained with the SWIM package~\cite{Bourret:2018kug}, a KM3NeT analysis framework developed for calculating event distributions for a given hypothesis using a full MC approach to model the detector response. The incoming atmospheric neutrino flux is taken from Ref.~\cite{Honda:2015fha}, for the Gran Sasso site without mountain over the detector, assuming minimum solar activity. The probabilities of neutrino flavor transitions along their path through the Earth are computed with the software OscProb~\cite{OscProb}, using a radial model of the Earth with 42 concentric shells of constant electron density, for which mass density values are fixed and follow the Preliminary Reference Earth Model~\cite{Dziewonski:1981xy}. The rate of events interacting in and around the detector is computed for each interaction type $\nu_x \in \{\accentset{\brobor}\nu_e$~CC, $\accentset{\brobor}\nu_\mu$~CC, $\accentset{\brobor}\nu_\tau$~CC, $\accentset{\brobor}\nu$~NC\} using neutrino-nucleon cross-sections weighted for water molecules as obtained with GENIE. In order to obtain the expected event distribution as observed by the detector, the SWIM package uses a binned detector response matrix built from the MC sample that maps the events generated with interaction type $\nu_x$ and true variables $(E_\mathrm{true},\theta_\mathrm{true})$ into the corresponding reconstructed variables $(E_\mathrm{reco}, \theta_\mathrm{reco})$ and PID class (track, intermediate, and shower). Given that there are 8 different interaction types (the 4 channels above, with neutrinos and antineutrinos counted separately) and 3 different PID classes, the global response matrix is a collection of 24 4-dimensional matrices used for the transformation $(E_\mathrm{true},\theta_\mathrm{true}) \longrightarrow (E_\mathrm{reco}, \theta_\mathrm{reco})$.
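As an illustration of how such a response matrix acts, the sketch below folds a true-variable distribution into the reconstructed plane for one interaction type and one PID class; the binning and all numerical contents are hypothetical placeholders, not the actual ORCA response.
\begin{verbatim}
import numpy as np

n_Et, n_ctt = 40, 40    # hypothetical true-variable binning
n_Er, n_ctr = 20, 20    # reco binning used in the analysis

# response[i,j,k,l]: probability for an event in true bin (i,j) to be
# selected and reconstructed in reco bin (k,l) (placeholder numbers)
response = np.random.rand(n_Et, n_ctt, n_Er, n_ctr) * 1e-2

# true-level expected rate: flux x osc. probability x cross-section
true_rate = np.random.rand(n_Et, n_ctt)

# fold the true distribution into the reconstructed (E, cos theta) plane
reco_rate = np.einsum('ij,ijkl->kl', true_rate, response)
\end{verbatim}
The full prediction for a given PID class is then the sum of such folded distributions over the 8 interaction types.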
These matrices are built using MC-generated events and the outcome of their processing through the reconstruction and classification algorithms, so that the ensemble of matrices accounts for detection and reconstruction efficiencies, misidentification probabilities, and errors on reconstructed variables (including all correlations). This approach differs from the one in Ref.~\cite{Aiello:2021jfn}, which uses parametrized response functions obtained from the MC distributions. While this full MC method ensures that all the information on the detector response, including potential correlations between parameters, is taken into account, its accuracy depends on the size of the MC sample. To account for statistical fluctuations in the MC production that could affect the response functions used to build the matrix, the Beeston-Barlow light method~\cite{Barlow:1993dm} has been adopted, as described in the next section. Fig.~\ref{evt_distribution} depicts the expected neutrino event distributions calculated with SWIM for all PID classes, binned in reconstructed variables ($E_\mathrm{reco}, \cos\theta_\mathrm{reco}$) for 6~years of data taking with ORCA, assuming normal ordering and oscillation parameters from Ref.~\cite{Esteban:2018azc}. In this study, 20 linear bins are chosen for the reconstructed cosine of the zenith angle, while the reconstructed energy is binned logarithmically with 20 bins in the range of 2--80~GeV. \begin{figure}[th] \centering \includegraphics[width=0.45\linewidth]{ORCA_Figs/Evt_Trk_v2.pdf} \includegraphics[width=0.45\linewidth]{ORCA_Figs/Evt_Shw_v2.pdf} \includegraphics[width=0.45\linewidth]{ORCA_Figs/Evt_Int_v2.pdf} \caption{Expected event distribution for ORCA in 3~PID classes for 6~years of exposure and true normal ordering assumption, with the oscillation parameter values from Ref.~\cite{Esteban:2018azc}.} \label{evt_distribution} \end{figure} \subsection{Sensitivity analysis}\label{ORCAanalysis} The ORCA analysis uses the 2D distribution of expected neutrino events in each PID class as a function of the reconstructed neutrino energy and the cosine of the zenith angle. The SWIM package performs the computation and minimization of the test statistic, chosen as the log-likelihood ratio between a hypothetical model and the data, which for this analysis is an Asimov dataset with an assumed true hypothesis. The ORCA $\chi^2$ is built as follows: \begin{equation}\label{Chi2_Poisson} \chi^2 = -2 \sum_{l} \left( d_l - \beta_l\mu_l - d_l \ln \frac{d_l}{\beta_l\mu_l} \right) + \sum _j \frac{(p_j-p^0_j)^2}{\sigma_j ^2} + \sum_k \frac{(1-\beta_k)^2}{\varrho_k^2}. \end{equation} The first sum is the Poisson likelihood of the data $d_l$ given the expectation $\mu_l$ at bin $l$, where the latter also depends on the nuisance parameters $p_j$. The $\beta$ parameters are introduced based on the ``Beeston-Barlow light method''~\cite{Barlow:1993dm} to account for fluctuations due to finite MC statistics. The second sum accounts for the Gaussian priors on the nuisance parameters $p_j$, with mean values $p_j^0$ and standard deviations $\sigma_j$. The third sum represents the Gaussian priors on the $\beta$ parameters, which are expected to be normally distributed. The fluctuations $\beta_k$ are assumed to be bin-to-bin uncorrelated and independent of the model parameters. Given this assumption, the values of $\beta_k$ are obtained analytically as the solution of $\partial \chi^2/\partial \beta_k = 0$.
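For completeness, solving $\partial \chi^2/\partial \beta_k = 0$ at fixed expectation $\mu_k$ with the conventions of Eq.~\eqref{Chi2_Poisson} leads to a quadratic equation in $\beta_k$, whose positive root is \begin{equation} \beta_k = \frac{1 - \mu_k \varrho_k^2 + \sqrt{\left(\mu_k \varrho_k^2 - 1\right)^2 + 4 d_k \varrho_k^2}}{2}, \end{equation} so that no numerical minimization over the $\beta_k$ parameters is needed.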
In addition, the variances $\varrho_k$ of $\beta_k$ can also be evaluated with a probabilistic model which describes the calculation of the response matrix as a single binomial process~\cite{Casadei:2009ic,Paterno:2004cb}. Finally, both $\beta$ and its variance can be estimated analytically and used directly in the calculation of the $\chi^2$ without any need for additional minimization. The full description of this procedure can also be found in Ref.~\cite{Bourret:2018kug}. This implementation results in a $\sim 0.2\sigma$ decrease in the sensitivity, reflecting a correction of the overestimation caused by the limited MC sample size. Two different sets of systematic uncertainties are used in this study (see~Tab.~\ref{syst}). The ``baseline'' scenario corresponds to the standard set of ORCA systematics adopted for oscillation analyses, similar to Ref.~\cite{Aiello:2021jfn}. Uncertainties related to the incident flux include the spectral index of the neutrino flux energy distribution (free, without any constraints) and the flux skew systematics. These skew parameters are introduced to describe the uncertainties in the ratio of different event types, namely $\nu_e/\bar{\nu}_e$, $\nu_\mu/\bar{\nu}_\mu$, and $(\nu_e+\bar{\nu}_e)/(\nu_\mu+\bar{\nu}_\mu)$, while preserving in each case the total associated flux \cite{Bourret:2018kug}. They are constrained with priors adapted from Ref.~\cite{Barr:2006it}. An NC normalization systematic is implemented as a scaling factor for the number of NC events. To account for detector-related uncertainties, an energy scale systematic is introduced as a global shift of the neutrino true energy in all detector response functions. This implementation captures the effect of undetected variations in the parameters affecting the amount of light recorded by the detector, such as the absorption length and the PMT efficiencies, that would not be accounted for in the reconstruction~\cite{Adrian-Martinez:2016fdl}. Finally, normalization factors for each PID class are also included to account for any possible systematic effects (in the flux, cross-section, or detector response) that would vary the total number of events in each class. \begin{table}[t!] \centering \caption{Baseline (see Ref.~\cite{Aiello:2021jfn}) and optimistic (see Ref.~\cite{Bezerra:2019dao}) scenarios for the treatment of systematics considered in the ORCA analysis. The cross ($\times$) indicates that the systematic is not included.} \begin{tabular}{c|c|c} \label{syst} Parameter & Baseline scenario & Optimistic scenario \\\hline Flux spectral index & \multicolumn{2}{c}{free}\\ Flux $\nu_e/\bar{\nu}_e$ skew & \multicolumn{2}{c}{7\% prior}\\ Flux $\nu_\mu/\bar{\nu}_\mu$ skew & \multicolumn{2}{c}{5\% prior}\\ Flux $(\nu_e+\bar{\nu}_e)/(\nu_\mu+\bar{\nu}_\mu)$ skew & \multicolumn{2}{c}{2\% prior}\\ NC normalization & \multicolumn{2}{c}{10\% prior}\\ Detector energy scale & 5\% prior & $\times$ \\ PID-class norm. factors & free & $\times$\\ Effective area scale & $\times$ &10\% prior\\ Flux energy scale & $\times$ & 10\% prior \\\hline \end{tabular} \end{table} \begin{figure}[b!] \vspace{1cm} \centering \includegraphics[scale=0.6]{./ORCA_Figs/ORCA_syst_chart.pdf} \caption{Implementation of the two different systematic approaches in the SWIM workflow used in the ORCA analysis.} \label{fig:orcasens:systchart} \end{figure} A second scenario is based on the study of Ref.~\cite{Bezerra:2019dao}, developed for the PINGU detector. That analysis does not apply normalization factors to the PID classes.
It uses an overall scaling factor which represents a universal systematic uncertainty on all effective areas (or equivalently, on the combined $\nu + \bar{\nu}$ event rate). This effective area scaling, together with an energy scale uncertainty introduced at the flux level, are the only systematics introduced to account, {\it e.g.}, for potential variations in the detection efficiency of the optical modules. Contrary to the baseline case, these systematics are not introduced at the detector response level and are therefore considered as more optimistic in the rest of our study. The difference between the two approaches is further illustrated in Fig.~\ref{fig:orcasens:systchart}, showing the implementation of each set of systematic uncertainties into the workflow of the SWIM framework. The ``baseline'' systematic set is believed to be more accurate in describing the uncertainties in the ORCA detector. It is therefore used for all presented results, except where it is explicitly stated that the ``optimistic'' systematics are used for the sake of cross-checks and comparisons. \section{Further sensitivity studies} \label{sec:sens} \subsection{Impact of energy resolution in JUNO and 10 reactor cores scenario} \label{sec:sens:juno} One of the most challenging design specifications of JUNO is the required energy resolution of the central detector. Reaching a level of about $3\%/\sqrt{E/\text{MeV}}$ is essential for JUNO to be able to reach a $3\sigma$ sensitivity to the neutrino mass ordering by itself. Indeed, if the energy resolution worsens to $3.5\%/\sqrt{E/\text{MeV}}$, the required time to reach a $3\sigma$ sensitivity would increase by more than a factor of 2~\cite{An:2015jdp}. A significant amount of effort has been made within the JUNO Collaboration to reach this goal of $3\%/\sqrt{E/\text{MeV}}$, and a description of how to get there using a data-driven approach relying on calibration data has been presented in Ref.~\cite{Abusleme:2020lur}, where an energy resolution of 3.02$\%/\sqrt{E/\text{MeV}}$ has been achieved, worsening to 3.12$\%/\sqrt{E/\text{MeV}}$ once some imperfections in the detector are taken into account. Nevertheless, it is still extremely interesting to evaluate the sensitivity of the combined NMO analysis to the energy resolution of JUNO. In the present study a $\pm 0.5\%/\sqrt{E/\text{MeV}}$ variation of the energy resolution was considered. While this accounts for a larger departure from the JUNO target energy resolution than the one described above, it was chosen to test the robustness of the combination procedure. As shown in Fig.~\ref{fig:sens:juno:energy_resolution}, the impact of this variation of the energy resolution on the combined analysis is fairly small in comparison to its impact on the JUNO-only analysis. The reason for this small impact is that the added power to discriminate the neutrino mass orderings in this combination comes mostly from the displacement between the $\Delta m^2_{31}$ best-fit values obtained by ORCA and JUNO under the wrong ordering assumption, rather than from the direct measurement of the neutrino mass ordering in JUNO, as discussed previously. In this scenario, a worsening of the energy resolution would slightly reduce the precision of JUNO in measuring $\Delta m^2_{31}$, while the best-fit value of $\Delta m^2_{31}$ for each ordering would not change significantly. Therefore, the tension between the $\Delta m^2_{31}$ best fits of JUNO and ORCA remains, which preserves the high sensitivity of the analysis.
\begin{figure}[t] \begin{center} \flushright \includegraphics[width=0.95\linewidth]{Legend_Figs/Legend_EresJUNO.png} \\ \vspace*{0.2cm} \includegraphics[width=0.49\linewidth]{JUNO_Figs/JUNO-8cores_ORCA115_NO_EnRes_JP.pdf} \includegraphics[width=0.49\linewidth]{JUNO_Figs/JUNO-8cores_ORCA115_IO_EnRes_JP.pdf} \caption{NMO sensitivity as a function of time for only JUNO (red), only ORCA (blue), and the combination of JUNO and ORCA (green), considering a better (dotted) and worse (dashed) energy resolution for JUNO than the nominal one (solid) by $\pm 0.5\%/\sqrt{E/\text{MeV}}$.} \label{fig:sens:juno:energy_resolution} \end{center} \end{figure} \begin{figure}[ht!] \vspace{1cm} \flushright \includegraphics[width=0.95\linewidth]{Legend_Figs/Legend_coresJUNO.png} \\ \vspace*{0.05cm} \begin{center} \includegraphics[width=0.49\linewidth]{JUNO_Figs/JUNO-8_10cores_ORCA115_NO_JP.pdf} \includegraphics[width=0.49\linewidth]{JUNO_Figs/JUNO-8_10cores_ORCA115_IO_JP.pdf} \caption{NMO sensitivity as a function of time for only JUNO (red), only ORCA (blue), and the combination of JUNO and ORCA (green), considering 2 (solid) or 4 (dashed) Taishan NPP reactors, corresponding respectively to 8 or 10~reactor cores at 53~km from JUNO.} \label{fig:sens:juno:10cores} \end{center} \end{figure} As discussed in Sec.~\ref{sec:juno:modeling}, there is a possibility that 2 additional reactors could be built at the Taishan NPP, as originally planned. This would double the number of neutrinos produced by that NPP. In this scenario, JUNO by itself would be able to reach $3\sigma$ about 3~years earlier, as shown in Fig.~\ref{fig:sens:juno:10cores}. In combination with ORCA, however, there is a negligible impact on the time required for the combined sensitivity to reach $5\sigma$ assuming true normal ordering, at the current best-fit value. About 9~months are gained in the inverted ordering scenario, which is still a significantly smaller impact than for standalone JUNO. This behavior is, as in the case of the JUNO energy resolution dependency, due to the fact that the boost obtained from the combination relies on the difference between the JUNO and ORCA best-fit values for $\Delta m^2_{31}$ in the wrong ordering scenario, rather than on the precision of each experiment in measuring the neutrino mass ordering separately. \subsection{Dependence on $\Delta m^{2}_{31}$ and $\theta_{23}$} \label{sec:sens:orca} This section presents the dependence of the analysis on the true value of the oscillation parameters, focusing particularly on $\theta_{23}$ and $\Delta m^2_{31}$. Those parameters are chosen because the true value of $\theta_{23}$ is known to have a strong influence on the ORCA sensitivity, and because the boost in sensitivity in the combined analysis, as discussed previously, is directly tied to the $\Delta m^2_{31}$ measurement; it is therefore essential to ensure that such a boost is valid for any true value of $\Delta m^2_{31}$. Fig.~\ref{fig:Th23} shows the dependence of the NMO sensitivity on the true value of $\theta_{23}$ for 6~years of data taking. As mentioned in Sec.~\ref{sec:ana:combination}, JUNO has no sensitivity to $\theta_{23}$. In the case of ORCA, however, the NMO sensitivity depends strongly on the true value of $\theta_{23}$, as this parameter affects the amplitude of the detected oscillation pattern.
After 6~years of data taking, ORCA has the sensitivity to reject the wrong ordering with a significance of 3--7$\sigma$, and it only reaches a $5\sigma$ sensitivity for true NO with $\theta_{23}$ in the second octant. The combination curve follows a $\theta_{23}$ dependence similar to that of the ORCA-standalone curve; however, thanks to the boost from JUNO, it is shifted to higher sensitivities, and the joint fit ensures a $5\sigma$ discovery after about 6~years regardless of the true value of $\theta_{23}$ and of the true NMO. It is worth noting here that the current global best-fit value of $\theta_{23}$ is in the upper octant, with values of about $49^\circ$ for both orderings. This value is the one used in the studies described in Secs.~\ref{sec:results} and \ref{sec:sens:juno}, which explains why in those studies the sensitivity for true NO is always much higher than $5\sigma$ after 6~years of data taking. Fig.~\ref{fig:Dm31} illustrates the dependence of the NMO sensitivity on the true value of $\Delta m_{31}^2$. Both the JUNO and ORCA standalone sensitivities exhibit a slight dependence on the true value of $\Delta m_{31}^2$, with opposite slopes for the two experiments. The combination is also quite flat with respect to the true $\Delta m_{31}^2$, reaching a significance of $8\sigma$ in the case of NO and $5\sigma$ in the case of IO. The effect of the boost described previously, which relies on the difference between the wrong-ordering measurements of $\Delta m_{31}^2$ in the two experiments, is preserved over the whole $\Delta m_{31}^2$ range. \begin{figure}[ht!] \flushright \includegraphics[width=0.95\linewidth]{Legend_Figs/Legend_simple.png} \\ \vspace*{0.2cm} \centering \includegraphics[width=0.49\linewidth]{ORCA_Figs/JUNO-8cores_ORCA115_NO_TH23_v2_JP.pdf} \includegraphics[width=0.49\linewidth]{ORCA_Figs/JUNO-8cores_ORCA115_IO_TH23_v2_JP.pdf} \caption{NMO sensitivity as a function of the true $\theta_{23}$ value for 6~years of data taking for only JUNO (red), only ORCA (blue), and the combination of JUNO and ORCA (green). The vertical lines indicate the global best-fit values used in this analysis (from Ref.~\cite{Esteban:2018azc}). }\label{fig:Th23} \end{figure} \begin{figure}[ht!] \flushright \includegraphics[width=0.95\linewidth]{Legend_Figs/Legend_simple.png} \\ \vspace*{0.2cm} \centering \includegraphics[width=0.49\linewidth]{ORCA_Figs/JUNO-8cores_ORCA115_NO_DM31_v2_JP.pdf} \includegraphics[width=0.49\linewidth]{ORCA_Figs/JUNO-8cores_ORCA115_IO_DM31_v2_JP.pdf} \caption{NMO sensitivity as a function of the true $\Delta m_{31}^2$ value for 6~years of data taking for only JUNO (red), only ORCA (blue), and the combination of JUNO and ORCA (green). The vertical lines indicate the global best-fit values used in this analysis (from Ref.~\cite{Esteban:2018azc}). }\label{fig:Dm31} \end{figure}
\section{Conclusion} \label{s:conclusion} This paper investigated the CP decomposition with a stochastic gradient descent algorithm for multi-way data analysis. This leads to a new method named Fast Parallel-CP Decomposition (FP-CPD) for tensor decomposition. The proposed method guarantees convergence for a given non-convex problem by modeling the second-order derivative of the loss function and incorporating a small amount of noise into the gradient update. Furthermore, FP-CPD employs Nesterov's method to compensate for the delays in the optimization process and accelerate the convergence rate. Based on laboratory and real datasets from the area of SHM, our FP-CPD, with a one-class SVM model for anomaly detection, achieves accurate results in damage detection, localization, and assessment in online and one-class settings. Among the key directions for future work is to further parallelize the tensor decomposition with FP-CPD. Also, it would be useful to apply FP-CPD to datasets from different domains. \section{Introduction} There has been an exponential growth of data generated by the accelerated use of modern computing paradigms. A prominent example of such paradigms is the Internet of Things (IoT), in which everything is envisioned to be connected to the Internet. One of the most promising technology transformations of IoT is the smart city. In such cities, an enormous number of connected sensors and devices continuously collects massive amounts of data about things such as city infrastructure, which is analyzed to gain insights on how to manage the city efficiently in terms of resources and services. The adoption of the smart city paradigm will result in a massive increase in data volume (data collected from a large number of sensors) as well as in the number of data features, which increases data dimensionality. To extract precise and in-depth insights from such data, advanced and efficient techniques, including multi-way data analysis, have recently been adopted by research communities. The concept of multi-way data analysis was introduced by Tucker in 1964 as an extension of standard two-way data analysis to analyze multidimensional data known as a tensor \cite{kolda2009tensor}. It is often used when traditional two-way data analysis methods such as Non-negative Matrix Factorization (NMF), Principal Component Analysis (PCA) and Singular Value Decomposition (SVD) are not capable of capturing the underlying structures inherited in multi-way data~\cite{Cichocki2015}. In the realm of multi-way data, tensor decomposition methods such as $Tucker$ and $CANDECOMP/PARAFAC$ (CP) \cite{kolda2009tensor,Rendle2009} have been extensively studied and applied in various fields including signal processing~\cite{DeLathauwer1996}, civil engineering~\cite{Khoa2017}, recommender systems \cite{Rendle2009}, and time series analysis~\cite{Cong2015}. The CP decomposition has gained much popularity for analyzing multi-way data due to its ease of interpretation. For example, given a tensor $\mathcal{X} \in \mathbb{R} ^{I_1 \times \dots \times I_N} $, the CP method decomposes $\mathcal{X}$ into $N$ loading matrices $A^{(1)}, \dots, A^{(N)}$, where $N$ is the tensor order and each matrix $A^{(i)}$ represents one mode explicitly. In the $Tucker$ method, by contrast, the modes can interact with each other, making it difficult to interpret the resultant matrices. The CP decomposition approach often uses the Alternating Least Squares (ALS) method to find the solution for a given tensor.
The ALS method follows a batch-mode training process which iteratively solves each component matrix by fixing all the other components, then repeats the procedure until it converges \cite{khoa2017smart}. However, ALS can lead to sensitive solutions \cite{elden1980perturbation,anaissi2018regularized}. Moreover, in the domain of big data and IoT, such as smart cities, the ALS method raises many challenges in dealing with data that is continuously measured at high velocity from different sources/locations and dynamically changing over time. For instance, structural health monitoring (SHM) data can be represented in a three-way form as $location \times feature \times time$, which represents a large number of vibration responses measured over time by many sensors attached to a structure at different locations. This type of data can be found in many other application domains including \cite{acar2009unsupervised,sun2008incremental,kolda2008scalable,anaissi2018tensor}. The iterative nature of the employed CP decomposition methods involves intensive computational processing in each iteration. A significant challenge arises in such algorithms (including ALS and its variations) when the input tensor is sparse and of high order $N$. As the dimensionality of the tensor increases, the calculations involved in the algorithm become computationally more expensive, and thus incremental, parallel, and distributed algorithms for CP decomposition become essential to achieving reasonable performance. This is especially the case in large applications and computing paradigms such as smart cities. The efficient processing of the CP decomposition problem has been investigated with different hardware architectures and techniques, including the MapReduce structure \cite{Kang2012} and shared and distributed memory structures~\cite{Smith2015,Kaya2015}. Such approaches present algorithms that require alternative hardware architectures to enable parallel and fast execution of CP decomposition methods. The MapReduce and distributed computing approaches could also incur additional performance overhead from network data communication and transfer. Our goal is to devise a parallel and efficient CP decomposition execution method with minimal hardware changes to the operating environment and without incurring the additional overhead of new hardware architectures. Thus, to address the aforementioned problems, we propose an efficient solver method called FP-CPD (Fast Parallel-CP Decomposition) for analyzing large-scale high-order data in parallel based on stochastic gradient descent. The scope of this paper is smart cities and, in particular, SHM of infrastructure such as bridges. The novelty of our proposed method is summarized in the following contributions: \begin{enumerate} \item \textbf{Parallel CP Decomposition.} Our FP-CPD method is capable of efficiently learning large-scale tensors in parallel and updating $\mathcal{X}^{(t+1)}$ in one step. \item \textbf{Global convergence guarantee.} We follow the perturbation approach, which adds a small amount of noise to the gradient update step to push the next update step away from a saddle point toward the correct direction. \item \textbf{Optimal convergence rate.} Our method employs Nesterov's Accelerated Gradient (NAG) method in the SGD algorithm to optimally accelerate the convergence rate \cite{sutskever2013importance}. It achieves a global convergence rate of $O(\frac{1}{T^2})$, compared to $O(\frac{1}{T})$ for traditional SGD.
\item \textbf{Empirical analysis on structural datasets.} We conduct experimental analysis using laboratory-based and real-life datasets in the field of SHM. The experimental analysis shows that our method achieves more stable and faster tensor decomposition compared to other known online and offline methods. \end{enumerate} The remainder of this paper is organized as follows. Section \ref{s:related} introduces background knowledge and reviews the related work. Section \ref{s:method} describes our novel FP-CPD algorithm for parallel CP decomposition, based on the SGD algorithm augmented with the NAG method and a perturbation approach. Section \ref{s:motiv} presents the motivation of this work. Section \ref{s:results} shows the performance of FP-CPD on structural datasets and presents our experimental results on both laboratory-based and real-life datasets. The conclusion and discussion of future research work are presented in Section \ref{s:conclusion}. \section{Fast Parallel CP Decomposition (FP-CPD)} \label{s:method} Given an $N^{th}$-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$, we solve the CP decomposition by splitting the problem into $N$ convex sub-problems, since its loss function $L$ defined in Equation \ref{eq:als} is a non-convex problem which may have many local minima. When distributing this solution, another challenge arises: the value of $w^{(t)}$ must be globally updated before computing $w^{(t+1)}$, where $w$ represents $A, B$ and $C$. However, the structure and the process of tensor decomposition allow us to overcome this challenge. For illustration purposes, we present our FP-CPD method based on three-way tensor data. The same logic naturally extends to higher-order tensors. \vspace{1em} \begin{definition} \label{def1} Two training points $x_1 = (i_1,j_1,k_1) \in \mathcal{X}$ and $x_2 = (i_2,j_2,k_2) \in \mathcal{X}$ are interchangeable with respect to the loss function $L$ defined in Equation \ref{eq:als} if they do not share any dimension, i.e., $i_1\neq i_2$, $j_1 \neq j_2$ and $k_1 \neq k_2$. \end{definition} \vspace{1em} Based on Definition \ref{def1}, we develop a new algorithm, called FP-CPD, to carry out the tensor decomposition process in parallel. The core idea of the FP-CPD algorithm is to run the CP decomposition in parallel by considering all the interchangeable training points in one single step, without affecting the final outcome of $w$. Our FP-CPD algorithm partitions the training tensor $\mathcal{X} \in \Re^{I \times J \times K} $ into a set of independent blocks $\mathcal{X}_1,\dots, \mathcal{X}_d$. Each block consists of interchangeable training points, which are identified by finding all the possible index combinations across the dimensions of the given tensor $\mathcal{X}$. To illustrate this process, we consider a three-order tensor $\mathcal{X} \in \mathbb{R}^{3 \times 3 \times 3}$ as shown in Figure \ref{tensor_para}. This tensor is partitioned into $d$ independent blocks which cover the entire training data, $\mathcal{D} = \bigcup_{b=1}^{d} \mathcal{X}_b$, with $d = \frac{I \times J \times K}{\min(I,J,K)}$. Each $\mathcal{X}_b$ is characterized by a parallelism parameter $p$, which determines the number of tasks that can run in parallel. In our three-way tensor example, $p = 3$ interchangeable training points per block.
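A minimal Python sketch of one possible such partitioning is given below (see also Figure \ref{tensor_para}); the modular-shift construction is an illustrative assumption for how blocks of mutually interchangeable points can be enumerated, not necessarily the exact enumeration used in our implementation.
\begin{verbatim}
from itertools import product

def independent_blocks(I, J, K):
    """Partition the index set of an I x J x K tensor (assuming
    I <= J and I <= K) into d = (I*J*K)/I blocks; within a block
    no two points share an index along any mode (Definition 1)."""
    blocks = []
    for s, t in product(range(J), range(K)):
        blocks.append([(i, (i + s) % J, (i + t) % K)
                       for i in range(I)])
    return blocks

# 3 x 3 x 3 example: d = 9 blocks, each with p = 3 points
blocks = independent_blocks(3, 3, 3)
\end{verbatim}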
\begin{figure*}[!t] \centering \includegraphics[scale=0.8]{tensor} \caption{ Independent blocks for $\mathcal{X} \in \Re^{3 \times 3 \times 3} $ } \label{tensor_para} \end{figure*} \subsubsection{\textbf{The FP-CPD Algorithm }} Given the set of independent blocks $\mathcal{D} = \bigcup_{b=1}^{d} \mathcal{X}_b$, we can decompose $\mathcal{X} \in \Re^{I \times J \times K} $ in parallel into three matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$, where $R$ is the number of latent factors. In this context, we reconstitute our loss function defined in Equation \ref{eq:als} as the sum of losses per block: $ L (A, B, C) = \sum_{b=1}^{d} L_b ( A, B, C) $. This new loss function provides the rationale for our parallel CP decomposition, which allows the SGD algorithm to learn all the interchangeable data points within each block in parallel. Therefore, SGD computes the partial derivative of the loss function $L_b (A, B, C) = \sum_{(i,j,k) \in \mathcal{D}_{b} } L_{i,j,k}(A, B, C)$ with respect to the three modes $A, B$ and $C$ alternately as follows: \begin{eqnarray}\label{eq:partial} \frac{\partial L_b}{\partial A }(X^{(1)}; A) = (X^{(1)} - A (C \odot B)^T) (C \odot B) \nonumber\\ \frac{\partial L_b}{\partial B }(X^{(2)}; B) = (X^{(2)} - B (C \odot A)^T) (C \odot A)\\ \frac{\partial L_b}{\partial C }(X^{(3)}; C) = (X^{(3)} - C (B \odot A)^T) (B \odot A)\nonumber \end{eqnarray} where $X^{(i)}$ is the mode-$i$ unfolding matrix of tensor $\mathcal{X}$ and $\odot$ denotes the Khatri-Rao product. The gradient update steps for $A, B$ and $C$ are as follows: \begin{eqnarray}\label{eq:update} A^{(t+1)} := A^{(t)} + \eta^{(t)} \frac{\partial L_b}{\partial A } (X^{(1, t)} ;A^{(t)} ) \nonumber\\ B^{(t+1)} := B^{(t)} + \eta^{(t)} \frac{\partial L_b}{\partial B } (X^{(2, t)} ;B^{(t)} ) \\ C^{(t+1)} := C^{(t)} + \eta^{(t)} \frac{\partial L_b}{\partial C } (X^{(3, t)} ;C^{(t)} ) \nonumber \end{eqnarray} \subsubsection{\textbf{Convergence}} Regardless of whether we apply parallel SGD or plain SGD, in a non-convex setting the partial derivative may vanish, $\frac{\partial L}{\partial w } = 0$, at points that are not global minima. Such points are known as \textit{saddle points}, which, if not escaped, may prevent the optimization process from reaching the desired local minimum \cite{ge2015escaping}. These saddle points can be identified by studying the second-order derivative (aka Hessian) $\frac{\partial^2 L}{\partial w^2}$. Theoretically, when $\frac{\partial^2 L}{\partial w^2}(x;w)\succ 0$, $x$ must be a local minimum; if $\frac{\partial^2 L}{\partial w^2}(x;w) \prec 0$, then we are at a local maximum; if $\frac{\partial^2 L}{\partial w^2}(x;w)$ has both positive and negative eigenvalues, the point is a saddle point. Second-order methods guarantee convergence, but the cost of computing the Hessian matrix $H^{(t)}$ is high, which makes them infeasible for high-dimensional data and online learning. Ge \etal \cite{ge2015escaping} show that saddle points are very unstable and can be escaped if we slightly perturb them with some noise. Based on this, we use the perturbation approach, which adds Gaussian noise to the gradient. This pushes the next update step away from the saddle point toward the correct direction. After a random perturbation, it is highly unlikely that the iterate remains near the saddle point, and hence the saddle point can be efficiently escaped.
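To make one such update concrete, the sketch below implements the mode-1 step of Equations \ref{eq:partial} and \ref{eq:update} with the Gaussian perturbation just described; the unfolding convention, helper names, and noise scale are illustrative assumptions.
\begin{verbatim}
import numpy as np

def khatri_rao(C, B):
    """Column-wise Kronecker (Khatri-Rao) product, shape (K*J, R)."""
    R = C.shape[1]
    return np.einsum('kr,jr->kjr', C, B).reshape(-1, R)

def perturbed_sgd_step_A(X1, A, B, C, eta, noise_std=1e-4):
    """One perturbed-SGD update of the mode-1 factor A; X1 is the
    mode-1 unfolding of the (block) tensor, with shape (I, J*K)."""
    KR = khatri_rao(C, B)                    # (J*K, R)
    grad = (X1 - A @ KR.T) @ KR              # Eq. (2), mode 1
    noise = noise_std * np.random.randn(*A.shape)   # escape saddles
    return A + eta * (grad + noise)          # Eq. (3)
\end{verbatim}
The updates for $B$ and $C$ are analogous, using the mode-2 and mode-3 unfoldings; since the interchangeable points of a block touch disjoint rows of $A$, $B$ and $C$, these updates can run in parallel without conflicts.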
We further incorporate Nesterov's method into the perturbed-SGD algorithm to accelerate the convergence rate. Recently, Nesterov's Accelerated Gradient (NAG) \cite{nesterov2013introductory} has received much attention for solving convex optimization problems \cite{guan2012nenmf,nitanda2014stochastic,ghadimi2016accelerated}. It introduces a smart variation of momentum that works slightly better than standard momentum. This technique modifies the traditional SGD by introducing a velocity $\nu$ and a friction coefficient $\gamma$, which controls the velocity and prevents overshooting the valley while allowing faster descent. The idea behind Nesterov's method is to calculate the gradient at the position where the momentum is about to take us, instead of at the current position. In practice, it performs a simple step of gradient descent to go from $w^{(t)} $ to $w^{(t+1)}$, and then it shifts slightly further than $w^{(t+1)}$ in the direction given by $\nu^{(t-1)}$. In this setting, we model the gradient update step with NAG as follows: \begin{eqnarray}\label{eq:nagNe} A^{(t+1)} := A^{(t)} + \eta^{(t)} \nu^{(A, t)} + \epsilon - \beta ||A||_{L_{1,b}} \end{eqnarray} where \begin{eqnarray}\label{eq:velNe} \nu^{(A, t)} := \gamma \nu^{(A, t-1)} + (1-\gamma) \frac{\partial L_b}{\partial A } (X^{(1, t)} ,A^{(t)} ) \end{eqnarray} where $\epsilon$ is Gaussian noise, $\eta^{(t)}$ is the step size, and $||A||_{L_{1,b}}$ is an $L_1$-norm regularization and penalization term that yields smooth representations of the outcome and thus bypasses perturbations around the local minimum. The updates for $(B^{(t+1)} , \nu^{(B, t)})$ and $(C^{(t+1)} ,\nu^{(C, t)} )$ are similar to the aforementioned ones. With NAG, our method achieves a global convergence rate of $O(\frac{1}{T^2})$, compared to $O(\frac{1}{T})$ for traditional gradient descent. Based on the above models, we present our FP-CPD method in Algorithm \ref{FP-CPD}. \begin{algorithm} \caption{ FP-CPD algorithm} \label{FP-CPD} \textbf{Input}: Tensor $\mathcal{X} \in \Re^{I \times J \times K} $, number of components $R$\\ \textbf{Output}: Matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$ \begin{itemize} \item Initialize $A,B,C$ \item Repeat { \setlength\itemindent{10pt}{ \item\textbf{Form} $d$ blocks $\{\mathcal{X}_1,\dots, \mathcal{X}_d\}$ \item\textbf{for} $b=1, \dots, d$ \textbf{do} \setlength\itemindent{20pt}{ \item $IP$ = Find all interchangeable data points in block \item[]$\mathcal{X}_b$ (Definition \ref{def1}) \item \textbf{for each} $p$ in $IP$ \textbf{do} {in parallel} \setlength\itemindent{30pt}{ \item Compute the partial derivative of $A, B$ and $C$ \item[]using Equation \ref{eq:partial} \item Compute $\nu$ of $A, B$ and $C$ using Equation \ref{eq:velNe} \item Update $A, B$ and $C$ using Equation \ref{eq:nagNe} } } \item \textbf{end for each} } \item \textbf{end for} } \item \textbf{until convergence} \end{itemize} \end{algorithm} \section{Motivation} \label{s:motiv} Numerous types of data are naturally structured as multi-way data. For instance, structural health monitoring (SHM) data can be represented in a three-way form as $location \times feature \times time$. Arranging and analyzing the SHM data in a multidimensional form allows us to capture correlations between sensors at different locations at the same time, which is not possible using the standard two-way matrix $time\times feature$.
Furthermore, in SHM only positive data instances, i.e., the healthy state, are available. Thus, the problem becomes an anomaly detection problem in higher-order datasets. Rytter \cite{rytter1993vibrational} affirms that damage identification also requires damage localization and severity assessment, which are considered much more complex than damage detection since they require a supervised learning approach \cite{worden2006application}. Given positive three-way SHM data $\mathcal{X} \in \mathbb{R}^{feature \times location \times time}$, FP-CPD decomposes $\mathcal{X}$ into three matrices $A, B$ and $C$. The $C$ matrix represents the temporal mode, where each row contains information about the vibration responses related to an event at time $t$. The analysis of this component matrix can help to detect damage to the monitored structure. Therefore, we use the $C$ matrix to build a one-class anomaly detection model using only the positive training events. For each new incoming $\mathcal{X}_{new}$, we update the three matrices $A, B$ and $C$ incrementally as described in Algorithm \ref{FP-CPD}. Then the constructed model estimates the agreement between the new event $C_{new}$ and the training data. For damage localization, we analyze the data in the location matrix $B$, where each row captures meaningful information for each sensor location. When the matrix $B$ is updated due to the arrival of a new event $\mathcal{X}_{new}$, we study the variation of the values in each row of matrix $B$ by computing the average distance from each row of $B$ to its $k$-nearest neighboring locations as an anomaly score for damage localization. For severity assessment in damage identification, we study the decision values returned from the one-class model. This is because a structure with more severe damage will behave much differently from a normal one. \section{Background and Related work} \label{s:related} \subsection{CP Decomposition} Given a three-way tensor $\mathcal{X} \in \Re^{I \times J \times K} $, CP decomposes $\mathcal{X}$ into three matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$, where $R$ is the number of latent factors. It can be written as follows: \begin{eqnarray}\label{eq:decomp} \mathcal{X} \approx \sum_{r=1}^{R}A_{:r} \circ B_{:r} \circ C_{:r} \end{eqnarray} where ``$\circ$'' is the vector outer product and $A_{:r}, B_{:r}$ and $C_{:r}$ are the $r$-th columns of the component matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$. The main goal of CP decomposition is to minimize the sum of squared errors between the model and the given tensor $\mathcal{X}$, as captured by the loss function $L$ in Equation \ref{eq:als}: \begin{eqnarray}\label{eq:als} L (\mathcal{X}, A, B, C) = \min_{A,B,C} \| \mathcal{X} - \sum_{r=1}^R \ A_{:r} \circ B_{:r} \circ C_{:r} \|^2_f, \end{eqnarray} where $\|\cdot\|_f$ denotes the Frobenius norm. The loss function $L$ presented in Equation \ref{eq:als} is a non-convex problem with many local minima, since it jointly optimizes three component matrices. Several algorithms have been proposed to solve the CP decomposition \cite{symeonidis2008tag,lebedev2014speeding,rendle2009learning}. Among these algorithms, ALS has been heavily employed; it repeatedly solves for each component matrix while locking all the other components, until it converges \cite{papalexakis2017tensors}.
The rationale of the least squares algorithm is to set the partial derivative of the loss function with respect to the parameter of interest to zero. Algorithm \ref{ALS} presents the detailed steps of ALS. \begin{algorithm} % \textbf{Alternating Least Squares\\} \textbf{Input}: Tensor $\mathcal{X} \in \Re^{I \times J \times K} $, number of components $R$\\ \textbf{Output}: Matrices $A \in \Re^{I \times R}$, $B \in \Re^{J \times R} $ and $ C \in \Re^{K \times R}$ \begin{enumerate} \item[1:] Initialize $A,B,C$ \item[2:] Repeat {\setlength\itemindent{6pt} \item[3:] $A = \underset{A}{\arg\min} \frac{1}{2} \| X_{(1)} - A ( C \odot B)^T\|^2 $ \item[4:] $B = \underset{B}{\arg\min} \frac{1}{2} \| X_{(2)} - B ( C \odot A)^T\|^2 $ \item[5:] $C = \underset{C}{\arg\min} \frac{1}{2} \| X_{(3)} - C ( B \odot A)^T\|^2 $ \item[]($X_{(i)} $ is the unfolded matrix of $X$ in the current mode) } \item[6:] until convergence \end{enumerate} \caption{ Alternating Least Squares for CP} \label{ALS} \end{algorithm} Zhou \etal \cite{zhou2008large} suggest that ALS can be easily parallelized for matrix factorization methods, but it is not scalable to large-scale data, especially when dealing with multi-way tensor data. Later, Zhou \etal \cite{zhou2016accelerating} proposed another method, called onlineCP, to address the problem of online CP decomposition using the ALS algorithm. The method was able to incrementally update the temporal mode in multi-way data, but it failed for non-temporal modes \cite{khoa2017smart} and was not parallelized. \subsection{Stochastic Gradient Descent} A stochastic gradient descent algorithm is a key tool for optimization problems. Here, the aim is to optimize a loss function $L(x,w)$, where $x$ is a data point drawn from a distribution $\mathcal{D}$ and $w$ is a variable. The stochastic optimization problem can be defined as follows: \begin{eqnarray}\label{eq:sgd} w = \underset{w}{argmin} \; \mathbb{E}[L(x,w)] \end{eqnarray} The stochastic gradient descent method solves the above problem, defined in Equation \ref{eq:sgd}, by repeatedly updating $w$ to minimize $L(x,w)$. It starts with some initial value of $w^{(t)}$ and then repeatedly performs the update as follows: \begin{eqnarray}\label{eq:sgdu} w^{(t+1)} := w^{(t)} + \eta \frac{\partial L}{\partial w } (x^{(t)} ,w^{(t)} ) \end{eqnarray} where $\eta$ is the learning rate and $x^{(t)}$ is a random sample drawn from the given distribution $\mathcal{D}$. This method guarantees the convergence of the loss function $L$ to the global minimum when it is convex. However, it can be susceptible to the many local minima and saddle points that arise when the loss function is non-convex; the problem then becomes NP-hard. Note that the main bottleneck here is due to the existence of many saddle points and not to the local minima \cite{ge2015escaping}. This is because the gradient algorithm depends only on gradient information, which may satisfy $\frac{\partial L}{\partial w } = 0$ even at points that are not minima. Previous studies have used SGD for parallel matrix factorization. Gemulla \cite{gemulla2011large} proposed a new parallel method for matrix factorization using SGD. The authors indicate that the method was able to efficiently handle large-scale data with fast convergence. Similarly, Chin \etal \cite{chin2015fast} proposed a fast parallel SGD method for matrix factorization in recommender systems. The method also applies SGD in shared memory systems, but with careful consideration of the load balance of threads.
Naiyang \etal \cite{guan2012nenmf} apply Nesterov's optimal gradient method to SGD for non-negative matrix factorization. This method accelerates the NMF process and reduces computational time. Similarly, Shuxin \etal \cite{zheng2017asynchronous} used an SGD algorithm for matrix factorization using Taylor expansion and Hessian information. They proposed a new asynchronous SGD algorithm to compensate for the delay resulting from the Hessian computation. Recently, SGD has attracted several researchers working on tensor decomposition. For instance, Ge \etal \cite{ge2015escaping} proposed a perturbed SGD (PSGD) algorithm for orthogonal tensor optimization. They presented several theoretical analyses that ensure convergence; however, the method is not applicable to non-orthogonal tensors. They also did not address the problem of slow convergence. Similarly, Maehara \etal \cite{maehara2016expected} propose a new algorithm for CP decomposition based on a combination of SGD and ALS methods (SALS). The authors claim the algorithm works well in terms of accuracy. Nevertheless, its theoretical properties have not been completely proven, and the saddle point problem was not addressed. Rendle and Thieme \cite{rendle2010pairwise} propose a pairwise interaction tensor factorization method based on Bayesian personalized rank. The algorithm was designed to work only on three-way tensor data. To the best of our knowledge, this is the first work that applies a parallel SGD algorithm augmented with Nesterov's optimal gradient and perturbation methods for fast parallel CP decomposition of multi-way tensor data. \section{Evaluation} \label{s:results} In this section we present the details of the experimental settings and a comparative analysis between our proposed FP-CPD algorithm and similar parallel tensor decomposition algorithms: PSGD and SALS. We first analyze the effectiveness and speed of the training process of the three algorithms based on four real-world datasets from SHM. We then evaluate the performance of our approach, along with the other baselines, on the SHM datasets in terms of damage detection, assessment, and localization. \subsection{Experiment Setup and Datasets} \label{sec:setup} We conducted all our experiments on a machine with dual Intel Xeon processors, 32~GB memory, and 12 physical cores. We used the R development environment to implement our FP-CPD algorithm as well as the PSGD and SALS algorithms, with the help of the two packages \textbf{rTensor} and \textbf{e1071} for tensor tools and the one-class model, respectively. We run our experiments on four real-world datasets, all of which inherently entail a multi-way data structure. The datasets are collected from sensors that measure the health of building, bridge, or road structures. Specifically, these datasets comprise: \begin{enumerate} \item bridge structure measurement data collected from sensors attached to a cable-stayed bridge in Western Sydney, Australia (BRIDGE) \cite{anaissi2018tensor}. \item building structure measurement data collected from sensors attached to a specimen building structure obtained from Los Alamos National Laboratory (LANL) \cite{larson1987alamos} (BUILDING). \item measurement data collected from loop detectors in Victoria, Australia (ROAD) \cite{schimbinschi2015traffic}. \item road measurements collected from sensors attached to two buses travelling through routes in the southern region of New South Wales, Australia (BUS) \cite{anaissi2019smart}.
\end{enumerate} All the datasets are stored in a three-way tensor represented by $sensor \times frequency \times time$. Further details about these datasets are summarized in Table \ref{datasets}. Using these datasets, we run a number of experiment sets to evaluate our proposed FP-CPD method, as detailed in the following sections. \begin{table} \centering \caption{Details of datasets} \label{datasets} \begin{tabular}{l l c r} \hline Datasets & Size & \begin{tabular}[x]{@{}c@{}}Slice Size\\ $S = \prod_{i=1}^{N-1} I_i$ \end{tabular} & Source \\ \hline BRIDGE & $ \mathcal{X} \in \Re^{24 \times 1200 \times 262}$ & 28,800 & \cite{anaissi2018tensor} \\ BUILDING &$ \mathcal{X} \in \Re^{24 \times 8192 \times 240}$ & 196,608 & \cite{larson1987alamos} \\ ROAD &$ \mathcal{X} \in \Re^{96 \times 4666 \times 1826}$ & 447,936 & \cite{schimbinschi2015traffic} \\ BUS &$ \mathcal{X} \in \Re^{2 \times 2000 \times 5346}$ & 4,000 & \cite{anaissi2019smart} \\ \hline \end{tabular} \end{table} \subsection{Evaluating Performance of FP-CPD } \label{sec:speed} The goal of the first experiment set is to evaluate the performance of our FP-CPD method in terms of training time and error rate. To achieve this, we compare the performance of our proposed FP-CPD against the PSGD and SALS algorithms. To make a fair and objective comparison, we implemented the three algorithms under the same experimental settings, as described in Section~\ref{sec:setup}. We evaluated the performance of each method by plotting the time needed to complete the training process versus the root mean square error (RMSE). We ran the same experiment on the four datasets (BRIDGE, BUILDING, ROAD and BUS). Figure \ref{comp} shows the RMSE and the training time of the three algorithms resulting from our experiments. As illustrated in the figure, our FP-CPD algorithm significantly outperformed the PSGD and SALS algorithms in terms of convergence and training speed. The SALS algorithm was the slowest among the three algorithms, due to the fact that CP decomposition is a non-convex problem which can be better handled using stochastic methods. Furthermore, another important factor that contributed to the significant performance improvements of our FP-CPD method is the utilization of the Nesterov method along with the perturbation approach. From the first experiment set, it can be concluded that our FP-CPD method is more effective in terms of RMSE and trains faster than similar parallel tensor decomposition methods. \begin{figure*}[!t] \centering \captionsetup[subfloat]{justification=centering} \subfloat[BUILDING] {{\includegraphics[height=2.5in,width=3in]{Plot-Data1} }} \subfloat[BUS] {{\includegraphics[height=2.5in,width=3in]{Plot-Data2} }} \\ \subfloat[ROAD] {{\includegraphics[height=2.5in,width=3in]{Plot-Data3} }}% \subfloat[ BRIDGE] {{\includegraphics[height=2.5in,width=3in]{Plot-Data4} }}% \caption{Comparison of training time and RMSE of FP-CPD, SALS and PSGD on the four datasets.}% \label{comp} \end{figure*} \subsection{Evaluating Effectiveness of FP-CPD} \begin{figure*}[!t] \centering \captionsetup[subfloat]{justification=centering} \subfloat[FP-CPD.] {{\includegraphics[height=2.5in,width=3in]{ocsvm_wsu_necpd} }} \subfloat[ SALS.] {{\includegraphics[height=2.5in,width=3in]{ocsvm_wsu_als} }}% \caption{Damage estimation applied on Bridge data using decision values obtained by one-class SVM.}% \label{dv_est} \end{figure*} \begin{figure*}[!t] \captionsetup[subfloat]{justification=centering} \subfloat[FP-CPD.]
{{\includegraphics[trim=0cm 1cm 3cm 0cm,clip=true,height=2.5in,width=3.5in]{bridge_necpd_loc} }} \subfloat[ SALS.] {{\includegraphics[trim=0cm 1cm 3cm 0cm,clip=true,height=2.5in,width=3.5in]{bridge_als_loc} }}% \caption{Damage localization for the Bridge data: FP-CPD successfully localized the damage locations.} \label{fig:wsu_necpd_loc} \end{figure*} \begin{figure*}[!t] \centering \captionsetup[subfloat]{justification=centering} \subfloat[FP-CPD.] {{\includegraphics[height=2.4in,width=2.8in]{ocsvm_build_necpd} }} \subfloat[ SALS.] {{\includegraphics[height=2.4in,width=2.8in]{ocsvm_build_als} }}% \caption{Damage estimation applied on the Building data using decision values obtained by one-class SVM.}% \label{fig:dv_build} \end{figure*} \begin{figure*}[!t] \captionsetup[subfloat]{justification=centering} \subfloat[FP-CPD.] {{\includegraphics[height=2.5in,width=3.5in]{build_necpd_loc} }} \subfloat[ SALS.] {{\includegraphics[height=2.5in,width=3.5in]{build_als_loc} }}% \caption{Damage localization for the Building data: FP-CPD successfully localized the damage locations.} \label{fig:build_necpd_loc} \end{figure*} Our FP-CPD method demonstrated better speed and RMSE in comparison to the PSGD and SALS methods. However, it is still crucial to ensure that the proposed method is also capable of achieving accurate results in practical tensor decomposition problems. Therefore, the second experiment set aims to demonstrate the accuracy of our model in practice, specifically on building and bridge structures in smart cities. To achieve this, we evaluate the performance of our FP-CPD in terms of its accuracy in detecting damage in building and bridge structures, assessing the severity of the detected damage and localizing it. We carry out the evaluation on the BRIDGE and BUILDING datasets, which are explained in the following sections. For comparative analysis, we choose the SALS method as a baseline competitor to our FP-CPD. This is because PSGD has convergence behavior similar to FP-CPD's, but the latter takes less time to train, as illustrated in Section~\ref{sec:speed}. \subsubsection{The Cable-Stayed Bridge Dataset} \label{s:data_wsu} In this dataset, 24 uni-axial accelerometers and 28 strain gauges were attached at different locations of the cable-stayed bridge to measure its vibration and strain responses. Figure~\ref{fig:wsuloc} illustrates the positioning of the 24 sensors on the bridge deck. The data of interest in our study is the acceleration data, which was collected from sensors $Ai$ with $i\in [1;24]$. The bridge is in healthy condition. In order to evaluate the performance of damage detection methods, two different stationary vehicles (a car and a bus) with different masses were placed on the bridge to emulate two different levels of damage severity \cite{kody2013identification,cerda2012indirect}. Three different categories of data were collected in that study: "\textit{Healthy-Data}" when the bridge is free of vehicles; "\textit{Car-Damage}" when a light car vehicle is placed on the bridge close to location $A10$; and "\textit{Bus-Damage}" when a heavy bus vehicle is located on the bridge at location $A14$. This experiment generated 262 samples (i.e., events) separated into three categories: "\textit{Healthy-Data}" (125 samples), "\textit{Car-Damage}" data (107 samples) and "\textit{Bus-Damage}" data (30 samples). Each event consists of acceleration data for a period of 2 seconds sampled at a rate of 600 Hz, so that each event's feature vector is composed of 1200 values.
\begin{figure*} \centering \includegraphics[scale=0.75]{wsulocation.png} \caption{The locations on the bridge's deck of the 24 $Ai$ accelerometers used in the BRIDGE dataset. The cross girder $j$ of the bridge is displayed as $CGj$ \cite{anaissi2018tensor}.} \label{fig:wsuloc} \end{figure*} \subsubsection{The LANL Building Dataset} \label{s:data_b} This data is based on experiments conducted by LANL \cite{larson1987alamos} using a specimen of a three-story building structure, as shown in Figure \ref{fig:alamos}. Each joint in the building was instrumented with two accelerometers. The excitation data was generated using a shaker placed at corner $D$. Similarly, for the sake of damage detection evaluation, damage was simulated by detaching or loosening the bolts at the joints so that the aluminum floor plate could move freely relative to the Unistrut column. Three different categories of data were collected in this experiment: "\textit{Healthy-Data}" when all the bolts were firmly tightened; "\textit{Damage-3C}" data when the bolt at location 3C was loosened; and "\textit{Damage-1A3C}" data when the bolts at locations 1A and 3C were loosened simultaneously. This experiment generated 240 samples (i.e., events), which were also separated into three categories: "\textit{Healthy-Data}" (150 samples), "\textit{Damage-3C}" data (60 samples) and "\textit{Damage-1A3C}" data (30 samples). The acceleration data was sampled at 1600 Hz. Each event was measured for a period of 5.12 seconds, resulting in a vector of 8192 frequency values. \begin{figure*} \centering \includegraphics[scale=0.7]{alamos.jpg} \caption{Three-story building and floor layout \cite{larson1987alamos}.} \label{fig:alamos} \end{figure*} \subsubsection{Feature Extraction} \label{section:fe} The raw signals of the sensing data collected in the aforementioned experiments exist in the time domain. In practice, time-domain features may not capture the physical characteristics of the structure. Thus, it is important to convert the generated data to the frequency domain. For all the datasets, we initially normalized the time-domain features to have zero mean and unit standard deviation. Then we used the fast Fourier transform (FFT) method to convert them into the frequency domain. The resulting three-way data collected from the cable-stayed bridge has a structure of 600 features $\times$ 24 sensors $\times$ 262 events. For the LANL BUILDING dataset, we computed the difference between the signals of two adjacent sensors, which resulted in 12 different joints in the three stories as in \cite{larson1987alamos}. Then we selected the first 150 frequencies as a feature vector, which resulted in three-way data with a structure of 768 features $\times$ 12 locations $\times$ 240 events. A minimal sketch of this preprocessing step is given below.
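For concreteness, the following minimal sketch illustrates the preprocessing described above. It is written in Python/NumPy purely for illustration (our actual implementation is in R), and the function name and the placeholder signal are hypothetical:
\begin{verbatim}
import numpy as np

def frequency_features(signal, n_keep):
    # normalize the time-domain signal to zero mean and unit
    # standard deviation, then keep the magnitudes of the first
    # n_keep FFT bins as the frequency-domain feature vector
    x = (signal - signal.mean()) / signal.std()
    spectrum = np.abs(np.fft.rfft(x))
    return spectrum[:n_keep]

# hypothetical BRIDGE event: 2 s sampled at 600 Hz -> 1200 samples,
# reduced to 600 frequency features per sensor
event = np.random.randn(1200)   # placeholder for a recorded signal
features = frequency_features(event, n_keep=600)
\end{verbatim}
Stacking such feature vectors over sensors and events yields the three-way tensors described above.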
\subsubsection{Experiments} \label{section:setup} For both the BUILDING and BRIDGE datasets, we applied the following procedures: \begin{itemize} \item Using the bootstrap technique, we randomly selected 80\% of the healthy samples for training and used the remaining 20\%, in addition to the damage samples, for testing. We computed the accuracy of our FP-CPD model based on the average results over ten trials of the bootstrap experiment. \item We used the core consistency diagnostic (CORCONDIA) technique described in \cite{bro2003new} to determine the number of rank-one tensors in the FP-CPD of $\mathcal{X}$. \item We used the one-class support vector machine (OCSVM) \cite{scholkopf2000support} as a model for anomaly detection. The Gaussian kernel parameter $\sigma$ in OCSVM was tuned using the Edged Support Vector (ESV) algorithm \cite{anaissi2018gaussian}, and the rate of anomalies $\nu$ was set to 0.05. \item We used the $\Fscore$ measure to compute the damage detection accuracy of our model. It is defined as $\textrm{\Fscore} = 2 \cdot \dfrac{\textrm{Precision} \times \textrm{Recall} }{\textrm{Precision} + \textrm{Recall}}$ where $\textrm{Precision} = \dfrac{\textrm{TP} }{\textrm{TP} + \textrm{FP}}$ and $\textrm{Recall} = \dfrac{\textrm{TP} }{\textrm{TP} + \textrm{FN}}$ (the numbers of true positives, false positives and false negatives are abbreviated by TP, FP and FN, respectively). \item We compared the results of the competing method SALS proposed in \cite{maehara2016expected} against those resulting from our FP-CPD method. \end{itemize} \subsubsection{Results and Discussion} \label{sec:results} \subsubsection{The Cable-Stayed Bridge Dataset:} \label{s:wsueval} Our FP-CPD method with one-class SVM was initially validated using the vibration data collected from the cable-stayed bridge (described in Section \ref{s:data_wsu}). The healthy training three-way tensor data (i.e., \textbf{training} set) was in the form of $ \mathcal{X} \in \Re^{24 \times 600 \times 100}$. The 137 examples related to the two damage cases were added to the remaining 20\% of the healthy data to form a \textbf{testing} set, which was later used for model evaluation. We conducted the experiments following the steps described in Section~\ref{section:setup}. As a result, this experiment yields a damage detection accuracy $\Fscore$ of $1 \pm 0.00$ on the \textbf{testing} data. On the other hand, the $\Fscore$ accuracy of one-class SVM using SALS is $0.98 \pm 0.02$. As demonstrated by the results of this experiment, tensor analysis with our proposed FP-CPD is capable of capturing the underlying structure in multi-way data with better convergence. This is further illustrated by plotting the decision values returned by the one-class SVM based on FP-CPD (as shown in Figure \ref{dv_est}). We can clearly separate the two damage cases ("Car-Damage" and "Bus-Damage") in this dataset, where the decision values decrease further for the samples related to the more severe damage case (i.e., "Bus-Damage"). These results suggest using the decision values obtained by our FP-CPD and one-class SVM as structural health scores to identify the damage severity in a one-class setting. The decision values of the one-class SVM based on SALS are also able to track the progression of damage severity in the structure, but with only a slightly decreasing trend in the decision values for "Bus-Damage", as shown in Figure \ref{dv_est}. The last step in this experiment is to analyze the location matrix $B$ obtained from FP-CPD to locate the detected damage. Each row in this matrix captures meaningful information for each sensor location. Therefore, we calculate the average distance from each row in the matrix $B_{new}$ to its $k$-nearest neighboring rows; a minimal sketch of this scoring procedure is given below. Figure \ref{fig:wsu_necpd_loc} shows the obtained $k$-nn score for each sensor.
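The following sketch illustrates how such a $k$-nn score can be computed. As before, it is a Python/NumPy illustration rather than our R implementation, and the variable names and placeholder matrix are hypothetical:
\begin{verbatim}
import numpy as np

def knn_location_scores(B, k=3):
    # score each sensor (row of the CP location factor matrix B) by
    # its average Euclidean distance to its k nearest rows; rows with
    # large scores deviate from the majority and flag damage locations
    d = np.linalg.norm(B[:, None, :] - B[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distances
    nearest = np.sort(d, axis=1)[:, :k]  # k smallest distances per row
    return nearest.mean(axis=1)

B_new = np.random.randn(24, 5)           # placeholder factor matrix
scores = knn_location_scores(B_new, k=3)
suspects = np.argsort(scores)[::-1]      # sensors ranked by deviation
\end{verbatim}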
The first 25 events (depicted on the x-axis) represent healthy data, followed by 107 events related to "Car-Damage" and 30 events related to "Bus-Damage". It can be clearly observed that the FP-CPD method can accurately localize the damage in the structure: sensors $A10$ and $A14$, related to "Car-Damage" and "Bus-Damage" respectively, behave significantly differently from all the other sensors, revealing the positions of the introduced damage. In addition, we observed that the sensors adjacent to the damage locations (e.g., $A9$, $A11$, $A13$ and $A15$) also react differently, due to the arrival pattern of the damage events. The SALS method, however, is not able to accurately locate the damage since it fails to update the location matrix $B$ incrementally. \subsubsection{The Building Dataset:} Following the experimental procedure described in Section~\ref{section:setup}, our second experiment was conducted using the acceleration data acquired from the 24 sensors instrumented on the three-story building described in Section \ref{s:data_b}. The healthy three-way data (i.e., \textbf{training} set) is in the form of $ \mathcal{X} \in \Re^{12 \times 768 \times 120}$. The remaining 20\% of the healthy data and the data obtained from the two damage cases were used for testing (i.e., \textbf{testing} set). The experiments we conducted using FP-CPD with one-class SVM achieved an $\Fscore$ of $0.95 \pm 0.01$ on the \textbf{testing} data, compared to $0.91 \pm 0.00$ obtained from the one-class SVM and SALS experiments. Similarly to the BRIDGE dataset, we further analyzed the resulting decision values, which were also able to characterize damage severity. Figure \ref{fig:dv_build} demonstrates that the more severe the damage (the test data related to locations $1A$ and $3C$), the larger the deviation from the training data and the lower the decision values. As for the BRIDGE dataset, the last experiment is to compute the $k$-nn score for each sensor based on the average distance between each row of the matrix $B_{new}$ and its $k$-nearest neighboring rows. Figure \ref{fig:build_necpd_loc} shows the resulting $k$-nn score for each sensor. The first 30 events (depicted on the x-axis) represent the healthy data, followed by 60 events recorded when the damage was introduced at location $3C$. The last 30 events represent the damage introduced at both locations $1A$ and $3C$. It can be clearly observed that the FP-CPD method is capable of accurately localizing the structural damage: sensors $1A$ and $3C$ behave significantly differently from all the other sensors, revealing the positions of the introduced damage. However, the SALS method is not able to locate that damage since it fails to update the location matrix $B$ incrementally. In summary, the above experiments on the four real datasets demonstrate the effectiveness of our proposed FP-CPD method in terms of the time needed to carry out training during tensor decomposition. Specifically, our FP-CPD significantly improves model training speed and error rate compared to similar parallel tensor decomposition methods, PSGD and SALS. Furthermore, the experiment sets on the BRIDGE and BUILDING datasets provide empirical evidence of the ability of our model to accurately carry out tensor decomposition in practical case studies. In particular, the experimental results demonstrated that our FP-CPD is able to detect damage in building and bridge structures, assess the severity of the detected damage and localize it more accurately than the SALS method. Therefore, it can be concluded that our FP-CPD tensor decomposition method achieves faster model training with a minimal error rate while delivering accurate tensor decomposition in practical cases.
Such performance and accuracy gains can be beneficial for many practical parallel tensor decomposition applications, especially real-time detection and identification problems. We demonstrated these benefits with real use cases in structural health monitoring, namely building and bridge structures.
\section{Introduction} Quantum information science is a novel discipline which addresses how quantum systems may be exploited to improve the processing, transmission, and storage of information. This field has fostered new experiments and novel views on the conceptual foundations of quantum mechanics, and has also inspired much current research on coherent quantum phenomena, with quantum optical systems playing a prominent role. Yet, the development of quantum information has so far had little impact on the way that quantum mechanics is taught, both at graduate and undergraduate levels. This tutorial is devoted to reviewing the mathematical tools of quantum mechanics and to presenting a modern reformulation of the basic postulates which is suitable to describe quantum systems in interaction with their environment, and with any kind of measuring and processing devices. \par We use Dirac braket notation throughout the tutorial, and by {\em system} we refer to a single given degree of freedom (spin, position, angular momentum,...) of a physical entity. Strictly speaking we are going to deal with systems described by finite-dimensional Hilbert spaces and with observable quantities having a discrete spectrum. Some of the results may be generalized to the infinite-dimensional case and to the continuous spectrum. \par The postulates of quantum mechanics are a list of prescriptions summarizing \begin{itemize} \item[1.] how we describe the states of a physical system; \item[2.] how we describe the measurements performed on a physical system; \item[3.] how we describe the evolution of a physical system, either because of the dynamics or due to a measurement. \end{itemize} In this section we present a pico\-review of the ba\-sic po\-stu\-lates of quantum mechanics in or\-der to intro\-duce notation and point out both i) the implicit assumptions contained in the standard formulation, and ii) the need of a reformulation in terms of more general mathematical objects. For our purposes the postulates of quantum mechanics may be grouped and summarized as follows \begin{postulate}[States of a quantum system]\label{pqs} The possible states of a physical system correspond to normalized vectors $|\psi\rangle$, $\langle \psi|\psi\rangle=1$, of a Hilbert space $H$. Composite systems, either made of more than one physical object or of the different degrees of freedom of the same entity, are described by the tensor product $H_1 \otimes H_2 \otimes ...$ of the corresponding Hilbert spaces, and the overall state of the system is a vector in the global space. As far as the Hilbert space description of physical systems is adopted, we have the {\em superposition principle}, which says that if $|\psi_1\rangle$ and $|\psi_2\rangle$ are possible states of a system, then also any (normalized) linear combination $\alpha|\psi_1\rangle+\beta|\psi_2\rangle$, $\alpha,\beta\in {\mathbbm C}$, $|\alpha|^2+|\beta|^2=1$, of the two states is an admissible state of the system. \end{postulate} \begin{postulate}[Quantum measurements]\label{pqm} Observable quantities are described by Hermitian operators $X$. Any Hermitian operator $X=X^\dag$ admits a spectral decomposition $X=\sum_x x P_x$, in terms of its real eigenvalues $x$, which are the possible values of the observable, and of the projectors $P_x=|x\rangle\langle x|$, $P_xP_{x'}=\delta_{xx'}P_x$, on its eigenvectors $X|x\rangle=x|x\rangle$, which form a basis for the Hilbert space, i.e.
a complete set of orthonormal states with the properties $\langle x|x'\rangle = \delta_{xx'}$ (orthonormality) and $\sum_x |x\rangle\langle x| =\mathbbm I$ (completeness; we omit indicating the dimension of the Hilbert space). The probability of obtaining the outcome $x$ from the measurement of the observable $X$ is given by $p_x =\left|\langle\psi| x\rangle\right|^2$, i.e. \begin{align} p_x = \langle\psi| P_x| \psi \rangle =\sum_n \langle \psi| \varphi_n\rangle\langle \varphi_n |P_x|\psi\rangle =\sum_n \langle \varphi_n |P_x|\psi\rangle \langle \psi| \varphi_n\rangle = \hbox{Tr}\left[|\psi\rangle\langle\psi|\, P_x\right]\,, \end{align} and the overall expectation value by $$\langle X\rangle = \langle\psi| X|\psi\rangle = \hbox{Tr}\left[|\psi\rangle\langle\psi|\, X \right]\,.$$ This is the {\em Born rule}, which represents the fundamental recipe to connect the mathematical description of a quantum state to the prediction of quantum theory about the results of an actual experiment. The state of the system {\em after} the measurement is the (normalized) projection of the state {\em before} the measurement onto the eigenspace of the observed eigenvalue, i.e. $$ |\psi_x\rangle = \frac{1}{\sqrt{p_x}}\, P_x |\psi\rangle\,. $$ \end{postulate} \begin{postulate}[Dynamics of a quantum system]\label{pqd} The dynamical evolution of a physical system is described by unitary operators: if $|\psi_0\rangle$ is the state of the system at time $t_0$ then the state of the system at time $t$ is given by $|\psi_t\rangle = U(t,t_0) |\psi_0\rangle$, with $U(t,t_0)U^\dag(t,t_0)=U^\dag(t,t_0) U(t,t_0)=\mathbbm I$. \end{postulate} We will denote by $L(H)$ the linear space of (linear) operators from $H$ to $H$, which is itself a Hilbert space with scalar product provided by the trace operation, i.e. upon denoting by $|A\rangle\rangle$ operators seen as elements of $L(H)$, we have $\langle\langle A| B\rangle\rangle = \hbox{Tr}[A^\dag B]$ (see Appendix \ref{apTR} for details on the trace operation). \par As is apparent from their formulation, the postulates of quantum mechanics, as reported above, are about a closed isolated system. On the other hand, we mostly deal with systems that interact or have interacted with the rest of the universe, either during their dynamical evolution, or when subjected to a measurement. As a consequence, one may wonder whether a suitable modification of the postulates is in order. This is indeed the case, and the rest of this tutorial is devoted to reviewing the tools of quantum mechanics and to presenting a modern reformulation of the basic postulates which is suitable to describe, design and control quantum systems in interaction with their environment, and with any kind of measuring and processing devices. \section{Quantum states} \subsection{Density operator and partial trace} Suppose we have a quantum system whose preparation is not completely under control. What we know is that the system is prepared in the state $|\psi_k\rangle$ with probability $p_k$, i.e. that the system is described by the statistical ensemble $\{p_k, |\psi_k\rangle\}$, $\sum_k p_k=1$, where the states $\{|\psi_k\rangle\}$ are not, in general, orthogonal.
The expected value of an observable $X$ may be evaluated as follows \begin{align}\notag \langle X\rangle & = \sum_k p_k \langle X \rangle_k = \sum_k p_k \langle\psi_k | X| \psi_k \rangle = \sum_{n\,p\,k} p_k \langle\psi_k|\varphi_n\rangle\langle\varphi_n | X|\varphi_p\rangle\langle\varphi_p| \psi_k \rangle \\ \notag &= \sum_{n\,p\,k} p_k \langle\varphi_p| \psi_k \rangle\langle\psi_k|\varphi_n\rangle \langle\varphi_n |X|\varphi_p\rangle = \sum_{n\,p} \langle\varphi_p| \varrho| \varphi_n\rangle \langle\varphi_n |X|\varphi_p\rangle \\ \notag &=\sum_{p} \langle\varphi_p| \varrho\,X|\varphi_p\rangle = \hbox{Tr}\left[\varrho\,X\right]\, \end{align} where $$\varrho = \sum_k p_k\,|\psi_k\rangle\langle\psi_k |$$ is the {\em statistical (density) operator} describing the system under investigation. The $|\varphi_n\rangle$'s in the above formula are a basis for the Hilbert space, and we used the trick of suitably inserting two resolutions of the identity $\mathbbm I = \sum_n |\varphi_n\rangle\langle\varphi_n|$. The formula is of course trivial if the $|\psi_k\rangle$'s are themselves a basis or a subset of a basis. \begin{theorem}[Density operator] An operator $\varrho$ is the density operator associated to an ensemble $\{p_k, |\psi_k\rangle\}$ if and only if it is a positive operator, $\varrho\geq 0$ (hence selfadjoint), with unit trace $\hbox{\rm Tr}\left[\varrho\right]=1$. \end{theorem} \begin{proof} If $\varrho=\sum_k p_k |\psi_k\rangle\langle\psi_k |$ is a density operator then $\hbox{Tr}[\varrho]=\sum_k p_k=1$ and, for any vector $|\varphi\rangle \in H$, $\langle\varphi|\varrho|\varphi\rangle = \sum_k p_k |\langle\varphi|\psi_k\rangle|^2 \geq 0$. Vice versa, if $\varrho$ is a positive operator with unit trace then it can be diagonalized and the sum of its eigenvalues is equal to one. Thus it can be naturally associated to an ensemble. \qed \end{proof} As is true for any operator, the density operator may be expressed in terms of its matrix elements in a given basis, i.e. $\varrho=\sum_{np} \varrho_{np} |\varphi_n\rangle\langle\varphi_p|$ where $\varrho_{np}=\langle\varphi_n|\varrho|\varphi_p\rangle$ is usually referred to as the {\em density matrix} of the system. Of course, the density matrix of a state is diagonal if we use a basis which coincides with or includes the set of eigenvectors of the density operator, otherwise it contains off-diagonal elements. \par Different ensembles may lead to the same density operator. In this case they have the same expectation values for any operator and thus are physically indistinguishable. In other words, different ensembles leading to the same density operator are actually the same state, i.e. the density operator provides the natural and most fundamental quantum description of physical systems.
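As a simple illustration of this ensemble ambiguity, consider a qubit prepared either as a uniform mixture of the basis states $|0\rangle,|1\rangle$ or as a uniform mixture of the superpositions $|\pm\rangle=(|0\rangle\pm|1\rangle)/\sqrt{2}$. The two preparations are described by the same density operator,
\begin{align}
\frac12\, |0\rangle\langle 0| + \frac12\, |1\rangle\langle 1| = \frac{\mathbbm I}{2} = \frac12\, |+\rangle\langle +| + \frac12\, |-\rangle\langle -|\,, \notag
\end{align}
and are therefore physically indistinguishable.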
How does this reconcile with Postulate \ref{pqs}, dictating that {\em physical systems are described by vectors in a Hilbert space}? \par In order to see how it works, let us first notice that, according to the postulates reported above, the action of "measuring nothing" should be described by the identity operator $\mathbbm I$. Indeed the identity is Hermitian and has the single eigenvalue $1$, corresponding to the persistent result of measuring nothing. Besides, the eigenprojector corresponding to the eigenvalue $1$ is the projector over the whole Hilbert space, and thus we have the consistent prediction that the state after the "measurement" is left unchanged. Let us now consider a situation in which a bipartite system prepared in the state $|\psi_{\scriptscriptstyle \!AB}\rangle\rangle \in H_{{\scriptscriptstyle A}} \otimes H_{{\scriptscriptstyle B}}$ is subjected to the measurement of an observable $X=\sum_x x\, P_x \in L(H_{\scriptscriptstyle A})$, $P_x=|x\rangle\langle x|$, i.e. a measurement involving only the degree of freedom described by the Hilbert space $H_{\scriptscriptstyle A}$. The overall observable measured on the global system is thus $\boldsymbol{X}=X\otimes \mathbbm I_{\scriptscriptstyle B}$, with spectral decomposition $\boldsymbol{X}= \sum_x x\, \boldsymbol{Q}_x$, $\boldsymbol{Q}_x=P_x\otimes \mathbbm I_{\scriptscriptstyle B}$. The probability distribution of the outcomes is then obtained using the Born rule, i.e. \begin{align} p_x = \hbox{Tr}_{\scriptscriptstyle \!AB} \Big[|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} |\, P_x \otimes \mathbbm I_{\scriptscriptstyle B}\Big]\,. \label{b1} \end{align} On the other hand, since the measurement has been performed on the sole system $A$, one expects the Born rule to be valid also at the level of the single system $A$, and a question arises on the form of the object $\varrho_{\scriptscriptstyle A}$ which allows one to write $p_x = \hbox{Tr}_{\scriptscriptstyle A} \left[\varrho_{\scriptscriptstyle A}\, P_x\right]$, i.e. the Born rule as a trace only over the Hilbert space $H_{\scriptscriptstyle A}$. Upon inspecting Eq. (\ref{b1}) one sees that a suitable mapping $|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} | \rightarrow \varrho_{\scriptscriptstyle A}$ is provided by the partial trace $\varrho_{\scriptscriptstyle A}=\hbox{Tr}_{\scriptscriptstyle B}\big[|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle \langle\psi_{\scriptscriptstyle \!AB} |\big]$. Indeed, for the operator $\varrho_{\scriptscriptstyle A}$ defined as the partial trace, we have $\hbox{Tr}_{\scriptscriptstyle A}[\varrho_{\scriptscriptstyle A}]= \hbox{Tr}_{\scriptscriptstyle \!AB}\left[|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle \langle\psi_{\scriptscriptstyle \!AB} |\right]=1$ and, for any vector $|\varphi_{\scriptscriptstyle A}\rangle\in H_{\scriptscriptstyle A}$, $\langle\varphi_{\scriptscriptstyle A}|\varrho_{\scriptscriptstyle A}|\varphi_{\scriptscriptstyle A}\rangle = \hbox{Tr}_{\scriptscriptstyle \!AB} \left[|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle \langle\psi_{\scriptscriptstyle \!AB} |\, |\varphi_{\scriptscriptstyle A}\rangle\langle\varphi_{\scriptscriptstyle A}|\otimes \mathbbm I_{\scriptscriptstyle B}\right] \geq 0$. Being a positive, unit-trace operator, $\varrho_{\scriptscriptstyle A}$ is itself a density operator according to Theorem 1. As a matter of fact, the partial trace is the unique operation which allows one to maintain the Born rule at both levels, i.e. the unique operation leading to the correct description of observable quantities for subsystems of a composite system.
Let us state this as a little theorem \cite{nie00} \begin{theorem}[Partial trace] The unique mapping $|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} | \rightarrow \varrho_{\scriptscriptstyle A} = f(\psi_{\scriptscriptstyle \!AB})$ from $H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle B}$ to $H_{\scriptscriptstyle A}$ for which $\hbox{\rm Tr}_{\scriptscriptstyle \!AB} \left[|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} |\, P_x \otimes \mathbbm I_{\scriptscriptstyle B}\right] = \hbox{\rm Tr}_{\scriptscriptstyle A} \left[f(\psi_{\scriptscriptstyle \!AB})\, P_x\right]$ is the partial trace $f(\psi_{\scriptscriptstyle \!AB})\equiv \varrho_{\scriptscriptstyle A} = \hbox{\rm Tr}_{\scriptscriptstyle B}\left[|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle \langle\psi_{\scriptscriptstyle \!AB} |\right]$. \end{theorem} \begin{proof} Basically the proof reduces to the fact that the set of operators on $H_{\scriptscriptstyle A}$ is itself a Hilbert space $L(H_{\scriptscriptstyle A})$ with scalar product given by $\langle\langle A| B\rangle\rangle = \hbox{Tr}[A^\dag B]$. If we consider a basis of operators $\{M_k\}$ for $L(H_{\scriptscriptstyle A})$ and expand $f(\psi_{\scriptscriptstyle \!AB}) =\sum_k M_k \hbox{Tr}_{\scriptscriptstyle A}[M_k^\dag f(\psi_{\scriptscriptstyle \!AB})]$, then, since the map $f$ has to preserve the Born rule, we have $$ f(\psi_{\scriptscriptstyle \!AB}) =\sum_k M_k \hbox{Tr}_{\scriptscriptstyle A}[M_k^\dag\, f(\psi_{\scriptscriptstyle \!AB})] = \sum_k M_k \hbox{Tr}_{\scriptscriptstyle \!AB}\left[M_k^\dag\otimes\mathbbm I_{\scriptscriptstyle B}\, |\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} |\right]\, $$ and the thesis follows from the fact that in a Hilbert space the decomposition on a basis is unique. \qed \end{proof} The above result can be easily generalized to the case of a system which is initially described by a density operator $\varrho_{\scriptscriptstyle \!AB}$, and thus we conclude that when we focus attention on a subsystem of a larger composite system the unique mathematical description of the act of ignoring part of the degrees of freedom is provided by the partial trace. It remains to be proved that the partial trace of a density operator is a density operator too. This is a direct consequence of the definition, which we put in the form of another little theorem. \begin{theorem} The partial traces $\varrho_{\scriptscriptstyle A} = \hbox{\rm Tr}_{\scriptscriptstyle B}[\varrho_{\scriptscriptstyle \!AB}]$, $\varrho_{\scriptscriptstyle B} = \hbox{\rm Tr}_{\scriptscriptstyle A}[\varrho_{\scriptscriptstyle \!AB}]$ of a density operator $\varrho_{\scriptscriptstyle \!AB}$ of a bipartite system are themselves density operators for the reduced systems.
\end{theorem} \begin{proof} We have $\hbox{Tr}_{\scriptscriptstyle A} [\varrho_{\scriptscriptstyle A}] = \hbox{Tr}_{\scriptscriptstyle B} [\varrho_{\scriptscriptstyle B}] = \hbox{Tr}_{\scriptscriptstyle \!AB}[\varrho_{\scriptscriptstyle \!AB}]=1$ and, for any state $|\varphi_{\scriptscriptstyle A}\rangle\in H_{\scriptscriptstyle A}$, $|\varphi_{\scriptscriptstyle B}\rangle\in H_{\scriptscriptstyle B}$, \begin{align} \langle\varphi_{\scriptscriptstyle A}|\varrho_{\scriptscriptstyle A}|\varphi_{\scriptscriptstyle A}\rangle &= \hbox{Tr}_{\scriptscriptstyle \!AB} \left[\varrho_{\scriptscriptstyle \!AB}\, |\varphi_{\scriptscriptstyle A}\rangle\langle\varphi_{\scriptscriptstyle A}|\otimes \mathbbm I_{\scriptscriptstyle B}\right] \geq 0 \notag \\ \langle\varphi_{\scriptscriptstyle B}|\varrho_{\scriptscriptstyle B}|\varphi_{\scriptscriptstyle B}\rangle &= \hbox{Tr}_{\scriptscriptstyle \!AB} \left[\varrho_{\scriptscriptstyle \!AB}\,\mathbbm I_{\scriptscriptstyle A} \otimes |\varphi_{\scriptscriptstyle B}\rangle\langle\varphi_{\scriptscriptstyle B}| \right] \geq 0\,. \notag \quad \qed \end{align} \end{proof} \subsubsection{Conditional states} From the above results it also follows that when we perform a measurement on one of the two subsystems, the state of the "unmeasured" subsystem after the observation of a specific outcome may be obtained as the partial trace of the overall post-measurement state, i.e. of the projection of the state before the measurement onto the eigenspace of the observed eigenvalue; in formula \begin{align} \varrho_{{\scriptscriptstyle B} x} = \frac{1}{p_x} \hbox{Tr}_{\scriptscriptstyle A}\left[ P_x\otimes\mathbbm I_{\scriptscriptstyle B} \,\varrho_{\scriptscriptstyle \!AB}\, P_x\otimes\mathbbm I_{\scriptscriptstyle B} \right] = \frac{1}{p_x} \hbox{Tr}_{\scriptscriptstyle A}\left[\varrho_{\scriptscriptstyle \!AB}\, P_x\otimes\mathbbm I_{\scriptscriptstyle B} \right]\,\label{conditional} \end{align} where, in order to write the second equality, we made use of the circularity of the trace (see Appendix \ref{apTR}) and of the fact that we are dealing with a factorized projector. The state $\varrho_{{\scriptscriptstyle B} x}$ will also be referred to as the "conditional state" of system $B$ after the observation of the outcome $x$ from a measurement of the observable $X$ performed on the system $A$.
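As a worked illustration of Eq. (\ref{conditional}), consider the two-qubit state $|\psi\rangle\rangle=\cos\phi\,|00\rangle\rangle+\sin\phi\,|11\rangle\rangle$ and the measurement of $P_0=|0\rangle\langle 0|$ on system $A$. A direct computation gives
\begin{align}
p_0 = \cos^2\phi\,, \qquad \varrho_{{\scriptscriptstyle B} 0} = \frac{1}{p_0}\, \hbox{Tr}_{\scriptscriptstyle A}\left[\, |\psi\rangle\rangle\langle\langle\psi|\; P_0\otimes\mathbbm I_{\scriptscriptstyle B} \right] = |0\rangle\langle 0|\,, \notag
\end{align}
i.e. upon observing the outcome $0$ on system $A$ the conditional state of system $B$ is projected onto $|0\rangle$; analogously, the outcome $1$, occurring with probability $p_1=\sin^2\phi$, leaves system $B$ in the state $|1\rangle\langle 1|$.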
\begin{exercise} Consider a bidimensional system (say the spin state of a spin $\frac12$ particle) and find two ensembles corresponding to the same density operator. \end{exercise} \begin{exercise} Consider a spin $\frac12$ system and the ensemble $\{p_k,|\psi_k\rangle\}$, $k=0,1$, $p_0=p_1=\frac12$, $|\psi_0\rangle=|0\rangle$, $|\psi_1\rangle=|1\rangle$, where $|k\rangle$ are the eigenstates of $\sigma_3$. Write the density matrix in the basis made of the eigenstates of $\sigma_3$ and then in the basis of $\sigma_1$. Then, do the same but for the ensemble obtained from the previous one by changing the probabilities to $p_0=\frac14$, $p_1=\frac34$. \end{exercise} \begin{exercise} Write down the partial traces of the state $|\psi\rangle\rangle=\cos\phi\, |00\rangle\rangle + \sin\phi\, |11\rangle\rangle$, where we used the notation $|jk\rangle\rangle=|j\rangle\otimes |k\rangle$. \end{exercise} \subsection{Purity and purification of a mixed state} As we have seen in the previous section, when we observe a portion, say $A$, of a composite system described by the vector $|\psi_{\scriptscriptstyle \!AB}\rangle\rangle\in H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle B}$, the mathematical object to be inserted in the Born rule in order to have the correct description of observable quantities is the partial trace, which individuates a density operator on $H_{\scriptscriptstyle A}$. Actually, also the converse is true, i.e. any density operator on a given Hilbert space may be viewed as the partial trace of a state vector on a larger Hilbert space. Let us prove this constructively: if $\varrho$ is a density operator on $H$, then it can be diagonalized by its eigenvectors and written as $\varrho=\sum_k \lambda_k |\psi_k\rangle\langle\psi_k|$; then we introduce another Hilbert space $K$, with dimension at least equal to the number of nonzero eigenvalues of $\varrho$, and a basis $\{|\theta_k\rangle\}$ in $K$, and consider the vector $|\varphi\rangle\rangle \in H \otimes K$ given by $|\varphi\rangle\rangle=\sum_k \sqrt{\lambda_k}\, |\psi_k\rangle \otimes |\theta_k\rangle$. Upon tracing over the Hilbert space $K$, we have $$ \hbox{Tr}_{\scriptscriptstyle K} \left[|\varphi\rangle\rangle\langle\langle\varphi|\right] = \sum_{kk^\prime} \sqrt{\lambda_k\lambda_{k^\prime}}\, |\psi_k\rangle\langle\psi_{k^\prime}|\, \langle\theta_{k^\prime}|\theta_k\rangle= \sum_k \lambda_k\, |\psi_k\rangle\langle\psi_k| = \varrho\:. $$ Any vector on a larger Hilbert space which satisfies the above condition is referred to as a {\em purification} of the given density operator. Notice that, as is apparent from the proof, there exist infinitely many purifications of a density operator. Overall, putting together this fact with the conclusions from the previous section, we are led to reformulate the first postulate to say that {\em quantum states of a physical system are described by density operators}, i.e. positive operators with unit trace on the Hilbert space of the system. \par A suitable measure to quantify how far a density operator is from a projector is the so-called {\em purity}, which is defined as the trace of the squared density operator, $\mu[\varrho]= \hbox{Tr}[\varrho^2]=\sum_k \lambda_k^2$, where the $\lambda_k$'s are the eigenvalues of $\varrho$. Density operators made of a single projector, $\varrho=|\psi\rangle\langle\psi|$, have $\mu=1$ and are referred to as {\em pure states}, whereas for any $\mu<1$ we have a {\em mixed state}. The purity of a state ranges in the interval $1/d \leq \mu \leq 1$, where $d$ is the dimension of the Hilbert space. The lower bound is found by looking for the minimum of $\mu=\sum_k \lambda_k^2$ with the constraint $\sum_k \lambda_k=1$, which amounts to minimizing the function $F=\mu+\gamma\sum_k \lambda_k$, $\gamma$ being a Lagrange multiplier. The solution is $\lambda_k=1/d$, $\forall k$, i.e. the {\em maximally mixed} state $\varrho=\mathbbm I/d$, and the corresponding purity is $\mu=1/d$.
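The purification construction above is easy to check numerically. The following minimal Python/NumPy sketch (an illustration, not part of the formal development; all names are hypothetical) builds a purification of a qubit density matrix from its eigendecomposition and verifies that the partial trace recovers it:
\begin{verbatim}
import numpy as np

def purify(rho):
    # |phi>> = sum_k sqrt(lambda_k) |psi_k> (x) |theta_k>, built from
    # the eigendecomposition of rho; the ancilla space K has one basis
    # vector |theta_k> per nonzero eigenvalue
    vals, vecs = np.linalg.eigh(rho)
    keep = vals > 1e-12
    vals, vecs = vals[keep], vecs[:, keep]
    basis_K = np.eye(len(vals))
    return sum(np.sqrt(v) * np.kron(vecs[:, k], basis_K[k])
               for k, v in enumerate(vals))

rho = np.diag([0.75, 0.25])        # a mixed qubit state
phi = purify(rho)
M = phi.reshape(rho.shape[0], -1)  # coefficients phi_{ik} on H (x) K
rho_back = M @ M.conj().T          # partial trace over K
assert np.allclose(rho_back, rho)
print(np.trace(rho @ rho))         # purity mu = Tr[rho^2] = 0.625
\end{verbatim}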
\par When a system is prepared in a pure state we have the maximum possible information on the system according to quantum mechanics. On the other hand, for mixed states the degree of purity is connected with the amount of information we are missing by looking at the system only, while ignoring the {\em environment}, i.e. the rest of the universe. In fact, by looking at a portion of a composite system we are ignoring the information encoded in the correlations between the portion under investigation and the rest of the system: this results in a smaller amount of information about the state of the subsystem itself. In order to emphasize this aspect, i.e. the existence of residual ignorance about the system, the degree of mixedness may be quantified also by the von Neumann (VN) entropy $S[\varrho]=-\hbox{Tr}\left[\varrho\,\log\varrho\right]=-\sum_n \lambda_n \log\lambda_n$, where $\{\lambda_n\}$ are the eigenvalues of $\varrho$. We have $0\leq S[\varrho] \leq \log d$: for a pure state $S[|\psi\rangle\langle\psi|]=0$, whereas $S[\mathbbm I/d]=\log d$ for a maximally mixed state. The VN entropy is a monotone function of the purity, and vice versa. \begin{exercise} Evaluate the purity and the VN entropy of the partial traces of the state $|\psi\rangle\rangle=\cos\phi\, |01\rangle\rangle + \sin\phi\, |10\rangle\rangle$. \end{exercise} \begin{exercise} Prove that for any pure bipartite state the entropies of the partial traces are equal, though the two density operators need not be equal. \end{exercise} \begin{exercise} Take a single-qubit state with density operator expressed in terms of the Pauli matrices, $\varrho=\frac12 (\mathbbm I + r_1 \sigma_1 + r_2 \sigma_2+ r_3 \sigma_3)$ (Bloch sphere representation), $r_k=\hbox{\rm Tr}[\varrho\, \sigma_k]$, and prove that the Bloch vector $(r_1,r_2,r_3)$ must satisfy $r_1^2+r_2^2+r_3^2\leq 1$ for $\varrho$ to be a density operator. \end{exercise} \section{Quantum measurements} In this section we put the postulates of standard quantum measurement theory under closer scrutiny. We start with some formal considerations and end up with a reformulation suitable for the description of any measurement performed on a quantum system, including those involving external systems or a noisy environment \cite{Per93,Bergou}. \par Let us start by reviewing the postulates of standard quantum measurement theory in a pedantic way, i.e. by expanding Postulate \ref{pqm}; $\varrho$ denotes the state of the system before the measurement. \begin{itemize} \item[{\bf [2.1]}] Any observable quantity is associated to a Hermitian operator $X$ with spectral decomposition $X=\sum_x \,x\,|x\rangle\langle x|$. The eigenvalues are real and we assume for simplicity that they are nondegenerate. A measurement of $X$ yields one of the eigenvalues $x$ as its outcome. \item[{\bf [2.2]}] The eigenvectors of $X$ form a basis for the Hilbert space. The projectors $P_x=|x\rangle\langle x|$ provide a resolution of the identity, $\sum_x P_x=\mathbbm I$. \item[{\bf [2.3]}] The projectors $P_x$ are orthogonal, $P_xP_{x^\prime} = \delta_{xx^\prime}P_x$. It follows that $P_x^2=P_x$ and thus that the eigenvalues of any projector are $0$ and $1$. \item[{\bf [2.4]}] {(Born rule)} The probability that a particular outcome is found as the measurement result is $$p_x= \hbox{Tr}\left[P_x\varrho P_x\right] = \hbox{Tr}\left[\varrho P_x^2\right] \stackrel{\bigstar}{=} \hbox{Tr}\left[\varrho P_x\right]\,.$$ \item[{\bf [2.5]}] {(Reduction rule)} The state after the measurement (reduction rule or projection postulate) is $$\varrho_x = \frac1{p_x}\,P_x\varrho P_x\,,$$ if the outcome is $x$. \item[{\bf [2.6]}] If we perform a measurement but do not record the results, the post-measurement state is given by $\widetilde{\varrho}=\sum_x p_x\,\varrho_x = \sum_x P_x\varrho P_x$.
\end{itemize} The formulations {\bf [2.4]} and {\bf [2.5]} follow from the formulations for pure states, upon invoking the existence of a purification: \begin{align} p_x &= \hbox{Tr}_{\scriptscriptstyle \!AB} \left[P_x \otimes \mathbbm I_{\scriptscriptstyle B}\, |\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} |\,P_x \otimes \mathbbm I_{\scriptscriptstyle B} \right]= \hbox{Tr}_{\scriptscriptstyle \!AB} \left[ |\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} |\,P_x^2 \otimes \mathbbm I_{\scriptscriptstyle B} \right] \notag \\ &=\hbox{Tr}_{\scriptscriptstyle A}\left[\varrho_{\scriptscriptstyle A} P_x^2\right] \, \end{align} \begin{align} \varrho_{{\scriptscriptstyle A} x} &= \frac1{p_x} \hbox{Tr}_{\scriptscriptstyle B} \left[ P_x \otimes\mathbbm I_{\scriptscriptstyle B}\, |\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} |\,P_x \otimes \mathbbm I_{\scriptscriptstyle B}\right]= \frac1{p_x}P_x \, \hbox{Tr}_{\scriptscriptstyle B} \left[ |\psi_{\scriptscriptstyle \!AB}\rangle\rangle\langle\langle\psi_{\scriptscriptstyle \!AB} |\right] P_x \notag \\ &=\frac1{p_x}P_x \, \varrho_{\scriptscriptstyle A}\,P_x\,. \end{align} The message conveyed by these postulates is that we can only predict the spectrum of the possible outcomes and the probability that a given outcome is obtained. On the other hand, the measurement process is random, and we cannot predict the actual outcome of each run. Independently of its purity, a density operator $\varrho$ does not describe the state of a single system, but rather an ensemble of identically prepared systems. If we perform the same measurement on each member of the ensemble we can predict the possible results and the probabilities with which they occur, but we cannot predict the result of an individual measurement (except when the probability of a certain outcome is either $0$ or $1$). \subsection{Probability operator-valued measure and detection operators} The set of postulates {\bf [2.*]} may be seen as a set of recipes to generate probabilities and post-measurement states. We also notice that the number of possible outcomes is limited by the number of terms in the orthogonal resolution of the identity, which itself cannot be larger than the dimensionality of the Hilbert space. It would, however, often be desirable to have more outcomes than the dimension of the Hilbert space while keeping positivity and normalization of the probability distributions. In this section we will show that this is formally possible, upon relaxing the assumptions on the mathematical objects describing the measurement and replacing them with more flexible ones, still obtaining a meaningful prescription to generate probabilities. Then, in the next sections we will show that there are physical processes that fit this generalized description, and that actually no revision of the postulates is needed, provided that the degrees of freedom of the measurement apparatus are taken into account. \par The Born rule is a prescription to generate probabilities: its textbook form is the right-hand side of the starred equality in {\bf [2.4]}. However, the left-hand side has the merit of underlining that, in order to generate a probability, it is sufficient that $P_x^2$ be a positive operator. In fact, we do not need to require that the $P_x$'s are projectors, nor do we need the positivity of the underlying $P_x$ operators.
So, let us consider the following generalization: we introduce a set of positive operators $\Pi_x\geq 0$, which are the generalization of the $P_x$, and use the prescription $p_x=\hbox{Tr}[\varrho\,\Pi_x]$ to generate probabilities. Of course, we want to ensure that this is a true probability distribution, i.e. normalized, and therefore we require that $\sum_x \Pi_x= \mathbbm I$, i.e. that the positive operators still provide a resolution of the identity, as the set of projectors over the eigenstates of a selfadjoint operator does. We will call a decomposition of the identity in terms of positive operators $\sum_x \Pi_x=\mathbbm I$ a {\em probability operator-valued measure} (POVM) and $\Pi_x\geq0$ the elements of the POVM. \par Let us denote by $M_x$ the operators giving the post-measurement states (as in {\bf [2.5]}). We refer to them as the {\em detection operators}. As noted above, they are no longer constrained to be projectors. Actually, they may be any operators satisfying the constraint imposed by {\bf [2.4]}, i.e. $p_x = \hbox{Tr}[M_x\varrho\, M_x^\dag] = \hbox{Tr}[\varrho\, \Pi_x]$. This tells us that the POVM elements have the form $\Pi_x=M_x^\dag M_x$, which, by construction, individuates a set of positive operators. There is a residual freedom in designing the post-measurement state: in fact, since $\Pi_x$ is a positive operator, $M_x=\sqrt{\Pi_x}$ exists and satisfies the constraint, as does any operator of the form $M_x=U_x\,\sqrt{\Pi_x}$ with $U_x$ unitary. This is the most general form of the detection operators satisfying the constraint $\Pi_x=M_x^\dag M_x$ and corresponds to their polar decomposition. The POVM elements determine the absolute values, leaving the freedom of choosing the unitary part. \par Overall, the detection operators $M_x$ represent a generalization of the projectors $P_x$, while the POVM elements $\Pi_x$ generalize $P_x^2$. The postulates for quantum measurements may be reformulated as follows \begin{itemize} \item[{\bf [II.1]}] Observable quantities are associated to POVMs, i.e. decompositions of the identity $\sum_x \Pi_x = \mathbbm I$ in terms of positive operators $\Pi_x\geq 0$. The possible outcomes $x$ label the elements of the POVM, and the construction may be generalized to the continuous spectrum. \item[{\bf [II.2]}] The elements of a POVM are positive operators expressible as $\Pi_x=M^\dag_x\,M_x$, where the detection operators $M_x$ are generic operators with the only constraint $\sum_x M^\dag_x\,M_x = \mathbbm I$. \item[{\bf [II.3]}] (Born rule) The probability that a particular outcome is found as the measurement result is $p_x= \hbox{Tr}\left[M_x\varrho M_x^\dag\right] = \hbox{Tr}\left[\varrho M_x^\dag M_x\right] = \hbox{Tr}\left[\varrho \Pi_x\right]$. \item[{\bf [II.4]}] (Reduction rule) The state after the measurement is $\varrho_x = \frac1{p_x}\,M_x\varrho M_x^\dag$ if the outcome is $x$. \item[{\bf [II.5]}] If we perform a measurement but do not record the results, the post-measurement state is given by $\widetilde{\varrho}=\sum_x p_x\,\varrho_x = \sum_x M_x\varrho M_x^\dag$. \end{itemize} Since orthogonality is no longer a requirement, the number of elements of a POVM is not restricted, and neither is the number of possible outcomes of the measurement. The above formulation generalizes both the Born rule and the reduction rule, and says that any set of detection operators satisfying {\bf [II.2]} corresponds to a legitimate operation leading to a proper probability distribution and to a set of post-measurement states. An explicit example of a POVM with more outcomes than the dimension of the Hilbert space is given below.
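As an illustration (a standard example, added here for concreteness), consider the three "trine" qubit states $|\theta_k\rangle = \cos\frac{k\pi}{3}\,|0\rangle + \sin\frac{k\pi}{3}\,|1\rangle$, $k=0,1,2$, and the operators
\begin{align}
\Pi_k = \frac{2}{3}\, |\theta_k\rangle\langle\theta_k|\,, \qquad \sum_{k=0}^{2} \Pi_k = \mathbbm I\,, \notag
\end{align}
which indeed form a POVM, since $\sum_k |\theta_k\rangle\langle\theta_k| = \frac32\, \mathbbm I$. None of the $\Pi_k$ is a projector, and the POVM has three possible outcomes in a two-dimensional Hilbert space, which is impossible for a standard observable.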
This measurement scheme, defined by {\bf [II.1]}--{\bf [II.5]}, is referred to as a {\em generalized measurement}. Notice that in {\bf [II.4]} we assume a reduction mechanism sending pure states into pure states. This may be further generalized to reduction mechanisms where pure states are transformed into mixtures, but we are not going to deal with this point. \par Of course, up to this point, this is just a formal mathematical generalization of the standard description of measurements given in textbook quantum mechanics, and a few questions naturally arise: Do generalized measurements describe physically realizable measurements? How can they be implemented? And if this is the case, does it mean that the standard formulation is too restrictive or wrong? An answer to all these questions will be provided in the following sections, where we state and prove the Naimark theorem and discuss a few examples of measurements described by POVMs. \subsection{The Naimark theorem} The Naimark theorem basically says that any generalized measurement satisfying {\bf [II.*]} may be viewed as a standard measurement defined by {\bf [2.*]} in a larger Hilbert space, and conversely, any standard measurement involving more than one physical system may be described as a generalized measurement on one of the subsystems. In other words, if we focus attention on a portion of a composite system where a standard measurement takes place, then the statistics of the outcomes and the post-measurement states of the subsystem may be obtained with the tools of generalized measurements. Overall, we have \begin{theorem}[Naimark] For any given {\rm POVM} $\sum_x \Pi_x = \mathbbm I$, $\Pi_x\geq 0$, on a Hilbert space $H_{\scriptscriptstyle A}$ there exist a Hilbert space $H_{\scriptscriptstyle B}$, a state $\varrho_{\scriptscriptstyle B}=|\omega_{\scriptscriptstyle B}\rangle\langle\omega_{\scriptscriptstyle B}| \in L(H_{\scriptscriptstyle B})$, a unitary operation $U\in L(H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle B})$, $UU^\dag=U^\dag U=\mathbbm I$, and a projective measurement $P_x$, $P_xP_{x^\prime}=\delta_{xx^\prime} P_x$, on $H_{\scriptscriptstyle B}$ such that $\Pi_x=\hbox{\rm Tr}_{\scriptscriptstyle B} [\mathbbm I \otimes \varrho_{\scriptscriptstyle B}\, U^\dag\, \mathbbm I \otimes P_x\,U]$. The setup is referred to as a {\em Naimark extension} of the {\rm POVM}. Conversely, any measurement scheme where the system is coupled to another system, from now on referred to as the ancilla, and where, after the evolution, a projective measurement is performed on the ancilla, may be seen as the Naimark extension of a {\rm POVM}, i.e. one may write the Born rule $p_x=\hbox{\rm Tr}[\varrho_{\scriptscriptstyle A}\,\Pi_x]$ and the reduction rule $\varrho_{\scriptscriptstyle A} \rightarrow \varrho_{{\scriptscriptstyle A} x}=\frac1{p_x}M_x\varrho_{\scriptscriptstyle A} M_x^\dag$ at the level of the system only, in terms of the POVM elements $\Pi_x=\hbox{\rm Tr}_{\scriptscriptstyle B} [\mathbbm I \otimes \varrho_{\scriptscriptstyle B}\, U^\dag\, \mathbbm I \otimes P_x\,U]$ and the detection operators $M_x|\varphi_{\scriptscriptstyle A}\rangle = \langle x|U|\varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B}\rangle\rangle$. \end{theorem} Let us start with the second part of the theorem, and look at what happens when we couple the system under investigation to an additional system, usually referred to as the ancilla (or apparatus), let them evolve, and then perform a projective measurement on the ancilla. This kind of setup is schematically depicted in Figure 1. \begin{figure}[h!]
\centerline{\includegraphics[width=0.85\textwidth]{fpovm.pdf}} \caption{Schematic diagram of a generalized measurement. The system of interest is coupled to an ancilla prepared in a known state $|\omega_{\scriptscriptstyle B}\rangle$ by the unitary evolution $U$, and then a projective measurement is performed on the ancilla.} \end{figure} \par The Hilbert space of the overall system is $H_{\scriptscriptstyle A}\otimes H_{\scriptscriptstyle B}$, and we assume that the system and the ancilla are initially independent of each other, i.e. the global initial preparation is $R=\varrho_{\scriptscriptstyle A}\otimes \varrho_{\scriptscriptstyle B}$. We also assume that the ancilla is prepared in the pure state $\varrho_{\scriptscriptstyle B}=|\omega_{\scriptscriptstyle B}\rangle\langle\omega_{\scriptscriptstyle B}|$, since this is always possible upon a suitable purification of the ancilla degrees of freedom, i.e. by suitably enlarging the ancilla Hilbert space. Our aim is to obtain information about the system by measuring an observable $X$ on the ancilla. This is done after the system-ancilla interaction described by the unitary operation $U$. According to the Born rule the probability of the outcomes is given by $$ p_x = \hbox{Tr}_{\scriptscriptstyle \!AB} \left[U \varrho_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B} U^\dag\, \mathbbm I\otimes |x\rangle\langle x|\right] = \hbox{Tr}_{\scriptscriptstyle A}\Big[\varrho_{\scriptscriptstyle A}\,\underbrace{ \hbox{Tr}_{\scriptscriptstyle B}\left[ \mathbbm I \otimes \varrho_{\scriptscriptstyle B}\, U^\dag\,\mathbbm I\otimes |x\rangle\langle x|\,U\right]}_{\textstyle \Pi_x} \Big] $$ where the set of operators $\Pi_x=\hbox{Tr}_{\scriptscriptstyle B}\left[ \mathbbm I \otimes \varrho_{\scriptscriptstyle B}\, U^\dag\,\mathbbm I\otimes |x\rangle\langle x|\,U\right] = \langle\omega_{\scriptscriptstyle B}| U^\dag\, \mathbbm I \otimes P_x\, U | \omega_{\scriptscriptstyle B}\rangle $ is the object that would permit one to write the Born rule at the level of the subsystem $A$, i.e. it is our candidate POVM. \par In order to prove this, let us define the operators $M_x\in L(H_{\scriptscriptstyle A})$ by their action on a generic vector in $H_{\scriptscriptstyle A}$ $$ M_x |\varphi_{\scriptscriptstyle A}\rangle = \langle x| U | \varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B}\rangle\rangle $$ where $| \varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B}\rangle\rangle = | \varphi_{\scriptscriptstyle A}\rangle\otimes |\omega_{\scriptscriptstyle B}\rangle$ and the $|x\rangle$'s are the orthogonal eigenvectors of $X$.
Using the decomposition of $\varrho_{\scriptscriptstyle A}=\sum_k \lambda_k |\psi_k\rangle\langle\psi_k |$ onto its eigenvectors, the probability of the outcomes can be rewritten as \begin{align}\notag p_x &= \hbox{Tr}_{\scriptscriptstyle \!AB} \left[U \varrho_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B} U^\dag\, \mathbbm I\otimes |x\rangle\langle x|\right] = \sum_k \lambda_k \hbox{Tr}_{\scriptscriptstyle \!AB} \left[U |\psi_k,\omega_{\scriptscriptstyle B}\rangle\rangle\langle\langle\omega_{\scriptscriptstyle B},\psi_k | U^\dag\, \mathbbm I\otimes |x\rangle\langle x| \right]\\ \notag & = \sum_k \lambda_k \hbox{Tr}_{\scriptscriptstyle A} \left[\langle x|U |\psi_k,\omega_{\scriptscriptstyle B}\rangle\rangle\langle\langle\omega_{\scriptscriptstyle B},\psi_k | U^\dag|x\rangle\right] = \sum_k \lambda_k \hbox{Tr}_{\scriptscriptstyle A} \left[M_x|\psi_k\rangle\langle\psi_k | M_x^\dag\right] \\ &= \hbox{Tr}_{\scriptscriptstyle A} \left[M_x \varrho_{\scriptscriptstyle A} M_x^\dag\right] = \hbox{Tr}_{\scriptscriptstyle A} \left[\varrho_{\scriptscriptstyle A}\, M_x^\dag M_x\right]\,, \label{bigstar1} \end{align} which shows that $\Pi_x=M_x^\dag M_x$ is indeed a positive operator $\forall x$. Besides, for any vector $|\varphi_{\scriptscriptstyle A}\rangle$ in $H_{\scriptscriptstyle A}$ we have \begin{align} \langle\varphi_{\scriptscriptstyle A}| \sum_x M^\dag_x M_x |\varphi_{\scriptscriptstyle A}\rangle &= \sum_x \langle\langle \omega_{\scriptscriptstyle B},\varphi_{\scriptscriptstyle A}|U^\dag\, \mathbbm I\otimes |x\rangle\langle x|\, U|\varphi_{\scriptscriptstyle A}, \omega_{\scriptscriptstyle B}\rangle\rangle \notag \\ &=\langle\langle \omega_{\scriptscriptstyle B},\varphi_{\scriptscriptstyle A}|U^\dag U|\varphi_{\scriptscriptstyle A}, \omega_{\scriptscriptstyle B}\rangle\rangle=1\,, \label{bigstar2} \end{align} and since this is true for any $|\varphi_{\scriptscriptstyle A}\rangle$ we have $\sum_x M_x^\dag M_x=\mathbbm I$. Putting together Eqs. (\ref{bigstar1}) and (\ref{bigstar2}) we conclude that the set of operators $\Pi_x=M^\dag_x M_x$ is a POVM, with detection operators $M_x$. In turn, the conditional state of the system $A$, after having observed the outcome $x$, is given by \begin{align} \varrho_{{\scriptscriptstyle A} x} &= \frac1{p_x} \hbox{Tr}_{\scriptscriptstyle B} \left[U \varrho_{\scriptscriptstyle A} \otimes |\omega_{\scriptscriptstyle B}\rangle\langle\omega_{\scriptscriptstyle B} | U^\dag\, \mathbbm I\otimes P_x \right] = \frac1{p_x}\sum_k \lambda_k \langle x|U|\psi_k,\omega_{\scriptscriptstyle B}\rangle\rangle\langle\langle\omega_{\scriptscriptstyle B},\psi_k |U^\dag |x\rangle \notag \\ &= \frac1{p_x} M_x \varrho_{\scriptscriptstyle A} M_x^\dag \end{align} This proves one half of the Naimark theorem: if we couple our system to an ancilla, let them evolve and perform the measurement of an observable on the ancilla, which projects the ancilla onto a basis in $H_{\scriptscriptstyle B}$, then this procedure also modifies the system. The transformation need not be a projection. Rather, it is adequately described by a set of detection operators which realize a POVM on the system Hilbert space. Overall, the meaning of the above proof is twofold: on the one hand, we have shown that there exist realistic measurement schemes which are described by POVMs when we look at the system only. At the same time, we have shown that the partial trace of a spectral measure is a POVM, which itself depends on the projective measurement performed on the ancilla, and on its initial preparation.
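The construction of the detection operators is straightforward to check numerically. The following minimal Python/NumPy sketch (a hypothetical two-qubit example, not taken from the text) couples a system qubit to an ancilla prepared in $|0\rangle$ via a CNOT gate, extracts the $M_x$ from $M_x|\varphi_{\scriptscriptstyle A}\rangle=\langle x|U|\varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B}\rangle\rangle$, and verifies that the resulting $\Pi_x=M_x^\dag M_x$ form a resolution of the identity:
\begin{verbatim}
import numpy as np

# CNOT with the system qubit as control and the ancilla as target
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)
omega = np.array([1, 0], dtype=complex)   # ancilla prepared in |0>

dA = dB = 2
U4 = U.reshape(dA, dB, dA, dB)            # U4[a, x, b, i] = <a,x|U|b,i>
# detection operators: M_x[a, b] = sum_i <a,x|U|b,i> omega_i
Ms = [np.einsum('abi,i->ab', U4[:, x, :, :], omega) for x in range(dB)]
Pis = [M.conj().T @ M for M in Ms]        # POVM elements Pi_x = M_x^+ M_x
assert np.allclose(sum(Pis), np.eye(dA))  # resolution of the identity
# for a CNOT this reproduces the projective sigma_z measurement on A:
# M_0 = |0><0| and M_1 = |1><1|
\end{verbatim}
Replacing the CNOT by a generic unitary, and $|\omega_{\scriptscriptstyle B}\rangle$ by an arbitrary ancilla state, yields genuinely non-projective POVMs on the system.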
Finally, we notice that the scheme of Figure 1 provides a general model for any kind of detector with internal degrees of freedom. \par Let us now address the converse problem: given a set of detection operators $M_x$ which realizes a POVM, $\sum_x M^\dag_x M_x=\mathbbm I$, is this the system-only description of an indirect measurement performed on a larger Hilbert space? In other words, do there exist a Hilbert space $H_{\scriptscriptstyle B}$, a state $\varrho_{\scriptscriptstyle B}= |\omega_{\scriptscriptstyle B}\rangle\langle\omega_{\scriptscriptstyle B}|\in L(H_{\scriptscriptstyle B})$, a unitary $U\in L(H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle B})$, and a projective measurement $P_x=|x\rangle\langle x|$ in $H_{\scriptscriptstyle B}$ such that $M_x |\varphi_{\scriptscriptstyle A}\rangle = \langle x| U | \varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B}\rangle\rangle$ holds for any $|\varphi_{\scriptscriptstyle A}\rangle \in H_{\scriptscriptstyle A}$ and $\Pi_x= \langle\omega_{\scriptscriptstyle B}| U^\dag \mathbbm I \otimes P_x U | \omega_{\scriptscriptstyle B}\rangle $? The answer is positive, and we will provide a constructive proof. Let us take $H_{\scriptscriptstyle B}$ with dimension equal to the number of detection operators and of POVM elements, and choose a basis $|x\rangle$ for $H_{\scriptscriptstyle B}$, which in turn determines a projective measurement. Then we choose an arbitrary state $|\omega_{\scriptscriptstyle B}\rangle \in H_{\scriptscriptstyle B}$ and define the action of an operator $U$ as $$ U\,|\varphi_{\scriptscriptstyle A}\rangle \otimes |\omega_{\scriptscriptstyle B} \rangle = \sum_x M_x\,|\varphi_{\scriptscriptstyle A}\rangle \otimes |x\rangle $$ where $|\varphi_{\scriptscriptstyle A}\rangle \in H_{\scriptscriptstyle A}$ is arbitrary. The operator $U$ preserves the scalar product, \begin{align}\notag \langle\langle \omega_{\scriptscriptstyle B},\varphi_{\scriptscriptstyle A}^\prime | U^\dag U| \varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B} \rangle\rangle = \sum_{x x^\prime} \langle \varphi_{\scriptscriptstyle A}^\prime | M_{x^\prime}^\dag M_x|\varphi_{\scriptscriptstyle A}\rangle \langle x^\prime|x\rangle = \sum_{x} \langle\varphi_{\scriptscriptstyle A}^\prime | M_{x}^\dag M_x|\varphi_{\scriptscriptstyle A}\rangle = \langle\varphi_{\scriptscriptstyle A}^\prime | \varphi_{\scriptscriptstyle A}\rangle \end{align} and so it is an isometry on the subspace $H_{\scriptscriptstyle A}\otimes {\rm span}\{|\omega_{\scriptscriptstyle B}\rangle\}$. Besides, it may be extended to a full unitary operator on the global Hilbert space $H_{\scriptscriptstyle A}\otimes H_{\scriptscriptstyle B}$, e.g. by completing the set of its orthonormal images in an arbitrary way.
Finally, for any $|\varphi_{\scriptscriptstyle A}\rangle\in H_{\scriptscriptstyle A}$, we have $$\langle x|U|\varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B}\rangle\rangle = \sum_{x^\prime} M_{x^\prime}|\varphi_{\scriptscriptstyle A}\rangle \langle x|x^\prime\rangle = M_x |\varphi_{\scriptscriptstyle A}\rangle\,,$$ and $$\langle\varphi_{\scriptscriptstyle A}|\Pi_x|\varphi_{\scriptscriptstyle A}\rangle= \langle\varphi_{\scriptscriptstyle A}|M_x^\dag M_x|\varphi_{\scriptscriptstyle A}\rangle= \langle\langle\omega_{\scriptscriptstyle B},\varphi_{\scriptscriptstyle A}|U^\dag \mathbbm I\otimes P_x U|\varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B}\rangle\rangle\,,$$ that is, $\Pi_x= \langle\omega_{\scriptscriptstyle B}| U^\dag \mathbbm I \otimes P_x U | \omega_{\scriptscriptstyle B}\rangle$. \hfill $\qed$ \par This completes the {proof of the Naimark theorem}, which asserts a complete equivalence between POVMs and indirect measurements of the type described above. In other words, an indirect measurement may be seen as the physical implementation of a POVM, and any POVM may be realized by an indirect measurement. \par The emerging picture is thus the following: in measuring a quantity of interest on a physical system one generally deals with a larger system that involves additional degrees of freedom, besides those of the system itself. These additional physical entities are globally referred to as the apparatus or the ancilla. As a matter of fact, the measured quantity may always be described by a standard observable, although on a larger Hilbert space describing both the system and the apparatus. When we trace out the degrees of freedom of the apparatus we are generally left with a POVM rather than a PVM. Conversely, any conceivable POVM, i.e. any set of positive operators providing a resolution of the identity, describes a generalized measurement, which may always be implemented as a standard measurement in a larger Hilbert space. \par Before ending this Section, a few remarks are in order: \begin{itemize} \item[{\bf R1}] The possible Naimark extensions are actually infinite, corresponding to the intuitive idea that there are infinitely many ways, involving an arbitrary number of ancillary systems, of measuring a given quantity. The construction reported above is sometimes referred to as the {\em canonical extension} of a POVM. The Naimark theorem just says that an implementation in terms of an ancilla-based indirect measurement is always possible, but of course the actual implementation may differ from the canonical one. \item[{\bf R2}] The projection postulate described at the beginning of this Section, the scheme of indirect measurement, and the canonical extension of a POVM have in common the assumption that a nondemolition detection scheme takes place, in which the system is modified by the measurement but still exists afterwards. This is sometimes referred to as a {\em measurement of the first kind} in textbook quantum mechanics. Conversely, in a demolition measurement, or {\em measurement of the second kind}, the system is destroyed during the measurement and it makes no sense to speak of the state of the system after the measurement. Notice, however, that for demolition measurements on a field the formalism of generalized measurements provides the framework for the correct description of the state evolution. As an example, let us consider the detection of photons on a single mode of the radiation field.
A demolition photodetector (such as those based on the absorption of light) realizes, in ideal conditions, the measurement of the number operator $a^\dag a$ without leaving any photon in the mode. If $\varrho=\sum_{np} \varrho_{np}|n\rangle\langle p|$ is the state of the single-mode radiation field, a photodetector of this kind gives a natural number $n$ as output, with probability $p_n=\varrho_{nn}$, whereas the post-measurement state is the vacuum $|0\rangle\langle 0|$, independently of the outcome of the measurement. This kind of measurement is described by the orthogonal POVM $\Pi_n=|n\rangle\langle n|$, made of the eigenvectors of the number operator, and by the detection operators $M_n=|0\rangle\langle n|$. The proof is left as an exercise. \item[{\bf R3}] We have formulated and proved the Naimark theorem in a restricted form, suitable for our purposes. It should be noticed that it holds in more general terms, e.g. with the extension of the Hilbert space given by a direct sum rather than a tensor product, and also under relaxed hypotheses \cite{Pau}. \end{itemize} \subsubsection{Conditional states in generalized measurements} If we have a composite system and we perform a projective measurement on, say, subsystem $A$, the conditional state of the unmeasured subsystem $B$ after the observation of the outcome $x$ is given by Eq. (\ref{conditional}), i.e. it is the partial trace of the projection of the state before the measurement onto the eigenspace of the observed eigenvalue. One may wonder whether a similar result also holds when the measurement performed on subsystem $A$ is described by a POVM. The answer is positive, and the proof may be given in two ways. The first is based on the observation that, thanks to the existence of a canonical Naimark extension, we may write the state of the global system after the measurement as $$\varrho_{{\scriptscriptstyle \!AB} x} = \frac1{p_x} M_x \otimes \mathbbm I_{\scriptscriptstyle B}\, \varrho_{\scriptscriptstyle \!AB}\, M_x^\dag \otimes \mathbbm I_{\scriptscriptstyle B}\,,$$ and thus the conditional state of subsystem $B$ is the partial trace $\varrho_{{\scriptscriptstyle B} x}=\hbox{Tr}_{\scriptscriptstyle A} [\varrho_{{\scriptscriptstyle \!AB} x}]$, i.e. $$\varrho_{{\scriptscriptstyle B} x}= \frac1{p_x}\hbox{Tr}_{\scriptscriptstyle A} [M_x \otimes \mathbbm I_{\scriptscriptstyle B}\, \varrho_{\scriptscriptstyle \!AB}\, M_x^\dag \otimes \mathbbm I_{\scriptscriptstyle B} ]= \frac1{p_x}\hbox{Tr}_{\scriptscriptstyle A} [ \varrho_{\scriptscriptstyle \!AB}\, M_x^\dag M_x \otimes \mathbbm I_{\scriptscriptstyle B} ]= \frac1{p_x}\hbox{Tr}_{\scriptscriptstyle A} [ \varrho_{\scriptscriptstyle \!AB}\, \Pi_x \otimes \mathbbm I_{\scriptscriptstyle B} ]\,,$$ where again we used the circularity of the partial trace in the presence of factorized operators. A second proof may be offered by invoking the Naimark theorem only to ensure the existence of an extension, i.e. a projective measurement on a larger Hilbert space $H_{\scriptscriptstyle C} \otimes H_{\scriptscriptstyle A}$, which reduces to the POVM after tracing over $H_{\scriptscriptstyle C}$.
In formulas, assuming that $P_x \in L(H_{\scriptscriptstyle C} \otimes H_{\scriptscriptstyle A})$ is a projector and $\sigma\in L(H_{\scriptscriptstyle C})$ a density operator, \begin{align}\notag \varrho_{{\scriptscriptstyle B} x} &=\frac1{p_x}\hbox{Tr}_{{\scriptscriptstyle C}{\scriptscriptstyle A}} \left[ P_x\otimes\mathbbm I_{\scriptscriptstyle B}\, \varrho_{\scriptscriptstyle \!AB} \otimes \sigma\, P_x\otimes\mathbbm I_{\scriptscriptstyle B} \right]=\frac1{p_x}\hbox{Tr}_{{\scriptscriptstyle C}{\scriptscriptstyle A}} \left[ \varrho_{\scriptscriptstyle \!AB} \otimes \sigma\, P_x\otimes\mathbbm I_{\scriptscriptstyle B} \right] \\ \notag &=\frac1{p_x}\hbox{Tr}_{{\scriptscriptstyle A}} \left[ \varrho_{\scriptscriptstyle \!AB} \Pi_x\otimes\mathbbm I_{\scriptscriptstyle B} \right]\,. \end{align} \subsection{Joint measurement of noncommuting observables} \label{jm} A common statement about quantum measurements says that it is not possible to perform a joint measurement of two observables $Q_{\scriptscriptstyle A}$ and $P_{\scriptscriptstyle A}$ of a given system $A$ if they do not commute, i.e. $[Q_{\scriptscriptstyle A},P_{\scriptscriptstyle A}]\neq 0$. This is related to the impossibility of finding any common set of projectors on the Hilbert space $H_{\scriptscriptstyle A}$ of the system, and thus of defining a joint observable. On the other hand, the question arises as to whether common projectors may be found in a larger Hilbert space, i.e. whether one may implement a joint measurement in the form of a generalized measurement. The answer is indeed positive \cite{art1,yue82}: this Section is devoted to describing the canonical implementation of joint measurements for pairs of observables having a (nonzero) commutator $[Q_{\scriptscriptstyle A},P_{\scriptscriptstyle A}]=c\, \mathbbm I \neq 0$ proportional to the identity operator. \par The basic idea is to look for a pair of commuting observables $[X_{\scriptscriptstyle \!AB},Y_{\scriptscriptstyle \!AB}]=0$ in a larger Hilbert space $H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle B}$ which {\em trace} the observables $Q_{\scriptscriptstyle A}$ and $P_{\scriptscriptstyle A}$, i.e. which have the same expectation values \begin{align} \langle X_{\scriptscriptstyle \!AB}\rangle\equiv \hbox{Tr}_{\scriptscriptstyle \!AB}[X_{\scriptscriptstyle \!AB}\, \varrho_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B}] &= \hbox{Tr}_{\scriptscriptstyle A} [Q_{\scriptscriptstyle A}\, \varrho_{\scriptscriptstyle A}] \equiv \langle Q_{\scriptscriptstyle A} \rangle \notag \\ \langle Y_{\scriptscriptstyle \!AB}\rangle\equiv \hbox{Tr}_{\scriptscriptstyle \!AB}[Y_{\scriptscriptstyle \!AB}\, \varrho_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B}] &= \hbox{Tr}_{\scriptscriptstyle A} [P_{\scriptscriptstyle A}\, \varrho_{\scriptscriptstyle A}] \equiv\langle P_{\scriptscriptstyle A} \rangle \label{j1} \end{align} for any state $\varrho_{\scriptscriptstyle A} \in H_{\scriptscriptstyle A}$ of the system under investigation, and a fixed, suitable preparation $\varrho_{\scriptscriptstyle B}\in H_{\scriptscriptstyle B}$ of the system $B$.
A pair of such observables may be found upon choosing a replica system $B$, identical to $A$, and considering the operators \begin{align} X_{\scriptscriptstyle \!AB} & = Q_{\scriptscriptstyle A} \otimes \mathbbm I_{\scriptscriptstyle B} + \mathbbm I_{\scriptscriptstyle A} \otimes Q_{\scriptscriptstyle B} \notag \\ Y_{\scriptscriptstyle \!AB} & = P_{\scriptscriptstyle A} \otimes \mathbbm I_{\scriptscriptstyle B} - \mathbbm I_{\scriptscriptstyle A} \otimes P_{\scriptscriptstyle B} \label{j2} \end{align} where $Q_{\scriptscriptstyle B}$ and $P_{\scriptscriptstyle B}$ are the analogues of $Q_{\scriptscriptstyle A}$ and $P_{\scriptscriptstyle A}$ for system $B$; see \cite{BV10} for more details involving the requirement of covariance. The operators in Eq. (\ref{j2}), taken together with a state $\varrho_{\scriptscriptstyle B} \in H_{\scriptscriptstyle B}$ satisfying \begin{align} \hbox{Tr}_{\scriptscriptstyle B} [Q_{\scriptscriptstyle B}\, \varrho_{\scriptscriptstyle B}] =\hbox{Tr}_{\scriptscriptstyle B} [P_{\scriptscriptstyle B}\, \varrho_{\scriptscriptstyle B}]=0\,, \label{j3} \end{align} fulfill the conditions in Eq. (\ref{j1}), i.e. realize a joint generalized measurement of the noncommuting observables $Q_{\scriptscriptstyle A}$ and $P_{\scriptscriptstyle A}$. The operators $X_{\scriptscriptstyle \!AB}$ and $Y_{\scriptscriptstyle \!AB}$ are Hermitian by construction. Their commutator is given by \begin{align} [X_{\scriptscriptstyle \!AB},Y_{\scriptscriptstyle \!AB}]= [Q_{\scriptscriptstyle A},P_{\scriptscriptstyle A}] \otimes \mathbbm I_{\scriptscriptstyle B} - \mathbbm I_{\scriptscriptstyle A} \otimes [Q_{\scriptscriptstyle B},P_{\scriptscriptstyle B}] = 0\,. \label{j4} \end{align} Notice that the last equality, i.e. the fact that the two operators commute, is valid only if the commutator $[Q_{\scriptscriptstyle A},P_{\scriptscriptstyle A}]=c\, \mathbbm I$ is proportional to the identity. More general constructions are needed if this condition does not hold \cite{jsp1}. \par Since $[X_{\scriptscriptstyle \!AB},Y_{\scriptscriptstyle \!AB}]=0$, the complex operator $Z_{\scriptscriptstyle \!AB} = X_{\scriptscriptstyle \!AB} + i\, Y_{\scriptscriptstyle \!AB}$ is {\em normal}, i.e. $[Z_{\scriptscriptstyle \!AB},Z_{\scriptscriptstyle \!AB}^\dag]=0$. For normal operators the spectral theorem holds, and we may write \begin{align} Z_{\scriptscriptstyle \!AB} = \sum_z z\, P_z \qquad P_z=\kket{z} \bbra{z}\qquad Z_{\scriptscriptstyle \!AB} \kket{z} = z \kket{z} \label{j5} \end{align} where $z\in {\mathbbm C}$, and $P_z$ are orthogonal projectors on the eigenstates $\kket{z}\equiv \kket{z}_{\scriptscriptstyle \!AB}$ of $Z_{\scriptscriptstyle \!AB}$. The set $\{P_z\}$ represents the common projectors defining the joint observable $Z_{\scriptscriptstyle \!AB}$. Each run of the measurement returns a complex number, whose real and imaginary parts correspond to a sample of the $X_{\scriptscriptstyle \!AB}$ and $Y_{\scriptscriptstyle \!AB}$ values, aiming at sampling $Q_{\scriptscriptstyle A}$ and $P_{\scriptscriptstyle A}$. The statistics of the measurement is given by \begin{align} p_{\scriptscriptstyle Z}(z) = \hbox{Tr}_{\scriptscriptstyle \!AB}[\varrho_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B}\, P_z]= \hbox{Tr}_{\scriptscriptstyle A}[\varrho_{\scriptscriptstyle A}\, \Pi_z] \label{j6} \end{align} where the POVM $\Pi_z$ is given by \begin{align} \Pi_z = \hbox{Tr}_{\scriptscriptstyle B}[\mathbbm I_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B}\, P_z]\,. \label{j7} \end{align}
The mean values $\langle X_{\scriptscriptstyle \!AB}\rangle=\langle Q_{\scriptscriptstyle A}\rangle$ and $\langle Y_{\scriptscriptstyle \!AB}\rangle= \langle P_{\scriptscriptstyle A}\rangle$ are the correct ones by construction, where by ``correct'' we mean the mean values that one would have recorded by measuring the two observables $Q_{\scriptscriptstyle A}$ and $P_{\scriptscriptstyle A}$ separately, in a standard (single) projective measurement on $\varrho_{\scriptscriptstyle A}$. On the other hand, the two marginal distributions $$ p_{\scriptscriptstyle X} (x) = \int \! dy\, p_{\scriptscriptstyle Z} (x+ i y) \qquad p_{\scriptscriptstyle Y} (y) = \int \! dx\, p_{\scriptscriptstyle Z} (x+ i y)\,, $$ need not reproduce the distributions obtained in single measurements. In particular, for the measured variances $\langle \Delta X_{\scriptscriptstyle \!AB}^2\rangle = \langle X_{\scriptscriptstyle \!AB}^2\rangle - \langle X_{\scriptscriptstyle \!AB}\rangle^2$ and $\langle\Delta Y_{\scriptscriptstyle \!AB}^2\rangle$ one obtains \begin{align} \langle \Delta X_{\scriptscriptstyle \!AB}^2\rangle &= \hbox{\rm Tr} \left[ (Q_{\scriptscriptstyle A}^2\otimes \mathbbm I_{\scriptscriptstyle B} + \mathbbm I_{\scriptscriptstyle A} \otimes Q_{\scriptscriptstyle B}^2 + 2\, Q_{\scriptscriptstyle A} \otimes Q_{\scriptscriptstyle B})\, \varrho_{\scriptscriptstyle A} \otimes \varrho_{\scriptscriptstyle B} \right] - \langle Q_{\scriptscriptstyle A} \rangle^2 \notag \\ &= \langle \Delta Q_{\scriptscriptstyle A}^2 \rangle + \langle Q_{\scriptscriptstyle B}^2\rangle \notag \\\langle \Delta Y_{\scriptscriptstyle \!AB}^2\rangle &= \langle \Delta P_{\scriptscriptstyle A}^2 \rangle + \langle P_{\scriptscriptstyle B}^2\rangle\, \label{j8} \end{align} where we have already taken into account that $\langle Q_{\scriptscriptstyle B} \rangle = \langle P_{\scriptscriptstyle B}\rangle =0$. As is apparent from Eqs. (\ref{j8}), the variances of $X_{\scriptscriptstyle \!AB}$ and $Y_{\scriptscriptstyle \!AB}$ are larger than those of the original, noncommuting, observables $Q_{\scriptscriptstyle A}$ and $P_{\scriptscriptstyle A}$. \par Overall, we may summarize the emerging picture as follows: a joint measurement of a pair of noncommuting observables corresponds to a generalized measurement and may be implemented as the measurement of a pair of commuting observables on an enlarged Hilbert space. Mean values are preserved, whereas the noncommuting nature of the original observables manifests itself in the broadening of the marginal distributions, i.e. an additional noise term appears in both variances.
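The added-noise formula may also be checked numerically. The following minimal sketch (our addition; it works in a truncated Fock space, which reproduces the commutation relation only approximately, with negligible error for the low-energy states used here) verifies the first of Eqs. (\ref{j8}) for a pair of conjugated quadratures, with system $A$ in a coherent state and the replica $B$ in the vacuum; the $Y_{\scriptscriptstyle \!AB}$ case is analogous:
\begin{verbatim}
import numpy as np
from math import factorial

N = 25                                    # Fock-space truncation (approximation)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
Q = (a + a.T) / np.sqrt(2)                # quadrature; [Q,P] ~ i up to truncation

alpha = 0.8                               # coherent amplitude (arbitrary choice)
n = np.arange(N)
psiA = np.exp(-alpha**2 / 2) * alpha**n / np.sqrt([float(factorial(k)) for k in n])
psiB = np.zeros(N); psiB[0] = 1.0         # replica B in the vacuum: <Q_B> = 0

I = np.eye(N)
X = np.kron(Q, I) + np.kron(I, Q)         # commuting observable tracing Q_A
psi = np.kron(psiA, psiB)

mean = lambda O, v: (v.conj() @ O @ v).real
print(mean(X, psi), mean(Q, psiA))        # equal mean values (~1.131)
varX = mean(X @ X, psi) - mean(X, psi)**2
varQ = mean(Q @ Q, psiA) - mean(Q, psiA)**2
print(varX, varQ + mean(Q @ Q, psiB))     # 1.0 = 0.5 + 0.5: the added noise
\end{verbatim}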
The uncertainty product may be written as \begin{align} \langle \Delta X_{\scriptscriptstyle \!AB}^2\rangle \langle \Delta Y_{\scriptscriptstyle \!AB}^2\rangle & = \langle \Delta Q_{\scriptscriptstyle A}^2\rangle \langle \Delta P_{\scriptscriptstyle A}^2\rangle + \langle \Delta Q_{\scriptscriptstyle A}^2\rangle \langle P_{\scriptscriptstyle B}^2\rangle + \langle Q_{\scriptscriptstyle B}^2\rangle \langle \Delta P_{\scriptscriptstyle A}^2\rangle + \langle Q_{\scriptscriptstyle B}^2\rangle \langle P_{\scriptscriptstyle B}^2\rangle\,, \notag \\ & \geq \frac14 \big|[Q_{\scriptscriptstyle A},P_{\scriptscriptstyle A}]\big|^2 + \langle \Delta Q_{\scriptscriptstyle A}^2\rangle \langle P_{\scriptscriptstyle B}^2\rangle + \langle Q_{\scriptscriptstyle B}^2\rangle \langle \Delta P_{\scriptscriptstyle A}^2\rangle + \langle Q_{\scriptscriptstyle B}^2\rangle \langle P_{\scriptscriptstyle B}^2\rangle\,, \label{j9} \end{align} where the last three terms are usually referred to as the {\em added noise} due to the joint measurement. If we perform a joint measurement on a minimum uncertainty state (MUS, see Appendix \ref{apUR}) for a given pair of observables (e.g. a coherent state in the joint measurement of a pair of conjugated quadratures of the radiation field) and use a MUS also for the preparation of the replica system (e.g. the vacuum), then Eq. (\ref{j9}) becomes \begin{align} \langle \Delta X_{\scriptscriptstyle \!AB}^2\rangle \langle \Delta Y_{\scriptscriptstyle \!AB}^2\rangle = \big|[Q_{\scriptscriptstyle A},P_{\scriptscriptstyle A}]\big|^2\,. \label{10} \end{align} This is four times the minimum attainable uncertainty product in the case of a measurement of a single observable (see Appendix \ref{apUR}). In terms of the rms values $\Delta X = \sqrt{\langle \Delta X^2\rangle}$ we have a factor $2$, which is usually referred to as the $3$ dB of added noise in joint measurements. The experimental realization of joint measurements of noncommuting observables has been carried out for conjugated quadratures of the radiation field in a wide range of frequencies, from radiowaves to the optical domain, see e.g. \cite{wal}. \subsection{About the so-called Heisenberg principle} \label{HP} Let us start by quoting Wikipedia about the Heisenberg principle \cite{wikiHP} \begin{quote} {\em Published by Werner Heisenberg in 1927, the principle implies that it is impossible to simultaneously both measure the present position while "determining" the future momentum of an electron or any other particle with an arbitrary degree of accuracy and certainty. This is not a statement about researchers' ability to measure one quantity while determining the other quantity. Rather, it is a statement about the laws of physics. That is, a system cannot be defined to simultaneously measure one value while determining the future value of these pairs of quantities. The principle states that a minimum exists for the product of the uncertainties in these properties that is equal to or greater than one half of the reduced Planck constant. }\end{quote} As is apparent from the above formulation, the principle is about the precision achievable in the measurement of an observable and the disturbance introduced by the same measurement on the state under investigation, which, in turn, would limit the precision of a subsequent measurement of the conjugated observable.
The principle, which has been quite useful in the historical development of quantum mechanics, has been inferred from the analysis of Heisenberg's celebrated gedanken experiments, and thus is heuristic in nature. However, since its mathematical formulation is related to that of the uncertainty relations (see Appendix \ref{apUR}), it is often thought of as a theorem following from the axiomatic structure of quantum mechanics. This is not the case: here we exploit the formalism of generalized measurements to provide an explicit example of a measurement scheme providing the maximum information about a given observable, i.e. the statistics of the corresponding PVM, while leaving the state under investigation in an eigenstate of the conjugated observable. \par Let us consider two noncommuting observables with $[A,B]= c\, \mathbbm I$ and the set of detection operators $M_a = |b\rangle\langle a|$, where $|a\rangle$ and $|b\rangle$ are eigenstates of $A$ and $B$ respectively, i.e. $A|a\rangle=a|a\rangle$, $B|b\rangle=b|b\rangle$. According to the Naimark theorem, the set of operators $\{M_a\}$ describes a generalized measurement (e.g. an indirect measurement as the one depicted in Fig. 1) with statistics $p_a = \hbox{\rm Tr} [\varrho\, \Pi_a]$ described by the POVM $\Pi_a = M^\dag_a M_a = |a\rangle\langle a|$, and where the conditional states after the measurement are given by $\varrho_a = \frac{1}{p_a} M_a \varrho M_a^\dag = |b\rangle\langle b|$. In other words, the generalized measurement described by the set $\{M_a\}$ has the same statistics as a von Neumann projective measurement of the observable $A$, and leaves the system under investigation in an eigenstate of the observable $B$, thus {\em determining its future value with an arbitrary degree of accuracy and certainty} and contradicting the formulation of the so-called Heisenberg principle reported above. An explicit unitary realization of this kind of measurement for the case of position, as well as a detailed discussion of the exact meaning of the Heisenberg principle and of the tradeoff between precision and disturbance in a quantum measurement, may be found in \cite{Ozawa02}. \subsection{The quantum roulette} Let us consider $K$ projective measurements corresponding to $K$ nondegenerate isospectral observables $X_k$, $k=1,...,K$, in a Hilbert space $H$, and consider the following experiment. The system is sent to a detector which at random, with probability $z_k$, $\sum_k z_k=1$, performs the measurement of the observable $X_k$. This is known as the quantum roulette, since the observable to be measured is chosen at random, e.g. according to the outcome of a random generator like a roulette. The probability of getting the outcome $x$ from the measurement of the observable $X_k$ on a state $\varrho\in L(H)$ is given by $p_x^{(k)} = \hbox{Tr}[\varrho\,P^{(k)}_x]$, $P^{(k)}_x=|x\rangle_k{}_k\langle x|$, and the overall probability of getting the outcome $x$ from our experiment is given by $$ p_x = \sum_k z_k p_x^{(k)}=\sum_k z_k \hbox{Tr}[\varrho\,P^{(k)}_x] = \hbox{Tr}[\varrho\,\sum_k z_k P^{(k)}_x] = \hbox{Tr}[\varrho\,\Pi_x]\,,$$ where the POVM describing the measurement is given by $\Pi_x=\sum_k z_k P^{(k)}_x$.
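A minimal numerical sketch of this construction (our addition; the qubit setting, the three angles and the uniform weights are arbitrary illustrative choices) is the following:
\begin{verbatim}
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])

K, z = 3, np.ones(3) / 3                 # three equiprobable settings (arbitrary)
alphas = [0.0, np.pi / 4, np.pi / 2]

# Projectors P^(k)_x onto the eigenvectors of sigma_alpha, outcomes x = +1, -1
P = {}
for k, al in enumerate(alphas):
    vals, vecs = np.linalg.eigh(np.cos(al) * s1 + np.sin(al) * s2)
    for x, v in zip(vals.round(), vecs.T):
        P[(k, int(x))] = np.outer(v, v.conj())

Pi = {x: sum(z[k] * P[(k, x)] for k in range(K)) for x in (+1, -1)}
print(np.allclose(Pi[+1] + Pi[-1], np.eye(2)))   # resolution of identity: True
print(np.allclose(Pi[+1] @ Pi[+1], Pi[+1]))      # not projective: False
\end{verbatim}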
The set $\{\Pi_x\}$ is indeed a POVM and not a projective measurement, since $$\Pi_x\Pi_{x^\prime} = \sum_{kk^\prime} z_k z_{k^\prime} P^{(k)}_xP^{(k^\prime)}_{x^\prime}\neq \delta_{xx^\prime} \Pi_x\,.$$ Again, we have a practical situation where POVMs naturally arise in order to describe the statistics of the measurement in terms of the Born rule and the system density operator. A Naimark extension for the quantum roulette may be obtained as follows. Let us consider an additional {\em probe} system described by a Hilbert space $H_{\scriptscriptstyle P}$ of dimension $K$, equal to the number of measured observables in the roulette, and the set of projectors $Q_x=\sum_k P^{(k)}_x \otimes |\theta_k\rangle\langle\theta_k|$, where $\{|\theta_k\rangle\}$ is a basis for $H_{\scriptscriptstyle P}$. Then, upon preparing the probe system in the superposition $|\omega_P\rangle=\sum_k \sqrt{z_k} |\theta_k\rangle$, we have that $p_x=\hbox{Tr}_{{\scriptscriptstyle S}{\scriptscriptstyle P}} [\varrho\otimes |\omega_{\scriptscriptstyle P}\rangle\langle\omega_{\scriptscriptstyle P}|\, Q_x]$ and, in turn, $\Pi_x = \hbox{Tr}_{\scriptscriptstyle P}[\mathbbm I_{\scriptscriptstyle S}\otimes|\omega_{\scriptscriptstyle P}\rangle\langle \omega_{\scriptscriptstyle P}|\, Q_x]=\sum_k z_k P^{(k)}_x$. The state of the system after the measurement may be obtained as the partial trace \begin{align}\notag \varrho_x &= \frac1{p_x} \hbox{Tr}_{\scriptscriptstyle P} \left[Q_x\,\varrho\otimes|\omega_{\scriptscriptstyle P} \rangle\langle\omega_{\scriptscriptstyle P}|\,Q_x \right] \\\notag & = \frac1{p_x} \sum_k \sum_{k^\prime} \hbox{Tr}_{\scriptscriptstyle P} \left[ P_x^{(k)}\otimes|\theta_k\rangle\langle\theta_k| \,\varrho\otimes|\omega_{\scriptscriptstyle P} \rangle\langle\omega_{\scriptscriptstyle P}|\, P_x^{(k^\prime)}\otimes|\theta_{k^\prime}\rangle\langle\theta_{k^\prime}| \right] \\ \notag & = \frac1{p_x} \sum_k z_k P_x^{(k)} \varrho\, P_x^{(k)}\:. \end{align} Notice that the Naimark extension presented here is not the canonical one. \begin{exercise} Prove that the operators $Q_x$ introduced for the Naimark extension of the quantum roulette are indeed projectors. \end{exercise} \begin{exercise} Take a system made of a single qubit and construct the canonical Naimark extension for the quantum roulette obtained by measuring the observables $\sigma_\alpha=\cos\alpha\, \sigma_1 + \sin\alpha\,\sigma_2$, where $\sigma_1$ and $\sigma_2$ are Pauli matrices and $\alpha\in[0,\pi]$ is chosen at random with probability density $p(\alpha)=\pi^{-1}$. \end{exercise} \section{Quantum operations} In this Section we address the dynamical evolution of quantum systems, to see whether the standard formulation in terms of unitary evolutions needs a suitable generalization. This is indeed the case: we will introduce a generalized description and see how this reconciles with what we call Postulate \ref{pqd} in the Introduction. We will proceed in close analogy with what we have done for states and measurements. We start by closely inspecting the physical motivations behind any mathematical description of quantum evolution, and look for physically motivated conditions that a map intended to transform a quantum state into a quantum state (from now on, a {\em quantum operation}) should satisfy in order to be admissible. This will lead us to the concept of complete positivity, which suitably generalizes the motivations behind unitarity.
We then prove that any quantum operation may be seen as the partial trace of a unitary evolution in a larger Hilbert space, and illustrate a convenient form, the so-called Kraus or operator-sum representation, to express the action of a quantum operation on quantum states. \par By quantum operation we mean a map $\varrho \rightarrow {\cal E} (\varrho)$ transforming a quantum state $\varrho$ into another quantum state ${\cal E} (\varrho)$. The basic requirements on ${\cal E}$ to describe a physically admissible operation are those captured by the request of unitarity in the standard formulation, i.e. \begin{itemize} \item[${\boldsymbol{Q1}}$] The map is positive and trace-preserving, i.e. ${\cal E} (\varrho) \geq 0$ (hence selfadjoint) and $\hbox{Tr} [{\cal E} (\varrho)] = \hbox{Tr}[\varrho]=1$. The last assumption may be relaxed to that of being trace non-increasing, $0\leq \hbox{Tr} [{\cal E} (\varrho)]\leq 1$, in order to include evolutions induced by measurements (see below). \item[${\boldsymbol{Q2}}$] The map is linear, ${\cal E}(\sum_k p_k \varrho_k) = \sum_k p_k {\cal E}(\varrho_k)$, i.e. the state obtained by applying the map to the ensemble $\{p_k, \varrho_k\}$ is the ensemble $\{p_k, {\cal E}(\varrho_k)\}$. \item[${\boldsymbol{Q3}}$] The map is completely positive (CP), i.e. besides being positive it is such that, if we introduce an additional system, any map of the form ${\cal E} \otimes \mathbbm I $ acting on the extended Hilbert space is also positive. In other words, we ask that the map be physically meaningful also when acting on a portion of a larger, composite, system. As we will see, this request is not trivial at all, i.e. there exist maps that are positive but not completely positive. \end{itemize} \subsection{The operator-sum representation} This section is devoted to stating and proving a theorem showing that a map is a quantum operation if and only if it is the partial trace of a unitary evolution in a larger Hilbert space, and provides a convenient form, the so-called Kraus decomposition or operator-sum representation \cite{Pre,nota}, to express its action on quantum states. \begin{theorem}[Kraus] A map ${\cal E}$ is a quantum operation, i.e. it satisfies the requirements $\boldsymbol{Q1}$-$\boldsymbol{Q3}$, {if and only if} it is the partial trace of a unitary evolution on a larger Hilbert space with factorized initial condition or, equivalently, it possesses a Kraus decomposition, i.e. its action may be represented as ${\cal E}(\varrho) = \sum_k M_k \varrho M^\dag_k$ where $\{M_k\}$ is a set of operators satisfying $\sum_k M_k^\dag M_k=\mathbbm I$. \end{theorem} \begin{proof} The first part of the theorem consists in assuming that ${\cal E}(\varrho)$ is the partial trace of a unitary operation in a larger Hilbert space and proving that it has a Kraus decomposition and, in turn, that it satisfies the requirements $\boldsymbol{Q1}$-$\boldsymbol{Q3}$. Let us consider a physical system $A$ prepared in the quantum state $\varrho_{\scriptscriptstyle A}$ and another system $B$ prepared in the state $\varrho_{\scriptscriptstyle B}$. $A$ and $B$ interact through the unitary operation $U$ and we are interested in describing the effect of this interaction on the system $A$ only, i.e. we are looking for the expression of the mapping $\varrho_{\scriptscriptstyle A} \rightarrow \varrho^\prime_{\scriptscriptstyle A} = {\cal E}(\varrho_{\scriptscriptstyle A})$ induced by the interaction.
This may be obtained by performing the partial trace over the system $B$ of the global $AB$ system after the interaction; in formulas, \begin{align}\notag {\cal E}(\varrho_{\scriptscriptstyle A}) &= \hbox{Tr}_{\scriptscriptstyle B} \left[U\, \varrho_{\scriptscriptstyle A} \otimes \varrho_{\scriptscriptstyle B} U^\dag \right] = \sum_{s} p_s \hbox{Tr}_{\scriptscriptstyle B} \left[U\, \varrho_{\scriptscriptstyle A} \otimes |\theta_s\rangle\langle\theta_s| U^\dag \right] \\ &= \sum_{st} p_s \langle\varphi_t | U | \theta_s \rangle \, \varrho_{\scriptscriptstyle A} \langle \theta_s | U^\dag | \varphi_t\rangle = \sum_k M_k \,\varrho_{\scriptscriptstyle A} M^\dag_k \label{add1} \end{align} where we have introduced the operators $M_{k} =\sqrt{p_s}\langle\varphi_t|U|\theta_s\rangle$, with the polyindex $k\equiv st$ obtained by a suitable ordering, and used the spectral decomposition of the density operator $\varrho_{\scriptscriptstyle B} = \sum_s p_s |\theta_s\rangle\langle\theta_s |$. Actually, we could also have assumed the additional system to be in a pure state $|\omega_{\scriptscriptstyle B}\rangle$, since this is always possible upon invoking a purification, i.e. by suitably enlarging the Hilbert space. In this case the elements of the Kraus decomposition of our map would have been written as $\langle\varphi_t| U|\omega_{\scriptscriptstyle B}\rangle $. The set of operators $\{M_k\}$ satisfies the relation $$\sum_k M_k^\dag M_k = \sum_{st} p_s \langle \theta_s | U^\dag | \varphi_t\rangle\langle\varphi_t | U | \theta_s \rangle = \sum_{s} p_s \langle \theta_s | U^\dag U | \theta_s \rangle =\mathbbm I\,.$$ Notice that the assumption of a factorized initial state is crucial to prove the existence of a Kraus decomposition and, in turn, the complete positivity. In fact, the dynamical map ${\cal E}(\varrho_{\scriptscriptstyle A}) = \hbox{Tr}_{\scriptscriptstyle B} \left[U\, \varrho_{\scriptscriptstyle \!AB}\, U^\dag\right]$ resulting from the partial trace of an initially correlated preparation $\varrho_{\scriptscriptstyle \!AB}$ need not be so. In this case, the dynamics can properly be defined only on a subset of initial states of the system. Of course, the map can be extended to all possible initial states by linearity, but the extension may not be physically realizable, i.e. it may fail to be completely positive, or even positive \cite{PP94}. \par We now proceed to show that for maps of the form (\ref{add1}) (Kraus decomposition) the properties $\boldsymbol{Q1}$-$\boldsymbol{Q3}$ hold. Preservation of trace and of the Hermitian character, as well as linearity, are guaranteed by the very form of the map.
Positivity is also ensured, since for any positive operator $O_{\scriptscriptstyle A}\in L(H_{\scriptscriptstyle A})$ and any vector $|\varphi_{\scriptscriptstyle A}\rangle \in H_{\scriptscriptstyle A}$ we have \begin{align} \langle\varphi_{\scriptscriptstyle A}| {\cal E}(O_{\scriptscriptstyle A})|\varphi_{\scriptscriptstyle A}\rangle &= \langle\varphi_{\scriptscriptstyle A}|\sum_k M_k\, O_{\scriptscriptstyle A} M_k^\dag|\varphi_{\scriptscriptstyle A}\rangle = \langle\varphi_{\scriptscriptstyle A}|\hbox{Tr}_{\scriptscriptstyle B}[ U\, O_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B}\, U^\dag]|\varphi_{\scriptscriptstyle A}\rangle \notag \\ &= \hbox{Tr}_{{\scriptscriptstyle A}{\scriptscriptstyle B}}[U^\dag|\varphi_{\scriptscriptstyle A}\rangle\langle\varphi_{\scriptscriptstyle A}| \otimes \mathbbm I\, U\, O_{\scriptscriptstyle A}\otimes\varrho_{\scriptscriptstyle B}\, ] \geq 0 \quad\forall\, O_{\scriptscriptstyle A}, \forall\, \varrho_{\scriptscriptstyle B}, \forall\, |\varphi_{\scriptscriptstyle A}\rangle \,.\notag\end{align} It therefore remains to be proved that the map is completely positive. To this aim, let us consider a positive operator $O_{\scriptscriptstyle \!AC} \in L(H_{\scriptscriptstyle A}\otimes H_{\scriptscriptstyle C})$ and a generic state $|\psi_{{\scriptscriptstyle \!AC}}\rangle\rangle$ on the same enlarged space, and define $$|\omega_k \rangle\rangle= \frac1{\sqrt{N_k}}M_k \otimes \mathbbm I_{\scriptscriptstyle C} |\psi_{{\scriptscriptstyle \!AC}}\rangle\rangle \qquad N_k=\langle\langle\psi_{{\scriptscriptstyle \!AC}}| M_k^\dag M_k \otimes \mathbbm I_{\scriptscriptstyle C}|\psi_{{\scriptscriptstyle \!AC}}\rangle\rangle\geq0\,.$$ Since $O_{\scriptscriptstyle \!AC}$ is positive we have $$ \langle\langle\psi_{{\scriptscriptstyle \!AC}}| (M_k^\dag\otimes \mathbbm I_{\scriptscriptstyle C}) \, O_{\scriptscriptstyle \!AC} (M_k \otimes \mathbbm I_{\scriptscriptstyle C}) | \psi_{{\scriptscriptstyle \!AC}}\rangle\rangle = N_k \langle\langle\omega_k | O_{\scriptscriptstyle \!AC} | \omega_k\rangle\rangle \geq 0 $$ and therefore $\langle\langle\psi_{{\scriptscriptstyle \!AC}}|{\cal E} \otimes \mathbbm I_{\scriptscriptstyle C} (O_{\scriptscriptstyle \!AC}) |\psi_{{\scriptscriptstyle \!AC}}\rangle\rangle = \sum_k N_k \langle\langle\omega_k | O_{\scriptscriptstyle \!AC} | \omega_k\rangle\rangle \geq 0 $, which proves that for any positive $O_{\scriptscriptstyle \!AC}$ the operator ${\cal E} \otimes \mathbbm I_{\scriptscriptstyle C} (O_{\scriptscriptstyle \!AC})$ is also positive, for any choice of $H_{\scriptscriptstyle C}$, i.e. ${\cal E}$ is a CP-map. \par Let us now prove the second part of the theorem, i.e. we consider a map ${\cal E}:L(H_{\scriptscriptstyle A}) \rightarrow L(H_{\scriptscriptstyle A})$ satisfying the requirements $\boldsymbol{Q1}$-$\boldsymbol{Q3}$ and show that it may be written in the Kraus form and, in turn, that its action may be obtained as the partial trace of a unitary evolution in a larger Hilbert space. We start by considering the state $|\varphi\rangle\rangle = \frac{1}{\sqrt{d}}\sum_k |\theta_k\rangle\otimes |\theta_k\rangle\in H_{\scriptscriptstyle A}\otimes H_{\scriptscriptstyle A}$ and define the operator $\varrho_{{\scriptscriptstyle A}\sma} = {\cal E} \otimes \mathbbm I (|\varphi\rangle\rangle\langle\langle \varphi|)$.
From the complete positivity and trace-preserving properties of ${\cal E}$ we have that $\hbox{Tr}[\varrho_{{\scriptscriptstyle A}\sma}]=1$ and $\varrho_{{\scriptscriptstyle A}\sma}\geq 0$, i.e. $\varrho_{{\scriptscriptstyle A}\sma}$ is a density operator. Besides, this establishes a one-to-one correspondence between maps $L(H_{\scriptscriptstyle A}) \rightarrow L(H_{\scriptscriptstyle A})$ and density operators in $L(H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle A})$, which may be proved as follows: for any $|\psi\rangle=\sum_k \psi_k| \theta_k\rangle\in H_{\scriptscriptstyle A}$ define $|\tilde\psi\rangle=\sum_k \psi_k^* | \theta_k\rangle$ and notice that $$ \langle\tilde\psi| \varrho_{{\scriptscriptstyle A}\sma} | \tilde\psi\rangle = \frac1d \langle\tilde\psi| \sum_{kl}{\cal E}(|\theta_k \rangle\langle\theta_l|) \otimes |\theta_k\rangle\langle\theta_l| \,|\tilde\psi\rangle =\frac1d \sum_{kl} \psi_l^* \psi_k\, {\cal E}(|\theta_k\rangle\langle\theta_l|) = \frac1d\,{\cal E}(|\psi\rangle\langle\psi |)\,, $$ where we used linearity to obtain the last equality. Then define the operators $M_k$ through $M_k|\psi\rangle = \sqrt{d p_k}\langle\tilde\psi | \omega_k\rangle\rangle$, where $|\omega_k\rangle\rangle$ are the eigenvectors of $\varrho_{{\scriptscriptstyle A}\sma}=\sum_k p_k|\omega_k\rangle\rangle\langle\langle\omega_k| $: each $M_k$ is a linear operator on $H_{\scriptscriptstyle A}$ and we have $$ \sum_k M_k |\psi\rangle\langle\psi| M_k^\dag = d \sum_k p_k \langle\tilde\psi|\omega_k\rangle\rangle\langle\langle\omega_k|\tilde\psi\rangle = d \langle\tilde\psi|\varrho_{{\scriptscriptstyle A}\sma}|\tilde\psi\rangle = {\cal E}(|\psi\rangle\langle\psi |) $$ for all pure states. Using linearity again, we have that ${\cal E}(\varrho) = \sum_k M_k \varrho M^\dag_k$ also for any mixed state. It remains to be proved that a unitary extension exists, i.e. that for any map on $L(H_{\scriptscriptstyle A})$ which satisfies $\boldsymbol{Q1}$-$\boldsymbol{Q3}$, and thus possesses a Kraus decomposition, there exist: i) a Hilbert space $H_{\scriptscriptstyle B}$, ii) a state $|\omega_{\scriptscriptstyle B}\rangle \in H_{\scriptscriptstyle B}$, iii) a unitary $U\in L(H_{\scriptscriptstyle A}\otimes H_{\scriptscriptstyle B})$ such that ${\cal E}(\varrho_{\scriptscriptstyle A}) =\hbox{Tr}_{\scriptscriptstyle B} [U\,\varrho_{\scriptscriptstyle A} \otimes |\omega_{\scriptscriptstyle B}\rangle\langle\omega_{\scriptscriptstyle B}| U^\dag]$ for any $\varrho_{\scriptscriptstyle A} \in L(H_{\scriptscriptstyle A})$. To this aim we proceed as we did for the proof of the Naimark theorem, i.e. we take an arbitrary state $|\omega_{\scriptscriptstyle B}\rangle \in H_{\scriptscriptstyle B}$, and define an operator $U$ through its action on the generic vector $|\varphi_{\scriptscriptstyle A}\rangle \otimes |\omega_{\scriptscriptstyle B} \rangle\in H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle B}$, $ U\,|\varphi_{\scriptscriptstyle A}\rangle \otimes |\omega_{\scriptscriptstyle B} \rangle = \sum_k M_k\,|\varphi_{\scriptscriptstyle A}\rangle \otimes |\theta_k\rangle$, where the $|\theta_k\rangle$'s are a basis for $H_{\scriptscriptstyle B}$.
The operator $U$ preserves the scalar product, \begin{align}\notag \langle\langle \omega_{\scriptscriptstyle B},\varphi_{\scriptscriptstyle A}^\prime | U^\dag U| \varphi_{\scriptscriptstyle A},\omega_{\scriptscriptstyle B} \rangle\rangle = \sum_{k k ^\prime} \langle \varphi_{\scriptscriptstyle A}^\prime | M_{k^\prime}^\dag M_k|\varphi_{\scriptscriptstyle A}\rangle \langle \theta_{k^\prime}|\theta_k\rangle = \sum_{k} \langle\varphi_{\scriptscriptstyle A}^\prime | M_{k}^\dag M_k|\varphi_{\scriptscriptstyle A}\rangle = \langle\varphi_{\scriptscriptstyle A}^\prime | \varphi_{\scriptscriptstyle A}\rangle \end{align} and so it is an isometry on the subspace $H_{\scriptscriptstyle A}\otimes {\rm span}\{|\omega_{\scriptscriptstyle B}\rangle\}$. Besides, it may be extended to a full unitary operator on the global Hilbert space $H_{\scriptscriptstyle A}\otimes H_{\scriptscriptstyle B}$, e.g. by completing the set of its orthonormal images in an arbitrary way. Then, writing $\varrho_{\scriptscriptstyle A}=\sum_s p_s |\psi_s\rangle\langle\psi_s|$, for any $\varrho_{\scriptscriptstyle A}$ in $L(H_{\scriptscriptstyle A})$ we have \begin{align}\notag \hbox{Tr}_{\scriptscriptstyle B} \left[ U \varrho_{\scriptscriptstyle A} \otimes |\omega_{\scriptscriptstyle B}\rangle\langle\omega_{\scriptscriptstyle B} |\, U^\dag \right]&= \sum_s p_s\, \hbox{Tr}_{\scriptscriptstyle B} \left[ U |\psi_s\rangle\langle\psi_s | \otimes |\omega_{\scriptscriptstyle B}\rangle\langle\omega_{\scriptscriptstyle B} |\, U^\dag\right] \\ \notag &= \sum_{skk^\prime} p_s\, \hbox{Tr}_{\scriptscriptstyle B} \left[ M_k |\psi_s\rangle\langle\psi_s |\,M_{k^\prime}^\dag \otimes |\theta_k\rangle\langle\theta_{k^\prime}| \right] \\ \notag &= \sum_{sk} p_s\, M_k |\psi_s\rangle\langle\psi_s |\,M_{k}^\dag = \sum_{k} M_k \varrho_{\scriptscriptstyle A} M_{k}^\dag \qquad \qed \end{align} \end{proof} The Kraus decomposition of a quantum operation generalizes the unitary description of quantum evolution. Unitary maps are, of course, included and correspond to maps whose Kraus decomposition contains a single element. The set of quantum operations constitutes a semigroup, i.e. the composition of two quantum operations is still a quantum operation: $${\cal E}_2 ({\cal E}_1(\varrho)) = \sum_{k_2} M^{(2)}_{k_2} {\cal E}_1(\varrho) M^{(2)\dag}_{k_2}= \sum_{k_1k_2} M^{(2)}_{k_2} M^{(1)}_{k_1} \varrho\, M^{(1)\dag}_{k_1} M^{(2)\dag}_{k_2} =\sum_{\boldsymbol{k}} \boldsymbol{M}_{\boldsymbol{k}} \varrho \boldsymbol{M}_{\boldsymbol{k}}^\dag\,, $$ where we have introduced the polyindex $\boldsymbol{k}\equiv k_1 k_2$ and the operators $\boldsymbol{M}_{\boldsymbol{k}}=M^{(2)}_{k_2} M^{(1)}_{k_1}$. Normalization is easily proved, since $\sum_{\boldsymbol{k}} \boldsymbol{M}_{\boldsymbol{k}}^\dag \boldsymbol{M}_{\boldsymbol{k}} = \sum_{k_1k_2} M^{(1)\dag}_{k_1} M^{(2)\dag}_{k_2} M^{(2)}_{k_2} M^{(1)}_{k_1}=\mathbbm I$. On the other hand, the existence of an inverse is not guaranteed: actually, only unitary operations are invertible (with a CP inverse). \par The Kraus theorem also allows us to have a unified picture of quantum evolution, whether due to an interaction or to a measurement. In fact, the modification of the state in both processes is described by a set of operators $M_k$ satisfying $\sum_k M^\dag_k M_k = \mathbbm I$. In this framework, the Kraus operators of a measurement are what we have referred to as the detection operators of a POVM.
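The two defining properties, trace preservation and complete positivity, are straightforward to check numerically for a given set of Kraus operators. The following minimal sketch (our addition; the amplitude-damping operators used here are a standard textbook example, not one discussed in the text, and $\gamma$ is an arbitrary choice) verifies the normalization condition and the positivity of the operator ${\cal E}\otimes\mathbbm I(|\varphi\rangle\rangle\langle\langle\varphi|)$ appearing in the proof above:
\begin{verbatim}
import numpy as np

gamma = 0.3                                  # damping probability (arbitrary)
M = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),  # amplitude-damping Kraus pair
     np.array([[0, np.sqrt(gamma)], [0, 0]])]

# Trace preservation: sum_k M_k^dag M_k = I
print(np.allclose(sum(Mk.conj().T @ Mk for Mk in M), np.eye(2)))   # True

# Complete positivity via (E (x) I)(|phi>><<phi|), as in the proof above
d = 2
phi = np.eye(d).reshape(d * d) / np.sqrt(d)  # |phi>> = sum_k |kk>/sqrt(d)
choi = sum(np.kron(Mk, np.eye(d)) @ np.outer(phi, phi)
           @ np.kron(Mk, np.eye(d)).conj().T for Mk in M)
print(np.all(np.linalg.eigvalsh(choi).round(8) >= 0))              # True
\end{verbatim}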
\subsubsection{The dual map and the unitary equivalence} Upon writing the generic expectation value on the evolved state ${\cal E}(\varrho)$ and exploiting both linearity and the circularity of the trace we have $$\langle X\rangle = \hbox{Tr} [{\cal E}(\varrho )\, X]= \sum_k \hbox{Tr}[M_k \varrho M_k^\dag \, X] = \sum_k \hbox{Tr}[\varrho\, M_k^\dag X M_k] =\hbox{Tr}[\varrho\, {\cal E}^\vee (X)]\,,$$ where we have defined the dual map ${\cal E}^\vee (X)=\sum_k M_k^\dag X M_k$, which represents the ``Heisenberg picture'' for quantum operations. Notice also that the elements of the Kraus decomposition $M_k=\langle\varphi_k | U|\omega_{\scriptscriptstyle B}\rangle$ depend on the choice of the basis used to perform the partial trace. A change of basis cannot have a physical effect, and this means that the set of operators $$N_k=\langle\theta_k| U|\omega_{\scriptscriptstyle B}\rangle = \sum_s \langle\theta_k|\varphi_s\rangle\langle\varphi_s| U|\omega_{\scriptscriptstyle B}\rangle = \sum_s V_{ks} M_s \,,$$ where the unitary $V\in L(H_{\scriptscriptstyle B})$ describes the change of basis, and the original set $M_k$ actually describe the same quantum operation, i.e. $\sum_k N_k \varrho N_k^\dag=\sum_k M_k \varrho M_k^\dag$, $\forall \varrho$. The same can easily be proved for the system $B$ prepared in a mixed state. The origin of this degree of freedom lies in the fact that if the unitary $U$ on $H_{\scriptscriptstyle A} \otimes H_{\scriptscriptstyle B}$ and the state $|\omega_{\scriptscriptstyle B}\rangle\in H_{\scriptscriptstyle B}$ realize an extension for the map ${\cal E}:L(H_{\scriptscriptstyle A}) \rightarrow L(H_{\scriptscriptstyle A})$, then any unitary of the form $(\mathbbm I \otimes V) U$ is a unitary extension too, with the same ancilla state. A quantum operation is thus identified by an equivalence class of Kraus decompositions. An interesting corollary is that any quantum operation on a given Hilbert space of dimension $d$ may be generated by a Kraus decomposition containing at most $d^2$ elements, i.e. given a Kraus decomposition ${\cal E}(\varrho) = \sum_k M_k \varrho M_k^\dag$ with an arbitrary number of elements, one may exploit the unitary equivalence and find another representation ${\cal E}(\varrho) = \sum_k N_k \varrho N_k^\dag$ with at most $d^2$ elements. \subsection{The random unitary map and the depolarizing channel} A simple example of a quantum operation is the random unitary map, defined by the Kraus decomposition ${\cal E}(\varrho) = \sum_k p_k U_k \varrho U^\dag_k$, i.e. $M_k=\sqrt{p_k}\, U_k$ and $U_k^\dag U_k=\mathbbm I$. This map may be seen as the evolution resulting from the interaction of our system with another system, of dimension equal to the number of elements in the Kraus decomposition of the map, via the unitary $V$ defined by $V|\psi_{\scriptscriptstyle A}\rangle\otimes|\omega_{\scriptscriptstyle B}\rangle=\sum_k \sqrt{p_k}\, U_k |\psi_{\scriptscriptstyle A}\rangle\otimes|\theta_k\rangle$, $|\theta_k\rangle$ being a basis for $H_{\scriptscriptstyle B}$ which includes $|\omega_{\scriptscriptstyle B}\rangle$. If ``we do not look'' at the system $B$ and trace out its degrees of freedom, the evolution of system $A$ is governed by the random unitary map introduced above. \begin{exercise} Prove explicitly the unitarity of $V$. \end{exercise} The operator-sum representation of quantum evolutions has been introduced, and finds its natural application, in the description of propagation in noisy channels, i.e.
the evolution resulting from the interaction of the system of interest with an external environment, which generally introduces noise into the system, degrading its coherence. As an example, let us consider a qubit system (say, the polarization of a photon), on which we have encoded binary information according to a suitable coding procedure, traveling from a sender to a receiver. The propagation needs a physical support (say, an optical fiber) and this unavoidably leads us to consider possible perturbations of our qubit, due to the interaction with the environment. The resulting open-system dynamics is usually governed by a master equation, i.e. the equation obtained by partially tracing the Schroedinger (von Neumann) equation governing the dynamics of the global system, and the solution is expressed in the form of a CP-map. For a qubit $Q$ in a noisy environment, a quite general description of the detrimental effects of the environment is the so-called depolarizing channel \cite{nie00}, which is described by the Kraus operators $M_0 = \sqrt{1-\gamma}\,\sigma_0$, $M_k=\sqrt{\gamma/3}\,\sigma_k$, $k=1,2,3$, i.e. $${\cal E}(\varrho) = (1-\gamma) \varrho + \frac{\gamma}{3} \sum_k \sigma_k\, \varrho\,\sigma_k \qquad 0\leq \gamma\leq 1\,.$$ The depolarizing channel may be seen as the evolution of the qubit due to the interaction with a four-dimensional system through the unitary $$V|\psi_{\scriptscriptstyle Q}\rangle\otimes|\omega_{\scriptscriptstyle E}\rangle =\sqrt{1-\gamma} |\psi_{\scriptscriptstyle Q}\rangle\otimes|\omega_{\scriptscriptstyle E}\rangle + \sqrt{\frac{\gamma}3} \sum_{k=1}^3 \sigma_k |\psi_{\scriptscriptstyle Q}\rangle\otimes|\theta_k\rangle\,,$$ $|\theta_k\rangle$ being a basis which includes $|\omega_{\scriptscriptstyle E}\rangle$. From the practical point of view, the map describes a situation in which, independently of the underlying physical mechanism, we have a probability $\gamma/3$ that a perturbation described by a Pauli matrix is applied to the qubit. If we apply $\sigma_1$ we have the so-called spin-flip, i.e. the exchange $|0\rangle \leftrightarrow |1\rangle$, whereas if we apply $\sigma_3$ we have the phase-flip, and for $\sigma_2$ we have a specific combination of the two effects. Since for any state of a qubit $\varrho + \sum_k \sigma_k\varrho\sigma_k= 2\mathbbm I$, the action of the depolarizing channel may be written as $${\cal E}(\varrho) = (1-\gamma) \varrho + \frac{\gamma}{3} (2 \mathbbm I -\varrho) = \frac23 \gamma \mathbbm I + (1- \frac43 \gamma )\varrho = p\varrho + (1-p) \frac{\mathbbm I}2\,,$$ where $p=1-\frac43 \gamma$, i.e. $-\frac13\leq p\leq 1$. In other words, the original state $\varrho$ is sent into a linear combination of itself and the maximally mixed state $\frac{\mathbbm I}2$, also referred to as the depolarized state. \begin{exercise} Express the generic qubit state in the Bloch representation and explicitly write the effect of the depolarizing channel on the Bloch vector. \end{exercise} \begin{exercise} Show that the purity of a qubit cannot increase under the action of the depolarizing channel. \end{exercise}
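The Bloch-vector contraction implied by the last expression is also easy to verify numerically; the following minimal sketch (our addition; the value of $\gamma$ and the input Bloch vector are arbitrary choices) checks that each Bloch component is scaled by $p=1-\frac43\gamma$:
\begin{verbatim}
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]], dtype=complex)]

gamma = 0.6                                   # arbitrary strength, 0 <= gamma <= 1
E = lambda r: (1 - gamma) * r + gamma / 3 * sum(sk @ r @ sk for sk in s)

r = np.array([0.3, -0.5, 0.4])                # Bloch vector of the input state
rho = (np.eye(2) + sum(ri * si for ri, si in zip(r, s))) / 2

rho_out = E(rho)
r_out = np.array([np.trace(rho_out @ sk).real for sk in s])
print(r_out / r)                              # each component: p = 1 - 4*gamma/3 = 0.2
\end{verbatim}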
\subsection{Transposition and partial transposition} The transpose $T(X)=X^{\scriptscriptstyle T}$ of an operator $X$ is the conjugate of its adjoint, $X^{\scriptscriptstyle T} = (X^\dag)^* = (X^*)^\dag$. Upon the choice of a basis we have $X=\sum_{nk} X_{nk} |\theta_n\rangle\langle\theta_k |$ and thus $X^{\scriptscriptstyle T}=\sum_{nk} X_{nk} |\theta_k\rangle\langle\theta_n | =\sum_{nk} X_{kn} |\theta_n\rangle\langle\theta_k |$. Transposition changes neither the trace of an operator nor its eigenvalues. Thus it transforms density operators into density operators: $\hbox{Tr}[\varrho]=\hbox{Tr}[\varrho^{\scriptscriptstyle T}]=1$ and $\varrho^{\scriptscriptstyle T} \geq 0$ if $\varrho\geq 0$. As a positive, trace-preserving map it is a candidate to be a quantum operation. On the other hand, we will show by a counterexample that it fails to be completely positive, and thus it does not correspond to a physically admissible quantum operation. Let us consider a bipartite system formed by two qubits prepared in the state $|\varphi\rangle\rangle=\frac1{\sqrt{2}}\left(\, |00\rangle\rangle +|11\rangle\rangle\right)$. We denote by $\varrho^\tau = \mathbbm I \otimes T (\varrho)$ the partial transpose of $\varrho$, i.e. the operator obtained by the application of the transposition map to one of the two qubits. We have \begin{align} \notag \big(|\varphi\rangle\rangle\langle \langle\varphi|\big)^\tau &= \frac12 \left( \begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{array} \right)^\tau \\ \notag & =\frac12 \Big( |0\rangle\langle 0| \otimes|0\rangle\langle 0| + |1\rangle\langle 1| \otimes|1\rangle\langle 1| + |0\rangle\langle 1| \otimes|0\rangle\langle 1| + |1\rangle\langle 0| \otimes|1\rangle\langle 0| \Big)^\tau \\ \notag & =\frac12 \Big( |0\rangle\langle 0| \otimes|0\rangle\langle 0| + |1\rangle\langle 1| \otimes|1\rangle\langle 1| + |0\rangle\langle 1| \otimes|1\rangle\langle 0| + |1\rangle\langle 0| \otimes|0\rangle\langle 1| \Big) \\ \notag & =\frac12 \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{array} \right) \end{align} Using the last expression it is straightforward to evaluate the eigenvalues of $\varrho^\tau$, which are $+\frac12$ (with multiplicity three) and $-\frac12$. In other words, $\mathbbm I \otimes T$ is not a positive map and the transposition is not completely positive. Notice that for a factorized state of the form $\varrho_{\scriptscriptstyle \!AB}=\varrho_{\scriptscriptstyle A} \otimes \varrho_{\scriptscriptstyle B}$ we have $\mathbbm I \otimes T (\varrho_{\scriptscriptstyle \!AB}) = \varrho_{\scriptscriptstyle A} \otimes \varrho_{\scriptscriptstyle B}^{\scriptscriptstyle T} \geq 0$, i.e. partial transposition preserves positivity in this case. \begin{exercise} Prove that transposition is not a CP-map by its action on any state of the form $ |\varphi\rangle\rangle = \frac1{\sqrt{d}} \sum_k |\varphi_k\rangle\otimes |\theta_k\rangle$. Hint: the operator $\mathbbm I \otimes T (|\varphi\rangle\rangle\langle\langle\varphi|)$ is proportional to the so-called swap operator $E$, which ``exchanges'' states as $E(|\psi\rangle_{\scriptscriptstyle A}\otimes|\varphi\rangle_{\scriptscriptstyle B}) = |\varphi\rangle_{\scriptscriptstyle A} \otimes |\psi\rangle_{\scriptscriptstyle B}$. \end{exercise}
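The counterexample above is immediate to reproduce numerically; the following sketch (our addition) builds the partially transposed Bell state and recovers the spectrum computed in the text:
\begin{verbatim}
import numpy as np

d = 2
phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)  # (|00>> + |11>>)/sqrt(2)
rho = np.outer(phi, phi)

# Partial transposition on the second qubit: transpose its two indices
rho_tau = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d*d, d*d)
print(np.linalg.eigvalsh(rho_tau))   # [-0.5  0.5  0.5  0.5]: not positive
\end{verbatim}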
\section{Conclusions} In this tutorial, we have addressed the postulates of quantum mechanics about states, measurements and operations. We have reviewed their modern formulation and introduced the basic mathematical tools: density operators, POVMs, detection operators and CP-maps. We have shown how they provide a suitable framework to describe quantum systems in interaction with their environment, and with any kind of measuring and processing devices. The connection with the standard formulation has been investigated in detail, building upon the concept of purification and the theorems of Naimark and Stinespring/Kraus-Choi-Sudarshan. \par The framework and the tools illustrated in this tutorial are suitable for the purposes of quantum information science and technology, a field which has fostered new experiments and novel views on the conceptual foundations of quantum mechanics, but has so far had little impact on the way quantum mechanics is taught. We hope to contribute to the dissemination of these notions to a larger audience, in the belief that they are useful for several other fields, from condensed matter physics to quantum biology. \begin{acknowledgement} I'm grateful to Konrad Banaszek, Alberto Barchielli, Maria Bondani, Mauro D'Ariano, Ivo P. Degiovanni, Marco Genoni, Marco Genovese, Paolo Giorda, Chiara Macchiavello, Sabrina Maniscalco, Alex Monras, Stefano Olivares, Jyrki Piilo, Alberto Porzio, Massimiliano Sacchi, Ole Steuernagel, and Bassano Vacchini for the interesting and fruitful discussions about foundations of quantum mechanics and quantum optics over the years. I would also like to thank Gerardo Adesso, Alessandra Andreoni, Rodolfo Bonifacio, Ilario Boscolo, Vlado Buzek, Berge Englert, Zdenek Hradil, Fabrizio Illuminati, Ludovico Lanz, Luigi Lugiato, Paolo Mataloni, Mauro Paternostro, Mladen Pavi\v{c}i\'{c}, Francesco Ragusa, Mario Rasetti, Mike Raymer, Jarda \v{R}eh\'{a}\v{c}ek, Salvatore Solimeno, and Paolo Tombesi. \end{acknowledgement}
\section{Introduction} The universe appears to consist of approximately 30\% dark matter, which clusters and drives the formation of large-scale structure in the universe, and 70\% dark energy, which drives the late-time acceleration of the universe (see Ref. \cite{Sahni} for a recent review, and references therein). Numerous models for the dark energy have been proposed; in one class of models, the dark energy is simply taken to be a barotropic fluid, in which the pressure $p$ and energy density $\rho$ are related by \begin{equation} p = f(\rho). \end{equation} One of the first cases to be investigated in detail was the equation of state \begin{equation} \label{constantw} p= w\rho, \end{equation} where $w$ is a constant \cite{turner,caldwelletal}. For an equation of state of this form, the density of the dark energy depends on the scale factor, $a$, as \begin{equation} \rho \propto a^{-3(w+1)}. \end{equation} A more complex equation of state arises in the Chaplygin gas model \cite{Kamenshchik,Bilic}, for which the equation of state has the form \begin{equation} p = - \frac{A}{\rho}, \end{equation} where $A$ is a constant, leading to a density evolution of the form \begin{equation} \rho = \sqrt{A + \frac{B}{a^6}}. \end{equation} In this model, the density interpolates between dark-matter-like evolution at early times and dark-energy-like (i.e., constant density) evolution at late times. Thus, the Chaplygin gas model can serve as a unified model of dark matter and dark energy. (The dark matter sector of this model may have problems with structure formation \cite{Sandvik,Bean}, although see the discussion in Refs. \cite{beca1,beca2} for an alternate viewpoint). The Chaplygin gas model was generalized by Bento et al. \cite{Bento} to an equation of state of the form \begin{equation} \label{genC1} p_{gcg} = -\frac{A}{\rho_{gcg}^{\alpha}}, \end{equation} where $A$ and $\alpha$ are constants. This equation of state leads to the density evolution \begin{equation} \label{genC} \rho_{gcg} = \left[A + \frac{B}{a^{3(1+\alpha)}}\right]^{1/(1+\alpha)}. \end{equation} Again, the density evolution in the generalized Chaplygin gas models changes from $\rho \propto a^{-3}$ at early times to $\rho = constant$ at late times. Note that such an equation of state can also be modelled as a dissipative matter fluid where the dissipative pressure is proportional to the energy density; a number of exact solutions for this model have been discussed in Refs. \cite{john1,john2}. In equation (\ref{genC1}), one normally takes $\alpha > -1$, while $A$ is chosen to be sufficiently small that $w > -1$ for the generalized Chaplygin gas. In this paper, we relax these constraints and examine Chaplygin gas models described by equation (\ref{genC1}) for which $\alpha$ and $w$ can lie outside this range. In this case, the Chaplygin gas no longer serves as a unified model for dark matter and dark energy, but it can be taken to be a model for dark energy alone. This sort of generalization for the special case of $\alpha = 1$ was previously undertaken by Khalatnikov \cite{Kh}. The models presented here can be derived as special cases of the more generic models examined in Refs. \cite{Gonzalez,CL}, and can also arise in the context of $k$-essence models \cite{Chimento}. These possible generalizations are mentioned in Ref. \cite{Lopez}, although only the case $\alpha > -1$, $w < -1$ is explored in detail.
In addition to noting some interesting features of these models, our main new result is a set of observational constraints on these models. \section{GENERALIZED CHAPLYGIN GAS AS A DARK ENERGY COMPONENT} The equation of state for the generalized Chaplygin gas (GCG) is given by equation (\ref{genC1}). In order to produce a negative pressure and give the currently observed acceleration of the universe, equation (\ref{genC1}) must have $A > 0$, so we will confine our attention to this case. We assume further that $\alpha \ne -1$, since for $\alpha = -1$, the GCG is equivalent to the dark energy fluid described by equation (\ref{constantw}). Note that a particular choice of $A$ and $\alpha$ does not uniquely determine the GCG model; one also needs to specify, for example, the value of $\rho$ at a particular redshift. Integration of the energy conservation equation $T^{\mu}_{\nu;\mu}=0$ for $\alpha \neq -1$ results in \begin{equation} \rho_{gcg} = \rho_{gcg0}[A_s + (1-A_s)(1+z)^{3(1+\alpha)}]^{1/(1+\alpha)}, \end{equation} where \begin{equation} A_s = A/\rho_{gcg0}^{1+\alpha}, \end{equation} with $\rho_{gcg0}$ being the present value of $\rho_{gcg}$. The choice of $A_s$ and $\alpha$ {\it does} uniquely specify the GCG model. \begin{figure}[t!] \epsfxsize 3.2in \epsfbox{rho.eps} \caption{The energy density $\rho$ of the generalized Chaplygin gas (normalized to the present density $\rho_0$) as a function of the scale factor $a$ (taken to be $1$ at present). Solid curve is Case 2(a) (early phantom with $\alpha >0$: $A_{s} = 1.1, \alpha = 0.3$), dashed curve is Case 2(b) (early phantom with $-1 < \alpha < 0$: $A_{s}=1.2, \alpha=-0.95$), dashed-dot curve is Case 3 (transient GCG, with $A_{s} = 0.9, \alpha = -1.5$) and dotted curve is Case 4 (late phantom, with $A_{s} = 1.1, \alpha = -1.1$).} \end{figure} Since we are considering the GCG as a dark energy candidate, the expression for the Hubble parameter near the present becomes \begin{eqnarray} H^{2} &=& H_{0}^{2}\biggl[\Omega_{m0}(1+z)^{3} + \Omega_{gcg0}[A_s \nonumber\\ \label{H} &+& (1-A_s)(1+z)^{3(1+\alpha)}]^{1/(1+\alpha)}\biggr], \end{eqnarray} where we have assumed a normal dust-like dark matter component with present density parameter $\Omega_{m0}$, and $\Omega_{gcg0}$ is the present value of the density parameter for the GCG component. We have neglected the radiation, which makes a negligible contribution to the total density around the present epoch. We assume a flat universe: $\Omega_{gcg0}=1-\Omega_{m0}$. The equation of state for the GCG fluid is \begin{equation} w = -{A_s\over{A_s + (1-A_s)(1+z)^{3(1+\alpha)}}}. \end{equation} Taking $z=0$ in this equation, it is clear that \begin{equation} \label{w0} w_0 = -A_s, \end{equation} where $w_0$ is the present-day value of $w$ for the Chaplygin gas. Equation (\ref{w0}) gives the physical significance of $A_s$. The behavior of the GCG can vary significantly, depending on the values of $A_s$ and $\alpha$. Our aim in this paper is to explore all of these possible behaviours. We consider first two trivial cases. For $A_s = 1$, the GCG behaves exactly as a cosmological constant at all times for all values of $\alpha$. For $A_s =0$, the GCG behaves as a standard pressureless ($w=0$) dust component at all times for all values of $\alpha$. \begin{figure}[t!] \epsfxsize 3.2in \epsfbox{eos.eps} \caption{As Fig.
1, for the generalized Chaplygin gas equation of state parameter $w$, as a function of the scale factor $a$.} \end{figure} Now consider the four non-trivial cases corresponding to the pair of choices $A_s < 1$, $A_s >1$, and $\alpha < -1$, $\alpha > -1$. \vspace{5mm} Case 1) $0< A_s < 1, \alpha > -1$ ({\it Standard GCG}). As previously discussed, in this region the GCG behaves as a pressureless dust component at early times, evolving asymptotically to a de-Sitter regime ($w \rightarrow -1$) at late times. Hence the GCG in this parameter region can act as a unified model for dark energy and dark matter (UDM). This is the standard model of the generalized Chaplygin gas, previously explored in great detail \cite{Bento,gcg1,gcg2,gcg3,gcg4,beca3}. The remaining three cases correspond to ``new'' versions of the generalized Chaplygin gas. \vspace{5mm} Case 2) $A_s > 1$, $ \alpha > -1$ ({\it Early Phantom GCG}). In this case, the GCG acts as a phantom component with $w < -1$ at all times, but $w$ asymptotically approaches $-1$ at late times. Hence, one has an early phantom behaviour in this region of parameter space. In this case, $\rho_{gcg} = 0$ at $1+z_b=\left[{A_s/(A_s-1)}\right]^{1\over{3(1+\alpha)}}$ and then $\rho_{gcg}$ grows to become a constant at late times. The behavior of this model at early times depends on the value of $\alpha$. (a) For $\alpha > 0$, we have $p_{gcg} \rightarrow -\infty$ when $\rho_{gcg} \rightarrow 0$ at $z = z_b$. Hence, there is a pressure singularity at $z = z_b$, with $\ddot a \rightarrow \infty$, while the scale factor $a$ and the expansion rate $\dot a$ remain finite. This is the so-called ``sudden singularity'' previously discussed by Barrow \cite{Barrow}. In the classification scheme of Nojiri, Odintsov, and Tsujikawa \cite{NOT}, this is a Type II singularity. The GCG density $\rho$, the equation of state parameter $w$, and the deceleration parameter $q$ for this case are shown in Figs. 1-3, respectively, as solid curves. (b) For $0 > \alpha > -1$, both $\rho_{gcg}$ and $p_{gcg}$ go to zero at $z = z_b$. Although $w \rightarrow - \infty$ at $z = z_b$, there is no singularity in this case. Afterwards the GCG behaves as a growing phantom component and then asymptotically approaches the de-Sitter regime ($w \rightarrow -1$) at late times. The values of $\rho$, $w$, and $q$ are shown in this case in Figs. 1-3 as dashed curves. \begin{figure}[t!] \epsfxsize 3.2in \epsfbox{dec.eps} \caption{As Fig. 1, for the deceleration parameter $q$ as a function of the scale factor $a$, where we have taken $\Omega_{gcg0} = 0.7$ and $\Omega_{m0} = 0.3$.} \end{figure} This model has also been examined in some detail in Ref. \cite{Lopez}. Their main point of emphasis is the interesting fact that this model allows for a phantom equation of state $w < -1$, while avoiding a future singularity. We note further that for $0 > \alpha > -1$, this model is free of either past or future singularities, while still allowing for a phantom equation of state. \vspace{5mm} \begin{figure*}[htb!] \begin{center} \includegraphics[height=6.5cm]{chialpha.eps} \includegraphics[height=6.5cm]{chiom.eps} \end{center} \caption{Confidence contours in $\Omega_{m0}-A_{s}$ (left) and $\alpha-A_{s}$ (right) parameter space, marginalizing over $\alpha$ and $\Omega_{m0}$ respectively. Solid and dashed curves are the $68.3\%$ and $95\%$ confidence levels, respectively.} \end{figure*} Case 3) $0 < A_s < 1, \alpha < -1$ ({\it Transient GCG}).
In this case, one has a de-Sitter regime ($w \rightarrow -1$) at early times, while $\rho_{gcg}$ asymptotically approaches pressureless dust ($w \rightarrow 0$) at late times. The GCG density $\rho$, the GCG equation of state parameter $w$, and the deceleration parameter $q$ for this case are shown in Figs. 1-3, respectively, as dot-dash curves. If the GCG serves as dark energy in this case, the acceleration is a transient phenomenon. Models with transient acceleration are desirable from the point of view of string theory, as the existence of future horizons in an eternally-accelerating universe leads to a well-known problem in constructing the S-matrix in such models \cite{Smatrix1,Smatrix2}. Hence, a fair amount of effort has gone into constructing models in which the currently-observed acceleration is a transient phenomenon \cite{Cline,Blais,Sahni2,Bilic2}. This case for the GCG represents another such model. \vspace{5mm} Case 4) $A_s > 1, \alpha < -1$ ({\it Late Phantom GCG}). In this case $\rho_{gcg}$ starts as a constant and then grows and eventually hits a singularity in the future. The equation of state resembles that of a cosmological constant ($w \approx -1$) at early times, and then becomes phantom-like in the future. The future singularity occurs at $1+z_{s}= \left[{A_{s}/{(A_{s}-1)}}\right]^{1\over{3(\alpha+1)}}$. Note that this singularity occurs at a finite value of the scale factor, which is different from the standard phantom scenario in which the scale factor blows up simultaneously with the energy density of the dark energy. This singularity also differs from that in Case 2, as in this case, we have $\rho \rightarrow \infty$ and $p \rightarrow -\infty$ as $z \rightarrow z_s$; in the classification scheme of Ref. \cite{NOT}, this is a Type III singularity. The GCG density $\rho$, the equation of state parameter $w$, and the deceleration parameter $q$ for this case are shown in Figs. 1-3, respectively, as dotted curves. \vspace{5mm} It is well known that, under very general assumptions, dark energy arising from a scalar field cannot evolve from $w < -1$ to $w > -1$ or vice-versa \cite{Vikman}; this has been dubbed the ``phantom divide'' \cite{Hu}. The GCG model also displays a phantom divide, characterized by whether $A_s >1$ or $A_s < 1$. For all models with $A_{s} < 1$, we have $w > -1$ at all times and for all values of $\alpha$, while $A_s >1$ gives phantom behavior ($w < -1$) at all times and for all $\alpha$. Note also that all of the GCG models display asymptotic de Sitter behavior ($w \rightarrow -1$). For $\alpha < -1$, the de Sitter behavior occurs asymptotically in the past, while $\alpha > -1$ gives de Sitter behavior in the asymptotic future. These results are independent of $A_s$, which simply determines whether $w$ approaches $-1$ from above or below. Many other barotropic models have been discussed in the literature, e.g., the Van der Waals model \cite{VDW} and the wet fluid model \cite{water}. However, the barotropic model which most closely resembles the GCG models examined here is the model of Ref. \cite{Stefancic} (see also \cite{NOT}), which has the equation of state \begin{equation} \label{Stef} p = -\rho - B\rho^\beta. \end{equation} While this model is qualitatively different from the GCG models, it displays similar behavior in certain limits, particularly with regard to singularities in the evolution.
In particular, for $B>0$ and $\beta > 1$, the density $\rho$ is an increasing function of the scale factor, so that the second term in equation (\ref{Stef}) is dominant at late times. Thus, this model approaches the behavior of our Case 4 model at late times, with a similar singularity \cite{NOT,Stefancic}. \section{CONSTRAINTS FROM SUPERNOVA DATA} In this section, we will examine constraints on these various versions of the GCG from the supernova Ia data. In deriving these limits, we assume that the GCG acts purely as dark energy, and we take the dark matter to be a separate component. The observations of supernovae measure essentially the apparent magnitude $m$, which is related to the luminosity distance $d_L$ by \begin{align} m(z) = {\cal M} + 5 \log_{10} D_L(z) ~, \end{align} where \begin{align} D_L(z) \equiv {{H_0}\over{c}} d_L(z)~, \label{DL} \end{align} is the dimensionless luminosity distance and \begin{align} d_L(z)=(1 + z) d_M(z)~, \label{dL} \end{align} with $d_M(z)$ being the comoving distance given by \begin{align} d_M(z)=c \int_0^z {{1}\over{H(z')}} dz'~. \label{dm} \end{align} Also, \begin{align} {\cal M} = M + 5 \log_{10} \left({{c/H_0}\over{1~\mbox{Mpc}}}\right) + 25~, \end{align} where $M$ is the absolute magnitude. For our analysis, we consider the data set compiled by Riess {\it et al.} \cite{riess}. The total data set contains the previously published 230 data points from Tonry {\it et al.} \cite{tonry}, along with the 23 points from Barris {\it et al.} \cite{barris}. But Riess {\it et al.} have discarded various points where the classification of the supernovae was not certain or the photometry was incomplete, increasing the reliability of the sample. Ultimately the final set contains 143 points plus the 14 points discovered recently using the Hubble Space Telescope (HST), and this set of 157 points is named the ``gold'' sample by Riess {\it et al.} The data points in these samples are given in terms of the distance modulus \begin{align} \mu_{\rm obs}(z) \equiv m(z) - M(z)~ = 5 \log_{10} d_{L} +25, \end{align} where $d_{L}$ is measured in Mpc. The $\chi^2$ is calculated from \begin{align} \chi^2 = \sum_{i=1}^n \left[ {{\mu_{\rm obs}(z_i) - \mu_{\rm th}(z_i; H_{0}, c_{\alpha})}\over{\sigma_{\mu_{\rm obs}}(z_i)}} \right]^2~, \label{chisq2} \end{align} where the present-day Hubble parameter $H_{0}$ is a nuisance parameter and $c_{\alpha}$ are the model parameters. Marginalizing our likelihood functions over the nuisance parameter $H_{0}$ yields the confidence intervals in the $c_{\alpha}$ parameter space. \begin{figure}[t!] \epsfxsize 3.5in \epsfbox{chiall.eps} \caption{ The $68.3\%$ confidence contours in $\alpha-A_{s}$ parameter space for different values of $\Omega_{m0}$. Solid, dashed, dash-dot and dotted lines are for $\Omega_{m0} = 0.35, 0.30, 0.25$, and $0.20$, respectively.} \end{figure} For our purposes, we have three model parameters, $\Omega_{m0}$, $\alpha$ and $A_{s}$. For our best fit analysis, we vary the parameters in the following ranges: $\Omega_{m0}$ from $0.2$ to $0.4$, $\alpha$ from $-4$ to $4$ and $A_{s}$ from $0$ to $5$. The best fit values for the parameters in this case are: $\Omega_{m0}=0.39$, $\alpha = -3.87$ and $A_{s}=4.99$ together with $\chi^{2}_{min} = 172.79$. In Figure 4 we show the confidence contours in the $\Omega_{m0}-A_{s}$ and $\alpha-A_{s}$ parameter spaces by marginalizing over $\alpha$ and $\Omega_{m0}$, respectively.
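To make the fitting procedure concrete, a minimal sketch of the likelihood evaluation is given below. The arrays \texttt{z\_obs}, \texttt{mu\_obs}, and \texttt{sigma} are placeholders for the gold-sample data points, and the analytic minimization over a constant magnitude offset is used here as a stand-in for the full marginalization over $H_0$; both are assumptions of this sketch rather than a description of our actual fitting code.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KMS = 2.998e5  # speed of light in km/s

def hubble(z, Om0, As, alpha, H0=70.0):
    # Equation (H): flat universe, dust + GCG dark energy.
    gcg = (As + (1.0 - As) * (1.0 + z)**(3.0 * (1.0 + alpha)))**(1.0 / (1.0 + alpha))
    return H0 * np.sqrt(Om0 * (1.0 + z)**3 + (1.0 - Om0) * gcg)

def mu_theory(z, Om0, As, alpha):
    # Equations (DL)-(dm): mu = 5 log10[(1+z) d_M / Mpc] + 25.
    dM = quad(lambda zp: C_KMS / hubble(zp, Om0, As, alpha), 0.0, z)[0]
    return 5.0 * np.log10((1.0 + z) * dM) + 25.0

def chi2(params, z_obs, mu_obs, sigma):
    # Equation (chisq2); the H0-dependent additive offset in mu is
    # minimized analytically here, in lieu of a full marginalization.
    Om0, As, alpha = params
    mu_th = np.array([mu_theory(z, Om0, As, alpha) for z in z_obs])
    d, w = mu_obs - mu_th, 1.0 / sigma**2
    offset = np.sum(w * d) / np.sum(w)
    return np.sum(w * (d - offset)**2)
\end{verbatim}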
In both cases, $A_{s}=1$ ($\Lambda$CDM) is rejected at the $95\%$ confidence level, while the data favor $A_{s} > 1$ ($w_0 < -1$). Moreover, from the allowed $\alpha-A_{s}$ space, one can see that a large portion of the allowed region corresponds to Case 4 (late phantom GCG). This is consistent with previous analyses that suggest that the supernova data slightly favor $w < -1$ at present \cite{phantom}. There is also a small region at the $95\%$ confidence level corresponding to Case 2(b) (early phantom GCG without an initial singularity). To see how the model parameters depend on $\Omega_{m0}$, we plot in Fig. 5 the $68.3\%$ confidence contours in the $\alpha - A_s$ space for different values of $\Omega_{m0}$. This figure shows that the allowed parameters for the GCG depend very sensitively on the assumed value for $\Omega_{m0}$; for this set of choices for $\Omega_{m0}$, any of the four cases is possible. For $\Omega_{m0}=0.2$, the allowed region falls under Case 1, where the GCG behaves as a dust-like dark energy component, asymptotically approaching a cosmological constant. For $\Omega_{m0} = 0.25$, the data allow Case 1 as well as Case 3, where the acceleration of the universe is only a transient phenomenon as it asymptotically approaches dust-like behavior. For $\Omega_{m0}=0.3$ the data allow Case 2 (both a and b), where the GCG behaves as phantom dark energy at early times but approaches a cosmological constant at late times, as well as Case 4, where the GCG evolves from cosmological-constant-type behaviour to phantom behaviour. For $\Omega_{m0} = 0.35$ only Case 4 is allowed. Our results show that $\Omega_{m} \geq 0.3$ is necessary in order to have a phantom-like equation of state. This conclusion is consistent with the recent results of Jassal et al. \cite{hari}, although they used a different parametrization for the dark energy equation of state. \begin{figure}[t!] \epsfxsize 3.0in \epsfbox{sgcg.eps} \caption{Evolution of the matter density perturbation $\delta$ as a function of the scale factor $a$ (normalized to $a=1$ at the present) for the standard GCG case with $\Omega_{m0} = 0.3$.} \end{figure} These results show that the value of $\Omega_{m0}$ is crucial in determining which types of GCG dark energy are consistent with the supernova data. The constraint on $\Omega_{m0}$ in $\Lambda$CDM models from WMAP is $\Omega_{m0} = 0.29 \pm 0.07$ \cite{wmap}, which is also consistent with recent results from SDSS observations, while the more recent 2dFGRS analysis gives $\Omega_{m0} = 0.237 \pm 0.020$ \cite{Sanchez}. Of course, when $w$ is allowed to vary from $w=-1$, these limits become functions of $w$, but it is clear that the current observational constraints on $\Omega_{m0}$ are insufficient to rule out any of our four possible GCG models using supernova data alone. \section{GROWTH OF LINEAR DENSITY PERTURBATIONS} In this section, we study the growth of density perturbations for the mixture of a matter fluid and a GCG dark energy fluid in the linear regime on subhorizon scales. In performing this calculation, it is necessary to assume a particular clustering behavior for the dark energy. However, this behavior will depend on the physical model that gives rise to the Chaplygin gas equation of state. For example, if the Chaplygin gas is taken to be a perfect fluid satisfying equation (\ref{genC1}), then the GCG component will cluster gravitationally with a sound speed given by $c_s^2 = -w \alpha$ \cite{Bean}.
On the other hand, it is also possible to generate minimally-coupled scalar field models with the equation of state given by equation (\ref{genC1}) \cite{Kamenshchik,gcg4} (see also Ref. \cite{Lopez}). Such models always have $c_s=1$ on subhorizon scales and therefore do not cluster on small scales. The evolution of density perturbations for a GCG dark energy component that can cluster gravitationally was examined in Ref. \cite{Bean}, while the case of a smooth, unclustered GCG dark energy component was examined in Ref. \cite{Mul}. We take the latter approach here. In this case, the only effect of the GCG evolution is to alter the growth of dark matter perturbations through the effect of the GCG energy density on the expansion of the universe. However, it is important to remember that the results will be very different for the case in which the GCG component is allowed to cluster. \begin{figure}[t] \epsfxsize 3.0in \epsfbox{tgcg.eps} \caption{Evolution of the matter density perturbation $\delta$ as a function of the scale factor $a$ (normalized to $a=1$ at the present) for the transient GCG case with $\Omega_{m0} = 0.3$.} \end{figure} Assuming the GCG to be a smooth component, the growth equation for the linear matter density perturbation, $\delta$, is given by \begin{equation} \label{delta} \delta ^{''} + (2 + {{\dot{H}}\over{{H}^{2}}})\delta^{'} + 3c_{1}\delta = 0, \end{equation} where ``prime'' denotes the derivative with respect to $\ln(a)$, ``dot'' denotes the derivative with respect to $t$, and $H$ is the Hubble parameter for the background expansion given in equation (\ref{H}). In equation (\ref{delta}), $\delta$ is the linear matter density contrast, $\delta = \delta\rho_{m}/\rho_{m}$, and $c_{1}$ is given by \begin{equation} c_{1} = -{1\over{2}}{\Omega_{m0}\over{\Omega_{m0} + \Omega_{gcg0}[1+A_{s}(a^{3(\alpha+1)}-1)]^{1/(1+\alpha)}}}. \end{equation} One can easily check that for $A_{s} = 1$, the equation reduces to that for the $\Lambda$CDM model. We have integrated equation (\ref{delta}) numerically from $a = 10^{-3}$ to $a =1$ (taken to be the present). The initial conditions are chosen such that at $a = 10^{-3}$, the standard solution $\delta \sim a$ for an Einstein-de Sitter universe is reached. We have also assumed the matter density parameter $\Omega_{m0}= 0.3$ throughout. We have studied the solutions for the standard GCG (Case 1), transient GCG (Case 3) and late phantom GCG (Case 4) models. Figs. 6, 7, and 8 show the behaviour of $\delta$ as a function of the scale factor for these three cases. The standard GCG case has been studied previously by Multamaki et al. \cite{Mul}, who showed that for parameters slightly deviating from the $\Lambda$CDM universe ($A_s = 1$), $\delta$ deviates grossly from the standard $\Lambda$CDM result, making this model hard to reconcile with the present observational data. The behaviour of $\delta$ in Fig. 6 agrees with this result. \begin{figure}[t] \epsfxsize 3.0in \epsfbox{pgcg.eps} \caption{Evolution of the matter density perturbation $\delta$ as a function of the scale factor $a$ (normalized to $a=1$ at the present) for the late phantom GCG case with $\Omega_{m0} = 0.3$.} \end{figure} On the other hand, one can see from Figs. 7 and 8 (the transient and late phantom cases respectively), with values of $\alpha$ and $A_s$ deviating significantly from the $\Lambda$CDM case, that the behaviour of $\delta$ is practically indistinguishable from the $\Lambda$CDM case.
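For reproducibility, a minimal numerical integration of equation (\ref{delta}) in $x=\ln a$ might look as follows; the parameter values are illustrative (Case 3), and the finite-difference evaluation of $\dot{H}/H^2 = d\ln H/d\ln a$ is a convenience of this sketch, not necessarily the method used to produce the figures.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

Om0, As, alpha = 0.3, 0.9, -1.5          # illustrative Case 3 values

def E2(a):
    # H^2/H0^2 from equation (H), rewritten in the scale factor a.
    gcg = (As + (1.0 - As) * a**(-3.0 * (1.0 + alpha)))**(1.0 / (1.0 + alpha))
    return Om0 * a**-3 + (1.0 - Om0) * gcg

def dlnH_dlna(a, eps=1e-5):
    # Hdot/H^2 = d ln H / d ln a = 0.5 d ln E2 / d ln a (central difference).
    return (np.log(E2(a * (1 + eps))) - np.log(E2(a * (1 - eps)))) / (4.0 * eps)

def rhs(x, y):
    # Equation (delta) as a first-order system in x = ln(a); here
    # c1 = -(1/2) Om0 a^-3 / E2 is algebraically identical to the
    # expression for c1 quoted in the text.
    a, (delta, ddelta) = np.exp(x), y
    c1 = -0.5 * Om0 * a**-3 / E2(a)
    return [ddelta, -(2.0 + dlnH_dlna(a)) * ddelta - 3.0 * c1 * delta]

a0 = 1e-3                                 # start deep in matter domination
sol = solve_ivp(rhs, [np.log(a0), 0.0], [a0, a0])  # delta ~ a initially
print("delta(a=1) =", sol.y[0, -1])
\end{verbatim}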
This is an interesting result; it shows that models with GCG dark energy can have quite different equations of state from $\Lambda$CDM at either early or late times, yet still give similar results for the growth of linear density perturbations. The reason for these results is clear from the behavior of $\rho$ for these models (Fig. 1). Both the transient and late phantom GCG models, like the cosmological constant, contribute negligibly to the density of the universe at early times; this density is dominated by the matter component. At low redshift, the GCG in both models begins to dominate the expansion (just as the cosmological constant does in $\Lambda$CDM), with the GCG density decreasing in time (for the transient model) or increasing in time (for the late phantom model). However, these deviations from the behavior of the cosmological constant occur over a very short range in redshift, and by forcing $\Omega_0$ for the dark energy to be the same in all three cases, the results for the evolution of density perturbations are almost exactly the same. \begin{figure}[t] \epsfxsize 3.0in \epsfbox{compare.eps} \caption{Comparison of the evolution of the matter density perturbation $\delta$ for the transient and standard GCG cases with $\Omega_{m0} = 0.3$.} \end{figure} For the standard GCG, in contrast, the dust-like behavior results in a significant contribution to the density of the universe as long as dark matter is the dominant component; if the GCG is assumed not to cluster, the result is a significant decrease in the perturbation growth. This is illustrated more clearly in Fig. 9, where we show $\delta$ as function of the scale factor for both the transient and standard GCG models with the same $A_{s}$ ($A_s = 0.7$), along with the $\Lambda$CDM model. Since the observations related to density perturbations are consistent with the $\Lambda$CDM model, our results suggest that, unlike the standard GCG model, both the transient and late phantom GCG models are consistent with the linear growth of density perturbations inferred from observations. We have not shown the results for the early phantom model, but for the case where there is no early singularity, the results are also nearly identical to the $\Lambda$CDM model. Again, we emphasize that these results assume a non-clustering GCG. For the case where the GCG clusters as a perfect fluid, the growth of density perturbations would be quite different. \section{CONCLUSIONS} Our exploration of the full $A_s - \alpha$ parameter space for the generalized Chaplygin gas yields three additional models beyond the standard GCG. Although these cannot serve as unified models for dark matter and dark energy, they can have interesting consequences when treated as models for dark energy alone. The early phantom model (Case 2) has the interesting property of serving as phantom dark energy ($w < -1$) without a late big rip singularity, and for some choices of parameters it is also free of an initial singularity. The transient GCG model (Case 3) provides a mechanism for accelerated behavior at the present, but it asymptotically approaches dust-like behavior at late times (i.e., its time evolution is exactly opposite to the standard GCG model). Thus, it provides a mechanism to allow for present-day acceleration without a future horizon. The late phantom GCG model gives $w < -1$ with $w$ decreasing with time, and it results in a future singularity at a finite value of the scale factor.
All of these models can be made consistent with the type Ia supernovae observations, for an appropriate choice of $\Omega_{m0}$; the question of which models are allowed is extremely sensitive to the value of $\Omega_{m0}$. If the GCG is assumed not to cluster, then all of these models (with the exception of the subset of early phantom models with an initial singularity) are also consistent with the growth of linear density perturbations. Of course, if the GCG is assumed to cluster, then these results will be significantly altered. Note that all three of our ``new'' GCG models have a de Sitter phase, and all three models can be tuned arbitrarily close to a cosmological constant at the present, either by pushing the phantom-like behavior arbitrarily far into the past (early phantom model), or by pushing the dust-like or phantom-like behavior arbitrarily far into the future (transient GCG and late phantom model, respectively). These limits may seem uninteresting, as they reduce to the $\Lambda$CDM model over all observable ranges, but the one exception is the transient GCG model. A dust-like phase for this model, even in the far future, eliminates the problem of future horizons. Thus, the transient GCG model can be made arbitrarily similar to the $\Lambda$CDM model but at the same time can resolve the possible conflict between the accelerating universe and string theory. \acknowledgments R.J.S. was supported in part by the Department of Energy (DE-FG05-85ER40226).
\section{Introduction} The aim of contemporary and future computing systems is to deliver higher performance at lower power budgets \cite{parallelcomputing}. This includes embedded systems or ``edge-devices'' that place further limits on energy usage due to limited battery life. Applications such as key-word spotting, facial recognition, language translation and others have become ubiquitous with the recent developments in the field of deep learning \textit{models} \cite{energysurvey}. Specifically, Convolutional Neural Networks (hereafter referred to as ConvNets) have achieved state-of-the-art results in various vision and natural language processing domains \cite{energysurvey}. To enable such applications for embedded devices, optimization efforts are spread across all levels: At the \textit{algorithmic level}, newer compact neural network designs \cite{mobilenets, googlenet, squeezenet}, compression and pruning techniques \cite{compression,yangdesign}, reduced precision \cite{cour} and scheduling techniques \cite{lane2} are being proposed to save memory and increase throughput. At the \textit{software level}, device-specific software implementations such as TensorRT \cite{tensorrt}, ARM Compute library \cite{armcompute}, Qualcomm's Snapdragon Neural Processing Engine (NPE) \cite{qualnpe}, CoreML \cite{coreml} and TensorflowLite \cite{tlite} aim to accelerate deep learning inferences or \textit{deployment} on existing mobile platforms. These libraries are complementary to existing deep learning frameworks such as Tensorflow \cite{tensorflow}, Caffe2 \cite{caffe} and others in which deep learning models must first be designed and \textit{trained}. At the \textit{hardware level}, application-specific hardware has emerged, such as specialized GPUs (e.g. Jetson TX2) \cite{tx2}, FPGAs and ASICs \cite{tpu1,graphcore,wavecomputing, eie,eye,coproc}. Despite this massive scale of effort towards developing energy-efficient solutions to deep learning problems, there are surprisingly few studies that measure energy for deep learning workloads \cite{cref,nvidiawhite,lane2,hotspots,analysis}. We consolidate our observations from these works and attribute the lack of adoption of energy-use as an evaluation criterion to the following reasons: \begin{itemize} \item Lack of energy-measurement support in existing deep learning frameworks: Currently, popular frameworks such as Caffe, Torch, Tensorflow and others provide designers with tools to benchmark their application's performance through timing measurements. There is no support for energy measurements, as these are challenging to obtain consistently across platforms and rely on the availability of power measurement facilities. Therefore, the majority of the performance benchmarks (covered in Related Work in Section 7) have been gathered on high-end CPUs and GPUs \cite{convnetbench, fathom,cnngpuperformance}, which are not representative of platforms in the embedded domain. \item Accuracy as a key metric to evaluate models: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) \cite{ImageNetChallenge} has been a major test-bed for the development of innovative ConvNet models. However, a given published accuracy is often achieved by averaging the accuracy from an ensemble of models that are executed on desktop or server systems \cite{analysis}. This implies that more computational resources are used to achieve the desired accuracy.
However, on embedded platforms or specialized hardware, resource budgets are a major concern, leading to pruned versions of existing models or smaller models being chosen for deployment \cite{mobilenets,hanlearning}. \item Lack of systematic method and reporting standards: Measuring energy requires careful and tedious experimentation and is faced with different sources of variability. For example, power measurement facilities can vary from system to system. This includes different types of power meters \cite{eie,nvidiawhite,lane, caffepresso, analysis}, power sensors \cite{ti}, analytical models such as CACTI \cite{runtime,eie} and energy estimation models \cite{yangdesign}. Furthermore, various methodological choices such as the rate at which power is sampled, baseline device power measurement, statistical validity of the measurements and measuring energy at a consistent point \cite{cref,nvidiawhite} can lead to different results in power measurement, the details of which are often not reported. \end{itemize} Our work addresses the above issues by first developing a benchmark framework that integrates Caffe \cite{caffe}, a deep learning framework, and vendor-specific tools such as the ARM Streamline Performance Analyzer \cite{ARMstreamline}, as shown in Figure \ref{fig:eval1}, for profiling the energy and performance (or execution time) of ConvNet models. The context for our evaluations includes object recognition tasks, a single software execution framework: Caffe with back-ends like OpenBLAS \cite{openblas} for the CPU and NVidia's CuDNN \cite{cudnn} for the GPU, and a representative embedded platform such as the Jetson TX1. We do not evaluate models developed in other frameworks and focus on pre-trained models derived from Caffe's Model Zoo \cite{modelzoo}. Our methodology focuses on power measurements made using the on-board power monitoring sensor (TI-INA3221x \cite{ti}, available on the Jetson TX1) for single image inferences. Second, we use this framework for building an initial energy-prediction model using two performance counters: SIMD instruction counts and bus accesses (or main memory accesses). The energy prediction model is built for the Convolutional (or Conv) layers in a ConvNet. We then further refine this model in an attempt to make energy predictions directly from the application parameters. To the best of our knowledge, this is the first energy estimation model available for predicting the energy consumption of all the Conv layers in a ConvNet model for the Jetson TX1. Our work shows that performance and energy are important metrics to evaluate deep learning models in conjunction with accuracy. It highlights that for the Jetson TX1, there are still unexplored ConvNet models that have high accuracy, low energy and high performance characteristics. Finally, our work serves as a guideline on \textit{how} to develop a systematic energy-benchmarking methodology, with the twofold objectives of 1) understanding the granularity at which power\footnote{subsequently deriving energy measures} measurements should be made on a given system, for example, at the system level or the component level (CPU, GPU, memory), and 2) understanding the granularity at which power measurements should be made for the application, for example, for the entire application or for specific phases (in our case, per layer), in terms of both performance and energy.
\section{Primer on Convolutional Neural Networks} \begin{table*}[] \centering \caption{ConvNet models in the literature} \label{model} \resizebox{1.0\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|l|l|} \hline \textbf{ConvNet} & \textbf{\begin{tabular}[c]{@{}l@{}}Naming\\ Convention in graphs\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Top-5\\ accuracy (\%)\end{tabular}} & \textbf{Dataset} & \textbf{\# Layers} & \textbf{Parameters} & \textbf{Model Size} \\ \hline AlexNet & alexNet {\cite{imageNet}} & 80.3 & ImageNet & 5 Conv + 3 FC & 62 M & 244 MB \\ \hline GoogleNet & googleNet {\cite{googlenet}} & 90.85 & ImageNet & 57 Conv + 1 FC & 6.9 M & 54 MB \\ \hline \begin{tabular}[c]{@{}l@{}}Residual Net\end{tabular} & resNet50 {\cite{resnet}} & 93.29 & ImageNet & 53 Conv + 1 FC & 25 M & 103 MB \\ \hline SqueezeNet & squeezeNet {\cite{squeezenet}} & 80.3 & ImageNet & 26 Conv & 1.2 M & 5 MB \\ \hline \begin{tabular}[c]{@{}l@{}}SqueezeNet with\\ Deep Compression\end{tabular} & sqCompressed {\cite{squeezenet}} & 80.3 & ImageNet & 26 Conv & 1.2 M & 675.8 KB \\ \hline \begin{tabular}[c]{@{}l@{}}SqueezeNet with \\ Residual Connections\end{tabular} & squeezeNetRes {\cite{squeezenet}} & 82.5 & ImageNet & 26 Conv & 1.2 M & 6.3 MB \\ \hline VGG & vgg-small {\cite{vgg}} & 86.9 & ImageNet & 5 Conv + 3 FC & 102 M & 393 MB \\ \hline MobileNet & mobileNet {\cite{mobilenets}} & 70.6 & ImageNet & 27 Conv & 29 M & 17 MB \\ \hline Places-CDNS-8s & Places-CDNS-8s {\cite{placescdns}} & 86.8 & ImageNet & 8 Conv + 3 FC & 60 M & 241.6 MB \\ \hline Inception-BN & Inception-BN {\cite{inceptbn}} & 89.0 & ImageNet & 69 Conv + 1 FC & 1.4 B & 134.6 MB \\ \hline ALL-CNN-C & ALL-CNN-C {\cite{allcnn}} & 90.92 & CIFAR 10 & 9 Conv & 1.3 M & 5.5 MB \\ \hline \end{tabular} } \end{table*} To provide computers with the ability to perform intelligent tasks such as understanding images and audio, learning, and others, the field of \textit{Machine Learning} focuses on developing mathematical \textit{models} or algorithms that acquire knowledge by extracting information from raw data. A \textit{Convolutional Neural Network} (ConvNet) extracts information or features, such as edges and color blobs, from images through a process called \textit{feature extraction}, and uses this information to provide a \textit{classification} output (or a decision). It is composed of \textit{layers} that transform the raw input data into a meaningful probabilistic output. Figure \ref{fig:model} shows the data dimensions involved in a convolution (or Conv) operation. Other typical layers are the pooling (pool), batch norm (norm), Rectified Linear Unit (ReLU) and fully-connected (fc) layers, which lend the model certain properties\footnote{These are not explained here for the sake of simplicity; the reader is advised to refer to \cite{lecunCNN} for more information.}. Each layer in a model has a certain computational cost, memory requirement and communication cost, and each of these has implications in terms of energy use \cite{iandola}. \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/conv.jpg} \caption{Standard Convolution in ConvNets} \label{fig:model} \end{figure} Typically, during the \textit{training phase}, to ensure that the complexity of the model is kept in check, a time and space complexity analysis of each layer can be carried out \cite{deeperembed}.
The former can be computed by counting the number of multiply-accumulates (MACs), while the latter includes the cost of storing the input feature map ($ I_{x} \times I_{y} \times I_{z}$) to each layer, the corresponding filter weights ($ K_{x} \times K_{y} \times I_{z} \times O_{z} $) and biases, and the output feature map ($ O_{x} \times O_{y} \times O_{z}$). $N$ denotes the batch size. Here, x, y and z represent the Cartesian axes. The computational cost of a standard Convolution operation is given by: \begin{equation} \label{compeq1} O_{x} \times O_{y} \times O_{z} \times K_{x} \times K_{y} \times I_{z} \end{equation} and the storage cost or bandwidth (in bytes) is given by: \begin{equation} \label{compeq2} (I_{x} \times I_{y} \times I_{z} + K_{x} \times K_{y} \times I_{z} \times O_{z}+ O_{x} \times O_{y} \times O_{z}) \times 4 \end{equation} Tools to extract this information from Caffe's ConvNet definition file are emerging \cite{caffe2any}. Current ConvNet architectures, given in \autoref{model}, typically have parameters or weights in the order of a million. These weights are stored in 32-bit floating point precision, implying that the model size of a given network is four times its number of parameters. Large model sizes prevent the parameters from being stored entirely in current on-chip SRAMs \cite{hanlearning}. Therefore, there is growing interest in developing compact models and pruning existing models to fit on fast and low-energy on-chip cache memories, as well as in reducing the number of computations performed \cite{googlenet,squeezenet,resnet,compression,mobilenets}. For example, topologies such as the \textit{fire} module in SqueezeNet, the \textit{inception} module in GoogleNet and \textit{depth-wise separable convolutions} in MobileNet\footnote{Note this is a re-implementation in Caffe \url{https://github.com/chuanqi305/MobileNet-SSD}} aim to reduce the number of computations in the model by targeting the Conv layers. A depth-wise separable convolution is computationally more efficient than the standard convolution \cite{mobilenets}; its cost is given by: \begin{equation} \label{compeq3} O_{x} \times O_{y} \times K_{x} \times K_{y} \times I_{z} + I_{z} \times O_{z} \times O_{x} \times O_{y} \end{equation} While model sizes are shrinking to bring energy costs down, another important aspect is managing the data movement between layers. Table \ref{model} lists the models chosen for this study, where Column 5 gives the cumulative counts of the Conv and fc layers present in each model. Recently, newer ConvNet models have used a global average pooling layer instead of traditional fully-connected layers \cite{nin}. In our study, we target the Conv layers as a first step towards developing an energy prediction model. Later, we envision extending the methodology to build more sophisticated predictors for the energy of other types of layers. Our work reports the coarse-grained energy consumption of models such as AlexNet, SqueezeNet and GoogleNet and compares it against values reported in the literature \cite{nvidiawhite}. We also include other models that have no known reported energy measurements, given in \autoref{model}. The top-5 accuracy of a model measures whether the correct object category of a given image from the ImageNet dataset \cite{ImageNetChallenge} appears among the model's five highest-ranked predictions, and indicates how well the model performs on the task of image classification. ALL-CNN-C is a typical model for smaller datasets like CIFAR-10.
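As an illustration, equations (\ref{compeq1})--(\ref{compeq3}) can be evaluated with a few lines of Python. The stride and padding arithmetic below is a standard convention added for convenience, and the example dimensions are only indicative of an AlexNet-like first Conv layer.
\begin{verbatim}
def conv_costs(Ix, Iy, Iz, Kx, Ky, Oz, stride=1, pad=0,
               depthwise_separable=False):
    # MAC and storage costs for one Conv layer, equations (1)-(3); the
    # stride/padding handling is a standard convention, batch size N = 1.
    Ox = (Ix - Kx + 2 * pad) // stride + 1
    Oy = (Iy - Ky + 2 * pad) // stride + 1
    if depthwise_separable:                     # equation (3)
        macs = Ox * Oy * Kx * Ky * Iz + Iz * Oz * Ox * Oy
    else:                                       # equation (1)
        macs = Ox * Oy * Oz * Kx * Ky * Iz
    # equation (2): input + weights + output feature maps, 4 bytes each
    storage = (Ix * Iy * Iz + Kx * Ky * Iz * Oz + Ox * Oy * Oz) * 4
    return macs, storage

# Indicative AlexNet-like first Conv layer (227x227x3, 96 11x11 filters):
print(conv_costs(227, 227, 3, 11, 11, 96, stride=4))
\end{verbatim}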
\section{Power and performance measurements} The embedded system chosen for the power and performance measurements is the Jetson TX1, which has a 256 CUDA core Maxwell GPU running at 1GHz and a quad-core ARM Cortex A57/A53 running at 1.9GHz. By default, the TX1 is cross-compiled with Linux kernel version 3.10.96 using Jetpack 2.3 and runs Ubuntu 16.04 as its host operating system. However, to enable power measurements, the kernel has to be modified to include the TI-INA3221x power monitor that is shipped with it. This power monitor provides system level power at VDD\_IN, CPU level power at VDD\_CPU and GPU level power at VDD\_GPU, as shown in Figure \ref{fig:power}. This is the post-regulation power after AC to DC conversion of the wall power and DC to DC conversion (pre-regulation power conversion) as required by the system-on-chip (SoC). Power values (mW) are instantaneous and are accessible from the hardware counters on the \textit{/sys} file system (\textit{sysfs}), which is read by our sampling script. Given the ability to measure power using the monitoring chip at different levels, a sample power profile of an inference executing on the mobile GPU is shown in Figure \ref{fig:prof2}. Note that the chip does not provide a facility to explicitly measure DRAM power. \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/regulator.png} \caption{Power monitor on the Jetson TX1} \label{fig:power} \end{figure} Here, the x-axis represents execution time for the entire application and the y-axis represents the power values read from the different counters at the user-defined sampling rate of the sampling software. In addition, Figure \ref{fig:profs} also shows the various phases of the application, such as the set up phase running on the CPU and the inference phase running on the GPU (see Section 4 on how to map application phases to the power profile). The variation in power values over the duration of the application is a result of the techniques that exist for power management in embedded systems, including dynamic voltage and frequency scaling (DVFS), which takes the characteristics of the application into account (for example, if a task is memory bound the frequency can be lowered) to gate cores at specific voltage or frequency levels, and power mode management (PMM), which can make use of idle time intervals in an application to switch specific components into low power modes when not in use \cite{mittal}. The Jetson TX1 allows dynamic power scaling of its CPU as well as its GPU through the use of its default governors \cite{jetsongov}, and this GPU frequency scaling behaviour as the inference application executes is seen in Figure \ref{fig:prof}. Consider a deep learning application running on the GPU: the energy consumption of the inference is the area under the peak of the GPU power curve in Figure \ref{fig:prof} and can be calculated by Equation \ref{eq1}: \begin{equation} \label{eq1} E_{inference}=\sum_{i} E_{dt}, \qquad E_{dt}= P_{i+1} \times dt, \end{equation} where $E_{dt}$ is the area of the rectangular strip shown, calculated using the $(i+1)^{th}$ power sample $P_{i+1}$ over the duration $dt= t_{i+1}-t_{i}$. The total energy for the inference, $E_{inference}$, is the sum of all the rectangular areas over the duration of the inference $T$. In our study, we report the execution time per image (s/image), or performance, and the energy per image (J/image) for single image inferences.
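A minimal sketch of this sampling-and-integration loop is shown below. The \textit{sysfs} node paths for the INA3221 rails are assumptions (deliberately elided here) that vary between L4T releases and must be verified on the target board; the loop structure itself mirrors Equation (\ref{eq1}).
\begin{verbatim}
import time

# ASSUMED sysfs nodes for the INA3221 rails; the exact paths differ
# between L4T releases and must be verified on the target board.
RAILS = {
    "VDD_IN":  "/sys/.../iio_device/in_power0_input",
    "VDD_GPU": "/sys/.../iio_device/in_power1_input",
    "VDD_CPU": "/sys/.../iio_device/in_power2_input",
}

def sample(rail, duration_s, rate_hz=1000.0):
    # Poll instantaneous power (mW) at the user sampling rate.
    samples, t0 = [], time.time()
    while time.time() - t0 < duration_s:
        with open(RAILS[rail]) as f:
            samples.append((time.time() - t0, float(f.read())))
        time.sleep(1.0 / rate_hz)
    return samples

def energy_joules(samples):
    # Equation (eq1): E = sum_i P_{i+1} * (t_{i+1} - t_i); mW*s -> J.
    return sum(p1 * (t1 - t0)
               for (t0, _), (t1, p1) in zip(samples, samples[1:])) / 1000.0
\end{verbatim}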
\begin{comment} The next step involves determining a set of consistent metrics to report. Many metrics are useful depending on the user context, for example, inference performance and performance/watt reported in \cite{nvidiawhite} for single image inferences. \end{comment} \begin{figure}[!t] \centering \subfloat[GPU, CPU, System power profiles]{\includegraphics[width=\columnwidth]{acmart-master/figs/image1.png} \label{fig:prof2}} \hfil \subfloat[GPU power profile with GPU frequency]{\includegraphics[width=\columnwidth]{acmart-master/figs/image2.png} \label{fig:prof}} \caption{Sample power and frequency profiles with inference running on the GPU} \label{fig:profs} \end{figure} \section{Evaluation framework} \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/streamline.jpg} \caption{Overview of the evaluation framework} \label{fig:eval1} \end{figure} The evaluation framework is composed of two components: the hardware component, which includes the target device and the power monitor that provides direct power values, and the software component, shown in Figure \ref{fig:eval1}, which implements the methodology to acquire accurate and consistent power and performance measurements. The software component is divided into three distinct parts: \begin{itemize} \item Many existing deep learning frameworks such as Caffe, Tensorflow, Torch, Theano and others allow users to define the software environment within which the inference application is executed \cite{surveyefficientdnns}. It is used to define the type of inference application, the choice of algorithm to use, the implementation of the algorithm, device selection, for example, CPU or GPU, and the number of inferences. \item The power sampling method defines the procedure used to collect power samples and the experimental set up, such as the user-defined sampling rate, the device conditions when these measurements were made, for example, interaction with the device power management or DVFS system, the baseline power and idle conditions. \item The post-processing step processes the raw power data to derive the energy measurements at specific phases of the application or at the device component level. \end{itemize} \subsection{Deep learning framework} To carry out an inference, we chose the Caffe framework \cite{caffe}, as its C++ interface allowed integration with the ARM Streamline tool. Recent work such as Fathom \cite{fathom} shows that most of these frameworks share similar characteristics in terms of the underlying implemented operations. Hence, such a methodology could be adopted for other frameworks as well. We chose to execute all the ConvNet models in Caffe's master branch. Caffe has experimental support for other classes of deep learning models such as Recurrent Neural Networks and Long Short-Term Memory networks (LSTMs); hence, we focus only on Caffe's ConvNet models. Our Caffe (version 1.0.0-rc3) was compiled for the GPU with Cuda (8.0) and CuDNN (5.1.5), and for the CPU with OpenBLAS (libopenblas\_cortexa57p-r.0.2.20.dev.a) with a max num-thread of 4. The application selected was an inference using a ConvNet model on a single RGB image (224 x 224 pixels) taken from the ImageNet dataset \cite{ImageNetChallenge}. All computations are 32-bit floating point. Caffe's command line test interface was used to run an inference with arguments for the model \textit{prototxt} file (deploy.prototxt), the pre-trained weights file (.caffemodel) and the device selection flag (GPU=0), for a single iteration.
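A driver equivalent to this invocation (including the thread-count control discussed next) might look as follows; the binary path and file names are placeholders for our local set up, not prescribed values.
\begin{verbatim}
import os, subprocess

# Hypothetical driver around Caffe's command-line test interface; binary
# path and file names are placeholders for our local set up.
env = dict(os.environ, OMP_NUM_THREADS="4")   # relevant for CPU runs only
subprocess.run(
    ["./build/tools/caffe", "test",
     "--model=deploy.prototxt",        # model definition
     "--weights=model.caffemodel",     # pre-trained weights
     "--gpu=0",                        # device selection flag (GPU=0)
     "--iterations=1"],                # a single inference
    env=env, check=True)
\end{verbatim}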
When selecting the CPU for inferences, the number of threads was varied by setting the environment variable OMP\_NUM\_THREADS before calling Caffe's test interface. The power monitor provides the CPU power averaged over the four cores. The pre-trained weight file for each selected ConvNet model, given in Table \ref{model}, is available in Caffe's model-zoo repository, and this weight file was stored on the local disk of the Jetson TX1. \subsection{Power sampling method} The power sampling method integrates vendor-specific tools such as ARM's Streamline Performance Analyser, which provides CPU hardware performance counter values such as instructions per cycle, cache misses, CPU activity and others. This tool runs on a host laptop system with Ubuntu 14.04 and communicates remotely with the target device. However, by default this tool is not configured to provide power sample values and requires a custom C++ function to be defined to read power values from the \textit{sysfs} interface on the target device. The target device has to be set up to communicate with the Streamline tool with the help of a gator module and gator daemon \cite{gator}. The gator module was built as a loadable kernel module in the modified Linux kernel used for the Jetson, and the gator daemon was cross-compiled on the host for the TX1. ARM Streamline can be configured for Mali-based GPUs; however, since the Jetson comes with an NVidia GPU, additional tools such as \textit{nvprof} would have to be used in conjunction with this tool to generate GPU profile traces and correlate these to the energy measurements obtained. \textit{Nvprof} provides us with information related to the type of kernels running on the GPU, GPU utilization and other metrics. Our work differs from prior works that have used GPU based profiling tools such as \textit{nvprof} to analyse the performance of ConvNets \cite{cnngpuperformance} or existing performance benchmarks on desktop GPUs \cite{convnetbench}, in that we restrict our studies to fine-grained energy and performance measurements on the CPUs. \subsubsection{ARM Streamline: Mapping per layer functions to power profiles} Studies such as \cite{kimcomp} show the usefulness of mapping the power profile of the inference to its individual layers but omit the method used for the mapping process, while other studies show the benefits of exploiting per layer energy information for better pruning \cite{yangdesign} or better heterogeneous scheduling techniques \cite{deepx}. Therefore, we provide the following method to map per layer functions in Caffe by integrating ARM's Streamline Performance tool into our evaluation framework: \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/FunctionStruct.jpg} \caption{Function structure in Caffe} \label{fig:structure} \end{figure} \begin{itemize} \item In the first step, we identify the specific functions of the algorithm, as shown in Figure \ref{fig:structure}, and map these functions to the layer-wise interpretation of a ConvNet model. This step is necessary as it helps us decide where the ``annotations or markers''\footnote{The markers can be used to denote the start and end of a function call} of ARM Streamline can be placed. The inference phase in Caffe begins by calling Caffe's $test()$ function, which creates an object of type $net$ to provide the details of the model architecture and storage of the pre-trained weights. To run the actual inference, it calls the $net$ class function $Forward()$.
Depending on whether a loss needs to be calculated (the loss is usually calculated during the training phase), it calls $ForwardFromTo()$. At this stage there are three possible paths during the inference. Caffe can be configured to call the CuDNN library for certain layers (for example, a convolution layer) if the execution runs on the GPU, or Caffe's own implementation of a layer (for example, a pooling layer), or an implementation (for example, a fully-connected layer, known as \textit{inner product} in Caffe) that depends on other external back-ends for the GPU and CPU. In this work, we demonstrate mapping of energy profiles to the highest level function call of a layer at the $net::ForwardFromTo()$ level and provide measurements for each particular layer type. However, for cases such as SqueezeNet and GoogleNet that are composed of layer modules, one could insert the markers at lower levels in the hierarchy to extract measurements of their Conv or pooling layers. \item To integrate ARM's Streamline tool with Caffe, we insert markers in the function of interest with the help of the macros \textit{ANNOTATE\_SETUP} and \textit{ANNOTATE\_MARKER\_COLOR\_STR}. \item We create custom counters that can read the power values from the $sysfs$ interface. This is done by configuring Streamline to read VDD\_IN, VDD\_GPU and VDD\_CPU on the target device, using Streamline annotation macros such as \textit{ANNOTATE\_ABSOLUTE\_COUNTER} to define a custom counter and \textit{ANNOTATE\_COUNTER\_VALUE} to associate a power value with this counter. \item We use the command-line interface of ARM Streamline to begin a capture session. A capture session begins the process of data collection remotely from the target device. Before starting a capture session, the target device has to be set up with the gator daemon (refer to Section 3) and the C++ executable to read the custom counters. On the host side, the configuration for a \textit{capture session} is defined in a $session.xml$ file, which specifies the target device IP address, the sampling frequency, the resolution and the chosen hardware counters. The sampling frequency can be set to one of three provided choices: Normal=1kHz, Low=100Hz and None. For experiments involving this tool, we set the sampling rate to Normal with the resolution set to high to provide higher decimal precision of the collected power values. \end{itemize} \subsection{Post processing step} Python scripts were developed to process the measured power data into meaningful energy measurements. The ARM Streamline tool does not associate the power values with the code annotations explicitly. Therefore, we matched the collected function markers to the corresponding power values using time stamps. For the baseline energy of the GPU, the instantaneous power values were captured during a 10s sleep call. The total baseline energy for the total time ($T=10s$) was then calculated using Equation \ref{eq1} given in Section 3. Dividing the total baseline energy by the total time gives the baseline power for a single run. This was repeated for 10 runs and the average baseline power was taken ($0.06 \pm 0.02W$). Similarly, for all other energy measurements, the energy was calculated using Equation \ref{eq1} over the desired time interval for either the entire application, a specific phase or a specific layer, and the energy was reported for the run with the minimum execution time across the $n$ runs. This choice is based on the observation that the run with minimum execution time does not always correspond to minimum energy.
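A simplified sketch of this post-processing step is given below; the in-memory layout of the trace and markers is an assumption of the sketch (our actual scripts parse the Streamline capture files), but the integration follows Equation (\ref{eq1}) and the reporting convention described above.
\begin{verbatim}
import numpy as np

def phase_energy(trace, markers):
    # trace: list of (timestamp_s, power_mW) samples for one rail;
    # markers: {phase_name: (t_start, t_end)} recovered from the
    # Streamline annotations by time-stamp matching (layout assumed).
    t = np.array([s[0] for s in trace])
    p = np.array([s[1] for s in trace])
    energies = {}
    for phase, (t0, t1) in markers.items():
        sel = (t >= t0) & (t <= t1)
        ts, ps = t[sel], p[sel]
        # Equation (eq1): sum of P_{i+1} * (t_{i+1} - t_i); mW*s -> J
        energies[phase] = float(np.sum(ps[1:] * np.diff(ts))) / 1000.0
    return energies

def report(runs):
    # Report the energy of the run with minimum execution time, with the
    # min-to-max energy spread across runs for the error bars.
    best = min(runs, key=lambda r: r["time"])
    energies = [r["energy"] for r in runs]
    return best["energy"], min(energies), max(energies)
\end{verbatim}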
Error bars were plotted based on the variation from minimum to maximum energy measured across runs. Variation in the distribution of values exists across different sets of runs, and we report the energy in the context of this variation. \section{Performance and Energy Measurement} The current methodology is designed to exploit the on-board power sensors to obtain energy measurements, and this section is intended to demonstrate its usefulness and limitations in evaluating existing ConvNet algorithms on the Jetson TX1 and the Caffe software framework. Such an evaluation could be done for other platforms such as Snapdragon \cite{qualnpe}; however, here we restrict ourselves to evaluation within this context. \subsection{Component level versus system level measurements} As an example, to study the energy consumption at a component level, we sample the post-regulation power to the GPU. Figure \ref{fig:comp} shows the performance and energy on our system at different levels (CPU, GPU and System) for an inference executing on the GPU. Here, the measurements are extracted for the inference phase of the application without any set up time in Caffe. Since the baseline power is very small (see Subsection 4.3), we consider the energy measurement without the baseline. We compare the execution behaviour of popular ConvNet models (AlexNet, SqueezeNet and GoogleNet) whose relative energy consumption has been compared previously \cite{yangdesign}. There, the authors build a theoretical model of energy consumption based on the number of MACs and memory accesses for a specialized hardware platform to predict the energy use of these ConvNets. However, such an energy estimation model does not exist for our system, so we rely on experimentation to understand whether these models exhibit similar energy consumption characteristics. On our system, we obtain an energy consumption of 3.4x for ResNet50 compared to GoogleNet, and 1.1x for GoogleNet compared to AlexNet. Compared with the results in \cite{yangdesign}, these models exhibit similar behaviour. However, our results show inconsistencies with the energy \textit{estimates} for SqueezeNet. The authors of \cite{yangdesign} report a higher energy consumption for SqueezeNet compared to AlexNet, whereas we report 1.9x for AlexNet compared to SqueezeNet. In the given scenario, our power measurements were obtained only for the GPU, which does not include the energy due to memory accesses. This may skew the interpretation of a result in favour of a given system. Studies focused on comparing execution behaviour on different systems \cite{eie}, for example, an inference on the CPU versus that on the GPU, should provide an upper bound to the energy consumption on the GPU by including the related CPU energy consumption as well. This is because on our system, even if the application executes on the GPU, the CPU is still required to drive the GPU during that duration. Since ConvNet models have storage costs (see Section 2), including the associated memory energy is an important factor. However, on some devices it may not always be possible to measure component-level power for the CPU only, GPU only or DRAM only. In such scenarios, the highest level of power that can be measured is the system-level power. For the power monitor on the TX1, we observe that the system power may equal the sum of the component power measurements plus the power consumed by other system resources such as memory.
\begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{acmart-master/figs/bar.png} \caption{Energy for inference step on GPU, measured at CPU, GPU and System level} \label{fig:comp} \end{figure} This suggests that \textit{energy measurement} involves several methodological choices and is a time-consuming process. However, a systematic methodology guarantees that the actual execution behaviour on the given platform will be captured. \textit{Energy estimation models}, on the other hand, rely on accurately capturing the execution behaviour of the application on the platform and may not always be applicable to all systems. In our work, we explore the possibility of developing an energy-prediction model for ConvNets based on \textit{actual} energy measurements on the Jetson TX1 as an alternative. We use a set of ConvNet models to create an energy predictor that can then be used to predict the energy consumption of other, unseen ConvNets. \subsection{Coarse-grained versus fine-grained application measurements} \begin{figure}[!t] \centering \includegraphics[height=5.0cm]{acmart-master/figs/figure_1.png} \caption{System total energy + Baseline for inference step on CPU and GPU} \label{fig:sysenergy} \end{figure} Figure \ref{fig:sysenergy} shows the \textit{coarse-grained} system energy and execution time of the overall ConvNet inference. This includes Caffe's set-up for model initialization and the inference phase. The top-5 accuracy is reported as published in the literature. Here, the system-level energy is composed of the energy consumed by the CPU, GPU and other system resources such as memory and any disk access. To illustrate our results, we plot each evaluated ConvNet model in a 3D bar-plot visualization with actual energy, time and accuracy measurements. Each bar represents a model with a colour and number code. Error bars\footnote{Error bars indicate maximum variation from minimum execution time} are plotted along the time and energy axes. Such a visualization puts into perspective the relative performance of each ConvNet on the metrics of time to solution, energy and accuracy on the given platform. For example, keeping in mind the set of models chosen for this study, we can find volumes of the graph that are not populated, such as the high-accuracy, low-energy and low-time regime. If we systematically make trade-offs for a given model along any axis dimension, it would correspond to the model moving in a volume of space around its current position. This figure also shows the performance in time and energy for AlexNet, which is well studied in the literature, for two situations: inference running on the GPU (bar 1 in Figure \ref{fig:sysenergy}) and inference running on the CPU (bars 7, 8, 9 and 10, where each bar represents $n$ CPU threads and $n$ varies from 1 to 4). The CPU (with 4 threads) consumes 1.3x more energy and is 1.23x slower when compared to the GPU. To exploit parallelism on the CPU, the computation in certain layers such as the conv and fc layers is commonly re-structured into matrix-matrix and matrix-vector operations and relies heavily on BLAS libraries (such as ATLAS or OpenBLAS) to make effective use of the CPU. However, we observe that the performance and energy do not scale with the number of threads, owing to the small amount of computation in a single image inference. We also compare different ConvNets executed on the GPU.
For example, we observe similar performance for the SqueezeNet model and a modified SqueezeNet (sqCompressed) with deep compression. In the latter, the inference is performed with the decompressed version of the model, where the benefits of compression are only exploited before actual execution. Therefore, there are no clear performance benefits during execution. SqueezeNetRes is another variation of the original SqueezeNet model with modified residual connections \cite{squeezenet, resnet}. This optimization gives SqueezeNet a higher accuracy with similar execution behaviour as the original model. We can thus evaluate which optimization techniques lead to better trade-offs in performance, energy and accuracy simultaneously. \begin{table}[] \centering \caption{Correlation between measured time and energy} \label{correlation} \begin{tabular}{|l|l|} \hline \textbf{} & \textbf{\begin{tabular}[c]{@{}l@{}} Pearson's Correlation\\ coefficient\end{tabular}} \\ \hline \textit{alexNetGPU} & 0.99 \\ \hline \textit{googlenetGPU} & 0.80 \\ \hline \textit{squeezeNetGPU} & 0.51 \\ \hline \textit{Googlenetbatch16GPU} & 0.91 \\ \hline \textit{googlenet1batch1CPU} & 0.99 \\ \hline \end{tabular} \end{table} To study the execution behaviour at a \textit{fine-grained level}, we extract per-layer system energy and performance measurements for each ConvNet. Figure \ref{fig:layertrend} visualizes the per-layer execution behaviour of the selected ConvNet models (AlexNet, SqueezeNet and GoogleNet) for inferences executing on the GPU. Here, each point represents a top-level ConvNet layer type. The ordering of each layer along the x-axis is not representative of its actual execution order. AlexNet is built using conv, pool, relu, norm and fc layers; SqueezeNet encapsulates inside its \textit{fire} module a combination of $1\times1$ and $3\times3$ conv layers, each followed by relu layers; and finally, GoogleNet arranges $1\times1$ and $3\times3$ conv filters along with max pooling into an inception module. \begin{figure}[!t] \centering \subfloat[AlexNet]{\includegraphics[width=\columnwidth]{acmart-master/figs/alexnettrends.png} \label{fig:layertrend1}} \hfil \subfloat[SqueezeNet]{\includegraphics[width=\columnwidth]{acmart-master/figs/squeezenettrends.png} \label{fig:layertrend2}} \hfil \subfloat[GoogleNet]{\includegraphics[width=\columnwidth]{acmart-master/figs/googlenettrends.png} \label{fig:layertrend3}} \caption{Per layer performance and system energy profile of ConvNets with batch 1 running on GPU} \label{fig:layertrend} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/googlenettrends16gpu.png} \caption{Per layer performance and system energy profile for GoogleNet batch 16 inference running on GPU} \label{fig:googlenet16gpu} \end{figure} From Figure \ref{fig:layertrend1} (AlexNet), we observe that time and energy are highly correlated, with a Pearson's correlation factor of 0.99, as given in Table \ref{correlation}. Here, the fc layer takes longer to execute and also consumes more energy. The fc layer relies on the matrix-vector implementation from the $CuBLAS$ library, which may not have been optimized to account for the fact that mobile GPUs have limited GPU memory. Therefore, unlike performance benchmarks reported for desktop GPUs \cite{convnetbench,cnngpuperformance}, the smaller GPU memory results in spilling to main memory, leading to high energy consumption and lower performance.
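The correlation coefficients in Table \ref{correlation} can be reproduced in one line once the per-layer measurements are available; the arrays below are illustrative placeholders rather than our measured values:

\begin{verbatim}
import numpy as np

# Per-layer Pearson correlation between time and energy
# (placeholder arrays; ours come from the per-layer profiles).
layer_time_s   = np.array([0.012, 0.035, 0.004, 0.051])
layer_energy_J = np.array([0.080, 0.220, 0.030, 0.400])
r = np.corrcoef(layer_time_s, layer_energy_J)[0, 1]
print(f"Pearson's r = {r:.2f}")
\end{verbatim}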
The correlation between time and energy starts to diminish for models like GoogleNet and SqueezeNet on single-image inferences, as seen in Figure \ref{fig:layertrend}. Taking GoogleNet as an example, we execute it with a larger batch size of 16. As GPUs are known to be very efficient for larger batch sizes \cite{nvidiawhite}, we observe in Figure \ref{fig:googlenet16gpu} that the correlation between time and energy starts to hold true again. Finally, Figure \ref{fig:googlenetcpu} shows per-layer trends for GoogleNet with a batch size of 1 running on the CPU. Here again, there is a strong correlation between time and energy, indicating that time could also be used as a proxy for energy. \section{Multi-Variable Regression model to predict energy consumption} Given our ability to extract per-layer energy measurements, we demonstrate that it is possible to build an \textit{energy prediction} model for ConvNets on the Jetson TX1 platform and Caffe framework. Our prediction model is based on the CPU measurements targeting its \textit{SIMD instruction} and \textit{bus accesses} performance counters. This model is based on multi-variable linear regression, as given in \autoref{regression}, where the two independent (explanatory) variables are the SIMD instruction count and the number of bus accesses, and the dependent variable is \textit{Energy}. The Jetson TX1 system consists of a memory hierarchy with L1 and L2 caches and main memory. However, for an initial prediction model, we restrict it to bus accesses (equivalent to last-level cache misses) as these are more expensive, in terms of energy, than cache accesses \cite{hanlearning}. \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/googlenettrends_cpu.png} \caption{Per layer performance and system energy profile for GoogleNet batch 1 inference running on CPU} \label{fig:googlenetcpu} \end{figure} \begin{equation} \label{regression} \hat{E} = x_{1} \times bus\_accesses_{conv} + x_{2} \times SIMD_{conv} \end{equation} The performance counters for SIMD instructions and bus accesses can be sampled using ARM's Streamline tool with code annotations to acquire per-layer counts. In our study, we focus only on the Conv layers for two reasons. First, the main component of a ConvNet is its Conv layers, where current models, as given in \autoref{model}, tend to have multiple Conv layers stacked on top of each other; this gives sufficient data points to build a sophisticated prediction model. Second, current research focuses on optimizing the Conv layers as they occupy 95\% of the execution time \cite{cnngpuperformance}, which implies that a model predicting the cost of adding a Conv layer to the overall performance and energy consumption of a ConvNet should prove useful. The assumption here is that in our system, the Conv layers leverage standard optimized OpenBLAS routines to perform Matrix-Matrix multiplications on the CPU and the CuDNN library on the GPU. For our CPU-based energy prediction model, we restrict the inferences to single-threaded executions on the CPU. One could think of more sophisticated models based on multi-threaded executions or on a GPU. Similar experiments could be carried out for the GPU, where additional runs with tools such as \textit{nvprof} would be required to obtain GPU-specific performance counters and correlate these with the energy values collected from separate runs.
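For reference, a no-intercept fit of \autoref{regression} can be obtained with a standard least-squares routine. The sketch below is illustrative rather than our exact training procedure; the inputs are the measured per-ConvNet totals for four of the models, taken from Tables \ref{tablecompare1}--\ref{tablecompare3}.

\begin{verbatim}
import numpy as np

# No-intercept least-squares fit of Equation (regression):
#   E_hat = x1 * bus_accesses_conv + x2 * SIMD_conv
# Measured totals for alexNet, resNet50, squeezeNet and googleNet,
# taken from Tables "tablecompare1"-"tablecompare3".
bus  = np.array([12635625, 61100440, 19929941, 28927569], float)
simd = np.array([166326858, 936965249, 212510630, 383528521], float)
E    = np.array([930.44, 5261.42, 1240.29, 2072.48])  # energy (mJ)

X = np.column_stack([bus, simd])
(x1, x2), *_ = np.linalg.lstsq(X, E, rcond=None)
print(x1, x2)                    # same order as the Table coefficients
print(X @ np.array([x1, x2]))    # predicted energy per ConvNet (mJ)
\end{verbatim}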
\subsection{Energy prediction model from measured SIMD and bus accesses} \label{mv} We adopt the standard \textit{supervised learning} approach in machine learning to build a regression model \cite{mlprob}. In the training phase, the training data helps to establish a relationship, if any, between energy consumption and the two chosen performance counters. Once this prediction model is obtained, it can be tested on example ConvNet models not seen during the training phase. Initially, the models from Table \ref{model} were qualitatively selected to form a training set on the following basis: AlexNet and VggNet-small represent structurally similar models that have fewer layers but a relatively large number of parameters (62M--102M); SqueezeNet and SqueezeNetRes are an important class of ConvNets that are trained to keep the model size low without compromising accuracy (1.2M parameters, $\sim{80}$\% top-5 accuracy); GoogleNet has the best trade-off of size and accuracy (6.9M, 90.85\%); and ResNet50 represents the current state-of-the-art model in terms of accuracy in image classification tasks (93.2\%, 25M). The SqueezeNet model with deep compression was excluded from the training set, as it is equivalent to a decompressed SqueezeNet model in terms of performance characteristics (see Subsection 5.1). As a first experiment, we focused on building and evaluating the robustness of the regression coefficients by training several prediction models on this set of ConvNets. The set of prediction models was obtained by excluding a single ConvNet during the training phase. This is also known as \textit{leave-one-out cross validation} \cite{mlprob}. Table \ref{tablecompare1} shows the prediction coefficients $x_1$ and $x_2$ derived for each excluded model. Here, each model is given in Column 1. Column 2 represents the coefficient for the total bus accesses ($x_1$) and Column 3 represents the coefficient for the total SIMD counts ($x_2$) in all the Conv layers for a given ConvNet. The first row represents a single experiment to form a regression model by excluding AlexNet as a data point during training. We then used the derived coefficients to obtain a training error, which evaluates the regression model on the training set, as well as a test error on the test set (the data points excluded during training). We use the \textit{relative error} to quantify the performance of the predictor in relation to the actual raw measurements obtained, which is given by Equation \ref{relerr}: \begin{equation} \label{relerr} Rel.Err (\%) = \frac{|predicted\_value - actual\_value|}{actual\_value} \times 100 \end{equation} For example, in the first row, the average relative training error for all the ConvNets in the training set (which does not include AlexNet) is 5.36 $\pm$ 3.36\% and the test error on AlexNet is $2.23$\%. The bus access and SIMD counts are averaged over 5 separate runs and are shown in Figure \ref{busSIMD}. The corresponding average measured energy and average measured time are given in Columns 5 and 6 of Table \ref{tablecompare1}. We provide the timing measurements here for the sake of completeness. The value of each coefficient represents how strongly the dependent variable (energy) depends on the corresponding explanatory variable. We find that the coefficient for bus accesses contributes more to the energy consumption, which is consistent with the fact that accessing main memory costs more energy than executing a SIMD operation \cite{hanlearning}.
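A sketch of this leave-one-out procedure, reusing the design matrix $X$ and energy vector $E$ from the fitting sketch above (the variable names are ours):

\begin{verbatim}
import numpy as np

def rel_err(pred, actual):
    # Equation (relerr), in percent.
    return abs(pred - actual) / actual * 100.0

def loocv(X, E, names):
    # Refit the no-intercept model with one ConvNet held out at a time.
    for i in range(len(E)):
        mask = np.arange(len(E)) != i
        coef, *_ = np.linalg.lstsq(X[mask], E[mask], rcond=None)
        train = np.mean([rel_err(x @ coef, e)
                         for x, e in zip(X[mask], E[mask])])
        test = rel_err(X[i] @ coef, E[i])
        print(f"excluding {names[i]}: "
              f"train {train:.2f}%, test {test:.2f}%")
\end{verbatim}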
For most cases, the average relative training error obtained by including all the ConvNets (or allNets) in the training set (that is, $4.81 \pm 3.19$\%) is lower than that obtained by excluding individual models. \textit{Given the scenario where we can measure performance counters such as SIMD instructions and bus accesses, we are able to predict the energy consumption of unseen test ConvNets with an average relative test error of $\sim{8}$\% compared to actual energy measurements}. Finally, to alleviate the need of having to measure SIMD and bus access counts, we explored the possibility of building prediction models for the two explanatory variables $bus\_accesses_{conv}$ and $SIMD_{conv}$ themselves. This data was then fed into our current prediction model based on \textit{allNets} to obtain a final \textit{estimate} of the energy consumption of any given ConvNet on the CPU of the Jetson TX1 platform, as discussed in the next subsections. \begin{table*}[] \centering \caption{Regression Model to predict Energy} \label{tablecompare1} \resizebox{0.9\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline & \textbf{Bus access (x1)} & \textbf{SIMD (x2)} & \textbf{\begin{tabular}[c]{@{}l@{}} Predicted \\ Energy (mJ) ($\hat{E}$)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Avg. Measured \\ Energy (E) (mJ)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Avg. Measured \\ Time (sec)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Avg. Relative \\ Train error (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Relative \\ Test error (\%)\end{tabular}} \\ \hline \textit{alexNet} & 3.37E-05 & 3.16E-06 & 951.28 & $930.44$ & 0.1682 & 5.36 $\pm$ 3.36 & 2.23 \\ \hline \textit{resNet50} & 3.89E-05 & 2.47E-06 & 4686.75 & $5261.42$ & 0.9468 & 2.03 $\pm$ 2.06 & 10.92 \\ \hline \textit{squeezeNet} & 4.09E-05 & 2.70E-06 & 1388.74 & $1240.29$ & 0.2652 & 5.26 $\pm$ 1.88 & 11.96 \\ \hline \textit{googleNet} & 3.76E-05 & 2.93E-06 & 2212.37 & $2072.48$ & 0.4228 & 5.76 $\pm$ 3.58 & 6.74 \\ \hline \textit{squeezeNetRes} & 3.30E-05 & 3.20E-06 & 1365.02 & $1371.62$ & 0.2558 & 5.66 $\pm$ 2.5 & 0.48 \\ \hline \textit{vggNet-small} & 1.27E-05 & 4.75E-06 & 3509.11 & $3027.99$ & 0.5646 & 3.41 $\pm$ 2.67 & 15.88 \\ \hline \textit{allNets} & 3.34E-05 & 3.18E-06 & \multicolumn{3}{l|}{} & 4.81 $\pm$ 3.19 & \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}} Avg. Rel. Test Error \\ (excluding a ConvNet)\end{tabular}} & \multicolumn{6}{l|}{\textbf{}} & \textbf{8.04 $\pm$ 5.96} \\ \hline \end{tabular} } \end{table*} \subsection{Predicting Conv layer SIMD counts} SIMD operations exploit the data parallelism in Matrix-Matrix multiplication to obtain higher efficiency. Since the computation in a Conv layer can be transformed into a Matrix-Matrix multiplication operation, we explore the relationship between the application MAC count and its measured SIMD instructions for every Conv layer. Given a description of the configuration of every layer in Caffe's model prototxt file (see Section 4.1), we use Equation \ref{compeq1} in Section 2 to determine the MAC count for a Conv layer. Note that for certain ConvNets (AlexNet and MobileNet) Equation \ref{compeq3} was used instead. The total MAC count for all the Conv layers in a given ConvNet is tabulated in Column 3 of Table \ref{tablecompare2}. This MAC count for all the Conv layers can then be used to build a simple linear regression model, as given in Equation \ref{simdEq}, to predict the SIMD counts for all the Conv layers in a ConvNet.
The data corresponding to the measured SIMD ($y$) and predicted SIMD ($\hat{y}$) counts are shown in Columns 2 and 4 of Table \ref{tablecompare2}. We use the same training set from Subsection \ref{mv} to build a simple SIMD predictor from MAC counts. \begin{equation} \label{simdEq} \hat{y}= c_{1} \times x \end{equation} We obtain a slope of 0.24, as shown in Figure \ref{macSIMD}, which confirms that the SIMD width is 4 for the ARM CPUs on the Jetson TX1. \textit{Therefore, given an appropriate calculation of the MAC count from the application, we can build a SIMD predictor that obtains an average relative test error of $0.65 \pm 0.94$\% compared to actual SIMD measurements}. Considering all the ConvNets, the average relative error is $1.06 \pm 0.80$\%. \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/Bus_SIMD.png} \caption{Bus Access versus SIMD count} \label{busSIMD} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{acmart-master/figs/SIMD_MAC.png} \caption{SIMD versus MAC counts} \label{macSIMD} \end{figure} \begin{table}[] \centering \caption{SIMD prediction table} \label{tablecompare2} \resizebox{0.5\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|} \hline & \textbf{\begin{tabular}[c]{@{}l@{}}Avg. Measured (y) \\ SIMD \end{tabular}} & \textbf{MAC (x)} & \textbf{Predicted SIMD ($\hat{y}$)} & \textbf{\begin{tabular}[c]{@{}l@{}} Relative\\ error (\%)\end{tabular}} \\ \hline \textit{alexNet} & 166326858 & 665784864 & 163383605 & 1.76 \\ \hline \textit{resNet-50} & 936965249 & 3855925248 & 946244055 & 0.99 \\ \hline \textit{squeezeNet} & 212510630 & 861339936 & 211372820 & 0.53 \\ \hline \textit{googleNet} & 383528521 & 1581647872 & 388136387 & 1.20 \\ \hline \textit{squeezeNetRes} & 213932097 & 861339936 & 211372820 & 1.19 \\ \hline \textit{vgg-small} & 638627941 & 2541337632 & 623644254 & 2.34 \\ \hline \multicolumn{5}{|l|}{\textbf{Test Set}} \\ \hline \textit{MobileNet-224} & 139589662 & 567716352 & 139317592 & 0.12 \\ \hline \textit{Places-CNDS-8s} & 492978185 & 1967702016 & 482874074 & 2.04 \\ \hline \textit{ALL-CNN-C} & 66909070 & 270798336 & 66453911 & 0.37 \\ \hline \textit{Inception-BN} & 834842927 & 3400527872 & 834489539 & 0.02 \\ \hline \textbf{Avg. Relative Test Error (\%)} & \multicolumn{3}{l|}{\textbf{}} & \textbf{0.65 $\pm$ 0.94} \\ \hline \end{tabular} } \end{table} \subsection{Predicting Conv layer bus accesses} Conv layers are often preceded and succeeded by data transformation operations, such as \textit{im2col} to transform the input into a Matrix-Matrix computation and \textit{col2im} to transform it back into the original 2D image layout \cite{cudnn}. This is because the Conv layers are interleaved with pooling, relu and other layers which require data in a specific 2D format. Each ConvNet model differs in terms of how it is interleaved (see Section 2 on Conv layer topologies). Even though we can calculate the bandwidth of each Conv layer as given in Equation \ref{compeq2}, the data re-structuring and the associated complex cache memory hierarchy make the relationship between inter-layer data movement and performance counters such as cache and bus accesses non-trivial. However, somewhat surprisingly, we found that a linear relationship exists between the total number of measured bus accesses and SIMD counts in the Conv layers, which can be seen in Figure \ref{busSIMD}.
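Both this SIMD predictor and the bus-access predictor of the next step are through-origin fits of the form of Equation \ref{simdEq}. A minimal sketch using the training-set counts from Table \ref{tablecompare2} reproduces the slope quoted above:

\begin{verbatim}
import numpy as np

def fit_slope(x, y):
    # Least-squares slope of y = c * x through the origin.
    return float(x @ y) / float(x @ x)

# MAC and measured SIMD counts for alexNet, resNet-50, squeezeNet
# and googleNet (Table "tablecompare2").
mac  = np.array([665784864, 3855925248, 861339936, 1581647872], float)
simd = np.array([166326858, 936965249, 212510630, 383528521], float)

c1 = fit_slope(mac, simd)   # ~0.24, i.e. ~4 MACs per SIMD instruction
simd_hat = c1 * mac         # predicted SIMD counts
# The same fit applied to (SIMD, bus access) pairs yields the
# slope of ~0.0663 used for the bus-access predictor below.
\end{verbatim}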
Therefore, a similar linear regression predictor, as given in Equation \ref{simdEq}, was built to determine the bus access counts for the Conv layers from the measured SIMD counts. We find that a linear relationship with a slope of 0.0663 exists between the SIMD and bus access counts. We then use the previously predicted SIMD counts ($\hat{y}$) to predict the bus access counts ($\hat{z}$) for all the ConvNets, as given in Column 4 of Table \ref{tablecompare3}. \textit{For most ConvNets we are able to obtain a good prediction of the bus access counts from SIMD, with an average relative test error of $17.09 \pm 13$\%}. However, three of the ConvNets from the original test set exhibit a spike in individual relative errors, with MobileNet around $\sim 73\%$ while the other two are below 50\% (All-CNN-C around $\sim 38\%$ and squeezeNetRes around $\sim 32\%$). Including MobileNet in the prediction of bus accesses from predicted SIMD counts increases the average relative test error by 1.3x compared to excluding it. We hypothesize that MobileNet exhibits different-to-typical characteristics in its data access patterns that cannot be trivially predicted from SIMD counts alone; this is left for future exploration. \begin{table}[] \centering \caption{Bus Access prediction table} \label{tablecompare3} \resizebox{0.5\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|} \hline & \textbf{\begin{tabular}[c]{@{}l@{}}Predicted \\ SIMD ($\hat{y}$) \end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Avg. Measured \\ Bus access (z) \end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Predicted Bus\\ Access ($\hat{z}$) \end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Relative\\ error (\%)\end{tabular}} \\ \hline \textit{alexNet} & 166326858 & 12635625 & 10847037 & 14.15 \\ \hline \textit{resNet-50} & 936965249 & 61100440 & 62821142 & 2.81 \\ \hline \textit{squeezeNet} & 212510630 & 19929941 & 14033041 & 29.58 \\ \hline \textit{googleNet} & 383528521 & 28927569 & 25768374 & 10.92 \\ \hline \textit{squeezeNetRes} & 213932097 & 20600111 & 14033041 & 31.87 \\ \hline \textit{vgg-small} & 638627941 & 37448187 & 41403742 & 10.56 \\ \hline \textit{MobileNet-224} & 139589662 & 34642804 & 9249294 & 73.30 \\ \hline \textit{Places-CNDS-8s} & 492978185 & 31498902 & 32058009 & 1.77 \\ \hline \textit{ALL-CNN-C} & 66909070 & 7172165 & 4411875 & 38.48 \\ \hline \textit{Inception-BN} & 834842927 & 64169256 & 55401760 & 13.66 \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Avg. Rel. Test \\ Error w/o MobileNet (\%)\end{tabular}} & \multicolumn{3}{l|}{\textbf{}} & \textbf{17.09 $\pm$ 13} \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Avg. Rel. Test Error \\ with MobileNet (\%)\end{tabular}} & \multicolumn{3}{l|}{} & \textbf{22.71 $\pm$ 21.6} \\ \hline \end{tabular} } \end{table} \subsection{Energy prediction model from simulated data} Our final step is to estimate the energy consumption of all the Conv layers in a ConvNet directly from the application parameters, which in our case is the Conv layer MAC count. The predictions for the SIMD ($\hat{y}$) and bus access counts ($\hat{z}$) for the Conv layers can now be fed into our initial energy prediction model, given in Subsection 6.1. We consider the regression coefficients derived from using \textit{allNets}. This data is tabulated in Table \ref{tablecompare4}, where the predicted energy ($\hat{E}$) is given in Column 2.
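Putting the pieces together, the chain MAC $\rightarrow$ SIMD $\rightarrow$ bus accesses $\rightarrow$ energy reduces to a few lines. The sketch below uses the rounded slopes reported above and the \textit{allNets} coefficients from Table \ref{tablecompare1}, so its output differs slightly from the exact values in Table \ref{tablecompare4}:

\begin{verbatim}
# Chained predictor sketch: MAC -> SIMD_hat -> bus_hat -> E_hat,
# using the rounded slopes above and the allNets coefficients
# from Table "tablecompare1".
def predict_energy_mJ(mac_count, c_simd=0.24, c_bus=0.0663,
                      x1=3.34e-05, x2=3.18e-06):
    simd_hat = c_simd * mac_count        # Equation (simdEq)
    bus_hat  = c_bus * simd_hat          # bus accesses from SIMD
    return x1 * bus_hat + x2 * simd_hat  # Equation (regression)

# AlexNet's Conv-layer MAC count from Table "tablecompare2":
print(predict_energy_mJ(665784864))
# ~862 mJ with these rounded slopes (table: predicted 881, measured 930)
\end{verbatim}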
\textit{Therefore, by excluding MobileNet, we are able to predict the energy consumption of the Conv layers of any given ConvNet, solely using the MAC count, with an average relative test error of $7.08\pm 6.0\%$.} \begin{table*}[] \centering \caption{Energy Prediction Results for coeffs. $x_1 =3.34E-05$ and $x_{2} = 3.18E-06$ } \label{tablecompare4} \resizebox{0.8\textwidth}{!}{% \begin{tabular}{|l|l|l|l|l|} \hline & \textbf{\begin{tabular}[c]{@{}l@{}}Predicted \\ Energy ($\hat{E}$) (mJ)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Average Measured \\ Energy (E) (mJ)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Average Measured \\ Time (sec)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Relative \\ error (\%)\end{tabular}} \\ \hline \textit{alexNet} & 881.41 & 930.45 & 0.17 & 5.26 \\ \hline \textit{resNet-50} & 5104.76 & 5261.42 & 0.95 & 2.97 \\ \hline \textit{squeezeNet} & 1140.30 & 1240.30 & 0.27 & 8.06 \\ \hline \textit{googleNet} & 2093.90 & 2072.49 & 0.42 & 1.03 \\ \hline \textit{squeezeNetRes} & 1140.30 & 1371.62 & 0.25 & 16.86 \\ \hline \textit{vgg-small} & 3364.41 & 3028.00 & 0.56 & 11.11 \\ \hline \textit{Places-CNDS-8s} & 2604.99 & 2613.46 & 0.46 & 0.32 \\ \hline \textit{ALL-CNN-C} & 358.50 & 422.29 & 0.08 & 15.10 \\ \hline \textit{Inception-BN} & 4501.87 & 4641.14 & 0.84 & 3.00 \\ \hline \textit{MobileNet} & 751.58 & 1824.60 & 0.35 & 58.80 \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Average Relative Test \\ Error (\%) w/o MobileNet\end{tabular}} & \multicolumn{3}{l|}{\textbf{}} & \textbf{7.08 $\pm$ 6.0} \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Average Relative Test \\ Error (\%) with MobileNet\end{tabular}} & \multicolumn{3}{l|}{\textbf{}} & \textbf{17.33 $\pm$ 12.2} \\ \hline \end{tabular} } \end{table*} \section{Related Work} This section focuses on several benchmarking frameworks that exist for the evaluation of deep learning models. We provide a snapshot view of the main purpose of each framework. Most benchmarks focus on a single application domain or a specific bottleneck layer (for example, a conv layer) on a single platform, while others are generic, targeting either many application areas or different hardware systems. Moreover, most of them have relied solely on benchmarking time rather than energy to characterize the execution behaviour of the application. \textit{convnet-benchmark} \cite{convnetbench} benchmarks all public open-source implementations of ConvNets. This provides overall and layer-wise timing benchmarks of the Convolution operation. However, this benchmarking work is carried out for a single application area on a single desktop machine consisting of a 6-core Intel Core i7 CPU and an NVIDIA Titan X GPU, and does not include executions on embedded platforms. \textit{Fathom} \cite{fathom} provides a comprehensive benchmarking suite that includes state-of-the-art neural network models from both the image processing and language processing domains. The authors focus on understanding the holistic execution behaviour of these models in terms of execution time, breaking it down into high-level Tensorflow operations. All experiments were performed on the Skylake i7-6700k desktop CPU. \textit{DAWNbench} \cite{dawn} proposes an end-to-end performance evaluation of deep learning models. Rather than focusing on the performance of the computation within layers, the authors study the performance impact of the interaction between various optimization techniques during training.
For example, the impact on convergence rate during training with different batch sizes. Studies on the interaction of various energy-efficiency techniques, such as low-precision arithmetic, compression techniques and others, remain unexplored, and we envision the possibility of such studies with the development of better energy benchmarking tools along the lines presented in this work. A few studies have emerged that report the energy and performance of deep learning models on the TX1 platform \cite{nvidiawhite, eie, analysis}. However, these studies are often ad hoc, studying a limited set of deep learning models and platform-to-platform comparisons. These studies lack a consistent methodology to acquire power for the Jetson TX1 and provide minimal details of the adopted method. Our work instead develops a detailed methodology to acquire power measurements using the power sensor on-board the TX1. Approaches similar to our energy profiling approach exist that provide energy consumption at a functional level \cite{hotspots,energyml}. In \cite{energyml}, the authors use code annotations to demarcate specific phases of a decision-tree machine learning model. However, our work is aimed towards understanding the fine-grained energy consumption of neural network algorithms. BenchIP \cite{benchip} is an industry-level benchmark suite and methodology that evaluates the efficiency of deep learning workloads comprising ConvNet and Long Short-Term Memory (LSTM) models on representative platforms from the desktop, server and embedded domains. The authors evaluate each layer in isolation as well as end-to-end model executions in Caffe. It differs from our approach, where we study each layer in isolation in the context of an actual inference. To the best of our knowledge, this benchmark suite is yet to be open-sourced. While benchmarking efforts continue to grow, benchmarking remains a time-consuming effort. Therefore, energy estimation tools have been proposed to evaluate deep learning models \cite{yangdesign}. This work provides a model of energy consumption based on the number of MACs and the bandwidth, and associates a hardware cost with each to estimate the energy consumption of a neural net. However, this estimation methodology is developed for specialized dataflow implementations on a specialized hardware accelerator, ``Eyeriss'' \cite{eye}, and may not be representative for models executing on other platforms. Our work takes a similar approach, using only MAC counts and regression analysis to build an initial energy estimation model for the CPU on the Jetson TX1 platform. \section{Conclusions and Future Work} Deployment of deep learning applications on mobile and embedded platforms remains a challenge due to the limited power budgets available on such devices. Efforts to improve the energy consumption of deep learning applications have begun to emerge, ranging from the development of compact ConvNet models to building specialized hardware. Existing benchmarking efforts tend to characterize the execution behaviour of deep learning applications on high-end desktop CPUs and GPUs and often neglect embedded platforms. Therefore, we present ``SyNERGY'', a framework to enable the measurement of \textit{fine-grained} performance and energy of deep learning applications targeting embedded platforms such as the Jetson TX1.
We demonstrate a systematic methodology using the TX1's power monitoring sensor (TI INA3221x) and integrate Caffe with ARM's Streamline Performance Analyser to enable coarse-grained and fine-grained energy profiling for single-image inferences. We report energy measurements for several popular ConvNets, such as AlexNet, SqueezeNet and GoogleNet, for an entire inference, for specific layers and at different levels such as CPU, GPU and System. Through our benchmarking framework, we were able to build an initial energy prediction model using 11 representative ConvNet models. Our initial energy prediction model was based on data gathered from the SIMD and bus access CPU performance counters during actual execution runs of these models. We are able to predict the energy use of all the Conv layers in a ConvNet model with an average relative test error of $8.04 \pm 5.96$\% over actual energy measurements. Furthermore, we build upon this model to make predictions directly from the application parameters. Our predictor achieves a $17.33 \pm 12.2$\% (or $7.08 \pm 6.0$\% if we exclude MobileNet) average relative test error by using only Multiply-Accumulate (MAC) counts, calculated directly from the application description, as input to the predictors for SIMD instructions and bus accesses. Future work includes extending this energy prediction model to other layers in the model, such as fully-connected layers, so that we can make energy predictions for an \textit{entire} ConvNet. This opens up the possibility of targeting other deep learning models, such as those in natural language processing domains. We also aim to build an energy predictor for deep learning applications on other embedded platforms such as the Snapdragon. In terms of the quality of the energy predictor itself, improvements can be made by targeting other performance counters, including L1 and L2 cache accesses. Opportunities for in-depth analysis to guide the use of power management techniques, such as DVFS to reduce energy consumption at specific layers, could also be explored. \section{Acknowledgements}
\section{Introduction} Twenty years after the discovery of the first exoplanet around a Solar-type star \citep{MQ1995}, the rapidly growing population of detected low-mass extra-solar planets has become considerable and their number will keep increasing, which generates a strong need \mybf{for} theoretical modeling and predictions. Because the mass of such planets is not large enough to allow them to accrete a voluminous gaseous envelope, they are mainly composed of a telluric core. In many cases \citep[e.g. 55 Cnc e, see][]{Demory2016}, this core is \mybf{covered} by a thin atmospheric layer as observed on the Earth, Venus and Mars. The \mybf{rotation} of these bodies strongly affects their surface temperature equilibrium and atmospheric global circulation \citep[][]{FL2014}. Therefore, it is a key quantity for understanding their climate, particularly in the \mybf{so-called} ``habitable zone'' \mybf{\citep[][]{Kasting1993}}. The rotational dynamics of super-Venus and Venus-like planets is driven by the tidal torques exerted both on the rocky and atmospheric layers \citep[see][]{Correia2008}. The solid torque, which is induced by the gravitational tidal forcing of the host star, tends to despin the planet and pull it back to synchronization. The atmospheric torque is the sum of two contributions. The first one, caused by the gravitational tidal potential of the star\mybf{,} acts on the spin in a similar way to the solid tide. The second one, called the ``thermal tide'', results from the perturbation due to the heating of the atmosphere by the stellar bolometric flux. The torque induced by this component is in opposition to those gravitationally generated. Therefore, it pushes the angular velocity of the planet away from synchronization. Although the mass of the atmosphere often represents a negligible fraction of the mass of the planet (denoting $ f_{\rm A} $ this fraction, we have $ f_{\rm A} \sim 10^{-4} $ for Venus), thermal tides can be of the same order of magnitude as, and even stronger than, solid tides \citep{DI1980}. This competition naturally gives birth to prograde and retrograde rotation equilibria in the semi-major axis range defined by the habitable zone, in which Venus-like planets are submitted to gravitational and thermal tides of comparable intensities \citep{GS1969,DI1980,CL01}. Early studies of this effect were based on analytical models developed for the Earth \citep[e.g.][]{CL70} that present a singularity at synchronization, while \citet{CL01} avoid this drawback by a smooth interpolation annulling the torque at synchronization. Only recently has the atmospheric tidal perturbation been computed numerically with Global Circulation Models (GCM) \citep[][]{Leconte2015}, and analytically by \citet{ADLM2016} (P1), who generalized the reference work of \cite{CL70} by including the dissipative processes (radiation, thermal diffusion) that regularize the behaviour of the atmospheric tidal torque at the synchronization. Here, we revisit the equilibrium rotation of super-Earth planets based on the atmospheric tide model presented in P1. For the solid torque, we use the simplest physical prescription, a Maxwell model \citep{RMZL2012,Correia2014}, because the rheology of these planets is unknown. \section{Tidal torques} \subsection{Physical set-up} For simplicity, we consider a planet in a circular orbit of radius $a$ and mean motion $n$ around a star of mass $ M_* $ and luminosity $ L_* $ exerting on the planet thermal and gravitational tidal forcing (Fig~\ref{fig:schema1}).
The planet, of mass $ M_{\rm P} $ and spin $ \Omega$, has zero obliquity so that the tidal frequency is $ \sigma = 2 \omega $, where $ \omega = \Omega - n $. It is composed of a telluric core of radius $ R $ covered by a thin atmospheric layer of mass $ M_{\rm A} = f_{\rm A} M_{\rm P} $ and pressure height scale $ H $ such that $ H \ll R $. This fluid layer is assumed to be homogeneous in composition, of specific \mybf{gas} constant $ \mathcal{R}_{\rm A} = \mathcal{R}_{\rm GP} / m $ ($ \mathcal{R}_{\rm GP} $ and $ m $ being the perfect \mybf{gas} constant and the mean molar mass respectively), \mybf{in hydrostatic equilibrium and subject to convective instability, i.e. $ N \approx 0 $, $ N $ designating the Brunt-Väisälä frequency, as observed on the surface of Venus \citep[][]{Seiff1980}}. Hence, the pressure height scale is \begin{equation} H = \frac{\mathcal{R}_{\rm A} T_0}{g}, \end{equation} where $ T_0 $ is the equilibrium \mybf{surface} temperature of the atmosphere and $ g $ the surface gravity, which are related to the equilibrium radial distributions of density $ \rho_0 $ and pressure $ p_0 $ by $ p_0 = \rho_0 g H $. We introduce the first adiabatic exponent of the \mybf{gas} $ \Gamma_1 = \left( \partial \ln p_0 / \partial \ln \rho_0 \right)_{\rm S} $ (the subscript $ S $ denoting the specific macroscopic entropy) and the parameter \mybf{$ \kappa = 1 - 1 / \Gamma_1 $}. The radiative losses of the atmosphere, \mybf{treated as a Newtonian cooling}, \mybf{lead us} to define a radiative frequency $ \sigma_0 $, given by \begin{equation} J_{\rm rad} = C_p \sigma_0 \delta T, \end{equation} where \mybf{$ C_p = \mathcal{R}_{\rm A} / \kappa $ is the thermal capacity of the atmosphere per unit mass and} $ J_{\rm rad} $ is the radiated power per unit mass caused by the temperature variation $ \delta T $ around the equilibrium state. \mybf{It shall be noticed here that the Newtonian cooling describes the radiative losses of optically thin atmospheres, where the radiative transfers between layers can be ignored. We apply this modeling to optically thick atmospheres, such as that of Venus, because the numerical simulations by \cite{Leconte2015} show that it can also describe tidal dissipation well in these cases, with an effective Newtonian cooling frequency.} For more details about this physical set-up, we refer the reader to P1. \begin{figure}[htb] \centering {\includegraphics[width=0.475\textwidth] {ADLM2016_fig1.pdf} \textsf{\caption{\label{fig:schema1} Tidal elongation of a Venus-like rotating planet, composed of a solid core (brown) and a gaseous atmosphere (blue), and submitted to gravitational and thermal forcings.} }} \end{figure} \subsection{Atmospheric tidal torque} For null obliquity and eccentricity, the tidal gravitational potential is reduced to the quadrupolar term of the multipole expansion \citep{Kaula1964}, $ U = U_2 \left( r \right) P_2^2 \left( \cos \theta \right) e^{i \left( \sigma t + 2 \varphi \right) } $, where $ t $ is the time, $ \varphi $ the longitude, $ \theta $ the colatitude, $ r $ the radial coordinate, $ P_2^2 $ the normalized associated Legendre polynomial of order $ \left( l , m \right) = \left( 2 , 2 \right) $ and $ U_2 $ its radial function. Since $ H \ll R $, $ U_2 $ is assumed to be constant. \mybf{The thick atmosphere of \cora{a Venus-like} planet absorbs most of the stellar flux in its upper regions, which are, as a consequence, strongly thermally forced; only 3 \% of the flux reaches the surface \citep[][]{PY1975}.
However, the tidal effects resulting from the heating by the ground determine the tidal mass redistribution, since the atmosphere is far denser near the surface than in the upper regions \citep[][]{DI1980,SZ1990}. These layers can also be considered to be in solid-body rotation with the solid part as a first approximation, because the velocity of horizontal winds is less than $ 5 \ {\rm m.s^{-1}} $ below an altitude of 10 km \citep[][]{Marov1973}. So,} introducing the mean power per unit mass $ J_2 $, we choose for the thermal forcing \mybf{the ground-heating distribution $ J = J_2 \tau_{\rm J} e^{- \tau_{\rm J} x} P_2^2 \left( \cos \theta \right) e^{i \left( \sigma t + 2 \varphi \right)} $, where $ \tau_{\rm J} \gg 1 $ represents the damping rate of the heating with altitude and depends on the vertical thermal diffusion of the atmosphere at the surface \citep[e.g.][]{CL70}}. This allows us to establish for the atmospheric tidal torque the expression \mybf{(see P1, Eq. 174)} \begin{equation} \hat{\mathcal{T}}_{\rm A} = 2 \pi \kappa \frac{R^2 \rho_0 \left( 0 \right)}{g} U_2 J_2 \frac{\sigma}{\sigma^2 + \sigma_0^2}, \label{Torque_a1} \end{equation} where $ \rho_0 \left( 0 \right) $ is the density at the ground. \pcor{This function of $ \sigma $ is of the same form as the one given by \cite{ID1978} (Eq.~4).} The quadrupolar tidal gravitational potential and heat power are given by \mybf{(P1, Eq.~74)} \begin{equation} \begin{array}{rcl} \displaystyle U_2 = \frac{1}{2} \sqrt{\frac{3}{2}} \frac{R^2 \mathcal{G} M_*}{a^3} & \mbox{and} & \displaystyle J_2 = \frac{1}{2} \sqrt{\frac{3}{2}} \frac{\alpha \varepsilon R^2 L_*}{M_{\rm A} a^2}, \end{array} \end{equation} where $ \mathcal{G} $ designates the gravitational constant, $ \varepsilon $ the effective fraction of power absorbed by the atmosphere and $ \alpha $ a shape factor depending on the spatial distribution of tidal heat sources\footnote{ Denoting $ \mathcal{F} \left( \Psi \right) $ the distribution of tidal heat sources as a function of the stellar zenith angle $ \Psi $, the parameter $ \alpha $ is defined as $ \alpha = \int_0^{\pi} \mathcal{F} ( \Psi ) P_2 ( \cos \Psi ) \sin \Psi d \Psi $, where $ P_2(X)=\sqrt{5/8} ( 3 X^2 - 1 ) $ is the normalized Legendre polynomial of order $ n = 2 $. If we assume a heat source of the form $\mathcal{F} \left( \Psi \right) = \cos \Psi$ if $\Psi \in [ 0,\frac{\pi}{2} ]$, and $\mathcal{F} \left( \Psi \right) = 0$ else, we get the shape factor $\alpha = 1/8\sqrt{5/2} \approx 0.19$. }. The tidal torque exerted on the atmosphere $\hat{\mathcal{T}}_{\rm A}$ is partly transmitted to the telluric core. The efficiency of this dynamical (viscous) coupling between the two layers is weighted by a parameter $ \beta $ ($ 0 \leq \beta \leq 1 $). Hence, with $ \omega_0 = \sigma_0 / 2 $, the transmitted torque ${\mathcal{T}}_{\rm A} = \beta\hat{\mathcal{T}}_{\rm A}$ (Eq.~\ref{Torque_a1}) becomes \begin{equation} \mathcal{T}_{\rm A} = \frac{3}{32} \frac{\kappa \beta R^4 \mathcal{G} M_* \alpha \varepsilon L_* }{\mathcal{R}_{\rm A} T_0 \, a^5} \frac{\omega}{\omega^2 + \omega_0^2}. \label{tideA} \end{equation} \subsection{Solid tidal torque} For simplicity, and because of the large variety of possible rheologies, we assume that the telluric core behaves like a damped oscillator.
In this framework, called \mybf{the} ``Maxwell model'', the tidal torque exerted on a homogeneous body can be expressed as \citep[e.g.][]{RMZL2012} \begin{equation} \mathcal{T}_{S} = \frac{3 }{4} \frac{\mathcal{G} M_*^2 R^5}{a^6} \Im \left\{ k_2 \right\} ,\ \mbox{with} \ \Im \left\{ k_2 \right\} = - \frac{3 K}{2 } \frac{ \sigma_{\rm M} \sigma }{ \sigma_{\rm M}^2 + \left( 1 + K \right)^2 \sigma^2}, \end{equation} where $ \Im \left\{ k_2 \right\} $ denotes the imaginary part of the second-order Love number ($ k_2 $), $ K $ a non-dimensional rheological parameter, and $ \sigma_{\rm M} $ the relaxation frequency of the material composing the body, with \begin{equation} \begin{array}{rcl} \displaystyle K = \frac{38 \pi}{3} \frac{ R^4 G}{\mathcal{G} M_{\rm P}^2} & \mbox{and} & \displaystyle \sigma_{\rm M} = \frac{G}{\eta}, \end{array} \end{equation} where $ G $ and $ \eta $ are the effective shear modulus and viscosity of the telluric core. Finally, introducing the frequency $ \omega_{\rm M} = \sigma_{\rm M}/ \left( 2 + 2 K \right) $, we obtain \begin{equation} \mathcal{T}_{S} = - \frac{9}{8} \frac{\mathcal{G} M_*^2 R^5 K \omega_{\rm M}}{ \left( 1 + K \right) a^6} \frac{\omega}{\omega^2 + \omega_{\rm M}^2}. \label{tideS} \end{equation} \section{Rotational equilibrium states} \subsection{Theory} The total torque exerted on the planet is the sum of the two previous contributions: $\mathcal{T}_{\rm S+A} = \mathcal{T}_{\rm S} + \mathcal{T}_{\rm A}$. When the atmospheric and solid torques are of the same order of magnitude, several \mybf{equilibria} can exist, corresponding to $ \mathcal{T}_{\rm S+A} = 0 $ (Fig.~\ref{fig:exemple_couples}). The synchronization is given by $ \Omega_0 = n $ and the non-synchronized retrograde and prograde states of equilibrium, denoted $ \Omega_{-} $ and $ \Omega_{+} $ respectively, are expressed as functions of $ a $ (Eqs.~\ref{tideA},\ref{tideS}) \begin{equation} \begin{array}{rcl} \Omega_{- } \left( a \right) = n - \omega_{\rm eq} \left( a \right) & \mbox{and} & \Omega_{+} \left( a \right) = n + \omega_{\rm eq} \left( a \right) , \end{array} \label{Omega_pm} \end{equation} \mybf{where the difference to synchronization, $ \omega_{\rm eq} $, is given by} \begin{equation} \begin{array}{rcl} \displaystyle \omega_{\rm eq} \left( a \right) = \omega_{\rm M} \sqrt{ \frac{a - A \omega_0^2 / \omega_{\rm M}}{A \omega_{\rm M} - a} } & \! \! \! \mbox{with} & \! \! \! \displaystyle A = 12 \frac{ \mathcal{R}_{\rm A} T_0 M_* R K}{\alpha \varepsilon L_* \kappa \beta \left( 1 + K \right)}. \end{array} \label{weq} \end{equation} \begin{figure}[] \centering {\includegraphics[width=0.475\textwidth] {ADLM2016_fig2.pdf} \textsf{\caption{\label{fig:exemple_couples} Normalized total tidal torque exerted on a Venus-like planet (in red) and its solid (in green) and atmospheric (in blue) components as functions of the forcing frequency $ \omega/n $ with $ \omega_0/n = 2 $ and $ \omega_{\rm M} / n = 6 $.} }} \end{figure} \mybf{As a first approximation, we consider that the parameters $ A $, $ \omega_0 $ and $ \omega_{\rm M} $ do not vary with the star-planet distance.
In this framework,} the equilibrium of synchronization is defined for all orbital \mybf{radii}, \mybf{while} those associated with Eq.~(\ref{weq}) only exist in a zone delimited by $ a_{\rm inf} < a < a_{\rm sup} $, \mybf{the boundaries being} \begin{equation} \begin{array}{rcl} a_{\rm inf} = A \omega_0 \lambda^{-1} & \mbox{and} & a_{\rm sup} = A \omega_0 \lambda \end{array} \label{abd} \end{equation} with the ratio $ \lambda = \max \left( \omega_0/\omega_{\rm M} ,\omega_{\rm M}/\omega_0 \right) $. In the particular case where $ \omega_0 = \omega_{\rm M} $, the distance \mybf{$ a_{\rm eq} = A \omega_0 $} corresponds to an orbit for which the atmospheric and solid tidal torques counterbalance each other whatever the angular velocity ($ \Omega $) of the planet. Studying the derivative of $ \omega_{\rm eq} $ (Eq.~\ref{weq}), we note that when the star-planet distance increases, \begin{itemize} \item[$\bullet$] if $ \omega_{\rm M} < \omega_0 $, the pseudo-synchronized states of equilibrium get closer to the synchronization, \item[$\bullet$] if $ \omega_{\rm M} > \omega_0 $, they move away from it. \end{itemize} To determine the stability of the identified equilibria, we introduce the first order variation $ \delta \omega $ such that $ \omega = \omega_{\rm eq} + \delta \omega $ and study the sign of $ \mathcal{T}_{\rm S+A} $ in the vicinity of an equilibrium position for a given $ a $. We first treat the synchronization, for which \begin{equation} \begin{array}{rcl} \mathcal{T}_{\rm S+A} \, \propto \, {\rm sign} \left( a - a_{\rm eq} \right) \delta \omega & \mbox{with} & \displaystyle a_{\rm eq} = A \frac{\omega_0^2}{\omega_{\rm M}}. \end{array} \label{stab1} \end{equation} Note that $ a_{\rm eq} = a_{\rm inf} $ if $ \omega_0 < \omega_{\rm M} $ and $ a_{\rm eq} = a_{\rm sup} $ \mybf{otherwise}. In the vicinity of the non-synchronized equilibria, we have \begin{equation} \mathcal{T}_{\rm S+A} \, \propto \, n^4 \omega_{\rm M} \frac{\omega_{\rm eq}^2 \left( \omega_0^2 - \omega_{\rm M}^2 \right)}{\left( \omega_{\rm eq}^2 + \omega_{\rm M}^2 \right)^2 \left( \omega_{\rm eq}^2 + \omega_0^2 \right) } \delta \omega. \label{stab2} \end{equation} Therefore, within the interval $ a \in \left] a_{\rm inf} , a_{\rm sup} \right[ $, \begin{itemize} \item[$\bullet$] if $ \omega_{\rm M} < \omega_0 $, the synchronized state of equilibrium is stable and the pseudo-synchronized ones are unstable, \item[$\bullet$] if $ \omega_{\rm M} > \omega_0 $, the synchronized state of equilibrium is unstable and the pseudo-synchronized ones are stable. \end{itemize} For $ a < a_{\rm inf} $ the gravitational tide predominates and the equilibrium at $ \Omega = n $ is stable. \mybf{It becomes unstable for $a > a_{\rm sup}$, because the torque is driven by the thermal tide. } \begin{figure*}[htb] \centering {\includegraphics[width=0.30\textwidth]{ADLM2016_fig3a.pdf} \hspace{0.5cm} \includegraphics[width=0.30\textwidth]{ADLM2016_fig3b.pdf} \hspace{0.5cm} \includegraphics[width=0.30\textwidth]{ADLM2016_fig3c.pdf} \\[0.5mm] \includegraphics[width=0.30\textwidth]{ADLM2016_fig3d.pdf} \hspace{0.5cm} \includegraphics[width=0.30\textwidth]{ADLM2016_fig3e.pdf} \hspace{0.5cm} \includegraphics[width=0.30\textwidth]{ADLM2016_fig3f.pdf} \textsf{\caption{\label{fig:Venus} {\it Top:} Solid (left panel), atmospheric (middle panel) and total tidal torque (right panel) exerted on a Venus-like planet as functions of the reduced forcing frequency $ \omega / n $ (horizontal axis) and orbital radius $ a $ in logarithmic scale (vertical axis).
The color level corresponds to the torque in logarithmic scale with isolines at $ \log_{10} \left( \mathcal{T} \right) = 8, 10, 12, \ldots, 26 $. {\it Bottom:} Sign of the solid (left panel), atmospheric (middle panel) and total (right panel) tidal torque as functions of the same parameters: white (blue) areas are associated with positive (negative) torques. Stable (unstable) states of equilibrium are \cora{designated} by blue (red) lines \mybf{(with $ A = 1.88 \times 10^{19} \ {\rm m.s} $)}. The pink band corresponds to the habitable zone for the black body equilibrium temperature $ T_{\rm eq} = 288 {\rm K} \pm 20 \% $ for a $ 1 M_\sun $ Solar-type star at the age of the Sun.} }} \end{figure*} \subsection{Comparison with previous works} \label{subsec:comparison_works} In many early studies dealing with the spin rotation of Venus, the effect of the gravitational component on the atmosphere is ignored, so that the atmospheric tide corresponds to a pure thermal tide \citep[e.g.][]{DI1980,CL01,Correia2003,Correia2008,Cunha2015}. Moreover, the torque induced by the tidal elongation of the solid core is generally assumed to be linear in these works. This amounts to considering that $ \left| \omega \right| \ll \omega_{\rm M} $ and $ \omega_0 \ll \omega_{\rm M} $. Hence, the relation Eq.~(\ref{weq}) giving the positions of the non-synchronized equilibria can be simplified: \begin{equation} \displaystyle \omega_{\rm eq} = \sqrt{ \frac{\omega_{\rm M}}{A} a - \omega_0^2 }. \end{equation} Following \cite{Correia2008}, we take $ \omega_0 = 0 $. We then obtain $ \omega_{\rm eq} \, \propto \, \sqrt{a} $ while \cite{Correia2008} found $ \omega_{\rm eq} \, \propto \, a $. This difference can be explained by the linear approximation of the \mybf{sine} of the phase lag made in this previous work. Given that the condition $ \omega_0 < \omega_{\rm M} $ is satisfied, the non-synchronized states of equilibrium are stable in this case. By using a Global Circulation Model (GCM), \cite{Leconte2015} obtained numerically for the atmospheric tidal torque an expression similar to the one given by Eq.~(\ref{Torque_a1}). They computed with this expression the possible states of equilibrium of Venus-like planets by applying to the telluric core the Andrade and constant-Q models and showed that for both models, \cora{up to} five equilibrium positions could appear. This is not the case when the Maxwell model is used, as shown by the present work. Hence, although these studies agree well on the existence of several non-synchronized states of equilibrium, they also remind us that the theoretical number and positions of \cora{these equilibria} depend on the models used to compute tidal torques and more specifically on the rheology chosen for the solid part. \begin{table}[h] \centering \begin{tabular}{ l l l } \hline \hline \textsc{Parameters} & \textsc{Values} & \textsc{Units} \\ \hline $ \mathcal{G} $ & $ 6.67384 \times 10^{-11} $ & $ {\rm m^3. kg^{-1}. s^{-2} } $ \\ $ \mathcal{R}_{\rm GP} $ & $ 8.314 $ & $ {\rm J.mol^{-1}.K^{-1} } $ \\ $ L_* $ & $ 3.846 \times 10^{26} $ & ${\rm W}$ \\ $ \mathcal{G} M_* $ & $ 1.32712 \times 10^8 $ & ${\rm km^3.s^{-2} }$ \\ $ R $ & $ 6051.8 $ & $ {\rm km} $ \\ $ M_{\rm P} $ & $ 4.8676 \times 10^{24} $ & $ {\rm kg} $ \\ $ f_{\rm A} $ & $ 10^{-4} $ & -- \\ $ \mathcal{M}_{\rm A} $ & $ 43.45 $ & $ {\rm g.mol^{-1}} $ \\ $ T_0 $ & $ 737 $ & K \\ $ \kappa \ $ & $ 0.286 $ & -- \\ $ \omega_0 $ & $ 3.77 \times 10^{-7} $ & $ {\rm s^{-1}} $\\ $ \alpha $ & $ 0.19 $ & -- \\ $ \beta $ & $ 1.0 $ & -- \\ $ \varepsilon $ & $ 0.04 $ & -- \\ $ G $ & $ 10^{11} $ & Pa \\ $ \omega_{\rm M}$ & $ 1.075 \times 10^{-4} $ & ${\rm s^{-1}}$ \\ \hline \end{tabular} \textsf{\caption{\label{parameters} Numerical values used in the case of Venus-like planets. Parameters from $ \mathcal{G} $ to $ T_0 $ are given by NASA fact sheets\protect\footnotemark and \cite{codata2010}, the value of $ \kappa $ corresponds to \mybf{a} perfect gas, $ \omega_0 $ is the radiative frequency prescribed by \cite{Leconte2015} for Venus, $ \alpha $ is computed for a tidal heat power per unit mass proportional to $ \cos \Psi $ on the day side and equal to zero on the night side, the atmospheric tidal torque is assumed to be entirely transmitted to the telluric core ($ \beta = 1 $), $ \varepsilon $ is consistent with the estimation of \cite{DI1980} for the effective heating of Venus' atmosphere by the ground, and \mybf{we take for} $ G $ the shear modulus of silicates \citep[][]{RMZL2012}. }} \end{table} \section{The case of Venus-like planets} We now illustrate the previous results for Venus-like planets (Table~\ref{parameters}). The frequency $ \omega_{\rm M} $ is adjusted so that the present angular velocity of Venus corresponds to the retrograde state of equilibrium identified in the case where the condition $ \omega_{\rm M} > \omega_0 $ is satisfied ($ \Omega_{-} $ in Eq.~\ref{Omega_pm}). In Fig.~\ref{fig:Venus}, we plot the resulting tidal torque and its components, as well as their signs, as functions of the tidal frequency and orbital radius. These maps show that the torque varies over a very large range, particularly the solid component ($ \mathcal{T}_{\rm S} \, \propto \, a^{-6} $ while $ \mathcal{T}_{\rm A} \,\propto \, a^{-5} $). The combination of the solid and atmospheric responses generates the non-synchronized states of equilibrium observed on the bottom left panel, which are located in the interval $ \left] a_{\rm inf} , a_{\rm sup} \right[ $ (see Eq.~\ref{abd}) and move away from the synchronization when $ a $ increases (see Eq.~\ref{weq}), as predicted analytically. For illustration, \mybf{in Fig.~\ref{fig:explo}}, \cora{we show} the outcome of \cora{a different value} of $ \omega_{\rm M} $, with $ \omega_0 > \omega_{\rm M} $, contrary to \mybf{Fig.~\ref{fig:Venus}} where $ \omega_{\rm M} > \omega_0 $. We observe the behaviour predicted analytically in Sect.~3. In Fig.~\ref{fig:Venus}, the non-synchronized equilibria are stable and move away from the synchronization \cora{when $ a $ increases, but} they are unstable in Fig.~\ref{fig:explo}. \mybf{Note that the value of the solid Maxwell frequency obtained for stable non-synchronized states of equilibrium, $ \omega_{\rm M} = 1.075 \times 10^{-4} \ {\rm s^{-1}} $, is far higher than those of typical solid bodies \citep[$\omega_{\rm M} \sim 10^{-10} \ {\rm s^{-1}}$, see][]{Efroimsky2012}. \cora{The Maxwell model, because of its decreasing rate as a function of $\omega$ (i.e.
$\propto \, \omega^{-1}$), underestimates the tidal torque for tidal frequencies greater than $\omega_{\rm M}$, which leads to an overestimate of the Maxwell frequency when equating the atmospheric and solid torques}. Using the Andrade model for the solid part \cora{could give} more realistic values of $ \omega_{\rm M} $\cora{,} as proved numerically by \cite{Leconte2015}, \cora{because the decreasing rate of the torque is lower in the Andrade model than in the Maxwell model (i.e. $\propto \, \omega^{-\alpha}$ with $\alpha=0.2-0.3$).} } \mybf{Finally, we must discuss the assumption we made when we supposed that the parameters of the system do not depend on the star-planet distance. The surface temperature of the planet and the radiative frequency actually vary with $ a $. If we consider that $ T_0 $ is determined by the balance between the heating of the star and the black body radiation of the planet, then $ T_0 \, \propto \, a^{-1/2} $. As $ \sigma_0 \, \propto \, T_0^3 $, we have $ \sigma_0 \, \propto \, a^{-3/2} $. These new dependences modify neither the expressions of $ \omega_{\rm eq} $ (Eq.~\ref{weq}), nor the stability conditions of the states of equilibrium (Eqs.~\ref{stab1} and \ref{stab2}). However, they have repercussions on the boundaries of the region where non-synchronized states exist. These changes are illustrated by Fig.~\ref{fig:explo} (bottom \cora{panel}), which shows the stability diagram of Fig.~\ref{fig:Venus} (bottom left \cora{panel}) computed with the functions $ T_0 \left( a \right) = T_{0;\Venus} \left( a / a_\Venus \right)^{-1/2} $ and $ \sigma_0 \left( a \right) = \sigma_{0;\Venus} \left( a / a_\Venus \right)^{-3/2} $ ($ a_\Venus $, $ T_{0;\Venus} $ and $ \sigma_{0;\Venus} $ being the semi-major axis of Venus and the constant temperature and radiative frequency of Table~\ref{parameters} respectively). } \begin{figure}[htb] \centering { \includegraphics[width=0.4\textwidth]{ADLM2016_fig4a.pdf} \includegraphics[width=0.4\textwidth]{ADLM2016_fig4b.pdf} \textsf{\caption{\label{fig:explo} \mybf{Sign of the total tidal torque as a function of the tidal frequency $ \omega / n $ (horizontal axis) and orbital radius ($a$) (vertical axis) for two different cases. {\it Top:}} with $ \omega_0 = 3.77 \times 10^{-7} \ {\rm s^{-1}} $ and $ \omega_{\rm M} = 3.7 \times 10^{-8} \ {\rm s^{-1}} $ ($ \omega_0 > \omega_{\rm M} $). \mybf{{\it Bottom:} with $ T_0 $ and $ \omega_0 $ depending on the star-planet distance \mybf{and $ \omega_{\rm M} > \omega_0 $}. White (blue) areas correspond to positive (negative) torque. Blue (red) lines \cora{designate} stable (unstable) states of equilibrium.} }}} \end{figure} \footnotetext{ \url{http://nssdc.gsfc.nasa.gov/planetary/factsheet/}} \section{Discussion} A physical model for solid and atmospheric tides \cora{allows us} to determine analytically the possible rotation equilibria of Venus-like planets and their stability. Two regimes exist depending on the hierarchy of the characteristic frequencies associated with dissipative processes (viscous friction in the solid layer and thermal radiation in the atmosphere). These dissipative mechanisms have an impact on the stability of non-synchronized equilibria, which can appear in the habitable zone since they are caused by solid and atmospheric tidal torques of comparable intensities. This study can be used to constrain the equilibrium rotation of observed super-Earths and therefore to infer the possible climates of such planets. Note that the Maxwell rheology was used here.
This work can be directly applied to alternative rheologies such as the Andrade model \citep[][]{Efroimsky2012}. \mybf{The modeling used here is based on an approach where the atmosphere is assumed to rotate uniformly with the solid part of the planet \citep{ADLM2016}. As general circulation is likely to play an important role in the atmospheric tidal response, we plan to examine the effect of differential rotation and the corresponding zonal winds on the tidal torque in future studies.} \begin{acknowledgements} P. Auclair-Desrotour and S. Mathis acknowledge funding by the European Research Council through ERC grant SPIRE 647383. This work was also supported by the ``exoplanètes'' action from Paris Observatory and by the CoRoT/{\it Kepler} and PLATO CNES grant at CEA-Saclay. A. C. M. Correia acknowledges support from CIDMA strategic project UID/MAT/04106/2013. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction}\label{sec:intro} Open star clusters (OCs) are stellar systems formed in giant molecular clouds (GMCs) located in the disk of the Milky Way \citep[e.g.,][]{lada2003}. Unlike their compact halo counterparts (globular clusters), the stellar members of OCs have a looser spatial distribution, hence the name ``open''. The formation and evolution of OCs is closely related to Galactic star formation. Studying the spatial distribution of stars in OCs therefore provides an opportunity to uncover the mechanisms and conditions of star cluster formation in the Galaxy. The earliest morphological study of OCs dates back a century \citep{jeans1916}. In the decades that followed, further systematic studies were carried out; notable studies include those of \citet{oort1979} and \citet{bergond2001}. They investigated the spatial distribution of a handful of nearby OCs, and found that the flattening of the projected shape of OCs tends to be parallel to the Galactic plane. The pioneering work of \citet{chen2004} determined the 2D morphology of 31 nearby OCs using 2MASS infrared photometry; they took an important first step in the statistical investigation of OC morphology. However, they did not reach a firm quantitative conclusion due to the challenges arising from membership determination. Differences between the morphologies of young and old OCs were identified by \citet{jones2002}. Young OCs tend to have a higher degree of substructure. \citet{Sanchez2009} found that clusters with fractal-like structures are generally younger than clusters with smooth radial density profiles. \citet{kounkel2019} identified several hundred filamentary structures younger than 100~Myr, most of which were associated with nearby OCs. One string-like structure in \citet{kounkel2019}, for example, hosts two coeval open clusters, NGC\,2232 and LP\,2439 \citep{pang2020}. The extended substructures of young OCs are thought to have been inherited from the primordial shape of the parental GMCs \citep{ballone2020}, in which star formation takes place along the densest filamentary substructures \citep{jerabkova2019}. Most regions in GMCs are not self-gravitating, and are supported by large-scale turbulence. Elongated shapes, such as triaxial and prolate shapes, are therefore common among GMCs \citep{jones2002}. The triaxiality is consistent with the non-equilibrium state of GMCs. The dense cores in GMCs, where OCs are formed, are pulled together by self-gravity, with an observed elongated shape \citep{curry2002}. After the first stars have formed, the gas surrounding the OCs is rapidly removed by stellar radiation \citep{krumkolz2009,dinnbier2020c}, stellar winds \citep{weaver1977}, and/or supernovae \citep{mckee1977}. Stars that escape from the cluster after gas expulsion reduce the gravitational potential of the cluster, and form a tidal ``tail~I'' \citep[following the definition and nomenclature of][]{dinnbier2020b}. Expansion has been observed in very young OCs with ages less than 5\,Myr \citep{kuhn2019}, as well as in young clusters that are tens of millions of years old \citep{brandner2008,bravi2018,getman2018, karnath2019, pang2020}. Simultaneously, the stellar members of an OC interact with each other through two-body relaxation, which results in the observed ``mass segregation'' in star clusters \citep{hillenbrand1998,pang2013, tang2018}, in which low-mass stars are dispersed to the outskirts of the cluster and massive stars tend to migrate to the central region of the cluster.
Consequently, a dense core will form, while low-mass stars continue to escape from the cluster, mainly at low speeds through Lagrange points \citep{kupper2008}, and form an S-shaped tidal ``tail~II'' \citep[following the nomenclature of][]{dinnbier2020b}. The reduction of cluster members further lowers the gravitational potential, which results in expansion of OCs and consequently a lower stellar number density. \citet{chen2004} proposed that the internal relaxation process causes the inner part of a cluster to evolve into a spherical spatial distribution. As the Galactic disk is abundant in stars, spiral arms, and GMCs, OCs are subjected to external tidal perturbations, such as disk shocks, spiral arm passages, and encounters with molecular clouds \citep{spitzer1958,lamers2005,kruijssen2012}. Stars escape the cluster as a consequence of gas expulsion, close encounters, or evaporation; exposed to the Coriolis force produced by the Galactic tidal field, they migrate to more tangential orbits. As a consequence, the star cluster stretches. Furthermore, when OCs cross the Galactic plane, the disk tidal field compresses them and flattens their shapes. The projected major axes of the elongated shapes of OCs are known to be aligned with the Galactic plane in most cases \citep{oort1979,bergond2001,chen2004}. As OCs evolve, their shapes continue to distort and members disperse, leading to the inevitable dissolution of the entire cluster. Expansion has been identified in old open clusters as a sign of an ongoing disruption process \citep{pang2018}. Giant tidal tails extending from OCs have been directly observed in recent years \citep{roser2019, meingast2019, tang2019, furnkranz2019, zhang2020}. These observed tidal tails are thought to be composed of both a ``tail I'', driven by gas expulsion, and a ``tail II'', driven by evaporation \citep{dinnbier2020a, dinnbier2020b}. The {\it Gaia} Early Data Release 3 \citep[EDR\,3;][]{gaia2020} has revolutionized the study of OC morphology by providing parallaxes with a 30\% higher precision and proper motions with twice the accuracy, as compared to those in the {\it Gaia} Data Release 2 \citep[DR\,2;][]{gaia2018a}. It is desirable to represent the stellar distribution of OCs in three dimensions in order to reveal the formation process and early evolution of OCs. Moreover, such an analysis should parameterize the cluster shape in an objective, quantitative, and systematic manner. In this study, we conduct a statistical analysis of the morphology of 13~OCs located within 500\,pc from the Sun (see Table~\ref{tab:general}), based on {\it Gaia} EDR\,3 data. The distances to the target clusters range between $\approx$86\,pc (Coma Berenices) and $\approx$476\,pc (NGC\,2422). The target OCs span a representative range in ages, from $\approx$25\,Myr (NGC\,2232) to $\approx$2.65\,Gyr (NGC\,6774). Among the OCs in this study, three clusters have membership determinations that were carried out in previous works: Coma Berenices \citep{tang2019}, Blanco~1 \citep{zhang2020}, and NGC\,2232 \citep{pang2020}. We are motivated to quantify the shapes of the clusters in the sample, and establish their relation to the dynamical state of each of the OCs, which is quantified with kinematic data from the literature. The present study is a pioneering work setting up the tools to quantify the 3D morphology of OCs using {\it Gaia} EDR\,3 data.
At the same time, it is reminiscent of the studies that quantified the morphology of elliptical galaxies \citep{benacchio1980,padilla2008}. The paper is organized as follows. In Section~\ref{sec:gaia_member} we discuss the quality and limitations of the {\it Gaia} EDR\,3 data, and describe our input data-set for member star identification. We then present the algorithm, \textsc{StarGO}, which is used to determine cluster members. The properties of the identified member candidates of the 13 target OCs are discussed in Section~\ref{sec:result}. The 3D morphology of the target OCs and the parameterization of the cluster shapes are presented in Section~\ref{sec:3D}, in which we reconstruct the distances with a Bayesian method (Section~\ref{sec:dis_correct}). The dynamical states of the OCs are quantified using kinematic data in Section~\ref{sec:dynamics}. In Section~\ref{sec:nbody} we compare our observational findings with $N$-body simulations. Finally, we provide a brief summary in Section~\ref{sec:summary}. \section{Data Analysis and Member identification}\label{sec:gaia_member} \subsection{Gaia EDR\,3 Data Processing and Analysis}\label{sec:target_selection} The {\it Gaia} EDR\,3 \citep{gaia2020} has provided parallaxes ($\varpi$) and proper motions (PMs; $\mu_\alpha \cos\delta, \mu_\delta$) with unprecedented precision and sensitivity for more than 1.8 billion sources with magnitudes brighter than 21~mag in the $G$~band ($330-1050$~nm). The uncertainty in the $G$~band photometry is in the range of 0.2--6~mmag for stars brighter than 20~mag. The median error of $\varpi$ ranges from $\sim$0.02--0.03~mas for bright sources ($G$\,$<$\,15~mag) to $\sim$1.3~mas for faint stars ($G$\,$\approx$\,21~mag). The corresponding uncertainties in the PMs for these sources are 0.02--0.03\,mas\,yr$^{-1}$ and 1.4\,mas\,yr$^{-1}$, respectively \citep{gaia2020}. Besides PMs, about 7.2~million stars have radial velocity (RV) measurements in the {\it Gaia} DR\,2, which are transferred to EDR\,3 \citep{torra2020,seabroke2020}. These RV measurements have a typical uncertainty of 2~$\rm km\,s^{-1}$ \citep{lindegren2018}. Unreliable or erroneous RVs in the data release have been discarded \citep[see][]{boubert2019}. The following analysis is carried out for the 13 OCs in the sample. The spatial and kinematic structures of the ten target clusters are investigated using {\it Gaia} EDR\,3 data within 100\,pc of the cluster center, which is taken from the member catalogs of \citet{liu2019} and \citet{gaia2018b}, in Cartesian Galactocentric coordinates (see definition in Appendix~\ref{sec:apx_coordinate}). In order to remove possible artifacts in the {\it Gaia} EDR\,3 from our sample, we apply a general astrometric quality cut, as described in \citet[in their Appendix~C]{lindegren2018}, which selects stars with parallaxes and photometric measurements within 10 percent uncertainty. Hereafter, we refer to this set as ``Sample~I''. Generally, the number of stars in Sample~I ranges from 122\,154 to 456\,527 for the clusters in our study. The $G$-band magnitude of the sources in Sample~I ranges between $\sim$3.0\,mag and $\sim$20.7\,mag. For most clusters in the sample, measurements become significantly incomplete for $G \ga 19$~mag. We construct a 2D density map of PMs to select stars around the over-density location of the 13 target clusters. Figure~\ref{fig:som}~(a) shows a 2D density map of PMs for the cluster NGC\,2516 as an example. This map only shows bins with over-densities $>3\sigma$ in Sample~I.
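For concreteness, the over-density selection just described can be sketched as follows. This is a minimal illustration assuming \texttt{numpy} and \texttt{scipy}, with hypothetical input arrays \texttt{pmra} and \texttt{pmdec} holding the Sample~I proper-motion components; it is not the exact code used in this work.
\begin{verbatim}
import numpy as np
from scipy.signal import convolve2d

def pm_overdensity_mask(pmra, pmdec, nbins=200, nsigma=3.0):
    """Smoothed 2D proper-motion density and >nsigma over-density mask.

    pmra, pmdec : proper-motion components of Sample I stars [mas/yr].
    """
    counts, xedges, yedges = np.histogram2d(pmra, pmdec, bins=nbins)
    # Smooth every bin with its eight neighbours (3x3 box-car mean),
    # mimicking the neighbour smoothing described in the text.
    smooth = convolve2d(counts, np.ones((3, 3)) / 9.0,
                        mode="same", boundary="symm")
    # Keep only bins lying >nsigma above zero, where sigma is the
    # standard deviation of all (smoothed) bins.
    mask = smooth > nsigma * smooth.std()
    return smooth, mask, xedges, yedges
\end{verbatim}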
Several over-densities stand out. There is an over-density near the average PMs of the cluster (indicated with a blue cross) provided by \citet{liu2019}. Over-densities associated with nearby clusters can also be seen in Figure~\ref{fig:som} (PM plots for the other twelve OCs are provided in Appendix~\ref{sec:apx_som}; see Figures~\ref{fig:som_a1}, \ref{fig:som_a2} and \ref{fig:som_a3}). In this work we focus on the target clusters and we do not investigate their neighbors. We apply a circular cut (the black circle in Figure~\ref{fig:som}~(a)) to only include the target cluster for further analysis. Note that the radius of the circle is chosen to include as many potential members as possible, while simultaneously excluding most unrelated nearby structures. The radius is therefore different for each cluster in the sample. Application of this circular cut reduces the number of candidate members for each cluster. Hereafter, we refer to this set of stars as ``Sample~II''. The number of stars in Sample~II drops to below 10\,000 for most clusters. The stars in this sample have magnitudes ranging between $G \sim 3.8$~mag and $G \sim 20.6$~mag. All samples are complete for $G\la 18-18.5$~mag. \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=1.\textwidth]{NGC2516_som.pdf} \caption{ (a)~2D density map of the proper motion vectors for the regions around NGC\,2516 in Sample~I. The blue cross indicates the mean PM of NGC\,2516 obtained from \citet{liu2019}. Each bin is smoothed using the eight neighboring bins. Only bins with a number count $>3\sigma$ are shown, where $\sigma$ is the standard deviation of all bins. The color indicates the number count in each bin. (b)~Histogram of the distribution of $u$. The orange line denotes the selection of $u$ that produces a 5\% contamination rate among the identified candidates, for the orange patch in the 2D neural network (panel (c)). (c)~2D neural network resulting from SOM; the neurons with a $u$-selection of 5\% contamination rate (orange line in panel (b)) are shaded orange. Among these, the neurons corresponding to the member candidates of the target cluster NGC\,2516 are highlighted in blue. \label{fig:som}} \end{figure*} In this study we use the 5D parameters of stars in Sample~II (R.A., Decl., $\varpi$, $\mu_\alpha \cos\delta$, and $\mu_\delta$) from {\it Gaia} EDR\,3. Since only a small fraction of the stars in each cluster have RV measurements, we adopt the higher-accuracy radial velocities from \citet{jackson2020} and \citet{bailey2018} as supplementary data. The RVs of stars in the ten target clusters are obtained from \citet{jackson2020}; these are part of the Gaia-ESO Survey \citep[GES,][]{gilmore2012} with an uncertainty of 0.4~km\,s$^{-1}$, obtained using FLAMES (the Fiber Large Array Multi Element Spectrograph) combined with the GIRAFFE and UVES (Ultraviolet and Visual Echelle Spectrograph) spectrographs mounted on the 8-m UT2-Kueyen telescope of the ESO Very Large Telescope facility. \citet{bailey2018} obtained RVs of stars in NGC\,2422 with M2FS (the Michigan/Magellan Fiber System), a multi-object fibre-fed spectrograph on the Magellan/Clay 6.5-m telescope, with a median uncertainty of 0.08~km\,s$^{-1}$. We use the RVs from {\it Gaia} DR\,2 for the clusters Coma Berenices and NGC\,6774, neither of which is included in the above-mentioned spectroscopic surveys. The distance to each individual star is computed as $1/\varpi$, from which we compute for each source the Galactocentric Cartesian coordinates ($X, Y, Z$).
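This coordinate transformation can be sketched with \texttt{Astropy} as follows; the single source below is hypothetical, and the default \texttt{Galactocentric} frame parameters may differ from the solar position and velocity adopted in this work.
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric

# Hypothetical single source; in practice ra, dec and parallax are
# the Gaia EDR3 columns of all Sample II stars (parallax in mas).
ra, dec, parallax = 119.5, -60.8, 2.44

c = SkyCoord(ra=ra * u.deg, dec=dec * u.deg,
             distance=(1000.0 / parallax) * u.pc)  # naive 1/parallax
gc = c.transform_to(Galactocentric())              # default frame
X, Y, Z = gc.x.to(u.pc), gc.y.to(u.pc), gc.z.to(u.pc)
\end{verbatim}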
This transformation is performed using the Python \texttt{Astropy} package \citep{astropy2013, astropy2018}. There is an asymmetric error in the distance that arises from the direct inversion of $\varpi$ \citep{zhang2020}. We adopt a Bayesian method to correct the individual distances of stars, as outlined in Section~\ref{sec:3D}. \subsection{Membership determination}\label{sec:stargo} The unsupervised machine learning method \textsc{StarGO} \citep{yuan2018}\footnote{\url{https://github.com/salamander14/StarGO}} has proven to be successful in membership determination of OCs, e.g., for the Coma Berenices cluster \citep{tang2019}, Blanco\,1 \citep{zhang2020}, NGC\,2232 and LP\,2439 \citep{pang2020}. The algorithm is based on the Self-Organizing-Map (SOM) method that maps high-dimensional data onto a two-dimensional neural network, while preserving the topological structures of the data. We apply \textsc{StarGO} to map the 5D data set ($X, Y, Z$, $\mu_\alpha \cos\delta, \mu_\delta$) of the ten target clusters (Sample~II) onto a 2D neural network in order to determine member candidates. Stars are fed to the neural network sequentially. We therefore scale the number of neurons to the number of stars in Sample~II. We adopt a network with 100$\times$100 to 150$\times$150 neurons, depending on the number of stars in Sample~II of each cluster (an illustration for NGC\,2516 is provided in Figure~\ref{fig:som}~(c)). Each neuron is assigned a random 5D weight vector with the same dimensions as the observed 5D parameters ($X, Y, Z$, $\mu_\alpha \cos\delta, \mu_\delta$) that are provided to the algorithm. During each iteration, the weight vector of each neuron is updated so that it is closer to the input vector of an observed star. The learning process is iterated 400 times (600 times for 150$\times$150 grids) until the weight vectors converge. When stars associated with neurons are spatially and kinematically coherent (e.g., when they are cluster members), the 5D weight vectors of the adjacent neurons are similar. Therefore, the value of the difference of weight vectors between these adjacent neurons, $u$, is small. Neurons with similarly small values of $u$ group together in the 2D neural network as patches (see Figure~\ref{fig:som}~(c)). Different groups of stars form different patches. The value of $u$ is smaller for neurons located inside the patch, and larger for neurons outside the patch. The $u$ values of neurons inside patches generate an extended tail towards small values in the $u$-histogram (see panel (b) in Figure~\ref{fig:som}). The selection of $u$ is made by applying a cut to the tail of the $u$-distribution. This cut is made to ensure a contamination rate of $\sim$5\% among members, as was applied to NGC\,2232 in \citet{pang2020}. We adopt this 5\% field star contamination $u$-cut as a member selection criterion for the ten target clusters, which corresponds to the blue patch in Figure~\ref{fig:som}~(c). We evaluate the contamination rate from the smooth Galactic disk population using the {\it Gaia} DR\,2 mock catalog \citep{rybizki2018}. A PM cut identical to that described in Section~\ref{sec:target_selection} is also applied to the mock catalog in the same volume of the sky. Each of these mock stars is attached to the trained 2D neural network. We then consider the mock stars associated with selected patches as contamination.
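The SOM machinery described above can be illustrated with a strongly simplified sketch. This is not the \textsc{StarGO} implementation: the learning-rate and neighborhood schedules are illustrative assumptions, and the $u$-matrix below wraps around the grid edges for brevity.
\begin{verbatim}
import numpy as np

def train_som(data, grid=100, epochs=400, lr0=0.5, seed=0):
    """Toy SOM: map (n_stars, 5) data onto a grid x grid neuron sheet.
    The data should be standardized (zero mean, unit variance) first."""
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    w = rng.normal(size=(grid, grid, dim))     # random 5D weight vectors
    gy, gx = np.mgrid[0:grid, 0:grid]
    sig0 = grid / 2.0
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)         # decaying learning rate
        sig = sig0 * np.exp(-t / epochs)       # shrinking neighbourhood
        for x in data[rng.permutation(n)]:
            d2 = ((w - x) ** 2).sum(axis=2)
            by, bx = np.unravel_index(d2.argmin(), d2.shape)
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2)
                       / (2.0 * sig ** 2))
            w += lr * h[..., None] * (x - w)   # pull neurons toward x
    return w

def u_matrix(w):
    """Mean weight difference u between each neuron and its four
    neighbours; small u marks the coherent patches discussed above."""
    u = np.zeros(w.shape[:2])
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        u += np.linalg.norm(w - np.roll(w, (dy, dx), axis=(0, 1)), axis=2)
    return u / 4.0
\end{verbatim}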
The numbers of identified members of each target cluster are listed in Table~\ref{tab:general}. We provide a detailed member list of all 13 target clusters in Table~\ref{tab:memberlist}, with parameters obtained in this study. The member lists of these 13 target clusters therefore form homogeneous data sets. \section{General properties of target open clusters}\label{sec:result} To evaluate the validity of our membership identification, we cross-match the members in the target clusters with two independently published catalogs that both identify star clusters using all-sky {\it Gaia} DR\,2 data: \citet{liu2019} and \citet{cantat2020}. \citet{liu2019} used a friend-of-friend (FoF) cluster finder to identify star clusters in {\it Gaia} DR\,2 in the five-dimensional parameter space ($l, b, \varpi, \mu_\alpha\cos\delta$, and $\mu_\delta$). Members in the catalog of \citet{cantat2020} are compiled from \citet{cantat2020a,castro2018,castro2019,castro2020} and are identified using the unsupervised membership assignment code \textsc{UPMASK} \citep{cantat2018}. All of the target clusters presented in this work are generally in good agreement with both catalogs, and have a comparable number of identified members (see the last two columns in Table~\ref{tab:general}). Coma Berenices, Blanco~1, and NGC\,6774 are absent from the catalog of \citet{liu2019}. We display the positions of all identified members of the 13 target clusters in Galactic coordinates in Figure~\ref{fig:lb}. Coma Berenices (grey triangles) and Blanco~1 (grey diamonds) occupy the regions of the Northern and Southern Galactic poles, respectively. The other OCs are within $\sim$15 degrees of the Galactic plane. Although NGC\,2451A and NGC\,2451B appear to overlap in the 2D projection, they are separated by a distance of $\approx 200$\,pc along the line-of-sight (see Table~\ref{tab:general}). Extended tidal tails are clearly visible in Coma Berenices and Blanco\,1. An elongated shape is observed in the other clusters, notably in NGC\,2547, NGC\,2516, NGC\,2232 and NGC\,2451B. Note that a morphology that appears elongated in 2D projection must be at least as elongated in 3D. We will carry out a detailed investigation of the 3D morphology of the clusters in our sample in Section~\ref{sec:3D}. \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=1.\textwidth]{all_clusters_lb.pdf} \caption{ 2D projection of identified member stars of each target cluster in Galactic coordinates ($l, b$). Each of the 13 clusters for which the members are obtained via {\it Gaia} EDR\,3 in this study is denoted with different colours and symbols. Among these, three clusters are indicated in grey. The members of these clusters were also identified using {\it Gaia} DR\,2 in earlier studies. Member stars of NGC\,2232 are indicated with grey stars, those of Coma Berenices with grey triangles, and those of Blanco\,1 with grey diamonds. } \label{fig:lb} \end{figure*} We show the members of each cluster in the color-magnitude diagram (CMD; Figure~\ref{fig:iso}). The member stars of each cluster track a clear locus of a main sequence, which is consistent with the PARSEC isochrone (black solid curves in Figure~\ref{fig:iso}), with the passband sensitivity curves provided by \citet{mazapellaniz2018}.
The distribution of the stars in the CMD shows that the field stars' main sequence (which is bluer than that of the clusters) is largely filtered out, further confirming the reliability of our identified members of each cluster. We adopt ages for the target clusters from previous studies (except for NGC\,2451B) when they are in good agreement with the locations of the members in the CMD. We fit the values of $E(B-V)$ and of the metallicity, which are not available in the literature. We list the cluster ages and related parameters in Table~\ref{tab:general}. The ages of the clusters span a wide range, from 25\,Myr for the youngest cluster (NGC\,2232) to 2.65\,Gyr for the oldest cluster (NGC\,6774). Such a wide age range in the sample of target clusters allows us to probe the influence of the secular dynamical evolution of the star clusters and the interaction with their environments on their morphology. The majority of the clusters in the sample are relatively young, with ages younger than 100\,Myr. Four clusters are of intermediate age, with ages between 100\,Myr and 800\,Myr. An extended main sequence turn-off (eMSTO) of $\sim$0.3\,mag in the color $G_{BP}-G_{RP}$ is observed in two intermediate-age clusters, NGC\,2516 (123\,Myr) and NGC\,6633 (426\,Myr). The eMSTO region has been observed in many other star clusters \citep{li2014,li2017, milone2018,li2019}, and is a result of stars with a wide distribution of rotation rates \citep{bastian2009, dantona2017}. At the same time, the binary sequence locus of equal-mass systems is clearly seen for most clusters. In the oldest cluster NGC\,6774, we observe blue straggler candidates. Sixteen white dwarf members are found in five of the target clusters (IC\,2391, Blanco\,1, NGC\,2516, Coma Berenices, and NGC\,6774). The majority of these have been cataloged in \citet{fusillo2019}. The white dwarfs in NGC\,2516 and NGC\,6774 gather at very similar locations in the CMD. A detailed study has been carried out for three of the white dwarfs in NGC\,2516 by \citet[][IDs: NGC\,2516-1,2,5]{koester1996}. These white dwarfs were estimated to have ages of 120--160\,Myr (based on the cooling age, the main sequence lifetime, and the lifetime of red giants), which is consistent with the age of the cluster determined in our study. \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=0.85\textwidth]{iso_all.pdf} \caption{ The color-magnitude diagrams obtained from the {\it Gaia} EDR\,3 absolute magnitude M$_{G}$ (adopting the distance after the correction described in Section~\ref{sec:dis_correct}) for member stars (blue dots) of the 13 target OCs identified by \textsc{StarGO}. The PARSEC isochrones of the adopted/fitted age are indicated with the black solid curves, with $E(B-V)$ and metallicities provided by the literature or estimated in this work (Table~\ref{tab:general}).
} \label{fig:iso} \end{figure*} \begin{deluxetable*}{cc rccR c RR LL rrr} \tablecaption{General parameters of target clusters \label{tab:general} } \tabletypesize{\scriptsize} \tablehead{ \colhead{Cluster} & \colhead{Age} & \colhead{$Dist_{cor}$} & \colhead{$erDist_{cor}$} & \colhead{$r_{\rm h}$} & \colhead{$r_t$} & \colhead{} & \colhead{$M_{cl}$} & \colhead{$M_{dyn}$} & \colhead{$Z$} & \colhead{$E(B-V)$} & \colhead{memb.} & \colhead{CG20} & \colhead{LP19} \\ \colhead{} & \colhead{(Myr)} & \multicolumn{4}{c}{(pc)} & \colhead{} & \multicolumn{2}{c}{(M$_\sun$)} & \colhead{(dex)} & \colhead{(mag)} & \multicolumn{3}{c}{(number)} \\ \cline{3-6} \cline{8-9} \cline{12-14} \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} && \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)} & \colhead{(12)} & \colhead{(13)} } \startdata IC\,2391 & 50$^{a,C}$ & 151.5 & 0.8 & 2.5 & 7.6 && 140.2 & 315.8 & 0.030$^W$ & 0.01$^{b,C}$ & 219 & 190 (86\%) & 135 (94\%)\\ IC\,2602 & 45$^{a,C}$ & 151.9 & 0.7 & 3.7 & 8.4 && 188.1 & 464.8 & 0.020$^W$ & 0.01$^W$ & 318 & 267 (86\%) & 135 (94\%)\\ IC\,4665 & 36$^{c,C}$ & 347.3 & 2.8 & 6.0 & 7.9 && 158.5 & 530.5 & 0.015$^W$ & 0.23$^W$ & 197 & 142 (85\%) & 74 (91\%)\\ NGC\,2422 & 73$^{d,C}$ & 476.5 & 3.2 & 4.9 & 11.4 && 480.2 & 1112.4 & 0.017$^{d,C}$ & 0.13$^W$ & 466 & 312 (75\%) & 335 (68\%)\\ NGC\,2516 & 123$^{e,C}$ & 410.5 & 3.1 & 7.9 & 18.3 && 1973.3 & 3368.2 & 0.020$^{e,C}$ & 0.05$^{e,C}$ & 2690 & 640 (98\%) & 1365 (96\%)\\ NGC\,2547 & 40$^{e,C}$ & 387.4 & 2.7 & 5.6 & 9.8 && 303.9 & 1032.2 & 0.015$^{e,C}$ & 0.04$^{e,C}$ & 452 & 192 (89\%) & 214 (88\%)\\ NGC\,6633 & 426$^{f,C}$ & 394.3 & 2.4 & 5.3 & 10.2 && 337.3 & 1698.2 & 0.022$^W$ & 0.18$^W$ & 300 & 133 (89\%) & 164 (88\%)\\ NGC\,6774 & 2650$^{g,C}$ & 306.5 & 2.0 & 4.7 & 7.8 && 152.6 & 1039.0 & 0.020$^{g,C}$ & 0.11$^{g,C}$ & 154 & 136 (80\%) & \nodata\\ NGC\,2451A & 58$^{h,C}$ & 192.6 & 1.3 & 4.9 & 8.3 && 182.2 & 738.5 & 0.015$^W$ & 0.01$^{h,C}$ & 311 & 266 (80\%) & 204 (93\%)\\ NGC\,2451B & 50$^W$ & 362.9 & 2.6 & 6.0 & 9.1 && 242.1 & 566.7 & 0.020$^W$ & 0.05$^{h,C}$ & 359 & 207 (73\%) & 109 (85\%)\\ NGC\,2232 & 25$^{i,C}$ & 319.1 & 2.3 & 6.8 & 8.6 && 205.8 & 263.7 & 0.015$^{i,C}$ & 0.07$^{i,C}$ & 281 & 169 (90\%) & 93 (89\%)\\ Blanco\,1 & 100$^{j,C}$ & 236.7 & 2.1 & 6.7 & 10.2 && 342.9 & 605.0 & 0.015$^{j,C}$ & 0.0$^{j,C}$ & 703 & 369 (97\%) & \nodata\\ Coma Berenices & 700$^{k,C}$ & 86.4 & 0.3 & 4.7 & 6.8 && 101.6 & 574.0 & 0.015$^{k,C}$ & 0.0$^{k,C}$ & 158 & 129 (84\%) & \nodata\\ \enddata \tablecomments{ $Dist_{cor}$ is the mean corrected distance of the members in each cluster. $erDist_{cor}$ is the error in the corrected distance following the Bayesian model described in Section~\ref{sec:dis_correct}. $r_{\rm h}$ and $r_t$ are the half-mass and tidal radii of each cluster. The metallicity, $Z$, and reddening, $E(B-V)$, of several clusters are taken from the literature (indicated with a capital $C$), and some are fitted in this work (indicated with a capital $W$). When the referenced age fits the members, we adopt the age from previous works (indicated with a capital $C$). The quantity ${M}_{cl}$ is the photometric mass of each star cluster, and $M_{dyn}$ its dynamical mass (see Section~\ref{sec:dynamics}). The last two columns show the number of matched members in \citet{cantat2020} (CG20) and \citet{liu2019} (LP19), and the corresponding percentages.
The age, $Z$ and $E(B-V)$ for some clusters are adopted from a: \citet{marsden2009}; b: \citet{postnikova2020}; c: \citet{miret2019}; d: \citet{bailey2018}; e: \citet{gaia2018a}; f: \citet{williams2007}; g: \citet{olivares2019}; h: \citet{balog2009}; i: \citet{pang2020}; j: \citet{zhang2020}; k: \citet{tang2019}. } \end{deluxetable*} \section{3D morphology of open clusters}\label{sec:3D} \subsection{Distances through Bayesian parallax inversion}\label{sec:dis_correct} It is known that the morphology of star clusters appears to be stretched along the line-of-sight when distances are obtained by simple parallax inversion \citep[see, e.g.,][]{carrera2019}. Such artificial elongation is a consequence of computing the distance to each star by directly inverting the {\it Gaia} EDR\,3 or DR\,2 parallax, $1/\varpi$. Even when the errors in the parallax measurements $\Delta\varpi$ have a symmetric distribution, taking the reciprocal introduces a skewed distribution of errors on the distances, which results in a systematic bias in the distance to each cluster. We perform Monte Carlo simulations to estimate the contribution of the parallax error $\Delta \varpi$ to the uncertainty in the cluster distance \citep[see also][]{zhang2020}. The mean value of $\Delta \varpi$ of each cluster is adopted to estimate the parallax-induced uncertainty in the distance, which is typically 0.4--12.6\,pc for the clusters in our sample. To mitigate this issue, we follow the method introduced by \citet{bailer2015} and treat the inversion problem within a Bayesian framework. Our approach closely follows the distance correction procedure described in \citet{carrera2019}. In this approach a prior distribution is assumed for each star, and Bayes' theorem is used to combine this prior with the likelihood function computed from the observed parallax and its nominal error. The prior is composed of two components, one representing the star cluster density and the other representing the field. The former follows a normal distribution, the latter an exponentially decreasing density as in \citet{bailer2015}. The standard deviation of the cluster component coincides with the standard deviation of the cluster-centric distances of the member stars (i.e., the distances of the stars from the center of each star cluster). We combine these two components with weights proportional to the membership probability. We apply a value of 95\% for the cluster term, and 5\% for the field star term (see Section~\ref{sec:stargo}). The mean of the posterior is taken as the corrected distance for each star. Further details about this parallax inversion approach can be found in \citet{carrera2019} and \citet{pang2020}. Furthermore, we carry out Monte Carlo simulations to estimate the uncertainty in the corrected distances that results from our procedure. Three types of clusters are simulated to test our Bayesian procedure: (i) spherical star clusters with a uniform spatial distribution of stars; (ii) elongated star clusters with an elongation perpendicular to the line-of-sight; and (iii) elongated star clusters with an elongation along the line-of-sight. For the uniform model, the uncertainty in our corrected distance increases monotonically with distance (solid black curve in Figure~\ref{fig:simulation_dis_er}). At a distance of 500\,pc, the error in the mean corrected distance of all stars becomes as large as 3.0\,pc.
When the elongation of the cluster is perpendicular to the line-of-sight, the errors are very similar to those of a uniform cluster, and reach a slightly larger value of 3.4\,pc at a distance of 500\,pc (dotted black curve in Figure~\ref{fig:simulation_dis_er}). The situation is different for the model in which the elongation is along the line-of-sight; in this case the uncertainty is as large as 6.3\,pc at a distance of 500\,pc (dashed black curve in Figure~\ref{fig:simulation_dis_er}). Details of the procedure of the simulations are described in Appendix~\ref{sec:dis_correct_er}. These findings show that the quality of the {\it Gaia} parallaxes plays an important role in recovering the intrinsic cluster morphology from the measurements. Artificial morphological elongation due to parallax errors becomes most severe when the intrinsic elongation happens to align with the line-of-sight; such clusters also suffer from the largest uncertainties in the corrected distances. \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=1.\textwidth]{dis+dis_error+13_dr3.pdf} \caption{ Dependence of the uncertainty in the corrected distance on the cluster distance, based on the simulations described in Appendix~\ref{sec:dis_correct_er}. The black solid curve represents a star cluster in which the members have a uniform spatial distribution. The dotted and dashed curves are clusters with an elongated shape perpendicular to and parallel to the line-of-sight, respectively. The colored symbols and grey symbols indicate errors in the corrected distances when adopting a star cluster with a uniform stellar density located at the distance of each of the clusters in our study. The color coding of each cluster is identical to that in Figure~\ref{fig:lb}. } \label{fig:simulation_dis_er} \end{figure*} \subsection{Presentation of 3D morphology}\label{sec:3D_inte} In Figures~\ref{fig:3d1}, \ref{fig:3d2} and \ref{fig:3d3} we show the 3D spatial distributions of the member stars in the 13 target clusters after correcting their distances. The corrected 3D positions of the members in all 13 target clusters are presented in Table~\ref{tab:memberlist}, together with other parameters from {\it Gaia} EDR\,3. We also present the 3D positions of the 13 target clusters in Table~\ref{tab:general_xyz_uvw}. The Bayesian method has provided a reasonable correction of the stretched shapes along the line-of-sight of each cluster (grey dots in Figures~\ref{fig:3d1}, \ref{fig:3d2} and \ref{fig:3d3}). To estimate the uncertainty in the corrected distance to each cluster, we carry out additional simulations for each individual target cluster, with a uniform model (for details, see Appendix~\ref{sec:dis_correct_er}). We apply the mean parallax error to the members of each cluster and move the simulated cluster to the same distance as each target cluster. The corresponding uncertainty in the distance correction of each cluster (Table~\ref{tab:general}) is represented with a colored symbol in Figure~\ref{fig:simulation_dis_er}, which follows the curve of the uniform model. The most distant star cluster, NGC\,2422 (476\,pc), has an uncertainty of 3.2\,pc in the corrected distance. The uncertainty in the distance obtained through the Bayesian distance correction is much smaller than the error that arises from directly inverting the {\it Gaia} parallax (see Section~\ref{sec:dis_correct}).
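A minimal numerical sketch of the posterior-mean distance estimate of Section~\ref{sec:dis_correct} is given below, assuming a Gaussian cluster prior plus the exponentially decreasing field prior of \citet{bailer2015}; the field length scale and the grid settings are illustrative, not the values used in this work.
\begin{verbatim}
import numpy as np

def posterior_mean_distance(plx, plx_err, d_cl, sig_cl,
                            w_cl=0.95, L=1350.0, rmax=3000.0, nr=6000):
    """Posterior-mean distance [pc] of one star given its parallax.

    plx, plx_err : observed parallax and its nominal error [mas].
    d_cl, sig_cl : centre and spread of the Gaussian cluster prior [pc].
    w_cl         : weight of the cluster component (95% in this work).
    L            : scale of the exponentially decreasing field prior [pc].
    """
    r = np.linspace(1.0, rmax, nr)
    cluster = (np.exp(-0.5 * ((r - d_cl) / sig_cl) ** 2)
               / (sig_cl * np.sqrt(2.0 * np.pi)))
    field = r ** 2 * np.exp(-r / L) / (2.0 * L ** 3)
    prior = w_cl * cluster + (1.0 - w_cl) * field
    like = np.exp(-0.5 * ((plx - 1000.0 / r) / plx_err) ** 2)
    post = prior * like
    post /= post.sum()
    return float((r * post).sum())
\end{verbatim}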
To quantify the size of each star cluster, we compute their tidal radii as \begin{equation} r_t=\left( \frac{GM_{cl}}{2(A-B)^2}\right)^{\frac{1}{3}}\quad , \end{equation} \citep{pinfield1998}. Here, $G$ is the gravitational constant, $M_{cl}$ is the total mass of the star cluster (i.e., the sum of the masses of the individual member stars), and the parameters $A$ and $B$ are the Oort constants \citep[$A=15.3\pm0.4\rm~km~s^{-1}~kpc^{-1}$ and $B=-11.9\pm0.4\rm~km~s^{-1}~kpc^{-1}$; see][]{bovy2017}. In the analysis below, we assume that candidate members located within the tidal radius are gravitationally bound to the star cluster, while members outside it are unbound. The mass of each individual member star is obtained from the nearest point in the fitted isochrone, which is searched for using the $k$-D tree method \citep{millman2011}. The tidal radius of each cluster is indicated with a black circle in each panel of Figures~\ref{fig:3d1}, \ref{fig:3d2} and \ref{fig:3d3}. \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=\textwidth]{Final_xyz-1_with_line1.pdf} \caption{3D spatial positions of member stars in four target clusters: IC\,2391, IC\,2602, IC\,4665, and NGC\,2422, in heliocentric Cartesian coordinates ($X,Y,Z$; see definition in Appendix~\ref{sec:apx_coordinate}) after distance correction via a Bayesian approach (see Section~\ref{sec:dis_correct}). The blue dots represent member stars in each cluster. The tidal radius of each cluster is indicated with a black circle. The dashed line indicates the direction of the line-of-sight. The grey dots in the background show the spatial distribution of the members without distance correction. } \label{fig:3d1} \end{figure*} \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=\textwidth]{Final_xyz-1_with_line2.pdf} \caption{3D spatial positions of members in four target clusters: NGC\,2516, NGC\,2547, NGC\,6633, and NGC\,6774, in heliocentric Cartesian coordinates ($X,Y,Z$; see definition in Appendix~\ref{sec:apx_coordinate}), after distance correction via a Bayesian approach (see Section~\ref{sec:dis_correct}). Colors and symbols are the same as in Figure~\ref{fig:3d1}. Filament-like substructures are present in the young cluster NGC\,2547, and tidal-tail-like substructures in the older clusters NGC\,2516, NGC\,6633, and NGC\,6774. } \label{fig:3d2} \end{figure*} \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=0.8\textwidth]{Final_xyz-1_with_line3.pdf} \caption{3D spatial positions of members in five target clusters: NGC\,2451A, NGC\,2451B, NGC\,2232, Blanco\,1 and Coma Berenices, in heliocentric Cartesian coordinates ($X,Y,Z$; see definition in Appendix~\ref{sec:apx_coordinate}), after distance correction via a Bayesian approach (see Section~\ref{sec:dis_correct}). Colors and symbols are the same as in Figure~\ref{fig:3d1}. Filament-like substructures are present in the young clusters NGC\,2451B and NGC\,2232, and tidal tails in the older clusters Blanco\,1 and Coma Berenices. } \label{fig:3d3} \end{figure*} \begin{deluxetable*}{ccc} \tablecaption{Columns for the table of corrected 3D positions of members in all target clusters. \label{tab:memberlist} } \tabletypesize{\scriptsize} \tablehead{ \colhead{Column} & \colhead{Unit} & \colhead{Description} } \startdata Cluster Name & & Name of the target cluster \\ {\it Gaia} ID & & Object ID in {\it Gaia} EDR\,3\\ ra & degree & R.A. at J2016.0 from {\it Gaia} EDR\,3\\ er\_RA & mas & Positional uncertainty in R.A. at J2016.0 from {\it Gaia} EDR\,3 \\ dec & degree & Decl.
at J2016.0 from {\it Gaia} EDR\,3 \\ er\_DEC & mas & Positional uncertainty in decl. at J2016.0 from {\it Gaia} EDR\,3 \\ parallax & mas & Parallax from {\it Gaia} EDR\,3\\ er\_parallax & mas & Uncertainty in the parallax \\ pmra & mas~yr$^{-1}$ & Proper motion with robust fit in $\alpha \cos\delta$ from {\it Gaia} EDR\,3 \\ er\_pmra & mas~yr$^{-1}$ & Error of the proper motion with robust fit in $\alpha \cos\delta$ \\ pmdec & mas~yr$^{-1}$ & Proper motion with robust fit in $\delta$ from {\it Gaia} EDR\,3 \\ er\_pmdec & mas~yr$^{-1}$ & Error of the proper motion with robust fit in $\delta$ \\ Gmag & mag & Magnitude in $G$ band from {\it Gaia} EDR\,3 \\ BP & mag & Magnitude in $G_{BP}$ band from {\it Gaia} EDR\,3 \\ RP & mag & Magnitude in $G_{RP}$ band from {\it Gaia} EDR\,3 \\ Gaia\_radial\_velocity & km~s$^{-1}$ & Radial velocity from {\it Gaia} DR\,2 \\ er\_Gaia\_radial\_velocity & km~s$^{-1}$ & Error of the radial velocity from {\it Gaia} DR\,2\\ Jackson\_radial\_velocity & km~s$^{-1}$ & Radial velocity from the Gaia/ESO survey \citep{jackson2020}\\ er\_Jackson\_radial\_velocity & km~s$^{-1}$ & Error of the radial velocity from the Gaia/ESO survey \citep{jackson2020}\\ Bailey\_radial\_velocity & km~s$^{-1}$ & Radial velocity from \citet{bailey2018} \\ er\_Bailey\_radial\_velocity & km~s$^{-1}$ & Error of the radial velocity from \citet{bailey2018} \\ Mass & M$_\odot$ & Stellar mass obtained in this study\\ X\_obs & pc & Heliocentric Cartesian X coordinate computed by directly inverting the {\it Gaia} EDR\,3 parallax $\varpi$ \\ Y\_obs & pc & Heliocentric Cartesian Y coordinate computed by directly inverting the {\it Gaia} EDR\,3 parallax $\varpi$ \\ Z\_obs & pc & Heliocentric Cartesian Z coordinate computed by directly inverting the {\it Gaia} EDR\,3 parallax $\varpi$ \\ X\_cor & pc & Heliocentric Cartesian X coordinate after distance correction in this study \\ Y\_cor & pc & Heliocentric Cartesian Y coordinate after distance correction in this study \\ Z\_cor & pc & Heliocentric Cartesian Z coordinate after distance correction in this study \\ Dist\_cor & pc & Corrected distance of the individual member\\ \enddata \tablecomments{A machine readable full version of this table is available online.} \end{deluxetable*} \begin{deluxetable*}{c RRR c RRR} \tablecaption{3D positions and velocities of the 13 target clusters \label{tab:general_xyz_uvw} } \tabletypesize{\scriptsize} \tablehead{ \colhead{Cluster} & \colhead{$X_m$} & \colhead{$Y_m$} & \colhead{$Z_m$} && \colhead{$U$} & \colhead{$V$} & \colhead{$W$} \\ \colhead{} & \multicolumn{3}{c}{(pc)} & \colhead{} & \multicolumn{3}{c}{(km~s$^{-1}$)} \\ \cline{2-4} \cline{6-8} \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} && \colhead{(5)} & \colhead{(6)} & \colhead{(7)} } \startdata IC 2391 & 1.22 & -150.35 & -17.92 && -23.88 & -15.56 & -5.75 \\ IC 2602 & 50.61 & -141.97 & -12.98 && -7.92 & -22.16 & -0.73 \\ IC 4665 & 283.76 & 167.91 & 101.81 && -3.58 & -17.41 & -8.85 \\ NGC 2422 & -299.34 & -369.47 & 26.22 && -30.66 & -22.29 & -10.86 \\ NGC 2516 & 26.65 & -394.41 & -112.85 && -21.99 & -25.02 & -4.51 \\ NGC 2547 & -37.06 & -381.18 & -57.98 && -16.15 & -9.99 & -10.95 \\ NGC 6633 & 315.25 & 228.84 & 56.99 && -20.64 & -17.66 & -7.60 \\ NGC 6774 & 278.57 & 106.22 & -67.20 && 49.05 & -19.42 & -24.30 \\ NGC 2451A & -58.09 & -182.22 & -23.22 && -27.16 & -14.30 & -12.78 \\ NGC 2451B & -109.29 & -343.86 & -43.58 && -18.75 & -8.28 & -12.11 \\ NGC 2232 & -261.38 & -179.95 & -41.41 && -20.44 & -13.08 & -10.86 \\ Blanco 1 & 42.91 & 11.44 & -233.01 && -18.55 & -6.62 & -9.80 \\ Coma
Berenices & -7.25 & -6.07 & 85.27 && -2.47 & -5.63 & -0.33 \\ \enddata \tablecomments{$X_m$, $Y_m$, $Z_m$ are the 3D positions of the 13 target clusters in heliocentric Cartesian coordinates, taken as the median values over all members. $U$, $V$, $W$ are the mean 3D velocities of each cluster in heliocentric Cartesian coordinates.} \end{deluxetable*} The global morphology of an OC can generally be described with a dense central core (or nucleus) and an outer halo (or corona). The halo is much more extended and has a low stellar number density \citep{nilakshi2002}. However, the number of members in the halo can be substantial \citep{meingast2020}. Both Blanco\,1 and Coma Berenices show two grand tidal tails spanning up to 50--60\,pc from the cluster center, which belong to the halo region, accounting for more than 36\% and 50\% of their members, respectively. The tidal tails of Coma Berenices and Blanco\,1 are found to be parallel to the Galactic plane, in agreement with previous studies \citep{bergond2001, chen2004}. No apparent elongation is present in the young clusters IC\,2391 and IC\,2602. IC\,2391 is more centrally concentrated, showing a clear core, while IC\,2602 is more populous. Despite its age of 36\,Myr, IC\,4665 has a sparse distribution without a clear central concentration, which may be a consequence of rapid gas expulsion \citep[][see more discussion in Section~\ref{sec:nbody}]{pang2020, dinnbier2020a}. An elongated shape along the line-of-sight is apparent for the region containing the stars that are gravitationally bound to the cluster (i.e., inside the tidal radius) for NGC\,2422, NGC\,2547, NGC\,6633, and Blanco\,1 (the angle between the elongation and the line-of-sight, $\phi$, is presented in Table~\ref{tab:morph_kin}). The errors in the corrected distances to these clusters ($\approx$ 2.1--3.2 pc) are much smaller than the extent of their elongated regions ($\approx$20--30 pc). Therefore, the detection of the elongations is robust. Six clusters, IC\,2391, IC\,2602, NGC\,2547, NGC\,2451A, NGC\,2516 and Blanco\,1, overlap with previous work by \citet{meingast2020} based on {\it Gaia} DR\,2. Approximately 60--90\% of our members cross-match with members determined by \citet{meingast2020}. The majority of the matched members are located within the tidal radius of the cluster. Our current member identification method is unable to confirm the membership of the stars in the vast extended stellar corona that were identified by \citet{meingast2020}. As a result of the higher accuracy of the proper motion measurements in {\it Gaia} EDR\,3, the extended filamentary structures of NGC\,2232 that were once identified as two separate groups (purple and green) in \citet{pang2020} (using {\it Gaia} DR\,2 data and with the same selection technique) are now identified as members of NGC\,2232. This confirms the conclusion of \citet{pang2020} that the coeval filamentary structures are closely related to NGC\,2232, and were formed at the same time in the parental molecular clouds \citep{jerabkova2019,beccari2020,tian2020}. Similar filament-like substructures are also found in two other young clusters, NGC\,2547 and NGC\,2451B. On the other hand, tidal-tail-like structures extending up to 10--20\,pc are detected in three older clusters: NGC\,2516, NGC\,6633 and NGC\,6774. The diffuse spatial distribution of the oldest cluster (NGC\,6774) implies its advanced dissolution state, after having experienced substantial secular dynamical evolution.
The 3D morphology of the OCs again confirms the presence of the 2D elongation that we observed in NGC\,2547, NGC\,2516, NGC\,2232 and NGC\,2451B in Figure~\ref{fig:lb}. \subsection{Parameterization of 3D morphologies}\label{sec:par_3d} From the 3D distribution of member stars in each cluster (Figures~\ref{fig:3d1}, \ref{fig:3d2}, and \ref{fig:3d3}), the general shape of the member distribution within the tidal radius can be approximated with an ellipsoid. We perform ellipsoid fitting\footnote{\url{https://github.com/marksemple/pyEllipsoid_Fit}} to the 3D morphology of each cluster in order to quantify the shape of the distribution of bound stars in the target clusters (we do not include the members located outside the tidal radius, since their number is small). NGC\,2516 is shown as an example to illustrate the ellipsoid fitting for the bound member stars (see Figure~\ref{fig:ellipsoid_tidal}). The fitted ellipsoid (green surface) is centered at the median position of the bound members, which we consider as the cluster center. The three semi-axes of the ellipsoid $a$, $b$, $c$ are the free parameters in this fit, where $a$ is the semi-major axis (red line), $b$ the semi-intermediate axis (pink line), and $c$ the semi-minor axis (orange line). We use the lengths of the semi-axes $a$, $b$, $c$, and the axis ratios $b/a$ and $c/a$ to describe the morphology of the clusters, and the direction of the semi-major axis $a$ of the fitted ellipsoid as the direction of elongation of the star cluster. Smaller values of the axis ratios $b/a$ and $c/a$ indicate a more elongated structure. The fitted values of the morphological parameters ($a$, $b$, $c$, $b/a$ and $c/a$) are listed in Table~\ref{tab:morph_kin}. We also compute for each cluster the angle $\theta$ between the direction of $a$ and the Galactic plane (i.e., between $a$ and its projection on the Galactic plane), and the angle $\phi$ between the direction of $a$ and the line-of-sight. The values of these two angles are listed in Table~\ref{tab:morph_kin}. The fitted ellipsoids for stars inside the tidal radius of the other twelve target OCs are presented in Appendix~\ref{sec:apx_ellipsoid} (Figure~\ref{fig:apx_ellip_all}). \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=0.75\textwidth]{NGC2516_ellip_dr3_median.pdf} \caption{ Ellipsoid fitting for the 3D spatial positions in heliocentric Cartesian coordinates, ($X,Y,Z$), for the cluster members within the tidal radius of NGC\,2516, after distance correction through a Bayesian approach (see Section~\ref{sec:dis_correct}). The green surface represents the fitted ellipsoid. Blue dots are members within the tidal radius. The three axes of the ellipsoid ($a$, $b$, and $c$) are indicated in red, pink, and orange, respectively.
} \label{fig:ellipsoid_tidal} \end{figure*} \begin{deluxetable*}{cRRR c RR c RR c LLL} \tablecaption{Morphological and kinematic parameters of target clusters \label{tab:morph_kin} } \tabletypesize{\scriptsize} \tablehead{ \colhead{Cluster Name} & \colhead{$a$} & \colhead{$b$} & \colhead{$c$} & \colhead{} & \colhead{$b/a$} & \colhead{$c/a$} & \colhead{} & \colhead{$\theta$} & \colhead{$\phi$} & \colhead{} & \colhead{$\sigma_{RV}$}& \colhead{$\sigma_{pmra}$} & \colhead{$\sigma_{pmdec}$} \\ \colhead{} & \multicolumn{3}{c}{(pc)} & \colhead{} & \multicolumn{2}{c}{(axis ratio)} & \colhead{} & \multicolumn{2}{c}{(degrees)} & \colhead{} & \multicolumn{3}{c}{(km~s$^{-1}$)} \\ \cline{2-4} \cline{6-7} \cline{9-10} \cline{12-14} \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} && \colhead{(5)} & \colhead{(6)} && \colhead{(7)} & \colhead{(8)} && \colhead{(9)} & \colhead{(10)} & \colhead{(11)} } \startdata IC 2391 & 5.43\pm0.16 & 3.49\pm0.06 & 2.21\pm0.05 && 0.64\pm0.02 & 0.41\pm0.02 && 16.76 & 46.94 && 0.34^{+0.22}_{-0.20} & 0.50^{+0.07}_{-0.06} & 0.43\pm0.06 \\ IC 2602 & 5.48\pm0.11 & 4.37\pm0.04 & 3.66\pm0.05 && 0.80\pm0.02 & 0.67\pm0.02 && 5.44 & 41.35 && 0.20^{+0.15}_{-0.13} & 0.50^{+0.07}_{-0.06} & 0.50^{+0.07}_{-0.06} \\ IC 4665 & 6.13\pm3.03 & 4.49\pm0.40 & 3.72\pm0.48 && 0.73\pm0.37 & 0.61\pm0.31 && 38.78 & 22.50 && 0.38\pm0.16 & 0.40^{+0.07}_{-0.05} & 0.28\pm0.05 \\ NGC 2422 & 6.30\pm4.04 & 5.54\pm0.49 & 4.57\pm0.56 && 0.88\pm0.57 & 0.73\pm0.47 && 19.01 & 79.28 && 0.71^{+0.18}_{-0.15} & 0.47^{+0.07}_{-0.05} & 0.50\pm0.07 \\ NGC 2516 & 10.03\pm3.75& 9.32\pm0.48 & 6.98\pm0.53 && 0.93\pm0.35 & 0.70\pm0.27 && 11.93 & 31.29 && 0.72\pm0.07 & 0.82\pm0.04 & 0.80\pm0.04 \\ NGC 2547 & 7.42\pm2.78 & 6.14\pm0.39 & 2.88\pm0.44 && 0.83\pm0.31 & 0.39\pm0.16 && 4.20 & 10.81 && 0.68^{+0.09}_{-0.08} & 0.39\pm0.04 & 0.42\pm0.04 \\ NGC 6633 & 8.20\pm2.21 & 5.60\pm0.31 & 3.34\pm0.38 && 0.68\pm0.19 & 0.41\pm0.12 && 1.22 & 23.90 && 0.76^{+0.32}_{-0.22} & 0.47^{+0.09}_{-0.07} & 0.77\pm0.19 \\ NGC 6774 & 6.31\pm1.45 & 4.37\pm0.22 & 2.98\pm0.27 && 0.69\pm0.16 & 0.47\pm0.12 && 18.30 & 17.53 && 0.55^{+0.17}_{-0.15} & 0.39\pm0.04 & 0.71^{+0.09}_{-0.07} \\ NGC 2451A & 5.43\pm0.56 & 4.98\pm0.11 & 3.52\pm0.12 && 0.92\pm0.10 & 0.65\pm0.07 && 14.33 & 81.06 && 0.23^{+0.35}_{-0.17} & 0.68^{+0.10}_{-0.08} & 0.37^{+0.06}_{-0.05} \\ NGC 2451B & 5.96\pm2.62 & 5.69\pm0.37 & 4.14\pm0.42 && 0.95\pm0.42 & 0.69\pm0.31 && 22.12 & 48.56 && 0.25^{+0.20}_{-0.16} & 0.48^{+0.07}_{-0.05} & 0.34^{+0.05}_{-0.03} \\ NGC 2232 & 6.11\pm2.01 & 5.12\pm0.28 & 3.37\pm0.36 && 0.84\pm0.28 & 0.55\pm0.19 && 4.55 & 26.05 && 0.10^{+0.11}_{-0.07} & 0.27^{+0.05}_{-0.03} & 0.29^{+0.05}_{-0.03} \\ Blanco~1 & 8.28\pm1.66 & 4.65\pm0.25 & 4.01\pm0.31 && 0.56\pm0.12 & 0.48\pm0.10 && 78.35 & 12.53 && 0.32\pm0.08 & 0.40\pm0.03 & 0.36\pm0.03 \\ Coma Berenices & 4.91\pm0.02 & 4.23\pm0.02 & 3.43\pm0.01 && 0.86\pm0.01 & 0.70\pm0.01 && 14.3 & 79.38 && 0.61\pm0.18 & 0.22^{+0.04}_{-0.03} & 0.32^{+0.05}_{-0.04} \\ \enddata \tablecomments{$a$, $b$, $c$ are the semi-major, semi-intermediate and semi-minor axes of the fitted ellipsoid for each star cluster in the sample. $\theta$ is the angle between the direction of $a$ and the Galactic plane. The quantity $\phi$ is the angle between the direction of $a$ and the line-of-sight. $\sigma_{RV}$ is the RV dispersion (within the tidal radius); $\sigma_{pmra}$ and $\sigma_{pmdec}$ are the dispersions of the R.A. and Decl. components of the PMs (within the tidal radius). 
The values in columns 9--11 are obtained using the MCMC method; each best-fit value is the median of the posterior distribution, and the uncertainties are the corresponding 16th and 84th percentiles of the posterior.} \end{deluxetable*} The clusters NGC\,2516, NGC\,2547, NGC\,2451A, NGC\,2451B and NGC\,2232 have axis ratios of approximately $b/a=0.8-0.95$, while $c/a=0.4-0.7$. The morphologies of these five clusters resemble oblate spheroids. The other clusters, IC\,2602, IC\,4665, and NGC\,2422, have shapes that are well described by prolate spheroids, with a difference between $b/a$ and $c/a$ of less than $\sim$10\%. After excluding the prominent tails outside the tidal radius for Coma Berenices and Blanco\,1, prolate spheroidal distributions fit both clusters. The morphologies of the remaining clusters (IC\,2391, NGC\,6633 and NGC\,6774) can be approximated as triaxial ellipsoids. The non-spherical shape of the bound region of most target clusters is likely a result of the interplay of internal and external dynamical processes. Relaxed stars gradually evaporate, primarily through the Lagrange points \citep{kupper2008}, a process that depends on the motion of each cluster through the Galactic disk. In addition, the external tidal field exerts a force that pulls the cluster apart along the axis that connects the cluster to the Galactic center. Due to differential rotation, the induced tidal tails always tilt with respect to the cluster orbit \citep{tang2019}. That is the reason why the tidal tails of Coma Berenices and Blanco\,1 are parallel to the Galactic plane (see Figure~\ref{fig:3d3}). The bound region (within the tidal radius) of most clusters has an elongation direction that is more or less aligned with the Galactic plane (see Figure~\ref{fig:apx_ellip_all}), with an angle (between $a$ and the disk) of $|\theta|<20^\circ$ (see Table~\ref{tab:morph_kin}). This result is thus in agreement with earlier findings \citep{oort1979, bergond2001, chen2004}, and also indicates that despite their young age, most clusters have already been affected by the external Galactic tides. The direction of the semi-major axis $a$ does not align with the direction of the line-of-sight (see the values of $\phi$ in Table~\ref{tab:morph_kin}), confirming the reliability of our distance correction (Section~\ref{sec:dis_correct}). Although Blanco\,1 appears to show evidence of having been affected by the tidal force, its bound region has an elongation that is closely aligned with the vertical ($Z$) direction perpendicular to the Galactic plane (with an angle of $\approx11.7^\circ$). While stars escape mainly through the two Lagrange points, the evaporation process stretches the cluster all the way between the two Lagrange points \citep{kupper2008} and generates the elongated shape in the distribution of the bound stars. The unbound stars are subjected to Galactic tides, so that their orbits become more tangential and form the tidal tails around Blanco\,1, which probably constitute both a ``tail I'' and a ``tail II'' \citep{dinnbier2020a}. The distribution of the bound population of stars in the oldest cluster NGC\,6774 can be described with a triaxial ellipsoid. The secular relaxation process in NGC\,6774 results in significant mass loss, which leads to the formation of tidal-tail-like structures beyond the tidal radius \citep[see Figure~\ref{fig:3d2} and][]{yeh2019}, and the escape velocity is consequently greatly reduced. A phase of global evaporation must have taken place.
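As a simple cross-check of the ellipsoid fitting, the semi-axes and axis ratios can also be approximated from the eigenvalues of the position covariance matrix of the bound members (a PCA-style estimate). The sketch below is illustrative only: the rms extents differ from the fitted semi-axes by an overall scale factor, and the classification thresholds merely mimic the ranges quoted above.
\begin{verbatim}
import numpy as np

def shape_from_covariance(xyz):
    """PCA-style shape estimate for bound members.

    xyz : (n, 3) positions relative to the cluster centre [pc].
    Returns rms semi-axes (a, b, c), axis ratios (b/a, c/a), a crude
    shape label, and the direction of the major axis.
    """
    eigval, eigvec = np.linalg.eigh(np.cov(xyz, rowvar=False))
    c_, b_, a_ = np.sqrt(eigval)        # eigh sorts eigenvalues ascending
    ba, ca = b_ / a_, c_ / a_
    if abs(ba - ca) < 0.1:              # b ~ c     : prolate
        label = "prolate"
    elif ba > 0.8:                      # a ~ b > c : oblate
        label = "oblate"
    else:
        label = "triaxial"
    return (a_, b_, c_), (ba, ca), label, eigvec[:, -1]
\end{verbatim}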
Internal stellar dynamics, such as two-body relaxation, tend to produce an isotropic velocity distribution. Therefore, the core of an OC becomes more spherical as it evolves. \citet{chen2004} indeed noted that the projected flattening of OCs decreases as clusters grow older. The axis ratios $b/a$ and $c/a$ provide appropriate tools to investigate this phenomenon, since they probe the bound region of OCs where internal dynamical processes dominate. However, among the 13 target clusters, no correlation appears to exist between $b/a$ and age. One may expect a decreasing trend between elongation and age if star clusters inherit their elongated shape from their parent GMC. As OCs evolve toward older age, their initial shape is ``forgotten'' as internal relaxation processes increase the sphericity of the clusters, especially the shape of the bound region. This process continues until the time when evaporation becomes the dominant process in the evolution of the cluster. A larger sample of OCs is required to further quantify the relation between morphology and cluster dynamics. Additional work is also needed to further deepen our understanding of the evolution of the shapes of OCs as they reside in the Galactic field. Numerical modelling will be a useful tool to obtain a theoretical benchmark in the future. \section{Dynamical states of open clusters}\label{sec:dynamics} \subsection{3D velocity and velocity dispersion}\label{sec:dispersion} The observed 3D morphology of OCs is thought to be driven by cluster dynamics. However, few studies have been carried out to investigate the relationship between cluster morphology and stellar dynamics. In this study, we connect the 3D morphology and the dynamical state of open clusters for the first time. We use PMs and RVs from {\it Gaia} EDR\,3, and RVs from the literature, as discussed in Section~\ref{sec:gaia_member}. Considering the extended structures in the target clusters, we adopt the median position of the members in each cluster as the cluster center, and we use the average velocity in each cluster as the origin of the reference frame, the values of which are listed in Table~\ref{tab:general_xyz_uvw}. We present the 3D velocity vectors of the member stars superposed on the 3D spatial positions (relative to that of the cluster center) in Figures~\ref{fig:3d_vel1} and~\ref{fig:3d_vel2}. The tidal radii and the projections of the $a$, $b$ and $c$ axes of the fitted ellipsoids of each cluster are overplotted. The majority of the members with 3D velocity measurements are located within the tidal radius. Members with 3D velocities that differ by more than 2$\sigma$ from the mean value are excluded from the velocity vector plots. These high-velocity stars all have large RVs. They are most likely binary candidates \citep[see, e.g.,][for details]{kouwenhoven2008}, and are also located on the binary sequence in the CMD. One star with an extraordinary RV is a blue straggler candidate in NGC\,6774. Its peculiar velocity may have originated from a close encounter or from a merger event. Figures~\ref{fig:3d_vel1} and~\ref{fig:3d_vel2} show that the directions of the velocity vectors of a large number of members align with the major axis of the fitted ellipsoid, which coincides with the direction of elongation. All clusters, from young to old, are expanding, as the majority of members are seen to move away from the cluster center. Expansion in young clusters is thought to be driven by gas expulsion \citep{baumgardt2007, dinnbier2020a, dinnbier2020b, pang2020}.
After gas expulsion, member stars expand radially and therefore reduce the depth of the gravitational potential wells of the OCs. \begin{figure*}[p] \centering \includegraphics[angle=0, width=1.\textwidth]{sy_vectors1_new_2sigma_edr3.pdf} \caption{ The relative 3D velocity vectors for members of eight target clusters, projected onto the $X$-$Y$ and $Y$-$Z$ planes. The blue vectors represent the velocities of member stars, relative to the mean motion of each cluster. The center of each cluster is indicated with the (+) symbol. Black circles denote the tidal radii. The scale of the velocity vectors is indicated in the bottom-left corner of each panel. } \label{fig:3d_vel1} \end{figure*} \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=1.\textwidth]{sy_vectors2_new_2sigma_edr3.pdf} \caption{ The relative 3D velocity vectors for members of five target clusters, projected onto the $X$-$Y$ and $Y$-$Z$ planes. Colors and symbols are identical to those in Figure~\ref{fig:3d_vel1}. } \label{fig:3d_vel2} \end{figure*} In order to quantitatively analyse the dynamical states of our target OCs, we compute the RV and PM dispersions of the bound members in each cluster. The likelihood function for the RV distribution is a combination of two Gaussian components, one for cluster members and the other for field stars \citep[equations 1 and 8 in][]{cottaar2012}. The Gaussian distribution of cluster members is broadened by the orbital motions of unresolved binary systems, and also by the uncertainties in the RV measurements. To model the broadening introduced by binary stars, we adopt distributions of orbital parameters that are characteristic of solar-type stars in the Galactic field: (i) a log-normal orbital period distribution for the binaries \citep{raghavan2010}; (ii) a flat mass-ratio distribution between $q=0$ and $q=1$ \citep{duchene2013}; and (iii) a flat eccentricity distribution between $e=0$ and the maximum value \citep{parker2009}. These adopted binary parameters are computationally efficient, yet remain comparable to more realistic models \citep[e.g.,][]{marks2011b,marks2011}. Moreover, as pointed out by \citet{bravi2018}, the selected binary properties do not significantly affect the final fitted results. The likelihood function of the PM distribution ($\mu_\alpha \cos\delta$ and $\mu_\delta$) is described by two Gaussian profiles \citep[Equations 1--3 in][]{pang2018}: one component for the cluster members and one for the field (the latter accounting for 5\% of the stars). We use the Markov Chain Monte Carlo (MCMC) method to obtain the best-fit values and the corresponding uncertainties for the RV and PM velocity dispersions (columns 9--11 in Table~\ref{tab:morph_kin}). The derived velocity dispersions can be used to quantify the rate of expansion of each cluster: clusters with a higher velocity dispersion tend to expand faster. Based on the half-mass radius, $r_{\rm h}$ (Table~\ref{tab:general}), and the 3D velocity dispersion of each cluster (Table~\ref{tab:morph_kin}), we estimate their dynamical masses using Equation~1 in \citet{fleck2006}. The resulting dynamical masses (M$_{\rm dyn}$) range from 263\,M$_\sun$ to 3368\,M$_\sun$ (Table~\ref{tab:general}, column~8), higher than the estimated photometric masses of 101\,M$_\sun$ to 1973\,M$_\sun$ (column~7 in Table~\ref{tab:general}).
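The dynamical-mass estimate can be illustrated with the generic virial relation $M_{\rm dyn} = \eta\,\sigma^2 r_{\rm h}/G$; the sketch below is a minimal version of this calculation, in which the prefactor \texttt{eta} is an illustrative placeholder of order a few and not necessarily the coefficient adopted in Equation~1 of \citet{fleck2006}.
\begin{verbatim}
import astropy.units as u
from astropy.constants import G

def dynamical_mass(sigma_3d_kms, r_h_pc, eta=2.5):
    """Virial mass estimate M_dyn = eta * sigma^2 * r_h / G.
    eta is a structure-dependent prefactor; the value here
    is illustrative only."""
    sigma = sigma_3d_kms * u.km / u.s
    r_h = r_h_pc * u.pc
    return (eta * sigma**2 * r_h / G).to(u.Msun)

# e.g. sigma_3D = 1.5 km/s and r_h = 4 pc give a few 10^3 Msun:
print(dynamical_mass(1.5, 4.0))
\end{verbatim}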
This discrepancy between dynamical and photometric masses is not resolved even when we account for the mass of faint stars below the {\it Gaia} EDR\,3 detection limit by extrapolating the mass function \citep[see the demonstration in][]{tang2019}. This suggests that the majority of the clusters may be super-virial and will keep expanding as their kinetic energy exceeds their gravitational potential energy. We display the dependence of the ratio of dynamical mass to photometric mass on cluster age in Figure~\ref{fig:mass_ratio}. The ratio M$_{\rm dyn}$/M$_{\rm cl}$ increases as clusters grow older, especially after 300\,Myr. The oldest cluster in the sample, NGC\,6774, has the highest ratio of M$_{\rm dyn}$/M$_{\rm cl}$, further confirming its state of disruption. On the contrary, the youngest cluster in the sample, NGC\,2232, has the lowest ratio (M$_{\rm dyn}$/M$_{\rm cl}\sim1.2$), which is consistent with the scenario suggested by \citet{pang2020} that this cluster is likely undergoing a phase of revirialization. The cluster probably reaches its maximal expansion before it re-collapses to form a virialised cluster \citep[e.g.,][]{kroupa2001}. \begin{figure}[tb!] \centering \includegraphics[angle=0, width=1.\columnwidth]{mass_ratio_age.pdf} \caption{ The relation between the ratio of dynamical mass to photometric mass, M$_{\rm dyn}$/M$_{\rm cl}$, and cluster age. The large values of the ratio suggest that most of the clusters may be super-virial. } \label{fig:mass_ratio} \end{figure} As suggested by simulations \citep{baumgardt2007}, stellar members acquire highly anisotropic velocity dispersions after rapid gas expulsion, and nearly isotropic velocity dispersions after slow gas removal. Some degree of velocity anisotropy is observed in the target clusters with detected elongated structures (NGC\,2547, NGC\,2451B, NGC\,2516, NGC\,6633, NGC\,6774, Blanco\,1, Coma Berenices) but not in NGC\,2232 (Table~\ref{tab:morph_kin}, columns 9--11). Velocity anisotropy may also originate from global rotation in star clusters. Global rotation is not uncommon; it was recently discovered in the open cluster Tr\,15 \citep{kuhn2019}. Although OCs may inherit angular momentum from their parental GMCs, from merging substructures, or from merging clusters \citep[e.g.,][]{priyatikanto2016, zhong2019, darma2019}, few attempts have been made to measure rotation in OCs. In contrast, rotation has been measured in many globular clusters \citep[e.g.,][]{bianchini2018,kamann2018}. According to the $N$-body simulations of \citet{einsel1999} and \citet{hong2013}, rotation enhances mass loss and therefore speeds up the disruption of clusters. Global rotation speeds are typically much lower in OCs than in globular clusters, down to the sub-km\,s$^{-1}$ level. Higher-resolution spectroscopy is required to quantify the rotational properties of our target OCs. \subsection{Comparison with numerical models}\label{sec:nbody} In order to determine the properties of the gas expulsion process of the 13 target clusters in this study, we carry out $N$-body simulations of OCs with four different sets of initial conditions, and compare our numerical findings with the observations. \subsubsection{Initial conditions} The initial mass $M_{\rm cl}(0)$ of the models is chosen such that the cluster mass $M_{\rm cl}$ at an evolved age $t$ is comparable to the masses of the observed clusters.
Accordingly, we adopt $M_{\rm cl}(0) = 250\,{\rm M_\odot}$, $500\,{\rm M_\odot}$, $1000\,{\rm M_\odot}$, $2000\,{\rm M_\odot}$, and $4000\,{\rm M_\odot}$, consistent with simulations of cluster formation in molecular clouds \citep{bate2012}. All the cluster models are initialized as Plummer models in virial equilibrium \citep{Aarseth1974a}, characterised by the initial cluster mass $M_{\rm cl}(0)$ and the half-mass radius $r_{\rm h}$. We note that a much larger variety of initial conditions, including non-spherically symmetric substructures, is possible \citep[e.g.][]{Moeckel2010,Fujii2015a}. However, such substructure typically disappears quickly \citep[e.g., through feedback from photoionization;][]{gonzalez2020}, as the cluster relaxes and attains a spherically symmetric configuration \citep{kroupa2001,Goodwin2004,Sills2018,Banerjee2018}. This process occurs prior to the onset of gas expulsion in our models. Since the uncertainty in the gas-expulsion mechanism likely has a much more prominent impact on the cluster dynamics than the initial substructure does, we focus on the gas expulsion mechanism in spherical systems in the present work. Stellar masses are sampled from the \citet{Kroupa2001a} initial mass function (IMF), with a minimum mass of $m_{\rm min}=0.08\,{\rm M}_\odot$ and a maximum mass obtained from the $m_{\rm max} - M_{\rm cl}$ relation of \citet{Weidner2013}, where $m_{\rm max}$ is the maximum mass of a star formed in a cluster of mass $M_{\rm cl}$. We assume a binary fraction of 100\% among member stars \citep[see, e.g.,][]{goodwin2005}, with initial distributions of the orbital elements (periods or binding energies, mass ratios, and eccentricities) derived by unifying the observed populations of very young systems, which are at different stages of dynamical processing, with the Galactic field population \citep{kroupa1995a, kroupa1995b, kroupa2001, marks2011, belloni2017}. The clusters move on circular orbits through the Galaxy at a galactocentric radius $d_{\rm GC} = 8$\,kpc, with an orbital speed of 220~km~s$^{-1}$. The simulated clusters are evolved until $t=100$\,Myr; dynamical evolution is most prominent during this time span, and this age range covers the ages of two thirds of our target clusters. All the present models are initialized as embedded star clusters, i.e. containing both stellar and gaseous components. Over time, the gas is removed from the cluster by feedback from massive stars. Technical details of the $N$-body simulations are described in Appendix~\ref{sec:apx_nbody}. In the first model,~S0, the initial half-mass radius $r_{\rm h}$ of the cluster is related to the initial cluster mass $M_{\rm cl}(0)$ following the relation of \citet{Marks2012}. These initial conditions generate star clusters of rather compact sizes ($r_{\rm h}\approx 0.3$\,pc for clusters with initial mass $M_{\rm cl}(0) = 4000\,{\rm M_\odot}$). Model~S0 has a star formation efficiency (SFE) of $1/3$ and a gas expulsion timescale $\tau_{\rm M} = 0.03$\,Myr, which removes the gas on a time-scale shorter than the stellar crossing time. In other words, the gas expulsion is impulsive. No primordial mass segregation is present in this model. The clusters in the second model,~S5, are identical to those of model~S0, apart from the presence of primordial mass segregation. Mass segregation is generated using the method of \citet{Subr2008}, with an initial mass segregation index of $S = 0.5$.
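As an illustration, the stellar-mass sampling can be realized by inverse-transform sampling of the two-part \citet{Kroupa2001a} IMF. The sketch below assumes the canonical slopes (1.3 below $0.5\,{\rm M}_\odot$ and 2.3 above) and, for simplicity, a fixed upper mass limit in place of the $m_{\rm max} - M_{\rm cl}$ relation of \citet{Weidner2013}.
\begin{verbatim}
import numpy as np

def sample_kroupa(n, m_min=0.08, m_break=0.5, m_max=100.0, rng=None):
    """Inverse-transform sampling of the two-part Kroupa IMF:
    dN/dm ~ m^-1.3 (m_min <= m < m_break), m^-2.3 (m >= m_break),
    with the two segments matched at m_break."""
    rng = rng or np.random.default_rng()
    a1, a2 = 1.3, 2.3
    k1, k2 = 1.0, m_break**(a2 - a1)   # continuity at m_break
    # integrals of k * m^-a over each segment
    I1 = k1 * (m_break**(1 - a1) - m_min**(1 - a1)) / (1 - a1)
    I2 = k2 * (m_max**(1 - a2) - m_break**(1 - a2)) / (1 - a2)
    u = rng.uniform(0, I1 + I2, size=n)
    m = np.empty(n)
    lo = u < I1   # draws falling in the low-mass segment
    m[lo] = (m_min**(1 - a1) + (1 - a1) * u[lo] / k1)**(1 / (1 - a1))
    m[~lo] = (m_break**(1 - a2)
              + (1 - a2) * (u[~lo] - I1) / k2)**(1 / (1 - a2))
    return m

masses = sample_kroupa(1000, rng=np.random.default_rng(42))
print(f"mean mass = {masses.mean():.2f} Msun")  # ~0.6 Msun expected
\end{verbatim}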
The clusters in the third scenario (model~AD) have a longer gas expulsion time-scale of $\tau_{\rm M} = 1$\,Myr. Thus, the gas is removed on a time-scale longer than the stellar crossing time; i.e., the gas expulsion is adiabatic. Adiabatic gas expulsion typically impacts the cluster less than impulsive gas expulsion at the same SFE \citep[e.g.,][]{baumgardt2007,dinnbier2020b}. Clusters in the fourth scenario (model~WG) contain no gas, i.e. $M_{\rm gas}(0) = 0$ and $\mathrm{SFE} = 1$, and have a large initial $r_{\rm h}$ of $1$\,pc. \subsubsection{Comparison with target clusters} Figure~\ref{fig:rh_mass_nbody} shows the relationship between the cluster half-mass radius $r_{\rm h}$, the total mass $M_{\rm cl}$, and the age $t$ (colour scale). Immediately after gas expulsion, models~S0 and~S5 increase their $r_{\rm h}$ substantially. They then revirialize, at which stage both $r_{\rm h}$ and $M_{\rm cl}$ decrease. Primordially mass-segregated clusters (model~S5) expand somewhat less. The evolution of cluster mass and half-mass radius in both models~S0 and~S5 is in agreement with the majority of the target clusters (triangles); only two clusters (IC~2391 and IC~2602) have radii that are too compact for their age and mass. In contrast, the properties of models AD and WG are inconsistent with many of the target clusters. The agreement of models S0 and S5 with the target clusters more massive than $\log_{10}(M_{\rm cl}(t))\ga2.4$ suggests that these clusters formed with a relatively low SFE (around 1/3), and with gas expulsion operating on a time-scale shorter than the stellar crossing time. A similar conclusion was reached in earlier work \citep{Banerjee2017} for more massive star clusters ($\log_{10}(M_{\rm cl}(t)) \gtrsim 4$); our results extend it down to clusters of mass $\approx 250\,{\rm M}_\odot$. Another piece of evidence for rapid gas expulsion in these clusters is their dynamical masses being higher than their photometric masses. As mentioned above, the state of the two most compact clusters (IC~2391 and IC~2602) appears to be in disagreement with models S0 and S5. However, these two clusters are consistent with model AD. It is impossible to draw a firm conclusion based on only two star clusters, but the data may suggest that the gas expulsion time-scale transitions from adiabatic to impulsive at a cluster mass of $\approx 250\,{\rm M}_\odot$, while the SFE does not change substantially. A decrease of the gas expulsion time-scale with increasing cluster mass is expected theoretically: the maximum stellar mass in a cluster increases with the mass of the cluster \citep[see][and references therein]{Weidner2013}, so the total energy of photoionising feedback increases with cluster mass. The reduction of the gas expulsion time-scale with increasing cluster mass is also reported in the hydrodynamic simulations of \citet{dinnbier2020c} and in the observational analysis of \citet{Pfalzner2020}. \begin{figure*}[p] \centering \includegraphics[angle=0, width=1.\textwidth]{radius_evolv_4p-2.pdf} \caption{ Evolution of the cluster half-mass radius $r_{\rm h}$ versus the cluster mass $M_{\rm cl}$ in the $N$-body simulations. Each panel features a different set of cluster initial conditions, as indicated by the model name in the upper-right corner of each panel. The evolution of the clusters is shown with the curves, where the thickness of a curve represents the initial cluster mass, and its colour represents the age (shown in the colourbar).
The observed 13 target clusters are represented with triangles, again with colors indicating their ages. Note that models~S0 and~S5 are consistent with most of the observational data (particularly with clusters of $M_{\rm cl} \gtrsim 250\,{\rm M}_\odot$), while model~AD is consistent only with clusters of $M_{\rm cl} \lesssim 250\,{\rm M}_\odot$. Model~WG is inconsistent with most of the observed clusters. } \label{fig:rh_mass_nbody} \end{figure*} \subsection{Mass segregation}\label{sec:mass_seg} Mass segregation is commonly found in embedded clusters and young star clusters, and can be a consequence of internal dynamical relaxation, violent relaxation and/or primordial mass segregation \citep{hillenbrand1998,allison2009,pang2013,pavlik2019}. The youngest cluster among our targets, NGC\,2232, does not show any evidence of mass segregation, based on measurements of the mean mass in different annuli \citep{pang2020}. Two intermediate-age clusters in our sample, however, do show mass segregation. Coma Berenices shows evidence of mass segregation that was quantified by comparing the mass distributions in different annuli \citep{tang2018}, and Blanco\,1 shows evidence of mass segregation obtained using the $\Lambda$-method \citep{zhang2020}. The $\Lambda$-method, developed by \citet{allison2009}, is a tool to quantify the degree of mass segregation in a star cluster without the need to determine the cluster center. The $\Lambda$-method compares the minimum path length connecting the $N_{\rm massive}$ most massive members of the cluster ($l_{\rm massive}$) with the minimum path length connecting $N_{\rm normal}$ randomly selected members ($l_{\rm normal}$). The minimum path length $l$ is calculated from the minimum spanning tree (MST) of the sample of stars, which is obtained using the Python package \texttt{MiSTree} \citep{naidoo2019}. When the $N_{\rm massive}$ most massive stars are segregated, the path length for this set of stars, $l_{\rm massive}$, is smaller than that for a set of randomly selected stars ($l_{\rm normal}$). Previous studies have applied the $\Lambda$-method to star clusters using the observed 2D positions of stars in the clusters; examples include the studies of NGC\,3603 by \citet{pang2013} and of Blanco\,1 by \citet{zhang2020}. However, the 2D projection can overestimate the degree of segregation by projecting background stars that are located behind the cluster center into the inner region. With the distance-corrected 3D spatial positions of the target cluster members, we are able to improve the determination of the degree of mass segregation, in 3D space. The significance of the mass segregation is measured using the ``mass segregation ratio'' \citep[$\Lambda_{\rm MSR}$,][]{allison2009}, which is defined as \begin{equation}\label{eq:MSR} \Lambda_{\rm MSR} = \frac{\langle l_{\rm normal} \rangle}{l_{\rm massive}} \pm \frac{\sigma_{\rm normal}}{l_{\rm massive}} \quad , \end{equation} where $\sigma_{\rm normal}$ is the standard deviation of $l_{\rm normal}$ over 100 random sets, and $\langle l_{\rm normal} \rangle$ is the average length over the same sets. Figure~\ref{fig:mst} presents $\Lambda_{\rm MSR}$ for the clusters, based on both the 3D and 2D positions of the members in each cluster.
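A minimal sketch of the $\Lambda_{\rm MSR}$ computation is given below, using SciPy's minimum-spanning-tree routine in place of \texttt{MiSTree}; the total MST edge length serves as the path length $l$, and the position and mass arrays are mock placeholders.
\begin{verbatim}
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_length(points):
    """Total edge length of the Euclidean minimum spanning tree."""
    return minimum_spanning_tree(squareform(pdist(points))).sum()

def lambda_msr(positions, masses, n_massive=12, n_sets=100, rng=None):
    """Mass segregation ratio (Allison et al. 2009):
    <l_normal>/l_massive, with sigma_normal/l_massive as error."""
    rng = rng or np.random.default_rng()
    order = np.argsort(masses)[::-1]
    l_massive = mst_length(positions[order[:n_massive]])
    l_normal = np.array([
        mst_length(positions[rng.choice(len(masses), n_massive,
                                        replace=False)])
        for _ in range(n_sets)])
    return l_normal.mean() / l_massive, l_normal.std() / l_massive

# Toy usage: massive stars concentrated toward the core
rng = np.random.default_rng(1)
pos = rng.normal(scale=2.0, size=(300, 3))
mass = np.exp(-np.linalg.norm(pos, axis=1)) + rng.uniform(0, 0.1, 300)
lam, err = lambda_msr(pos, mass, rng=rng)
print(f"Lambda_MSR = {lam:.2f} +/- {err:.2f}")  # > 1: segregation
\end{verbatim}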
As can be seen from Figure~\ref{fig:mst}, robust evidence of mass segregation is found in six clusters: NGC\,2422 (segregated down to 3.6\,M$_\odot$), NGC\,6633 (2.2\,M$_\odot$), NGC\,6774 (1.6\,M$_\odot$), NGC\,2232 (2.1\,M$_\odot$), Blanco\,1 (1.5\,M$_\odot$) and Coma Berenices (1.1~M$_\odot$), consistent with previous works \citep{prinstinzao2003,kraus2007,moraux2007,tang2018,yeh2019}. In Coma Berenices, the most massive stars are not the most concentrated; they have likely been expelled from the cluster center via close encounters with binary stars \citep[e.g.,][]{oh2015,oh2018}. No evidence of mass segregation is found for the other seven target clusters. The 2D projected distances reduce the value of $\langle l_{\rm normal} \rangle$ in Equation~\ref{eq:MSR} by projecting stars that are located further away from the cluster center onto the inner region. This results in a decrease in $\Lambda_{\rm MSR}$. Therefore, the 2D MST will most likely underestimate the degree of mass segregation in a star cluster (Figure~\ref{fig:mst}). \begin{figure*}[tb!] \centering \includegraphics[angle=0, width=1.\textwidth]{MST_allOCs_edr3.pdf} \caption{ The ``mass segregation ratio'' ($\Lambda_{\rm MSR}$) for the 60 most massive members of each target cluster, with a bin size of 12 stars. The dashed line ($\Lambda_{\rm MSR} = 1$) indicates an absence of mass segregation. Increasing values of $\Lambda_{\rm MSR}$ indicate a more significant degree of mass segregation. The error bars indicate the uncertainties obtained from a hundred realizations of $\emph{l}_{\rm normal}$. The bin size is selected to avoid large stochastic errors in $\langle l_{\rm normal} \rangle$ for small $N_{\rm MST}$.} \label{fig:mst} \end{figure*} \section{Summary}\label{sec:summary} Utilizing high-precision {\it Gaia} EDR\,3 astrometry and photometry, we apply the cluster-finding method \textsc{StarGO} to identify member stars of 13 target clusters (IC\,2391, IC\,2602, IC\,4665, NGC\,2422, NGC\,2516, NGC\,2547, NGC\,6633, NGC\,6774, NGC\,2451A, NGC\,2451B, NGC\,2232, Blanco\,1, and Coma Berenices) in the 5D phase space of stars ($X, Y, Z$, $\mu_\alpha \cos\delta, \mu_\delta$). The selected members are cross-matched with the member catalogs of \citet{cantat2020} and \citet{liu2019}. The ages obtained from isochrone fitting for each cluster agree with those of previous studies. Altogether we have 13 target clusters with members determined via the same method, covering an age range from 25\,Myr to 2.65\,Gyr, and located in the solar neighbourhood up to a distance of 500\,pc. We analyze the 3D morphology and dynamics of these 13 clusters, and quantify their morphology and dynamical state. $N$-body simulations are carried out to determine which gas expulsion scenario best describes the history of these star clusters. Our findings can be summarized as follows. \begin{enumerate} \item We recovered the individual distance of each candidate member from its parallax by means of a Bayesian method. The uncertainties in the corrected distances are estimated with simulations of spherical clusters with a uniform spatial distribution of members, and of clusters with elongated shapes. The estimated distance for a uniform-density spherical model has an uncertainty of 3.0\,pc when the cluster is located at 500\,pc. Elongated models suffer from larger uncertainties; notably, when the elongation is along the line of sight, the uncertainty in the distance reaches 6.3\,pc at a distance of 500\,pc.
\item We have determined the 3D morphology of the 13 target OCs, using corrected positions in Cartesian heliocentric coordinates ($X$, $Y$, and $Z$). An ellipsoid model is fitted to the spatial distribution of the stars within the tidal radius of each cluster. The semi-major axis $a$, semi-intermediate axis $b$, and semi-minor axis $c$ of the ellipsoid are obtained from the fitting. We use the axis lengths $a$, $b$, $c$ and the axis ratios $b/a$ and $c/a$ as morphological parameters to quantify the 3D distribution of the stellar population within the tidal radii of the OCs. We take the direction of the major axis as the direction of the morphological elongation of each cluster. Most clusters have semi-major axes $a$ parallel to the Galactic plane or slightly inclined with respect to it. A notable exception is Blanco\,1, for which $a$ is closer to the vertical ($Z$) direction. The shapes of the distribution of the stellar population within the tidal radius of five clusters (NGC\,2547, NGC\,2516, NGC\,2451A, NGC\,2451B, and NGC\,2232) resemble oblate spheroids, while those of five other clusters (IC\,2602, IC\,4665, NGC\,2422, Blanco\,1 and Coma Berenices) resemble prolate spheroids. The shapes of the stellar populations within the tidal radii of the remaining three clusters (IC\,2391, NGC\,6633, NGC\,6774) are well described by triaxial ellipsoids. \item A significant elongation is observed for the bound regions of NGC\,2422, NGC\,2547, NGC\,6633 and Blanco\,1. Considering that the uncertainty in the corrected distance is much smaller than the sizes of the elongated structures, the elongations measured for these clusters are robust. Among these, Blanco\,1 is notable in the sense that its elongated shape is significantly inclined (by $78^\circ$) with respect to the Galactic plane. The elongation of the bound region might be driven by the evaporation of stars via the two Lagrange points. The 3D morphology of Blanco\,1 might be the result of expansion due to fast gas expulsion and subsequent virialisation. Elongated filament-like substructures are found in three young clusters (NGC\,2232, NGC\,2547 and NGC\,2451B), while tidal-tail-like substructures are found in the older clusters NGC\,2516, NGC\,6633 and NGC\,6774. Giant tidal tails are again confirmed in Blanco\,1 and Coma Berenices with {\it Gaia} EDR\,3. \item We combine {\it Gaia} EDR\,3 PMs and RVs, together with RVs from \citet{jackson2020} and \citet{bailey2018}, to measure the 3D velocities of the stellar members of the 13 target clusters. All clusters show evidence of expansion in their 3D velocity distributions. There is an anisotropy in the velocity dispersion for stars inside the tidal radius, which may be driven by gas expulsion. \item Four sets of $N$-body simulations are carried out to determine the properties of the gas expulsion process that has occurred in the target clusters: (i) a non-mass-segregated model with impulsive gas expulsion; (ii) a mass-segregated model with impulsive gas expulsion; (iii) a model with adiabatic gas expulsion; and (iv) a model without gas. All the target clusters with masses larger than 250\,M$_\odot$ are consistent with models of rapid (impulsive) gas expulsion with a rather low SFE of $\approx 1/3$, both with and without primordial mass segregation. The target clusters with masses smaller than 250\,M$_\odot$ are consistent with models of slow (adiabatic) gas expulsion with an SFE of $\approx 1/3$.
Although the results for clusters with masses above 250\,M$_\odot$ appear to be robust, the results for lower-mass clusters are only tentative, as they are based on a sample of only two clusters. If the decrease of the gas expulsion time-scale with increasing cluster mass is confirmed for more clusters in the future, this may point towards a prominent role of feedback from massive stars in the early evolution of star clusters. Models without gas expulsion, i.e., models assuming a star formation efficiency of 100~percent, are not compatible with the data. \item In order to quantify the degree of mass segregation in each cluster, we apply both the 3D and 2D MST methods to the OCs in the sample. Six of the OCs in our sample are found to show mass segregation: NGC\,2422, NGC\,6633, NGC\,6774, NGC\,2232, Blanco\,1, and Coma Berenices. \end{enumerate} Our study of these 13 open clusters is a pioneering attempt at the quantitative study of cluster morphology and its relation to the formation and early evolution of star clusters. The methods developed in this work can be applied to a much larger sample of OCs at different locations, using data from {\it Gaia} EDR\,3 and DR\,3, with the aim of better understanding how the 3D morphology of open clusters depends on their location in the Galaxy. \acknowledgments We wish to express our gratitude to the anonymous referee for providing comments and suggestions that helped to improve the quality of this paper. X.Y.P. is grateful for the financial support of the research development fund of Xi'an Jiaotong-Liverpool University (RDF-18--02--32). This study is supported by the XJTLU Undergraduate Summer Internship in Physics (X-SIP). X.Y.P. acknowledges two grants of the National Natural Science Foundation of China, Nos 11503015 and 11673032. M.B.N.K. expresses gratitude to the National Natural Science Foundation of China (grant No. 11573004) and the Research Development Fund (grant RDF-16--01--16) of Xi'an Jiaotong-Liverpool University (XJTLU). Franti\v{s}ek Dinnbier and Pavel Kroupa acknowledge support from the Grant Agency of the Czech Republic under grant number 20-21855S, as well as support through the DAAD East-European partnership exchange programme. M.P.'s contribution to this material is based upon work supported by Tamkeen under the NYU Abu Dhabi Research Institute grant CAP$^3$. This work made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). This study also made use of the SIMBAD database and the VizieR catalogue access tool, both operated at CDS, Strasbourg, France. \software{ \texttt{Astropy} \citep{astropy2013,astropy2018}, \texttt{SciPy} \citep{millman2011}, \texttt{TOPCAT} \citep{taylor2005}, and \textsc{StarGO} \citep{yuan2018} } \clearpage
\section{Introduction} \label{s:intro} Much has been published on binary statistics, and much of this has been concerned with the statistics of spectroscopic binaries (SBs) (e.g. Goldberg, Mazeh \& Latham 2003; Halbwachs et al. 2003; Boffin, Cerf \& Paulus 1993; Boffin, Paulus \& Cerf 1992). Among the statistics discussed, three of the most important have been the distributions of period ($P$), primary mass ($m_1$) and mass-ratio ($q\,=\,m_2/m_1$, where the primary in this paper is always taken to be the more massive star). These three parameters in particular are important as they present a set of variables that suffice to describe the principal properties of a binary system and its evolutionary path. Many papers have also been published specifically on the mass-ratio distributions of SBs (e.g. Hogeveen 1991; Trimble 1974, 1978, 1987, 1990). The question of whether there is a peak in the $q$ distribution near $q = 1$ has been a matter of debate for some time, on which we hope to shed some light in this paper. For instance, the recent paper of Goldberg et al. (2003) found a distinct bimodal distribution; however, unlike ours, their sample was not confined to a particular volume and so might be expected to exhibit biases related to this fact. The Initial Mass Function (IMF) of binaries is also of interest and, together with the period distribution, is important for the validation of star formation models. The IMF and $P$ distributions are also important for understanding the chemical evolution of the Galaxy: interacting binary systems have more complex evolutionary pathways by which material can be lost to the Interstellar Medium (ISM) than single stars, leading, for example, to important systematic corrections to predictions of the carbon yield (Tout et al. 1999). Binary-system population synthesis models may then give a more complete idea of the enrichment of chemical elements in the ISM. This kind of modelling has been done before for single stars, but to date studies of this kind for double stars have had to make certain assumptions about the $P$, $m_1$ and $q$ distributions and the IMF. There have also been studies of stars within galactic clusters (e.g. K\"ahler 1999), which are thus magnitude-limited and occupy specific limited volumes, as is the case for our sample, but which constitute very specific subsets of stars all of a particular age. Cluster stars are also in the process of diffusing from the cluster at a significant rate over time-scales of the order of $10^8$ yrs. Hence the statistics of cluster binaries, while interesting in their own right, are not necessarily representative of binaries in general, or of field binaries in particular, and we do not consider them further. What has not been done before, to our knowledge, is a study of the statistics of a distance- and luminosity-limited sample of field binaries that is, as far as is possible with existing data, evolutionarily unbiased and complete. In this paper we derive the distributions of the period $P$, primary mass $m_1$, mass ratio $q = m_2/m_1$ ($m_2$ being the mass of the secondary, so that $q<1$) and the IMF for a distance- and luminosity-limited sample of SBs in the local solar neighbourhood ($d \le 100$\,pc and $M_{\rm{V}} \le 4$), and from there we are able to make some deductions about these same distributions for the general population of field binaries.
Because our sample is luminosity-limited, the only systems we will be missing completely will be those with primary mass $m_1 < 1.1 \, \mathrm{M_\odot}$ on the Main Sequence. This luminosity cut-off makes the sample as unbiased as possible in an evolutionary sense. While the sample is certainly \emph{not} complete to the distance and luminosity limits stated, we believe that it is as complete as it is possible to make it without seriously compromising the sample size, and so without also compromising the conclusions drawn from the study. Although the sample is far from complete in absolute terms, we are able to make an assessment of the incompletenesses and are thus able to attempt to compensate for them. We thus believe that our sample is the best approximation to a volume-limited sample possible with the data currently available (to be truly volume-limited the sample would have to include \emph{all} objects to the specified distance and luminosity limits). \section{The Sample} The spectroscopic binary data for the study was taken from the Eighth Catalogue of Orbital Elements of Spectroscopic Binary Systems (Batten, Fletcher \& MacCarthy 1989), hereafter referred to in this paper as the `Batten' catalogue, supplemented by further data from R.F. Griffin, both published (see the synopsis paper Griffin 2000 and others of that series) and unpublished (private communication), hereafter referred to as `RFG' data. The Batten catalogue was until recently\footnote{The Ninth Catalogue has just appeared (Pourbaix et al. 2004).} the most comprehensive catalogue of SBs available, its selection criterion simply being to include all SB data available at the time of compilation. It thus encompasses a very diverse range of motivations for observation from all the many contributors, and so will contain a variety of selection effects, which are largely unknown apart from the tendency mentioned below to favour shorter-period systems. The inclusion of the RFG data was designed firstly to increase the size of the available dataset (many more objects having been observed since publication of the Batten catalogue in 1989, many of them by Roger Griffin), and secondly to attempt to compensate for the inevitable bias of the Batten catalogue towards shorter-period systems (inevitable because of the difficulties involved in sustaining consistent observing programmes for longer-period systems, as Roger Griffin has been able to do). The Batten catalogue consists of 1469 SBs and the RFG data of 498 SBs. The combined Batten and RFG dataset (after removing duplicated objects) was filtered to a distance of 100\,pc and a limiting absolute magnitude of 4 ($d \le 100$\,pc and $M_{\rm{V}} \le 4$) by correlating entries with the \emph{Hipparcos} catalogue. The correlation was done by Henry Draper catalogue (HD) number, or by coordinates (corrected for precession) if no HD number existed, using the \emph{Hipparcos} parallaxes and apparent magnitudes to calculate distances and absolute magnitudes. The cutoff for absolute magnitude was chosen in such a way that the sample should (to that magnitude) be as complete and homogeneous as possible, while at the same time having a sufficient number of systems to derive reasonable statistics. The chosen limiting absolute magnitude of 4.0 translates to an apparent magnitude of 7.5 at 50\,pc and to 9.0 at 100\,pc. The \emph{Hipparcos} catalogue is complete to $m_{\rm V}= 7.3$, and in some areas down to $m_{\rm V} = 9.0$ (Perryman et~al.
1997; Schr\"oder \& Pagel 2003), so all the SBs in our initial dataset of 1803 distinct stars that are closer than 50\,pc (more precisely 46\,pc) and brighter than $M_{\rm{V}} = 4$ should have been identified via the comparison with \emph{Hipparcos}, and a reasonable fraction of those closer than 100\,pc should also have been identified. A fainter limit would have created an inconsistent sample that, while having data from more systems at close distances, would be missing a lot of fainter systems at greater distances; it is already clear from the number in our final sample that many of the systems in the SB catalogues lie beyond 100 pc. The fainter limit would have led to a much steeper fall-off of completeness with distance than we find in our chosen sample, creating a sample that was harder to analyse. As mentioned above, because our sample is luminosity-limited with an absolute magnitude limit of ($M_{\rm{V}} \le 4$), the only systems we will be missing completely will be those with primary mass $m_1 < 1.1 \, \mathrm{M_\odot}$ on the Main Sequence. When filtered in this way, the sample consists of 371 SBs: 145 double-lined SBs (SB2s) and 226 single-lined SBs (SB1s). It is this sample that we work with in the analysis that follows in the rest of the paper. \section{Distributions} \subsection{Period ($P$) distribution} The period distribution was found directly from the spectroscopic binary data. For later use in evolutionary studies the periods were divided into the four categories given in Table~\ref{table:period/evol-cats}. It is intended that the four period categories should correspond roughly to the following four evolutionary scenarios: \begin{enumerate} \item $P\ge 500$ d. \ Systems that will interact, if at all, at a late stage in their evolution, with the primary well on its way up the Asymptotic Giant Branch (AGB). \item 500 d $>P\ge$ 10 d. \ Systems that interact as the primary evolves onto the Red Giant Branch (RGB) or the AGB. \item 10 d $>P\ge$ 1 d. \ Systems that interact on the early RGB or as the primary leaves the Main-Sequence and enters the `Hertzsprung Gap'. \item $P<1$ d. \ Systems that interact while on the late Main Sequence or earlier (contact systems). \end{enumerate} \begin{table} \caption{Period/evolutionary categories. The numbers of SBs are for $d \le 100$\,pc and $M_{\rm{V}} \le 4$.} \begin{tabular}{ccrrr} \hline Category & Period range\,/\,days & SB1s & SB2s & Total SBs\\ \hline (i) & $\;\;\,P\ge$ 500 & 73 & 15 & 88\\ (ii) & 500 $>P\ge$ 10 & 84 & 60 & 144\\ (iii) & 10 $>P\ge$ 1 & 62 & 63 & 125\\ (iv) & $\;\;P\,< $ 1 & 7 & 7 & 14\\ \hline TOTALS & & 226 & 145 & 371\\ \hline \end{tabular} \label{table:period/evol-cats} \end{table} To investigate the behaviour of the observed period distributions within different volumes \emph{within} our 100\,pc sample, the fractions of SBs (of all systems, single and multiple, determined from \emph{Hipparcos}) in the above period categories were determined for volumes of radius 20 to 100\,pc in steps of 2\,pc for $M_{\rm{V}} \le 4$. The absolute magnitude cutoff (the same as for the rest of the study) avoids the increasing incompleteness that would otherwise result with increasing distance (without the cutoff we would include too many faint stars, many of which could reasonably be supposed to be unrecognised binaries). The results are shown in Fig.~\ref{fig:Pdistribs}. The figure demonstrates well the quality of the data to 30\,pc. 
For volumes much smaller than $d = 30$\,pc the data are of little statistical significance due to the low absolute numbers involved (e.g. at 25\,pc there are 12, 11, 8 and 0 systems in $P$ categories (i) to (iv) respectively out of a total of 284 \emph{Hipparcos} objects). The low numbers cause the fractions to vary somewhat erratically below 30\,pc and so are also of little use in determining the trends. The results will be discussed in more detail in Section~\ref{sect:discussion}. Clearly, a significant fraction of the systems are missing, but we can estimate the complete fractions for each category by extrapolation to 0\,pc. Although the data are for SBs, the extrapolations will necessarily be the fractions of \emph{all} binary/multiples (with respect to all stellar objects, single and multiple) as all binaries will be detectable as SBs at 0\,pc. The extrapolation works because the further out in the sample we go, the less likely it is that a binary system is known as an SB. Therefore, as we go in the opposite direction, towards 0\,pc, the fraction of systems that are detectable as SBs increases, and we get closer to the values of the unbiassed fractions. Fitting curves to the data from 100\,pc to 30\,pc, we were thus able to determine what the complete fractions \emph{would} be at $d=0$\,pc. Categories (i) and (ii) were fitted by cubic polynomials and category (iii) by a quadratic curve. The choice of curve was determined by its shape as well as the closeness of the fit: quadratics for categories (i) and (ii) and a cubic for category (iii) all gave curves that changed from convex to concave with distance and hence were not plausible. A cubic fit may be preferred on the grounds that it is what might be expected for number density within a spherical volume (the 100\,pc limit is not large enough for the thickness of the galactic disc to become relevant (Schr\"oder \& Pagel 2003), in which case a quadratic fit might have been expected instead), but cubics are also more susceptible to errors in extrapolation owing to the higher-order term. The levelling out of the curves at larger distances is roughly what one would expect on the basis of the kind of bias expected at different distance regimes. At closer distances an apparent magnitude bias would be expected, as the brighter a system's $m_{\rm V}$ the more likely it would be to be known as an SB, while at larger distances one would expect a random selection bias due to the effectively random basis on which systems are chosen for study, causing the fraction of SBs to tend to a constant value at larger distances. For category (iv) it was not possible, or realistic, to do anything more with the data than to make a linear fit over the same distance range, this category being severely limited by its low absolute numbers. The fits were made over as large a range of distances as possible before low absolute numbers made the data unreliable, in each case from 100\,pc to 30\,pc. Clearly it is difficult to estimate the errors on such extrapolations and the fractions are not necessarily going to be very accurate. However, the fractions given by quadratic fits for categories (i) and (ii), instead of the cubic fits used, can be used to give some idea of the errors involved. In both cases the quadratic fits had lower constants, $D$, and hence lower fractions at 0\,pc. Taking these differences into account, the accuracy of the extrapolations is estimated to be of the order of 10\%.
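The extrapolation step amounts to a least-squares polynomial fit of the fraction-versus-distance data, evaluated at $d=0$; a minimal sketch with mock data follows (the coefficients and scatter below are placeholders, not the fitted values of this paper; note that \texttt{numpy.polyfit} returns coefficients in order of decreasing degree, so the constant term $D$ comes last).
\begin{verbatim}
import numpy as np

# Mock data: SB fraction vs. distance, 30-100 pc in 2 pc steps
d = np.arange(30, 102, 2)                  # pc, 36 data points
frac = (0.14 - 3.6e-3 * d + 3.3e-5 * d**2 - 1.1e-7 * d**3
        + np.random.default_rng(0).normal(0, 2e-3, d.size))

# Cubic fit: frac = A d^3 + B d^2 + C d + D;
# D is the extrapolated complete fraction at 0 pc.
A, B, C, D = np.polyfit(d, frac, deg=3)
print(f"extrapolated fraction at d = 0 pc: {D:.3f}")
\end{verbatim}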
Details for all four categories are given in Table~\ref{table:period_polyfits}. The extrapolations give the following estimates for the complete fractions of SBs in the four period categories (equal to the values of the constant $D$ in Table~\ref{table:period_polyfits}): 0.140, 0.158, 0.0580 and 0.00154, giving a total binary fraction of $0.36 \pm 0.11$ (for stars brighter than $M_{\rm V} = 4$). \begin{table} \caption{Polynomial fits for Period categories: {\em Fraction} $= Ad^3+Bd^2+Cd+D$, where $d$ is distance in pc. There are 36 data points for each category (30\,pc to 100\,pc in steps of 2\,pc). The constant $D$ is also equal to the extrapolated fraction at 0\,pc.} \begin{tabular}{ccccc} \hline & $A$ & $B$ & $C$ & $D $ \\ \hline (i) & $-1.05 \cdot 10^{-7}$ & $3.33 \cdot 10^{-5}$ & $-3.59 \cdot 10^{-3}$ & $0.140$ \\ (ii) & $-1.89 \cdot 10^{-7}$ & $4.72 \cdot 10^{-5}$ & $-4.28 \cdot 10^{-3}$ & $0.158$ \\ (iii) & $0$ & $3.82 \cdot 10^{-6}$ & $-8.40 \cdot 10^{-4}$ & $5.80 \cdot 10^{-2}$\\ (iv) & $0$ & $0$ & $0$ & $1.54 \cdot 10^{-3}$\\ \hline \end{tabular} \label{table:period_polyfits} \end{table} The above completes the more general analysis of the incompleteness of the sample. Selective incompletenesses will be considered in Section~\ref{sect:discussion}. \begin{figure} \includegraphics[scale=0.35,angle=-90]{fig1.eps} \caption{Period distributions of all SBs in the sample for the four period categories given in Table~\ref{table:period/evol-cats}, together with extrapolations to 0\,pc. Fractions to a particular distance are the fractions of \emph{all} systems (single and multiple) with $M_{\rm V} \le 4$ in the \emph{Hipparcos} catalogue to that distance.} \label{fig:Pdistribs} \end{figure} \subsection{Primary-mass ($m_1$) distribution} \label{subsect:Primary-mass distrib} Firstly, the \emph{Hipparcos} parallaxes and apparent visual magnitudes of the systems, $m_{\rm{V}}$, were used to calculate the absolute visual magnitudes, $M_{\rm{V}}$. The primary masses of the SBs were then estimated from $M_{\rm{V}}$ by correcting for an average contribution of the secondary and applying a mass-luminosity relationship. The absolute magnitudes of the primaries, $M_{\rm{V}_1}$, were estimated using the following magnitude offsets from the absolute magnitude of the system, $M_{\rm{V}}$: \begin{eqnarray} M_{\rm{V}_1} = M_{\rm{V}} + 0.50 \mathrm{\ \ (for\ SB2s)}\\ M_{\rm{V}_1} = M_{\rm{V}} + 0.20 \mathrm{\ \ (for\ SB1s)}\label{eqn:SB1_offset} \end{eqnarray} \\ The offsets are necessary as the presence of even a visually unseen companion can make a significant difference to the magnitude of the system (see the discussion of $\zeta$ Aurigae systems below). The offset for SB2s was determined by the clear peak of their $q$ values (Section~\ref{subsect:q distrib}) near 1, with an average $q$ of $\approx 0.84$ (see Fig.~\ref{fig:qSB2and1} in that section). This means that the average SB2 secondary is about 40 per cent less luminous than the primary and so would contribute about 0.5 mag to the system's $M_{\rm{V}}$. (It should be noted in passing that the individual magnitudes of the SB primaries and secondaries are not usually known, even in the case of SB2s. Where the Batten catalogue sometimes quotes two magnitudes, these are in fact the maximum and minimum magnitudes of the system if the apparent magnitude of the system is variable.) The offset for SB1s is a mean figure suggested by two considerations.
Firstly, if the luminosity of the secondary were, on average, 30 per cent of that of the primary (equivalent to a magnitude difference between the primary and the system of 0.3), then the contrast of the secondary's spectral lines would be enough for them to be visible, and the system would be observed as an SB2 rather than an SB1. Secondly, the depths of eclipse in $\zeta$ Aurigae systems (typically also catalogued as SBs; for example, $\zeta$ Aurigae itself is in the Batten catalogue) are equivalent to the secondary contributing 0.1--0.2 mag to the magnitude of the system. (See, for example, the photometry of the January 1989 eclipse of $\tau$ Persei given in Hall et al. 1991 and of the 1988 eclipse of 22 Vulpeculae in Griffin et al. 1993. It should also be noted that the only reason that $\zeta$ Aurigae and 22 Vulpeculae are known as SB2s is that their secondaries are visible in the ultraviolet band. In observations of $\zeta$ Aurigae systems in the visible waveband they are usually only seen as SB1s, as, for example, is 22 Vul, which is given in the Batten catalogue as an SB1.) From these two considerations we therefore adopted an offset of 0.2 mag to account for an average SB1 secondary's contribution to the luminosity of the system. Alternatively, we can argue from the $q$ distribution that results from our calculations. From Fig.~\ref{fig:qSB2and1}, we see that typically $q\simeq0.5$, with a rather large uncertainty because of the flat distribution. Using the same argument as above for SB2s, this would lead to a contribution of about 0.13 to $M_{\rm V}$, which is (within the uncertainties) consistent with the 0.2 offset we have assumed. To test the effect of the offset, we have run the calculation below again with a zero offset, and the results are qualitatively similar (see Section~\ref{subsect:q distrib}). A number of different mass-luminosity relationships were used to determine $m_1$ from $M_{\rm{V}_{1}}$ according to the evolutionary status of the primary. The evolutionary status was determined from the primary's location on an Hertzsprung-Russell Diagram (HRD) using the $B\!\!-\!\!V$ colour index from the \emph{Hipparcos} catalogue and the value of $M_{\rm{V}_1}$ already calculated. The HRD was divided into a number of regions based on the characteristic regions used by Schr\"oder (1998) and Schr\"oder \& Sedlmayr (2001) (see Fig.~\ref{fig:evolHRD}). For main sequence stars the mass-luminosity relationship used was for stars half-way through their H core-burning phase, obtained from detailed theoretical stellar models computed with the well-tested evolution code of Peter Eggleton (Pols et al. 1998). The other mass-luminosity relationships are from Schr\"oder \& Sedlmayr (2001) and Schr\"oder (1998).
We thus have the following set of equations for $m_1$: \\ \\For Main Sequence primaries, both \begin{eqnarray} B\!-\!V & < & (M_{\rm{V}_1} + 1.50)/5.16\\ \&\quad\quad M_{\rm{V}_1} & \geq & -1.50 \end{eqnarray} and \begin{eqnarray} B\!-\!V & < & 0\\ \&\quad\quad M_{\rm{V}_1} & < & -1.50 \end{eqnarray} use the following mass-luminosity relation \\ \begin{eqnarray} \nonumber m_1 & = & (3.57 - 1.40M_{\rm{V}_1} + 0.311M_{\rm{V}_1}{}^2\\ & & \ \ - 0.027M_{\rm{V}_1}{}^3)\ \mathrm{M_\odot} \label{eqn:mass_main-seq} \end{eqnarray} For Blue-loop giants: \begin{eqnarray} B\!-\!V & \geq & (M_{\rm{V}_1} + 1.50)/5.16\\ M_{\rm{V}_1} & < & 0.6\\ m_1 & = & (-0.852M_{\rm{V}_1} + 2.81)\ \mathrm{M_\odot} \label{eqn:mass_BL} \end{eqnarray} For K giant clump stars (subgroup 1): \begin{eqnarray} B\!-\!V & \geq & (M_{\rm{V}_1} + 1.50)/5.16\\ 0.6\ \ \le & M_{\rm{V}_1} & <\ \ 0.8\\ m_1 & = & 1.8\ \mathrm{M_\odot} \label{eqn:mass_K1} \end{eqnarray} For K giant clump stars (subgroup 2): \begin{eqnarray} B\!-\!V & \geq & (M_{\rm{V}_1} + 1.50)/5.16\\ 0.8\ \ \le & M_{\rm{V}_1} & <\ \ 1.0\\ m_1 & = & (-0.852M_{\rm{V}_1} + 2.2)\ \mathrm{M_\odot} \label{eqn:mass_K2} \end{eqnarray} For Lower RGB giants: \begin{eqnarray} B\!-\!V & \geq & (M_{\rm{V}_1} + 1.50)/5.16\\ M_{\rm{V}_1} & \ge & 1.0\\ m_1 & = & 1.25\ \mathrm{M_\odot} \label{eqn:mass_lower_RGB} \end{eqnarray} \begin{figure} \includegraphics[scale=0.45]{fig2.eps} \caption{Evolutionary categories for SB2s and SB1s. These are used to determine which mass-luminosity relationship to use in estimating the masses of the primaries.} \label{fig:evolHRD} \end{figure} The resulting distributions of estimated primary masses are shown in Fig.~\ref{fig:m1}. \begin{figure} \includegraphics[scale=0.50]{fig3.eps} \caption{$m_1$ distributions determined in Section~\ref{subsect:Primary-mass distrib}.} \label{fig:m1} \end{figure} \subsection{Mass-ratio ($q = m_2/m_1$) distribution} \label{subsect:q distrib} The $q$ distribution for SB2s was found directly from the observed orbital semi-amplitudes, $K_2$ and $K_1$, the period, $P$, and the eccentricity, $e$, via: \begin{equation} \label{eqn:msin3i} m_{1,2} \sin^3 i = 1.036\cdot10^{-7} (1-e^2)^{3/2} (K_2 + K_1)^2 K_{2,1} P\\ \end{equation} where $m_{1,2}$ have units of solar masses (M$_\odot$), $K_{1,2}$ have units of km\,s$^{-1}$ and $P$ has units of days (Hilditch 2001, p46). Hence: \begin{equation} q = \frac{m_2}{m_1} = \frac{m_2 \sin^3 i}{m_1 \sin^3 i} = \frac{K_1}{K_2}. \end{equation} The $q$ distribution for SB1s however is not so easy to determine. The closest one can get to $q$ directly is the following function of the mass function, $f(m)$, and the primary mass, $m_1$: \begin{equation} \label{eqn:f(m)/m1} \frac{f(m)}{m_1} = \frac{q^3 \sin^3 i}{(1 + q)^2}. \end{equation} The mass function is calculated from the observed period, $P$, primary orbital semi-amplitude, $K_1$, and eccentricity, $e$, \begin{equation} \label{eqn:fm} f(m) = 1.036\cdot 10^{-7} \,(1-e^2)^{3/2} \,{K_1}^3 \,P \end{equation} using the same units as for equation~\ref{eqn:msin3i}. The primary mass, $m_1$, may be determined as in Section~\ref{subsect:Primary-mass distrib}. There are a number of different methods for determining the underlying $q$ distribution from the $f(m)/m_1$ distribution. Two of the most commonly used are: \begin{enumerate} \renewcommand{\theenumi}{(\arabic{enumi})} \item Richardson-Lucy iterative method (not used in this paper). 
This is a method that was first developed by Richardson (1972) for image restoration in optics and then first adapted for astronomical use by Lucy (1974) for the deconvolution of unknown distributions. It has since been used on a number of occasions for deconvolving $q$ distributions from observed distributions (e.g. Hogeveen 1991). As it is not used in this paper, no further details of this method are given here. \\ \item Monte-Carlo simulation (used in this paper in a refined form). This involves calculating the $q^3\sin^3 i/(1 + q)^2$ distributions from a variety of postulated $q$ distributions and matching them to the observed $q^3\sin^3 i/(1 + q)^2$ distribution. \end{enumerate} [For other methods see, for instance, Halbwachs (1987).] \\ While the Monte-Carlo method might be considered to be somewhat unsophisticated, especially given some of the assumptions that have had to be made in the past to make the method work (see below), we have been able to introduce a number of constraints and tests that we hope improve it somewhat and make it more robust. By contrast, methods such as Richardson-Lucy are not as direct and are also more dependent on initial assumptions. \subsubsection{A short review of previous Monte-Carlo studies} This method has been used on many occasions before, but a number of restrictions and/or assumptions have always had to be made in order to make it work. For example, Boffin et al. (1993) restricted the sample to 213 spectroscopic binaries with red giant primaries, assumed a constant mass for the primaries and also an average value of $\sin^3 i$. Trimble's study (Trimble 1990) was also limited to a subset of SBs, 164 in this case, most of which were K giants, their primary masses being determined from their spectral classes. A major problem encountered in Monte-Carlo determinations of $q$ distributions is how to take account of the unknown orbital inclination angle $i$. The effect of any assumed $i$ distribution is made more critical by the dependence of $f(m)/m_1$ on $\sin^3 i$. Typically the unknown inclinations have been accommodated by assuming an average value for $\sin^3 i$. As well as being somewhat crude, and so perhaps better suited to cases where more rigorous methods are not possible, this method has other distinct problems. Boffin et al. (1993) show that, using this method, it is possible to obtain correct-looking results for a decreasing $q$ distribution while giving a totally wrong result for the case where $f(q) \propto 1/q$. In fact, for this reason Boffin et al. (1993) discard a simple Monte-Carlo approach despite others such as Trimble (1990) and Hogeveen (1991) finding that it gives results similar to more sophisticated approaches such as the Richardson-Lucy method. Mazeh \& Goldberg (1992) also find that it produces erroneous results; in their paper they show two graphs of simulations with invented $q$ distributions and demonstrate how badly they are reconstructed using this method: an even distribution is reconstructed as a decreasing distribution with constant gradient, and a distribution increasing towards $q=1$ is reconstructed as an upside-down U-shaped distribution. They then go on to show that these results are a consequence of some of the initial assumptions of the method being invalid. Another procedure is to adopt a `model-fitting' approach where the probability of detecting a system with a certain inclination, $i$, is set by theoretical considerations. This, however, usually involves making somewhat \emph{ad hoc} theoretical assumptions.
Trimble (1974, 1990) took two approaches to the unknown $\sin^3 i$ values: the direct approach using an average value of $\sin^3 i$, and a model-fitting approach that assumed the probability, $p$, of there being an orbit of inclination, $i$, to be proportional to a certain function: $p(i) \propto \sin i$ in Trimble (1990), as suggested in Halbwachs (1987), and $p(i) \propto \sin^2 i$ in Trimble (1974). However, the average $\sin^3 i$ values were themselves determined by assuming that the probability of detecting a system was proportional to a particular function. Halbwachs (1987) used three methods: the average $\sin^3 i$ method, and two other methods not discussed here: one due to Abt \& Levy (1985) and the other due to Jaschek \& Ferrer (1972). None of these methods is entirely satisfactory. \subsubsection{Refined Monte-Carlo Method (used in this paper)} \label{subsubsect:refined_monte_carlo} In the present study we attempt to avoid some of the problems previously encountered with this method by introducing some of our own refinements, ones that are largely made possible by the use of the \emph{Hipparcos} catalogue. For instance, the subset of better-known SB2s was used to derive constraints that could then be applied to all SBs. For the masses, we were able to go a step further than previous studies by using the \emph{Hipparcos} parallaxes to determine absolute magnitudes, and from there obtain the primary masses directly from their evolutionary status determined from their position on an HRD (as discussed above in Section~\ref{subsect:Primary-mass distrib}). We thus circumvented the dual problems of having to make assumptions about the unknown masses and of being restricted to a limited sample of stars of a particular luminosity class. The study was hence opened up to stars of any evolutionary status, subject only to a limiting absolute magnitude of 4, as in the rest of our study. We also used a method of random inclinations, $i$, to avoid having to assume an average value of $\sin^3 i$. The angle $i$ varies between $0^\circ$ and $180^\circ$, but in practice systems with inclinations near $0^\circ$ or $180^\circ$ will not be easily visible, as the components of their radial velocity in our line of sight will be too small to measure reliably. Systems with $i$ near to $0^\circ$ or $180^\circ$ will thus tend to be missed in surveys. For our Monte-Carlo procedure we have therefore assumed that $i$ varies between a minimum cutoff angle $\alpha_0$ and $90^\circ$ (sufficient as $i$ only appears as $\sin^3 i$ in the equations). The probability of detecting a system is therefore zero for $i$ less than $\alpha_0$ and proportional to $\sin i$ between $\alpha_0$ and 90$^\circ$, i.e. $p(i) = 0$ for $i < \alpha_0$ and $p(i) \propto \sin i$ for $\alpha_0 \le i \le 90^\circ$. The reason for the proportionality to $\sin i$ is that the projection of the SB's orbit onto the line of sight is proportional to $\sin i$, and hence so is the probability of observing the system as an SB. This means that an initial uniformly random variate, $x$, has to be transformed to $i = \arccos (1 - x)$ (the mathematical reasoning behind the transformation is given in section 7.2 of Press et al. 1993). This is, we think, more reasonable than previous assumptions that have been made (and indeed have had to be made). A problem with the estimated mass procedure as implemented is that the masses for the main sequence primaries (that is, the greater proportion of them) are taken from half-way through their H core-burning phase.
In reality the masses will be `smudged out' to either side of these values. To simulate this when calculating the $f(m)/m_1$ distribution from random inclinations, a `smudge factor', $\zeta$, was therefore introduced, so that the simulated $f(m)/m_1$ values are multiplied by a random factor between $(1-\zeta)$ and $(1+\zeta)$. The values of $\alpha_0$ and $\zeta$ were found by comparing the $f(m)/m_1$ distribution for SB2s from two different sources: (i) from their $q$ values and random $i$ values, $\alpha_0 \le i \le 90^\circ$, for different values of $\alpha_0$ and $\zeta$ (1000 random $i$ values per $q$ value), and (ii) from their mass functions, $f(m)$, and estimated primary masses, $m_1$. The $f(m)/m_1$ distributions were plotted as histograms, 5 bins per 0.1 on the $f(m)/m_1$ axis (see Fig.~\ref{fig:alpha_0_cal} for an example of the histograms from each source for the $\alpha_0$ calibration). \begin{figure} \includegraphics[scale=0.9]{fig4.eps} \caption{Examples of histograms for the $\alpha_0$ calibration taken from the two sources given in Section~\ref{subsubsect:refined_monte_carlo}. The $\zeta$ calibration has already been performed so that the maximum values of $f(m)/m_1$ are the same from both sources.} \label{fig:alpha_0_cal} \end{figure} The main effect of the smudging out of the masses on the $f(m)/m_1$ distribution from (ii) was that the maximum value of $f(m)/m_1$ became greater than the maximum theoretical value of 0.25. (A smaller additional effect was that the slope of the distribution towards the maximum $f(m)/m_1$ value was slightly shallower than that from (i).) From a range of values of $\zeta$ from 0.0 to 0.5 in steps of 0.1, $\zeta = 0.20 \pm 0.05$ was selected as giving the same maximum value of $q^3\sin^3 i/(1 + q)^2 = f(m)/m_1$ from (i) as from (ii). Once $\zeta$ had been determined in this way, the value of $\alpha_0$ could then be found by comparing the ratios of the first bin ($f(m)/m_1$ from 0 to 0.025) to the total frequency. By this means the best match was found to be for $\alpha_0 = 20.5\pm1.0^\circ$. The determinations of $\alpha_0$ and $\zeta$ were both cases demonstrating how the better-known SB2s could be used to calibrate parameters for the Monte-Carlo approach to the SB1s. Not only were we able to validate methods before using them for the unknown SB1 $q$ distribution, but we were also able to determine fine-tuning parameters for the simulation as a whole. We did then have to assume that the values of $\alpha_0$ and $\zeta$ were the same for SB1s as for SB2s. This would be reasonable if the SB2s and SB1s have similar selection criteria and if the probabilities of detecting a spectroscopic binary as an SB2 or an SB1 are independent. The similar behaviour of the fractions of SB2s and SB1s in Fig.~\ref{fig:fracs_M_v} in Section~\ref{sect:discussion} gives us some confidence that this is indeed the case. To perform the Monte-Carlo simulation, a variety of plausible $q$ distributions were constructed by dividing the range of $q$ values from 0 to 1 into ten equal bins and choosing frequencies for each bin. The resultant $f(m)/m_1$ distributions were then calculated for random values of the inclination, $i$, from $\alpha_0$ to $90^\circ$, $p(i) \propto \sin i$, with random values of the smudge factor between $1-\zeta$ and $1+\zeta$. For each data point on the $q$ distribution a thousand random values of $i$ were used to make the resultant $f(m)/m_1$ distribution as smooth as possible. 
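As an illustration of the core of this procedure, the following minimal Python sketch (our own, not the original code; rejecting $i<\alpha_0$ is one simple way of imposing the cutoff described above) draws inclinations via $i = \arccos(1-x)$ and returns smudged $f(m)/m_1 = q^3\sin^3 i/(1+q)^2$ values for a single $q$:

\begin{verbatim}
import numpy as np

def sample_inclinations(n, alpha0_deg, rng):
    # p(i) ~ sin(i) on [0, 90] deg via i = arccos(1 - x)
    # (cf. Press et al. 1993, section 7.2); reject i < alpha_0.
    kept = []
    while len(kept) < n:
        i = np.degrees(np.arccos(1.0 - rng.random(n)))
        kept.extend(i[i >= alpha0_deg])
    return np.asarray(kept[:n])

def simulated_fm_over_m1(q, alpha0_deg=20.5, zeta=0.20,
                         n_incl=1000, seed=0):
    # f(m)/m_1 = q^3 sin^3(i) / (1 + q)^2, multiplied by a
    # uniform random 'smudge factor' in [1 - zeta, 1 + zeta]
    rng = np.random.default_rng(seed)
    i = np.radians(sample_inclinations(n_incl, alpha0_deg, rng))
    smudge = rng.uniform(1.0 - zeta, 1.0 + zeta, size=n_incl)
    return smudge * q**3 * np.sin(i)**3 / (1.0 + q)**2
\end{verbatim}

Histogramming the values returned for each occupied bin of a postulated $q$ distribution then yields the simulated $f(m)/m_1$ distribution that is compared with the observed one.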
The total frequencies for the constructed $f(m)/m_1$ distributions ranged from $2\times10^4$ to $2\times10^5$. The $f(m)/m_1$ distribution was divided in each case into 25 bins from 0 to 0.25 (the maximum value of $f(m)/m_1$ for $q\le1$ being 0.25). We then had to assume, albeit from reasonable arguments, the offset to add to $M_{\rm{V}}$ to determine $M_{\rm{V}_1}$ for the SB1s (see equation~\ref{eqn:SB1_offset} and the text afterwards). The $q$ distributions tried were systematic variations on the following `types': exponential-like functions increasing in frequency towards $q=1$ but with the rise starting at varying $q$ values, `hump-functions' with a pronounced maximum at varying $q$ values and different levels upon either side, and `step-functions' with the step at varying $q$ values and of varying size. As the resultant $f(m)/m_1$ distributions proved to be rather insensitive to many of the variations tried we settled on a step function as being a minimal solution requiring the fewest arbitrary assumptions. However, given that other authors have found a preference for a bimodal distribution of $q$, with a secondary peak at around $q=0.2$ (e.g. Staniucha 1979), we also present results from peaked $q$ distributions to see whether the resulting $f(m)/m_1$ distributions match observations better or worse than the simple step function. The test used to determine the optimal $q$ distribution was to compare two ratios: the ratio of the frequency of the first bin ($0<f(m)/m_1\le0.01$) to the second ($0.01<f(m)/m_1\le0.02$) and the ratio of the frequency of the first bin to the total for the 5th to 25th bins ($0.04<f(m)/m_1\le0.25$). The first gives a measure of the rise in frequency at the low end of the distribution while the second ratio summarises the relationship of the size of the peak (invariably near $f(m)/m_1 = 0$) to the rest of the distribution. The observed ratios for the SB1 $f(m)/m_1$ distribution were 3.250 and 1.492 respectively. The best fit to these ratios for the $q$ distributions tried was for a $q$ distribution consisting of a level portion at a value of 20 for $0<q\le0.8$ followed by a step down to zero and then continuing at zero for $0.8<q\le1$. When normalised to the total number of observed SB1s this gave frequencies of 28.3, 28.3, 28.3, 28.3, 28.3, 28.3, 28.3, 28.3, 0 and 0 for the ten equally spaced bins from $q = 0$ to 1. \begin{figure} \includegraphics[scale=0.5]{fig5.eps} \caption{Histogram of the $q$ distribution for SB2s and SB1s combined, SB2s on the bottom (filled-in) and SB1s on top (unfilled), for Monte-Carlo simulations of \emph{stepped} SB1 $q$ distributions. The solid line of the SB1s is the best fit, the broken line is the next best fit.} \label{fig:qSB2and1} \end{figure} \begin{figure} \includegraphics[scale=0.5]{fig6.eps} \caption{Histogram of $q$ distribution for SB2s and SB1s combined for Monte-Carlo simulations of \emph{peaked} SB1 $q$ distributions. SB2s are on the bottom (filled-in) and SB1s on top (unfilled). The solid line of the SB1s is the best fit, the broken line is the next best fit.} \label{fig:qSB2and1-pks} \end{figure} \begin{table} \caption{A table of the two different sources used in the paper for the histograms of combined SB2 and SB1 $q$ distributions. 
Given are the (normalised) frequencies for the 10 bins, $q = $0 to 1, and a comparison of the main features of the distributions.} \begin{tabular}{|l|c|l|} \hline \bf{Source} & \bf{Combined $q$} & \bf{General features}\\ & \bf{distribution} & \\ \hline Observed SB2 & 29, 29, 30, 32, 31, & Peak, $q = $ 0.9--1.0, \\ + stepped SB1 & 32, 36, 45, 27, 76 & Plateau, $q = $ 0--0.6,\\ Monte-Carlo & & at 0.4 of max.\\ \hline Observed SB2 & 27, 30, 34, 33, 29, & Main peak, $q = $ 0.9--1.0, \\ + peaked SB1 & 26, 27, 33, 41, 89 & Broad peak, $q = $ 0.2--0.4,\\ Monte-Carlo & & at 0.38 of max.\\ \hline \end{tabular} \label{table:comparison} \end{table} The Monte-Carlo SB1 distribution is normalised to the total number of SB1s observed in our sample, to make it directly comparable with the observed SB2 distribution. If we then simply add the normalised SB1 and observed SB2 distributions with equal weight to produce an overall distribution, we are assuming that there is no obvious bias in our sample towards observing SB2s. Looking at the statistics of SB2s and SB1s in the Batten catalogue and in the data from Griffin, there is, if anything, a bias towards SB1s: our final sample contains 61\%\ SB1s and 39\%\ SB2s. This lack of bias towards SB2s can probably be explained by the fact that most of the data requires long-term observing programmes that have only been possible where telescopes have been dedicated to the programme and the choice of which stars to observe has not been limited by the exigencies of time allocation committees. Target lists are compiled on the basis of detected variability, usually long before it is known whether the target is an SB1 or an SB2, and objects are kept on the target list until an orbit has been determined. This is certainly true for the Griffin data and seems likely to be true for the Batten catalogue as well, most of which dates from the era of long-term programmes at national or private university telescopes. Adding the Monte-Carlo SB1 $q$ distribution to the observed SB2 distribution according to this equal weight prescription gives the combined SB $q$ distribution shown for the step function in Fig.~\ref{fig:qSB2and1}. This figure also shows the effect of adding the next best-fitting SB1 $q$ distribution, the difference being slight as far as the overall shape of the total distribution is concerned. The figure clearly shows a peak towards $q=1$. Furthermore, the peak comes primarily from the SB2 contribution, derived directly from the observed data, and so is unaffected by uncertainties in the SB1 distribution. Nonetheless, the $q$ distribution is qualitatively similar if a zero offset is used in equation~\ref{eqn:SB1_offset} instead of 0.2; quantitatively, the first four SB1 bins in the best fit with zero offset have frequencies of 32.3 instead of 28.3, and all the remaining bins have frequencies of 16.1. This has the effect, if anything, of accentuating the $q=1$ peak in the overall distribution. Curiously, the zero offset case (which corresponds to the fainter component of the SB1 making no significant contribution to the total luminosity) puts some stars into the $q=0.9-1$ bin. These systems presumably have evolved primaries that are much brighter than their unevolved companions. The distribution with zero offset gives a hint of a second peak for low $q$, so it is worth looking at the best-fitting $q$ distributions with a peak. These are shown in Fig.~\ref{fig:qSB2and1-pks}. 
However, the fit to the observed $f(m)/m_1$ distribution is much poorer than for our preferred stepped $q$ distribution, the peak for the best-fit peaked distribution is not very pronounced (the best-fit histogram is not very different from the one for a stepped distribution in Fig.~\ref{fig:qSB2and1}), and the peak at $q=1$ is still dominant. It is also interesting to compare our best-fitting $q$ distribution with the one that would be predicted if we took the components at random from a steep IMF. This prediction has been made by Tout (1991), who considered an IMF steep above 1\,M$_\odot$ but flat for smaller masses. For the SB2 distribution, he took the lower mass cut-off for both components to be 1\,M$_\odot$ and found a curve that rose steeply from $q=0$ to $q=1$, similar to our SB2 distribution, although not so concentrated around $q=1$. For the SB1 distribution he took the same lower mass cut-off for the primary but chose 0.2\,M$_\odot$ as the cut-off for the secondary. This gave a $q$ distribution with a strong peak at $q=0.2$ and a curve that dropped smoothly to a low value at $q=1$ (see Figure 6 of Tout 1991). The joint distribution is thus bimodal, similar to the result found by Staniucha (1979) illustrated in Figure 1 of Tout (1991). This strongly double-peaked distribution is not consistent with our $q$ distribution, which is very flat for $q<0.7$, even for the peaked distribution that we tried (Figs~\ref{fig:qSB2and1} and \ref{fig:qSB2and1-pks}). We conclude that the components in our sample of binaries were {\em not} chosen independently and at random from the steep IMF that they seem to obey (see discussion in the next section). A similar conclusion was reached by Eggleton, Fitchett \&\ Tout (1989) for a more restricted sample of visual binaries with two bright components. Fig.~\ref{fig:fracs_M_v} demonstrates that the fractions of SB2s and SB1s of all stars (all entries in the \emph{Hipparcos} catalogue within 100\,pc and with the same absolute magnitude limit) behave rather similarly for varying limiting absolute magnitudes (the deviation at brighter $M_{\rm{V}}$ for SB2s being due to low absolute numbers). This to some degree justifies our using parameters derived from SB2s, such as the value of $\alpha_0$, for SB1s as well. Note that here, as in the next section, we are dealing with $M_{\rm{V}}$, and not $M_{\rm{V}_1}$, as it is the selection biases on the \emph{systems} that we are interested in. Given that the SB2s and SB1s seem to have essentially the same selection criteria, the two distributions will also be independent of each other, the probability of detecting one set of lines or two then depending only upon the detector resolution. In this case, detecting one set of lines will be independent of detecting the other and it is thus justifiable to add the two $q$ distributions together in the way we have done to produce Fig.~\ref{fig:qSB2and1}. \section{Discussion} \label{sect:discussion} So far, we have derived the observed period, primary mass and mass-ratio distributions for our distance-limited sample. If there are any serious selection effects, however, the true distributions could well be significantly different. Selection effects acting on the $q$ distribution have already been discussed in detail in Section~\ref{subsect:q distrib} and thus we can be confident of the reality of the peak in the $q$ distribution near $q = 1$. \begin{figure} \includegraphics[scale=0.35, angle=-90]{fig7.eps} \caption{Fraction of SBs, distance $\le 100$\,pc, for different limiting $M_{\rm{V}}$. 
Fractions are defined as in Fig.~\ref{fig:Pdistribs}.} \label{fig:fracs_M_v} \end{figure} \begin{figure} \includegraphics[scale=0.35, angle=-90]{fig8.eps} \caption{Frequency distribution of masses (in solar masses) of SB primaries and of single stars.} \label{fig:m1_SBs_nonSBs} \end{figure} \begin{figure} \includegraphics[scale=0.35, angle=-90]{fig9.eps} \caption{Fraction of SBs, $M_{\rm{V}} \le 4$, for different volumes up to 100\,pc. Fractions are defined as in Fig.~\ref{fig:Pdistribs}.} \label{fig:fracs_vol} \end{figure} There is however another obvious selection effect possibly acting on the $m_1$ distribution that has not yet been considered: the possibility of a lower detection rate for less luminous binaries. This would be reflected in a less pronounced increase in the observed $m_1$ distribution towards smaller masses compared to the true present-day mass function (PDMF). In the \emph{observed} distribution in Fig.~\ref{fig:m1_SBs_nonSBs}, $dN/d\log{m_1} \propto m_1^{-2.8}$, while for single stars it is approximately $\propto m^{-4.8}$ (Schr\"oder 1998). However, we need to know if this difference is genuine or due to a selection effect (or possibly both). To do this we look at the variation in detected SB fraction with volume (Fig.~\ref{fig:fracs_vol}) and compare it with the variation with limiting $M_{\rm{V}}$ within 100\,pc (Fig.~\ref{fig:fracs_M_v}). \begin{table} \caption{Comparing the change in fraction with limiting $M_{\rm V}$ (Fig. \ref{fig:fracs_M_v}) with the change with distance (Fig. \ref{fig:fracs_vol}). } \begin{tabular}{p{3.7cm}p{3.7cm}} \hline {\bfseries Fig. \ref{fig:fracs_M_v}:} change of fraction with $M_{\rm V}$. & {\bfseries Fig. \ref{fig:fracs_vol}:} change of fraction with volume (max. distance of sample).\\ \hline Decreasing average $m_{\rm V}$ \& decreasing average mass going from left to right. & Decreasing average $m_{\rm V}$ but {\bfseries \emph{same}} average mass going from left to right.\\ \hline Fraction falls by a factor of $\sim$ 4.3 from $M_{\rm V}$ = 1 to 4. & Fraction falls by a factor of $\sim$ 5 from 25 to 100\,pc. \\ \hline Could be due to a shallower PDMF and/or a selection effect. & Could {\bfseries \emph{only}} be due to an increasing incompleteness with increasing vol. \& decreasing $m_{\rm V}$.\\ \hline \end{tabular} \begin{tabular}{p{7.5cm}} Factor is approximately the same, therefore the decrease with $M_{\rm{V}}$ (and hence PDMF) is due to a selection effect. Therefore the PDMF of SB primaries is the same as that of single field stars.\\ \end{tabular} \label{table:selection_effects} \end{table} Table~\ref{table:selection_effects} summarizes the following argument. The variation with $M_{\rm{V}}$ shows fractions with decreasing average mass and decreasing average \emph{apparent} brightness as the absolute magnitude limit becomes fainter, while the variation with volume shows fractions which again have decreasing apparent brightness as the volume is increased but now have the \emph{same} average mass for all volumes. The variation with volume shows a decrease in fraction by a factor of $\sim 5$ from 25 to 100\,pc, while the variation with limiting $M_{\rm{V}}$ shows a decrease by a factor of $\sim 4.3$ over a corresponding range of $M_{\rm{V}} = 1$ to 4 (a factor of 4 in distance being equivalent to a difference of 3 in magnitude). 
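The equivalence quoted in parentheses follows directly from the distance modulus: at fixed luminosity,
\[
\Delta m = 5\log_{10}\left(\frac{100\,\mathrm{pc}}{25\,\mathrm{pc}}\right) \simeq 3.0\ \mathrm{mag},
\]
so the range $M_{\rm{V}} = 1$ to 4 does indeed correspond to the factor of 4 in limiting distance.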
The decrease with limiting $M_{\rm{V}}$ could again be due to a shallower PDMF or a selection effect, but the decrease with volume could only be due to the increasing incompleteness as the volume enlarges (and apparent brightness decreases). The fact that the two fractions fall off by approximately the same factor shows that the decrease with $M_{\rm{V}}$, and hence the shallower PDMF, is indeed due to a selection effect. The true PDMF and IMF of the binary primaries are therefore nearly identical to those of single field stars in the solar neighbourhood (Schr\"oder \& Pagel 2003 and references contained therein). \section*{Acknowledgments} We wish to express our special gratitude to R.F. Griffin for the generous use of his unpublished SB data which constitutes a significant fraction of the total sample studied. We also thank the \emph{Centre de Donn\'ees astronomiques de Strasbourg} (CDS) for their excellent internet database through which access was gained to the \emph{Hipparcos} and Batten catalogues, and also the referee, Chris Tout, and colleagues at the Astronomy Centre at the University of Sussex, who gave much valuable advice on improving the paper. JF wishes to acknowledge the support of a Postgraduate Assistantship from the University of Sussex.
\section{Introduction} Automatic segmentation is an important task in medical images acquired with different modalities visualising a wide range of anatomical structures. A common approach to automatic segmentation is the use of supervised voxel classification, where a classifier is trained to assign a class label to each voxel. The classical approach to supervised classification is to train a classifier that discriminates between tissue classes based on a set of hand-crafted features. In contrast to this approach, convolutional neural networks (CNNs) automatically extract features that are optimised for the classification task at hand. CNNs have been successfully applied to medical image segmentation of e.g. knee cartilage \cite{Pras13}, brain regions \cite{Breb15,Moes16}, the pancreas \cite{Roth15a}, and coronary artery calcifications \cite{Wolt15}. Each of these studies employed CNNs, but problem-specific optimisations with respect to the network architecture were still performed and networks were only trained to perform one specific task. CNNs have not only been used for processing of medical images, but also for natural images. CNN architectures designed for image classification in natural images \cite{Kriz12} have shown great generalisability for divergent tasks such as image segmentation \cite{Shel16}, object detection \cite{Girs16}, and object localisation in medical image analysis \cite{Vos16}. Hence, CNN architectures may have the flexibility to be used for different tasks with limited modifications. In this study, we first investigate the feasibility of using a single CNN \textit{architecture} for different medical image segmentation tasks in different imaging modalities visualising different anatomical structures. Secondly, we investigate the feasibility of using a single \textit{trained instance} of this CNN architecture for different segmentation tasks. Such a system would be able to perform multiple tasks in different modalities without problem-specific tailoring of the network architecture or hyperparameters. Hence, the network recognises the modality of the image, the anatomy visualised in the image, and the tissues of interest. We demonstrate this concept using three different and potentially adversarial medical image segmentation problems: segmentation of six brain tissues in brain MRI, pectoral muscle segmentation in breast MRI, and coronary artery segmentation in cardiac CT angiography (CTA). \begin{figure}[t] \includegraphics[width=\textwidth]{figures/architecture_NEW.pdf} \caption{Example 51$\times$51 triplanar input patches (\textit{left}). CNN architecture with 25 shared convolution layers, 2 fully connected layers and an output layer with at most 9 classes, including a background class common among tasks (\textit{centre}). 
Output classes included in each training experiment (\textit{right}).} \label{fig:architecture} \end{figure} \begin{figure}[th] \centering \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/dicecb} \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/dicebgt}\\ \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/diceven} \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/dicewm}\\ \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/dicebs} \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/dicecgm} \\ \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/dicebreast} \includegraphics[trim=5mm 0mm 15mm 5mm,clip,width=0.49\textwidth]{figures/diceheart} \caption{Learning curves showing Dice coefficients for tissue segmentation in brain MRI (\textit{top three rows}), breast MRI (\textit{bottom left}), and cardiac CTA (\textit{bottom right}), reported at 1000 mini-batch intervals for experiments including that task. The line colours correspond to the training experiments in Fig. \ref{fig:architecture}.} \label{fig:evaluation} \end{figure} \section{Data} \paragraph{Brain MRI --} 34 T\textsubscript{1}-weighted MR brain images from the OASIS project \cite{Marc07} were acquired on a Siemens Vision 1.5 T scanner, as provided by the MICCAI challenge on multi-atlas labelling \cite{Land12}\footnote{\url{https://masi.vuse.vanderbilt.edu/workshop2012}}. The images were acquired with voxel sizes of 1.0$\times$1.0$\times$1.25 mm\textsuperscript{3} and resampled to isotropic voxel sizes of 1.0$\times$1.0$\times$1.0 mm\textsuperscript{3}. The images were manually segmented, in the coronal plane, into 134 classes that were, for the purpose of this paper, combined into six commonly used tissue classes: white matter, cortical grey matter, basal ganglia and thalami, ventricular cerebrospinal fluid, cerebellum, and brain stem. \paragraph{Breast MRI --}34 T\textsubscript{1}-weighted MR breast images were acquired on a Siemens Magnetom 1.5 T scanner with a dedicated double breast array coil \cite{Veld15}. The images were acquired with in-plane voxel sizes between 1.21 and 1.35 mm and slice thicknesses between 1.35 and 1.69 mm. All images were resampled to isotropic voxel sizes corresponding to their in-plane voxel size. The pectoral muscle was manually segmented in the axial plane by contour drawing. \paragraph{Cardiac CTA --} Ten cardiac CTA scans were acquired on a 256-detector row Philips Brilliance iCT scanner using 120 kVp and 200-300 mAs, with ECG-triggering and contrast enhancement. The reconstructed images had between 0.4 and 0.5 mm in-plane voxel sizes and 0.45/0.90 mm slice spacing/thickness. All images were resampled to isotropic 0.45$\times$0.45$\times$0.45 mm\textsuperscript{3} voxel size. To set a manual reference standard, a human observer traversed the scan in the craniocaudal direction and painted voxels in the main coronary arteries and their branches in the axial plane. \section{Method} All voxels in the images were labelled by a CNN using seven different training experiments (Fig. \ref{fig:architecture}). \subsection{CNN Architecture} For each voxel, three orthogonal (axial, sagittal, and coronal) patches of 51$\times$51 voxels centred at the target voxel were extracted. For each of these three patches, features were determined using a deep stack of convolution layers. 
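As an aside, a minimal NumPy sketch of this triplanar patch extraction could look as follows (the axis ordering and the zero-padding at the volume borders are our assumptions for illustration, not details specified above):

\begin{verbatim}
import numpy as np

def triplanar_patches(volume, x, y, z, size=51):
    # Extract three orthogonal size x size patches centred on
    # voxel (x, y, z); borders are zero-padded for simplicity.
    h = size // 2
    pad = np.pad(volume, h, mode='constant')
    x, y, z = x + h, y + h, z + h   # indices in the padded volume
    axial    = pad[x - h:x + h + 1, y - h:y + h + 1, z]
    sagittal = pad[x, y - h:y + h + 1, z - h:z + h + 1]
    coronal  = pad[x - h:x + h + 1, y, z - h:z + h + 1]
    return axial, sagittal, coronal
\end{verbatim}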
Each convolution layer contained 32 small (3$\times$3 voxels) convolution kernels for a total of 25 convolution layers \cite{Simo14}. To prevent over- or undersegmentation of structures due to translational invariance, no subsampling layers were used. To reduce the number of trainable parameters in the network and hence the risk of over-fitting, the same stack of convolutional layers was used for the axial, sagittal and coronal patches. The output of the convolution layers was 32 features for each of the three orthogonal input patches, hence 96 features in total. These features were input to two subsequent fully connected layers, each with 192 nodes. The second fully connected layer was connected to a softmax classification layer. Depending on the tasks of the network, this layer contained 2, 3, 7, 8 or 9 output nodes. The fully connected layers were implemented as 1$\times$1 voxel convolutions, to allow fast processing of arbitrarily sized images. Exponential linear units \cite{Clev16} were used for all non-linear activation functions. Batch normalisation \cite{Ioff15} was used on all layers and dropout \cite{Sriv14} was used on the fully connected layers. \begin{figure}[!th] \centering \fcolorbox{black}{black}{\includegraphics[trim=5mm 15mm 25mm 10mm,clip,width=0.27\textwidth]{figures/brain-image-1101-137-patch}} \fcolorbox{black}{black}{\includegraphics[trim=5mm 15mm 25mm 10mm,clip,width=0.27\textwidth]{figures/brain-reference-1101-137}} \fcolorbox{red}{black}{\includegraphics[trim=5mm 15mm 25mm 10mm,clip,width=0.27\textwidth]{figures/brain-onlybrain-1101-137}}\\ \fcolorbox{blue}{black}{\includegraphics[trim=5mm 15mm 25mm 10mm,clip,width=0.27\textwidth]{figures/brain-brainbreast-1101-137}} \fcolorbox{green}{black}{\includegraphics[trim=5mm 15mm 25mm 10mm,clip,width=0.27\textwidth]{figures/brain-brainheart-1101-137}} \fcolorbox{purple}{black}{\includegraphics[trim=5mm 15mm 25mm 10mm,clip,width=0.27\textwidth]{figures/brain-all-1101-137}}\\ \fcolorbox{black}{black}{\includegraphics[trim=12mm 0mm 15mm 0mm,clip,width=0.27\textwidth]{figures/breast-image-31-143ax-patch}} \fcolorbox{black}{black}{\includegraphics[trim=12mm 0mm 15mm 0mm,clip,width=0.27\textwidth]{figures/breast-reference-31-143ax}} \fcolorbox{pink}{black}{\includegraphics[trim=12mm 0mm 15mm 0mm,clip,width=0.27\textwidth]{figures/breast-onlybreast-31-143ax}}\\ \fcolorbox{yellow}{black}{\includegraphics[trim=12mm 0mm 15mm 0mm,clip,width=0.27\textwidth]{figures/breast-heartbreast-31-143ax}} \fcolorbox{blue}{black}{\includegraphics[trim=12mm 0mm 15mm 0mm,clip,width=0.27\textwidth]{figures/breast-brainbreast-31-143ax}} \fcolorbox{purple}{black}{\includegraphics[trim=12mm 0mm 15mm 0mm,clip,width=0.27\textwidth]{figures/breast-all-31-143ax}}\\ \fcolorbox{black}{black}{\includegraphics[trim=30mm 40mm 30mm 20mm,clip,width=0.27\textwidth]{figures/heart-image-7-74-patch}} \fcolorbox{black}{black}{\includegraphics[trim=30mm 40mm 30mm 20mm,clip,width=0.27\textwidth]{figures/heart-reference-7-74}} \fcolorbox{orange}{black}{\includegraphics[trim=30mm 40mm 30mm 20mm,clip,width=0.27\textwidth]{figures/heart-onlyheart-7-74}}\\ \fcolorbox{yellow}{black}{\includegraphics[trim=30mm 40mm 30mm 20mm,clip,width=0.27\textwidth]{figures/heart-heartbreast-7-74}} \fcolorbox{green}{black}{\includegraphics[trim=30mm 40mm 30mm 20mm,clip,width=0.27\textwidth]{figures/heart-brainheart-7-74}} \fcolorbox{purple}{black}{\includegraphics[trim=30mm 40mm 30mm 20mm,clip,width=0.27\textwidth]{figures/heart-all-7-74}}\\ \caption{Example segmentations for (\textit{top 
to bottom}) brain MRI, breast MRI, and cardiac CTA. Shown for each task: (\textit{left to right, first row}) image with an input patch as shown in Fig. \ref{fig:architecture}, reference standard, segmentation by task-specific training, (\textit{left to right, second row}) two segmentations by networks with an additional task, segmentation by a network combining all tasks. The coloured borders correspond to the training experiments in Fig. \ref{fig:architecture} and Fig. \ref{fig:evaluation}.} \label{fig:segmentations} \end{figure} \subsection{Training Experiments} The same model was trained for each combination of the three tasks. In total seven training experiments were performed (Fig. \ref{fig:architecture}, right): three networks were trained to perform one task (Experiments 1--3), three networks were trained to perform two tasks (Experiments 4--6), and one network was trained to perform three tasks (Experiment 7). The number of output nodes in the CNN was modified accordingly. In each experiment, background classes of the target tasks were merged into one class. Each CNN was trained using mini-batch learning. A mini-batch contained 210 samples, equally balanced over the tasks of the network. For each task, the training samples were randomly drawn from all training images, balanced over the task-specific classes. All voxels with image intensity $>$ 0 were considered samples. The network parameters were optimized using Adam stochastic optimisation \cite{King15} with categorical cross-entropy as the cost-function. \section{Experiments and Results} The data for brain MRI, breast MRI and cardiac CTA were split into 14/20, 14/20 and 6/4 training/test images, respectively. Four results were obtained for each task: one with a network trained for only that task, two with networks trained for that task and an additional task, and one with a network trained for all tasks together. Each network was trained with 25000 mini-batches per task. No post-processing steps other than probability thresholding for evaluation purposes were performed. The results are presented on the full test set. In brain MRI, the voxel class labels were determined by the highest class activation. The performance was evaluated per brain tissue type, using the Dice coefficient between the manual and automatic segmentations. In breast MRI and cardiac CTA, precision-recall curve analysis was performed to identify the optimal operating point, defined, for each experiment, as the highest Dice coefficient over the whole test set. The thresholds at this optimal operating point were then applied to all images. Fig. \ref{fig:evaluation} shows the results of the described quantitative analysis, performed at intervals of 1000 mini-batches per task. As the networks learned, the obtained Dice coefficients increased and the stability of the results improved. For each segmentation task, the learning curves were similar for all experiments. Nevertheless, slight differences were visible between the obtained learning curves. To assess whether these differences were systematic or caused by the stochastic nature of CNN training, the training experiment using only brain MR data (Experiment 1) was repeated (dashed line in Fig. \ref{fig:evaluation}), showing similar inter-experiment variation. Fig. \ref{fig:segmentations} shows a visual comparison of results obtained for the three different tasks. For all three tasks, all four networks were able to accurately segment the target tissues. Confusion between tasks was very low. 
For the network trained with three tasks, the median percentage of voxels per scan labelled with a class alien to the target (e.g. cortical grey matter identified in breast MR) was $<0.0005\%$ for all tasks. \section{Discussion and Conclusions} The results demonstrate that a single CNN architecture can be used to train CNNs able to obtain accurate results in images from different modalities, visualising different anatomy. Moreover, it is possible to train a single CNN instance that can not only segment multiple tissue classes in a single modality visualising a single anatomical structure, but also multiple classes over multiple modalities visualising multiple anatomical structures. In all experiments, a fixed CNN architecture with triplanar orthogonal input patches was used. We have strived to utilise recent advances in deep learning such as batch normalisation \cite{Ioff15}, Adam stochastic optimisation \cite{King15}, exponential linear units \cite{Clev16}, and very deep networks with small convolution kernels \cite{Simo14}. Furthermore, the implementation of fully connected layers as 1$\times$1 convolution layers and the omission of downsampling layers allowed fast processing of whole images compared with more time-consuming patch-based scanning \cite{Pras13,Roth15a,Wolt15,Moes16}. The ability of the CNN to adapt to different tasks suggests that small architectural changes are unlikely to have a large effect on the performance. Volumetric 3D input patches might result in increased performance, but would require a high computational load due to the increased size of the network parameter space. The results for brain segmentation are comparable with previously published results \cite{Moes16}. Due to differences in image acquisition and patient population, the obtained results for pectoral muscle segmentation and coronary artery extraction cannot be directly compared to results reported in other studies. Nevertheless, these results appear to be in line with previously published studies \cite{Gube12,Zhen11}. No post-processing other than probability thresholding for evaluation purposes was applied. The output probabilities may be further processed, or directly used as input for further analysis, depending on the application. Including multiple tasks in the training procedure resulted in a segmentation performance equivalent to that of a network trained specifically for the task (Fig. \ref{fig:evaluation}). Similarities between the tasks, e.g. presence of the pectoral muscle in both breast MR and cardiac CTA, or similar appearance of brain and breast tissue in T\textsubscript{1}-weighted MRI, led to very limited confusion. In future work, we will further investigate the capacity of the current architecture with more data and segmentation tasks, and investigate to what extent the representations within the CNN are shared between tasks. \FloatBarrier \bibliographystyle{splncs03}
\section{} There are two essential differences between nuclear reactions caused by binary and triple collisions. These differences can be classified as either kinematical or dynamical. The former are related to the selection rules which prevail in the two-- and three--body reactions. Some binary reactions are forbidden by conservation laws, e.g., conservation of angular momentum, parity, isospin etc. However, such reactions could take place in the presence of a third particle, as the three--body mechanism is kinematically less restricted. This may play a significant role in nuclear fusion in stellar plasma, where the probability of triple collisions is quite high due to the high density of matter. The dynamical differences stem from the interdependence of different binary processes. For example, the ${\rm e}+ {}^7{\rm Be}$ and ${\rm p}+{}^7{\rm Be}$ processes become dependent when the triple collisions ${\rm e}+{\rm p}+{}^7{\rm Be}$ in plasma are considered, which leads to a completely different physical picture. In the present work we discuss the reaction rates of the following processes \begin{eqnarray} \rm e+p+d & \rightarrow & \rm {}^3He+e\ , \\ \rm e+p+{}^7Be &\rightarrow& \rm {}^8B +e\ , \\ \rm e+{}^3He+{}^4He &\rightarrow& \rm {}^7Be+e\ . \end{eqnarray} According to the standard model of the sun, the $^7$Be nucleus produced in the pp--cycle undergoes transmutation via two different two-body reactions \begin{eqnarray} \label{eBeLin} {\rm e}+{}^7{\rm Be} &\to &\ {}^7{\rm Li}+\nu\ ,\\ \label{pBeBg} {\rm p}+{}^7{\rm Be} & \to & \ {}^8{\rm B}+\gamma\ . \end{eqnarray} Due to the high density of the plasma, however, electrons and protons can always be found in the vicinity of $^7$Be and therefore the initial three--body quantum state $ \left|{\rm p+e} + {}^7{\rm Be}\right\rangle\ $ could initiate the following reactions \begin{equation} \label{3body} {\rm p+e} + {}^7{\rm Be}\longrightarrow\left\{ \begin{array}{l} ^7{\rm Li+p}+\nu\\ ^8{\rm B}+\gamma+{\rm e}\\ ^8{\rm B+e}\\ \end{array} \right.\ . \end{equation} In the first two reactions the proton and electron are spectators, while in the third all three initial particles are involved. It is obvious that the three processes (\ref{3body}) are not independent, for they are all generated by the same initial state. As a result, the neutrino fluxes related to the $^8$B and $^7$Be nuclei are not independent either. The question then arises of how strong the mutual dependence of the processes~(\ref{3body}) is. This question can be formulated in an alternative form: to what extent can the two--body sub-systems involved in (\ref{3body}) be considered independent? Formally, such independence would imply that the total wave function of the initial state could be represented by a direct product of the wave functions of the subsystems e+$^7$Be, p+${}^7$Be, and p+e, \begin{equation} \label{product} \left|{\rm p+e} + {}^7{\rm Be}\right\rangle\approx \left|{\rm e}+^7{\rm Be}\right\rangle\otimes \left|{\rm p}+^7{\rm Be}\right\rangle\otimes \left|{\rm p+e}\right\rangle\ . \end{equation} The possibility of such a factorization for three charged particles was investigated in Refs. [1, 2], where it was shown that the total wave function reduces to a product of the type (\ref{product}) only when all three particles are far away from each other. Such a configuration, however, cannot contribute much to the transitions (\ref{3body}), for they are caused by the strong and weak interactions, which vanish very fast at large distances. 
Therefore the treatment of the reactions (\ref{3body}) as independent processes requires further investigation. \section{} The reaction rates of the transitions (1)--(3) can be calculated via two approximations. In the first one the motion of the nuclei involved is treated adiabatically. This approximation should be reliable due to the mass ratio $m_e/m_A \ll 1$ [3]. In the second approximation the electron--nucleus Coulomb interaction $V_e$ can be treated perturbatively. This is based on the fact that the corresponding Sommerfeld parameters at solar temperatures are small. Furthermore, one should bear in mind that in the center of the sun the average Coulomb interaction $\langle V_e\rangle$ is of the order of 10--20\,eV, which is much smaller than the average kinetic energy ($\sim 1.5$ keV) of the plasma particles. Therefore the higher-order contributions of $V_e$ can be ignored and the reaction rate can be written as \begin{equation} \label{calr} {\cal R}_3(\vec k , \vec k') =2\pi \delta( E_f-E_i)\left|\left\langle\Psi_f ,\vec k' \right| V_e \left| \Psi_i, \vec k\right\rangle \right|^2n_{\rm e} n_A n_B\,. \end{equation} Here $\vec k$ and $\vec k'$ are the electron initial and final momenta, $n_e$ and $n_{A,B}$ the densities of electrons and nuclei, respectively, while $\Psi_i$ and $\Psi_f$ are the relevant initial and final nuclear wave functions. The initial nuclear wave functions for all processes considered, Eqs. (1)--(3), were calculated by solving the Schr\"odinger equation using phenomenological short-range potentials and screened Coulomb interactions. The nuclear final states for the processes (2) and (3) were calculated using the two-cluster models $\rm p+{}^7Be$ and $\rm {}^3He + {}^4He$. The wave function of $\rm ^3He$ was obtained using a Faddeev-type integro-differential equation [4]. The results obtained for the ratio \begin{equation} \label{La} \Lambda_i(T)= \frac{{\cal R}^{(i)}_3(T)}{{\cal R}^{(i)}_2(T) } \end{equation} for all three processes considered are presented in Table 1. In the above, ${\cal R}_3(T)$ is the reaction rate (\ref{calr}) averaged over the Maxwell distribution of the collision momenta at temperature $T$, and ${\cal R}_2(T)$ is the corresponding value for the binary process. The density of electrons used is $n_e=100 N_A\;\;\mbox{\rm cm}^{-3}$, where $N_A$ is the Avogadro number.\\[0.5cm] Table 1.\\ {\it Ratio of triple to binary reaction rates as a function of the temperature (in $10^6$\,K) of the star.} \begin{center} \begin{tabular}{|rc|rc|rc|} \hline $T_6\phantom{--}$ & $\Lambda_1(T)\times 10^3$ & $T_6\phantom{--}$ & $\Lambda_2(T)\times 10^3$ & $T_6\phantom{--}$ & $\Lambda_3(T)\times 10^3$ \\ \hline $1\phantom{--} $ & $0.218$& $14\phantom{--}$ & $0.106$ & $10\phantom{--}$& $0.098$ \\ $5\phantom{--} $ & $0.147$& $15\phantom{--}$ & $0.102$ & $12\phantom{--}$& $0.108$ \\ $10\phantom{--} $ & $0.129$& $16\phantom{--}$ & $0.097$ & $14\phantom{--}$& $0.139$ \\ $20\phantom{--} $ & $0.107$& $18\phantom{--}$ & $0.094$ & $16\phantom{--}$& $0.158$ \\ $50\phantom{--} $ & $0.078$& $20\phantom{--}$ & $0.089$ & $18\phantom{--}$& $0.176$ \\ $100\phantom{--}$ & $0.055$& & & $20\phantom{--}$& $0.193$ \\ \hline \end{tabular} \end{center} \vspace{0.5cm} The binary and triple collision reaction rates increase with increasing temperature. However, for the first two processes, (1) and (2), the triple reaction rate grows more slowly than the binary one. As a result, the ratios $\Lambda_i$ in the first two columns of Table 1 decrease with increasing temperature. 
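The entries of Table 1 rest on the thermal averaging mentioned above. Purely as a schematic illustration of this averaging (the energy dependence below is invented and has no physical significance; the true rates require the matrix elements in Eq.~(\ref{calr})), one might write in Python:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def maxwell_energy_pdf(E, kT):
    # Maxwell-Boltzmann distribution of collision energies
    return 2.0 * np.sqrt(E / np.pi) * kT**(-1.5) * np.exp(-E / kT)

def averaged_rate(rate_of_E, kT):
    # <R>(T): the rate weighted with the Maxwell distribution
    value, _ = quad(lambda E: rate_of_E(E) * maxwell_energy_pdf(E, kT),
                    0.0, np.inf)
    return value

toy_rate = lambda E: E * np.exp(-1.0 / np.sqrt(E + 1e-12))  # toy model
for kT in (0.5, 1.0, 1.5):   # arbitrary energy units
    print(kT, averaged_rate(toy_rate, kT))
\end{verbatim}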
As can be seen in Table 1, the relative contributions of the triple processes to the fusion rates are rather small. The temperature dependences of the binary and triple reaction rates differ slightly. This is due to the fact that a Coulomb barrier exists only between two particles in all the systems considered. In contrast, one can expect a different temperature dependence in cases where all three particles are positively charged, as, for example, in the system $\rm p+p+{}^7Be$. Two of the authors (VBB and SAR) are grateful for support received from the Russian Foundation for Basic Research in the form of Grant No. 96 02 18678.\\ {\Large References} [1] G. Garibotti and J. E. Miraglia, Phys. Rev. A {\bf 21}, 572 (1980). [2] M. Brauner, J. S. Briggs, and H. Klar, J. Phys. B {\bf 22}, 2265 (1989). [3] S.~A.~Rakityansky, S.~A.~Sofianos, L.~L.~Howell, M.~Braun, and V.~B.~Belyaev,\\ \phantom{---------}Nucl.~Phys. A~{\bf 613}, 132 (1997). [4] M.~Fabre~de~la~Ripelle, H.~Fiedeldey, and S.~A.~Sofianos,\\ \phantom{---------}Phys.~Rev. C~{\bf 38}, 449 (1988). \end{document}
\section{Introduction}\label{1} Let $A\supset\mathbb{Z}$ be a commutative domain which is finitely generated over $\mathbb{Z}$ as a $\mathbb{Z}$-algebra. As usual, we denote by $A^*$ the unit group of $A$. We consider equations \begin{equation}\label{1.1} a\varepsilon +b\eta =c\ \ \mbox{in } \varepsilon ,\eta\in A^* \end{equation} where $a,b,c$ are non-zero elements of $A$. Such equations, usually called \emph{unit equations}, have a great number of applications. For instance, the ring of $S$-integers in an algebraic number field is finitely generated over $\mathbb{Z}$, so the $S$-unit equation in two unknowns is a special case of \eqref{1.1}. In this paper, we consider equations \eqref{1.1} in the general case, where $A$ may contain transcendental elements, too. Siegel \cite[1921]{Sie21} proved that \eqref{1.1} has only finitely many solutions in the case that $A$ is the ring of integers of a number field, and Mahler \cite[1933]{Mah33} did this in the case that $A =\mathbb{Z} [1/p_1\cdots p_t]$ for certain primes $p_1,\ldots , p_t$. For $S$-unit equations over number fields, the finiteness of the number of solutions of \eqref{1.1} follows from work of Parry \cite[1950]{Par50}. Finally, Lang \cite[1960]{Lang60} proved for arbitrary finitely generated domains $A$ that \eqref{1.1} has only finitely many solutions. The proofs of all these results are ineffective. Baker \cite[1968]{Bak68} and Coates \cite[1968/69]{Coa68/69} implicitly proved effective finiteness results for certain special ($S$-)unit equations. Later, Gy\H{o}ry \cite[1974]{Gy74}, \cite[1979]{Gy79} showed, in the case that $A$ is the ring of $S$-integers in a number field, that the solutions of \eqref{1.1} can be determined effectively in principle. His proof is based on estimates for linear forms in ordinary and $p$-adic logarithms of algebraic numbers. In his papers \cite[1983]{Gy83} and \cite[1984]{Gy84}, Gy\H{o}ry introduced an effective specialization argument, and he used this to establish effective finiteness results for decomposable form equations and discriminant equations over a wide class of finitely generated domains $A$ containing both algebraic and transcendental elements, whose elements have some ``good'' effective representations. His results contain as a special case an effective finiteness result for equations \eqref{1.1} over these domains. Gy\H{o}ry's method of proof could not be extended to arbitrary finitely generated domains $A$. It is the purpose of this paper to prove an effective finiteness result for \eqref{1.1} over arbitrary finitely generated domains $A$. In fact, we give a quantitative statement, with effective upper bounds for the ``sizes'' of the solutions $\varepsilon, \eta$. The main new ingredient of our proof is an effective result by Aschenbrenner \cite[2004]{Asc04} on systems of linear equations over polynomial rings over $\mathbb{Z}$. We introduce the notation used in our theorems. Let again $A\supset \mathbb{Z}$ be a commutative domain which is finitely generated over $\mathbb{Z}$, say $A=\mathbb{Z} [z_1,\ldots , z_r]$. Let $I$ be the ideal of polynomials $f\in\mathbb{Z} [X_1,\ldots , X_r]$ such that $f(z_1,\ldots , z_r)=0$. Then $I$ is finitely generated, hence \begin{equation}\label{1.2} A\cong \mathbb{Z} [X_1,\ldots , X_r]/I,\ \ I=(f_1,\ldots , f_m) \end{equation} for some finite set of polynomials $f_1,\ldots , f_m\in\mathbb{Z} [X_1,\ldots , X_r]$. We observe here that given $f_1,\ldots , f_m$, it can be checked effectively whether $A$ is a domain containing $\mathbb{Z}$. 
Indeed, this holds if and only if $I$ is a prime ideal of $\mathbb{Z} [X_1,\ldots , X_r]$ with $I\cap\mathbb{Z} =(0)$, and the latter can be checked effectively for instance using Aschenbrenner \cite[Prop. 4.10, Cor. 3.5]{Asc04}. Denote by $K$ the quotient field of $A$. For $\alpha \in A$, we call $f$ a \emph{representative} for $\alpha$, or say that $f$ represents $\alpha$ if $f\in\mathbb{Z} [X_1,\ldots , X_r]$ and $\alpha =f(z_1,\ldots , z_r)$. Further, for $\alpha\in K$, we call $(f,g)$ a \emph{pair of representatives} for $\alpha$ or say that $(f,g)$ represents $\alpha$ if $f,g\in\mathbb{Z} [X_1,\ldots , X_r]$, $g\not\in I$ and $\alpha =f(z_1,\ldots , z_r)/g(z_1,\ldots , z_r)$. We say that $\alpha\in A$ (resp. $\alpha\in K$) is given if a representative (resp. pair of representatives) for $\alpha$ is given. To do explicit computations in $A$ and $K$, one needs an \emph{ideal membership algorithm} for $\mathbb{Z} [X_1,\ldots , X_r]$, that is, an algorithm which decides for any given polynomial and ideal of $\mathbb{Z} [X_1,\ldots , X_r]$ whether the polynomial belongs to the ideal. In the literature there are various such algorithms; we mention only the algorithm of Simmons \cite[1970]{Sim70}, and the more precise algorithm of Aschenbrenner \cite[2004]{Asc04} which plays an important role in our paper; see Proposition \ref{le:2.5} below for a statement of his result. One can perform arithmetic operations on $A$ and $K$ by using representatives. Further, one can decide effectively whether two polynomials $f_1,f_2$ represent the same element of $A$, i.e., $f_1-f_2\in I$, or whether two pairs of polynomials $(f_1,g_1),(f_2,g_2)$ represent the same element of $K$, i.e., $f_1g_2-f_2g_1\in I$, by using one of the ideal membership algorithms mentioned above. The \emph{degree} $\deg f$ of a polynomial $f\in\mathbb{Z} [X_1,\ldots , X_r]$ is by definition its total degree. By the \emph{logarithmic height} $h(f)$ of $f$ we mean the logarithm of the maximum of the absolute values of its coefficients. The \emph{size} of $f$ is defined by $s(f):=\max (1,\deg f,h(f))$. Clearly, there are only finitely many polynomials in $\mathbb{Z} [X_1,\ldots , X_r]$ of size below a given bound, and these can be determined effectively. \begin{theorem}\label{th:1.1} Assume that $r\geq 1$. Let $\widetilde{a},\widetilde{b},\widetilde{c}$ be representatives for $a,b,c$, respectively. Assume that $f_1,\ldots , f_m$ and $\widetilde{a},\widetilde{b},\widetilde{c}$ all have degree at most $d$ and logarithmic height at most $h$, where $d\geq 1$, $h\geq 1$. Then for each solution $(\varepsilon ,\eta )$ of \eqref{1.1}, there are representatives $\widetilde{\varepsilon},\widetilde{\varepsilon}',\widetilde{\eta},\widetilde{\eta}'$ of $\varepsilon,\varepsilon^{-1},\eta,\eta^{-1}$, respectively, such that \[ s(\widetilde{\varepsilon}),\, s(\widetilde{\varepsilon}'),\, s(\widetilde{\eta}),\, s(\widetilde{\eta}') \leq \exp \Big( (2d)^{c_1^r}(h+1)\Big), \] where $c_1$ is an effectively computable absolute constant $>1$. \end{theorem} By a theorem of Roquette \cite[1958]{Roq58}, the unit group of a domain finitely generated over $\mathbb{Z}$ is finitely generated. In the case that $A=O_S$ is the ring of $S$-integers of a number field it is possible to determine effectively a system of generators for $A^*$, and this was used by Gy\H{o}ry in his effective finiteness proof for \eqref{1.1} with $A=O_S$. However, no general algorithm is known to determine a system of generators for the unit group of an arbitrary finitely generated domain $A$. 
In our proof of Theorem \ref{th:1.1}, we do not need any information on the generators of $A^*$. By combining Theorem \ref{th:1.1} with an ideal membership algorithm for \\ $\mathbb{Z} [X_1,\ldots , X_r]$, one easily deduces the following: \begin{corollary}\label{th:1.2} Given $f_1,\ldots , f_m, a,b,c$, the solutions of \eqref{1.1} can be determined effectively. \end{corollary} \begin{proof} Clearly, $\varepsilon ,\eta$ is a solution of \eqref{1.1} if and only if there are polynomials $\widetilde{\varepsilon },\widetilde{\varepsilon}',\widetilde{\eta},\widetilde{\eta}'\in\mathbb{Z} [X_1,\ldots , X_r]$ such that $\widetilde{\varepsilon},\widetilde{\eta}$ represent $\varepsilon ,\eta$, and \begin{equation}\label{1.3} \widetilde{a}\cdot \widetilde{\varepsilon}+\widetilde{b}\cdot \widetilde{\eta}-\widetilde{c},\ \, \widetilde{\varepsilon}\cdot \widetilde{\varepsilon}' -1,\ \, \widetilde{\eta}\cdot\widetilde{\eta}' -1\in I. \end{equation} Thus, we obtain all solutions of \eqref{1.1} by checking, for each quadruple of polynomials $\widetilde{\varepsilon },\widetilde{\varepsilon}',\widetilde{\eta},\widetilde{\eta}'\in\mathbb{Z} [X_1,\ldots , X_r]$ of size at most $\exp \Big( (2d)^{c_1^r}(h+1)\Big)$ whether it satisfies \eqref{1.3}. Further, using the ideal membership algorithm, it can be checked effectively whether two different pairs $(\widetilde{\varepsilon},\widetilde{\eta})$ represent the same solution of \eqref{1.1}. Thus, we can make a list of representatives, one for each solution of \eqref{1.1}. \end{proof} Let $\gamma_1,\ldots ,\gamma_s$ be multiplicatively independent elements of $K^*$ (the multiplicative independence of $\gamma_1,\ldots ,\gamma_s$ can be checked effectively for instance using Lemma \ref{le:7.2} below). Let again $a,b,c$ be non-zero elements of $A$ and consider the equation \begin{equation}\label{1.4} a\gamma_1^{v_1}\cdots\gamma_s^{v_s}+b\gamma_1^{w_1}\cdots\gamma_s^{w_s}=c\ \ \mbox{in } v_1,\ldots , v_s,\, w_1,\ldots , w_s\in\mathbb{Z} . \end{equation} \begin{theorem}\label{th:1.3} Let $\widetilde{a},\widetilde{b},\widetilde{c}$ be representatives for $a,b,c$ and for $i=1,\ldots , s$, let $(g_{i1},g_{i2})$ be a pair of representatives for $\gamma_i$. Suppose that $f_1,\ldots , f_m$, $\widetilde{a},\widetilde{b},\widetilde{c}$, and $g_{i1},g_{i2}$ ($i=1,\ldots , s$) all have degree at most $d$ and logarithmic height at most $h$, where $d\geq 1$, $h\geq 1$. Then for each solution $(v_1,\ldots , w_s)$ of \eqref{1.4} we have \[ \max \big(|v_1|,\ldots , |v_s|,\, |w_1|,\ldots , |w_s|\big)\leq \exp \Big( (2d)^{c_2^{r+s}}(h+1)\Big), \] where $c_2$ is an effectively computable absolute constant $>1$. \end{theorem} \noindent An immediate consequence of Theorem \ref{th:1.3} is that for given $f_1,\ldots , f_m$, $a,b,c$ and $\gamma_1,\ldots , \gamma_s$, the solutions of \eqref{1.4} can be determined effectively. Since every domain finitely generated over $\mathbb{Z}$ has a finitely generated unit group, equation \eqref{1.1} may be viewed as a special case of \eqref{1.4}. But since no general effective algorithm is known to find a finite system of generators for the unit group of a finitely generated domain, we cannot deduce an effective result for \eqref{1.1} from Theorem \ref{th:1.3}. 
In fact, we argue reversely, and prove Theorem \ref{th:1.3} by combining Theorem \ref{th:1.1} with an effective result on Diophantine equations of the type $\gamma_1^{v_1}\cdots\gamma_s^{v_s}=\gamma_0$ in integers $v_1,\ldots , v_s$, where $\gamma_1,\ldots ,\gamma_s,\gamma_0\in K^*$ (see Corollary \ref{co:7.3} below). The idea of the proof of Theorem \ref{th:1.1} is roughly as follows. We first estimate the degrees of the representatives of $\varepsilon ,\eta$ using Mason's effective result \cite[1983]{Mas83} on two term $S$-unit equations over function fields. Next, we apply many different specialization maps $A\to \overline{\mathbb{Q}}$ to \eqref{1.1} and obtain in this manner a large system of $S$-unit equations over number fields. By applying an existing effective finiteness result for such $S$-unit equations (e.g., Gy\H{o}ry and Yu \cite[2006]{GyYu06}) we collect enough information to retrieve an effective upper bound for the heights of the representatives of $\varepsilon ,\eta$. In our proof, we apply the specialization maps on a domain $B\supset A$ of a special type which can be dealt with more easily. In the construction of $B$, we use an effective result of Seidenberg \cite[1974]{Sei74} on systems of linear equations over polynomial rings over arbitrary fields. To be able to go back to equation \eqref{1.1} over $A$, we need an effective procedure to decide whether a given element of $B$ belongs to $A^*$. For this decision procedure, we apply an effective result of Aschenbrenner \cite[2004]{Asc04} on systems of linear equations over polynomial rings over $\mathbb{Z}$. The above approach was already followed by Gy\H{o}ry \cite[1983]{Gy83}, \cite[1984]{Gy84}. However, in these papers the domains $A$ are represented over $\mathbb{Z}$ in a different way. Hence, to select those solutions from $B$ of the equations under consideration which belong to $A$, certain restrictions on the domains $A$ had to be imposed. In a forthcoming paper, we will give some applications of our above theorems and our method of proof to other classes of Diophantine equations over finitely generated domains. \section{Effective linear algebra over polynomial rings}\label{2} We have collected some effective results for systems of linear equations to be solved in polynomials with coefficients in a field, or with coefficients in $\mathbb{Z}$. Here and in the remainder of this paper, we write \[ \log^* x:=\max (1,\log x)\ \mbox{for $x>0$, } \log^* 0 :=1. \] We use notation $O(\cdot )$ as an abbreviation for $c\times$ the expression between the parentheses, where $c$ is an effectively computable absolute constant. At each occurrence of $O(\cdot )$, the value of $c$ may be different. Given a commutative domain $R$, we denote by $R^{m,n}$ the $R$-module of $m\times n$-matrices with entries in $R$ and by $R^n$ the $R$-module of $n$-dimensional column vectors with entries in $R$. Further, ${\rm GL}_n(R)$ denotes the group of matrices in $R^{n,n}$ with determinant in the unit group $R^*$. The degree of a polynomial $f\in R[X_1,\ldots , X_N]$, that is, its total degree, is denoted by $\deg f$. From matrices $A,B$ with the same number of rows, we form a matrix $[A,B]$ by placing the columns of $B$ after those of $A$. Likewise, from two matrices $A,B$ with the same number of columns we form $\left[\begin{smallmatrix}A\\B\end{smallmatrix}\right]$ by placing the rows of $B$ below those of $A$. 
The logarithmic height $h(S)$ of a finite set $S=\{ a_1,\ldots , a_t\}\subset\mathbb{Z}$ is defined by $h(S):=\log\max (|a_1|,\ldots , |a_t|)$. The logarithmic height $h(U)$ of a matrix with entries in $\mathbb{Z}$ is defined by the logarithmic height of the set of entries of $U$. The logarithmic height $h(f)$ of a polynomial with coefficients in $\mathbb{Z}$ is the logarithmic height of the set of coefficients of $f$. \begin{lemma}\label{le:2.1} Let $U\in\mathbb{Z}^{m,n}$. Then the $\mathbb{Q}$-vector space of ${\bf y}\in\mathbb{Q}^n$ with $U{\bf y} ={\bf 0}$ is generated by vectors in $\mathbb{Z}^n$ of logarithmic height at most $mh(U)+\mbox{$\textstyle{\frac{1}{2}}$} m\log m$. \end{lemma} \begin{proof} Without loss of generality we may assume that $U$ has rank $m$, and moreover, that the matrix $B$ consisting of the first $m$ columns of $U$ is invertible. Let $\Delta := \det B$. By multiplying with $\Delta B^{-1}$, we can rewrite $U{\bf y} ={\bf 0} $ as $[\Delta I_m ,\, C]{\bf y} ={\bf 0}$, where $I_m$ is the $m\times m$-unit matrix, and $C$ consists of $m\times m$-subdeterminants of $U$. The solution space of this system is generated by the columns of $\left[\begin{smallmatrix}-C\\ \Delta I_{n-m}\end{smallmatrix}\right]$. An application of Hadamard's inequality gives the upper bound from the lemma for the logarithmic heights of these columns. \end{proof} \begin{proposition}\label{le:2.2} Let $F$ be a field, $N\geq 1$, and $R:=F [X_1,\ldots , X_N]$. Further, let $A$ be an $m\times n$-matrix and ${\bf b}$ an $m$-dimensional column vector, both consisting of polynomials from $R$ of degree $\leq d$ where $d\geq 1$. \\[0.15cm] (i) The $R$-module of ${\bf x}\in R^n$ with $A{\bf x} ={\bf 0}$ is generated by vectors ${\bf x}$ whose coordinates are polynomials of degree at most $(2md)^{2^N}$. \\[0.15cm] (ii) Suppose that $A{\bf x} ={\bf b}$ is solvable in ${\bf x}\in R^n$. Then it has a solution ${\bf x}$ whose coordinates are polynomials of degree at most $(2md)^{2^N}$. \end{proposition} \begin{proof} See Aschenbrenner \cite[Thms. 3.2, 3.4]{Asc04}. Results of this type were obtained earlier, but not with a completely correct proof, by Hermann \cite[1926]{Her26} and Seidenberg \cite[1974]{Sei74}. \end{proof} \begin{corollary}\label{co:2.3} Let $R:= \mathbb{Q} [X_1,\ldots , X_N]$. Further, let $A$ be an $m\times n$-matrix of polynomials in $\mathbb{Z} [X_1,\ldots , X_N]$ of degrees at most $d$ and logarithmic heights at most $h$ where $d\geq 1$, $h\geq 1$. Then the $R$-module of ${\bf x}\in R^n$ with $A{\bf x} ={\bf 0}$ is generated by vectors ${\bf x}$, consisting of polynomials in $\mathbb{Z} [X_1,\ldots , X_N]$ of degree at most $(2md)^{2^N}$ and height at most $(2md)^{6^N}(h+1)$. \end{corollary} \begin{proof} By Proposition \ref{le:2.2} (i) we have to study $A{\bf x} ={\bf 0}$, restricted to vectors ${\bf x}\in R^n$ consisting of polynomials of degree at most $(2md)^{2^N}$. The set of these ${\bf x}$ is a finite-dimensional $\mathbb{Q}$-vector space, and we have to prove that it is generated by vectors whose coordinates are polynomials in $\mathbb{Z} [X_1,\ldots , X_N]$ of logarithmic height at most $(2md)^{6^N}(h+1)$. If ${\bf x}$ consists of polynomials of degree at most $(2md)^{2^N}$, then $A{\bf x}$ consists of $m$ polynomials with coefficients in $\mathbb{Q}$ of degrees at most $(2md)^{2^N}+d$, all of whose coefficients have to be set to $0$. 
This leads to a system of linear equations $U{\bf y} ={\bf 0}$, where ${\bf y}$ consists of the coefficients of the polynomials in ${\bf x}$ and $U$ consists of integers of logarithmic heights at most $h$. Notice that the number $m^*$ of rows of $U$ is $m$ times the number of monomials in $N$ variables of degree at most $(2md)^{2^N}+d$, that is \[ m^*\leq m\binom{(2md)^{2^N}+d+N}{N}. \] By Lemma \ref{le:2.1} the solution space of $U{\bf y} ={\bf 0}$ is generated by integer vectors of logarithmic height at most \[ m^*h+\mbox{$\textstyle{\frac{1}{2}}$} m^*\log m^*\leq (2md)^{6^N}(h+1). \] This completes the proof of our corollary. \end{proof} \begin{lemma}\label{le:2.4} Let $U\in\mathbb{Z}^{m,n}$, ${\bf b}\in\mathbb{Z}^m$ be such that $U{\bf y}={\bf b}$ is solvable in $\mathbb{Z}^n$. Then it has a solution ${\bf y}\in\mathbb{Z}^n$ with $h({\bf y} )\leq mh([U,{\bf b} ])+\mbox{$\textstyle{\frac{1}{2}}$} m\log m$. \end{lemma} \begin{proof} Assume without loss of generality that $U$ and $[U,{\bf b} ]$ have rank $m$. By a result of Borosh, Flahive, Rubin and Treybig \cite[1989]{BFRT89}, $U{\bf y} ={\bf b}$ has a solution ${\bf y}\in\mathbb{Z}^n$ such that the absolute values of the entries of ${\bf y}$ are bounded above by the maximum of the absolute values of the $m\times m$-subdeterminants of $[U,{\bf b} ]$. The upper bound for $h({\bf y} )$ as in the lemma easily follows from Hadamard's inequality. \end{proof} \begin{proposition}\label{le:2.5} Let $N\geq 1$ and let $f_1,\ldots , f_m,b\in\mathbb{Z} [X_1,\ldots , X_N]$ be polynomials of degrees at most $d$ and logarithmic heights at most $h$ where $d\geq 1$, $h\geq 1$, such that \begin{equation}\label{2.1} f_1x_1+\cdots +f_mx_m=b \end{equation} is solvable in $x_1,\ldots , x_m\in\mathbb{Z} [X_1,\ldots , X_N]$. Then \eqref{2.1} has a solution in polynomials $x_1,\ldots , x_m\in\mathbb{Z} [X_1,\ldots , X_N]$ with \begin{equation}\label{2.2} \deg x_i \leq (2d)^{\exp O(N\log^* N)}(h+1),\ \ h(x_i)\leq (2d)^{\exp O(N\log^* N)}(h+1)^{N+1} \end{equation} for $i=1,\ldots , m$. \end{proposition} \begin{proof} Aschenbrenner's main theorem \cite[Theorem A]{Asc04} states that Eq. \eqref{2.1} has a solution $x_1,\ldots , x_m\in\mathbb{Z} [X_1,\ldots , X_N]$ with $\deg x_i\leq d_0$ for $i=1,\ldots , m$, where \[ d_0=(2d)^{\exp O(N\log^*N)}(h+1). \] So it remains to show the existence of a solution with small logarithmic height. Let us restrict to solutions $(x_1,\ldots , x_m)$ of \eqref{2.1} of degree $\leq d_0$, and denote by ${\bf y}$ the vector of coefficients of the polynomials $x_1,\ldots , x_m$. Then \eqref{2.1} translates into a system of linear equations $U{\bf y} ={\bf b}$ which is solvable over $\mathbb{Z}$. Here, the number of equations, i.e., the number of rows of $U$, is equal to $m^*:= \binom{d_0+d+N}{N}$. Further, $h([U, {\bf b} ])\leq h$. By Lemma \ref{le:2.4}, $U{\bf y} ={\bf b}$ has a solution ${\bf y}$ with coordinates in $\mathbb{Z}$ of height at most \[ m^*h +\mbox{$\textstyle{\frac{1}{2}}$} m^*\log m^*\leq (2d)^{\exp O(N\log^* N)}(h+1)^{N+1}. \] It follows that \eqref{2.1} has a solution $x_1,\ldots , x_m\in\mathbb{Z} [X_1,\ldots , X_N]$ satisfying \eqref{2.2}. \end{proof} \noindent {\bf Remarks. 1.} Aschenbrenner gives in \cite{Asc04} an example which shows that the upper bound for the degrees of the $x_i$ cannot depend on $d$ and $N$ only. \\[0.15cm] {\bf 2.} The above proposition gives an effective criterion for ideal membership in $\mathbb{Z} [X_1,\ldots , X_N]$. Let $b\in\mathbb{Z} [X_1,\ldots , X_N]$ be given. Further, suppose that an ideal $I$ of $\mathbb{Z} [X_1,\ldots , X_N]$ is given by a finite set of generators $f_1,\ldots , f_m$. By the above proposition, if $b\in I$ then there are $x_1,\ldots , x_m\in\mathbb{Z} [X_1,\ldots , X_N]$ with upper bounds for the degrees and heights as in \eqref{2.2} such that $b=\sum_{i=1}^m x_if_i$. It requires only a finite computation to check whether such $x_i$ exist.
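As an aside (our illustration; the example data are hypothetical), over a \emph{field} of coefficients, ideal membership can also be tested with Gr\"{o}bner bases; the point of the bounds \eqref{2.2} is that over $\mathbb{Z}$ the search for the cofactors $x_i$ becomes finite. The following sketch treats the field case $\mathbb{Q} [x,y]$ only.
\begin{verbatim}
# Sketch: ideal membership over Q[x, y] via a Groebner basis.
# Over Z[X_1,...,X_N] one would instead search for cofactors x_i
# within the degree and height bounds of Proposition 2.5.
from sympy import symbols, groebner

x, y = symbols('x y')
f1, f2 = x**2 + y**2 - 1, x*y - 1   # generators of the ideal I
b = x**3 + x*y**2 - x               # b = x*f1, so b lies in I

G = groebner([f1, f2], x, y, order='lex')
print(G.contains(b))                # True
\end{verbatim}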
\section{A reduction}\label{3} We reduce the general unit equation \eqref{1.1} to a unit equation over a domain $B$ of a special type which can be dealt with more easily. Let again $A=\mathbb{Z} [z_1,\ldots , z_r]\supset\mathbb{Z}$ be a commutative domain finitely generated over $\mathbb{Z}$ and denote by $K$ the quotient field of $A$. We assume that $r>0$. We have \begin{equation}\label{3.-1} A\cong \mathbb{Z} [X_1,\ldots , X_r]/I \end{equation} where $I$ is the ideal of polynomials $f\in\mathbb{Z} [X_1,\ldots , X_r]$ such that $f(z_1,\ldots , z_r)=0$. The ideal $I$ is finitely generated. Let $d\geq 1$, $h\geq 1$ and assume that \begin{equation}\label{3.0} I=(f_1,\ldots , f_m)\ \ \mbox{with } \deg f_i\leq d,\ \ h(f_i)\leq h\ (i=1,\ldots , m). \end{equation} Suppose that $K$ has transcendence degree $q\geq 0$. In case that $q>0$, we assume without loss of generality that $z_1,\ldots , z_q$ form a transcendence basis of $K/\mathbb{Q}$. We write $t:= r-q$ and rename $z_{q+1},\ldots , z_r$ as $y_1,\ldots , y_t$, respectively. In case that $t=0$ we have $A=\mathbb{Z} [z_1,\ldots , z_q]$, $A^*=\{\pm 1\}$ and Theorem \ref{th:1.1} is trivial. So we assume henceforth that $t>0$. Define \begin{eqnarray*} &&A_0:=\mathbb{Z} [z_1,\ldots , z_q],\ \ K_0:=\mathbb{Q} (z_1,\ldots , z_q)\ \ \mbox{if } q>0, \\[0.15cm] &&A_0:=\mathbb{Z},\ \ K_0:=\mathbb{Q}\ \ \mbox{if } q=0. \end{eqnarray*} Then \[ A=A_0[y_1,\ldots , y_t],\ \ K=K_0(y_1,\ldots , y_t). \] Clearly, $K$ is a finite extension of $K_0$, so in particular an algebraic number field if $q=0$. Using standard algebra techniques, one can show that there exist $y\in A$, $f\in A_0$ such that $K=K_0(y)$, $y$ is integral over $A_0$, and \[ A\subseteq B:=A_0[f^{-1},y],\ \ \ \ a,b,c\in B^*. \] If $\varepsilon,\eta\in A^*$ form a solution to \eqref{1.1}, then $\varepsilon_1:= a\varepsilon/c$, $\eta_1 :=b\eta/c$ satisfy \begin{equation}\label{3.-2} \varepsilon_1 +\eta_1 =1,\ \ \varepsilon_1 ,\eta_1\in B^*. \end{equation} At the end of this section, we formulate Proposition \ref{pr:3.6} which gives an effective result for equations of the type \eqref{3.-2}. More precisely, we introduce another type of degree and height $\overline{{\rm deg}}\, (\alpha )$ and $\overline{h} (\alpha )$ for elements $\alpha$ of $B$, and give effective upper bounds for the $\overline{{\rm deg}}\,$ and $\overline{h}$ of $\varepsilon_1,\eta_1$. Subsequently we deduce Theorem \ref{th:1.1}. The deduction of Theorem \ref{th:1.1} is based on some auxiliary results which are proved first. We start with an explicit construction of $y,f$, with effective upper bounds in terms of $r$, $d$, $h$ and $a,b,c$ for the degrees and logarithmic heights of $f$ and of the coefficients in $A_0$ of the monic minimal polynomial of $y$ over $K_0$. Here we follow more or less Seidenberg \cite[1974]{Sei74}.
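To fix ideas, here is a toy instance of this setup (our own illustration, not used in the sequel): $A=\mathbb{Z} [\sqrt{2}\, ]\cong\mathbb{Z} [X]/(X^2-2)$, with $r=1$, $q=0$, $t=1$. Arithmetic in $A$ is carried out on representatives modulo the generator of $I$, and $A$ has non-trivial units such as $1+\sqrt{2}$, which is exactly the situation Theorem \ref{th:1.1} addresses.
\begin{verbatim}
# Sketch: computing with representatives in A = Z[X]/(X^2 - 2).
from sympy import symbols, rem, expand

X = symbols('X')
F = X**2 - 2                  # generator of the ideal I

def mult(a, b):
    # multiply two representatives and reduce modulo I
    return rem(expand(a * b), F, X)

u = 1 + X                     # represents the unit 1 + sqrt(2)
v = -1 + X                    # represents -1 + sqrt(2)
print(mult(u, v))             # 1, so v represents u^{-1} in A*
\end{verbatim}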
Second, for a given solution $\varepsilon ,\eta$ of \eqref{1.1}, we derive effective upper bounds for the degrees and logarithmic heights of representatives for $\varepsilon$, $\varepsilon^{-1}$, $\eta$, $\eta^{-1}$ in terms of $\overline{{\rm deg}}\, (\varepsilon_1 )$, $\overline{h} (\varepsilon_1 )$, $\overline{{\rm deg}}\, (\eta_1)$, $\overline{h} (\eta_1)$. Here we use Proposition \ref{le:2.5} (Aschenbrenner's result). We introduce some further notation. First let $q>0$. Then since $z_1,\ldots , z_q$ are algebraically independent, we may view them as independent variables, and for $\alpha\in A_0$, we denote by $\deg\alpha$, $h(\alpha )$ the total degree and logarithmic height of $\alpha$, viewed as a polynomial in $z_1,\ldots , z_q$. In case that $q=0$, we have $A_0=\mathbb{Z}$, and we agree that $\deg \alpha =0$, $h(\alpha )=\log |\alpha|$ for $\alpha\in A_0$. We frequently use the following estimate, valid for all $q\geq 0$: \begin{lemma}\label{le:3.0} Let $g_1,\ldots , g_n\in A_0$ and $g=g_1\cdots g_n$. Then \[ |h(g)-\sum_{i=1}^n h(g_i)|\leq q\deg g. \] \end{lemma} \begin{proof} See Bombieri and Gubler \cite[Lemma 1.6.11, p. 27]{BoGu06}. \end{proof} We write ${\bf Y} =(X_{q+1},\ldots , X_r)$ and $K_0({\bf Y} ):= K_0(X_{q+1},\ldots , X_r)$, etc. Given $f\in \mathbb{Q} (X_1,\ldots , X_r)$ we denote by $f^*$ the rational function of $K_0({\bf Y} )$ obtained by substituting $z_i$ for $X_i$ for $i=1,\ldots , q$ (and $f^*=f$ if $q=0$). We view elements $f^*\in A_0[{\bf Y} ]$ as polynomials in ${\bf Y}$ with coefficients in $A_0$. We denote by $\deg_{{\bf Y}} f^*$ the (total) degree of $f^*\in K_0[{\bf Y} ]$ with respect to ${\bf Y}$. We recall that $\deg g$ is defined for elements of $A_0$ and is taken with respect to $z_1,\ldots , z_q$. With this notation, we can rewrite \eqref{3.-1}, \eqref{3.0} as \begin{equation}\label{3.0a} \left\{\begin{array}{l} A\cong A_0[{\bf Y} ]/(f_1^*,\ldots , f_m^*), \\[0.1cm] \deg_{{\bf Y}} f_i^* \leq d\ \mbox{for } i=1,\ldots , m, \\[0.1cm] \mbox{the coefficients of $f_1^*,\ldots , f_m^*$ in $A_0$ have degrees at most $d$} \\ \mbox{and logarithmic heights at most $h$.} \end{array}\right. \end{equation} Put $D:= [K:K_0]$ and denote by $\sigma_1,\ldots , \sigma_D$ the $K_0$-isomorphic embeddings of $K$ in an algebraic closure $\overline{K_0}$ of $K_0$. \begin{lemma}\label{le:3.1} (i) We have $D\leq d^t$. \\ (ii) There exist integers $a_1,\ldots , a_t$ with $|a_i|\leq D^2$ for $i=1,\ldots , t$ such that for $w:=a_1y_1+\cdots +a_ty_t$ we have $K=K_0(w)$. \end{lemma} \begin{proof} (i) The set \[ \mathcal{W} := \{ {\bf y} \in \overline{K_0}^t:\ f_1^*({\bf y} )=\cdots =f_m^*({\bf y} )=0\} \] consists precisely of the images of $(y_1,\ldots , y_t)$ under $\sigma_1,\ldots ,\sigma_D$. So we have to prove that $\mathcal{W}$ has cardinality at most $d^t$. In fact, this follows from a repeated application of B\'{e}zout's Theorem. Given $g_1,\ldots , g_k\in K_0[{\bf Y} ]$, we denote by $\mathcal{V}(g_1,\ldots , g_k)$ the common set of zeros of $g_1,\ldots , g_k$ in $\overline{K_0}^t$. Let $g_1:=f_1^*$. Then by the version of B\'{e}zout's Theorem in Hartshorne \cite[p. 53, Thm. 7.7]{Har77}, the irreducible components of $\mathcal{V}(g_1)$ have dimension $t-1$, and the sum of their degrees is at most $\deg_{{\bf Y}} g_1\leq d$. Take a $\overline{K_0}$-linear combination $g_2$ of $f_1^*,\ldots , f_m^*$ not vanishing identically on any of the irreducible components of $\mathcal{V} (g_1)$.
For any of these components, say $\mathcal{V}$, the intersection of $\mathcal{V}$ and $\mathcal{V} (g_2)$ is a union of irreducible components, each of dimension $t-2$, whose degrees have sum at most $\deg_{{\bf Y}} g_2\cdot\deg\mathcal{V}\leq d\deg \mathcal{V}$. It follows that the irreducible components of $\mathcal{V} (g_1,g_2)$ have dimension $t-2$ and that the sum of their degrees is at most $d^2$. Continuing like this, we see that there are linear combinations $g_1,\ldots , g_t$ of $f_1^*,\ldots , f_m^*$ such that for $i=1,\ldots , t$, the irreducible components of $\mathcal{V} (g_1,\ldots , g_i)$ have dimension $t-i$ and the sum of their degrees is at most $d^i$. For $i=t$ it follows that $\mathcal{V} (g_1,\ldots , g_t)$ is a set of at most $d^t$ points. Since $\mathcal{W}\subseteq \mathcal{V}(g_1,\ldots , g_t)$ this proves (i). (ii) Let $a_1,\ldots , a_t$ be integers. Then $w:=\sum_{i=1}^t a_iy_i$ generates $K$ over $K_0$ if and only if $\sum_{j=1}^t a_j\sigma_i(y_j)$ ($i=1,\ldots , D$) are distinct. There are integers $a_i$ with $|a_i|\leq D^2$ for which this holds. Indeed, each of the $D(D-1)/2$ conditions $\sum_{j=1}^t a_j\big(\sigma_i(y_j)-\sigma_{i'}(y_j)\big)=0$ with $i\not= i'$ defines a proper linear subspace of $\overline{K_0}^t$, which contains at most $(2D^2+1)^{t-1}$ points of the box $\{ -D^2,\ldots , D^2\}^t$; since $D(D-1)/2<2D^2+1$, some point of this box lies outside all of these subspaces. \end{proof} \begin{lemma}\label{le:3.2} There are $\mathcal{G}_0,\ldots , \mathcal{G}_D\in A_0$ such that \begin{eqnarray} \label{3.1} &&\sum_{i=0}^D \mathcal{G}_iw^{D-i}=0,\ \ \mathcal{G}_0\mathcal{G}_D\not= 0, \\ \label{3.1a} &&\deg\mathcal{G}_i\leq (2d)^{\exp O(r)},\ \ \ h(\mathcal{G}_i )\leq (2d)^{\exp O(r)}(h+1)\ \ \ (i=0,\ldots , D). \end{eqnarray} \end{lemma} \begin{proof} In what follows we write ${\bf Y} =(X_{q+1},\ldots , X_r)$ and ${\bf Y}^{{\bf u}}:=X_{q+1}^{u_1}\cdots X_{q+t}^{u_t}$, $|{\bf u} |:= u_1+\cdots +u_t$ for tuples of non-negative integers ${\bf u} =(u_1,\ldots , u_t)$. Further, we define $W:=\sum_{j=1}^t a_jX_{q+j}$. $\mathcal{G}_0,\ldots , \mathcal{G}_D$ as in \eqref{3.1} clearly exist since $w$ has degree $D$ over $K_0$. By \eqref{3.0a}, there are $g_1^*,\ldots , g_m^*\in A_0[{\bf Y} ]$ such that \begin{equation}\label{3.2} \sum_{i=0}^D \mathcal{G}_iW^{D-i}= \sum_{j=1}^m g_j^*f_j^*. \end{equation} By Proposition \ref{le:2.2} (ii), applied with the field $F=K_0$, there are polynomials $g_j^*\in K_0[{\bf Y} ]$ (so with coefficients being rational functions in ${\bf z}$) satisfying \eqref{3.2} of degree at most $(2\max(d,D))^{2^t}\leq (2d^t)^{2^t}=: d_0$ in ${\bf Y}$. By multiplying $\mathcal{G}_0,\ldots ,\mathcal{G}_D$ with an appropriate non-zero factor from $A_0$ we may assume that the $g_j^*$ are polynomials in $A_0[{\bf Y} ]$ of degree at most $d_0$ in ${\bf Y}$. By considering \eqref{3.2} with such polynomials $g_j^*$, we obtain \begin{equation}\label{3.3} \sum_{i=0}^D \mathcal{G}_iW^{D-i}= \sum_{j=1}^m \Big(\sum_{|{\bf u} |\leq d_0} g_{j,{\bf u}}{\bf Y}^{{\bf u}}\Big)\cdot \Big(\sum_{|{\bf v} |\leq d} f_{j,{\bf v}}{\bf Y}^{{\bf v}}\Big), \end{equation} where $g_{j,{\bf u}}\in A_0$ and $f_j^*=\sum_{|{\bf v} |\leq d} f_{j,{\bf v}}{\bf Y}^{{\bf v}}$ with $f_{j,{\bf v}}\in A_0$. We view $\mathcal{G}_0,\ldots ,\mathcal{G}_D$ and the polynomials $g_{j,{\bf u}}$ as the unknowns of \eqref{3.3}. Then \eqref{3.3} has solutions with $\mathcal{G}_0\mathcal{G}_D\not= 0$. We may view \eqref{3.3} as a system of linear equations $\AA{\bf x} ={\bf 0}$ over $K_0$, where ${\bf x}$ consists of $\mathcal{G}_i$ ($i=0,\ldots , D$) and $g_{j,{\bf u}}$ ($j=1,\ldots , m$, $|{\bf u} |\leq d_0$). By Lemma \ref{le:3.1} and an elementary estimate, the polynomial $W^{D-i}=(\sum_{k=1}^t a_kX_{q+k})^{D-i}$ has logarithmic height at most $O(D\log (2D^2t))\leq (2d)^{O(t)}$.
By combining this with \eqref{3.0a}, it follows that the entries of the matrix $\AA$ are elements of $A_0$ of degrees at most $d$ and logarithmic heights at most $h_0:=\max ((2d)^{O(t)},h)$. Further, the number of rows of $\AA$ is at most the number of monomials in ${\bf Y}$ of degree at most $d_0+d$, which is bounded above by $m_0:=\binom{d_0+d+t}{t}$. So by Corollary \ref{co:2.3}, the solution module of \eqref{3.3} is generated by vectors ${\bf x} =(\mathcal{G}_0,\ldots , \mathcal{G}_D,\,\{ g_{j,{\bf u}}\})$, consisting of elements from $A_0$ of degree and height at most \[ \big(2m_0d\big)^{2^q}\leq (2d)^{\exp O(r)},\ \ \big(2m_0d\big)^{6^q}(h_0+1)\leq (2d)^{\exp O(r)}(h+1), \] respectively. Observe that for any solution of \eqref{3.3}, substitution of $y_i$ for $X_{q+i}$ ($i=1,\ldots , t$) gives $\sum_{i=0}^D \mathcal{G}_iw^{D-i}=0$, so that $(\mathcal{G}_0,\ldots ,\mathcal{G}_D)$ is a $K_0$-multiple of the coefficient vector of the minimal polynomial of $w$ over $K_0$; by \eqref{3.1} this coefficient vector has non-zero first and last entries, and hence every solution of \eqref{3.3} has either $\mathcal{G}_0=\cdots =\mathcal{G}_D=0$ or $\mathcal{G}_0\mathcal{G}_D\not= 0$. At least one of the above generating vectors ${\bf x}$ must therefore have $\mathcal{G}_0\mathcal{G}_D\not= 0$, since otherwise every solution of \eqref{3.3} would have $\mathcal{G}_0=\cdots =\mathcal{G}_D=0$, contradicting \eqref{3.1}. Thus, there exists a solution ${\bf x}$ whose components $\mathcal{G}_0,\ldots ,\mathcal{G}_D$ satisfy both \eqref{3.1}, \eqref{3.1a}. This proves our lemma. \end{proof} It will be more convenient to work with \[ y:=\mathcal{G}_0w=\mathcal{G}_0\cdot (a_1y_1+\cdots +a_ty_t). \] In the case $D=1$ we set $y:=1$. The following properties of $y$ follow at once from Lemmas \ref{le:3.0}--\ref{le:3.2}. \begin{corollary}\label{co:3.3} We have $K=K_0(y)$, $y\in A$, $y$ is integral over $A_0$, and $y$ has minimal polynomial $\mathcal{F} (X)=X^D+\mathcal{F}_1X^{D-1}+\cdots +\mathcal{F}_D$ over $K_0$ with \[ \mathcal{F}_i\in A_0,\ \ \deg \mathcal{F}_i\leq (2d)^{\exp O(r)}, \ h(\mathcal{F}_i)\leq (2d)^{\exp O(r)}(h+1) \] for $i=1,\ldots , D$. \end{corollary} Recall that $A_0=\mathbb{Z}$ if $q=0$ and $\mathbb{Z} [z_1,\ldots , z_q]$ if $q>0$, where in the latter case, $z_1,\ldots , z_q$ are algebraically independent. Hence $A_0$ is a unique factorization domain, and so the gcd of a finite set of elements of $A_0$ is well-defined and up to sign uniquely determined. With every element $\alpha\in K$ we can associate an up to sign unique tuple $P_{\alpha ,0},\ldots , P_{\alpha ,D-1},Q_{\alpha}$ of elements of $A_0$ such that \begin{equation}\label{3.4} \alpha = Q_{\alpha}^{-1}\sum_{j=0}^{D-1} P_{\alpha ,j}y^j\ \ \mbox{with } Q_{\alpha}\not= 0,\ \gcd (P_{\alpha ,0},\ldots , P_{\alpha ,D-1},Q_{\alpha})=1. \end{equation} Put \begin{equation}\label{3.4b} \left\{\begin{array}{l} \overline{{\rm deg}}\, \alpha := \max (\deg P_{\alpha ,0},\ldots , \deg P_{\alpha ,D-1}, \deg Q_{\alpha}), \\[0.15cm] \overline{h} (\alpha ):= \max \big( h(P_{\alpha ,0}),\ldots , h(P_{\alpha ,D-1}),h(Q_{\alpha})\big) \end{array}\right. . \end{equation} Then for $q=0$ we have $\overline{{\rm deg}}\,\alpha =0$, $\overline{h} (\alpha )=\log\max \big(|P_{\alpha ,0}|,\ldots , |P_{\alpha ,D-1}|,|Q_{\alpha}|\big)$. \begin{lemma}\label{le:3.4} Let $\alpha\in K^*$ and let $(a,b)$ be a pair of representatives for $\alpha$, with $a,b\in\mathbb{Z} [X_1,\ldots , X_r]$, $b\not\in I$. Put $d^*:=\max (d,\deg a , \deg b)$, $h^*:=\max (h,h(a),h(b))$. Then \begin{equation} \label{3.4.a} \overline{{\rm deg}}\,\alpha \leq (2d^*)^{\exp O(r)},\ \ \overline{h} (\alpha )\leq (2d^*)^{\exp O(r)} (h^*+1). \end{equation} \end{lemma} \begin{proof} Consider the linear equation \begin{equation}\label{3.5} Q\cdot \alpha =\sum_{j=0}^{D-1} P_jy^j \end{equation} in unknowns $P_0,\ldots , P_{D-1},Q\in A_0$. This equation has a solution with $Q\not= 0$, since $\alpha\in K=K_0(y)$ and $y$ has degree $D$ over $K_0$.
Write again ${\bf Y} =(X_{q+1},\ldots , X_r)$ and put $Y:=\mathcal{G}_0\cdot (\sum_{j=1}^t a_jX_{q+j})$. Let $a^*,\, b^*\in A_0[{\bf Y} ]$ be obtained from $a,b$ by substituting $z_i$ for $X_i$ for $i=1,\ldots , q$ ($a^*=a$, $b^*=b$ if $q=0$). By \eqref{3.0a}, there are $g_j^*\in A_0[{\bf Y} ]$ such that \begin{equation}\label{3.6} Q\cdot a^*-b^*\sum_{j=0}^{D-1} P_jY^j = \sum_{j=1}^m g_j^*f_j^*. \end{equation} By Proposition \ref{le:2.2} (ii) this identity holds with polynomials $g_j^*\in A_0[{\bf Y} ]$ of degree in ${\bf Y}$ at most $(2\max (d^* ,D))^{2^t}\leq (2d^*)^{t2^t}$, where possibly we have to multiply $(P_0,\ldots , P_{D-1},Q)$ with a non-zero element from $A_0$. Now completely similarly as in the proof of Lemma \ref{le:3.2}, one can rewrite \eqref{3.6} as a system of linear equations over $K_0$ and then apply Corollary \ref{co:2.3}. It follows that \eqref{3.5} is satisfied by $P_0,\ldots , P_{D-1},Q\in A_0$ with $Q\not= 0$ and \begin{eqnarray*} &&\deg P_i,\, \deg Q\leq (2d^*)^{\exp O(r)},\\ &&h(P_i),\, h(Q)\leq (2d^*)^{\exp O(r)}(h^*+1)\ \ (i=0,\ldots , D-1). \end{eqnarray*} By dividing $P_0,\ldots , P_{D-1},Q$ by their gcd and using Lemma \ref{le:3.0} we obtain $P_{\alpha ,0},\ldots , P_{\alpha ,D-1},Q_{\alpha}\in A_0$ satisfying both \eqref{3.4} and \begin{eqnarray*} &&\deg P_{\alpha ,i},\, \deg Q_{\alpha}\leq (2d^*)^{\exp O(r)},\\ &&h(P_{\alpha ,i}),\, h(Q_{\alpha})\leq (2d^*)^{\exp O(r)}(h^*+1)\ \ (i=0,\ldots , D-1). \end{eqnarray*} \end{proof} \begin{lemma}\label{le:3.4a} Let $\alpha_1,\ldots , \alpha_n\in K^*$. For $i=1,\ldots , n$, let $(a_i,b_i)$ be a pair of representatives for $\alpha_i$, with $a_i,b_i\in\mathbb{Z} [X_1,\ldots , X_r]$, $b_i\not\in I$. Put \begin{eqnarray*} d^{**}&:=&\max (d,\deg a_1,\deg b_1,\ldots , \deg a_n,\deg b_n), \\[0.1cm] h^{**}&:=&\max \big(h,h(a_1),h(b_1),\ldots , h(a_n),h(b_n)\big). \end{eqnarray*} Then there is a non-zero $f\in A_0$ such that \begin{eqnarray} \label{3.x1} &&A\subseteq A_0[y,f^{-1}],\ \alpha_1,\ldots ,\alpha_n\in A_0[y,f^{-1}]^*, \\[0.15cm] \label{3.x2} &&\deg f\leq (n+1)(2d^{**})^{\exp O(r)},\ h(f)\leq (n+1)(2d^{**})^{\exp O(r)}(h^{**}+1). \end{eqnarray} \end{lemma} \begin{proof} Take \[ f:=\prod_{i=1}^t Q_{y_i}\cdot \prod_{j=1}^n\big( Q_{\alpha_j}Q_{\alpha_j^{-1}}\big). \] Since, in general, $Q_{\beta}\beta\in A_0[y]$ for $\beta\in K^*$, we have $f\beta\in A_0[y]$ for $\beta =y_1,\ldots , y_t,\alpha_1,\alpha_1^{-1},\ldots ,\alpha_n,\alpha_n^{-1}$. This implies \eqref{3.x1}. The inequalities \eqref{3.x2} follow at once from Lemmas \ref{le:3.4} and \ref{le:3.0}. \end{proof} \begin{lemma}\label{le:3.5} Let $\lambda\in K^*$ and let $\varepsilon$ be a non-zero element of $A$. Let $(a,b)$ with $a,b\in\mathbb{Z} [X_1,\ldots , X_r]$ be a pair of representatives for $\lambda$. Put \begin{eqnarray*} d_0&:=&\max (\deg f_1,\ldots , \deg f_m ,\deg a,\deg b, \overline{{\rm deg}}\, \lambda \varepsilon ), \\[0.15cm] h_0&:=&\max \big(h(f_1),\ldots , h(f_m) ,h(a),h(b), \overline{h} ( \lambda\varepsilon)\,\big). \end{eqnarray*} Then $\varepsilon$ has a representative $\widetilde{\varepsilon}\in\mathbb{Z} [X_1,\ldots , X_r]$ such that \[ \deg\widetilde{\varepsilon}\leq (2d_0)^{\exp O(r\log^* r)}(h_0+1),\ \ h(\widetilde{\varepsilon})\leq (2d_0)^{\exp O(r\log^* r)}(h_0+1)^{r+1}.
\] If moreover $\varepsilon\in A^*$, then $\varepsilon^{-1}$ has a representative $\widetilde{\varepsilon}'\in\mathbb{Z} [X_1,\ldots , X_r]$ with \[ \deg \widetilde{\varepsilon}'\leq (2d_0)^{\exp O(r\log^* r)}(h_0+1),\ \ h(\widetilde{\varepsilon}')\leq (2d_0)^{\exp O(r\log^* r)}(h_0+1)^{r+1}. \] \end{lemma} \begin{proof} In case that $q>0$, we identify $z_i$ with $X_i$ and view elements of $A_0$ as polynomials in $\mathbb{Z} [X_1,\ldots , X_q]$. Put $Y:=\mathcal{G}_0\cdot (\sum_{i=1}^t a_iX_{q+i})$. We have \begin{equation}\label{3.9} \lambda\varepsilon =Q^{-1}\sum_{i=0}^{D-1} P_iy^i \end{equation} with $P_0,\ldots , P_{D-1},Q\in A_0$ and $\gcd (P_0,\ldots , P_{D-1},Q)=1$. According to \eqref{3.9}, $\widetilde{\varepsilon}\in\mathbb{Z} [X_1,\ldots , X_r]$ is a representative for $\varepsilon$ if and only if there are $g_1,\ldots , g_m\in\mathbb{Z} [X_1,\ldots , X_r]$ such that \begin{equation}\label{3.10} \widetilde{\varepsilon}\cdot (Q\cdot a) +\sum_{i=1}^m g_if_i = b\sum_{i=0}^{D-1} P_iY^i. \end{equation} We may view \eqref{3.10} as an inhomogeneous linear equation in the unknowns $\widetilde{\varepsilon},g_1,\ldots , g_m$. Notice that by Lemmas \ref{le:3.1}--\ref{le:3.4} the degrees and logarithmic heights of $Qa$ and $b\sum_{i=0}^{D-1} P_iY^i$ are all bounded above by $(2d_0)^{\exp O(r)}$, $(2d_0)^{\exp O(r)}(h_0+1)$, respectively. Now Proposition \ref{le:2.5} implies that \eqref{3.10} has a solution with upper bounds for $\deg\widetilde{\varepsilon}$, $h(\widetilde{\varepsilon})$ as stated in the lemma. Now suppose that $\varepsilon\in A^*$. Again by \eqref{3.9}, $\widetilde{\varepsilon}'\in\mathbb{Z} [X_1,\ldots , X_r]$ is a representative for $\varepsilon^{-1}$ if and only if there are $g_1',\ldots , g_m'\in\mathbb{Z} [X_1,\ldots , X_r]$ such that \[ \widetilde{\varepsilon}'\cdot b\sum_{i=0}^{D-1} P_iY^i +\sum_{i=1}^m g_i'f_i =Q\cdot a. \] Similarly as above, this equation has a solution with upper bounds for $\deg\widetilde{\varepsilon}'$, $h(\widetilde{\varepsilon}')$ as stated in the lemma. \end{proof} Recall that we have defined $A_0=\mathbb{Z} [z_1,\ldots , z_q]$, $K_0=\mathbb{Q} (z_1,\ldots , z_q)$ if $q>0$ and $A_0=\mathbb{Z}$, $K_0=\mathbb{Q}$ if $q=0$, and that in the case $q=0$, degrees and $\overline{{\rm deg}}\,$-s are always zero. Theorem \ref{th:1.1} can be deduced from the following Proposition, which makes sense also if $q=0$. The proof of this Proposition is given in Sections \ref{4}--\ref{6}. \begin{proposition}\label{pr:3.6} Let $f\in A_0$ with $f\not=0$, and let \[ \mathcal{F} =X^D+\mathcal{F}_1X^{D-1}+\cdots +\mathcal{F}_D\in A_0[X]\ \ (D\geq 1) \] be the minimal polynomial of $y$ over $K_0$. Let $d_1\geq 1$, $h_1\geq 1$ and suppose \[ \max (\deg f ,\deg\mathcal{F}_1,\ldots ,\deg\mathcal{F}_D)\leq d_1,\ \ \max (h(f) ,h(\mathcal{F}_1),\ldots , h(\mathcal{F}_D))\leq h_1. \] Define the domain $B:=A_0[y,f^{-1}]$. Then for each pair $(\varepsilon_1,\eta_1)$ with \begin{equation}\label{3.12} \varepsilon_1+\eta_1=1,\ \ \ \varepsilon_1,\eta_1\in B^* \end{equation} we have \begin{eqnarray} \label{3.13} &&\overline{{\rm deg}}\, \varepsilon_1 ,\overline{{\rm deg}}\, \eta_1 \leq 4qD^2\cdot d_1, \\[0.15cm] \label{3.14} &&\overline{h} (\varepsilon_1) , \overline{h} (\eta_1)\leq \exp O\Big( 2D(q+d_1)\log^* \{2D(q+d_1)\}+Dh_1\Big). 
\end{eqnarray} \end{proposition} \begin{proof}[Proof of Theorem \ref{th:1.1}] Let $a,b,c\in A$ be the coefficients of \eqref{1.3}, and $\widetilde{a},\widetilde{b},\widetilde{c}$ the representatives for $a,b,c$ from the statement of Theorem \ref{th:1.1}. By Lemma \ref{le:3.4a}, there exists a non-zero $f\in A_0$ such that $A\subseteq B:=A_0[y,f^{-1}]$, $a,b,c\in B^*$, and moreover, $\deg f\leq (2d)^{\exp O(r)}$ and $h(f)\leq (2d)^{\exp O(r)}(h+1)$. By Corollary \ref{co:3.3} we have the same type of upper bounds for the degrees and logarithmic heights of $\mathcal{F}_1,\ldots , \mathcal{F}_D$. So in Proposition \ref{pr:3.6} we may take $d_1= (2d)^{\exp O(r)}$, $h_1=(2d)^{\exp O(r)}(h+1)$. Finally, by Lemma \ref{le:3.1} we have $D\leq d^t$. Let $(\varepsilon ,\eta )$ be a solution of \eqref{1.1} and put $\varepsilon_1:=a\varepsilon /c$, $\eta_1:= b\eta/c$. By Proposition \ref{pr:3.6} we have \[ \overline{{\rm deg}}\, \varepsilon_1\leq 4qd^{2t} (2d)^{\exp O(r)}\leq (2d)^{\exp O(r)},\ \ \overline{h} (\varepsilon_1)\leq \exp \Big( (2d)^{\exp O(r)}(h+1)\Big). \] We apply Lemma \ref{le:3.5} with $\lambda =a/c$. Notice that $\lambda$ is represented by $(\widetilde{a},\widetilde{c})$. By assumption, $\widetilde{a}$ and $\widetilde{c}$ have degrees at most $d$ and logarithmic heights at most $h$. Letting $\widetilde{a},\widetilde{c}$ play the role of $a,b$ in Lemma \ref{le:3.5}, we see that in that lemma we may take $h_0=\exp \Big( (2d)^{\exp O(r)}(h+1)\Big)$ and $d_0=(2d)^{\exp O(r)}$. It follows that $\varepsilon ,\varepsilon^{-1}$ have representatives $\widetilde{\varepsilon}$, $\widetilde{\varepsilon}'\in\mathbb{Z} [X_1,\ldots , X_r]$ such that \[ \deg \widetilde{\varepsilon},\, \deg \widetilde{\varepsilon}',\, h(\widetilde{\varepsilon}),\, h(\widetilde{\varepsilon}') \leq \exp \Big( (2d)^{\exp O(r)}(h+1)\Big). \] We observe here that the upper bound for $\overline{h} (\varepsilon_1)$ dominates by far the other terms in our estimation. In the same manner one can derive similar upper bounds for the degrees and logarithmic heights of representatives for $\eta$ and $\eta^{-1}$. This completes the proof of Theorem \ref{th:1.1}. \end{proof} Proposition \ref{pr:3.6} is proved in Sections \ref{4}--\ref{6}. In Section \ref{4} we deduce the degree bound \eqref{3.13}. Here, our main tool is Mason's effective result on $S$-unit equations over function fields \cite[1983]{Mas83}. In Section \ref{5} we work out a more precise version of an effective specialization argument of Gy\H{o}ry \cite[1983]{Gy83}, \cite[1984]{Gy84}. In Section \ref{6} we prove \eqref{3.14} by combining the specialization argument from Section \ref{5} with a recent effective result for $S$-unit equations over number fields, due to Gy\H{o}ry and Yu \cite[2006]{GyYu06}. \section{Bounding the degree}\label{4} We start by recalling some results on function fields in one variable. Let ${\bf k}$ be an algebraically closed field of characteristic $0$ and let $z$ be transcendental over ${\bf k}$. Let $K$ be a finite extension of ${\bf k} (z)$. Denote by $g_{K/{\bf k}}$ the genus of $K$, and by $M_K$ the collection of valuations of $K/{\bf k}$, i.e., the valuations of $K$ with value group $\mathbb{Z}$ which are trivial on ${\bf k}$. Recall that these valuations satisfy the sum formula \[ \sum_{v\in M_K} v(x)=0\ \ \mbox{for $x\in K^*$.} \] As usual, for a finite subset $S$ of $M_K$ the group of $S$-units of $K$ is given by \[ O_S^*=\{ x\in K^*:\, v(x)=0\ \mbox{for } v\in M_K\setminus S\}.
\] The (homogeneous) height of ${\bf x} =(x_1,\ldots , x_n)\in K^n$ relative to $K/{\bf k}$ is defined by \[ H_K({\bf x} )=H_K(x_1,\ldots , x_n):=-\sum_{v\in M_K}\min (v(x_1),\ldots , v(x_n)). \] By the sum formula, \begin{equation}\label{4.0} H_K(\alpha {\bf x} )=H_K({\bf x} )\ \ \mbox{for $\alpha\in K^*$.} \end{equation} The height of $x\in K$ relative to $K/{\bf k}$ is defined by \[ H_K(x):= H_K(1,x)=-\sum_{v\in M_K}\min (0,v(x)). \] If $L$ is a finite extension of $K$, we have \begin{equation}\label{4.1} H_L(x_1,\ldots , x_n)=[L:K]H_K (x_1,\ldots , x_n)\ \ \mbox{for } (x_1,\ldots , x_n) \in K^n. \end{equation} By $\deg f$ we denote the total degree of $f\in{\bf k} [z]$. Then for $f_1,\ldots , f_n\in{\bf k} [z]$ with $\gcd (f_1,\ldots , f_n)=1$ we have \begin{equation}\label{4.2} H_{{\bf k} (z)}(f_1,\ldots , f_n)=\max (\deg f_1,\ldots , \deg f_n). \end{equation} \begin{lemma}\label{le:4.1} Let $y_1,\ldots , y_m\in K$ and suppose that \[ X^m+f_1X^{m-1}+\cdots +f_m=(X-y_1)\cdots (X-y_m) \] for certain $f_1,\ldots , f_m\in{\bf k} [z]$. Then \[ [K:{\bf k} (z)]\max (\deg f_1,\ldots ,\deg f_m)=\sum_{i=1}^m H_K(y_i). \] \end{lemma} \begin{proof} By Gauss' Lemma we have for $v\in M_K$, \[ \min (v(f_1),\ldots , v(f_m))=\sum_{i=1}^m \min (0,v(y_i)). \] Now take the sum over $v\in M_K$ and apply \eqref{4.1}, \eqref{4.2}. \end{proof} \begin{lemma}\label{le:4.2} Let $K$ be the splitting field over ${\bf k} (z)$ of $F:=X^m+f_1X^{m-1}+\cdots +f_m$, where $f_1,\ldots , f_m\in {\bf k} [z]$. Then \[ g_{K/{\bf k}}\leq (d-1)m\cdot \max_{1\leq i\leq m} \deg f_i , \] where $d:=[K:{\bf k} (z)]$. \end{lemma} \begin{proof} This is Lemma H of Schmidt \cite[1978]{Schmidt78}. \end{proof} In what follows, the cardinality of a set $S$ is denoted by $|S|$. \begin{proposition}\label{le:4.3} Let $K$ be a finite extension of ${\bf k} (z)$ and $S$ be a finite subset of $M_K$. Then for every solution of \begin{equation}\label{4.3} x+y=1\ \ \mbox{in } x,y\in O_S^*\setminus {\bf k}^* \end{equation} we have $\max (H_K(x),H_K(y))\leq |S|+2g_{K/{\bf k}}-2$. \end{proposition} \begin{proof} See Mason \cite[1983]{Mas83}. \end{proof} We keep the notation from Proposition \ref{pr:3.6}. We may assume that $q>0$ because the case $q=0$ is trivial. Let as before $K_0=\mathbb{Q} (z_1,\ldots , z_q)$, $K=K_0(y)$, $A_0=\mathbb{Z} [z_1,\ldots , z_q]$, $B=\mathbb{Z} [z_1,\ldots , z_q,f^{-1},y]$. Fix $i\in \{ 1,\ldots , q\}$. Let ${\bf k}_i:=\mathbb{Q}(z_1,\ldots , z_{i-1},z_{i+1},\ldots , z_q)$ and $\overline{{\bf k}_{i}}$ its algebraic closure. Thus, the domain $A_0$ is contained in $\overline{{\bf k}_i}[z_i]$. Let $y^{(1)}=y,\ldots , y^{(D)}$ denote the conjugates of $y$ over $K_0$. Let $M_i$ denote the splitting field of the polynomial $X^{D}+\mathcal{F}_1X^{D-1}+\cdots +\mathcal{F}_D$ over $\overline{{\bf k}_i}(z_i)$, i.e. \[ M_i:=\overline{{\bf k}_i}(z_i,y^{(1)},\ldots ,y^{(D)}). \] The subring \[ B_i:=\overline{{\bf k}_i}[z_i,f^{-1}, y^{(1)},\ldots ,y^{(D)}] \] of $M_i$ contains $B=\mathbb{Z} [z_1,\ldots ,z_q,f^{-1},y]$ as a subring. Put $\Delta_i:=[M_i:\overline{{\bf k}_i}(z_i)]$. We apply Lemmas \ref{le:4.1}, \ref{le:4.2} and Proposition \ref{le:4.3} with $z_i,{\bf k}_i,M_i$ instead of $z,{\bf k},K$. Denote by $g_{M_i}$ the genus of $M_i/\overline{{\bf k}_i}$. The height $H_{M_i}$ is taken with respect to $M_i/\overline{{\bf k}_i}$. For $g\in A_0$, we denote by $\deg_{z_i} g$ the degree of $g$ in the variable $z_i$. 
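For orientation (an illustration added here, not needed in the sequel), the bound of Proposition \ref{le:4.3} is sharp already over the rational function field: take ${\bf k} =\mathbb{C}$, $K={\bf k} (z)$, so $g_{K/{\bf k}}=0$, and consider the solution $x=-z^3$, $y=1+z^3$ of $x+y=1$. For $S$ we may take the valuations at $z=0$, at $z=\infty$ and at the three zeros of $1+z^3$; then $|S|=5$ and \[ H_K(x)=H_K(y)=3=|S|+2g_{K/{\bf k}}-2. \]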
\begin{lemma}\label{le:4.4} Let $\alpha\in K$ and denote by $\alpha^{(1)},\ldots , \alpha^{(D)}$ the conjugates of $\alpha$ over $K_0$. Then \[ \overline{{\rm deg}}\, \alpha \leq qD\cdot d_1+\sum_{i=1}^q \Delta_i^{-1}\sum_{j=1}^D H_{M_i}(\alpha^{(j)} ). \] \end{lemma} \begin{proof} We have \[ \alpha =Q^{-1}\sum_{j=0}^{D-1} P_jy^j \] for certain $P_0,\ldots , P_{D-1},Q\in A_0$ with $\gcd (Q,P_0,\ldots , P_{D-1})=1$. Clearly, \begin{equation}\label{4.3a} \overline{{\rm deg}}\,\alpha\leq \sum_{i=1}^q \mu_i,\ \ \mbox{where } \mu_i:=\max (\deg_{z_i} Q,\deg_{z_i} P_0,\ldots , \deg_{z_i} P_{D-1}). \end{equation} Below, we estimate $\mu_1,\ldots , \mu_q$ from above. We fix $i\in\{ 1,\ldots , q\}$ and use the notation introduced above. Obviously, \[ \alpha^{(k)} =Q^{-1}\sum_{j=0}^{D-1} P_j\cdot (y^{(k)})^j \ \mbox{for } k=1,\ldots , D . \] Let $\Omega$ be the $D\times D$-matrix with rows \[ (1,\ldots , 1),\, (y^{(1)},\ldots , y^{(D)}),\ldots , \big( (y^{(1)})^{D-1},\ldots , (y^{(D)})^{D-1}\big). \] By Cramer's rule, $P_j/Q=\delta_j/\delta$, where $\delta =\det\Omega$, and $\delta_j$ is the determinant of the matrix obtained by replacing the $j$-th row of $\Omega$ by $(\alpha^{(1)},\ldots ,\alpha^{(D)})$. Gauss' Lemma implies that $\gcd(P_0,\ldots , P_{D-1},Q)=1$ in the ring $\overline{{\bf k}_i}[z_i]$. By \eqref{4.2} (with $z_i$ in place of $z$) we have \begin{eqnarray*} \mu_i &=& \max (\deg_{z_i}Q,\deg_{z_i} P_0,\ldots , \deg_{z_i}P_{D-1}) \\[0.15cm] &=& H_{\overline{{\bf k}_i}(z_i)}(Q,P_0,\ldots , P_{D-1}). \end{eqnarray*} Using $[M_i:\overline{{\bf k}_i}(z_i)]=\Delta_i$, the identities \eqref{4.1}, \eqref{4.0} (with $z_i$ instead of $z$) and the fact that $(\delta ,\delta_1,\ldots , \delta_D)$ is a scalar multiple of $(Q,P_0,\ldots , P_{D-1})$ we obtain \begin{equation} \label{4.4} \Delta_i\mu_i = H_{M_i}(Q,P_0,\ldots , P_{D-1}) =H_{M_i}(\delta ,\delta_1,\ldots , \delta_D). \end{equation} We bound from above the right-hand side. A straightforward estimate yields that for every valuation $v$ of $M_i/\overline{{\bf k}_i}$, \begin{eqnarray*} &&-\min (v(\delta ),v(\delta_1),\ldots , v(\delta_D)) \\[0.15cm] &&\qquad\quad\leq -D\sum_{j=1}^D \min (0,v(y^{(j)}))-\sum_{j=1}^D\min (0,v(\alpha^{(j)})). \end{eqnarray*} Then summation over $v$ and an application of Lemma \ref{le:4.1} lead to \begin{eqnarray*} H_{M_i}(\delta ,\delta_1,\ldots , \delta_D )&\leq& D\sum_{j=1}^D H_{M_i}(y^{(j)})+\sum_{j=1}^D H_{M_i}(\alpha^{(j)}), \\[0.1cm] &\leq& D\Delta_i\max (\deg_{z_i} \mathcal{F}_1,\ldots ,\deg_{z_i}\mathcal{F}_D)+\sum_{j=1}^D H_{M_i}(\alpha^{(j)}) \\[0.1cm] &\leq& \Delta_i\cdot Dd_1 + \sum_{j=1}^D H_{M_i}(\alpha^{(j)}), \end{eqnarray*} and then a combination with \eqref{4.4} gives \[ \mu_i\leq Dd_1+\Delta_i^{-1}\sum_{j=1}^D H_{M_i}(\alpha^{(j)}). \] Now these bounds for $i=1,\ldots , q$ together with \eqref{4.3a} imply our lemma. \end{proof} \begin{proof}[Proof of \eqref{3.13}] We fix again $i\in\{ 1,\ldots , q\}$ and use the notation introduced above. By Lemma \ref{le:4.2}, applied with ${\bf k}_i,z_i,M_i$ instead of ${\bf k}, z,K$ and with $F=\mathcal{F}=X^{D}+\mathcal{F}_1X^{D-1}+\cdots +\mathcal{F}_D$, we have \begin{equation}\label{4.6} g_{M_i}\leq(\Delta_i-1)D\max_j\deg_{z_i}(\mathcal{F}_j)\leq (\Delta_i-1)\cdot Dd_1. \end{equation} Let $S$ denote the subset of valuations $v$ of $M_i/\overline{{\bf k}_i}$ such that $v(z_i)<0$ or $v(f)>0$. Each valuation of $\overline{{\bf k}_i}(z_i)$ can be extended to at most $[M_i:\overline{{\bf k}_i}(z_i)]=\Delta_i$ valuations of $M_i$.
Hence $M_i$ has at most $\Delta_i$ valuations $v$ with $v(z_i)<0$ and at most $\Delta_i\deg_{z_i} f$ valuations with $v(f)>0$. Thus, \begin{equation}\label{4.7} |S|\leq \Delta_i+\Delta_i\deg_{z_i} f\leq\Delta_i(1+\deg f)\leq \Delta_i(1+d_1). \end{equation} Every $\alpha\in M_i$ which is integral over $\overline{{\bf k}_i}[z_i,f^{-1}]$ belongs to $O_S$. The elements $y^{(1)},\ldots , y^{(D)}$ belong to $M_i$ and are integral over $A_0=\mathbb{Z} [z_1,\ldots , z_q]$ so they certainly belong to $O_S$. As a consequence, the elements of $B$ and their conjugates over $\mathbb{Q} (z_1,\ldots , z_q)$ belong to $O_S$. In particular, if $\varepsilon_1,\eta_1\in B^*$ and $\varepsilon_1+\eta_1=1$, then \begin{equation}\label{4.5} \varepsilon_1^{(j)}+\eta_1^{(j)}=1,\ \varepsilon_1^{(j)},\eta_1^{(j)}\in O_S^*\ \ \mbox{for } j=1,\ldots , D . \end{equation} We apply Proposition \ref{le:4.3} and insert the upper bounds \eqref{4.6}, \eqref{4.7}. It follows that for $j=1,\ldots , D$ we have either $\varepsilon_1^{(j)}\in\overline{{\bf k}_i}$ or \[ H_{M_i}(\varepsilon_1^{(j)})\leq |S|+2g_{M_i}-2\leq 3\Delta_i\cdot Dd_1; \] in fact the last upper bound is valid also if $\varepsilon_1^{(j)}\in\overline{{\bf k}_i}$. Together with Lemma \ref{le:4.4} this gives \[ \overline{{\rm deg}}\, \varepsilon_1\leq qDd_1 + qD\cdot 3Dd_1\leq 4qD^2d_1. \] For $\overline{{\rm deg}}\, \eta_1$ we derive the same estimate. This proves \eqref{3.13}. \end{proof} \section{Specializations}\label{5} In this section we prove some results about specialization homomorphisms from the domain $B$ from Proposition \ref{pr:3.6} to $\overline{\mathbb{Q}}$. We start with some notation and some preparatory lemmas. The set of places of $\mathbb{Q}$ is $M_{\mathbb{Q}}=\{ \infty\}\cup\{ {\rm primes}\}$. By $|\cdot |_{\infty}$ we denote the ordinary absolute value on $\mathbb{Q}$ and by $|\cdot |_p$ ($p$ prime) the $p$-adic absolute value, with $|p|_p=p^{-1}$. More generally, let $L$ be an algebraic number field and denote by $M_L$ its set of places. Given $v\in M_L$, we define the absolute value $|\cdot |_v$ in such a way that its restriction to $\mathbb{Q}$ is $|\cdot |_p$ if $v$ lies above $p\in M_{\mathbb{Q}}$. These absolute values satisfy the product formula $\prod_{v\in M_L} |x|_v^{d_v}=1$ for $x\in L^*$, where $d_v:=[L_v:\mathbb{Q}_p]/[L:\mathbb{Q} ]$. The (absolute logarithmic) height of ${\bf x} =(x_1,\ldots , x_m)\in L^m\setminus\{ {\bf 0}\}$ is defined by \[ h({\bf x} )=h(x_1,\ldots , x_m)=\log\prod_{v\in M_L}\big(\max (|x_1|_v,\ldots , |x_m|_v)\big)^{d_v}. \] By the product formula, $h(\alpha{\bf x} )=h({\bf x} )$ for $\alpha\in L^*$. Moreover, $h({\bf x} )$ depends only on ${\bf x}$ and not on the choice of the field $L$ such that ${\bf x}\in L^m$. So it defines a height on $\overline{\mathbb{Q}}^m\setminus\{{\bf 0}\}$. The (absolute logarithmic) height of $\alpha\in\overline{\mathbb{Q}}$ is defined by $h(\alpha ):=h((1,\alpha ))$. In case that $\alpha\in L$ we have \[ h(\alpha )=\log\prod_{v\in M_L} \max (1,|\alpha |_v^{d_v}). \] For ${\bf a} =(a_1,\ldots , a_m)\in\mathbb{Z}^m$ with $\gcd (a_1,\ldots , a_m)=1$ we have \begin{equation}\label{5.6a} h({\bf a} )=\log\max (|a_1|,\ldots , |a_m|). \end{equation} It is easy to verify that for $a_1,\ldots , a_m,b_1,\ldots , b_m\in\overline{\mathbb{Q}}$, \begin{equation}\label{5.7a} h(a_1b_1+\cdots +a_mb_m)\leq h(1,a_1,\ldots , a_m)+ h(1,b_1,\ldots , b_m)+\log m. \end{equation}
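As a sanity check on these definitions (our illustration): for a rational number $a/b$ in lowest terms, homogeneity and \eqref{5.6a} give $h(a/b)=h((b,a))=\log\max (|a|,|b|)$, which is easily computed.
\begin{verbatim}
# Sketch: absolute logarithmic height of a rational number a/b;
# for a/b in lowest terms, h(a/b) = log max(|a|, |b|).
from math import gcd, log

def height(a, b):
    g = gcd(a, b)
    a, b = abs(a // g), abs(b // g)
    return log(max(a, b))

print(height(4, 6))   # h(2/3) = log 3 = 1.0986...
\end{verbatim}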
Let $G$ be a polynomial with coefficients in $L$. If $a_1,\ldots , a_r$ are the non-zero coefficients of $G$, we put $|G|_v:=\max (|a_1|_v,\ldots , |a_r|_v)$ for $v\in M_L$. For a polynomial $G$ with coefficients in $\mathbb{Z}$ we define $h(G):=\log |G|_{\infty}$. We start with four auxiliary results that are used in the construction of our specializations. \begin{lemma}\label{le:5.1} Let $m\geq 1$, $\alpha_1,\ldots , \alpha_m\in\overline{\mathbb{Q}}$ and suppose that $G(X):=\prod_{i=1}^m (X-\alpha_i)\in\mathbb{Z} [X]$. Then \[ |h(G)-\sum_{i=1}^m h(\alpha _i)|\leq m. \] \end{lemma} \begin{proof} See Bombieri and Gubler \cite[Theorem 1.6.13, p. 28]{BoGu06}. \end{proof} \begin{lemma}\label{le:5.2} Let $m\geq 1$, let $\alpha_1,\ldots ,\alpha_m\in\overline{\mathbb{Q}}$ be distinct and suppose that $G(X):=\prod_{i=1}^m (X-\alpha_i)\in\mathbb{Z} [X]$. Let $q,p_0,\ldots , p_{m-1}$ be integers with \[ \gcd (q,p_0,\ldots , p_{m-1})=1, \] and put \[ \beta_i:=\sum_{j=0}^{m-1} (p_j/q)\alpha_i^j\ \ (i=1,\ldots , m). \] Then \[ \log \max (|q|,|p_0|,\ldots , |p_{m-1}|)\leq 2m^2 + (m-1)h(G)+\sum_{j=1}^m h(\beta_j). \] \end{lemma} \begin{proof} For $m=1$ the assertion is obvious, so we assume $m\geq 2$. Let $\Omega$ be the $m\times m$ matrix with rows $(\alpha_1^i,\ldots , \alpha_m^i)$ ($i=0,\ldots , m-1)$. By Cramer's rule we have $p_i/q =\delta_i/\delta$ ($i=0,\ldots , m-1$), where $\delta =\det\Omega$ and $\delta_i$ is the determinant of the matrix obtained by replacing the $i$-th row of $\Omega$ by $(\beta_1,\ldots ,\beta_m)$. Put $\mu :=\log \max (|q|,|p_0|,\ldots , |p_{m-1}|)$. Then by \eqref{5.6a}, \[ \mu = h(q,p_0,\ldots , p_{m-1})=h(\delta ,\delta_0,\ldots ,\delta_{m-1}). \] Let $L=\mathbb{Q} (\alpha_1,\ldots ,\alpha_m)$. By Hadamard's inequality for the infinite places and the ultrametric inequality for the finite places, we get \[ \max (|\delta |_v,|\delta_0|_v,\ldots ,|\delta_{m-1}|_v)\leq c_v\prod_{i=1}^m\max (1,|\alpha_i|_v)^{m-1}\max (1,|\beta_i|_v) \] for $v\in M_L$, where $c_v=m^{m/2}$ if $v$ is infinite and $c_v=1$ if $v$ is finite. By taking the product over $v\in M_L$ and then logarithms, it follows that \[ \mu\leq \mbox{$\textstyle{\frac{1}{2}}$} m\log m + \sum_{i=1}^m \big( (m-1)h(\alpha_i)+h(\beta_i)\big). \] A combination with Lemma \ref{le:5.1} implies our lemma. \end{proof} \begin{lemma}\label{le:5.3} Let $g\in\mathbb{Z} [z_1,\ldots , z_q]$ be a non-zero polynomial of degree $d$ and $\mathcal{N}$ a subset of $\mathbb{Z}$ of cardinality $>d$. Then \[ |\{ {\bf u}\in\mathcal{N}^q :\, g({\bf u} )=0\}|\leq d|\mathcal{N} |^{q-1}. \] \end{lemma} \begin{proof} We proceed by induction on $q$. For $q=1$ the assertion is clear. Let $q\geq 2$. Write $g=\sum_{i=0}^{d_0} g_i(z_1,\ldots , z_{q-1})z_q^i$ with $g_i\in \mathbb{Z} [z_1,\ldots , z_{q-1}]$ and $g_{d_0}\not= 0$. Then $\deg g_{d_0}\leq d-d_0$. By the induction hypothesis, there are at most $(d-d_0)|\mathcal{N} |^{q-2}\cdot |\mathcal{N} |$ tuples $(u_1,\ldots , u_q)\in\mathcal{N}^q$ with $g_{d_0}(u_1,\ldots , u_{q-1})=0$. Further, there are at most $|\mathcal{N} |^{q-1}\cdot d_0$ tuples ${\bf u}\in\mathcal{N}^q$ with $g_{d_0}(u_1,\ldots , u_{q-1})\not= 0$ and $g(u_1,\ldots , u_q)=0$. Summing these two quantities implies that $g$ has at most $d|\mathcal{N} |^{q-1}$ zeros in $\mathcal{N}^q$. \end{proof} \begin{lemma}\label{le:5.4} Let $g_1,g_2\in\mathbb{Z} [z_1,\ldots , z_q]$ be two non-zero polynomials of degrees $D_1,D_2$, respectively, and let $N$ be an integer $\geq\max(D_1,D_2)$. Define \[ \SS :=\{ {\bf u}\in\mathbb{Z}^q:\ |{\bf u} |\leq N,\, g_2({\bf u} )\not =0\}.
\] Then $\SS$ is non-empty, and \begin{eqnarray}\label{5.4} &&|g_1|_p\leq (4N)^{qD_1(D_1+1)/2}\max\{ |g_1({\bf u} )|_p:\ {\bf u}\in\SS\} \\[0.15cm] \nonumber &&\hspace*{4cm} \mbox{for } p\in M_{\mathbb{Q}}=\{\infty\}\cup\{ {\rm primes}\}. \end{eqnarray} \begin{proof} Put $C_p:=\max\{ |g_1({\bf u} )|_p:\ {\bf u}\in\SS\}$ for $p\in M_{\mathbb{Q}}$. We proceed by induction on $q$, starting with $q=0$. In the case $q=0$ we interpret $g_1,g_2$ as non-zero constants with $|g_1|_p=C_p$ for $p\in M_{\mathbb{Q}}$. Then the lemma is trivial. Let $q\geq 1$. Write \[ g_1=\sum_{j=0}^{D_1'} g_{1j}(z_1,\ldots , z_{q-1})z_q^j,\ \ g_2=\sum_{j=0}^{D_2'} g_{2j}(z_1,\ldots , z_{q-1})z_q^j, \] where $g_{1,D_1'},g_{2,D_2'}\not= 0$. By the induction hypothesis, the set \[ \SS ':=\{ {\bf u} ' \in\mathbb{Z}^{q-1}:\, |{\bf u} '|\leq N,\ g_{2,D_2'}({\bf u} ' )\not= 0\} \] is non-empty and moreover, \begin{equation}\label{5.5} \max_{0\leq j\leq D_1'} |g_{1j}|_p\leq (4N)^{(q-1)D_1(D_1+1)/2}C_p'\ \ \mbox{for } p\in M_{\mathbb{Q}} \end{equation} where \[ C_p':=\max\{ |g_{1j}({\bf u} ')|_p:\, {\bf u} '\in\SS',\, j=0,\ldots , D_1'\}. \] We estimate $C_p'$ from above in terms of $C_p$. Fix ${\bf u} '\in\SS '$. There are at least $2N+1-D_2'\geq D_1'+1$ integers $u_q$ with $|u_q|\leq N$ such that $g_2({\bf u} ',u_q)\not= 0$. Let $a_0,\ldots , a_{D_1'}$ be distinct integers from this set. By Lagrange's interpolation formula, \begin{eqnarray*} g_1({\bf u} ' ,X)&=&\sum_{j=0}^{D_1'} g_{1j}({\bf u} ')X^j \\[0.15cm] &=& \sum_{j=0}^{D_1'} g_1({\bf u} ',a_j)\prod_{\stackrel{i=0}{i\not= j}}^{D_1'}\frac{X-a_i}{a_j-a_i}. \end{eqnarray*} From this we deduce \begin{eqnarray*} \max_{0\leq j\leq D_1'} |g_{1j}({\bf u} ')|&\leq& C_{\infty}\sum_{j=0}^{D_1'}\prod_{\stackrel{i=0}{i\not= j}}^{D_1'}\frac{1+|a_i|}{|a_j-a_i|} \\[0.15cm] &\leq& C_{\infty}(D_1'+1)(N+1)^{D_1'}\leq (4N)^{D_1'(D_1'+1)/2}C_{\infty}. \end{eqnarray*} Now let $p$ be a prime and put $\Delta :=\prod_{0\leq i<j\leq D_1'}|a_j-a_i|$. Then \[ \max_{0\leq j\leq D_1'} |g_{1j}({\bf u} ')|_p\leq C_p|\Delta |_p^{-1}\leq \Delta C_p\leq (4N)^{D_1'(D_1'+1)/2}C_p. \] It follows that $C_p'\leq (4N)^{D_1'(D_1'+1)/2}C_p$ for $p\in M_{\mathbb{Q}}$. A combination with \eqref{5.5} gives \eqref{5.4}. \end{proof} We now introduce our specializations $B\to\overline{\mathbb{Q}}$ and prove some properties. We assume $q>0$ and apart from that keep the notation and assumptions from Proposition \ref{pr:3.6}. In particular, $A_0=\mathbb{Z} [z_1,\ldots , z_q ]$, $K_0=\mathbb{Q} (z_1,\ldots , z_q)$ and \[ K=\mathbb{Q} (z_1,\ldots , z_q,y),\ \ B=\mathbb{Z} [z_1,\ldots , z_q,f^{-1},y], \] where $f$ is a non-zero element of $A_0$, $y$ is integral over $A_0$, and $y$ has minimal polynomial \[ \mathcal{F} :=X^D+\mathcal{F}_1X^{D-1}+\cdots +\mathcal{F}_D\in A_0[X] \] over $K_0$. In the case $D=1$, we take $y=1$, $\mathcal{F} =X-1$. To allow for other applications (e.g., Lemma \ref{le:7.2} below), we consider a more general situation than what is needed for the proof of Proposition \ref{pr:3.6}. Let $d_1\geq d_0\geq 1$, $h_1\geq h_0\geq 1$ and assume that \begin{equation}\label{5.x} \left\{\begin{array}{l} \max (\deg \mathcal{F}_1,\ldots ,\deg \mathcal{F}_D)\leq d_0,\ \ \max (d_0,\deg f)\leq d_1, \\[0.15cm] \max \big(h(\mathcal{F}_1),\ldots , h(\mathcal{F}_D)\,\big)\leq h_0,\ \ \max (h_0,h(f))\leq h_1. \end{array}\right. \end{equation}
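The next sketch (hypothetical data, for illustration only) shows such a specialization in the simplest non-trivial setting $q=1$, $D=2$: we pick $u\in\mathbb{Z}$ with $\mathcal{H} (u)\not= 0$, substitute it into the minimal polynomial of $y$, and send $y$ to one of the roots; the precise definitions follow below.
\begin{verbatim}
# Sketch: a specialization for B = Z[z, f^{-1}, y] with q = 1,
# F = X^2 - z*X + 1 (so D = 2) and f = z; here H = (z^2 - 4)*z.
from sympy import symbols, solve

z, X = symbols('z X')
F = X**2 - z*X + 1               # minimal polynomial of y over Q(z)

u = 3                            # H(3) = 5*3 != 0, so u is admissible
roots = solve(F.subs(z, u), X)   # the two roots y_1(u), y_2(u)
alpha_u = (1 + roots[0]) / u     # image of (1 + y)/z under one map
print(roots, alpha_u)
\end{verbatim}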
Let ${\bf u} =(u_1,\ldots , u_q)\in\mathbb{Z}^q$. Then the substitution $z_1\mapsto u_1,\ldots , z_q\mapsto u_q$ defines a ring homomorphism (specialization) \[ \irS_{{\bf u}}:\ \alpha\mapsto \alpha ({\bf u} ):\ \{ g_1/g_2 :\, g_1,g_2\in A_0,\, g_2({\bf u} )\not= 0\}\to\mathbb{Q} . \] We want to extend this to a ring homomorphism from $B$ to $\overline{\mathbb{Q}}$ and for this, we have to impose some restrictions on ${\bf u}$. Denote by $\Delta_{\mathcal{F}}$ the discriminant of $\mathcal{F}$ (with $\Delta_{\mathcal{F}}:=1$ if $D=\deg\mathcal{F} =1$), and let \begin{equation}\label{5.1} \mathcal{H}:= \Delta_{\mathcal{F}}\mathcal{F}_D\cdot f . \end{equation} Then $\mathcal{H}\in A_0$. Using that $\Delta_{\mathcal{F}}$ is a polynomial of degree $2D-2$ with integer coefficients in $\mathcal{F}_1,\ldots , \mathcal{F}_D$, it follows easily that \begin{equation}\label{5.2} \deg \mathcal{H} \leq (2D-1)d_0+d_1\leq 2Dd_1. \end{equation} Now assume that \begin{equation}\label{5.3} \mathcal{H} ({\bf u} )\not= 0. \end{equation} Then $f({\bf u} )\not= 0$ and moreover, the polynomial \[ \mathcal{F}_{{\bf u}}:= X^D+\mathcal{F}_1({\bf u} )X^{D-1}+\cdots +\mathcal{F}_D({\bf u} ) \] has $D$ distinct zeros which are all different from $0$, say $y_1({\bf u} ),\ldots , y_D({\bf u} )$. Thus, for $j=1,\ldots , D$ the assignment \[ z_1\mapsto u_1,\ldots , z_q\mapsto u_q,\ \ y\mapsto y_j({\bf u} ) \] defines a ring homomorphism $\irS_{{\bf u} ,j}$ from $B$ to $\overline{\mathbb{Q}}$; in the case $D=1$ it is just $\irS_{{\bf u}}$. The image of $\alpha\in B$ under $\irS_{{\bf u},j}$ is denoted by $\alpha_j({\bf u} )$. Recall that we may express elements $\alpha$ of $B$ as \begin{eqnarray}\label{5.6} &&\alpha =\sum_{i=0}^{D-1} (P_i/Q)y^i \\[0.1cm] \nonumber &&\qquad\mbox{with $P_0,\ldots , P_{D-1},Q\in A_0$, $\gcd (P_0,\ldots , P_{D-1},Q)=1$.} \end{eqnarray} Since $\alpha\in B$, the denominator $Q$ must divide a power of $f$, hence $Q({\bf u} )\not= 0$. So we have \begin{equation}\label{5.7} \alpha_j({\bf u} )=\sum_{i=0}^{D-1} (P_i({\bf u} )/Q({\bf u} ))y_j({\bf u} )^i\ \ (j=1,\ldots , D). \end{equation} It is obvious that $\irS_{{\bf u} ,j}$ is the identity on $B\cap\mathbb{Q}$. Thus, if $\alpha\in B\cap\overline{\mathbb{Q}}$, then $\irS_{{\bf u} ,j}(\alpha )$ has the same minimal polynomial as $\alpha$ and so it is conjugate to $\alpha$. For ${\bf u}=(u_1,\ldots , u_q)\in\mathbb{Z}^q$, we put $|{\bf u} |:=\max (|u_1|,\ldots , |u_q|)$. It is easy to verify that for any $g\in A_0$, ${\bf u}\in\mathbb{Z}^q$, \begin{equation}\label{5.8} \log |g({\bf u} )|\leq q\log \deg g +h(g)+\deg g\log \max(1,|{\bf u} |). \end{equation} In particular, \begin{equation}\label{5.8b} h(\mathcal{F}_{{\bf u}})\leq q\log d_0+h_0+d_0\log\max (1,|{\bf u} |) \end{equation} and so by Lemma \ref{le:5.1}, \begin{equation}\label{5.8a} \sum_{j=1}^D h(y_j({\bf u} ))\leq D+ q\log d_0+h_0+d_0\log\max (1,|{\bf u} |). \end{equation} Define the algebraic number fields $K_{{\bf u} ,j}:=\mathbb{Q} (y_j({\bf u} ))$ $(j=1,\ldots , D)$. Denote by $\Delta_L$ the discriminant of an algebraic number field $L$. We derive an upper bound for the discriminant $\Delta_{K_{{\bf u},j}}$ of $K_{{\bf u},j}$. \begin{lemma}\label{le:5.7} Let ${\bf u}\in\mathbb{Z}^q$ with $\mathcal{H} ({\bf u} )\not= 0$. Then for $j=1,\ldots , D$ we have $[K_{{\bf u} ,j}:\mathbb{Q} ]\leq D$ and \[ |\Delta_{K_{{\bf u} ,j}}|\leq D^{2D-1}\left( d_0^q\cdot e^{h_0}\max (1,|{\bf u} |)^{d_0}\right)^{2D-2}. \] \end{lemma} \begin{proof} Let $j\in\{ 1,\ldots , D\}$. The estimate for the degree is obvious.
To estimate the discriminant, let $\mathcal{P}_j$ be the monic minimal polynomial of $y_j({\bf u} )$. Then $\Delta_{K_{{\bf u} ,j}}$ divides the discriminant $\Delta_{\mathcal{P}_j}$ of $\mathcal{P}_j$. Using the expression of the discriminant of a monic polynomial as the product of the squares of the differences of its zeros, one easily shows that $\Delta_{\mathcal{P}_j}$ divides $\Delta_{\mathcal{F}_{{\bf u}}}$ in the ring of algebraic integers and so also in $\mathbb{Z}$. Therefore, $\Delta_{K_{{\bf u} ,j}}$ divides $\Delta_{\mathcal{F}_{{\bf u}}}$ in $\mathbb{Z}$. It remains to estimate from above the discriminant of $\mathcal{F}_{{\bf u}}$. By, e.g., Lewis and Mahler \cite[bottom of p. 335]{LeMa61}, we have \[ |\Delta_{\mathcal{F}_{{\bf u}}}|\leq D^{2D-1}|\mathcal{F}_{{\bf u}}|^{2D-2}, \] where $|\mathcal{F}_{{\bf u}}|$ denotes the maximum of the absolute values of the coefficients of $\mathcal{F}_{{\bf u}}$. By \eqref{5.8b}, this is bounded above by $d_0^qe^{h_0}\max(1,|{\bf u} |)^{d_0}$, so \[ |\Delta_{\mathcal{F}_{{\bf u}}}|\leq D^{2D-1}\big(d_0^qe^{h_0}\max (1,|{\bf u} |)^{d_0}\big)^{2D-2}. \] This implies our lemma. \end{proof} We finish with two lemmas, which relate the height of $\alpha\in B$ to the heights of $\alpha_j({\bf u} )$ for ${\bf u}\in\mathbb{Z}^q$. \begin{lemma}\label{le:5.5} Let ${\bf u}\in\mathbb{Z}^q$ with $\mathcal{H} ({\bf u} )\not= 0$. Let $\alpha\in B$. Then for $j=1,\ldots , D$, \begin{eqnarray*} &&h(\alpha_j({\bf u} ))\leq D^2 +q(D\log d_0+\log \overline{{\rm deg}}\,\alpha ) + Dh_0 +\overline{h} (\alpha)\, + \\[0.1cm] &&\hspace*{6cm} + (Dd_0+\overline{{\rm deg}}\,\alpha )\log\max (1,|{\bf u} |). \end{eqnarray*} \end{lemma} \begin{proof} Let $P_0,\ldots , P_{D-1},Q$ be as in \eqref{5.6} and write $\alpha_j({\bf u} )$ as in \eqref{5.7}. By \eqref{5.7a}, \begin{eqnarray}\label{5.9} &&h(\alpha_j({\bf u} )) \leq \log D + \\[0.1cm] \nonumber &&\qquad +h\big(1,P_0({\bf u} )/Q({\bf u} ),\ldots , P_{D-1}({\bf u} )/Q({\bf u} )\big)+(D-1)h(y_j({\bf u} )). \end{eqnarray} From \eqref{5.8} we infer \begin{eqnarray*} &&h(1,P_0({\bf u} )/Q({\bf u} ),\ldots , P_{D-1}({\bf u} )/Q({\bf u} )) \\[0.15cm] &&\qquad \leq \log\max (|Q({\bf u} )|,|P_0({\bf u} )|,\ldots , |P_{D-1}({\bf u} )|) \\[0.15cm] &&\qquad\leq q\log \overline{{\rm deg}}\,\alpha +\overline{h} (\alpha )+\overline{{\rm deg}}\, \alpha\cdot \log \max (1,|{\bf u} |). \end{eqnarray*} By combining \eqref{5.9} with this inequality and with \eqref{5.8a}, our lemma easily follows. \end{proof} \begin{lemma}\label{le:5.6} Let $\alpha\in B$, $\alpha\not= 0$, and let $N$ be an integer with \[ N\geq\max \big(\overline{{\rm deg}}\,\alpha ,\, 2Dd_0+2(q+1)(d_1+1)\, \big). \] Then the set \[ \SS :=\{ {\bf u}\in\mathbb{Z}^q:\ |{\bf u} |\leq N,\ \mathcal{H} ({\bf u} )\not= 0\} \] is non-empty, and \[ \overline{h} (\alpha )\leq 5N^4(h_1+1)^2+2D(h_1+1)H \] where $H:=\max\{ h(\alpha_j({\bf u} )) :\, {\bf u}\in\SS ,\, j=1,\ldots , D\}$. \end{lemma} \begin{proof} It follows from our assumption on $N$, \eqref{5.2}, and Lemma \ref{le:5.4} that $\SS$ is non-empty. We proceed with estimating $\overline{h} (\alpha )$. Let $P_0,\ldots , P_{D-1},Q\in A_0$ be as in \eqref{5.6}. We analyse $Q$ more closely. Let \[ f=\pm p_1^{k_1}\cdots p_m^{k_m}g_1^{l_1}\cdots g_n^{l_n} \] be the unique factorization of $f$ in $A_0$, where $p_1,\ldots , p_m$ are distinct prime numbers, and $\pm g_1,\ldots , \pm g_n$ distinct irreducible elements of $A_0$ of positive degree.
Notice that \begin{eqnarray}\label{5.10} &m\leq h(f)/\log 2\leq h_1/\log 2,& \\[0.15cm] \label{5.11} &\displaystyle{\sum_{i=1}^n l_ih(g_i)\leq qd_1+h_1,}& \end{eqnarray} where the last inequality is a consequence of Lemma \ref{le:5.1}. Since $\alpha\in B$, the polynomial $Q$ is also composed of $p_1,\ldots , p_m$, $g_1,\ldots , g_n$. Hence \begin{equation}\label{5.11a} Q=a\widetilde{Q}\ \ \mbox{with } a=\pm p_1^{k_1'}\cdots p_m^{k_m'},\ \widetilde{Q}=g_1^{l_1'}\cdots g_n^{l_n'} \end{equation} for certain non-negative integers $k_1',\ldots , l_n'$. Clearly, \[ l_1'+\cdots +l_n'\leq\deg Q\leq\overline{{\rm deg}}\, \alpha\leq N, \] and by Lemma \ref{le:3.0} and \eqref{5.11}, \begin{equation}\label{5.12} h(\widetilde{Q})\leq q\deg Q +\sum_{i=1}^n l_i'h(g_i)\leq N(q+qd_1+h_1)\leq N^2(h_1+1 ). \end{equation} In view of \eqref{5.8}, we have for ${\bf u}\in\SS$, \begin{eqnarray*} \log |\widetilde{Q}({\bf u} )|&\leq& q\log N+h(\widetilde{Q})+\deg \widetilde{Q}\log N \\[0.15cm] &\leq& \textfrac{3}{2}N\log N +N^2(h_1+1)\leq N^2(h_1+2). \end{eqnarray*} Hence \[ h(\widetilde{Q}({\bf u} )\alpha_j({\bf u} ))\leq N^2(h_1+2)+H \] for ${\bf u}\in\SS$, $j=1,\ldots , D$. Further, by \eqref{5.7}, \eqref{5.11a} we have \[ \widetilde{Q}({\bf u} )\alpha_j({\bf u} )=\sum_{i=0}^{D-1} (P_i({\bf u} )/a)y_j({\bf u} )^i . \] Put \[ \delta ({\bf u} ):=\gcd (a,P_0({\bf u} ),\ldots , P_{D-1}({\bf u} )). \] Then by applying Lemma \ref{le:5.2} and then \eqref{5.8b} we obtain \begin{eqnarray}\label{5.13} &&\log\left(\frac{\max (|a|, |P_0({\bf u} )|,\ldots , |P_{D-1}({\bf u} )|)}{\delta ({\bf u} )}\right) \\[0.15cm] \nonumber &&\leq 2D^2+(D-1)h(\mathcal{F}_{{\bf u}})+D\big( N^2(h_1+2)+H\big) \\[0.15cm] \nonumber &&\leq 2D^2+(D-1)(q\log d_1 +h_1+d_1\log N)+D\big( N^2(h_1+2)+H\big) \\[0.15cm] \nonumber && \leq N^3(h_1+2)+DH. \end{eqnarray} Our assumption that $\gcd (Q,P_0,\ldots , P_{D-1})=1$ implies that the gcd of $a$ and the coefficients of $P_0,\ldots , P_{D-1}$ is $1$. Let $p\in\{ p_1,\ldots , p_m\}$ be one of the prime factors of $a$. There is $j\in\{ 0,\ldots , D-1\}$ such that $|P_j|_p=1$. Our assumption on $N$ and \eqref{5.2} imply that $N\geq \max (\deg\mathcal{H} ,\deg P_j)$. This means that Lemma \ref{le:5.4} is applicable with $g_1= P_j$ and $g_2=\mathcal{H}$. It follows that \[ \max\{ |P_j({\bf u} )|_p:\ {\bf u}\in\SS\}\geq (4N)^{-qN(N+1)/2}. \] That is, there is ${\bf u}_0\in\SS$ with $|P_j({\bf u}_0 )|_p\geq (4N)^{-qN(N+1)/2}$. Hence \[ |\delta ({\bf u}_0 )|_p\geq (4N)^{-qN(N+1)/2}. \] Together with \eqref{5.13}, this implies \begin{eqnarray*} \log |a|_p^{-1}&\leq& \log |a/\delta ({\bf u}_0 )|+\log |\delta ({\bf u}_0 )|_p^{-1} \\[0.15cm] &\leq& N^3(h_1+2)+DH +\mbox{$\textstyle{\frac{1}{2}}$} N^3\log 4N\leq N^4(h_1+3)+DH. \end{eqnarray*} Combining this with the upper bound \eqref{5.10} for the number of prime factors of $a$, we obtain \begin{equation}\label{5.14} \log |a|\leq 2N^4h_1(h_1+3)+2Dh_1\cdot H. \end{equation} Together with \eqref{5.11a}, \eqref{5.12}, this implies \begin{eqnarray}\label{5.15} h(Q)&\leq& 2N^4h_1(h_1+3)+2Dh_1\cdot H +N^2(h_1+1) \\[0.15cm] \nonumber &\leq& 3N^4(h_1+1)^2+2Dh_1\cdot H. \end{eqnarray} Further, the right-hand side of \eqref{5.14} is also an upper bound for $\log \delta ({\bf u} )$, for ${\bf u}\in\SS$. Combining this with \eqref{5.13} gives \begin{eqnarray*} &&\log\max\{ |P_j({\bf u} )|:\ {\bf u}\in\SS ,\, j=0,\ldots , D-1\} \\[0.15cm] &&\quad \leq N^3(h_1+2)+DH + 3N^4(h_1+1)^2+2Dh_1\cdot H \\[0.15cm] &&\quad\leq 4N^4(h_1+1)^2+2D(h_1+1)\cdot H.
\end{eqnarray*} Another application of Lemma \ref{le:5.4} yields \begin{eqnarray*} h(P_j)&\leq& \mbox{$\textstyle{\frac{1}{2}}$} qN(N+1)\log 4N + 4N^4(h_1+1)^2+2D(h_1+1)\cdot H \\[0.15cm] &\leq& 5N^4(h_1+1)^2+2D(h_1+1)\cdot H \end{eqnarray*} for $j=0,\ldots , D-1$. Together with \eqref{5.15} this gives the upper bound for $\overline{h} (\alpha )$ from our lemma. \end{proof} \section{Completion of the proof of Proposition \ref{pr:3.6}}\label{6} It remains only to prove the height bound in \eqref{3.14}. We use an effective result of Gy\H{o}ry and Yu \cite[2006]{GyYu06} on $S$-unit equations in number fields. To state this, we need some notation. Let $L$ be an algebraic number field of degree $d_L$. We denote by $O_L$, $M_L$, $\Delta_L$, $h_L$, $R_L$ the ring of integers, set of places, discriminant, class number and regulator of $L$. The norm of an ideal $\mathfrak{a}$ of $O_L$, i.e., $|O_L/\mathfrak{a} |$, is denoted by $N\mathfrak{a}$. Further, let $S$ be a finite set of places of $L$, containing all infinite places. Suppose $S$ has cardinality $s$. Recall that the ring of $S$-integers $O_S$ and the group of $S$-units $O_S^*$ are given by \begin{eqnarray*} O_S&=&\{ x\in L :\, |x|_v\leq 1\ \mbox{for } v\in M_L\setminus S\},\\[0.15cm] O_S^*&=&\{ x\in L :\, |x|_v=1\ \mbox{for } v\in M_L\setminus S\}. \end{eqnarray*} In case that $S$ consists only of the infinite places of $L$, we put $P:=2$, $Q:=2$. If $S$ contains also finite places, let $\mathfrak{p}_1,\ldots ,\mathfrak{p}_t$ denote the prime ideals corresponding to the finite places of $S$, and put \[ P:=\max\{ N\mathfrak{p}_1,\ldots , N\mathfrak{p}_t \},\ \ \ Q:= N(\mathfrak{p}_1\cdots\mathfrak{p}_t ). \] Further, let $R_S$ denote the $S$-regulator associated with $S$. In case that $S$ consists only of the infinite places of $L$ it is equal to $R_L$, while otherwise \[ R_S =h_SR_L\prod_{i=1}^t\log N\mathfrak{p}_i , \] where $h_S$ is a divisor of $h_L$ whose definition is not important here. By, e.g., formula (59) of \cite{GyYu06} (which is an easy consequence of formula (2) of Louboutin \cite[2000]{Lou00}) we have \[ h_LR_L\leq |\Delta_L|^{1/2}(\log^*|\Delta_L|)^{d_L-1}. \] By the inequality of the geometric and arithmetic mean, we have for $t>0$, \[ \prod_{i=1}^t\log N\mathfrak{p}_i\leq \Big( t^{-1}\log (N\mathfrak{p}_1\cdots N\mathfrak{p}_t)\Big)^t\leq (\log Q)^s \] and hence, \begin{equation}\label{6.1} R_S\leq |\Delta_L|^{1/2}(\log^*|\Delta_L|)^{d_L-1}\cdot (\log^* Q)^s. \end{equation} This is clearly true also if $t=0$. \begin{proposition}\label{le:6.1} Let $\varepsilon ,\eta$ be such that \begin{equation}\label{6.2} \varepsilon +\eta =1,\ \ \varepsilon ,\eta\in O_S^*. \end{equation} Then \begin{equation}\label{6.3} \max (h(\varepsilon ),h(\eta ))\leq c_1PR_S\left( 1+\log ^*R_S/\log P\right), \end{equation} where \[ c_1=\max (1,\pi/d_L)s^{2s+3.5}2^{7s+27}(\log 2s)d_L^{2(s+1)}(\log^* 2d_L)^3. \] \end{proposition} \begin{proof} This is Theorem 1 of Gy\H{o}ry, Yu \cite{GyYu06} with $\alpha =\beta =1$. \end{proof}
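To convey the size of the constant (an illustration with arbitrarily chosen small parameters), one may evaluate $c_1$ numerically; already for $d_L=2$ and $s=3$ it is of the order of magnitude $10^{22}$.
\begin{verbatim}
# Sketch: evaluating the constant c_1 of Proposition 6.1
# for the (arbitrary) sample values d_L = 2, s = 3.
from math import pi, log

def log_star(t):
    return max(1.0, log(t))

def c1(d_L, s):
    return (max(1, pi / d_L) * s**(2*s + 3.5) * 2**(7*s + 27)
            * log(2*s) * d_L**(2*(s + 1)) * log_star(2*d_L)**3)

print(f"{c1(2, 3):.3e}")   # about 1.8e22
\end{verbatim}
The bound \eqref{6.3} is thus completely effective, though, as usual in this area, very large.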
Further, let the set of places $S$ consist of all infinite places of $L$, and all finite places of $L$ lying above the rational prime divisors of $f({\bf u} )$. Note that $y_j({\bf u} )$ is an algebraic integer, and $f({\bf u} )\in O_S^*$. Hence $\irS_{{\bf u} ,j}(B)\subseteq O_S$ and $\irS_{{\bf u} ,j}(B^*)\subseteq O_S^*$. So \begin{equation}\label{6.y} \varepsilon_{1,j}({\bf u} )+\eta_{1,j}({\bf u} )=1,\ \ \varepsilon_{1,j}({\bf u} ),\, \eta_{1,j}({\bf u} )\in O_S^*, \end{equation} where $\varepsilon_{1,j}({\bf u} ),\eta_{1,j}({\bf u} )$ are the images of $\varepsilon_1,\eta_1$ under $\irS_{{\bf u} ,j}$. We estimate from above the upper bound \eqref{6.3} from Proposition \ref{le:6.1}. By assumption, $f$ has degree at most $d_1$ and logarithmic height at most $h_1$, hence \begin{equation}\label{6.x1} |f({\bf u} )|\leq d_1^qe^{h_1}\max(1,|{\bf u} |)^{d_1}=: R({\bf u} ). \end{equation} Since the degree of $L$ is $d_L\leq D$, the cardinality $s$ of $S$ satisfies $s\leq D(1+\omega )$, where $\omega$ is the number of prime divisors of $f({\bf u} )$. Using the standard estimate from prime number theory, $\omega \leq O(\log |f({\bf u} )|/\log\log |f({\bf u})|)$, we obtain \begin{equation}\label{6.x2} s\leq O\Big(\frac{D\log^* R({\bf u} )}{\log^*\log^*R({\bf u})}\Big). \end{equation} From this, one easily deduces that \begin{equation}\label{6.x3} c_1\leq \exp O(D\log^* R({\bf u} )). \end{equation} Next, we estimate $P$ and $R_S$. By \eqref{6.x1}, we have \begin{equation}\label{6.x4} P\leq Q\leq |f({\bf u} )|^D\leq \exp O(D\log^* R({\bf u} )). \end{equation} To estimate $R_S$, we use \eqref{6.1}. By Lemma \ref{le:5.7} (using $d_0\leq d_1$) we have \[ |\Delta_L|\leq D^{2D-1} \big( d_1^qe^{h_1}\max (1,|{\bf u} |)^{d_1}\big)^{2D-2}\leq \exp O(D\log^* R({\bf u} )), \] and this easily implies \[ |\Delta_L|^{1/2}(\log^*|\Delta_L|)^{D-1}\leq \exp O(D\log^* R({\bf u} )). \] Together with the estimates \eqref{6.x2}, \eqref{6.x4} for $s$ and $Q$, this leads to \begin{equation}\label{6.x5} R_S\leq\exp O\Big( D\log^* R({\bf u} )+s\log^*\log^* Q\Big)\leq \exp O(D\log^* R({\bf u} )). \end{equation} Now by collecting \eqref{6.x3}--\eqref{6.x5}, we infer that the right-hand side of \eqref{6.3} is bounded above by $\exp O(D\log^* R({\bf u} ))$. So applying Proposition \ref{le:6.1} to \eqref{6.y} gives \begin{equation}\label{6.x6} h(\varepsilon_{1,j}({\bf u} )),\, h(\eta_{1,j}({\bf u} ))\leq \exp O(D\log^* R({\bf u} )). \end{equation} We apply Lemma \ref{le:5.6} with $N:= 4D^2(q+d_1+1)^2$. From the already established \eqref{3.13} it follows that $\overline{{\rm deg}}\, \varepsilon_1,\, \overline{{\rm deg}}\, \eta_1\leq N$. Further, since $d_1\geq d_0$ we have $N\geq 2Dd_0+2(d_1+1)(q+1)$. So indeed, Lemma \ref{le:5.6} is applicable with this value of $N$. It follows that the set $\SS :=\{ {\bf u}\in\mathbb{Z}^q :\, |{\bf u} |\leq N,\, \mathcal{H} ({\bf u} )\not= 0\}$ is not empty. Further, for ${\bf u}\in\SS$, $j=1,\ldots , D$, we have \begin{eqnarray*} h(\varepsilon_{1,j}({\bf u} ))&\leq& \exp O(Dq\log d_1+Dh_1+Dd_1\log^* N) \\[0.15cm] &\leq& \exp O(N^{1/2}\log^* N +Dh_1), \end{eqnarray*} and so by Lemma \ref{le:5.6}, \[ \overline{h} (\varepsilon_1)\leq \exp O(N^{1/2}\log^* N +Dh_1). \] For $\overline{h} (\eta_1 )$ we obtain the same upper bound. This easily implies \eqref{3.14} in the case $q>0$. Now assume $q=0$.
In this case, $K_0=\mathbb{Q}$, $A_0=\mathbb{Z}$ and $B=\mathbb{Z} [f^{-1},y]$ where $y$ is an algebraic integer with minimal polynomial $\mathcal{F} = X^D+\mathcal{F}_1X^{D-1}+\cdots +\mathcal{F}_D\in\mathbb{Z} [X]$ over $\mathbb{Q}$, and $f$ is a non-zero rational integer. By assumption, $\log |f|\leq h_1$, $\log |\mathcal{F}_i|\leq h_1$ for $i=1,\ldots , D$. Denote by $y_1,\ldots , y_D$ the conjugates of $y$, and let $L=\mathbb{Q} (y_j)$ for some $j$. By a similar argument as in the proof of Lemma \ref{le:5.7}, we have $|\Delta_L|\leq D^{2D-1}e^{(2D-2)h_1}$. The isomorphism given by $y\mapsto y_j$ maps $K$ to $L$ and $B$ to $O_S$, where $S$ consists of the infinite places of $L$ and of the prime ideals of $O_L$ that divide $f$. The estimates \eqref{6.x1}--\eqref{6.x5} remain valid if we replace $R({\bf u} )$ by $e^{h_1}$. Hence for any solution $\varepsilon_1,\eta_1$ of \eqref{3.12}, \[ h(\varepsilon_{1,j}),\, h(\eta_{1,j})\leq \exp O(Dh_1), \] where $\varepsilon_{1,j}$, $\eta_{1,j}$ are the $j$-th conjugates of $\varepsilon_1,\eta_1$, respectively. Now an application of Lemma \ref{le:5.2} with $g=\mathcal{F}$, $m=D$, $\beta_j=\varepsilon_{1,j}$ gives \[ \overline{h} (\varepsilon_1 )\leq \exp O(Dh_1). \] Again we derive the same upper bound for $\overline{h} (\eta_1)$, and deduce \eqref{3.14}. This completes the proof of Proposition \ref{pr:3.6}. \end{proof} \section{Proof of Theorem \ref{th:1.3}}\label{7} We start with some results on multiplicative (in)dependence. \begin{lemma}\label{le:7.1} Let $L$ be an algebraic number field of degree $d$, and $\gamma_0,\ldots , \gamma_s$ non-zero elements of $L$ such that $\gamma_0,\ldots ,\gamma_s$ are multiplicatively dependent, but any $s$ elements among $\gamma_0,\ldots , \gamma_s$ are multiplicatively independent. Then there are non-zero integers $k_0,\ldots , k_s$ such that \begin{eqnarray*} &&\gamma_0^{k_0}\cdots \gamma_s^{k_s}=1, \\[0.15cm] &&|k_i|\leq 58(s!e^s/s^s)d^{s+1}(\log d)h(\gamma_0)\cdots h(\gamma_s)/h(\gamma_i)\ \ \mbox{for } i=0,\ldots , s. \end{eqnarray*} \end{lemma} \begin{proof} This is Corollary 3.2 of Loher and Masser \cite[2004]{LoMa04}. They attribute this result to Yu Kunrui. Another result of this type was obtained earlier by Loxton and van der Poorten \cite[1983]{LvdP83}. \end{proof} We prove a generalization for arbitrary finitely generated domains. As before, let $A=\mathbb{Z} [z_1,\ldots , z_r]\supseteq \mathbb{Z}$ be a domain, and suppose that the ideal $I$ of polynomials $f\in\mathbb{Z} [X_1,\ldots , X_r]$ with $f(z_1,\ldots , z_r)=0$ is generated by $f_1,\ldots , f_m$. Let $K$ be the quotient field of $A$. Let $\gamma_0,\ldots ,\gamma_s$ be non-zero elements of $K$, and for $i=0,\ldots , s$, let $(g_{i1},g_{i2})$ be a pair of representatives for $\gamma_i$, i.e., elements of $\mathbb{Z} [X_1,\ldots , X_r]$ such that \[ \gamma_i =\frac{g_{i1}(z_1,\ldots , z_r)}{g_{i2}(z_1,\ldots , z_r)}. \] \begin{lemma}\label{le:7.2} Assume that $\gamma_0,\ldots ,\gamma_s$ are multiplicatively dependent. Further, assume that $f_1,\ldots , f_m$ and $g_{i1},g_{i2}$ ($i=0,\ldots , s$) have degrees at most $d$ and logarithmic heights at most $h$, where $d\geq 1$, $h\geq 1$. Then there are integers $k_0,\ldots , k_s$, not all equal to $0$, such that \begin{eqnarray} \label{7.2} && \gamma_0^{k_0}\cdots \gamma_s^{k_s}=1, \\[0.15cm] \label{7.3} && |k_i|\leq (2d)^{\exp O(r+s)}(h+1)^s\ \ \mbox{for } i=0,\ldots , s.
\end{eqnarray} \end{lemma} \begin{proof} We assume without loss of generality that any $s$ numbers among $\gamma_0,\ldots ,\gamma_s$ are multiplicatively independent (if this is not the case, take a minimal multiplicatively dependent subset of $\{ \gamma_0,\ldots , \gamma_s\}$ and proceed further with this subset). We first assume that $q>0$. We use an argument of van der Poorten and Schlickewei \cite[1991]{vdPSchl91}. We keep the notation and assumptions from Sections \ref{3}--\ref{5}. In particular, we assume that $z_1,\ldots , z_q$ is a transcendence basis of $K$, and rename $z_{q+1},\ldots , z_r$ as $y_1,\ldots , y_t$, respectively. For brevity, we have included the case $t=0$ as well in our proof. But it should be possible to prove in this case a sharper result by means of a more elementary method. In the case $t>0$, $y$ and $\mathcal{F} =X^D+\mathcal{F}_1X^{D-1}+\cdots +\mathcal{F}_D$ will be as in Corollary \ref{co:3.3}. In the case $t=0$ we take $m=1$, $f_1=0$, $d=h=1$, $y=1$, $\mathcal{F} =X-1$, $D=1$. We construct a specialization such that among the images of $\gamma_0,\ldots ,\gamma_s$ no $s$ elements are multiplicatively dependent, and then apply Lemma \ref{le:7.1}. Let $V\geq 2d$ be a positive integer. Later we shall make our choice of $V$ more precise. Let \begin{eqnarray}\label{7.3a} &&\mathcal{V} :=\{ {\bf v} =(v_0,\ldots , v_s)\in\mathbb{Z}^{s+1}\setminus\{ {\bf 0}\}: \\[0.1cm] \nonumber &&\hspace*{2cm} |v_i|\leq V\ \mbox{for } i=0,\ldots , s, \mbox{and with $v_i=0$ for some $i$}\}. \end{eqnarray} Then \[ \gamma_{{\bf v}}:= \Big(\prod_{i=0}^s\gamma_i^{v_i}\Big)-1\ \ ({\bf v}\in\mathcal{V} ) \] are non-zero elements of $K$. It is not difficult to show that for ${\bf v}\in\mathcal{V}$, $\gamma_{{\bf v}}$ has a pair of representatives $(g_{1,{\bf v}},g_{2,{\bf v}})$ such that \[ \deg g_{1,{\bf v}},\, \deg g_{2,{\bf v}}\leq sdV. \] In the case $t>0$, there exists by Lemma \ref{le:3.4a} a non-zero $f\in A_0$ such that \[ A\subseteq B:= A_0[y,f^{-1}],\ \ \gamma_{{\bf v}}\in B^*\ \mbox{for ${\bf v}\in\mathcal{V}$} \] and \[ \deg f\leq V^{s+1}(2sdV)^{\exp O(r)}\leq V^{\exp O(r+s)}. \] In the case $t=0$ this holds true as well, with $y=1$ and $f=\prod_{{\bf v}\in\mathcal{V}} (g_{1,{\bf v}}\cdot g_{2,{\bf v}})$. We apply the theory on specializations explained in Section \ref{5} with this $f$. We put $\mathcal{H} :=\Delta_{\mathcal{F}}\mathcal{F}_Df$, where $\Delta_{\mathcal{F}}$ is the discriminant of $\mathcal{F}$. Using Corollary \ref{co:3.3} and inserting the bound $D\leq d^t$ from Lemma \ref{le:3.1} we get for $t>0$, \begin{equation}\label{7.4} \left\{\begin{array}{l} d_0:=\max (\deg f_1,\ldots ,\deg f_m,\deg\mathcal{F}_1,\ldots ,\deg\mathcal{F}_D)\leq (2d)^{\exp O(r)} , \\[0.15cm] h_0:=\max \big(h(f_1),\ldots , h(f_m),h(\mathcal{F}_1),\ldots , h(\mathcal{F}_D)\,\big)\leq (2d)^{\exp O(r)}(h+1)\, ; \end{array}\right. \end{equation} with the provision $\deg 0 = h(0)= -\infty$ this is true also if $t=0$. Combining this with Lemma \ref{le:3.4}, we obtain \[ \deg\mathcal{H} \leq (2D-1)d_0 +\deg f \leq V^{\exp O(r+s)}. \] By Lemma \ref{le:5.3} there exists ${\bf u}\in\mathbb{Z}^q$ with \begin{equation}\label{7.5} \mathcal{H} ({\bf u} )\not =0,\ \ |{\bf u} |\leq V^{\exp O(r+s)}. \end{equation} We proceed further with this ${\bf u}$. As we have seen before, $\gamma_{{\bf v}}\in B^*$ for ${\bf v}\in\mathcal{V}$. By our choice of ${\bf u}$, there are $D$ distinct specialization maps $\irS_{{\bf u} ,j}$ ($j=1,\ldots , D)$ from $B$ to $\overline{\mathbb{Q}}$.
We fix one of these specializations, say $\irS_{{\bf u}}$. Given $\alpha\in B$, we write $\alpha ({\bf u} )$ for $\irS_{{\bf u}}(\alpha )$. As the elements $\gamma_{{\bf v}}$ are all units in $B$, their images under $\irS_{{\bf u}}$ are non-zero. So we have \begin{equation}\label{7.5a} \prod_{i=0}^s \gamma_i({\bf u} )^{v_i}\not=1\ \ \mbox{for } {\bf v}\in\mathcal{V}, \end{equation} where $\mathcal{V}$ is defined by \eqref{7.3a}. We use Lemma \ref{le:5.5} to estimate the heights $h(\gamma_i({\bf u} ))$ for $i=0,\ldots , s$. Recall that by Lemma \ref{le:3.4} we have \[ \overline{{\rm deg}}\, \gamma_i\leq (2d)^{\exp O(r)},\ \ \overline{h} (\gamma_i)\leq (2d)^{\exp O(r)}(h+1) \] for $i=0,\ldots , s$. By inserting these bounds, together with the bound $D\leq d^t$ from Lemma \ref{le:3.1}, those for $d_0,h_0$ from \eqref{7.4} and that for ${\bf u}$ from \eqref{7.5} into the bound from Lemma \ref{le:5.5}, we obtain for $i=0,\ldots , s$, \begin{eqnarray}\label{7.6} h(\gamma_i({\bf u} ))&\leq& (2d)^{\exp O(r)}(1+h+\log\max (1,|{\bf u} |)) \\[0.15cm] \nonumber &\leq& (2d)^{\exp O(r+s)}(1+h+\log V). \end{eqnarray} Assume that among $\gamma_0({\bf u} ),\ldots , \gamma_s({\bf u} )$ there are $s$ numbers which are multiplicatively dependent. By Lemma \ref{le:7.1} there are integers $k_0,\ldots , k_s$, at least one of which is non-zero and at least one of which is $0$, such that \begin{eqnarray*} &&\prod_{i=0}^s \gamma_i({\bf u} )^{k_i}=1, \\ &&|k_i|\leq (2d)^{\exp O(r+s)}(1+h+\log V)^{s-1}\ \ \mbox{for } i=0,\ldots , s. \end{eqnarray*} Now for \begin{equation}\label{7.7} V=(2d)^{\exp O(r+s)}(h+1)^{s-1} \end{equation} (with a sufficiently large constant in the O-symbol), the upper bound for the numbers $|k_i|$ is smaller than $V$. But this would imply that $\prod_{i=0}^s \gamma_i({\bf u} )^{v_i}=1$ for some ${\bf v}\in\mathcal{V}$, contrary to \eqref{7.5a}. Thus we conclude that with the choice \eqref{7.7} for $V$, there exists ${\bf u}\in\mathbb{Z}^q$ with \eqref{7.5}, such that any $s$ numbers among $\gamma_0({\bf u} ),\ldots ,\gamma_s ({\bf u} )$ are multiplicatively independent. Of course, the numbers $\gamma_0({\bf u} ),\ldots ,\gamma_s({\bf u} )$ are multiplicatively dependent, since they are the images under $\irS_{{\bf u}}$ of $\gamma_0,\ldots ,\gamma_s$ which are multiplicatively dependent. Substituting \eqref{7.7} into \eqref{7.6} we obtain \begin{equation}\label{7.7a} h(\gamma_i({\bf u} ))\leq (2d)^{\exp O(r+s)}(h+1)\ \ \mbox{for } i=0,\ldots , s. \end{equation} Now Lemma \ref{le:7.1} implies that there are non-zero integers $k_0,\ldots , k_s$ such that \begin{eqnarray}\label{7.8} &&\prod_{i=0}^s \gamma_i({\bf u} )^{k_i}=1, \\[0.15cm] \label{7.9} &&|k_i|\leq (2d)^{\exp O(r+s)}(h+1)^s\ \ \mbox{for } i=0,\ldots , s. \end{eqnarray} Our assumption on $\gamma_0,\ldots , \gamma_s$ implies that there are non-zero integers $l_0,\ldots , l_s$ such that $\prod_{i=0}^s \gamma_i^{l_i}=1$. Hence $\prod_{i=0}^s \gamma_i({\bf u} )^{l_i}=1$. Together with \eqref{7.8} this implies \[ \prod_{i=1}^s \gamma_i({\bf u} )^{l_0k_i-l_ik_0}=1. \] But $\gamma_1({\bf u} ),\ldots , \gamma_s({\bf u} )$ are multiplicatively independent, hence $l_0k_i-l_ik_0=0$ for $i=1,\ldots , s$. That is, \[ l_0(k_0,\ldots , k_s)=k_0(l_0,\ldots , l_s). \] It follows that \[ \prod_{i=0}^s \gamma_i^{k_i}=\rho \] for some root of unity $\rho$: indeed, $\rho^{l_0}=\prod_{i=0}^s \gamma_i^{l_0k_i}=\big(\prod_{i=0}^s \gamma_i^{l_i}\big)^{k_0}=1$. But $\irS_{{\bf u}}(\rho )=1$ and it is conjugate to $\rho$. Hence $\rho =1$. So in fact we have $\prod_{i=0}^s \gamma_i^{k_i}=1$ with non-zero integers $k_i$ satisfying \eqref{7.9}.
This proves our lemma under the assumption $q>0$. If $q=0$ then a much simpler argument, without specializations, gives $h(\gamma_i)\leq (2d)^{\exp O(r+s)}(h+1)$ for $i=0,\ldots , s$ instead of \eqref{7.7a}. Then the proof is finished in the same way as in the case $q>0$. \end{proof} \begin{corollary}\label{co:7.3} Let $\gamma_0,\gamma_1,\ldots ,\gamma_s\in K^*$, and suppose that $\gamma_1,\ldots ,\gamma_s$ are multiplicatively independent and \[ \gamma_0=\gamma_1^{k_1}\cdots\gamma_s^{k_s} \] for certain integers $k_1,\ldots , k_s$. Then \[ |k_i|\leq (2d)^{\exp O(r+s)}(h+1)^s\ \ \ \mbox{for $i=1,\ldots , s$.} \] \end{corollary} \begin{proof} By Lemma \ref{le:7.2}, and by the multiplicative independence of $\gamma_1,\ldots ,\gamma_s$, there are integers $l_0,\ldots , l_s$ such that \begin{eqnarray*} &&\prod_{i=0}^s \gamma_i^{l_i}=1, \\[0.15cm] &&l_0\not= 0,\ \ |l_i|\leq (2d)^{\exp O(r+s)}(h+1)^s\ \mbox{for $i=0,\ldots , s$.} \end{eqnarray*} Now clearly, substituting $\gamma_0=\gamma_1^{k_1}\cdots\gamma_s^{k_s}$, we have also \[ \prod_{i=1}^s \gamma_i^{l_0k_i+l_i}=1, \] hence $l_0k_i+l_i=0$ for $i=1,\ldots , s$. It follows that $|k_i|=|l_i/l_0|\leq (2d)^{\exp O(r+s)}(h+1)^s$ for $i=1,\ldots , s$. This implies our Corollary. \end{proof} \begin{proof}[Proof of Theorem \ref{th:1.3}] We keep the notation and assumptions from the statement of Theorem \ref{th:1.3}. Define the domain \[ \widetilde{A}:= A[\gamma_1,\gamma_1^{-1},\ldots , \gamma_s,\gamma_s^{-1}]. \] Then \[ \widetilde{A}\cong \mathbb{Z} [X_1,\ldots , X_r,X_{r+1},\ldots , X_{r+2s}]/\widetilde{I} \] with \begin{eqnarray*} \widetilde{I}&=&\Big(f_1,\ldots , f_m,g_{12}X_{r+1}-g_{11},g_{11}X_{r+2}-g_{12},\ldots \\ &&\hspace*{2cm} \ldots , g_{s2}X_{r+2s-1}-g_{s1},g_{s1}X_{r+2s}-g_{s2}\Big). \end{eqnarray*} Let $(v_1,\ldots , v_s,w_1,\ldots , w_s)$ be a solution of \eqref{1.4}, and put $\varepsilon :=\prod_{i=1}^s \gamma_i^{v_i}$, $\eta := \prod_{i=1}^s \gamma_i^{w_i}$. Then \[ a\varepsilon +b\eta =c,\ \ \ \varepsilon ,\eta\in \widetilde{A}^* . \] By Theorem \ref{th:1.1}, $\varepsilon$ has a representative $\widetilde{\varepsilon}\in\mathbb{Z} [X_1,\ldots , X_{r+2s}]$ of degree and logarithmic height both bounded above by \[ \exp\Big( (2d)^{\exp O(r+s)}(h+1)\Big). \] Now Corollary \ref{co:7.3} implies \[ |v_i|\leq \exp\Big( (2d)^{\exp O(r+s)}(h+1)\Big )\ \ \mbox{for } i=1,\ldots , s. \] For $|w_i|$ ($i=1,\ldots , s$) we derive a similar upper bound. This completes the proof of Theorem \ref{th:1.3}. \end{proof}
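\paragraph{Example.} The following trivial instance, which is not used anywhere above, illustrates Corollary \ref{co:7.3} in the simplest case $A=\mathbb{Z}$, $K=\mathbb{Q}$: the numbers $\gamma_1=2$ and $\gamma_2=3$ are multiplicatively independent, and $\gamma_0=12$ satisfies \[ \gamma_0=\gamma_1^{2}\gamma_2^{1}, \] where, by unique factorization, the exponents $k_1=2$, $k_2=1$ are uniquely determined; they are of course far below the general bound $(2d)^{\exp O(r+s)}(h+1)^s$, which must also cover domains without unique factorization.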
\section{Benchmarks} \label{sec:benchmarks} In this section, we describe the benchmark library for SYNTCOMP\xspace 2017. We start by describing two new benchmark classes in TLSF, followed by a listing of the classes of benchmarks (in both TLSF and AIGER format) that have already been used in previous competitions. For more details on these existing benchmarks, we refer to the previous competition reports~\cite{SYNTCOMP14,SYNTCOMP15,SYNTCOMP16}. \subsection{New Benchmark Set: Decomposed AMBA} \label{sec:benchmarks-decomposed-amba} This set of benchmarks has first been presented as an example in the presentation of TLSF~\cite{JacobsK16}. It describes the well-known AMBA bus controller, decomposed into eight components that can be synthesized independently. Out of these eight components, three are parameterized in the number of systems that access the shared bus, providing more challenging synthesis problems for larger parameter values. These benchmarks have been translated to TLSF by F. Klein. \subsection{New Benchmark Set: Unrealizable Variants} \label{sec:benchmarks-unreal-variants} This set of benchmarks is based on a number of existing TLSF benchmark classes in the SYNTCOMP\xspace library: detector, full arbiter, load balancer, prioritized arbiter, round robin arbiter, simple arbiter (described below). All of the original benchmarks are parameterized in the number of systems that can send requests to the synthesized component. Moreover, all of them include a mutual exclusion property, and in this benchmark set have been modified in one of two ways to obtain unrealizable variants of the respective specification: \begin{enumerate} \item We add a requirement that the system has to serve multiple requests within a fixed number of steps, resulting in unrealizability due to a clash of mutual exclusion and the time limit on serving the requests. \item We add a requirement that forces the system to violate mutual exclusion not after a fixed time, but at some undetermined time in the future. \end{enumerate} \subsection{Existing Benchmarks: TLSF} \label{sec:benchmarks-tlsf} In addition to the new benchmarks described above, we briefly describe existing TLSF benchmarks. For more details consult the report on SYNTCOMP\xspace 2016~\cite{SYNTCOMP16} and the original sources. The existing benchmark library consists of the following classes of benchmarks: \begin{itemize} \item \textbf{Lily benchmark set:} the set of benchmarks originally included with the LTL synthesis tool \textsc{Lily}~\cite{JobstmannB06}. It includes $24$ benchmarks. \item \textbf{Acacia benchmark set:} the set of benchmarks originally included with the LTL synthesis tool Acacia+~\cite{bbfjr12}. It includes $65$ benchmarks. \item \textbf{Parameterized detector:} specifies a component that raises its single output infinitely often if and only if all its inputs are raised infinitely often. Parameterized in the number of inputs. \item \textbf{Parameterized arbiters:} four arbiter specifications of different complexity (simple arbiter, prioritized arbiter, round robin arbiter, full arbiter). Parameterized in the number of masters that the arbiter needs to serve. \item \textbf{Parameterized AMBA bus controller:} essentially an arbiter with a large number of features, including prioritization and locking of the bus for a fixed or arbitrary number of steps~\cite{Jobstmann07b}. Parameterized in the number of masters that the controller has to serve. 
\item \textbf{Parameterized load balancer:} a component that receives jobs and distributes them to a fixed number of servers~\cite{Ehlers12}. Parameterized in the number of servers that can handle the jobs. \item \textbf{Parameterized generalized buffer:} a family of buffers that transmit data from a number of senders to two receivers, based on a handshake protocol and a FIFO queue that is used to store data~\cite{Jobstmann07b}. The benchmark is parameterized in the number of senders. \item \textbf{Parameterized LTL to B\"uchi translation (LTL2DBA):} generation of deterministic B\"uchi automata that correspond to a specification taken from a set of parameterized LTL formulas~\cite{TianSDD15}. \end{itemize} \subsection{Existing Benchmarks: AIGER} \label{sec:benchmarks-aiger} We briefly describe the existing library of AIGER benchmarks. For more details, consult the previous competition reports~\cite{SYNTCOMP14,SYNTCOMP15,SYNTCOMP16} and the original sources. \begin{itemize} \item \textbf{HWMCC benchmarks:} based on a subset of the benchmarks from HWMCC 2012 and HWMCC 2014~\cite{HWMCC14}, where a subset of the inputs has been declared as controllable, and a safe controller for these inputs should be synthesized. This benchmark set contains $390$ benchmarks. \item \textbf{(Bounded) LTL to B\"uchi and LTL to parity translation (LTL2DBA/LTL2DPA):} based on the LTL2DBA benchmarks from Section~\ref{sec:benchmarks-tlsf}, but including also the synthesis of parity automata, and additionally parameterized in the liveness-to-safety approximation. $62$ instances. \item \textbf{Toy Examples:} a number of basic building blocks of circuits, such as an adder, a bitshifter, a counter, and a multiplier. The set consists of $176$ problem instances. \item \textbf{AMBA:} a version of the bus controller specification for AMBA~\cite{Jobstmann07b}, parameterized in three dimensions (number of masters, type and precision of the liveness-to-safety approximation). The benchmark set contains $952$ instances. \item \textbf{Genbuf:} a version of the generalized buffer specification~\cite{Jobstmann07b}, parameterized in the same way as the AMBA benchmarks. The set contains $866$ instances. \item \textbf{LTL2AIG:} several sets of benchmarks that are based on the benchmark set of the synthesis tool Acacia+~\cite{bbfjr12}, translated using the LTL2AIG tool~\cite{SYNTCOMP14}. This includes versions of the Lily, generalized buffer, and load balancer benchmarks mentioned above. $197$ problem instances. \item \textbf{Factory Assembly Line:} a controller for two robot arms on an assembly line. This set contains $15$ problem instances. \item \textbf{Moving Obstacle Evasion:} a moving robot that should evade a moving obstacle in two-dimensional space. The set consists of $16$ problem instances. \item \textbf{Washing Cycle Scheduler:} a controller of a washing system, with water tanks that share pipes. Parameterized in the number of tanks, the maximum reaction delay, and the shared water pipes. The set contains $321$ instances. \item \textbf{Driver Synthesis:} specifies a driver for a hard disk controller with respect to a given operating system model~\cite{RyzhykWKLRSV14}. Parameterized in the level of data abstraction, the precision of the liveness-to-safety approximation, and the simplification of the specification circuit by \textsf{ABC}\xspace~\cite{abc}. $72$ instances. \item \textbf{Huffman Encoder:} specifies a given Huffman decoder, for which a suitable encoder should be synthesized~\cite{Khalimov15}.
Parameterized in the liveness-to-safety approximation, resulting in $5$ instances. \item \textbf{HyperLTL:} based on benchmark problems from HyperLTL model checking~\cite{FinkbeinerRS15}. The goal is to synthesize a witness for a given HyperLTL property. This benchmark set contains $21$ instances. \item \textbf{Matrix Multiplication:} asks for a circuit that performs a single matrix multiplication, or repeated multiplication with a subset of controllable inputs and an additional safety goal. Parameterized in the size of the input matrices, resulting in $354$ problem instances. \end{itemize} \section{Conclusions} \label{sec:conclusions} SYNTCOMP\xspace 2017 consolidated the changes made last year, most importantly the introduction of the track for LTL specifications in the temporal logic synthesis format (TLSF). This year, two completely new tools have been entered into the competition: BoWSer\xspace and \texttt{ltlsynt}\xspace. Furthermore, four tools have received (sometimes major) updates, and four tools have been re-entered in the same version as last year. The only major change to the rules is the re-introduction of a quality ranking for the synthesis tracks. In the AIGER/safety tracks, we had rather small changes to the tools and the benchmark set, and this is reflected in results that are similar, but not identical, to last year's: Simple BDD Solver\xspace (abs1) again wins the sequential realizability mode, and the parallel realizability mode this year goes to TermiteSAT\xspace (hybrid), which was a close second to AbsSynthe\xspace last year. In the synthesis track, SafetySynth\xspace (basic) and AbsSynthe\xspace (PC1) again solve most problems in the sequential and parallel mode, respectively. In the quality ranking, SafetySynth\xspace (basic) is also the best configuration in sequential mode, and \textsf{Demiurge}\xspace (P3synt) is the best in parallel mode. In the TLSF/LTL tracks, we had significant changes to both the tools and the benchmark set, including two new tools and a large number of new benchmarks. Consequently, the results look quite different from last year's. In fact, all of the winners are tools or configurations that did not participate last year: in sequential realizability and sequential synthesis, the new configuration (aiger) of \textsc{Party}\xspace solves most problems, followed by the new tool \texttt{ltlsynt}\xspace. In parallel realizability and synthesis, the new configuration \textsc{Party}\xspace (portfolio) solves most problems. Finally, in the quality ranking, the new configuration BoSy\xspace (spot) has the highest accumulated quality in sequential mode, and \textsc{Party}\xspace (portfolio) wins the parallel mode. { \small \myparagraph{Acknowledgments} The organization of SYNTCOMP\xspace 2017 was supported by the Austrian Science Fund (FWF) through project RiSE (S11406-N23) and by the German Research Foundation (DFG) through project ``Automatic Synthesis of Distributed and Parameterized Systems'' (JA 2357/2-1), and its setup and execution by the European Research Council (ERC) Grant OSARES (No.~683300). The development of AbsSynthe\xspace and Acacia4Aiger\xspace was supported by F.R.S.-FNRS and FWA fellowships, and the ERC inVEST (279499) project. The development of SafetySynth\xspace and BoSy\xspace was supported by the ERC Grant OSARES (No.~683300). } \section{Setup, Rules and Execution} \label{sec:setup} We give an overview of the setup, rules and execution of SYNTCOMP\xspace 2017.
More details, and the reasoning behind different design choices, can be found in the first competition report~\cite{SYNTCOMP14} and previous work in which we also outlined plans for future extensions of the competition~\cite{JacobsB16}. \subsection{General Rules} \label{sec:rules} Like in the previous year, there are two main tracks: one is based on safety specifications in AIGER format (in the following: AIGER/safety-track), and the other on full LTL specifications in TLSF (in the following: TLSF/LTL-track). The tracks are divided into subtracks for \emph{realizability checking} and \emph{synthesis}, and into two execution modes: \emph{sequential} (using a single core of the CPU) and \emph{parallel} (using up to $4$ cores). In the following, we start with rules that are common to both tracks, followed by rules that are specific to one of the tracks. \paragraph{Submissions.} Tools are submitted as source code, with instructions for installation and a description of the algorithms and optimizations used to solve the synthesis problem. Every tool can run in up to three configurations per subtrack and execution mode. After the initial submission, every tool is tested on a small set of benchmarks from the SYNTCOMP\xspace library, and authors are informed about any problems and can submit bugfixes.\footnote{Besides revealing bugs or shortcomings in the participating tools themselves, this year these tests have also revealed a number of problems in tools that are used as subprocedures. All of these problems have subsequently been addressed, thus providing an additional benefit of the competition to the broader community of formal methods research.} \paragraph{Ranking Schemes.} In all tracks, there is a ranking based on the number of correctly solved problems: a correct answer within the timeout of $3600$s is rewarded with one point for the solver, and a wrong answer is punished by subtracting $4$ points. In the realizability tracks, correctness is determined by the realizability information stored in the files, if they have been used in previous competitions, or by a majority vote of the tools that solve the benchmark, otherwise. In the synthesis tracks, if the specification is realizable, then the solution has to be model checked. This differs based on the input format, as explained below. Furthermore, in synthesis tracks there is a ranking based on the \emph{quality} of the solution, measured by the number of gates in the produced AIGER circuit. To this end, the size $s$ of the solution is compared to the size $\mathit{ref}$ of a reference solution, which is either the smallest solution obtained by competition tools (during competition runs or special reference runs), or, if no previous solution exists, the smallest solution obtained by a tool in the current competition. The number of points obtained for a correct solution decreases logarithmically in the ratio of $s$ and $\mathit{ref}$, i.e., a correct answer (within the timebound) is rewarded with $$2 - \log_{10}\left(\frac{s+1}{\mathit{ref}+1}\right)$$ points. Roughly, this means that for a solution with the same size as the best known solution, $2$ points are awarded. If the new solution is $10$ times bigger, $1$ point is awarded. If it is $10$ times smaller, $3$ points are awarded. A solution that is more than $100$ times bigger than $\mathit{ref}$ is awarded $0$ points.
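For concreteness, the scoring of a single correct solution can be computed as in the following Python sketch (an illustration of the formula above; the function name and the clamping at $0$ points are our reading of the rules, not code used in the competition):
\begin{verbatim}
import math

def quality_points(s, ref):
    # s:   number of gates in the produced solution
    # ref: number of gates in the smallest known (reference) solution
    # The +1 offsets are explained in the note below.
    return max(0.0, 2.0 - math.log10((s + 1) / (ref + 1)))

# quality_points(ref, ref)      -> 2.0
# quality_points(10 * ref, ref) -> roughly 1.0
# quality_points(ref, 10 * ref) -> roughly 3.0
\end{verbatim}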
Note that since some synthesis problems can be solved without using a single gate, we cannot use the ratio $\frac{s}{\mathit{ref}}$, and use $\frac{s+1}{\mathit{ref}+1}$ instead. \subsection{Specific Rules for AIGER/safety-Track} \paragraph{Input Format.} In the AIGER/safety-track, specifications are given in the Extended AIGER Format for Synthesis~\cite{SYNTCOMP-format,SYNTCOMP14}, modeling a single safety property. \paragraph{Correctness of Solutions.} In the synthesis subtrack, if the specification is realizable then tools must produce a solution in AIGER format. For \emph{syntactical correctness}, this solution must include the specification circuit, and must define how the inputs that are declared as \texttt{controllable} are computed from the \texttt{uncontrollable} inputs and the state variables of the circuit (for details, see the SYNTCOMP\xspace 2014 report~\cite{SYNTCOMP14}). To also ensure \emph{semantical correctness}, the solutions are additionally model checked, and only solutions that are both syntactically and semantically verified are counted as correct. To facilitate verification, synthesis tools can optionally output an inductive invariant that witnesses the correctness of the solution, e.g., the winning region resulting from the attractor computation. Such an invariant is used \emph{in addition} to model checking, i.e., if the invariant check is inconclusive we fall back to full model checking. \subsection{Specific Rules for TLSF/LTL-Track} \paragraph{Input Format.} In the TLSF/LTL-track, specifications are given in TLSF~\cite{JacobsK16}. The organizers supply the \emph{synthesis format conversion} (SyFCo) tool\footnote{SyFCo is available at \url{https://github.com/reactive-systems/syfco}. Accessed August 2017.} that can be used to translate the specification to a large number of existing specification formats. Specifications are interpreted according to standard LTL semantics, with realizability defined with respect to implementations as Mealy machines. \paragraph{Correctness of Solutions.} In the synthesis subtrack, tools have to produce a solution in AIGER format if the specification is realizable. For \emph{syntactical correctness}, the sets of inputs and outputs of the specification must be identical to the sets of inputs and outputs of the solution. To verify \emph{semantical correctness}, the solutions are additionally model checked against the specification with existing model checking tools. Only a solution that can be verified both syntactically and semantically is counted as correct. \subsection{Selection of Benchmarks} \label{sec:selection} \paragraph{AIGER/safety-track.} In the AIGER-based track, the selection of benchmarks is based on information about the realizability and difficulty of benchmark problems that has been obtained from the results of previous competitions. This information is stored inside the benchmark files, as described in the SYNTCOMP\xspace 2015 report~\cite{SYNTCOMP15}. For realizable specifications, we additionally determined the smallest known solution, stored as a \emph{reference size}. In SYNTCOMP\xspace 2017, we used the same benchmark classes and the same number of problems per class as in the previous year, but we randomly exchanged some of the problems within any given class (for classes that contain more problems than are selected for the competition), while preserving an even distribution of difficulty over the given class. The number of selected problems from each category (cp.
Section~\ref{sec:benchmarks-aiger}) is given in Table~\ref{tab:selected-benchmarks}. \begin{table}[h] \caption{Number of selected Benchmarks per Category, AIGER/safety-track} \label{tab:selected-benchmarks} \centering \def\arraystretch{1.2} \begin{tabular}{ll|ll} Category & Benchmarks & Category & Benchmarks\\ \hline AMBA & 16 & Genbuf (LTL2AIG) & 8\\ (Washing) Cycle Scheduling & 16 & Add (Toy Examples)& 5\\ Demo (LTL2AIG)& 16 & Count (Toy Examples)& 5\\ Driver Synthesis & 16 & Bitshift (Toy Examples)& 5\\ Factory Assembly Line & 15 & Mult (Toy Examples)& 5\\ Genbuf & 16 & Mv/Mvs (Toy Examples)& 5\\ HWMCC & 16 & Stay (Toy Examples)& 5\\ HyperLTL & 16 & Huffman Encoder & 5\\ Load Balancer (LTL2AIG)& 16\\ LTL2DBA/LTL2DPA & 16\\ Moving Obstacle & 16 & \\ Matrix Multiplication & 16 & {\bf Total:} & {\bf 234}\\ \end{tabular} \end{table} \paragraph{TLSF/LTL-track.} In the TLSF-based track, for realizability checking we used all $24$ of the non-parameterized benchmarks from the Lily benchmark set, and $64$ from the Acacia benchmark set. Additionally, we used $6$ instances of each of the parameterized benchmarks. Overall, this amounts to $244$ problem instances. \subsection{Execution} \label{sec:execution} Like in the last two years, SYNTCOMP\xspace 2017 was run at Saarland University, on a set of identical machines with a single quad-core Intel Xeon processor (E3-1271 v3, 3.6GHz) and 32 GB RAM (PC1600, ECC), running a GNU/Linux system. Each node has a local 480 GB SSD that can be used as temporary storage. Also like in previous years, the competition was organized on the EDACC platform~\cite{BalintDGGKR11}, with a very similar setup. To ensure a high comparability and reproducibility of our results, a complete machine was reserved for each job, i.e., one synthesis tool (configuration) running one benchmark. Olivier Roussel's \texttt{runsolver}~\cite{Roussel11} was used to run each job and to measure CPU time and wall time, as well as enforcing timeouts. As all nodes are identical and no other tasks were run in parallel, no limits other than a timeout of $3600$ seconds (CPU time in sequential mode, wall time in parallel mode) per benchmark were set. Like last year, we used wrapper scripts to execute solvers that did not conform completely with the output format specified by the competition, e.g., to filter extra information that was displayed in addition to the specified output. The model checker used for checking correctness of solutions for the AIGER/Safety track is IIMC\footnote{IIMC is available at \url{ftp://vlsi.colorado.edu/pub/iimc/}. Accessed August 2017.} in version 2.0. For solvers that supply an inductive invariant as a witness of correctness, we used a BDD-based invariant check\footnote{Available from the SYNTCOMP\xspace repository at \url{https://bitbucket.org/swenjacobs/syntcomp/src}, in subdirectory \texttt{tools/WinRegCheck}. Accessed August 2017.}, and used full model-checking as a fallback solution if the invariant check failed. For the TLSF/LTL track, the model checker used was V3\footnote{V3 is available at \url{https://github.com/chengyinwu/V3}. Accessed August 2017.}~\cite{WuWLH14}. \section{Introduction} \label{sec:intro} The reactive synthesis competition (SYNTCOMP\xspace) was founded in 2014~\cite{SYNTCOMP14} with the goal of increasing the impact of theoretical advancements in the automatic synthesis of reactive systems.
Reactive synthesis has been one of the major challenges of computer science since the definition of the problem more than 50 years ago~\cite{Church62}. A large body of theoretical results has been developed since then, but its impact on the practice of system design has been rather limited. SYNTCOMP\xspace is designed to foster research in scalable and user-friendly implementations of synthesis techniques by establishing a standard benchmark format, maintaining a challenging public benchmark library, and providing a \emph{dedicated and independent} platform for the comparison of tools under consistent experimental conditions. Since its inception, SYNTCOMP\xspace has been held annually, and is associated with the International Conference on Computer Aided Verification (CAV) and the Workshop on Synthesis (SYNT), where the competition results are presented to the community~\cite{SYNTCOMP14,SYNTCOMP15,SYNTCOMP16}. A design choice for the first two competitions was to focus on safety properties specified as monitor circuits in an extension of the AIGER format known from the hardware model checking competition~\cite{HWMCC14,SYNTCOMP-format}. SYNTCOMP\xspace 2016 introduced the first major extension of the competition by adding a new track that is based on properties in full linear temporal logic (LTL), given in the \emph{temporal logic synthesis format} (TLSF)~\cite{JacobsB16,JacobsK16}. The organization team of SYNTCOMP\xspace 2017 consisted of R. Bloem and S. Jacobs. \paragraph{Outline.} The rest of this paper describes the design, benchmarks, participants, and results of SYNTCOMP\xspace 2017. In Section~\ref{sec:benchmarks}, we present two new benchmark classes that have been added to the SYNTCOMP\xspace library and give an overview of the benchmark set for SYNTCOMP\xspace 2017. In Section~\ref{sec:setup}, we describe the setup, rules and execution of the competition. In Section~\ref{sec:participants} we give an overview of the participants of SYNTCOMP\xspace 2017, focusing on changes compared to last year's participants. The experimental results are presented and analyzed in Section~\ref{sec:results}, before we end with some concluding remarks in Section~\ref{sec:conclusions}. \section{Participants} \label{sec:participants} Overall, ten tools were entered into SYNTCOMP\xspace 2017: five in the AIGER/safety-track, and five in the TLSF/LTL-track. We briefly describe the participants and give pointers to additional information. \subsection{AIGER/safety-Track} This track had five participants in 2017, which we briefly describe in the following. For additional details on the implemented techniques and optimizations, we refer to the previous SYNTCOMP\xspace reports~\cite{SYNTCOMP14,SYNTCOMP15,SYNTCOMP16}. \subsubsection{Updated Tool: Swiss AbsSynthe\xspace v2.1} AbsSynthe\xspace was submitted by R. Brenguier, G. A. P\'erez, J.-F. Raskin, and O. Sankur, and competed in both the realizability and the synthesis track. It implements the classical backward-fixpoint-based approach to solving safety games using BDDs. As additional features, it supports decomposition of the problem into independent sub-games, as well as an abstraction approach~\cite{BrenguierPRS14,BrenguierPRS15}. This year, AbsSynthe\xspace contains a new approach to compositionality, where the problem is not separated into as many sub-games as possible, but rather merges some of the smaller sub-games in the hope that this will yield more useful information about the overall problem.
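As background for the configurations listed below, the core of this classical backward fixpoint can be summarized by the following schematic Python sketch (ours, for illustration only; AbsSynthe\xspace represents all state sets as BDDs, and \texttt{upre} stands for the uncontrollable-predecessor operation):
\begin{verbatim}
def solve_safety_game(initial, error_states, upre):
    # upre(S): states from which the environment can force the play
    # into S in one step, for every choice of the controllable inputs.
    losing = frozenset(error_states)
    while True:
        new_losing = losing | upre(losing)
        if new_losing == losing:          # fixpoint reached
            return initial not in losing  # realizable iff initial state wins
        losing = new_losing
\end{verbatim}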
It competes in the following sequential (SCx) and parallel (PCx) configurations: \begin{itemize} \item (SC1) uses a standard BDD-based fixpoint computation with several optimizations, but without compositionality or abstraction, \item (SC2) uses an abstraction algorithm, but no compositionality, and \item (SC3) uses a compositional algorithm, combined with an abstraction method. This is the only sequential configuration that changed this year, incorporating the new approach to compositionality mentioned above. \item (PC1) runs the three sequential configurations in parallel, plus one additional configuration that uses abstraction with fixed threshold, but no compositionality, \item (PC2) runs four copies of (SC1), only modified in the BDD reordering technique, and \item (PC3) runs four copies of (SC2), with the same set of different reordering techniques. \end{itemize} \paragraph{Implementation, Availability} The source code of AbsSynthe\xspace is available at \url{https://github.com/gaperez64/AbsSynthe/tree/native-dev-par}. \subsubsection{Re-entered: \textsf{Demiurge}\xspace 1.2.0} \textsf{Demiurge}\xspace was submitted by R. K\"onighofer and M. Seidl, and competed in both the realizability and the synthesis track. \textsf{Demiurge}\xspace implements different SAT-based methods for solving the reactive synthesis problem~\cite{BloemKS14,SeidlK14}. In the competition, three of these methods are used --- one of them as the only method in sequential configuration (D1real), and a combination of all three methods in parallel configuration (P3real). This year, \textsf{Demiurge}\xspace competed in the same version as in the last two years. \paragraph{Implementation, Availability} { \sloppy The source code of \textsf{Demiurge}\xspace is available at {\url{https://www.iaik.tugraz.at/content/research/opensource/demiurge/}} under the GNU LGPL version 3. } \subsubsection{Re-entered: SafetySynth\xspace} SafetySynth\xspace was submitted by L. Tentrup, and competed in both the realizability and the synthesis track. SafetySynth\xspace is a re-implementation of Realizer\xspace that implements the standard BDD-based algorithm for safety games, using the optimizations that were most beneficial for BDD-based tools in SYNTCOMP\xspace 2014 and 2015~\cite{SYNTCOMP14,SYNTCOMP15}. It competed in the same version as in the previous year, with configurations (basic) and (alternative) that only differ in the BDD reordering heuristic. \paragraph{Implementation, Availability} The source code of SafetySynth\xspace is available online at: \url{https://www.react.uni-saarland.de/tools/safetysynth/}. \subsubsection{Re-entered: Simple BDD Solver\xspace} Simple BDD Solver\xspace was submitted by L. Ryzhyk and A. Walker, and competed in the realizability track. Simple BDD Solver\xspace implements the standard BDD-based algorithm for safety games, including a large number of optimizations in configuration (basic). The other two configurations additionally implement an abstraction-refinement approach inspired by de Alfaro and Roy~\cite{dealfaro} in two variants: with overapproximation of the winning region in configuration (abs1), or with both over- and underapproximation in (abs2). The version entered into SYNTCOMP\xspace 2017 is the same as last year. \paragraph{Implementation, Availability} The source code of Simple BDD Solver\xspace is available online at \url{https://github.com/adamwalker/syntcomp}. \subsubsection{Re-entered: TermiteSAT\xspace} TermiteSAT\xspace was submitted by A. Legg, N. Narodytska and L.
Ryzhyk, and competed in the realizability track. TermiteSAT\xspace implements a novel SAT-based method for synthesis of safety specifications based on Craig interpolation. The only configuration in sequential mode implements this new approach, and the parallel configurations (portfolio) and (hybrid) run the new algorithm alongside one of the algorithms of Simple BDD Solver\xspace~\cite{LeggNR16}, where in (hybrid) there is even communication of intermediate results between the different algorithms. \paragraph{Implementation, Availability} The source code of TermiteSAT\xspace is available online at: \url{https://github.com/alexlegg/TermiteSAT}. \subsection{TLSF/LTL-Track} \label{sec:participants-TLSF} This track had five participants in 2017, two of which have not participated in previous competitions. All tools competed in both the realizability and the synthesis track. We describe the implemented techniques and optimizations of the new tools, followed by a brief overview of the updated tools. For additional details on the latter, we refer to the report of SYNTCOMP\xspace 2016~\cite{SYNTCOMP16}. \subsubsection{New Entrant: BoWSer\xspace} BoWSer\xspace was submitted by B. Finkbeiner and F. Klein. It implements different extensions of the bounded synthesis approach~\cite{Finkbeiner13} that solves the LTL synthesis problem by first translating the specification into a universal co-B\"uchi automaton, and then encoding acceptance of a transition system with a bounded number of states into a constraint system. In this case, the constraints are encoded into propositional logic, i.e., a SAT problem. A solution to this SAT problem, i.e., a satisfying assignment, then represents a transition system that satisfies the original specification. To check for unrealizability of a formula, the dual problem of whether there exists a winning strategy for the environment is also encoded into SAT. For the synthesis of solutions in the basic configuration, the satisfying assignment from the SAT solver is encoded into an AIGER circuit, and then handed to Yosys for simplification. As a first extension, BoWSer\xspace implements \emph{bounded cycle synthesis}~\cite{FinkbeinerK16}, which restricts the structure of the solution with respect to the number of cycles in the transition system. To this end, it additionally encodes into SAT the existence of a witness structure that guarantees that the number of cycles in the implementation is smaller than a given bound (according to the approach of Tiernan~\cite{Tiernan70}). In addition, BoWSer\xspace supports another encoding into SAT that targets AIGER circuits more directly, where the numbers of gates and latches can be bounded independently. BoWSer\xspace competed in the following configurations: \begin{itemize} \item configuration (c0) implements bounded synthesis in the basic version mentioned above, \item configuration (c1) implements bounded cycle synthesis on top of bounded synthesis, i.e., in a first step it searches for a solution with a bounded number of states, and if that exists, it additionally bounds the number of cycles, and \item configuration (c2) also implements bounded cycle synthesis on top of bounded synthesis, with the additional direct encoding into bounded AIGER circuits mentioned above. \end{itemize} In sequential mode, these configurations spawn multiple threads that are executed on a single CPU core. The parallel configurations are essentially identical, except that the threads are distributed to multiple cores.
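Schematically, the interplay of these realizability and unrealizability checks can be summarized as follows (our Python sketch with hypothetical helper functions, not BoWSer\xspace's actual code; in particular, BoWSer\xspace runs the two checks in parallel threads rather than strictly alternating them):
\begin{verbatim}
def bounded_synthesis(spec, max_bound):
    # Hypothetical helpers: to_uca translates LTL into a universal
    # co-Buechi automaton, encode_acceptance produces a SAT encoding of
    # "some transition system with n states is accepted", sat_solve
    # returns a satisfying assignment or None, and decode extracts the
    # transition system from such an assignment.
    uca = to_uca(spec)
    dual = to_uca(negate(spec))  # dual check: roles of system and
                                 # environment swapped
    for n in range(1, max_bound + 1):
        model = sat_solve(encode_acceptance(uca, n))
        if model is not None:
            return ("realizable", decode(model))
        counter = sat_solve(encode_acceptance(dual, n))
        if counter is not None:
            return ("unrealizable", decode(counter))
    return ("unknown", None)
\end{verbatim}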
\paragraph{Implementation, Availability} BoWSer\xspace is implemented in Haskell, and uses LTL3BA\footnote{LTL3BA \available{https://sourceforge.net/projects/ltl3ba/}}~\cite{ltl3ba} to convert specifications into automata, and MapleSAT\footnote{MapleSAT \available{https://sites.google.com/a/gsd.uwaterloo.ca/maplesat/}}~\cite{LiangGPC16} to solve SAT queries. For circuit generation, it uses the Yosys framework\footnote{Yosys \available{http://www.clifford.at/yosys/}}~\cite{Glaser2014}. The website of BoWSer\xspace is \url{https://www.react.uni-saarland.de/tools/bowser/}, where the source code will be made available soon. \subsubsection{New Entrant: \texttt{ltlsynt}\xspace} \texttt{ltlsynt}\xspace was submitted by M. Colange and T. Michaud and competed in a single configuration in both the sequential realizability and sequential synthesis tracks. To solve the synthesis problem, \texttt{ltlsynt}\xspace uses a translation to parity games. As a first step, the input LTL formula is translated into an $\omega$-automaton with a transition-based generalized B\"uchi acceptance condition. The resulting automata are more concise than classical state-based B\"uchi automata, which is important to make subsequent steps more efficient. Then, the automaton is simplified according to several heuristics, for example by removing non-accepting strongly connected components or redundant acceptance marks~\cite{Duret16}. To separate the actions of the environment and the controller, each transition of the obtained automaton is split into two consecutive transitions, corresponding to the uncontrollable inputs and the controllable inputs of the original transition, respectively. The (non-deterministic) split automaton is then translated into a deterministic parity automaton, which can be interpreted as a turn-based parity game that is equivalent to the original synthesis problem. Determinism is key in preserving the semantics of the synthesis problem: every action of the environment can be answered by the controller so that the resulting run satisfies the LTL specification. Here, the controller wins the parity game (recall that such games are determined~\cite{Martin75}) if and only if the original instance of the reactive synthesis problem has a solution. \texttt{ltlsynt}\xspace implements two algorithms that solve such parity games: the well-known recursive algorithm by Zielonka~\cite{Zielonka98}, and the recent quasi-polynomial time algorithm by Calude et al.~\cite{CaludeJKL017}. Note that the parity automata (and hence the parity games) produced by Spot are transition-based: priorities label transitions, not states. Again, this allows more concise automata, but required adapting both algorithms to fit this class of automata. The default algorithm of \texttt{ltlsynt}\xspace is Zielonka's, since in preliminary experiments it outperformed the algorithm of Calude et al. on the benchmarks of SYNTCOMP\xspace 2016~\cite{SYNTCOMP16}. In fact, the experiments also showed that the bottleneck of the procedure is the determinization step, and not the resolution of the parity game. A winning strategy for the controller in the parity game defines a satisfying implementation of the controller in the synthesis problem. Since parity games admit memoryless (more precisely, positional) winning strategies, such a strategy can be obtained by removing edges of the parity game, so that each controlled state has exactly one outgoing edge.
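For reference, Zielonka's recursive algorithm on a state-labelled parity game can be sketched as follows (an illustrative Python sketch of the textbook algorithm, not \texttt{ltlsynt}\xspace's actual implementation, which works with transition-based priorities and also extracts strategies; we assume every vertex has at least one successor, which holds for all subgames arising in the recursion):
\begin{verbatim}
def attractor(vertices, owner, succ, player, target):
    # Vertices from which `player` can force the play into `target`.
    attr, changed = set(target), True
    while changed:
        changed = False
        for v in vertices - attr:
            s = succ[v] & vertices
            if (owner[v] == player and s & attr) or \
               (owner[v] != player and s <= attr):
                attr.add(v)
                changed = True
    return attr

def zielonka(vertices, owner, priority, succ):
    # Returns (W0, W1); player (p % 2) wins a play iff the highest
    # priority p occurring infinitely often has parity p % 2.
    if not vertices:
        return set(), set()
    p = max(priority[v] for v in vertices)
    i = p % 2
    a = attractor(vertices, owner, succ, i,
                  {v for v in vertices if priority[v] == p})
    w = list(zielonka(vertices - a, owner, priority, succ))
    if not w[1 - i]:
        w[i] |= a   # the opponent wins nowhere: player i wins everywhere
        return w[0], w[1]
    b = attractor(vertices, owner, succ, 1 - i, w[1 - i])
    w = list(zielonka(vertices - b, owner, priority, succ))
    w[1 - i] |= b
    return w[0], w[1]
\end{verbatim}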
After reversing the splitting operation by merging consecutive transitions in this strategy, it can be straightforwardly encoded into an AIGER circuit. Binary Decision Diagrams (BDDs) are used to represent sets of atomic propositions, allowing some simplifications in the expression of outputs and latches. \texttt{ltlsynt}\xspace also uses BDDs to cache the expressions represented by AIGER variables to avoid adding redundant gates. In contrast to its competitors, \texttt{ltlsynt}\xspace does not use external tools such as \textsf{ABC}\xspace or Yosys for the encoding of solutions into AIGER. \paragraph{Implementation, Availability} \texttt{ltlsynt}\xspace is implemented in C++ and integrated into a branch of the Spot automata library~\cite{Duret16}, which is used for translation of the specification into automata, and for manipulation of automata. Spot also integrates the BDD library BuDDy and the SAT solver PicoSAT. The source code of \texttt{ltlsynt}\xspace is available in branch \texttt{tm/ltlsynt-pg} of the GIT repository of Spot at \url{https://gitlab.lrde.epita.fr/spot/spot.git}. \subsubsection{Updated Tool: Acacia4Aiger\xspace} Acacia4Aiger\xspace was submitted by R. Brenguier, G. A. P\'erez, J.-F. Raskin, and O. Sankur. It is an extension of the reactive synthesis tool Acacia+\footnote{Acacia+ \available{http://lit2.ulb.ac.be/acaciaplus/}}, which solves the reactive synthesis problem for LTL specifications by a reduction to safety games, which are then solved efficiently by symbolic algorithms based on an \emph{antichain} representation of the sets of states~\cite{bbfjr12}. For SYNTCOMP\xspace 2017, Acacia+ has been extended with a parallel mode that searches independently for a system implementation and a counter-strategy for the environment. \paragraph{Implementation, Availability} Acacia4Aiger\xspace is implemented in Python and C. It uses the AaPAL library\footnote{AaPAL \available{http://lit2.ulb.ac.be/aapal/}} for the manipulation of antichains, and the Speculoos tool\footnote{Speculoos \available{https://github.com/romainbrenguier/Speculoos}} to generate AIGER circuits. The source code of Acacia4Aiger\xspace is available online at: \url{https://github.com/gaperez64/acacia4aiger}. \subsubsection{Updated Tool: BoSy\xspace} BoSy\xspace was submitted by P. Faymonville, B. Finkbeiner and L. Tentrup, and competed in both the realizability and the synthesis track. BoSy\xspace uses the \emph{bounded synthesis} approach~\cite{Finkbeiner13} with an encoding into quantified Boolean formulas (QBF), as described by Faymonville et al.~\cite{FaymonvilleFRT17,FaymonvilleFT17}. To detect unrealizability, the existence of a bounded strategy of the environment to falsify the specification is encoded in a similar way, and checked in parallel. If no solution is found for a given bound on the size of the implementation, then the bound is increased in an exponential way. The resulting QBF formulas are first simplified, using the QBF preprocessor bloqqer\footnote{Bloqqer \available{http://fmv.jku.at/bloqqer}}. In realizability mode, the simplified formula is then directly solved, using the QBF solver RAReQS~\cite{JanotaKMC16}. In synthesis mode, BoSy\xspace uses a combination of RAReQS and the certifying QBF solver QuAbS~\cite{Tentrup16}; the certificate returned by QuAbS represents a solution to the synthesis problem. This solution is then converted into AIGER format, and further simplified using the \textsf{ABC}\xspace framework.
Two configurations of BoSy\xspace competed in SYNTCOMP\xspace 2017, differing in the translation from LTL to automata: configuration (spot) uses the Spot framework~\cite{Duret16}, while configuration (ltl3ba) uses LTL3BA~\cite{ltl3ba} for this task. Both configurations support a parallel mode, if more than one core is available. BoSy\xspace supports both Mealy and Moore semantics natively, as well as the extraction of a winning strategy for the environment in case of unrealizable specifications. \paragraph{Implementation, Availability.} BoSy\xspace is written in Swift. It uses LTL3BA or Spot\footnote{Spot \available{https://spot.lrde.epita.fr}} to convert LTL specifications into B\"uchi automata. It uses bloqqer, RAReQS\footnote{RAReQS \available{http://sat.inesc-id.pt/~mikolas/sw/areqs/}} and QuAbs\footnote{QuAbs \available{https://www.react.uni-saarland.de/tools/quabs/}} to solve QBF constraints, and \textsf{ABC}\xspace to simplify solutions. The code is available online at: \url{https://www.react.uni-saarland.de/tools/bosy/}. \subsubsection{Updated Tool: \textsc{Party}\xspace} \textsc{Party}\xspace was submitted by A. Khalimov, and competed in both the realizability and the synthesis track, with three configurations (int, bool, aiger) in sequential mode and one configuration (portfolio) in parallel mode. The tool and the basic algorithms and optimizations it implements have been described in more detail by Khalimov et al.~\cite{KhalimovJB13,KhalimovJB13a}. \textsc{Party}\xspace uses the bounded synthesis approach~\cite{Finkbeiner13} for solving the LTL synthesis problem. To detect unrealizability, it uses the standard approach of synthesizing the dualized specification, where the synthesizer searches for a strategy for the environment to falsify the specification. On a given benchmark, \textsc{Party}\xspace starts two threads to check realizability and unrealizability. The check for unrealizability is limited to $1200$ seconds, giving more resources to the realizability check. \textsc{Party}\xspace competed in four configurations: \begin{itemize} \item \textsc{Party}\xspace (int) is a re-entry from the previous year, with minor updates. In this configuration, the synthesis problem is encoded into SMT satisfiability as follows. The LTL formula is first translated into a universal co-B\"uchi automaton (UCA) using Spot~\cite{Duret16}. Then the emptiness of the product of the automaton and an unknown fixed-size system is encoded into SMT satisfiability. If the SMT query (in logic QF\_UFLRA) is satisfiable, then \textsc{Party}\xspace (int) extracts the state machine, encodes it into Verilog, and translates the Verilog file into AIGER using Yosys. If the SMT query is unsatisfiable, it increases the system size and repeats the previous steps. \item In \textsc{Party}\xspace (bool) a given LTL formula is translated into a UCA, similar to the approach of \textsc{Party}\xspace (int). Then, in contrast, \textsc{Party}\xspace (bool) translates the UCA into a \emph{$k$-liveness automaton}, where the number of visits to any final state of the original UCA is limited by a heuristically chosen bound. That is, such an automaton approximates liveness properties up to some bound. The rest is as before; we only note that the resulting SMT query is in the simpler logic QF\_UF. \item \textsc{Party}\xspace (aiger) is a new entrant this year. Similarly to the two previous configurations, it follows the bounded synthesis approach~\cite{Finkbeiner13}, but in its game-based variant, not the SMT-based one.
First, a given LTL formula is translated into a $k$-liveness automaton as before. Since the result is a safety automaton, it can be determinized, and translated into a safety game that is then encoded in Verilog. (We use the basic subset construction for determinization, where for each automaton state we introduce one memory latch.) Such a Verilog circuit is then converted into AIGER using Yosys (this translation takes \emph{a lot} of time, likely due to optimizations that Yosys applies). The resulting AIGER synthesis problem is solved with the SDF solver that participated in SYNTCOMP 2016~\cite{SYNTCOMP16}. SDF was slightly modified to produce AIGER circuits adhering to the TLSF track requirements. If SDF does not find a solution, we increase the parameter $k$ and repeat. \item \textsc{Party}\xspace (portfolio) runs $5$ solvers in parallel: (1) \textsc{Party}\xspace (aiger), (2) \textsc{Party}\xspace (aiger) with formula strengthening, (3) \textsc{Party}\xspace (aiger) on dualized specifications, (4) \textsc{Party}\xspace (int) with system sizes from $1$ to $8$ (motivation: specifications requiring more than $8$ states are unsolvable anyway), and (5) \textsc{Party}\xspace (bool) with a fixed system size of $16$ (motivation: solving many unsatisfiable queries for small system sizes takes significant time). \textsc{Party}\xspace (portfolio) reports the first solution that any of the solvers finds. The choice of these five solvers was based on the known strengths and weaknesses of the individual algorithms, and not on an analysis of their performance (individually or as portfolio) on the SYNTCOMP benchmark set. \end{itemize} \paragraph{Implementation, Availability.} \textsc{Party}\xspace is written in Python. It uses Spot to convert LTL specifications into B\"uchi automata, Z3\footnote{Z3 \available{https://github.com/Z3Prover/z3}}~\cite{Moura08} to solve SMT constraints, and Yosys to translate Verilog into AIGER. The code is available online at: \url{https://github.com/5nizza/party-elli}, branch syntcomp17. \section{Experimental Results} \label{sec:results} We present the results of SYNTCOMP\xspace 2017, separated into the AIGER/safety-track and the TLSF/LTL-track. Both main tracks are separated into realizability and synthesis subtracks, and parallel and sequential execution modes. Detailed results of the competition are also directly accessible via the web-frontend of our instance of the EDACC platform at \url{http://syntcomp.cs.uni-saarland.de}. \subsection{AIGER/safety-Track: Realizability} In the track for realizability checking of safety specifications in AIGER format, $5$ tools competed on $234$ benchmark instances, selected from the different benchmark categories as explained in Section~\ref{sec:selection}. Overall, $16$ different configurations entered this track, with $10$ using sequential execution mode and $6$ using parallel mode. In the following, we compare the results of these $16$ configurations on the $234$ benchmarks selected for SYNTCOMP\xspace 2017. We first restrict the evaluation of results to purely sequential tools, then extend it to include also the parallel versions, and finally give a brief analysis of the results.
\paragraph{Sequential Mode.} In sequential mode, AbsSynthe\xspace competed with three configurations (SC1, SC2 and SC3), \textsf{Demiurge}\xspace with one configuration (D1real), Simple BDD Solver\xspace with three configurations (basic, abs1, abs2), SafetySynth\xspace with two configurations (basic and alternative), as well as TermiteSAT\xspace with one configuration. The number of solved instances per configuration, as well as the number of uniquely solved instances, are given in Table~\ref{tab:results-realseq}. No tool could solve more than $171$ out of the $234$ instances, or about $73\%$ of the benchmark set. $22$ instances could not be solved by any tool within the timeout. \begin{table}[h] \caption{Results: AIGER Realizability (sequential mode only)} \label{tab:results-realseq} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llll@{}} \toprule Tool & (configuration) & Solved & Unique \\ \midrule Simple BDD Solver\xspace & (abs1) & 171 & 0 \\ SafetySynth\xspace & (basic) & 165 & 1 \\ SafetySynth\xspace & (alternative) & 165 & 0 \\ Simple BDD Solver\xspace & (basic) & 165 & 0 \\ Simple BDD Solver\xspace & (abs2) & 165 & 0 \\ AbsSynthe\xspace & (SC3) & 160 & 3 \\ AbsSynthe\xspace & (SC2) & 156 & 0 \\ AbsSynthe\xspace & (SC1) & 148 & 0 \\ \textsf{Demiurge}\xspace & (D1real) & 127 & 11 \\ TermiteSAT\xspace & & 101 & 6 \\ \bottomrule \end{tabular} } \end{table} Figure~\ref{fig:cactus-realseq} gives a cactus plot for the runtimes of the best sequential configuration of each tool. \begin{figure} \centering \includegraphics[width=1\linewidth]{fig/AIGER-Realizability-2017-cactusseq-1.pdf} \caption{Runtime Cactus Plot of Best Sequential Configurations} \label{fig:cactus-realseq} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{fig/AIGER-Realizability-2017-cactuspar-1.pdf} \caption{Runtime Cactus Plot of Best Parallel Configurations} \label{fig:cactus-realall} \end{figure} \paragraph{Parallel Mode.} Three of the tools that entered the competition had parallel configurations for the realizability track: three configurations of AbsSynthe\xspace (PC1, PC2, PC3), one configuration of \textsf{Demiurge}\xspace (P3real), and two configurations of TermiteSAT\xspace (portfolio, hybrid). These parallel configurations had to solve the same set of benchmark instances as in the sequential mode. In contrast to the sequential mode, the runtime of the tools is measured in wall time instead of CPU time. The results are given in Table~\ref{tab:results-realpar}. Compared to sequential mode, a number of additional instances could be solved: both AbsSynthe\xspace and TermiteSAT\xspace have one or more configurations that solve more than the best tool in sequential mode (about $79\%$ of the benchmark set). Only $15$ instances could not be solved by any tool in either sequential or parallel mode.
\begin{table}[h] \caption{Results: AIGER Realizability (parallel mode only)} \label{tab:results-realpar} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llll@{}} \toprule Tool & (configuration) & Solved & Unique \\ \midrule TermiteSAT\xspace & (hybrid) & 186 & 0 \\ TermiteSAT\xspace & (portfolio) & 185 & 0 \\ AbsSynthe\xspace & (PC1) & 177 & 0 \\ \textsf{Demiurge}\xspace & (P3real) & 161 & 1 \\ AbsSynthe\xspace & (PC3) & 156 & 0 \\ AbsSynthe\xspace & (PC2) & 147 & 0 \\ \bottomrule \end{tabular} } \end{table} Note that in Table~\ref{tab:results-realpar} we only count a benchmark instance as uniquely solved if it is not solved by any other configuration, including the sequential configurations. Consequently, only \textsf{Demiurge}\xspace (P3real) produces a single unique solution. \paragraph{Both modes: Solved Instances by Category.} Figure~\ref{fig:bycat2} gives an overview of the number of solved instances per configuration and category, for the best sequential and parallel configuration of each tool and the categories defined in Table~\ref{tab:selected-benchmarks}. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{fig/AIGER-Realizability-2017-bycat1-1.pdf} \includegraphics[width=\linewidth]{fig/AIGER-Realizability-2017-bycat2-1.pdf} \caption{AIGER/safety Realizability Track, Solved Instances by Category} \label{fig:bycat2} \end{figure} \paragraph{Analysis.} The best sequential configuration on this year's benchmark set is Simple BDD Solver\xspace (abs1), the same as last year. This is the configuration with the simpler form of abstraction. Second place is shared between the other configurations of Simple BDD Solver\xspace and the two configurations of SafetySynth\xspace. Notably, SafetySynth\xspace (basic) is the only configuration that solves \texttt{amba16f110y}, one of the most difficult (unrealizable) benchmarks from the AMBA class. These are followed by the three configurations of AbsSynthe\xspace, where configuration (SC3), with compositionality and abstraction, performs best and solves 3 benchmarks uniquely (\texttt{genbuf64f100y}, \texttt{mult\_bool\_matrix\_6\_6\_8}, and \texttt{mult\_bool\_matrix\_6\_7\_7}), followed by configuration (SC2) which only uses abstraction, and then (SC1) which uses neither compositionality nor abstraction. Like last year, AbsSynthe\xspace still uses CUDD v2.5.1, compared to v3.0.0 used in Simple BDD Solver\xspace and SafetySynth\xspace. We observed last year that the switch to the newer version gave a significant speed-up to Simple BDD Solver\xspace. Finally, \textsf{Demiurge}\xspace and TermiteSAT\xspace again solve fewer problems than the BDD-based approaches, but solve a relatively large number of problems uniquely --- \textsf{Demiurge}\xspace mainly in the classes HWMCC, Load Balancer, LTL2DBA/DPA, gb (LTL2AIG) and Huffman, and TermiteSAT\xspace mainly in HWMCC, HyperLTL and stay. Among the parallel configurations, this year the two configurations of TermiteSAT\xspace solve the most instances (with hybrid mode again only solving one additional instance compared to portfolio mode), followed by last year's winner AbsSynthe\xspace (PC1) and the parallel configuration \textsf{Demiurge}\xspace (P3real). All of these show that a well-chosen portfolio can solve a significantly higher number of problems than the individual algorithms they are based on.
Finally, the parallel configurations (PC2) and (PC3) of AbsSynthe\xspace, which are portfolios of variants of (SC1) and (SC2), respectively, that differ only in the BDD reordering, solve roughly the same number of instances as their sequential counterparts. \subsection{AIGER/safety-Track: Synthesis} In the track for synthesis from safety specifications in AIGER format, participants had to solve the same set of benchmarks as in the realizability track. Three tools entered the track: AbsSynthe\xspace, \textsf{Demiurge}\xspace and SafetySynth\xspace, in the same configurations as in the realizability track, except for the additional synthesis of solutions. For SYNTCOMP\xspace 2017, we have two different rankings in the synthesis track: one is based on the number of instances that can be solved within the timeout, and the other gives a weight to solutions of realizable specifications, based on their size. Furthermore, a solution for a realizable specification is only considered as correct if it can be model-checked within a separate timeout of one hour (cf. Section~\ref{sec:setup}). As before, we first present the results for the sequential configurations, followed by the parallel configurations, and end with an analysis of the results. \paragraph{Sequential Mode.} Table~\ref{tab:results-syntseq} summarizes the experimental results, including the number of solved benchmarks, the uniquely solved instances, the number of solutions that could not be model-checked within the timeout, and the accumulated quality of solutions. Note that the ``solved'' column gives the number of problems that have either been correctly determined unrealizable, or for which the tool has presented a solution that could be verified. With this requirement, no sequential configuration could solve more than $155$, or about $66\%$, of the benchmarks, and $48$ instances could not be solved by any tool. Very few potential solutions could not be model-checked within the timeout. None of the tools produced any wrong solutions.\footnote{In the EDACC system, benchmark \texttt{amba16c40y} is reported with \texttt{model checking failed} for configuration AbsSynthe\xspace (SC1). A closer inspection showed that this is not due to a faulty solution, but rather due to an uncaught exception in the model checker. Therefore, this solution is neither counted as an incorrect, nor as a correct solution.} \begin{table}[h] \caption{Results: AIGER Synthesis (sequential mode only)} \label{tab:results-syntseq} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llllll@{}} \toprule Tool & (configuration) & Solved & Unique & MC Timeout & Quality\\ \midrule SafetySynth\xspace & (basic) & 155 & 2 & 0 & \textbf{236}\\ SafetySynth\xspace & (alt) & 152 & 1 & 0 & 232\\ AbsSynthe\xspace & (SC2) & 149 & 0 & 1 & 191\\ AbsSynthe\xspace & (SC3) & 147 & 0 & 1 & 195\\ AbsSynthe\xspace & (SC1) & 141 & 2 & 0 & 183\\ \textsf{Demiurge}\xspace & (D1synt) & 118 & 20 & 1 & 175\\ \bottomrule \end{tabular} } \end{table} \paragraph{Parallel Mode.} Table~\ref{tab:results-syntpar} summarizes the experimental results, again including the number of solved benchmarks, the uniquely solved instances, the number of solutions that could not be verified within the timeout, and the accumulated quality of solutions. No tool solved more than $169$ problem instances, or about $72\%$ of the benchmark set. Again, there are only very few (potential) solutions that could not be verified within the timeout. None of the solutions have been identified as wrong by our verifiers.
As in the parallel realizability track, we only consider instances as uniquely solved if they are not solved by any other configuration, including sequential ones. Consequently, none of the configurations produced any unique solutions. \begin{table}[h] \caption{Results: AIGER Synthesis (parallel mode only)} \label{tab:results-syntpar} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llllll@{}} \toprule Tool & (configuration) & Solved & Unique & MC Timeout & Quality\\ \midrule AbsSynthe\xspace & (PC1) & 169 & 0 & 2 & 210\\ \textsf{Demiurge}\xspace & (P3Synt) & 158 & 0 & 0 & \textbf{266}\\ AbsSynthe\xspace & (PC3) & 148 & 0 & 2 & 198\\ AbsSynthe\xspace & (PC2) & 139 & 0 & 1 & 179\\ \bottomrule \end{tabular} } \end{table} \paragraph{Analysis.} Unsurprisingly, the number of solved instances for each tool in the synthesis track corresponds roughly to those solved in the realizability track. Given the smaller number of participants, \textsf{Demiurge}\xspace (D1synt) provides an additional $20$ unique solutions in the sequential mode, and a few unique solutions are also provided by AbsSynthe\xspace (SC1) and SafetySynth\xspace (basic and alt). In sequential mode, the tool with the overall highest quality of solutions is also the one that provides the highest number of solutions, SafetySynth\xspace (basic). Note also that here AbsSynthe\xspace (SC2) solves more problems than (SC3), but (SC3) has a higher quality score, which means that on average it gives better solutions than (SC2). In parallel mode, the quality score of \textsf{Demiurge}\xspace (P3synt) is about $25\%$ higher than for AbsSynthe\xspace (PC1), even though the latter solves about $7\%$ more problems. This means that on average the solutions of \textsf{Demiurge}\xspace are significantly smaller than those produced by AbsSynthe\xspace and SafetySynth\xspace. This can be seen in Figure~\ref{fig:size-cactus}, which plots the sizes of the synthesized strategies for some of the configurations. We consider here not the size of the complete solution (which includes the specification circuit), but only the number of additional AND-gates, which corresponds to the strategy of the controller. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig/AIGER-Synthesis-2017-cactus-1.pdf} \caption{AIGER/safety Synthesis Track: Size of Solution Strategies for Selected Configurations} \label{fig:size-cactus} \end{figure} \subsection{TLSF/LTL-Track: Realizability} In the track for realizability checking of LTL specifications in TLSF, $5$ tools competed in $8$ sequential and $5$ parallel configurations. In the following, we compare the results of these $13$ configurations on the $244$ benchmarks that were selected for SYNTCOMP\xspace 2017, as explained in Section~\ref{sec:selection}. Again, we first restrict our evaluation to sequential configurations, then extend it to include parallel configurations, and finally give a brief analysis. \paragraph{Sequential Mode.} In sequential mode, Acacia4Aiger\xspace, BoWSer\xspace and \texttt{ltlsynt}\xspace each competed with one configuration\footnote{In fact, BoWSer\xspace was also entered in the realizability track in three configurations, but these configurations differ only in the synthesis step. Therefore, they all produced identical results, and are represented as a single configuration here.}, BoSy\xspace with two configurations (ltl3ba and spot), and \textsc{Party}\xspace with three configurations.
The number of solved instances per configuration, as well as the number of uniquely solved instances, are given in Table~\ref{tab:results-realseq-tlsf}. No tool could solve more than $218$ out of the $244$ instances, or about $89\%$ of the benchmark set.\footnote{For two configurations of \textsc{Party}\xspace, the numbers reported here differ from those given by the EDACC system. This is because configurations (bool) and (aiger) give result \texttt{unrealizable} on benchmarks \texttt{ltl2dba\_R\_10} and \texttt{ltl2dba\_R\_12} only after an uncaught error that is highlighted by the solver output. Therefore, these two (correct) results are not counted here.} $14$ instances could not be solved by any of the participants within the timeout. \begin{table}[h] \caption{Results: TLSF Realizability (sequential mode only)} \label{tab:results-realseq-tlsf} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llll@{}} \toprule Tool & (configuration) & Solved & Unique \\ \midrule \textsc{Party}\xspace & (aiger) & 218 & 7 \\ \texttt{ltlsynt}\xspace & & 195 & 3 \\ BoSy\xspace & (spot) & 181 & 0 \\ BoSy\xspace & (ltl3ba) & 172 & 0 \\ \textsc{Party}\xspace & (int) & 169 & 0 \\ BoWSer\xspace & & 165 & 0 \\ \textsc{Party}\xspace & (bool) & 164 & 0 \\ Acacia4Aiger\xspace & & 142 & 4 \\ \bottomrule \end{tabular} } \end{table} Figure~\ref{fig:cactus-realseq-tlsf} gives a cactus plot of the runtimes for all sequential algorithms in the realizability track. \paragraph{Parallel Mode.} All tools except \texttt{ltlsynt}\xspace also entered one or more parallel configurations: one configuration for each of Acacia4Aiger\xspace, BoWSer\xspace and \textsc{Party}\xspace, and two configurations for BoSy\xspace. As before, the parallel configurations had to solve the same set of benchmark instances as in the sequential mode, but the runtime is measured in wall time instead of CPU time. The results are given in Table~\ref{tab:results-realpar-tlsf}. The best parallel configuration solves $224$ out of the $244$ instances, or about $92\%$ of the benchmark set. $10$ benchmarks have not been solved by any configuration. \begin{table} \caption{Results: TLSF Realizability (parallel mode only)} \label{tab:results-realpar-tlsf} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llll@{}} \toprule Tool & (configuration) & Solved & Unique \\ \midrule \textsc{Party}\xspace & (portfolio) & 224 & 2\\ BoSy\xspace & (spot,par) & 181 & 0\\ BoWSer\xspace & (par) & 173 & 0\\ BoSy\xspace & (ltl3ba,par) & 170 & 0\\ Acacia4Aiger\xspace & (par) & 153 & 0\\ \bottomrule \end{tabular} } \end{table} In Table~\ref{tab:results-realpar-tlsf}, we again only count a benchmark instance as uniquely solved if it is not solved by any other sequential or parallel configuration. Then, \textsc{Party}\xspace (portfolio) solves $2$ instances that no other configuration can solve. Figure~\ref{fig:cactus-realall-tlsf} gives a cactus plot of the runtimes for the parallel and a selection of the sequential algorithms in the realizability track. \paragraph{Both modes: Solved Instances of Parameterized Benchmarks.} For both the sequential and the parallel configurations, Figure~\ref{fig:bycat-tlsf} gives an overview of the number of solved instances per configuration, for the $25$ parameterized benchmarks used in SYNTCOMP\xspace 2017.
\begin{figure}[h] \centering \includegraphics[width=1\linewidth]{fig/TLSF-Realizability-2017-cactusseqnew-c.pdf} \caption{TLSF/LTL Realizability Track: Runtimes of Sequential Configurations} \label{fig:cactus-realseq-tlsf} \end{figure} \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{fig/TLSF-Realizability-2017-cactusall-1.pdf} \caption{TLSF/LTL Realizability Track: Runtimes of Parallel and Selected Sequential Configurations} \label{fig:cactus-realall-tlsf} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{fig/TLSF-bycat1new-1.pdf} \vspace{1em} \includegraphics[width=\linewidth]{fig/TLSF-bycat2-1.pdf} \vspace{1em} \includegraphics[width=\linewidth]{fig/TLSF-bycat3-1.pdf} \caption{TLSF/LTL Realizability Track: Solved Instances for Parameterized Benchmarks} \label{fig:bycat-tlsf} \end{figure} \paragraph{Analysis.} In contrast to last year, where the competitors essentially only implemented two different synthesis approaches, the entrants of SYNTCOMP\xspace 2017 are much more diverse: BoSy\xspace, BoWSer\xspace and \textsc{Party}\xspace all implement variants of bounded synthesis by encoding into a constraint satisfaction problem, and four other approaches are implemented in Acacia4Aiger\xspace, \texttt{ltlsynt}\xspace, and \textsc{Party}\xspace (bool and aiger).\footnote{Note that Acacia4Aiger\xspace had some technical problems with the syntax of a subset of the new benchmarks for this year, so the number of solved instances does not necessarily reflect the number of instances that could be solved with the implemented approach in principle.} Remarkable are the two completely new approaches implemented in \textsc{Party}\xspace (aiger) and \texttt{ltlsynt}\xspace, which both solve more problems than the best tools from last year. \textsc{Party}\xspace (aiger) achieves this by a translation to safety games which approximates liveness properties by bounded $k$-liveness. In contrast to all other approaches, \texttt{ltlsynt}\xspace does not work by a conversion to a bounded-liveness or bounded-size problem, which is what allows the other approaches to avoid the determinization of specification automata. Instead, \texttt{ltlsynt}\xspace achieves very good results with an algorithm based on deterministic automata and parity games, which was widely assumed to be impractical before. We conjecture that this relies on at least two factors: i) very efficient translation and determinization algorithms implemented in Spot~\cite{Duret16,Redziejowski12}, and ii) the choice of automata with transition-based acceptance, because their conciseness is key to the efficiency of the overall algorithm. An analysis of the parameterized benchmarks in Figure~\ref{fig:bycat-tlsf} shows the different strengths of the approaches: Acacia4Aiger\xspace dominates on \texttt{generalized\_buffer} and \texttt{ltl2dba\_E}, the bounded synthesis-based approaches on \texttt{ltl2dba\_R} and \texttt{simple\_arbiter\_unreal2}, the (aiger) configuration of \textsc{Party}\xspace on \texttt{amba\_decomposed\_lock}, \texttt{detector}, \texttt{full\_arbiter}, \texttt{ltl2dba\_C2} and \texttt{ltl2dba\_U1}, and \texttt{ltlsynt}\xspace on \texttt{round\_robin\_arbiter} and \texttt{simple\_arbiter\_unreal1\_4}. For the other parameterized benchmarks, several approaches share the top spot, with the (aiger) configuration of \textsc{Party}\xspace almost always among the best.
For the benchmarks that have been made unrealizable by adding additional requirements, we also note differences between the approaches: compared to the realizable version \texttt{simple\_} \texttt{arbiter}, the first unrealizable version \texttt{simple\_arbiter\_unreal1\_4} is of comparable difficulty for most tools and easier for some. In contrast, the second unrealizable version \texttt{simple\_arbiter\_unreal2} behaves differently for most tools, being either significantly harder or significantly easier. Finally, we note that the two instances that are solved uniquely by \textsc{Party}\xspace (portfolio) (\texttt{prioritized\_} \texttt{arbiter\_7.tlsf} and \texttt{simple\_arbiter\_12.tlsf}) are solved by the second portfolio solver with formula strengthening, which did not compete in the sequential mode. \subsection{TLSF/LTL-Track: Synthesis} In the track for synthesis from LTL specifications in TLSF, participants had to solve the same benchmarks as in the LTL/TLSF-realizability track. Except for the BoWSer\xspace tool, the track had the same participants as the LTL/TLSF-realizability track: Acacia4Aiger\xspace with two configurations, BoSy\xspace with four configurations, \textsc{Party}\xspace with four configurations, and \texttt{ltlsynt}\xspace with one configuration. BoWSer\xspace competed in six configurations. As for the AIGER/safety-track, there are two rankings in the synthesis subtrack, one based on the number of instances that can be solved, and the other based on the quality of solutions, measured by their size. Again, a solution for a realizable specification is only considered correct if it can be model-checked within a separate timeout of one hour (cf. Section~\ref{sec:setup}). We start by presenting the results for the sequential configurations, followed by the parallel configurations, and end with an analysis of the results. \paragraph{Sequential Mode.} Table~\ref{tab:results-syntseq-tlsf} summarizes the experimental results for the sequential configurations, including the number of solved benchmarks, the uniquely solved instances, and the number of solutions that could not be model-checked within the timeout. The last column gives the accumulated quality points over all correct solutions. As before, the ``solved'' column gives the number of problems that have either been correctly determined unrealizable, or for which the tool has presented a solution that could be verified. With this requirement, no sequential configuration could solve more than $200$, or about $82\%$, of the benchmarks, and $29$ instances could not be solved by any tool. None of the tools provided any wrong solutions. In this track, we note that all configurations that are not based on some form of bounded synthesis generated a significant number of solutions that could not be verified.
\begin{table}[h] \caption{Results: TLSF Synthesis (sequential mode only)} \label{tab:results-syntseq-tlsf} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llllll@{}} \toprule Tool & (configuration) & Solved & Unique & MC Timeout & Quality\\ \midrule \textsc{Party}\xspace & (aiger) & 200 & 4 & 20 & 219 \\ \texttt{ltlsynt}\xspace & & 182 & 3 & 13 & 180 \\ BoSy\xspace & (spot) & 181 & 3 & 0 & \textbf{298}\\ \textsc{Party}\xspace & (int) & 167 & 0 & 0 & 249\\ BoSy\xspace & (ltl3ba) & 165 & 0 & 0 & 277\\ \textsc{Party}\xspace & (bool) & 163 & 1 & 0 & 222\\ BoWSer\xspace & (c0) & 162 & 0 & 0 & 273\\ BoWSer\xspace & (c1) & 155 & 0 & 0 & 260\\ Acacia4Aiger\xspace & & 110 & 2 & 17 & 91\\ BoWSer\xspace & (c2) & 93 & 0 & 0 & 141\\ \bottomrule \end{tabular} } \end{table} \paragraph{Parallel Mode.} In this mode, Acacia4Aiger\xspace, BoSy\xspace and BoWSer\xspace competed with parallel versions of their configurations from the sequential track. Additionally, \textsc{Party}\xspace competed in a portfolio approach that combines its sequential configurations. Table~\ref{tab:results-syntpar-tlsf} summarizes the experimental results, in the same format as before. No configuration solved more than $203$ problem instances, or about $83\%$ of the benchmark set. $27$ benchmarks could not be solved by any tool. \textsc{Party}\xspace and Acacia4Aiger\xspace produced a significant number of solutions that could not be verified within the timeout. None of the solutions were determined to be wrong. As before, we only consider instances as uniquely solved if they are not solved by any other configuration, including sequential ones. Consequently, none of the solutions are unique. \begin{table}[h] \caption{Results: TLSF Synthesis (parallel mode only)} \label{tab:results-syntpar-tlsf} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llllll@{}} \toprule Tool & (configuration) & Solved & Unique & MC Timeout & Quality\\ \midrule \textsc{Party}\xspace & (portfolio) & 203 & 0 & 18 & \textbf{308} \\ BoSy\xspace & (spot,par) & 181 & 0 & 0 & 297\\ BoSy\xspace & (ltl3ba,par)& 169 & 0 & 0 & 286\\ BoWSer\xspace & (c0,par) & 169 & 0 & 0 & 285\\ BoWSer\xspace & (c1,par) & 169 & 0 & 0 & 285\\ BoWSer\xspace & (c2,par) & 168 & 0 & 0 & 290\\ Acacia4Aiger\xspace & (par) & 137 & 0 & 5 & 123\\ \bottomrule \end{tabular} } \end{table} \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{fig/TLSF-size1new-1.pdf} \caption{TLSF/LTL Synthesis Track: Solution Sizes of Sequential Configurations} \label{fig:TLSF-size1} \centering \includegraphics[width=\linewidth]{fig/TLSF-size2-1.pdf} \caption{TLSF/LTL Synthesis Track: Solution Sizes of Parallel Configurations and \texttt{ltlsynt}\xspace} \label{fig:TLSF-size2} \end{figure} \paragraph{Analysis.} As for the AIGER/safety-track, the number of solved instances for each tool in synthesis is closely related to the solved instances in realizability checking. The number of unique instances of Acacia4Aiger\xspace and \textsc{Party}\xspace (aiger) decreases, in part due to solutions that could not be verified. As a consequence, we also note that BoSy\xspace (spot) and \textsc{Party}\xspace (bool) now have some unique solutions, i.e., they do not provide the only solution, but the only solution that could be verified. Considering the quality ranking, BoSy\xspace (spot) is the best tool in the sequential subtrack: even though it produces about $10\%$ fewer solutions than \textsc{Party}\xspace (aiger), its accumulated quality points are about $36\%$ higher.
In fact, in the ranking based on quality, all bounded synthesis-based configurations are better than \textsc{Party}\xspace (aiger), except for BoWSer\xspace (c2). A different picture unfolds in the parallel subtrack, where \textsc{Party}\xspace (portfolio) combines the strengths of its different approaches to find high-quality solutions for at least some of the benchmarks, such that it not only solves the highest number of problems, but also achieves the highest accumulated quality. Figure~\ref{fig:TLSF-size1} plots the solution sizes of the sequential configurations. It shows, as expected, that the bounded synthesis approaches produce much smaller solutions than the other approaches. In particular, the solution sizes of all configurations of BoSy\xspace and BoWSer\xspace are very similar, with BoWSer\xspace producing more very small solutions (with $<10$ AND-gates), and BoSy\xspace producing slightly better solutions for the remaining problems. The approach of \textsc{Party}\xspace (bool) falls somewhere in between. It also shows that the approach of \textsc{Party}\xspace (aiger) in many cases produces significantly smaller solutions than the approaches of Acacia4Aiger\xspace and \texttt{ltlsynt}\xspace. In parallel mode, depicted in Figure~\ref{fig:TLSF-size2}, we can see changes mostly for BoWSer\xspace (c2,par) and \textsc{Party}\xspace (portfolio). The latter manages to combine the strengths of its different approaches, providing small solutions for those problems that can still be solved by its (int) configuration, and otherwise falling back to the (aiger) configuration. BoWSer\xspace (c2,par) shows the strength of its approach to produce the smallest possible solutions: for more than $60$ benchmarks it provides the smallest solution. A further analysis of the quality and size of implementations shows that BoWSer\xspace (c2,par) is the configuration that has the highest average quality for the problems that it does solve. Furthermore, it produces the highest number of new reference solutions, i.e., solutions that have a quality greater than $2$ according to the quality ranking scheme explained in Section~\ref{sec:rules}. The analysis for all tools is given in Table~\ref{tab:quality-synt-tlsf}. \begin{table}[h] \caption{TLSF Synthesis: Average Quality and New Reference Solutions} \label{tab:quality-synt-tlsf} \centering \def\arraystretch{1.3} {\sffamily \small \begin{tabular}{@{}llll@{}} \toprule Tool & (configuration) & Avg. Quality & New Ref. Solutions \\ \midrule BoWSer\xspace & (c2,par) & 1.725 & 50\\ BoSy\xspace & (ltl3ba,par)& 1.691 & 31\\ BoWSer\xspace & (c1,par) & 1.689 & 40\\ BoWSer\xspace & (c0,par) & 1.688 & 40\\ BoWSer\xspace & (c0) & 1.686 & 37\\ BoSy\xspace & (ltl3ba) & 1.679 & 20\\ BoWSer\xspace & (c1) & 1.676 & 34\\ BoSy\xspace & (spot) & 1.644 & 30\\ BoSy\xspace & (spot,par) & 1.643 & 23\\ \textsc{Party}\xspace & (portfolio) & 1.517 & 27\\ BoWSer\xspace & (c2) & 1.514 & 20\\ \textsc{Party}\xspace & (int) & 1.493 & 20\\ \textsc{Party}\xspace & (bool) & 1.363 & 15\\ \textsc{Party}\xspace & (aiger) & 1.093 & 13\\ \texttt{ltlsynt}\xspace & & 0.988 & 8\\ Acacia4Aiger\xspace & (par) & 0.898 & 0\\ Acacia4Aiger\xspace & & 0.825 & 0\\ \bottomrule \end{tabular} } \end{table} \section{Synthesis Problem} \label{sec:problem} We briefly summarize the reactive synthesis problem as it is considered in SYNTCOMP\xspace. A more detailed introduction into the problem, as well as into the different approaches for solving it, can be found in~\cite{SYNTCOMP14}.
In SYNTCOMP\xspace, we consider the automatic synthesis of finite-state reactive systems that satisfy a safety specification, given as a monitor circuit that raises a special output \out when an unsafe state is visited. The specification is encoded in the SYNTCOMP\xspace format, an extension of the well-known AIGER format~\cite{aiger} that allows inputs of the circuit to be defined as either controllable or uncontrollable. In the traditional game-based approach to the synthesis of reactive systems~\cite{BL69,Rabin69,Thomas95}, such a specification gives rise to a \emph{game} between two players: states of the game are given by the valuation of latches in the monitor circuit, the environment player decides on the uncontrollable inputs of the specification circuit, and the system player decides on the controllable inputs. The goal of the system player is to satisfy the specification, i.e., to visit only safe states, independent of the environment behavior. Algorithms that solve this game usually take an approach that consists of two steps. In the first step, a so-called \emph{winning region} is computed. The winning region $W$ is the set of all states from which the system player can enforce satisfaction of the specification, i.e., to visit only safe states in the subsequent computation. In the classical algorithm, this is done by computing the fixpoint of the \emph{uncontrollable predecessor} operation \upre on the error states, i.e., inductively computing all states from which the environment can force the game into the unsafe states. Since two-player safety games are determined, the complement of this set is the winning region $W$ of the system player. In a second step, a \emph{winning strategy} is derived from the winning region. For every (current) state and uncontrollable input, the winning strategy defines a set of controllable inputs that are admissible for satisfying the specification. In order to obtain an implementation of this strategy as a circuit, a concrete choice for the controllable inputs has to be made for every state and uncontrollable input. In order to achieve acceptable scalability, it is important to implement synthesis algorithms symbolically, i.e., by manipulating formulas instead of enumerating states. In synthesis, these symbolic algorithms are usually implemented with Binary Decision Diagrams (BDDs)~\cite{bryant86,somenzi99,AlurMN05}. One reason for this is that solving games inherently involves dealing with quantifier alternations, and BDDs offer techniques for handling both kinds of quantification. However, BDDs also have their scalability issues. On the other hand, there have been enormous performance improvements in decision procedures for the satisfiability of (Boolean) formulas over the last years and decades. This has led to efficient tools like SAT- and QBF (Quantified Boolean Formulas) solvers, which can also be leveraged to obtain efficient symbolic synthesis algorithms. All of the tools that compete in the AIGER/safety-track of SYNTCOMP\xspace 2017 implement symbolic game-based synthesis in some form. A description of the tools will be given in Section~\ref{sec:participants}.
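To make the two-step approach concrete, the following minimal explicit-state sketch (in Python; all names are ours and chosen for illustration only, since the competing tools perform this computation symbolically rather than by enumeration) computes the losing states as the fixpoint of the uncontrollable predecessor operation, and derives the winning region and a non-deterministic winning strategy from it: \begin{verbatim}
from itertools import product

def solve_safety_game(states, u_inputs, c_inputs, trans, unsafe):
    # Step 1: fixpoint of the uncontrollable predecessor (UPRE) on the
    # unsafe states: a state is losing if the environment can pick an
    # uncontrollable input u such that, for every controllable input c,
    # the successor trans(s, u, c) is already losing.
    losing = set(unsafe)
    while True:
        upre = {s for s in states
                if any(all(trans(s, u, c) in losing for c in c_inputs)
                       for u in u_inputs)}
        if upre <= losing:
            break
        losing |= upre
    # Safety games are determined: the complement is the winning region.
    winning = set(states) - losing

    # Step 2: for every winning state and uncontrollable input, collect
    # the controllable inputs that keep the play inside the winning region.
    strategy = {(s, u): {c for c in c_inputs if trans(s, u, c) in winning}
                for s, u in product(winning, u_inputs)}
    return winning, strategy
\end{verbatim} A circuit-level implementation would represent the state sets and the transition relation as BDDs, so that the UPRE step becomes a relational product with exactly the quantifier alternation visible above: existential over the uncontrollable inputs and universal over the controllable inputs.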
\section*{Acknowledgements} We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. \section{Conclusion} This paper presents a novel two-model hierarchical architecture for real-time hand gesture recognition systems. The proposed architecture provides resource efficiency, early detections and single-time activations, which are critical for real-time gesture recognition applications. The proposed approach is evaluated on two dynamic hand gesture datasets, and achieves similar results on both of them. For real-time evaluation, we have proposed to use a new metric, Levenshtein accuracy, which we believe is a suitable evaluation metric since it can measure misclassifications, multiple detections and missing detections at the same time. Moreover, we have applied weighted-averaging on the class probabilities over time, which improves the overall performance and allows early detection of the gestures at the same time. We acquired single-time activation per gesture by using the difference between the two highest average class probabilities as a confidence measure. However, as future work, we would like to further investigate statistical hypothesis testing for the confidence measure of the single-time activations. Also, we intend to utilize different weighting approaches in order to increase the performance even further. \section{Experiments} \label{sec:exp} The performance of the proposed approach is tested on two publicly available datasets: the EgoGesture and NVIDIA Dynamic Hand Gestures datasets. \subsection{Offline Results Using EgoGesture Dataset} The EgoGesture dataset is a recent multimodal large-scale dataset for egocentric hand gesture recognition \cite{zhang_egogesture:_2018}. This dataset is created not only for segmented gesture classification, but also for gesture detection in continuous data. There are 83 classes of static and dynamic gestures collected from 6 diverse indoor and outdoor scenes. The dataset splits are created by distinct subjects with a 3:1:1 ratio, resulting in 1239 training, 411 validation and 431 testing videos, having 14416, 4768 and 4977 gesture samples, respectively. All models are first pretrained on the Jester dataset \cite{jester}. For test set evaluations, we have used both the training and validation sets for training. We initially investigated the performance of the C3D and ResNeXt architectures on the offline classification task. Table \ref{tab:egogesture_benchmark} shows the comparison of the used architectures with state-of-the-art approaches. The ResNeXt-101 architecture with 32-frame input achieves the best performance. \begin{table}[t!] \centering \begin{tabular}{lccc} \specialrule{.1em}{0em}{.5em} \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Input}}} & \multicolumn{2}{c}{\textbf{Modality}} \\ \cline{3-4} \addlinespace \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{RGB}} & \multicolumn{1}{c}{\textbf{Depth}} \\ \specialrule{.1em}{.3em}{.3em} ResNet-10 & 8-frames & 96.58 & \phantom{\textbf{*}}99.39\textbf{*} \\ ResNet-10 & 16-frames & 97.00 & 99.64 \\ ResNet-10 & 24-frames & 97.13 & 99.15 \\ ResNet-10 & 32-frames & 96.65 & \textbf{99.68} \\ \specialrule{.1em}{.3em}{0em} \end{tabular} \caption{Detector's binary classification accuracy scores on the test set of the EgoGesture dataset.} \label{tab:egogesture_detector} \end{table} \begin{table}[t!]
\centering \begin{tabular}{lccc} \specialrule{.1em}{0em}{.5em} \textbf{Modality} & \textbf{Recall} & \textbf{Precision} & \textbf{f1-score} \\ \specialrule{.1em}{.3em}{.3em} RGB & 96.64 & 97.10 & 96.87 \\ Depth & 99.37 & 99.43 & \textbf{99.40} \\ \specialrule{.1em}{.3em}{0em} \end{tabular} \caption{Detection results of the 8-frame ResNet-10 architecture on the test set of the EgoGesture dataset.} \label{tab:egogesture_det} \end{table} Secondly, we investigated the effect of the number of input frames on the gesture detection and classification performance. Results in Table \ref{tab:egogesture_classifier} and Table \ref{tab:egogesture_detector} show that performance improves as we increase the input size, for all modalities. This depends highly on the characteristics of the used datasets, especially on the average duration of the gestures. Thirdly, the RGB and depth modalities are investigated for different input sizes. We consistently observed that the models with the depth modality show better performance than the models with RGB. The depth sensor filters out background motion and allows the models to focus on the hand motion; hence, more discriminative features can be obtained from the depth modality. For real-time evaluation, ResNet-10 with depth modality and an input size of 8 frames is chosen as the detector, since a smaller window size allows the detector to locate the start and end of gestures more robustly. The detailed results of this model are shown in Table \ref{tab:egogesture_det}. \begin{table}[t!] \centering \begin{tabular}{lcc} \specialrule{.1em}{0em}{.5em} \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{2}{c}{\textbf{Modality}} \\ \cline{2-3} \addlinespace \multicolumn{1}{c}{} & \textbf{RGB} & \textbf{Depth} \\ \specialrule{.1em}{.3em}{.3em} C3D & 73.86 & 77.18 \\ R3DCNN \cite{molchanov2016online} & 74.10 & 80.30 \\ ResNeXt-101 & 78.63 & \textbf{83.82} \\ \specialrule{.1em}{.3em}{0em} \end{tabular} \caption{Comparison with the state of the art on the test set of the nvGesture dataset.} \label{tab:nvgesture_benchmark} \end{table} \begin{table}[t!] \centering \begin{tabular}{cccc} \specialrule{.1em}{0em}{.5em} \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Input}}} & \multicolumn{2}{c}{\textbf{Modality}} \\ \cline{3-4} \addlinespace \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{RGB}} & \multicolumn{1}{c}{\textbf{Depth}} \\ \specialrule{.1em}{.3em}{.3em} C3D & 16-frames & 62.67 & 70.33 \\ C3D & 24-frames & 65.35 & 70.33 \\ C3D & 32-frames & 73.86 & \textbf{77.18} \\ \specialrule{.1em}{.3em}{.3em} ResNeXt-101 & 16-frames & 66.40 & 72.82 \\ ResNeXt-101 & 24-frames & 72.40 & 79.25 \\ ResNeXt-101 & 32-frames & 78.63 & \phantom{\textbf{*}}\textbf{83.82}\textbf{*} \\ \specialrule{.1em}{.3em}{0em} \end{tabular} \caption{Classifier's classification accuracy scores on the test set of the nvGesture dataset.} \label{tab:nvgesture_classifier} \end{table} \subsection{Offline Results Using nvGesture Dataset} The nvGesture dataset contains 25 gesture classes, each intended for human-computer interfaces. The dataset is recorded with multiple sensors and viewpoints at an indoor car simulator. There are in total 1532 weakly-segmented videos (i.e., there are no-gesture parts in the videos), which are split with a 7:3 ratio, resulting in 1050 training and 482 test videos, each containing only one gesture.
We again initially investigated the performance of the C3D and ResNeXt architectures on the offline classification task, comparing them with state-of-the-art models. As shown in Table \ref{tab:nvgesture_benchmark}, the ResNeXt-101 architecture achieves the best performance. Similar to the EgoGesture dataset, classification and detection performance improves as we increase the input size, for all modalities, as shown in Table \ref{tab:nvgesture_classifier} and Table \ref{tab:nvgesture_detector}. The depth modality again achieves better performance than the RGB modality for all input sizes. Moreover, ResNet-10 with depth modality and an input size of 8 frames is chosen as the detector for online testing; its detailed results are given in Table \ref{tab:nvgesture_det}. For real-time evaluation, we have selected the 8-frame ResNet-10 detectors with depth modality and the best performing classifiers for both datasets, which are marked with \textbf{*} in the corresponding tables. \subsection{Real-Time Classification Results} The EgoGesture and nvGesture datasets have 431 and 482 videos, respectively, in their test sets. We evaluated our proposed architecture on each video separately and calculated an average Levenshtein accuracy at the end. We achieve 91.04\% and 77.39\% Levenshtein accuracies on the EgoGesture and nvGesture datasets, respectively. Moreover, the early detection times are investigated by simulating different early-detection threshold levels ($\tau_{early}$) varying from 0.2 to 1.0 in steps of 0.1. Fig. \ref{fig:stanv} compares the early detection times of the weighted averaging and uniform averaging approaches for both the EgoGesture and nvGesture datasets. Fig. \ref{fig:stanv} shows the importance of weighted averaging, which performs considerably better than uniform averaging. As we increase the threshold, we force the architecture to make its decision towards the end of the gestures, hence achieving better performance. However, we can gain considerable early detection performance by sacrificing only a small amount of recognition performance. For example, if we set the detection threshold $\tau_{early}$ to 0.4 for the EgoGesture dataset, we can make our single-time activations 9 frames earlier on average by giving up only 1.71\% Levenshtein accuracy. We also observe that mean early detection times are longer for the nvGesture dataset since it contains weakly-segmented videos. \begin{table}[t!] \centering \begin{tabular}{lccc} \specialrule{.1em}{0em}{.5em} \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Input}}} & \multicolumn{2}{c}{\textbf{Modality}} \\ \cline{3-4} \addlinespace \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{RGB}} & \multicolumn{1}{c}{\textbf{Depth}} \\ \specialrule{.1em}{.3em}{.3em} ResNet-10 & 8-frames & 70.22 & \phantom{\textbf{*}}97.30\textbf{*} \\ ResNet-10 & 16-frames & 85.90 & 97.82 \\ ResNet-10 & 24-frames & 89.00 & \textbf{98.02} \\ ResNet-10 & 32-frames & 93.88 & 97.30 \\ \specialrule{.1em}{.3em}{0em} \end{tabular} \caption{Detector's binary classification accuracy scores on the test set of the nvGesture dataset.} \label{tab:nvgesture_detector} \end{table} \begin{table}[t!]
\centering \begin{tabular}{lccc} \specialrule{.1em}{0em}{.5em} \textbf{Modality} & \textbf{Recall} & \textbf{Precision} & \textbf{f1-score} \\ \specialrule{.1em}{.3em}{.3em} RGB & 70.22 & 80.31 & 74.93 \\ Depth & 97.30 & 97.41 & \textbf{97.35} \\ \specialrule{.1em}{.3em}{0em} \end{tabular} \caption{Detection results of the 8-frame ResNet-10 architecture on the test set of the nvGesture dataset.} \label{tab:nvgesture_det} \end{table} Lastly, we investigated the execution performance of our two-model approach. Our system runs on average at 460 fps when there is no gesture (i.e., only the detector is active) and 62 (41) fps in the presence of a gesture (i.e., both the detector and the classifier are active) for ResNeXt-101 (C3D) as the classifier on a single NVIDIA Titan Xp GPU with a batch size of 8. \begin{figure}[t!]% \centering \subfigure[]{% \includegraphics[height=0.7\linewidth]{egogesture_det}}% \label{fig:staego}% \qquad \subfigure[]{% \includegraphics[height=0.7\linewidth]{nv_det}}% \caption{Comparison of early detection time, early detection threshold and acquired Levenshtein accuracies for (a) EgoGesture and (b) nvGesture datasets. Numerals on each data point represent the Levenshtein accuracies. Early detection times are calculated only for correctly predicted gestures. Blue color refers to the ``weighted'' approach in single-time activation, and green color refers to the ``not weighted'' approach. For both datasets, as the early detection threshold increases, average early detection times reduce, but we achieve better Levenshtein accuracies.} \label{fig:stanv} \end{figure} \section{Introduction} Computers and computing devices are becoming an essential part of our lives day by day. The increasing demand for such computing devices has increased the need for easy and practical computer interfaces. For this reason, systems using vision-based interaction and control are becoming more common, and as a result, gesture recognition is getting more and more popular in the research community due to its various application possibilities in human-machine interaction. Compared to a mouse and keyboard, any vision-based interface is more convenient, practical and natural because of the intuitiveness of gestures. \begin{figure}[t!] \centering \includegraphics[width=0.52\textwidth]{probs} \caption{Illustration of the proposed pipeline for real-time gesture recognition. The video stream is processed using a sliding window approach with a stride of one. The top graph shows the detector probability scores, which become active when a gesture starts and remain active until it ends. The second graph shows the classification score for each class in a different color. The third graph applies weighted-average filtering on the raw classification scores, which eliminates the ambiguity between possible gesture candidates. The bottom graph illustrates the single-time activations, where red arrows represent early detections and black ones represent detections after the gestures end.} \label{fig:probs} \end{figure} Gesture recognition can be performed with mainly three methods: using \textit{(i)} glove-based wearable devices \cite{abhishek_glove-based_2016}, \textit{(ii)} 3-dimensional locations of hand keypoints \cite{wen_intraoperative_2010} and \textit{(iii)} raw visual data. The first method requires wearing an additional device, usually with many cables attached, even though it provides good results in terms of both accuracy and speed.
The second, on the other hand, requires an extra step of hand-keypoint extraction, which brings additional time and computational cost. Lastly, for (iii), only an image-capturing sensor is required, such as a camera, infrared sensor or depth sensor, which is independent of the user. Since the user is not required to wear a burdensome device to achieve an acceptable recognition accuracy and sufficient computation speed, this option stands out as the most practical one. It is important for the infrastructure of any gesture recognition system to be practical. After all, we aim to use it in real-life scenarios. In this work, in order to provide a practical solution, we have developed a vision-based gesture recognition approach using deep convolutional neural networks (CNNs) on raw video data. Currently, CNNs provide state-of-the-art results not only for image-based tasks such as object detection, image segmentation and classification, but also for video-based tasks such as activity recognition and action localization, as well as gesture recognition \cite{kopuklu2018motion, molchanov2016online, zhu2017multimodal}. In real-time gesture recognition applications, there are several characteristics that the system needs to satisfy: \textit{(i)} an acceptable classification accuracy, \textit{(ii)} fast reaction time, \textit{(iii)} resource efficiency and \textit{(iv)} single-time activation per each performed gesture. All these items are of utmost importance for a successful real-time vision-based gesture recognition application. However, most of the previous research only considers \textit{(i)} and tries to increase the offline classification accuracy in gesture recognition, disregarding the remaining items. Some proposed approaches are even impossible to run in real time since they consist of several deep CNNs on multiple input modalities, which pushes the limits of the memory and power budget \cite{narayana2018gesture}. In this paper, we propose a hierarchical architecture for the task of real-time hand gesture detection and classification that allows us to integrate offline working models and still satisfy all the above-mentioned attributes. Our system consists of an offline-trained deep 3D CNN for gesture classification (classifier) and a lightweight, shallow 3D CNN for gesture detection (detector). Fig. \ref{fig:probs} illustrates the pipeline of the proposed approach. A sliding window is used over the incoming video stream, feeding the input frames to the detector via the detector queue. The top graph in Fig. \ref{fig:probs} shows the detector probability scores, which become active when the gestures are being performed and remain inactive for the rest of the time. The classifier becomes active only when the detector detects a gesture. This is critical since, in real-time gesture recognition applications, no gesture is performed most of the time. Therefore, there is no need to keep the high-performance classifier always active, which would increase the memory and power consumption of the system considerably. The second graph shows the raw classification scores of each class with a different color. As can be seen from the graph, the scores of similar classes become high simultaneously, especially at the beginning of the gestures. In order to resolve these ambiguities, we have weighted the class scores to avoid making a decision at the beginning of the gestures (third graph in Fig. \ref{fig:probs}).
Lastly, the bottom graph illustrates the single-time activations, where red arrows represent the early detections and black ones represent the detections after the gestures end. Our system can detect gestures early, in their nucleus part, which is the part that distinguishes a gesture from the rest. We propose to use the Levenshtein distance as an evaluation metric to compare the captured single-time activations with the ground-truth labels. This metric is more suitable and informative since it can measure misclassifications, multiple detections and missing detections at the same time. We evaluated our approach on two publicly available datasets, the EgoGesture dataset \cite{zhang_egogesture:_2018} and the NVIDIA Dynamic Hand Gesture dataset \cite{molchanov2016online} (nvGesture)\footnote{The NVIDIA Dynamic Hand Gesture dataset is referred to as `nvGesture' in this work.}. For the classifier of the proposed approach, any offline working CNN architecture can be used. For our experiments, we have used the well-known C3D \cite{tran2015learning} and ResNeXt-101 \cite{hara3dcnns}. We have achieved state-of-the-art offline classification accuracies of 94.03\% and 83.82\% on the depth modality with the ResNeXt-101 architecture on the EgoGesture and nvGesture datasets, respectively. For real-time detection and classification, we achieve considerable early detections by sacrificing only a small amount of recognition performance. The rest of the paper is organized as follows. In Section \RN{2}, the related work in the area of offline and real-time gesture recognition is presented. Section \RN{3} introduces our real-time gesture recognition approach, and elaborates on the training and evaluation processes. Section \RN{4} presents experiments and results. Lastly, Section \RN{5} concludes the paper. \section{Methodology} In this section, we elaborate on our two-model hierarchical architecture that enables state-of-the-art CNN models to be used in real-time gesture recognition applications as efficiently as possible. After introducing the architecture, the training details are described. Finally, we give a detailed explanation of the post-processing strategies that allow us to obtain a single activation per gesture in real time. \subsection{Architecture} Recently, with the availability of large datasets, CNN-based models have proven their ability in action/gesture recognition tasks. 3D CNN architectures especially stand out for video analysis since they make use of the temporal relations between frames together with their spatial content. However, there is no clear description of how to use these models in a real-time dynamic system. With our work, we aim to fill this research gap. Fig. \ref{fig:workflow} illustrates the workflow used for an efficient real-time recognition system with a sliding-window approach. In contrast to offline testing, we do not know when a gesture starts or ends. Because of this, our workflow starts with a detector which is used as a switch to activate the classifier once a gesture is detected. Our detector and classifier models are fed by sequences of frames with sizes $n$ and $m$, respectively, such that $n \ll m$, with an overlapping factor as shown in Fig. \ref{fig:workflow}. The stride value used for the sliding window is represented by $s$ in Fig. \ref{fig:workflow}, and it is the same for both the detector and the classifier. Although a higher stride reduces resource usage, we have chosen $s = 1$ since it is small enough not to miss any gestures and allows us to achieve better performance.
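To make this sliding-window workflow concrete, the following minimal Python sketch (our own simplification; the detector and classifier are placeholder callables, and the post-processing of the detector probabilities described below is omitted) maintains the two overlapping frame queues with stride $s=1$ and runs the classifier only while the detector signals a gesture: \begin{verbatim}
from collections import deque

def run_pipeline(frames, detector, classifier, n=8, m=32, threshold=0.5):
    # detector(clip):   probability that a gesture is being performed
    # classifier(clip): vector of class probabilities
    # Simplification: both queues here hold the most recent frames; in the
    # actual workflow the detector window sits at the very beginning of
    # the classifier window (see the workflow figure).
    det_q = deque(maxlen=n)   # detector queue (size n)
    cls_q = deque(maxlen=m)   # classifier queue (size m), n << m
    for t, frame in enumerate(frames):       # stride s = 1
        det_q.append(frame)
        cls_q.append(frame)
        if len(det_q) < n:
            continue
        if detector(list(det_q)) > threshold:
            # The detector acts as a switch: the heavy classifier runs
            # only while a gesture is (likely) being performed.
            yield t, classifier(list(cls_q))
\end{verbatim} The raw class scores yielded by this loop are then passed through the post-processing and single-time activation blocks described in the following.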
In addition to the detector and classifier models, a post-processing block and a single-time activation block are introduced into the workflow. In the following, we explain these blocks in detail. \begin{figure}[t!] \centering \includegraphics[width = 0.50\textwidth]{arch} \caption{The general workflow of the proposed two-model hierarchical architecture. Sliding windows with stride \textit{s} run through the incoming video frames, with the detector queue placed at the very beginning of the classifier queue. If the detector recognizes an action/gesture, then the classifier is activated. The detector's output is post-processed for more robust performance, and the final decision is made using the single-time activation block, where only one activation occurs per performed gesture.} \label{fig:workflow} \end{figure} \subsubsection{Detector} The purpose of the detector is to distinguish between the \textit{gesture} and \textit{no gesture} classes by running on the sequence of images covered by the detector queue. Its main and only role is to act as a switch for the classifier model, meaning that if it detects a \textit{gesture}, then the classifier is activated and fed by the frames in the classifier queue. Since the overall accuracy of this system highly depends on the performance of the detector, we require the detector to be (i) robust, (ii) accurate in the detection of true positives (gestures), and (iii) lightweight as it runs continuously. For the sake of (i), the detector runs on a smaller number of frames than the classifier, to which we refer as the detector and classifier queues. For (ii), the detector queue is placed at the very beginning of the classifier queue as shown in Fig. \ref{fig:workflow}, and this enables the detector to activate the classifier whenever a gesture starts, regardless of the gesture duration. Moreover, the detector model is trained with a weighted cross-entropy loss in order to decrease the likelihood of missed gestures (i.e., to achieve a higher recall rate). The class weights for the \textit{no gesture} and \textit{gesture} classes are selected as 1 and 3, respectively, as our experiments showed that this proportion is sufficient to have 98+\% and 97+\% recall rates on the EgoGesture and nvGesture datasets, respectively. Besides that, we post-process the output probabilities, and use a counter over the number of consecutive \textit{no gesture} predictions when deciding to deactivate the classifier. For (iii), the ResNet-10 architecture is constructed using the ResNet block in Fig. \ref{fig:blocks} with very small feature sizes in each layer as given in Table \ref{tab:architecture}, which results in less than 1M ($\approx$ 862K) parameters. \textit{F} and \textit{N} correspond to the number of feature channels and the number of blocks in the corresponding layers, respectively. \textit{BN}, \textit{ReLU} and \textit{group} in Fig. \ref{fig:blocks} refer to batch normalization, rectified linear unit nonlinearities and the number of group convolutions, respectively. \begin{figure}[t!] \centering \includegraphics[width=0.30\textwidth]{blocks} \caption{ResNet and ResNeXt blocks used in the detector and classifier architectures.} \label{fig:blocks} \end{figure} \subsubsection{Classifier} Since we do not have any limitation regarding the size or complexity of the model, any architecture providing a good classification performance can be selected as the classifier. This leads us to use two recent 3D CNN architectures (C3D \cite{tran_learning_2014} and ResNeXt-101 \cite{hara3dcnns}) as our classifier model.
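Before detailing the classifier, we briefly illustrate the detector's weighted objective described above: the weighted cross-entropy with class weights 1 (\textit{no gesture}) and 3 (\textit{gesture}) amounts to a one-line change in common deep learning frameworks. A minimal PyTorch sketch with stand-in tensors (the variable names are ours and not taken from any released code): \begin{verbatim}
import torch
import torch.nn as nn

# Class weights as described above: 1 for "no gesture", 3 for "gesture".
# Penalizing errors on the gesture class three times as much reduces
# missed gestures and therefore raises the detector's recall.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 3.0]))

logits = torch.randn(8, 2, requires_grad=True)  # stand-in detector outputs
labels = torch.randint(0, 2, (8,))              # 0 = no gesture, 1 = gesture
loss = criterion(logits, labels)
loss.backward()
\end{verbatim}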
However, it is important to note that our architecture is independent of the model type. For the C3D model, we have used exactly the same model as in \cite{tran_learning_2014}, but only changed the number of nodes in the last two fully connected layers from 4096 to 2048. For ResNeXt-101, we have followed the guidelines of \cite{hara3dcnns} and chosen the model parameters as given in Table \ref{tab:architecture}, with the ResNeXt block as given in Fig. \ref{fig:blocks}. Since 3D CNNs contain many more parameters than 2D CNNs, they require more training data in order to prevent overfitting. For this reason, we first pretrained our classifier architectures on the Jester dataset \cite{jester}, which is the largest publicly available hand gesture dataset, and then fine-tuned them on the EgoGesture and nvGesture datasets. This approach increased the accuracy and shortened the training duration drastically. \textit{Training Details: } We use stochastic gradient descent (SGD) with Nesterov momentum = 0.9, damping factor = 0.9, and weight decay = 0.001 as the optimizer. After pretraining on the Jester dataset, the learning rate starts at 0.01 and is divided by 10 at the $10^{th}$ and $25^{th}$ epochs, and training is completed after 5 more epochs. For regularization, we used a weight decay ($\gamma = 1 \times 10^{-3}$), which is applied to all the parameters of the network. We also used dropout layers in C3D and several data augmentation techniques throughout training. For data augmentation, three methods were used: (1) Each image is randomly cropped to size $112 \times 112$ and scaled randomly with one of the \{1, $\frac{1}{2^{1/4}}$, $\frac{1}{2^{3/4}}$, $\frac{1}{2}$\} scales. (2) Spatial elastic displacement \cite{simard2003best} with $\alpha = 1$ and $\sigma = 2$ is applied to the cropped and scaled images. For temporal augmentation, (3) we randomly select consecutive frames according to the input sample duration from the entire gesture videos. If the sample duration is more than the number of frames in the target gesture, we append frames starting from the very first frame in a cyclic fashion. We also normalize the images using the mean and standard deviation of the whole training set in order to help the models learn faster. The same training details are used for the detector and classifier models. During offline and online testing, we scale the images and apply center cropping to get $112 \times 112$ images. Then only normalization is performed, for the sake of consistency between training and testing. \begin{table}[t!] \centering \begin{tabular}{l|c|c|c} \hline \rule{0pt}{12pt} \textbf{Layer} & \textbf{Output Size} & \textbf{ResNeXt-101} & \textbf{ResNet-10} \\[0.1cm] \hline \rule{0pt}{12pt} conv1 & L x 56 x 56 & \multicolumn{2}{c}{conv(3x7x7), stride (1, 2, 2)} \\[0.1cm] \hline \rule{0pt}{12pt} pool & L/2 x 28 x 28 & \multicolumn{2}{c}{ MaxPool(3x3x3), stride (2, 2, 2)} \\[0.1cm] \hline \rule{0pt}{12pt} conv2\_x & L/2 x 28 x 28 & N:3, F:128 & N:1, F:16 \\[0.1cm] \hline \rule{0pt}{12pt} conv3\_x & L/4 x 14 x 14 & N:24, F:256 & N:1, F:32 \\[0.1cm] \hline \rule{0pt}{12pt} conv4\_x & L/8 x 7 x 7 & N:36, F:512 & N:1, F:64 \\[0.1cm] \hline \rule{0pt}{12pt} conv5\_x & L/16 x 4 x 4 & N:3, F:1024 & N:1, F:128 \\[0.1cm] \hline \rule{0pt}{14pt} & \textit{NumCls} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}global average pooling,\\ fc layer with softmax\end{tabular}} \\[0.2cm] \hline \end{tabular} \caption{Detector (ResNet-10) and Classifier (ResNeXt-101) architectures.
For ResNet-10, max pooling is not applied when an 8-frame input is used.} \label{tab:architecture} \end{table} \subsubsection{Post-processing} \label{subsec:pp} In dynamic hand gestures, it is possible that the hand moves out of the camera view while a gesture is being performed. Even when the previous detector predictions are correct, any single misclassification reduces the overall performance of the proposed architecture. In order to make use of the previous predictions, we add the raw softmax probabilities of the previous detector predictions into a queue ($q_k$) of size $k$, and apply filtering on these raw values to obtain the final detector decisions. With this approach, the detector increases its confidence in decision making and clears out most of the misclassifications in consecutive predictions. The size of the queue ($k$) is selected as 4, which achieved the best results for a stride $s$ of 1 in our experiments. We have applied $(i)$ average, $(ii)$ exponentially-weighted average and $(iii)$ median filtering separately on the values in $q_k$. While average filtering simply takes the mean value of $q_k$, median filtering takes the median. Exponentially-weighted average filtering, on the other hand, takes the weighted average of the samples using the weight function $w_i = \exp\left(-(1-(k-i))/k\right)$, where $i$ indexes the $i^{th}$ previous sample, with $0 \leq i < k$, and $w_i$ is its weight. Out of these three filtering strategies, we have used median filtering since it achieves slightly better results. \begin{figure}[t!]% \centering \subfigure[]{% \includegraphics[width=0.45\linewidth]{egogestureframesdist.png}}% \label{fig:egoframe}% \qquad \subfigure[]{% \includegraphics[width=0.45\linewidth]{sigmoid.png}}% \caption{(a) Histogram of the gesture durations for the EgoGesture dataset, (b) Sigmoid-like weight function used for single-time activations according to Eq.~\eqref{eq:weight}.} \label{fig:sigmoid} \end{figure} \subsubsection{Single-time Activation} \label{subsec:sta} In real-time gesture recognition systems, it is extremely important to have a small reaction time and a single-time activation for each gesture. Pavlovic et al. state that dynamic gestures have \textit{preparation}, \textit{nucleus} (\textit{peak} or \textit{stroke} \cite{mcneill1980conceptual}) and \textit{retraction} parts \cite{pavlovic1997visual}. Out of all parts, the nucleus is the most discriminative one, since we can decide which gesture is performed from its nucleus part even before the gesture ends. Single-time activation is achieved through a two-level control mechanism. Either a gesture is detected when a confidence measure reaches a threshold level before the gesture actually ends (early-detection), or the gesture is predicted when the detector deactivates the classifier (late-detection). In late-detection, we rely on the detector not missing any gesture, which we ensure through its very high recall rate. The most critical aspect of early-detection is that gestures should be detected only after their nucleus parts for a better recognition performance, because several gestures can contain a similar preparation part, which creates an ambiguity at the beginning of the gestures, as can be seen in the top graph of Fig. \ref{fig:weight}. Therefore, we have applied weighted-averaging on the class scores with a weight function as in \mbox{Fig.
\ref{fig:sigmoid} (b)}, and its formula is given as: \begin{equation} w_j = \frac{1}{1+\exp\left(-0.2\times(j-t)\right)} \label{eq:weight} \end{equation} \noindent where $j$ is the iteration index of an active state, at which a gesture is detected, and $t$ is calculated by using the following formula: \begin{equation} t = \left \lfloor{\frac{\mu}{4 \times s}} \right \rfloor \end{equation} \noindent where $\mu$ corresponds to the mean duration of the gestures (in number of frames) in the dataset and $s$ is the stride length. For the EgoGesture dataset, $\mu$ is equal to 38.4, and for a stride of $s = 1$, $t$ is calculated as 9; a similar value holds for the nvGesture dataset. When a gesture starts, we multiply the raw class scores by the weights $w_j$ and apply averaging. These parameters allow us to have weights equal to or higher than 0.5 in the nucleus part of the gestures on average. Fig. \ref{fig:weight} shows the probability scores of five gestures over each iteration and their corresponding weighted averages. It can easily be observed that the ambiguity of the classifier in the preparation part of the gestures is successfully resolved with this approach. \begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth]{weight_effect.jpg} \caption{Raw (top) and weighted (bottom) classification scores. In the top graph, we observe a lot of noise at the beginning of all gestures; however, near the end of each gesture the classifier becomes more confident. The bottom graph shows that we can remove this noisy part by assigning smaller weights to the beginning part of the gestures.} \label{fig:weight} \end{figure} \begin{algorithm}[b!] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \caption{Single-time activation in real-time gesture recognition } \centering \label{online_testing} \begin{algorithmic}[1] \Require Incoming frames from video data. \Ensure Single-time activations. \For{each "frame-window" $w_i$ of length $m$} \If{a gesture is detected} \State{state $\leftarrow$ "Active"} \State {$\alpha \leftarrow probs_{j-1}\times(j-1) $} \State {$mean\_probs = (\alpha + w_j\times probs_{j})/j$ } \State {$(max_1,max_2)=\max\limits_{gesture}{[mean\_probs]_2}$ } \If{$(max_1 - max_2) \geq \tau_{\mathbf{early}}$} \State {\textit{early-detection} $\leftarrow$ "True"} \State \Return {gesture with $max_1$} \EndIf \State {$j \leftarrow j + 1$} \EndIf \If{the gesture ends} \State{state $\leftarrow$ "Passive"} \If{\textit{early-detection} $\neq$ "True" \& $max_1 \geq \tau_{\mathbf{late}} $} \State \Return {gesture with $max_1$} \EndIf \EndIf \State {$i \leftarrow i + 1$} \EndFor \end{algorithmic} \end{algorithm} With this weighted-averaging strategy, we force our single-time activator to make its decision in the mid-to-late part of the gestures, after capturing their nucleus parts. On the other hand, we need a confidence measure for early-detections in real time since the duration of gestures varies. Hence, we decided to use the difference between the weighted average scores of the classes as our confidence measure for early-detection. When the detector switches the classifier on, the weighted average probabilities for each class are calculated at each iteration.
If the difference between the two highest average probabilities is more than a threshold $\tau_{early}$, then early-detection is triggered; otherwise, we wait for the detector to switch off the classifier, and the class with the highest score above $\tau_{late}$ (fixed to 0.15 as it showed the best results in our experiments) is predicted as late-detection. Details of this strategy can be found in Algorithm \ref{online_testing}. \subsubsection{Evaluation of the Activations} As opposed to offline testing, which usually considers only class accuracies, we must also consider the following scenarios for our real-time evaluation: \begin{itemize} \item Misclassification of the gesture due to the classifier, \item Not detecting the gesture due to the detector, \item Multiple detections in a single gesture. \end{itemize} Considering these scenarios, we propose to use the Levenshtein distance as our evaluation metric for online experiments. The Levenshtein distance is a metric that measures the distance between sequences by counting the number of item-level changes (insertions, deletions, or substitutions) needed to transform one sequence into the other. For our case, a video and the gestures in this video correspond to a sequence and the items in this sequence, respectively. For example, let us consider the following ground truth and predicted gestures of a video: \abovedisplayskip=-10pt \begin{align*} & Ground Truth\phantom{aa,,} [1, 2, 3, 4, 5, 6, 7, 8, 9] &\\ & Predicted\phantom{aaaa,} [1, 2, 7, 4, 5, 6, 6, 7, 8, 9] & \end{align*} \begin{table}[t!] \centering \begin{tabular}{lccc} \specialrule{.1em}{.5em}{.5em} \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Input}}} & \multicolumn{2}{c}{\textbf{Modality}} \\ \cline{3-4} \addlinespace \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \textbf{RGB} & \textbf{Depth} \\ \specialrule{.1em}{.3em}{.3em} VGG-16 \cite{zhang_egogesture:_2018} & 16-frames & 62.50 & 62.30 \\ VGG-16 + LSTM \cite{zhang_egogesture:_2018} & 16-frames & 74.70 & 77.70 \\ C3D & 16-frames & 86.88 & 88.45 \\ ResNeXt-101 & 16-frames & 90.94 & 91.80 \\ C3D+LSTM+RSTTM \cite{zhang_egogesture:_2018} & 16-frames & 89.30 & 90.60 \\ \specialrule{.1em}{.3em}{.3em} ResNeXt-101 & 32-frames & 93.75 & \phantom{\textbf{*}}\textbf{94.03}\textbf{*} \\ \specialrule{.1em}{.5em}{.5em} \end{tabular} \caption{Comparison with state-of-the-art on the test set of EgoGesture dataset.} \label{tab:egogesture_benchmark} \end{table} \begin{table}[t!] \centering \begin{tabular}{cccc} \specialrule{.1em}{.5em}{.5em} \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{Input}}} & \multicolumn{2}{c}{\textbf{Modality}} \\ \cline{3-4} \addlinespace \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{RGB}} & \multicolumn{1}{c}{\textbf{Depth}} \\ \specialrule{.1em}{.3em}{.3em} C3D & 16-frames & 86.88 & 88.45 \\ C3D & 24-frames & 89.20 & 89.07 \\ C3D & 32-frames & 90.57 & \textbf{91.44 } \\ \specialrule{.1em}{.3em}{.3em} ResNeXt-101 & 16-frames & 90.94 & 91.80 \\ ResNeXt-101 & 24-frames & 92.89 & 93.47 \\ ResNeXt-101 & 32-frames & 93.75 & \phantom{\textbf{*}}\textbf{94.03}\textbf{*} \\ \specialrule{.1em}{.5em}{.5em} \end{tabular} \caption{Classifier's classification accuracy scores on the test set of EgoGesture dataset.} \label{tab:egogesture_classifier} \end{table} For this example, the Levenshtein distance is 2: the deletion of one of the "6"s, which is detected twice, and the substitution of "7" with "3".
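This count can be verified with a standard dynamic-programming implementation of the metric; the following is a minimal sketch (the function name is illustrative, not part of our released code):

\begin{verbatim}
def levenshtein(gt, pred):
    # d[i][j]: edit distance between gt[:i] and pred[:j]
    d = [[0] * (len(pred) + 1) for _ in range(len(gt) + 1)]
    for i in range(len(gt) + 1):
        d[i][0] = i                                  # deletions
    for j in range(len(pred) + 1):
        d[0][j] = j                                  # insertions
    for i in range(1, len(gt) + 1):
        for j in range(1, len(pred) + 1):
            cost = 0 if gt[i - 1] == pred[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution/match
    return d[-1][-1]

gt   = [1, 2, 3, 4, 5, 6, 7, 8, 9]
pred = [1, 2, 7, 4, 5, 6, 6, 7, 8, 9]
print(levenshtein(gt, pred))   # -> 2
\end{verbatim}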
We average this distance over the number of true target classes. For this case, the average distance is $2/9 = 0.2222$, and we subtract this value from 1 since we want to measure closeness (referred to as the Levenshtein accuracy in this work), which is equal to $(1-0.2222) \times 100 = 77.78\%$. \section{Related Work} The success of CNNs in object detection and classification tasks \cite{krizhevsky2012imagenet, Girshick2014rich} has created a growing trend to apply them in other areas of computer vision as well. For video analysis tasks, CNNs were initially extended to video action and activity recognition, where they achieved state-of-the-art performances \cite{simonyan2014two, donahue2015long}. There have been various approaches using CNNs to extract spatio-temporal information from video data. Due to the success of 2D CNNs on static images, video analysis approaches initially applied 2D CNNs. In \cite{simonyan2014two, Karpathy2014largescale}, video frames are treated as multi-channel inputs to 2D CNNs. The Temporal Segment Network (TSN) \cite{wang2016temporal} divides a video into several segments, extracts information from the color and optical flow modalities for each segment using 2D CNNs, and then applies spatio-temporal modeling for action recognition. A convolutional long short-term memory (LSTM) architecture is proposed in \cite{donahue2015long}, where the authors first extract features from video frames with a 2D CNN and then apply an LSTM for global temporal modeling. The strength of all these approaches comes from the fact that there are plenty of very successful 2D CNN architectures, which can be pretrained using the very large-scale ImageNet dataset \cite{deng2009imagenet}. Although 2D CNNs perform very well on video analysis tasks, they are limited in modeling temporal information and motion patterns. Therefore, 3D CNNs have been proposed in \cite{tran2015learning, tran2017convnet, hara3dcnns}, which use 3D convolutions and 3D pooling to capture discriminative features along both the spatial and temporal dimensions. Different from 2D CNNs, 3D CNNs take a sequence of video frames as input. In this work, we also use variants of 3D CNNs. Real-time systems for hand gesture recognition require detection and classification to be applied simultaneously on a continuous stream of video. There are several works addressing detection and classification separately. In \cite{ohn2014hand}, the authors apply the histogram of oriented gradients (HOG) algorithm together with an SVM classifier. The authors in \cite{molchanov2015multi} use a special radar system to detect and segment gestures. In our work, we have trained a lightweight 3D CNN for gesture detection. Moreover, in human-computer interfaces, performed gestures must be recognized only once (i.e., single-time activation) by the computer. This is very critical, and the problem has not yet been addressed well. In \cite{molchanov2016online}, the authors apply the connectionist temporal classification (CTC) loss to detect consecutive similar gestures. However, CTC does not provide single-time activations. To the best of our knowledge, this study is the first in which single-time activations are achieved for deep learning based hand gesture recognition.
\section{Equations for the application of the Principle of Equivalent Time} \label{sec:rPETequations} \setcounter{table}{0} \renewcommand{\thetable}{A\arabic{table}} \input{TablePETResults} The proposed method to deal with annealing in thermal histories with variable temperatures is also based on the Arrhenius equation and was first proposed by \cite{goswami1984quantitative}. The Principle of Equivalent Time (PET), on which this method is founded, states that the annealing rate of a track does not depend on its previous thermal history, but only on its current length. The procedure is to infer the magnitude of annealing recursively, dividing the thermal history into appropriate intervals $\Delta t_i$ and starting from a given value of $r$. In each step, annealing is carried out at constant temperature; the first step is centered at $(t_1, T_1)$. The procedure will be shown for the Parallel Arrhenius model. The reduced fission-track length is calculated using Eq.~\eqref{eq:PA}, along with $g(r) = \ln(1-r)$: \begin{equation} r_1 = 1-(\Delta t_1)^{c_1}\exp\left(c_0+\frac{c_2}{R T_1}\right) \end{equation} For the next step, the calculation of $r_2$ incorporates the principle in the form that this new shortening is postulated to be independent of the previous one. Thus, one can find a length of time, $\tau_{r_1}$, that would yield the length $r_1$ for heating at the temperature of the second interval, $T_2$: \begin{equation} \tau_{r_1} = (1-r_1)^{1/c_1}\exp\left(-\frac{1}{R T_2}\frac{c_2}{c_1}-\frac{c_0}{c_1}\right). \end{equation} The value of $r_2$ is then found by applying the annealing model to the interval $\tau_{r_1} + \Delta t_2$: \begin{equation} r_2 = 1-(\tau_{r_1}+ \Delta t_2)^{c_1}\exp\left(c_0+\frac{c_2}{R T_2}\right). \end{equation} This procedure is iteratively repeated over the entire T-t path. The last value of $r$ is the reduced length of the population born in the first interval after experiencing the entire thermal history. For each interval $j$, the formulas above become: \begin{subequations} \begin{eqnarray} r_{j}&= 1-(\tau_{r_{j-1}}+ \Delta t_{j})^{c_1}\exp\left(c_0+\frac{c_2}{R T_{j}}\right)\\ \tau_{r_{j-1}}&= (1-r_{j-1})^{1/c_1}\exp\left(-\frac{1}{R T_{j}}\frac{c_2}{c_1}-\frac{c_0}{c_1}\right) \end{eqnarray} \end{subequations} The formulas for the PA, PC, CM, FA, and FC models are presented in Table~\ref{tab:PETresults}. \section{Introduction} \label{Intro} \input{introduction} \section{Method} \label{Method} \input{methodology} \section{Results and Discussion} \label{sec:Results} \input{results_and_discussion} \section{Concluding remarks} \label{Conclusion} Departing from Eq.~\eqref{eq:rateIntGen}, a physicochemical framework was built to deal with the effects of annealing in variable temperature T-t paths.
The basic building blocks are the reaction function $f_r(r)$ and the effective rate constant, $k_{ef}(t,T)$. The parallel models (PA, PC, and CM) were shown to be consistent with the single activation energy rate constants given by Eqs.~\eqref{eq:general ka}-\eqref{eq:general kc} and with the reaction-order function (Eq.~\eqref{eq:reaction order model}). The fanning models (FA and FC) are the representation of multiple concurrent recombination processes with different activation energies \citep{rufino2022arrhenius}. The $k_{ef}(t,T)$ functions were built (Eq.~\eqref{eq:kef}) to be consistent with the reaction-order function (Eq.~\eqref{eq:reaction order model}). Obtaining the FA and FC rate constants from first principles, i.e., as a composition of rate constants for individual processes, and validating them experimentally is still an open issue to be dealt with. Eq.~\eqref{eq:rRCIgeneral} is the line integral related to Eq.~\eqref{eq:rateIntGen} from which length shortening due to cooling T-t paths can be directly calculated, independently of whether the rate constant represents a single or a multiple activation energy mechanism. The Principle of Equivalent Time, on the other hand, is only valid for single activation energy equations, which, for the fission-track system, are the parallel models. In these cases, the RCI-based calculations are in agreement with the PET ones (Fig.~\ref{fig:rParallel}), indicating the robustness of the RCI formulation. For the fanning models, the use of the PET has long been recognized as an approximation \citep{duddy1988thermal}. Deviations have indeed been observed between RCI and PET-based calculations (Fig.~\ref{fig:rFAFC}). Compared to the application of the RCI, the PET calculation underestimates annealing effects in variable temperature T-t paths (Table~\ref{tab:thermal_indexes}). The PET along with the FA or FC models is the calculation method used to infer most published thermal histories. This procedure introduces a systematic deviation that should be considered in the geological interpretation of thermal history modeling. Alternatively, the rate constant integral (Eq.~\eqref{eq:rRCIgeneral}) could be considered as a substitute for the PET in thermal history inversion codes \citep{ketcham2005software, gallager2012software}. Computationally, solving an integral, even numerically, is faster than the iterative steps necessary to apply the PET. More importantly, if the rate constants are representative of the track annealing kinetics, this framework results, in principle, in more accurate predictions of the annealing effects in samples submitted to variable temperature thermal histories. \section*{Appendix} \subsection{The physicochemical perspective of fission track annealing} The kinetics of chemical reactions can be described by the Arrhenius equation \citep{Arrhenius1889}, which relates the temperature derivative of the logarithm of the reaction rate $k$ to the universal gas constant $R$ and a constant $q$ related to a change in the standard internal energy \citep[p.494]{Laidler1984Arrhenius}: \begin{equation}\label{eq:difArr} \dv{\ln k(T)}{T} = \frac{q}{RT^2}. \end{equation} Eq.~(\ref{eq:difArr}) can be solved for the reaction rate as a function of temperature, $k(T)$, using a pre-exponential factor $A$ and the Arrhenius activation energy $E_a$: \begin{equation}\label{eq:arrk} k(T) = A\exp\left(-E_a/RT\right). \end{equation} Chemical processes that obey Eq.~\eqref{eq:arrk} result in straight lines with slope $-E_a/R$ in Arrhenius plots ($\ln k \times 1/T$).
Among the fission-track annealing models, the Parallel Arrhenius is the only one that actually fits this formulation of a single constant activation energy. Deviations from Eq.~\eqref{eq:arrk} are quite common. To enable a more complete study of chemical reactions, the International Union of Pure and Applied Chemistry (IUPAC) has defined the Arrhenius activation energy \citep{cohen2007iupac}: \begin{equation}\label{eq:Ea_definition} E_a = - R \dv{\ln(k)}{(1/T)}. \end{equation} The Arrhenius activation energy, as defined by Eq.~\eqref{eq:Ea_definition}, is an empirical quantity, intended as a kinetic parameter, that can vary with the temperature of the reaction medium. Its determination depends on prior knowledge of the rate constant, the quantity that encodes the reaction kinetics. Thus, for application to the fission-track system, the reaction rate constant associated with the annealing mechanisms must be found, which can be done using the formalism developed for solid-state processes \citep{Vyazovkin2015Book}. The annealing kinetics of fission tracks is described by empirical \citep{laslett1987thermal, crowley1991Arrhenius, LaslettGabraith1996, Rana2021} or semi-empirical \citep{Carlson1990model, guedes2006kinetic, guedes2013improved} equations relating the reduced track length, $r = L/L_0$ (where $L$ is the length of the fission track after heating and $L_0$ is the unannealed fission-track length), to the duration, $t$, of the constant temperature ($T$) heating. The general form of the annealing equations is: \begin{equation}\label{eq:general annealing equation form} g(r) = f(t,T), \end{equation} \noindent in which $g(r)$ is a transformation of $r$ and $f(t,T)$ defines the geometrical characteristics of the isoretention curves in the pseudo-Arrhenius space ($\ln t \, \times \, 1/T$, Fig.~\ref{fig:pseudoArrhenius}). The Parallel Arrhenius (PA, Eq.~\eqref{eq:PA}) and Fanning Arrhenius (FA, Eq.~\eqref{eq:FA}) equations \citep{laslett1987thermal}, the Parallel Curvilinear (PC, Eq.~\eqref{eq:PC}) and Fanning Curvilinear (FC, Eq.~\eqref{eq:FC}) models \citep{crowley1991Arrhenius}, as well as the Carlson Model (CM, Eq.~\eqref{eq:CM}), which mixes the Parallel Arrhenius and Parallel Curvilinear models in the same equation \citep{Carlson1990model}, are used in this analysis. The transformation function $g(r)=\ln(1-r)$ was chosen because it carries no fitting parameters and was shown to produce good fits to annealing data \citep{guedes2022generalization}. In addition, it arises naturally from the physicochemical formulation of fission-track annealing \citep{rufino2022arrhenius}, as will be shown below. More comprehensive descriptions of the annealing models can be found elsewhere \citep{Carlson1990model, Ketcham2019, guedes2022generalization}. \begin{figure}[!h] \centering \subfloat[]{\label{fig:pseudoArrheniusLinear} \includegraphics[width=0.49\linewidth]{figs/pltPsArrLinear.png} } \subfloat[]{\label{fig:pseudoArrheniusCurvilinear} \includegraphics[width=0.49\linewidth]{figs/pltPsArrCurvilinear.png} } \caption{Representation of the Arrhenius fission-track annealing models in the pseudo-Arrhenius plot. (a) Fanning Arrhenius, Parallel Arrhenius, and Carlson models. (b) Fanning Curvilinear and Parallel Curvilinear models. Laboratory annealing data are c-axis projected reduced fission-track lengths from Durango apatite \citep{Carlson1999data}. Data from the geological benchmark KTB \citep{Wauschkuhn2015KTB} are included only for reference. The models are represented as isoretention curves.
Points on these curves are the temperature and time of constant temperature heating resulting in the same reduced length. } \label{fig:pseudoArrhenius} \end{figure} The annealing data set of c-axis projected fission tracks for Durango apatite \citep{Carlson1999data} was used for model fitting. Durango apatite annealing data were chosen because Durango is a well-known standard sample often used in methodological studies \citep{Green1986, Carlson1990model, Ketcham1999, ketcham2007improved, Rana2021, guedes2022generalization, rufino2022arrhenius}. The fitting parameters for the PA, PC, FA and FC models are the same as presented in \citet{rufino2022arrhenius}. They were numerically determined using the function \texttt{nlsLM} of the package \texttt{minpack.lm} \citep{Elzhov2016}, written in the \texttt{R} language, which applies the Levenberg-Marquardt algorithm to minimize the residual sum of squares (RSS), using the inverse squared uncertainties of $r$ as weights. With the same method, fitting parameters were also obtained for the CM model. The fitting parameters are presented in the last column of Table \ref{tab:synthesis}. \include{TableResults} Fission tracks are formed by displaced atoms and vacant sites, in concentrations high enough to change the structure of the mineral in a volume of about 2-10~nm in diameter and around 20 $\mu$m in length. The annealing process is the recombination of defects and vacancies, which also changes the neighboring structure and, consequently, the recombination rate. This kind of solid-state reaction is described by the conversion rate equation \citep{Vyazovkin2015Book}: \begin{equation}\label{eq:general conversion rate} \frac{\text{d}\alpha}{\text{d}t} = k(T)f_{\alpha}(\alpha). \end{equation} Eq.~\eqref{eq:general conversion rate} relates the conversion rate of the reactant $\alpha$ to the rate constant and to the reaction function $f_\alpha(\alpha)$. For fission tracks, $\alpha$ is the concentration of recombined atoms, and $f_\alpha(\alpha)$ describes how the recombination process changes the surrounding structure. The track length can be used as a proxy for the concentration of displaced atoms \citep{rufino2022arrhenius}: \begin{equation}\label{eq:fission track conversion rate} \alpha=\frac{L_0-L}{L_0}=1-r \end{equation} \noindent and with this change of variable: \begin{equation}\label{eq:diff rkf} \frac{\text{d}r}{\text{d}t}=-k_{ef}(t,T)f_r(r). \end{equation} The rate constant has been replaced with an effective rate constant, $k_{ef}(t,T)$, which may depend on time and temperature and is suitable to describe more complex reactions \citep{vyazovkin2016time}. For the reaction function, the reaction-order function has already been shown to produce consistent results, mainly for the single activation energy mechanisms of annealing \citep{green1988can, rufino2022arrhenius}: \begin{equation}\label{eq:reaction order model} f_r=(1-r)^n \end{equation} \noindent in which $n$ is the reaction order. Eq.~\eqref{eq:diff rkf} is a differential equation that can be solved by separation of variables. To define the limits of the integral, consider that at the beginning of the thermal history ($t=0$) the track is unannealed ($r=1$). After a heating duration $t$, the track length has been shortened to $r$. Then: \begin{equation}\label{eq:rateIntGen} \int_r^1 \frac{\text{d}r}{f_r(r)} = \int_0^t -k_{ef}(t,T) \text{d}t.
\end{equation} Eq.~\eqref{eq:rateIntGen} is the basic equation from which the annealing kinetics can be studied from a physicochemical perspective. Once the reaction function and the rate constant are chosen, the dependence of the reduced fission-track length can be calculated over any T-t path. Let us start with the known case of constant temperature heating, from which the annealing equations should be recovered. For the single activation energy models, PA, PC, and CM, the rate constants are given by: \begin{subequations}\label{eq:general k} \begin{alignat}{2} k_{ef}(T)_{PA} &= A_{1} \exp\left(\frac{-Q_{1}}{R T}\right), \label{eq:general ka}\\ k_{ef}(T)_{PC} &= A_{2} (RT)^{m}, \label{eq:general kb}\\ k_{ef}(T)_{CM} &= A_{3} (R T) \exp\left(-\frac{Q_{3}}{R T}\right), \label{eq:general kc} \end{alignat} \end{subequations} \noindent where $A_i$, $Q_i$, and $m$ are constants. Eq.~\eqref{eq:general ka} is the original Arrhenius equation, from which the PA equation is derived. $Q_1$ can be directly identified with the activation energy only in this case. Eq.~\eqref{eq:general kb} generates the PC equation with a temperature-dependent activation energy (Table~\ref{tab:synthesis}, Eq.~\eqref{eq:EaPC}). Eq.~\eqref{eq:general kc} generates the Carlson Model, also with a temperature-dependent activation energy (Table~\ref{tab:synthesis}, Eq.~\eqref{eq:EaCM}). It is the product of Eqs.~\eqref{eq:general ka} and \eqref{eq:general kb}, with $m=1$, and was proposed soon after the original Arrhenius equation to deal with reactions that deviate from the expected Arrhenius behavior \citep{Kooij1893}. Note that although the activation energies in Eqs.~\eqref{eq:general kb} and \eqref{eq:general kc} depend on temperature, they still fall into the category of single activation energy processes, meaning that all recombination events at a given temperature have the same activation energy. Annealing experiments are isothermal heating procedures. Substituting the effective rate constants (Eqs.~\eqref{eq:general k}) into the integral equation (Eq.~\ref{eq:rateIntGen}), together with the reaction-order function defined in Eq.~\eqref{eq:reaction order model}, and solving with the temperature held constant results in: \begin{subequations}\label{eq:int solutions} \begin{alignat}{2} \ln(1-r) &= \frac{\ln\left[ A_1(1-n)\right]}{1-n} + \frac{1}{1-n} \ln (t) - \frac{Q_1}{1-n}\frac{1}{RT}, \label{eq:int solution PA}\\ \ln(1-r) &= \frac{\ln[A_2 (1-n)]}{1-n}+\frac{1}{1-n}\ln (t) -\frac{m}{1-n} \ln \left(\frac{1}{RT}\right), \label{eq:int solution PC}\\ \ln(1-r) &= \frac{\ln \left[A_3 (1-n)\right]}{1-n}+\frac{1}{1-n}\ln(t) - \frac{Q_3}{1-n}\frac{1}{RT} -\frac{1}{1-n} \ln \left(\frac{1}{RT}\right), \label{eq:int solution CM} \end{alignat} \end{subequations} \noindent which are the equations for the PA \eqref{eq:int solution PA}, PC \eqref{eq:int solution PC}, and CM \eqref{eq:int solution CM} models with $g(r)=\ln (1-r)$. For the chosen reaction function, the integral only has a real solution if $n < 1$.
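As a numerical sanity check of Eq.~\eqref{eq:int solution PA}, the closed form of the isothermal integral for the PA rate constant, $1-r=\left[(1-n)\,k_{ef}\,t\right]^{1/(1-n)}$, can be compared with the right-hand side of Eq.~\eqref{eq:int solution PA} directly. The sketch below does so; the kinetic parameter values are hypothetical placeholders, not fitted values.

\begin{verbatim}
import numpy as np

R = 1.987e-3                      # gas constant [kcal/(mol K)]
A1, Q1, n = 1.0e12, 40.0, -4.0    # hypothetical A_1, Q_1 [kcal/mol], order n

def k_PA(T):
    # Single activation energy rate constant, Eq. (general ka)
    return A1 * np.exp(-Q1 / (R * T))

def r_isothermal(t, T):
    # Closed form of Eq. (rateIntGen) at constant T with f_r = (1-r)^n
    return 1.0 - ((1.0 - n) * k_PA(T) * t) ** (1.0 / (1.0 - n))

t, T = 1.0, 450.0                 # arbitrary heating duration and temperature
lhs = np.log(1.0 - r_isothermal(t, T))
rhs = (np.log(A1 * (1.0 - n)) + np.log(t) - Q1 / (R * T)) / (1.0 - n)
assert np.isclose(lhs, rhs)       # reproduces Eq. (int solution PA)
\end{verbatim}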
Comparing the right-hand sides of these equations respectively with Eqs.~\eqref{eq:PA}, \eqref{eq:PC}, and \eqref{eq:CM}, one finds that the rate constant parameters are related to the fitting parameters of the annealing equations as \begin{align} \text{PA}: n&=\frac{c_1-1}{c_1} & Q_1 &= -\frac{c_2}{c_1} & A_1&=c_1\exp(c_0/c_1) \label{eq:rec PA}\\ \text{PC}: n&=\frac{c_1-1}{c_1} & m &=-\frac{c_2}{c_1} & A_2&=c_1\exp(c_0/c_1) \label{eq:rec PC}\\ \text{CM}: n&=\frac{c_1-1}{c_1} & Q_3 &= -\frac{c_2}{c_1} & A_3&=c_1\exp(c_0/c_1)\label{eq:rec CM} \end{align} In this way, the rate constants can be expressed in terms of the fitting parameters of the annealing models, as shown in Eqs.~\eqref{eq:kPA}, \eqref{eq:kPC}, and \eqref{eq:kCM} of Table~\ref{tab:synthesis}. The values of the reaction order $n$ for the three models are $n\approx -4$, in agreement with a similar analysis carried out by \citet{green1988can} for the PA model. Therefore, the parallel models are not compatible with first-order annealing kinetics, meaning that the neighboring structure has a strong influence on the rate of defect recombination during annealing. There are no obvious expressions for the rate constants of the fanning models. A physicochemical analysis of their trends indicates that multiple concurrent processes with different activation energies occur during the annealing of fission tracks \citep{rufino2022arrhenius}, in agreement with previous suggestions \citep{green1988can, tamer2020low}. \citet{rufino2022arrhenius} derived an expression from Eq.~\eqref{eq:diff rkf} to find the effective rate constant from the annealing model: \begin{equation}\label{eq:kef} k_{ef}(t,T) = -\left.\frac{1}{f_r(r)}\left[ \pdv{g(r)}{r} \right] ^{-1}\pdv{f(t,T)}{t} \right|_{T} \end{equation} Eq.~\eqref{eq:kef} provides a direct way to calculate this effective reaction rate constant from the model functions that fit the experimental annealing data, $f(t,T)$ and $g(r)$, and from the reaction function $f_r(r)$. The partial derivative with respect to time is taken because the annealing models were designed to describe constant temperature experiments. As a check, before applying Eq.~\eqref{eq:kef} to the fanning models, one can show that Eqs.~\eqref{eq:kPA}, \eqref{eq:kPC}, and \eqref{eq:kCM} are recovered by the application of Eq.~\eqref{eq:kef} respectively to Eqs.~\eqref{eq:PA}, \eqref{eq:PC}, and \eqref{eq:CM}, with $g(r)=\ln(1-r)$ and $f_r(r)=(1-r)^n$. The same procedure can be applied to Eqs.~\eqref{eq:FA} and \eqref{eq:FC} to find the effective reaction rates respectively for the Fanning Arrhenius (Eq.~\eqref{eq:kFA}) and Fanning Curvilinear (Eq.~\eqref{eq:kFC}) models. An alternative way to infer the effective rate constants for the FA and FC models is to depart from the hypothesis that the Arrhenius activation energies and, therefore, the rate constants depend on the fission-track reduced length. Then, integration of Eq.~\eqref{eq:rateIntGen} under the isothermal condition results in \begin{equation} \label{eq:krIntegral} -\int_1^r \frac{\text{d}r}{f_r(r)k_{ef}(r)}=\int_0^t \text{d}t=t. \end{equation} It can be shown that the primitive functions that make Eq.~\eqref{eq:krIntegral} true for the FA and FC models, with $f_r(r)=(1-r)^n$ and $g(r)=\ln(1-r)$, are the ones with the effective rate constants given by Eqs.~\eqref{eq:kFA} and \eqref{eq:kFC}.
This approach also illustrates how the incorporation of time into the rate constant and, therefore, into the activation energies of the fanning models follows from the dependence of the activation energies on the values of $r$. To obtain the reaction order $n$ for the FA and FC models, the effective reaction rate constants given by Eqs.~\eqref{eq:kFA} and \eqref{eq:kFC} are integrated with Eq.~\eqref{eq:rateIntGen} considering constant temperatures (isothermal experiments): \begin{subequations}\label{eq:int equations} \begin{alignat}{2} \int_{1}^{r} \frac{\text{d}r}{(1-r)^n} &= -\int_{0}^{t} \frac{c_1}{\frac{1}{RT}-c_3}\frac{1}{t}\exp\left[-(n-1)\left(c_0+c_1\frac{\ln t - c_2}{\frac{1}{RT}-c_3}\right)\right] \text{d}t \label{eq:int equation FA}\\ \int_{1}^{r} \frac{\text{d}r}{(1-r)^n} &= -\int_{0}^{t} \frac{c_1}{\ln(\frac{1}{RT})-c_3}\frac{1}{t}\exp\left[-(n-1)\left(c_0+c_1\frac{\ln t - c_2}{\ln(\frac{1}{RT})-c_3}\right)\right] \text{d}t\label{eq:int equation FC} \end{alignat} \end{subequations} With the necessary condition of $n<1$, the solution of the integral equation \eqref{eq:int equation FA} is \begin{equation} (1-r)=(-1)^{-1/(n-1)}\exp\left[c_0+c_1\frac{\ln(t)-c_2}{\frac{1}{RT}-c_3}\right] \end{equation} As the solution of this equation is to represent the shortening of the fission tracks, $(1-r)$ must be a real value between 0 and 1, which is true only if $-1/(n-1)$ is an even and positive integer $2j$. Then, the values of $n$ are restricted to \begin{equation} \label{eq:frac n} n = \frac{2j-1}{2j} \end{equation} \noindent where $j=1,2,3,...$. With this condition and $g(r)=\ln(1-r)$, the FA model (Table~\ref{tab:synthesis}, Eq.\eqref{eq:FA}) is recovered. The solutions for Eqs.~\eqref{eq:int equation FA} and \eqref{eq:int equation FC} are similar, differing only in the logarithm of $1/RT$ appearing for the FC model instead of $1/RT$ for the FA model, both of which are constants in this case. The previous analysis therefore also holds for the FC model. The values of $n$ are fractional for FA and FC ($n=1/2, 3/4, 5/6, 7/8,...$), according to Eq.~\eqref{eq:frac n}. Fractional reaction orders are characteristic of multi-step reactions or more complex kinetic mechanisms, as has been explained for the decomposition of acetaldehyde \citep{laidler1965chemical}, a well-known example of fractional reaction order in chemistry. However, for fission tracks, where the displaced atoms and vacant sites take the role of reactants and the deformed track structure is the reaction medium, explanations of the kinetics of a single reactant via a mean-field approximation (MFA) may not be appropriate \citep{cordoba2003fractional}. Thus, for the effective reaction rate constant $k_{ef}$ of the fanning annealing models, mechanistic modeling considering the intermediate steps, i.e., recognizing the reaction order of each mechanism involved in annealing, would be desirable to elucidate the meaning of the fractional reaction order found \citep{koga1992fractional}. The rate constants for FA and FC are to be viewed as effective equations constraining the general behavior of annealing, but they do not allow the description of the specifics of the annealing kinetics. In this physicochemical framework of fission-track annealing, the effective reaction rate constant, $k_{ef}(t,T)$, and the reaction function, $f_r(r)$, are the fundamental building blocks from which the fission-track annealing kinetics can be studied.
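The check mentioned above, that Eq.~\eqref{eq:kef} applied to the PA model recovers a time-independent Arrhenius rate constant, can also be reproduced symbolically. The following is a minimal sketch under the assumptions stated in the comments; the positivity declarations merely help the simplification and do not reflect the signs of the fitted parameters.

\begin{verbatim}
import sympy as sp

r, t, T, c0, c1, c2, R = sp.symbols('r t T c0 c1 c2 R', positive=True)
n = (c1 - 1) / c1                        # reaction order implied by the fit

g  = sp.log(1 - r)                       # transformation g(r)
f  = c0 + c1*sp.log(t) + c2/(R*T)        # PA model: g(r) = f(t, T)
fr = (1 - r)**n                          # reaction-order function

# Eq. (kef): k_ef = -(1/f_r) [dg/dr]^(-1) df/dt at fixed T
k_ef = -(1/fr) * (1/sp.diff(g, r)) * sp.diff(f, t)
k_ef = k_ef.subs(r, 1 - sp.exp(f))       # eliminate r via 1 - r = exp(f)
print(sp.simplify(k_ef))                 # expected: c1*exp(c0/c1 + c2/(c1*R*T)),
                                         # i.e., the explicit t dependence cancels
\end{verbatim}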
The application to the constant temperature annealing made it possible to determine the rate constant parameters from the empirically determined parameters of the annealing equations. The calculation of the Arrhenius activation energies ($E_a$) for the different models becomes possible through Eq.~\eqref{eq:Ea_definition}. The Arrhenius activation energies of the parallel annealing models (Eqs.~\ref{eq:EaPA}, \ref{eq:EaPC}, and \ref{eq:EaCM}) are constant with respect to the variable $t$. As for the fanning equations, $E_a$ varies with time and temperature (Eqs.~\ref{eq:EaFA} and \ref{eq:EaFC}). However, the main advantage of this approach is the possibility of calculating the fission-track length reduction over any T-t path using Eq.~\eqref{eq:rateIntGen}, without resorting to the iterative application of the Principle of Equivalent Time. \subsection{Fission track annealing under variable temperature thermal histories} Fission-track thermal history inference is based on the Principle of Equivalent Time (PET) \citep{goswami1984quantitative,duddy1988thermal}, which is an iterative method that allows the application of isothermal annealing models to variable temperature T-t paths. It is detailed in Appendix \ref{sec:rPETequations}. In general, a given variable temperature thermal history is divided into finite time intervals $\Delta t_i$, centered at times $t_i$ and temperatures $T_i$. In the time interval in which the population was born, a first reduced length is calculated by applying the annealing equation, using the temperature of the T-t path and the duration of the interval ($T_i,\Delta t_i$). In the next interval, at a different temperature on the T-t path, the annealing model is used to find an equivalent time capable of producing the same length shortening as the previous interval but at the new temperature. A new length shortening is then calculated by applying the annealing model to the period of time that is the sum of the equivalent time and the length of the time interval. This procedure is repeated: at any given temperature $T_i$ on the T-t path, an equivalent time $\tau_i$, which reproduces the length shortening of the previous interval, $r_{i-1}$, is determined, so that the new length shortening can be calculated as if the track had been at the same constant temperature from the beginning. The reduced length is updated ($r_i$) by calculating it as the result of heating at $T_i$ for the duration $\tau_i+\Delta t_i$. The hypothesis that the annealing kinetics does not depend on the previous thermal history of the track, but only on its current length, so that any previous T-t path can be replaced with a constant temperature heating resulting in this length, is the basis of this procedure and defines the Principle of Equivalent Time. In practice, this means that the track has no memory of the time and temperature conditions of its previous shortenings. The equations for the application of the PET with the PA, PC, CM, FA, and FC models can be found in Table~\ref{tab:PETresults} in Appendix~\ref{sec:rPETequations}. The physicochemical tool presented in the previous section provides an alternative way to access variable temperature annealing kinetics by solving the integral on the right-hand side of Eq.~\eqref{eq:rateIntGen} over a T-t path. Eq.~\eqref{eq:rateIntGen} is solved as a line integral.
A suitable parameterization is: \begin{equation}\label{eq:Sparametrization} s = \begin{cases} T=T(u)\\ t=u \end{cases}, \\ \text{d}s=\sqrt{1+\left(\frac{\text{d}T}{\text{d}u}\right)^2}\text{d}u. \end{equation} Substituting the parameterized variables into the right-hand side of the integral equation (Eq.~\ref{eq:rateIntGen}), \begin{equation}\label{eq:intkefparamimplies} I = \int_{0}^tk_{ef}\left(t(u),T(u)\right)\frac{\text{d}s}{\sqrt{1+\left(\frac{\text{d}T}{\text{d}u}\right)^2}} \implies I = \int_{0}^tk_{ef}\left(t(u),T(u)\right)\text{d}u. \end{equation} Solving the left-hand side of Eq.~\eqref{eq:rateIntGen} for the $f_r(r)$ function given by Eq.~\eqref{eq:reaction order model}, and using the parameterized integral for the rate constant (Eq.~\eqref{eq:intkefparamimplies}), the reduced length, after the track has experienced the thermal history given by the T-t path, is \begin{equation}\label{eq:rRCIgeneral} r = 1-\left((1-n)\int_{0}^tk_{ef}\left(t(u),T(u)\right)\text{d}u\right)^{1/(1-n)}, \end{equation} \noindent in which $n<1$ as usual. At first glance, the advantage of the Rate Constant path Integral (RCI, Eq.~\eqref{eq:rRCIgeneral}) is that it is a one-shot calculation of the reduced track length. In addition, there is no need to restrict the form of the rate constant function and, therefore, the annealing mechanism it represents. \subsection{Parallel models} \label{sec:resParallel} To solve the RCI, the effective rate constant functions for the parallel models (Table~\ref{tab:synthesis}, Eqs. \eqref{eq:kPA}, \eqref{eq:kPC}, and \eqref{eq:kCM}) are inserted in Eq.~\eqref{eq:rRCIgeneral} with the variable $T$ replaced with $T(t)=T_0-\dot{T}t$ wherever it appears. The analytical solutions for the reduced length shortening calculated for the PA, PC, and CM are \begin{subequations}\label{eq:RCI parallels} \begin{alignat}{2} r_{PA} &= 1-\left(\frac{e^{c_0/c_1} \left(c_2 \text{Ei}\left(\frac{c_2}{c_1 R (T_0- \dot{T}t) }\right)-c_2 \text{Ei}\left(\frac{c_2}{c_1 R T_0}\right)+c_1 R \left((\dot{T} t-T_0) e^{\frac{c_2}{c_1 R (T_0-\dot{T} t)}}+T_0 e^{\frac{c_2}{c_1 R T_0}}\right)\right)}{c_1 \dot{T} R}\right)^{c_1},\label{eq:RCI PA}\\ r_{PC} &= 1-\left(\frac{c_1 e^{c_0/c_1} \left((\dot{T} t-T_0) (R (T_0-\dot{T} t))^{-\frac{c_2}{c_1}}+T_0 (R T_0)^{-\frac{c_2}{c_1}}\right)}{c_1 \dot{T} -c_2 \dot{T} }\right)^{c_1},\label{eq:RCI PC}\\ \begin{split} r_{CM} = 1-2^{-c_1}e^{c_0}\frac{ c_1^{-2 c_1}}{(\dot{T} R)^{c_1}}\left[c_1 R \left(T_0 e^{\frac{c_2}{c_1 R T_0}} (c_1 R T_0+c_2)-(T_0-\dot{T} t) e^{\frac{c_2}{c_1 R T_0-c_1 \dot{T} R t}} (c_1 R (T_0-\dot{T} t)+c_2)\right)\right.\\ +c_2^2 \left.\left(\text{Ei}\left(\frac{c_2}{c_1 R T_0-c_1 R t \dot{T} }\right)-\text{Ei}\left(\frac{c_2}{c_1 R T_0}\right)\right)\right]^{c_1},\label{eq:RCI CM} \end{split} \end{alignat} \end{subequations} \noindent where Ei is the exponential integral function. Eqs.~\eqref{eq:RCI PA}-\eqref{eq:RCI CM} give the resulting reduced length $r$ for the parallel models, as functions of the three variables that characterize the thermal history: the duration of the T-t path ($t$), the cooling rate ($\dot{T}$), and the temperature at the time when the track was born ($T_0$). The parameters $c_i$ are given in the last column of Table~\ref{tab:synthesis}. The values of $r$ for the cooling path with the cooling rate $\dot{T}=1.0 ^\circ$C/Ma, calculated with the three parallel models, are presented in Fig.~\ref{fig:rParallel}. For each point, the value of $r$ is the length reduction after a linear cooling of duration $t$, as measured at the present.
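For the parallel models, the agreement between the one-shot RCI quadrature and the iterative PET recursion can also be verified numerically; the sketch below does so for the PA model under a linear cooling path. The parameter values and the cooling history are hypothetical placeholders, not the fitted values of Table~\ref{tab:synthesis}.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

R = 1.987e-3                       # gas constant [kcal/(mol K)]
c0, c1, c2 = 9.5, 0.2, -10.0       # hypothetical PA fitting parameters
n = (c1 - 1.0) / c1                # implied reaction order

def k_PA(T):
    # Time-independent PA rate constant, Eq. (kPA), in fit parameters
    return c1 * np.exp(c0 / c1) * np.exp(c2 / (c1 * R * T))

def r_RCI(t, T0, Tdot):
    # One-shot rate constant path integral, Eq. (rRCIgeneral),
    # along T(u) = T0 - Tdot*u
    I, _ = quad(lambda u: k_PA(T0 - Tdot * u), 0.0, t)
    x = (1.0 - n) * I
    return 1.0 - x ** (1.0 / (1.0 - n)) if x < 1.0 else 0.0

def r_PET(t, T0, Tdot, steps=2000):
    # Iterative PET recursion for the PA model (Appendix A)
    dt, r_cur = t / steps, 1.0
    for i in range(steps):
        T = T0 - Tdot * (i + 0.5) * dt          # mid-interval temperature
        tau = (1.0 - r_cur) ** (1.0 / c1) \
              * np.exp(-(c0 + c2 / (R * T)) / c1)   # equivalent time
        r_cur = 1.0 - (tau + dt) ** c1 * np.exp(c0 + c2 / (R * T))
    return max(r_cur, 0.0)

t, T0, Tdot = 50.0, 400.0, 1.0     # Ma, K, and K/Ma linear cooling
print(r_RCI(t, T0, Tdot), r_PET(t, T0, Tdot))  # values nearly coincide
\end{verbatim}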
Values of $r=0$ mean that the tracks have been erased before the present. Values obtained by the RCI solutions (Eqs.~\eqref{eq:RCI PA}-\eqref{eq:RCI CM}) are represented as red curves marked with red circles, and the values calculated using the PET are represented as blue curves marked with blue squares. RCI and PET calculations produce very close values of $r$ for the three parallel models (Figs.~\ref{fig:rPA}, \ref{fig:rPC}, \ref{fig:rCM}). \begin{figure}[!h] \centering \subfloat[]{\label{fig:rPA} \includegraphics[width=0.32\linewidth]{figs/pltPA.png} } \subfloat[]{\label{fig:rPC} \includegraphics[width=0.32\linewidth]{figs/pltPC.png} } \subfloat[]{\label{fig:rCM} \includegraphics[width=0.32\linewidth]{figs/pltCM.png} } \caption{Values of the reduced track lengths ($r$), after a linear cooling ($\dot{T}=1.0^\circ$C/Ma) starting at time $t$ and ending at the present time at a fixed temperature of 20$^\circ$C, calculated using the Parallel models: (a) Arrhenius, (b) Curvilinear, and (c) Carlson. The points that form the curve in red (circle marks) were calculated by applying the RCI (Eqs.~\ref{eq:RCI PA}, \ref{eq:RCI PC}, and \ref{eq:RCI CM}). The values calculated using the PET are in blue (square marks). } \label{fig:rParallel} \end{figure} The temperature indexes Closure Temperature ($T_C$) and Total Annealing Temperature ($T_A$) were also calculated for the three parallel models, applying both methods of calculation, for cooling T-t paths with cooling rates of 1, 10, and 100 $^\circ$C/Ma. $T_C$ is, for a monotonic cooling thermal history, the temperature at the time corresponding to the apparent sample age \citep{dodson1973closure}. $T_A$ is the temperature at the time corresponding to the age of the oldest track that has not been erased and can still be counted in the sample \citep{issler1996TA}. Details of the method of calculation of the index temperatures can be found in \citet{guedes2013improved}. $T_C$ and $T_A$ are meaningful quantities that allow quantifying the impact of using the RCI instead of the iterative PET calculation. The uncertainties in $T_C$ and in $T_A$ were estimated by simple error propagation of the apparent ($T_C$) or retention ($T_A$) ages and the present-day temperature. Results are shown in Table~\ref{tab:thermal_indexes}. Setting the PET results as the reference values, given that the PET is the method established in the literature, a relative error analysis can be carried out to verify the internal consistency between PET and RCI calculations. The relative error between PA PET and PA RCI is on average 0.69$\%$ for $T_C$ and 1.19$\%$ for $T_A$. The same trend is found for calculations of $T_C$ and $T_A$ with PC and CM: 0.66$\%$ and 0.58$\%$ (PC) and 0.7$\%$ and 1.19$\%$ (CM). All errors are well inside the estimated uncertainties for the temperature index values and are most probably artifacts of the PET numerical calculation. \input{TableThermalIndexes} The PET was formulated under the hypothesis that the annealing of fission tracks is a single activation energy process \citep{duddy1988thermal}. The internal consistency between PET and RCI values of $r$, $T_C$, and $T_A$ calculated with the parallel models is a check of the robustness of the physicochemical approach in dealing with variable temperature thermal histories. It is to be noted that not only the PA model, in which the activation energy is temperature-independent (Table~\ref{tab:synthesis}, Eq.~\eqref{eq:EaPA}), but also the PC model, in which the activation energy is temperature-dependent (Table~\ref{tab:synthesis}, Eq.~\eqref{eq:EaPC}), shows such internal consistency.
The same agreement is observed for CM. The CM activation energy may vary with temperature but, with the parameters shown in Table~\ref{tab:synthesis}, its Arrhenius activation energy is approximately constant, since the value of $c_2/c_1$ (54.9 kcal/mol) is much higher than typical values of $RT$ (< 1.0 kcal/mol). Although the activation energies may vary with temperature, these models imply that, at any given temperature, the recombination events take place with the same activation energy. This is a sufficient condition for the applicability of the PET. \subsection{Fanning models} \label{sec:resFanning} The values of $r$, $T_C$, and $T_A$ for the cooling T-t path were also calculated for the fanning models, using both the iterative PET and the RCI methods. However, the RCI (Eq.~\eqref{eq:rRCIgeneral}) could not be solved analytically for the FA and FC rate constants (Table~\ref{tab:synthesis}, Eqs.~\eqref{eq:kFA} and \eqref{eq:kFC}). The integrals were then solved numerically with the Wolfram Mathematica software \citep{Mathematica}. For validation, the integrals for the parallel models were also solved numerically, resulting in exactly the same values as obtained with the analytical solutions (Eqs.~\eqref{eq:RCI PA}-\eqref{eq:RCI CM}). Another feature to be considered is that only certain fractional values are allowed for the reaction order $n$, given by Eq.~\eqref{eq:frac n}. The analysis will be limited to $n=0.5$, $n=0.75$, and $n=0.9$. The numerical method breaks down when $n>0.95$, although the mathematical upper bound is $n<1$. The reduced length calculation results are shown in Fig.~\ref{fig:rFAFC}. Values calculated with the PET are shown in blue, with triangle marks, while values found by solving the RCI are shown in red ($n=0.5$), purple ($n=0.75$), and light purple ($n=0.9$), with circle, square, and diamond marks, respectively. The RCI $r$ curves are very close to each other but depart from the $r$ values calculated with the PET. Significant differences between RCI and PET $r$ values are observed for the FC (Fig.~\ref{fig:rFC}) and for the FA (Fig.~\ref{fig:rFA}) models. \begin{figure}[!h] \centering \subfloat[]{\label{fig:rFA} \includegraphics[width=0.4\linewidth]{figs/pltFA.png} } \subfloat[]{\label{fig:rFC} \includegraphics[width=0.4\linewidth]{figs/pltFC.png} } \caption{Values of the reduced track lengths ($r$), after a linear cooling ($\dot{T}=1.0^\circ$C/Ma) starting at time $t$ and ending at the present time at a fixed temperature of 20$^\circ$C, calculated using the Fanning models: (a) Arrhenius and (b) Curvilinear. The points that form the curves in red (circle marks), purple (square marks), and light purple (diamond marks) were calculated by applying the RCI, respectively with $n=0.5$, $n=0.75$, and $n=0.9$. The values calculated using the PET are in blue (triangle marks). } \label{fig:rFAFC} \end{figure} The values of $T_C$ and $T_A$ for the fanning models, calculated using the PET and the RCI ($n=0.5$), are in Table~\ref{tab:thermal_indexes}. For the FA model, the mean relative errors between the PET and the RCI $T_C$ and $T_A$ calculations are respectively 6.25$\%$ and 5.68$\%$. For the FC model, the same comparisons result in still more significant differences: 9.57$\%$ ($T_C$) and 8.48$\%$ ($T_A$). The deviations between the values calculated using the PET and the RCI are much more significant than the ones found for the parallel model calculations. One major issue is that the fanning models do not fulfill the single activation energy hypothesis on which the PET is founded.
The fanning models emerge from multiple concurrent processes with different activation energies \citep{tamer2020low, rufino2022arrhenius}. The effective Arrhenius activation energies incorporate a time dependence (Table~\ref{tab:synthesis}, Eqs.~\eqref{eq:EaFA} and \eqref{eq:EaFC}) that is the consequence of their dependence on the current reduced fission-track length (different slopes of the isoretention curves in the pseudo-Arrhenius plot). On the other hand, the rate constant integral (Eq.~\eqref{eq:rRCIgeneral}) was obtained in a physicochemical framework developed to deal with chemical reactions that do not fit the single activation energy Arrhenius law \citep{Vyazovkin2015Book}. It is, by design, suitable for complex activation energy systems like the ones pictured by the fanning models. Note also that the presented figures are particular to the fitting parameters in Table~\ref{tab:synthesis}. A different set of parameters would result in different values without changing the conclusion that RCI and PET predictions deviate from each other. \subsection{Implications for the thermal history modeling} \label{sec:implications} The fanning models, especially the Fanning Curvilinear, have been shown to produce better fits to laboratory data and better geological extrapolation of annealing effects \citep{ketcham2007improved, guedes2013improved}. However, the application of the FC along with the PET is an approximation. Compared with the RCI formulation, it underestimates the annealing effect by about 10$\%$, i.e., it predicts that higher temperatures are necessary for the same length shortening as calculated with the RCI for the tested cooling histories. In the context of the inverse problem of inferring T-t paths from the FT age and track length distribution of a mineral sample, this implies the requirement of a longer residence time in the partial annealing zone. For instance, compare, in Table~\ref{tab:thermal_indexes}, the FC $T_A$ calculated with the RCI for a cooling rate of 10$^\circ$C/Ma (148$^\circ$C) with the FC $T_A$ calculated with the PET for a cooling rate of 1$^\circ$C/Ma (143$^\circ$C). The same analysis applies to the FA model, with less significant relative error figures (about 6$\%$). The Parallel models (PA, PC, and CM), which can be safely applied along with the PET, have long been ruled out for FTT studies \citep{laslett1987thermal, guedes2013improved, Ketcham1999, ketcham2007improved, Ketcham2019}. \citet{duddy1988thermal} argued that the FA deviated only slightly from the PA model and applied it along with the PET. The isoretention curves for the two models follow approximately the same trends (Fig.~\ref{fig:pseudoArrheniusLinear}). The same behavior is observed for the curvilinear models (Fig.~\ref{fig:pseudoArrheniusCurvilinear}): the FC and PC isoretention curves bend together towards lower temperatures. Their argument can be better appreciated in Fig.~\ref{fig:rAll}, in which all the PET and RCI predictions for reduced lengths after the track underwent the cooling history are gathered in the same plot. Note that the linear models (PA and FA) and the approximately linear CM form a cluster, while the curvilinear models (PC and FC) form a separate set. The predictions with the fanning models and PET are closer to the predictions of the parallel models for track populations born when the sample passed through intermediate temperatures (partial annealing zone), which results in closer $T_C$ values (compare values in Table~\ref{tab:thermal_indexes}).
For populations born at higher temperatures, the fanning-PET predictions depart from the parallel model ones, resulting in a more significant difference between the calculated $T_A$ values. Calculations with the RCI, in contrast, bring the fanning and parallel model predictions closer together for populations born at higher temperatures. Within this framework, it could even be possible to engineer the fanning model parameters to bring the model still closer to a parallel model. \begin{figure}[ht] \centering \includegraphics[width=0.4\linewidth]{figs/pltAll20.png} \caption{Values of the reduced track lengths ($r$), after a linear cooling ($\dot{T}=1.0^\circ$C/Ma) starting at time $t$ and ending at the present time at a fixed temperature of 20$^\circ$C, calculated for the parallel and fanning models. Calculations using RCI are shown as solid geometric forms, while calculations using PET are represented by empty geometric forms. } \label{fig:rAll} \end{figure}
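For concreteness, the RCI evaluation of the reduced length for a linear cooling path can be sketched numerically as follows. The snippet below is a minimal illustration rather than the Mathematica implementation used in this work: the Arrhenius rate constant and its parameters are generic stand-ins for the fitted FA/FC forms of Table~\ref{tab:synthesis}, and the $n$-th order rate law $\mathrm{d}r/\mathrm{d}t=-k(T)\,r^{n}$ is assumed purely for illustration.

\begin{verbatim}
# Minimal numerical sketch of a rate constant integral (RCI) calculation
# for a linear cooling path. The rate constant and all parameter values
# below are illustrative stand-ins, not the fitted annealing models.
import numpy as np
from scipy.integrate import quad

R = 1.987e-3           # gas constant [kcal mol^-1 K^-1]
A, Ea = 1.0e10, 40.0   # illustrative prefactor [Ma^-1] and energy [kcal/mol]

def k(T):
    """Generic Arrhenius rate constant standing in for the FA/FC forms."""
    return A * np.exp(-Ea / (R * T))

def T_of_t(t, T_now=293.15, rate=1.0):
    """Linear cooling at `rate` K/Ma, reaching 20 C at the present (t=0);
    t is the time before present in Ma."""
    return T_now + rate * t

def r_rci(t_birth, n=0.5):
    """Reduced length from the RCI, assuming an illustrative n-th order
    rate law dr/dt = -k(T) r^n, so that r = [1 - (1-n) K]^(1/(1-n))
    with K the integral of k(T(t)) over the track's lifetime."""
    K, _ = quad(lambda t: k(T_of_t(t)), 0.0, t_birth)
    base = 1.0 - (1.0 - n) * K
    return base ** (1.0 / (1.0 - n)) if base > 0.0 else 0.0

print(r_rci(50.0, n=0.5))  # track born 50 Ma before present
\end{verbatim}

The same integral, evaluated with the FA and FC rate constants, is what was computed numerically to produce Figs.~\ref{fig:rFAFC} and \ref{fig:rAll}.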
\section{Introduction} Eclipse observations made as an exoplanet passes through superior conjunction allow the emission from the planetary dayside hemisphere to be inferred. Dozens of eclipse measurements have been published in broad photometric passbands, most notably from the \textit{Kepler} space mission \citep[e.g.][]{2009Sci...325..709B,2011MNRAS.417L..88K,2013ApJ...771...26F,2015PASP..127.1113A,2015ApJ...804..150E}, the \textit{Spitzer Space Telescope} Infrared Array Camera (IRAC) \citep[e.g.][]{2005ApJ...626..523C,2010ApJ...720.1569K,2010Natur.464.1161S,2014ApJ...781..116B,2019arXiv190107040G}, Infrared Spectrograph (IRS) \citep{2012ApJ...754..136S,2014ApJ...797...42C}, and Multiband Imaging Photometer \citep{2005Natur.434..740D,2008ApJ...686.1341C,2012ApJ...752...81C}, as well as ground-based telescopes \citep[e.g.][]{2009A&A...493L..31S,2010MNRAS.404L.114G,2011A&A...528A..49D,2019A&A...624A..62M,2019A&A...625A..80K}. A smaller number of spectroscopic eclipse measurements have also been published, including two with the \textit{Spitzer} IRS \citep{2007Natur.445..892R,2007ApJ...658L.115G,2008Natur.456..767G}, two with the \textit{Hubble Space Telescope} (HST) Near-Infrared Camera Multi-Object Spectrometer \citep{2009ApJ...690L.114S,2009ApJ...704.1616S}, and fourteen with the HST Wide Field Camera 3 (WFC3) infrared spectrograph \citep{2014ApJ...783..113W,2014ApJ...785..148R,2014Sci...346..838S,2014ApJ...791...36S,2014ApJ...793L..27K,2018AJ....156...17K,2014ApJ...795..166C,2015ApJ...806..146H,2016AJ....152..203L,2017AJ....154..158B,2017ApJ...850L..32S,2017Natur.548...58E,2018MNRAS.474.1705N,2018ApJ...855L..30A,2018AJ....156...10M}. Spectroscopic observations are of particular value because they resolve individual opacity bands, which in turn encode information about chemical abundances and the vertical temperature profile of the atmosphere. The WFC3 instrument offers two grisms for infrared spectroscopy:\ G102 covering wavelengths $0.8$-$1.1\,\mu\textnormal{m}$ and G141 covering wavelengths $1.1$-$1.6\,\mu\textnormal{m}$. All of the exoplanet emission spectra published to date have used the G141 grism for two main reasons: (i) the longer wavelength coverage provides a more favorable planet-to-star brightness ratio; and (ii) the G141 passband provides access to stronger opacity bands than the G102 passband, in particular the $1.4\,\mu\textnormal{m}$ H$_2$O band. The observations made with WFC3 G141 have resulted in detections of H$_2$O absorption for WASP-43b \citep{2014Sci...346..838S,2014ApJ...793L..27K}, HD\,189733b \citep{2014ApJ...795..166C}, HD\,209458b \citep{2016AJ....152..203L}, and Kepler-13Ab \citep{2017AJ....154..158B}, and H$_2$O emission for WASP-121b \citep{2017Natur.548...58E}. Other spectral features reported include TiO emission for WASP-33b \citep{2015ApJ...806..146H} and CO absorption for WASP-18b \citep{2017ApJ...850L..32S}. The remaining seven spectra do not exhibit significant spectral features and typically appear blackbody-like \citep{2014ApJ...783..113W,2014ApJ...785..148R,2018MNRAS.474.1705N,2018AJ....156...10M,2018AJ....156...17K}, while the CO absorption feature claimed for WASP-18b has been challenged \citep{2018ApJ...855L..30A}.
Recent studies \citep{2018ApJ...855L..30A,2018ApJ...866...27L,2018A&A...617A.110P,2018AJ....156...17K} have suggested that continuum opacity due to H$^-$ and thermal dissociation of H$_2$O itself can explain the lack of spectral features observed for the ultrahot Jupiters, such as WASP-18b, HAT-P-7b, and WASP-103b, which have dayside temperatures well in excess of 2000\,K. Another such ultrahot Jupiter, WASP-121b, is the subject of this study. Discovered by \cite{2016MNRAS.tmp..312D}, WASP-121b has an exceptionally inflated radius ($1.8\,R_J$) and a dayside photospheric temperature of approximately 2700\,K \citep{2017Natur.548...58E}. The high temperature results from WASP-121b orbiting at a distance of only 0.025\,AU from its F6V host star, where it is subjected to strong tidal forces and may be undergoing atmospheric escape via Roche lobe overflow \citep{2016MNRAS.tmp..312D}. Observational support for this picture has recently been provided by near-ultraviolet (NUV) transit measurements made with \textit{Swift} UVOT that are significantly deeper than those measured at optical wavelengths \citep{2019A&A...623A..57S}. This could be explained by an extended atmosphere filling the Roche lobe, which is relatively opaque at NUV wavelengths due, for instance, to heavy metal absorption lines. At near-infrared wavelengths, the HST WFC3 spectrograph has been used to measure both the transmission spectrum \citep{2016ApJ...822L...4E} and emission spectrum \citep{2017Natur.548...58E} of WASP-121b. Absorption due to the $1.4\,\mu\textnormal{m}$ H$_2$O band is observed in the transmission spectrum, while this same band is seen in emission at secondary eclipse, revealing a thermal inversion on the dayside hemisphere. The latter indicates significant absorption of incident stellar radiation at NUV-optical wavelengths for pressures less than $\sim 100$\,mbar on the dayside hemisphere \citep[e.g.][]{2010A&A...520A..27G}. This is consistent with the low geometric albedo of $A_g = 0.16 \pm 0.11$ inferred by \cite{2019A&A...624A..62M} from the $z^\prime$ secondary eclipse measurement of \cite{2016MNRAS.tmp..312D}. Possible absorbers include TiO and VO, both of which have strong opacity bands throughout the optical \citep{2003ApJ...594.1011H,2008ApJ...678.1419F}. Indeed, the optical transmission spectrum measured using the HST Space Telescope Imaging Spectrograph (STIS) does show evidence for VO absorption at the day-night limb, although TiO is not seen \citep{2018AJ....156..283E}. A steep rise toward NUV wavelengths is also recovered in the STIS transmission spectrum, which may be caused by the same absorber/s responsible for the deep \textit{Swift} UVOT transit. One candidate proposed in \cite{2018AJ....156..283E} is SH, which has been predicted as a product of non-equilibrium chemistry in hot Jupiter atmospheres by \cite{2009ApJ...701L..20Z}, and, if present on the dayside hemisphere, could potentially produce the thermal inversion. As alluded to above, absorption by heavy metals such as Fe and Mg might also simultaneously explain the deep NUV transits and dayside thermal inversion. At optical wavelengths, other candidate absorbers that could play a role in generating the thermal inversion include H$^-$ ions and molecules such as NaH, MgH, FeH, SiO, AlO, and CaO \citep{2018ApJ...866...27L,2018A&A...617A.110P,2019MNRAS.485.5817G}, although no compelling evidence has been claimed for any of these species based on the published transmission spectrum \citep{2016ApJ...822L...4E,2018AJ....156..283E}. 
Unlike the transmission spectrum --- which probes a region of the atmosphere very different to the ultrahot dayside of WASP-121b --- a detection of one or more strong optical absorbers in the emission spectrum would provide a definitive link between the radiatively active species present and the thermal inversion. Motivated by this, we acquired secondary eclipse observations of WASP-121b with the G102 grism of WFC3, extending the wavelength coverage into the red optical where emission bands due to species such as TiO, VO, and FeH may be detectable, as well as H$^-$ continuum opacity. We describe these observations and our data reduction in Section \ref{sec:observations_datared}. Our analyses of the white and spectroscopic light curves are presented in Sections \ref{sec:whitelc} and \ref{sec:speclcs}, respectively. The results are discussed in Section \ref{sec:discussion} and we conclude in Section \ref{sec:conclusion}. \begin{figure*} \centering \includegraphics[width=\linewidth]{{whiteraw_auxv}.pdf} \caption{Extracted time series for the flux, dispersion drift ($x$), cross-dispersion drift ($y$), and background level of both datasets. In all panels, colored symbols indicate data points that were included in the analysis and gray crosses indicate those that were excluded for reasons explained in the main text.} \label{fig:timeseries} \end{figure*} \section{Observations and data reduction} \label{sec:observations_datared} We observed two secondary eclipses of WASP-121b with HST/WFC3 using the G102 grism, which covers a wavelength range of approximately $0.8$-$1.1\,\mu\textnormal{m}$ with a spectral resolving power of $R \sim 200$ at $\lambda = 1\,\mu\textnormal{m}$ (G.O.\ 15135; P.I.\ Mikal-Evans). The first visit was made on 2017 November 6 and the second visit was made on 2017 December 9. We refer to these two visits as the G102v1 and G102v2 datasets, respectively. For both visits, the target was observed for 6.9 hours over five consecutive HST orbits with identical observing setups. The relative timing of the two visits was designed to provide full phase coverage of the eclipse, using the previously determined ephemerides of WASP-121b. Observations were made in spectroscopic mode with a forward scanning rate of $0.062$\,arcsec\,s$^{-1}$ along the cross-dispersion axis. To reduce overheads, only the $512 \times 512$ pixel subarray of the detector containing the target spectrum was read out for each exposure. We adopted the SPARS10 sampling sequence with 15 non-destructive reads per exposure ($\textnormal{NSAMP}=15$) resulting in total integration times of 103\,s and scans across approximately 50 pixel rows of the cross-dispersion axis. With this setup, we obtained 14 exposures in the first HST orbit following acquisition and 16 exposures in each subsequent HST orbit. Typical peak frame counts were $\sim 32,000$ electrons per pixel for both visits. This translates to $\sim 13,000$\,data numbers (DN) per pixel given the detector gain of 2.5 electrons per DN, which is well within the linear regime of the WFC3 detector \citep[see Figure 1 of][]{2008wfc..rept...39H}. Spectra were extracted from the raw data frames using a custom-built Python pipeline, which has been described previously \citep{2016ApJ...822L...4E,2017Natur.548...58E}. We started with the IMA files produced by the \textit{calwf3} pipeline version 3.4.1, which already have basic calibrations such as flat fielding, bias subtraction, and nonlinearity correction applied. 
The target flux was extracted from each exposure by taking the difference between successive non-destructive reads. To do this, we first estimated and subtracted the background flux for each read, by taking the median pixel count within a $10 \times 170$ pixel box which was chosen to be as large as possible while avoiding sources within the field and the detector edges. Typical background levels integrated over the full $103\,$s exposures were approximately $70$--$80$\,electrons\,pixel$^{-1}$, rising to over $100$\,electrons\,pixel$^{-1}$ at the end of each HST orbit (Figure \ref{fig:timeseries}). For each read-difference frame, we then determined the flux-weighted center of the scanned spectrum along the cross-dispersion axis. All pixel values located more than 30 pixels above or below this row were set to zero, effectively removing flux contributions from nearby contaminant stars and cosmic ray strikes outside a rectangular aperture. Final reconstructed frames were produced by adding together the read-differences produced in this manner. During this process, we also estimated how the spectrum drifted across the detector over the course of the observations. For both visits, we measure a drift of $\sim 0.1$-$0.2$\,pixel along the dispersion axis and $\sim 0.6$\,pixel along the cross-dispersion axis (Figure \ref{fig:timeseries}). \begin{figure} \centering \includegraphics[width=\columnwidth]{{example_spectra}.pdf} \caption{Example spectra for the WFC3 G102 and G141 grisms. Dark and light vertical bands indicate the wavelength channels adopted for the spectroscopic light curves.} \label{fig:example_spectra} \end{figure} The target spectrum was then extracted from each frame by summing the flux within a rectangular aperture spanning the full dispersion axis and 80 pixels along the cross-dispersion axis, centered on the central cross-dispersion row of the scan. The wavelength solution was determined by cross correlating each of these extracted spectra against a model spectrum for the WASP-121 host star modulated by the throughput of the G102 grism, as in \cite{2016ApJ...822L...4E,2017Natur.548...58E}. In addition to the G102 data, a single secondary eclipse of WASP-121b was observed on 2016 Nov 10 with the G141 grism (G.O.\ 14767; P.I.s Sing and L\'{o}pez-Morales). This dataset was originally published in \cite{2017Natur.548...58E}, to which the reader is referred for further details. Example G102 and G141 spectra are shown in Figure \ref{fig:example_spectra}. Together, both grisms provide continuous wavelength coverage between $\sim 0.8$-$1.6\,\mu\textnormal{m}$.
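To make the extraction procedure concrete, a minimal sketch is given below. The array shapes, background box coordinates, and helper names are placeholders rather than the actual pipeline values; \texttt{reads} is assumed to hold the calibrated non-destructive reads of a single IMA exposure.

\begin{verbatim}
# Minimal sketch of the read-difference extraction described above.
# `reads` is assumed to be an (NSAMP, ny, nx) array of calibrated
# non-destructive reads; the background box and aperture sizes are
# placeholders rather than the values used in the actual pipeline.
import numpy as np

def reconstruct_frame(reads, bg_box=(np.s_[5:175], np.s_[400:410]),
                      halfwidth=30):
    frame = np.zeros_like(reads[0])
    for i in range(1, len(reads)):
        diff = reads[i] - reads[i - 1]         # flux accumulated in this read
        diff = diff - np.median(diff[bg_box])  # background from source-free box
        # flux-weighted center of the scan along the cross-dispersion axis
        profile = np.clip(diff.sum(axis=1), 0.0, None)
        yc = int(round(np.average(np.arange(len(profile)), weights=profile)))
        window = np.zeros_like(diff)
        window[max(yc - halfwidth, 0):yc + halfwidth + 1] = 1.0
        frame += diff * window                 # zero flux outside the scan
    return frame

def extract_spectrum(frame, yc, aperture=80):
    """Sum an 80-pixel-tall aperture centered on the scan (1D spectrum)."""
    return frame[yc - aperture // 2:yc + aperture // 2].sum(axis=0)
\end{verbatim}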
\section{White Light Curve Analyses} \label{sec:whitelc} White light curves were produced for both visits by integrating each spectrum across the full dispersion axis (Figure \ref{fig:timeseries}). The light curves are affected by the well-known hook systematic that correlates with HST orbital phase and is understood to be caused by charge trapping in the WFC3 detector \citep{2017AJ....153..243Z}. The baseline flux level also exhibits a longer-term drift, which is approximately linear in time for both light curves. Prior to light curve fitting, we chose to discard the first HST orbit of each visit, as these orbits exhibit much stronger hooks than subsequent orbits. Although methods exist to correct the WFC3 first-orbit systematics \citep[e.g.][]{2017AJ....153..243Z,2018NatAs...2..214D}, we opted for this approach to be consistent with our previous analyses \citep{2016ApJ...822L...4E,2017Natur.548...58E} and to avoid modeling the baseline trend over the full five-orbit visits, which is less likely to be well approximated as linear. The resulting four-orbit white light curves were fit using a similar methodology to that described in \cite{2017Natur.548...58E}, in which the systematics are treated as a Gaussian process (GP). In the present study, we modeled the eclipse signal using the \texttt{batman} software package \citep{2015PASP..127.1161K}, allowing the eclipse depth ($D$) and eclipse mid-time ($T_{\textnormal{mid}}$) to vary as free parameters, while holding the remaining parameters fixed to previously determined values (Table \ref{table:whitefit}). We varied the eclipse depth jointly across both visits and allowed the mid-times to vary separately for each visit. For the GP, we employed a Mat\'{e}rn $\nu=3/2$ kernel with HST orbital phase ($\phi$), dispersion drift ($x$), and cross-dispersion drift ($y$) as input variables. We chose not to use time ($t$) as an input variable, as from past experience we have found its inclusion to make little difference to the final result \citep{2018AJ....156..283E}. The GP free parameters were the covariance amplitude ($A$) and the correlation length scales for each input variable ($L_\phi,L_x,L_y$). In practice, as in \cite{2017Natur.548...58E,2018AJ....156..283E}, we fit for the natural log of the inverse correlation length scale, $\ln\eta_k = \ln L_k^{-1}$, where $k \in \{ \phi, x, y \}$. We also parameterized the white noise as $\sigma=\beta\sigma_0$, where $\sigma_0$ is the formal photon noise floor and $\beta$ is a rescaling factor that was allowed to vary in the fits. We adopted uniform priors for all eclipse parameters, and adopted the same priors as in \cite{2018AJ....156..283E} for the remaining parameters. Marginalization of the posterior distribution was performed using affine-invariant Markov chain Monte Carlo (MCMC) as implemented by the \texttt{emcee} software package \citep{2013PASP..125..306F}. \begin{table} \begin{minipage}{\columnwidth} \centering \caption{MCMC results for the joint fit to the G102v1 and G102v2 white light curves. Quoted values are the posterior medians and uncertainties give the $\pm 34$\% credible intervals about the median. Values adopted for fixed parameters are reported at the bottom of the table.
\label{table:whitefit}} \begin{tabular}{ccc} \hline \\ Free & G102v1 & G102v2 \medskip \\ \cline{1-3} && \\ $D$ (ppm) & \multicolumn{2}{c}{ $682_{-73}^{+73}$ } \\ $T_{\textnormal{mid}}$ (MJD) & $58063.7624_{-0.0023}^{+0.0053}$ & $58096.9047_{-0.0015}^{+0.0024}$ \smallskip \\ $\beta$ & $1.21_{-0.08}^{+0.08}$ & $1.15_{-0.09}^{+0.09}$ \smallskip \\ $\sigma$ (ppm) & $87_{-6}^{+6}$ & $83_{-6}^{+6}$ \smallskip \\ \hline \\ Fixed & Value & Reference \medskip \\ \cline{1-3} && \\ $P$ (d) & $1.2749255$ & \cite{2016MNRAS.tmp..312D} \smallskip \\ $a/\Rs$ & $3.86$ & \cite{2018AJ....156..283E} \smallskip \\ $i$ ($^\circ$) & $89.1$ & \cite{2018AJ....156..283E} \smallskip \\ $b$ & $0.06$ & \cite{2018AJ....156..283E} \smallskip \\ \hline \end{tabular} \end{minipage} \end{table} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{{whitefit}.pdf} \caption{(a) Raw white light curve for the G102v1 dataset with best-fit model indicated by orange lines and (b) the same for the G102v2 dataset. (c) Combined white light curve after removing the GP systematics component of the best-fit models, leaving only the eclipse signal. (d) Model residuals after subtracting the best-fit models from the raw light curves. (e) Normalized histograms of residuals obtained by subtracting from the data a random subset of GP mean functions obtained in the MCMC sampling. Solid black lines correspond to Gaussian distributions with standard deviations equal to photon noise (i.e.\ prior to rescaling by the $\beta$ factors described in the main text).} \label{fig:whitefit} \end{figure} The resulting posterior distributions are summarized in Table \ref{table:whitefit}, and the best-fit light curve models are shown in Figure \ref{fig:whitefit}. We obtain a joint eclipse depth measurement of $682 \pm 73$\,ppm, with inferred $\beta$ values of $1.21 \pm 0.08$ and $1.15 \pm 0.09$ for the G102v1 and G102v2 light curves, respectively. The latter imply high-frequency scatter approximately $20$\,\% above the photon noise floor for both light curves, which is not accounted for by the Mat\'{e}rn kernel and is evident in the model residuals shown in Figure \ref{fig:whitefit}. As a check, we also repeated the light curve fitting using the squared exponential kernel \citep[see e.g.][]{2012MNRAS.419.2683G} and obtained results for the eclipse depth and mid-times that were fully consistent with those reported in Table \ref{table:whitefit} to well within $1\sigma$. However, the squared exponential fit gave uncertainties that were approximately 5\% smaller for the eclipse depth, 40\% smaller for the G102v1 mid-time, and 10\% smaller for the G102v2 mid-time. For this reason, we adopt the results obtained with the Mat\'{e}rn kernel to be conservative. \section{Spectroscopic Light Curve Analyses} \label{sec:speclcs} Spectroscopic light curves were constructed using a similar method to that described by \cite{2013ApJ...774...95D}, which we have also used previously in \cite{2016ApJ...822L...4E,2017Natur.548...58E}. This involved cross correlating each spectrum against a master spectrum constructed by taking the median of out-of-eclipse exposures, in order to remove wavelength-independent systematics, including those arising due to pointing drift across the detector dispersion axis over the course of each visit. The flux was then binned into the 17 wavelength channels shown in Figure \ref{fig:example_spectra}, each spanning 8 pixel columns on the detector ($\Delta \lambda = 0.02 \, \mu\textnormal{m}$).
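Both the white and spectroscopic light curves were fit with the same eclipse-plus-GP structure. As a concrete illustration, the sketch below shows one possible implementation of the corresponding log-likelihood; the GP is built here with the publicly available \texttt{george} package, and the parameter values, data arrays, and the omission of the linear baseline are illustrative simplifications rather than a description of our actual code.

\begin{verbatim}
# Illustrative eclipse + GP log-likelihood, assuming the public batman
# (eclipse model) and george (GP) packages. Parameter values are
# placeholders; the linear baseline trend is omitted for brevity.
import numpy as np
import batman
import george
from george import kernels

params = batman.TransitParams()
params.t0 = 0.0                     # transit mid-time (reference)
params.per = 1.2749255              # orbital period [d]
params.rp = 0.12                    # radius ratio (placeholder)
params.a, params.inc = 3.86, 89.1   # a/Rs and inclination [deg]
params.ecc, params.w = 0.0, 90.0
params.u, params.limb_dark = [], "uniform"
params.fp = 700e-6                  # planet-to-star flux ratio (depth D)
params.t_secondary = params.t0 + 0.5 * params.per

def log_likelihood(theta, t, phi, x, y, flux, sigma0):
    D, lnA, ln_eta_phi, ln_eta_x, ln_eta_y, beta = theta
    params.fp = D
    model = batman.TransitModel(params, t, transittype="secondary")
    eclipse = model.light_curve(params)
    # Matern nu=3/2 kernel over HST phase and detector drifts; george's
    # metric is the squared length scale, with L_k = 1/eta_k
    metric = np.exp(-2.0 * np.array([ln_eta_phi, ln_eta_x, ln_eta_y]))
    kernel = np.exp(lnA) * kernels.Matern32Kernel(metric, ndim=3)
    gp = george.GP(kernel)
    gp.compute(np.column_stack([phi, x, y]), yerr=beta * sigma0)
    return gp.log_likelihood(flux - eclipse)
\end{verbatim}

A sampler such as \texttt{emcee} then explores this likelihood under the priors described above.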
The resulting light curves are shown in Figure \ref{fig:speclcs}. To fit the spectroscopic light curves, we used the same approach as described in Section \ref{sec:whitelc}. The only exception was that we fixed $T_{\textnormal{mid}}$ to the best-fit values listed in Table \ref{table:whitefit}. Thus, for the spectroscopic eclipse signals, the only free parameter was the eclipse depth $D$, which we varied jointly across both the G102v1 and G102v2 light curves. Systematics were again accounted for using GPs with Mat\'{e}rn $\nu=3/2$ kernels and white noise rescaling factors (i.e.\ $\beta$ parameters). The inferred eclipse depths and $\beta$ values are reported in Table \ref{table:specfits}. \begin{figure*} \centering \includegraphics[width=0.7\linewidth]{{speclcs}.pdf} \caption{(a) Raw spectroscopic light curves for the G102v1 dataset and (b) the same for the G102v2 dataset, with best-fit models indicated by orange lines. (c) Combined spectroscopic light curves after removing the GP systematics component of the best-fit models, leaving only the eclipse signal. (d) Residuals after subtracting the best-fit models from the raw light curves.} \label{fig:speclcs} \end{figure*} \begin{table} \begin{minipage}{\columnwidth} \centering \scriptsize \caption{Similar to Table \ref{table:whitefit}, but reporting the MCMC results for the joint fit to the G102v1 and G102v2 spectroscopic light curves. \label{table:specfits}} \begin{tabular}{ccccccc} \hline \\ & & \multicolumn{2}{c}{G102v1} && \multicolumn{2}{c}{G102v2} \\ \cline{3-4} \cline{6-7} &&&&& \\ $\lambda$ ($\mu\textnormal{m}$) & $D$ (ppm) & $\beta$ & $\sigma$ (ppm) && $\beta$ & $\sigma$ (ppm) \medskip \\ \cline{1-7} &&&&& \\ $0.800$-$0.820$ & $367_{-152}^{+154}$ & $1.10_{-0.08}^{+0.08}$ & $476_{-33}^{+34}$ && $1.04_{-0.08}^{+0.08}$ & $449_{-34}^{+35}$ \smallskip \\ $0.820$-$0.839$ & $330_{-107}^{+106}$ & $1.03_{-0.08}^{+0.08}$ & $391_{-30}^{+31}$ && $1.03_{-0.07}^{+0.07}$ & $389_{-25}^{+26}$ \smallskip \\ $0.839$-$0.859$ & $487_{-77}^{+81}$ & $0.99_{-0.07}^{+0.08}$ & $342_{-24}^{+27}$ && $1.09_{-0.07}^{+0.07}$ & $377_{-25}^{+26}$ \smallskip \\ $0.859$-$0.878$ & $657_{-81}^{+81}$ & $1.06_{-0.07}^{+0.07}$ & $350_{-23}^{+24}$ && $1.01_{-0.08}^{+0.08}$ & $330_{-25}^{+26}$ \smallskip \\ $0.878$-$0.898$ & $595_{-69}^{+72}$ & $1.00_{-0.07}^{+0.07}$ & $305_{-22}^{+23}$ && $0.95_{-0.08}^{+0.08}$ & $288_{-25}^{+25}$ \smallskip \\ $0.898$-$0.917$ & $574_{-65}^{+70}$ & $1.06_{-0.07}^{+0.08}$ & $310_{-21}^{+22}$ && $1.02_{-0.07}^{+0.07}$ & $299_{-21}^{+21}$ \smallskip \\ $0.917$-$0.937$ & $667_{-63}^{+67}$ & $1.06_{-0.07}^{+0.07}$ & $298_{-19}^{+21}$ && $0.99_{-0.07}^{+0.07}$ & $277_{-20}^{+21}$ \smallskip \\ $0.937$-$0.956$ & $671_{-83}^{+78}$ & $1.01_{-0.08}^{+0.08}$ & $277_{-21}^{+22}$ && $1.00_{-0.08}^{+0.08}$ & $274_{-21}^{+22}$ \smallskip \\ $0.956$-$0.976$ & $743_{-94}^{+98}$ & $1.08_{-0.10}^{+0.09}$ & $291_{-27}^{+24}$ && $1.08_{-0.08}^{+0.08}$ & $289_{-20}^{+22}$ \smallskip \\ $0.976$-$0.995$ & $733_{-65}^{+70}$ & $1.14_{-0.07}^{+0.07}$ & $303_{-18}^{+19}$ && $1.18_{-0.07}^{+0.07}$ & $314_{-18}^{+19}$ \smallskip \\ $0.995$-$1.015$ & $797_{-68}^{+64}$ & $1.12_{-0.07}^{+0.07}$ & $298_{-18}^{+19}$ && $1.03_{-0.07}^{+0.08}$ & $276_{-20}^{+21}$ \smallskip \\ $1.015$-$1.034$ & $795_{-67}^{+70}$ & $1.14_{-0.07}^{+0.07}$ & $298_{-18}^{+19}$ && $0.98_{-0.08}^{+0.08}$ & $258_{-20}^{+21}$ \smallskip \\ $1.034$-$1.054$ & $736_{-56}^{+58}$ & $1.02_{-0.07}^{+0.07}$ & $269_{-19}^{+20}$ && $1.00_{-0.07}^{+0.07}$ & $265_{-19}^{+20}$ \smallskip \\ $1.054$-$1.073$ & 
$852_{-63}^{+62}$ & $1.14_{-0.07}^{+0.07}$ & $306_{-18}^{+20}$ && $1.03_{-0.07}^{+0.08}$ & $277_{-19}^{+21}$ \smallskip \\ $1.073$-$1.093$ & $832_{-62}^{+62}$ & $1.04_{-0.07}^{+0.07}$ & $283_{-20}^{+20}$ && $1.10_{-0.07}^{+0.07}$ & $300_{-20}^{+20}$ \smallskip \\ $1.093$-$1.112$ & $791_{-65}^{+62}$ & $1.08_{-0.07}^{+0.07}$ & $299_{-19}^{+20}$ && $1.01_{-0.07}^{+0.08}$ & $279_{-20}^{+21}$ \smallskip \\ $1.112$-$1.132$ & $895_{-74}^{+71}$ & $1.05_{-0.08}^{+0.08}$ & $292_{-21}^{+21}$ && $1.04_{-0.07}^{+0.08}$ & $289_{-20}^{+21}$ \\ \\ \hline \end{tabular} \end{minipage} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{{emspec.g102.withKs}.pdf} \caption{(a) Measured eclipse depths from a joint analysis of the G102v1 and G102v2 datasets using 17 and 34 channel binnings. For the 17 channel binning, results obtained when the G102v1 and G102v2 datasets are analyzed separately are also shown. Consistent results are recovered in all cases. (b) Wavelength-dependent brightness temperatures calculated from the measured G102 eclipse depths.} \label{fig:emspec_g102} \end{figure} The resulting emission spectrum is shown in Figure \ref{fig:emspec_g102}, as measured eclipse depths in the top panel and corresponding brightness temperatures in the bottom panel. Also shown are the results obtained when the G102v1 and G102v2 datasets are analyzed separately, and when the data are rebinned into 34 channels. Good agreement is obtained for all spectroscopic channels, verifying the repeatability of the measurement. We also re-analyzed the G141 eclipse data published in \cite{2017Natur.548...58E}. To be fully consistent with the G102 analysis, we fixed the values of $a/\Rs$ and $i$ to the values listed in Table \ref{table:whitefit}, whereas the original G141 analysis had adopted the values reported in \cite{2016MNRAS.tmp..312D}. This gave statistically identical results to those reported in \cite{2017Natur.548...58E}, which is to be expected as $a/\Rs$ and $i$ primarily affect the eclipse duration, rather than the eclipse depth. \section{Discussion} \label{sec:discussion} The secondary eclipse spectrum measured to date for WASP-121b is shown in Figure \ref{fig:emspec_all}, along with the corresponding brightness temperatures. In addition to the G102 data spanning $0.8$-$1.1\,\mu\textnormal{m}$ presented in the current study, this includes the G141 data spanning $1.1$-$1.6\,\mu\textnormal{m}$ from \cite{2017Natur.548...58E}, ground-based photometric measurements in the $z^\prime$ \citep{2016MNRAS.tmp..312D} and $K_s$ \citep{2019A&A...625A..80K} passbands, and the IRAC data at $3.6\,\mu\textnormal{m}$ and $4.5\,\mu\textnormal{m}$ from \cite{2019arXiv190107040G}. The G102 and G141 measurements agree extremely well at the point of overlap between the two passbands, without any adjustment to the level of either dataset. Similarly, the G102 data are fully consistent with the $z^\prime$ measurement without any adjustment. \begin{figure} \centering \includegraphics[width=\columnwidth]{{emspec.all.withTbright.withKs}.pdf} \caption{(a) Published eclipse depth measurements for WASP-121b. Dark yellow line shows the expected spectrum if the planet were to radiate as a blackbody at the best-fit temperature of 2720\,K. 
Pale yellow envelope indicates a plausible range of emission assuming the planet has zero albedo and radiates as a blackbody, with the lower limit corresponding to uniform emission from the dayside and nightside hemispheres at a temperature of 2330\,K, and the upper limit corresponding to emission from the dayside only at a temperature of 2970\,K. Red line shows the best-fit model from the retrieval analysis, with spectral emission features due to H$^-$, H$_2$O, and CO labeled. (b) Brightness temperatures derived from the measured eclipse depths. These correspond to the temperatures required to match the implied fluxes in each channel if the planet were to radiate as a blackbody.} \label{fig:emspec_all} \end{figure} \subsection{Blackbody fits and heat redistribution} \label{sec:discussion:blackbody} To interpret the data, we first consider the simple case in which the planet is assumed to radiate as an isothermal blackbody. Fitting such a model to the full dataset gives a best-fit temperature of $2720 \pm 8$\,K, with predictions for the wavelength-dependent secondary eclipse depth indicated by the dark yellow line in Figure \ref{fig:emspec_all}. The data approximately follow the shape of this curve; however, the reduced $\chi^2$ is 2.92 for 47 degrees of freedom, allowing it to be ruled out at $6.5\sigma$ confidence. For comparison, in \cite{2017Natur.548...58E} we ruled out a blackbody model at $5\,\sigma$ confidence by fitting to the data available at that time; namely, the G141, $z^\prime$, and IRAC $3.6\,\mu\textnormal{m}$ measurements. The addition of the G102 and IRAC $4.5\,\mu\textnormal{m}$ measurements has therefore increased the discrepancy between the data and a blackbody model. We also experimented with fitting a blackbody model to different subsets of the data. For instance, if we repeat the fit to the full dataset using the G141 white eclipse depth in place of the G141 spectroscopic eclipse depths, we obtain a best-fit temperature of $2754 \pm 11$\,K with a reduced $\chi^2$ of 1.97 for 21 degrees of freedom, reducing the confidence with which such a model can be ruled out to $2.8\sigma$. Alternatively, if we repeat the fit to the full dataset but exclude the G102 spectroscopic eclipse depths --- similar to \cite{2017Natur.548...58E} but with the addition of the $K_s$ and IRAC $4.5\,\mu\textnormal{m}$ points --- we obtain a best-fit temperature of $2691 \pm 9$\,K with a reduced $\chi^2$ of 2.88 for 30 degrees of freedom, ruling it out at $5.2\sigma$ confidence. These results imply that, statistically, the departure from a blackbody spectrum is largely driven by the spectroscopic information contained in the G141 data. In \cite{2017Natur.548...58E}, we attributed this departure to a muted H$_2$O emission band at $1.4\,\mu\textnormal{m}$, as well as a tentative VO emission band at $1.25\,\mu\textnormal{m}$. We revisit this interpretation in Section \ref{sec:discussion:modeling} with a retrieval analysis of the updated dataset. Here we note that even without the G141 spectroscopic information, considerable tension remains between the data and a blackbody model, mainly due to the mismatch between the overall slope of the data and a blackbody spectrum. This can be appreciated in Figure \ref{fig:emspec_all}, which shows a systematic decrease in brightness temperature over the near-infrared wavelength range covered by the G102 and G141 passbands.
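The brightness temperatures plotted in Figure \ref{fig:emspec_all} follow from inverting the Planck function for each measured depth. A minimal sketch of this inversion is given below; the stellar surface flux \texttt{Fs\_surface} (e.g.\ interpolated from a stellar model grid) and the radius ratio value are assumed inputs rather than the exact quantities used in our analysis.

\begin{verbatim}
# Sketch: convert a measured eclipse depth into a brightness temperature
# by inverting the Planck function. The stellar surface flux and radius
# ratio are assumed placeholder inputs.
import numpy as np
from scipy.optimize import brentq

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI units

def planck(lam, T):
    """Planck spectral radiance B_lambda [W m^-3 sr^-1]."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def brightness_temperature(depth, lam, Fs_surface, rprs):
    """Solve pi * B_lambda(Tb) = planet surface flux implied by `depth`.
    `Fs_surface` is the stellar surface flux [W m^-3] at wavelength `lam`
    and `rprs` the planet-to-star radius ratio."""
    Fp_surface = depth * Fs_surface / rprs**2
    # bracket assumes the solution lies between 500 K and 6000 K
    return brentq(lambda T: np.pi * planck(lam, T) - Fp_surface,
                  500.0, 6000.0)

# e.g. Tb for the white light curve depth at ~0.95 micron:
# Tb = brightness_temperature(682e-6, 0.95e-6, Fs_surface, rprs=0.12)
\end{verbatim}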
As we discuss in Section \ref{sec:discussion:modeling}, much of this variation can be explained by the $1.4\,\mu\textnormal{m}$ H$_2$O band, as well as continuum opacity due to H$^-$ ions. \begin{figure} \centering \includegraphics[width=\columnwidth]{recirc_albedo.pdf} \caption{Degeneracy between heat redistribution efficiency and Bond albedo given the temperature of the dayside hemisphere. Thick pink line shows allowed values assuming the dayside radiates as a blackbody with best-fit temperature 2720\,K. Pale green region indicates the allowed values implied by the range of brightness temperatures shown in Figure \ref{fig:emspec_all}. Thin gray lines show contours for different dayside temperatures separated by 50\,K increments. } \label{fig:recircalbedo} \end{figure} Figure \ref{fig:recircalbedo} shows the approximate range of Bond albedos ($A_B$) and heat redistribution efficiencies ($\varepsilon$) for WASP-121b allowed by the emission data. The $\varepsilon$ parameter is defined following \cite{2011ApJ...729...54C}, with $\varepsilon=0$ corresponding to zero heat redistribution and $\varepsilon=1$ corresponding to uniform heat redistribution. In this parameterization, the dayside effective temperature is $T_{\textnormal{day}} = T_0 \left(1-A_B\right)^{1/4} \left(\frac{2}{3}-\frac{5}{12}\varepsilon\right)^{1/4}$, where $T_0 = T_{\textnormal{eff}}\sqrt{\Rs/a}$ is the irradiation temperature, which makes the degeneracy between $A_B$ and $\varepsilon$ explicit for a given dayside temperature. Measurements of low Bond albedos for numerous hot Jupiters \citep[e.g.][]{2011MNRAS.417L..88K,2013ApJ...777..100H,2017ApJ...847L...2B,2018AJ....156...44M,2019A&A...624A..62M} suggest a low Bond albedo is also likely for WASP-121b, which would be in line with the evidence for significant optical absorption in the transmission spectrum \citep{2018AJ....156..283E}. Under this scenario ($A_B\lesssim 0.1$), the heat redistribution efficiency would be $\varepsilon \approx 0.4$. However, if heat redistribution is inefficient ($\varepsilon \approx 0$), the maximum allowable Bond albedo is $A_B \approx 0.3$. A phase curve measurement would allow this degeneracy to be broken, by providing a direct constraint on the heat flux from the nightside hemisphere, and hence on $\varepsilon$.
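The degeneracy can be made quantitative with the relation above; the short sketch below inverts it for $\varepsilon$ at fixed albedo, using an approximate irradiation temperature for WASP-121 as a placeholder.

\begin{verbatim}
# Sketch of the albedo-redistribution degeneracy, assuming the Cowan &
# Agol (2011) parameterization quoted above. T0 is an approximate
# irradiation temperature for WASP-121 (T_eff ~ 6460 K, a/Rs = 3.86).

def T_day(A_B, eps, T0=3290.0):
    """Dayside effective temperature [K]."""
    return T0 * (1.0 - A_B) ** 0.25 * (2.0 / 3.0 - 5.0 * eps / 12.0) ** 0.25

def eps_from_T_day(Tday, A_B, T0=3290.0):
    """Redistribution efficiency consistent with Tday at fixed albedo."""
    return (2.0 / 3.0 - (Tday / T0) ** 4 / (1.0 - A_B)) * 12.0 / 5.0

# For the best-fit blackbody temperature of 2720 K:
print(eps_from_T_day(2720.0, 0.0))   # ~0.48 for a zero-albedo dayside
print(eps_from_T_day(2720.0, 0.1))   # ~0.35, cf. eps ~ 0.4 in the text
\end{verbatim}

As a consistency check, this relation reproduces the limits quoted in the caption of Figure \ref{fig:emspec_all}: for $A_B=0$ it gives $\approx 2970$\,K at $\varepsilon=0$ and $\approx 2330$\,K at $\varepsilon=1$.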
\subsection{Atmosphere modeling of the dayside hemisphere} \label{sec:discussion:modeling} \begin{figure} \centering \includegraphics[width=\columnwidth]{{emspec.all.linscale.withKs}.pdf} \caption{(a) Similar to panel (a) of Figure \ref{fig:emspec_all}, but with a linear horizontal scale and inset showing the IRAC data to allow closer inspection of the measured eclipse depths and models. (b) Planet surface flux derived from the eclipse depths shown in panel (a). Blue lines show measured M8 and L1 dwarf spectra \citep{2010ApJS..190..100K}, with arbitrary normalization applied. These objects have comparable photospheric temperatures to WASP-121b and exhibit deep H$_2$O absorption bands at $1.4\,\mu\textnormal{m}$, as they do not have thermal inversions at the photosphere.} \label{fig:emspec_all_lin} \end{figure} We performed an atmospheric retrieval analysis for the secondary eclipse data shown in Figure \ref{fig:emspec_all_lin} using the \texttt{ATMO} code, which has been described extensively elsewhere \citep{2014A&A...564A..59A,2015ApJ...804L..17T,2016ApJ...817L..19T,2017ApJ...841...30T,2016A&A...594A..69D,2018MNRAS.474.5158G}. In brief, \texttt{ATMO} was originally developed to solve the one-dimensional (1D) plane-parallel radiative transfer equation assuming hydrostatic balance and radiative-convective equilibrium, with a two-dimensional implementation subsequently introduced by \cite{2017ApJ...841...30T}. For a given atmospheric composition and pressure-temperature (PT) profile, chemical equilibrium gas phase abundances can be determined by Gibbs energy minimization, or alternatively, arbitrary mixing ratios can be specified. The PT profile can also be provided as an input, or otherwise computed self-consistently given the atmospheric opacity sources, an internal heat flux, and the irradiation from the host star. Condensation can be treated either locally or using a rainout approach \citep[see][]{2019MNRAS.482.4503G}. The planetary spectrum viewed by an external observer is produced as output, allowing \texttt{ATMO} to be used for inferring atmospheric properties from primary transit \citep{2016ApJ...822L...4E,2018AJ....156..283E,2017Sci...356..628W,2018AJ....155...29W,2018Natur.557..526N,2018AJ....156..298A} and secondary eclipse \citep{2017Natur.548...58E,2018MNRAS.474.1705N} measurements. \begin{figure*} \centering \includegraphics[width=\linewidth]{{RetrievalPosterior.withKs}.pdf} \caption{Posterior distributions for the free parameters of the retrieval analysis. Panels along the diagonal show the marginalized distributions for each individual parameter. Off-diagonal panels show the marginalized distributions for parameter pairs, with contours indicating the 68\% and 95\% credible ranges. } \label{fig:posterior} \end{figure*} We previously used \texttt{ATMO} in \cite{2017Natur.548...58E} to analyze the G141, $z^\prime$, and IRAC $3.6\,\mu\textnormal{m}$ emission data for WASP-121b. In that study, we assumed uniform mixing ratios with pressure and allowed the abundances of H$_2$O and VO to vary as free parameters. In the present study, we add the G102, $K_s$, and IRAC $4.5\,\mu\textnormal{m}$ data to our analysis and make a number of changes to our retrieval methodology, motivated by recent work highlighting the importance of thermal dissociation and ionization in ultrahot Jupiter atmospheres \citep{2018ApJ...855L..30A,2018A&A...617A.110P,2018ApJ...866...27L,2018AJ....156...17K,2018AJ....156...10M}. First, rather than fitting for unconstrained abundances of pre-defined opacity sources, we assumed chemical equilibrium with relative elemental abundances set to solar values and varied metallicity ($[\textnormal{M}/\textnormal{H}]$), as well as the carbon and oxygen elemental abundances ($[\textnormal{C}/\textnormal{H}]$, $[\textnormal{O}/\textnormal{H}]$) separately.\footnote{Under this formulation, $[\textnormal{M}/\textnormal{H}]$ controls the number density of all elements heavier than helium except for carbon and oxygen.} Second, we accounted for condensation using the rainout scheme described in \cite{2019MNRAS.482.4503G} and included the effect of gas phase scattering. Third, we allowed for thermal dissociation and ionization of atoms and molecules, which can result in strongly pressure-dependent abundances for many chemical species throughout layers of the atmosphere probed in emission. We note that \texttt{ATMO} has always accounted for these effects in determining chemical equilibrium abundances of neutral species, but this was not included in our previous retrieval, as we assumed a uniform distribution of H$_2$O and VO in pressure and allowed the abundances to vary freely. Ionic species were, however, added to \texttt{ATMO} for the present study following \cite{1994GordonMcBride}, and the ion-neutral composition was benchmarked against the open source GGChem code \citep{2018A&A...614A...1W}.
The resulting chemical system consisted of 175 neutral gas phase species, 93 condensates, and the ionized species e$^-$, H$^+$, H$^-$, He$^+$, Na$^+$, K$^+$, C$^+$, Ca$^+$, and Si$^+$. The most important radiatively active neutral gas phase species were H$_2$O, CO$_2$, CO, CH$_4$, NH$_3$, Na, K, Li, Rb, Cs, TiO, VO, FeH, PH$_3$, H$_2$S, HCN, C$_2$H$_2$, SO$_2$, and Fe(g). Collision-induced absorption due to H$_2$-H$_2$ and H$_2$-He was also included. As in \cite{2017Natur.548...58E}, we fit for the PT profile using the analytic solution derived by \cite{2010A&A...520A..27G}, which is parameterized in terms of the infrared opacity ($\kappa_\textnormal{IR}$), the ratio of the visible-to-infrared opacity ($\gamma=\kappa_{\textnormal{V}}/\kappa_{\textnormal{IR}}$), and an irradiation efficiency factor ($\psi$). The latter is identical to the $\beta$ term defined by \cite{2013ApJ...775..137L}, but we denote it as $\psi$ to avoid confusion with the white noise rescaling factors used in Sections \ref{sec:whitelc} and \ref{sec:speclcs}. It effectively allows for nonzero albedo values and varying degrees of heat recirculation from the dayside to nightside hemisphere. \begin{table} \begin{minipage}{\columnwidth} \centering \caption{Retrieval prior ranges and MCMC results \label{table:retrieval}} \begin{tabular}{cccc} \hline \\ Parameter & Unit & Allowed range & Result \medskip \\ \cline{1-4} &&& \\ $[\textnormal{M}/\textnormal{H}]$ & dex & $-1$ to $2$ & ${1.09}_{-0.69}^{+0.57}$ \smallskip \\ $[\textnormal{C}/\textnormal{H}]$ & dex & $-1$ to $2$ & ${-0.29}_{-0.48}^{+0.61}$ \smallskip \\ $[\textnormal{O}/\textnormal{H}]$ & dex & $-1$ to $2$ & ${0.18}_{-0.60}^{+0.64}$ \smallskip \\ $\log_{10}(\kappa_\textnormal{IR})$ & dex\,cm$^2$\,g$^{-1}$ & $-5$ to $0.5$ & ${-3.01}_{-0.62}^{+0.56}$ \smallskip \\ $\log_{10}(\gamma)$ & dex & $-4$ to $1.5$ & ${0.64}_{-0.16}^{+0.19}$ \smallskip \\ $\psi$ & --- & $0$ to $2$ & ${0.99}_{-0.09}^{+0.06}$ \\ \vspace{-4pt} \\ \hline \end{tabular} \end{minipage} \end{table} We adopted uniform priors for all six retrieval parameters (i.e. $[\textnormal{M}/\textnormal{H}]$, $[\textnormal{C}/\textnormal{H}]$, $[\textnormal{O}/\textnormal{H}]$, $\log_{10}\kappa_\textnormal{IR}$, $\log_{10}\gamma$, $\psi$), with the allowed ranges listed in Table \ref{table:retrieval}. Differential-evolution MCMC was used to marginalize the posterior distribution, using the publicly available software of \cite{2013PASP..125...83E}. We ran twelve chains each for 30,000 steps and discarded the first 20\% of each chain as burn-in before combining them into a single chain. The resulting parameter distributions are reported in Table \ref{table:retrieval} and shown in Figure \ref{fig:posterior}. The best-fit emission spectrum is shown in Figure \ref{fig:emspec_all_lin} and has a $\chi^2$ of 43.61 for $\nu=42$ degrees of freedom, indicating an excellent fit to the data with reduced $\chi^2_\nu=1.04$.
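For reference, a minimal sketch of this analytic PT profile is given below. It assumes the averaged two-stream form of \cite{2010A&A...520A..27G} with the irradiation term scaled by $\psi$; the default $\kappa_{\textnormal{IR}}$, $\gamma$, and $\psi$ values correspond roughly to the posterior medians of Table \ref{table:retrieval}, while the gravity, irradiation temperature, and internal temperature are illustrative placeholders. Conventions for these quantities vary between implementations, so this is not a description of the \texttt{ATMO} internals.

\begin{verbatim}
# Sketch of the analytic PT profile used in the retrieval, assuming the
# averaged two-stream solution of Guillot (2010) with the irradiation
# term scaled by psi. Gravity, T_irr, and T_int are placeholders.
import numpy as np

def guillot_T(P_bar, kappa_IR=1e-3, gamma=4.4, psi=0.99,
              g=940.0, T_int=100.0, T_irr=2330.0):
    """Temperature [K] at pressure P_bar [bar]; kappa_IR in cm^2 g^-1
    and g in cm s^-2 (cgs), so tau = kappa_IR * P / g is dimensionless."""
    tau = kappa_IR * P_bar * 1.0e6 / g
    sq3 = np.sqrt(3.0)
    T4_int = 0.75 * T_int**4 * (2.0 / 3.0 + tau)
    T4_irr = 0.75 * psi * T_irr**4 * (
        2.0 / 3.0 + 1.0 / (gamma * sq3)
        + (gamma / sq3 - 1.0 / (gamma * sq3)) * np.exp(-gamma * tau * sq3))
    return (T4_int + T4_irr) ** 0.25

P = np.logspace(-6, 2, 100)   # 1 microbar to 100 bar
T = guillot_T(P)              # inverted profile for gamma > 1
\end{verbatim}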
\begin{figure} \centering \includegraphics[width=\columnwidth]{{COratio.withKs}.pdf} \caption{Normalized posterior distribution for the carbon-to-oxygen ratio, obtained by combining the $[\textnormal{C}/\textnormal{H}]$ and $[\textnormal{O}/\textnormal{H}]$ samples shown in Figure \ref{fig:posterior}.} \label{fig:co} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{{vertstruct.withKs}.pdf} \caption{(a) PT profile inferred from the retrieval analysis, which uses the analytic solution referred to in the main text. Gray lines show a subset of the MCMC samples, black line shows the median temperature at each pressure level, and dark yellow lines demarcate the 68\% credible interval. Red line shows the PT profile for the best-fit model plotted in Figures \ref{fig:emspec_all} and \ref{fig:emspec_all_lin}. Dashed lines show the PT distribution presented previously in \protect\cite{2017Natur.548...58E} (see text for discussion). (b) Contribution functions for the best-fit model, integrated over the different HST and \textit{Spitzer} passbands. (c) Pressure-dependent abundances for the best-fit model. Important opacity sources such as H$_2$O, TiO, VO, and FeH are heavily depleted for pressures less than $\sim 50$\,mbar due to thermal dissociation, whereas CO is relatively unaffected. Note also that VO is more abundant than TiO at pressures greater than $\sim 10\,$mbar, because for the specific model shown here TiO has condensed and partially rained out but VO has not. For other PT profiles sampled from the posterior, TiO did not condense and had a higher abundance than VO, in line with the relative Ti and V abundances of the Sun. Given that the available data do not show robust evidence for TiO or VO spectral features, these differences did not affect the fit quality.} \label{fig:vertresults} \end{figure*} We infer a metallicity of $[\textnormal{M}/\textnormal{H}] = {1.09}_{-0.69}^{+0.57}$, translating to a 68\% credible interval of $\sim 5$-$50\times$ solar. This is consistent with the plausible metallicity range of $\sim 10$-$30\times$ solar we found for the transmission spectrum \citep{2018AJ....156..283E}. Note that the WASP-121 host star itself shows only mild evidence for heavy metal enrichment relative to solar, with $[\textnormal{Fe}/\textnormal{H}]=0.13 \pm 0.09$ \citep{2016MNRAS.tmp..312D}. We also obtain 68\% credible intervals of $-0.77$ to $0.33$ for $[\textnormal{C}/\textnormal{H}]$ and $-0.42$ to $0.82$ for $[\textnormal{O}/\textnormal{H}]$. These latter ranges are somewhat lower than the range inferred for $[\textnormal{M}/\textnormal{H}]$, but are within an order of magnitude. Figure \ref{fig:co} shows the corresponding carbon-to-oxygen ratio, obtained by combining the $[\textnormal{C}/\textnormal{H}]$ and $[\textnormal{O}/\textnormal{H}]$ MCMC chains, i.e.\ $\textnormal{C/O} = (\textnormal{C/O})_{\odot} \times 10^{[\textnormal{C}/\textnormal{H}]-[\textnormal{O}/\textnormal{H}]}$. We estimate $\textnormal{C/O} = 0.49_{-0.37}^{+0.65}$, which is consistent with the solar value of $0.54$ \citep{2009ARA&A..47..481A}. Together, the updated emission spectrum presented here and the transmission spectrum presented in \cite{2018AJ....156..283E} suggest a metallicity of $\sim 5$-$50\times$ solar for the atmosphere of WASP-121b at pressures below $\sim 100$\,mbar, while the C/O ratio is weakly constrained but fully consistent with that measured for the Sun. These results are broadly in line with theoretical predictions for the heavy element content of gas giant atmospheres. For example, the interior structure models of \cite{2019ApJ...874L..31T} predict a metallicity of $21 \pm 4\times$ solar for the upper atmosphere of WASP-121b, well within the ranges favored by the transmission and emission spectra. However, our revised metallicity estimate is in contrast to the anomalously high abundances for H$_2$O and VO reported in \cite{2017Natur.548...58E} for the dayside atmosphere of WASP-121b. In that study we obtained a 95\% credible lower limit of $1,000\times$ solar for the VO abundance, which was driven by the apparent flux excess measured across the $1.20$-$1.25\,\mu\textnormal{m}$ wavelength range.
Although the equilibrium chemistry model presented here fails to replicate this feature (Figure \ref{fig:emspec_all_lin}), the overall goodness of fit it provides to the full dataset (i.e. $\chi^2_\nu=1.04$) suggests it is more likely a statistical fluctuation than a VO emission band. The ability of the chemical equilibrium model to fit the data with abundances closer to solar values is primarily due to the effects of thermal dissociation and ionization, and the opacities of the resulting products such as H$^-$, which, as noted above, were not treated in our previous retrieval analysis. \cite{2018A&A...617A.110P} were the first to include these effects in a detailed analysis of the WASP-121b dayside emission data and made a similar observation. Those authors employed a three-dimensional (3D) general circulation model (GCM), which has the advantage of treating the radiative transfer and dynamics of the atmosphere self-consistently. However, a disadvantage of 3D GCMs is their computational expense, which typically prevents the free parameters (e.g.\ metallicity, frictional drag, etc.) being optimized to match a given dataset. This practical limitation may explain the somewhat poorer match to the data provided by the GCM of Parmentier et al.\ compared to that obtained by our retrieval analysis shown in Figure \ref{fig:emspec_all_lin}. While we stress that GCMs represent the state of the art for modeling the 3D interplay between circulation, radiation, and chemistry, by implementing a simpler model, our retrieval analysis is able to more fully explore its parameter space and optimize the match to the data. This comes at the cost of approximating the atmosphere as 1D, not solving for the PT profile self-consistently given the atmospheric opacity, and ignoring dynamical effects. For each MCMC sample, the retrieval instead simply takes a step in the six-dimensional parameter space and uses the resulting values for $\kappa_\textnormal{IR}$, $\gamma$, and $\psi$ to evaluate the PT profile, and the values of $[\textnormal{M}/\textnormal{H}]$, $[\textnormal{C}/\textnormal{H}]$, and $[\textnormal{O}/\textnormal{H}]$ to determine the elemental abundances. These two inputs --- the PT profile and elemental abundances --- are then used to solve for the chemical equilibrium abundances, with the effects of thermal dissociation and ionization accounted for appropriately. Figure \ref{fig:vertresults} shows the PT profile, contribution functions, and abundances obtained in this way from our retrieval analysis. Specifically, the results shown are for the best-fit model (i.e.\ the MCMC sample with the lowest $\chi^2$ value), with panel (a) also displaying the distribution of PT profiles across all MCMC samples. As in \cite{2017Natur.548...58E}, we find the PT profile exhibits an unambiguous thermal inversion. However, our new retrieval analysis puts the $\sim 2700$\,K photosphere much deeper within the atmosphere than implied by the PT distribution presented in \cite{2017Natur.548...58E}, which is indicated by dashed lines in panel (a) of Figure \ref{fig:vertresults}. For the latter, $2700$\,K coincides with pressures $\sim 10\,\mu$bar, versus $\sim 10$\,mbar for the updated PT profile. This is a consequence of the different modeling assumptions made in the two studies, as detailed above.
In particular, the \cite{2017Natur.548...58E} analysis did not enforce chemical equilibrium, and consequently dialed the VO abundance up to unrealistically high values in order to fit the $1.25\,\mu\textnormal{m}$ bump in the measured spectrum. Since the VO abundance was assumed to be uniform across all pressure levels, this in turn increased the opacity throughout the entire atmosphere column, explaining why the photosphere occurs at such low pressures for that model. In the updated retrieval analysis, the abundances of important absorbers such as H$_2$O, VO, and TiO are greatly reduced at low pressures due to thermal dissociation (Figure \ref{fig:vertresults}), moving the photosphere to higher pressures. We find that the temperature increases from approximately $2500$\,K to $2800$\,K across the pressures probed by the data (i.e.\ $\sim 30$ to $5$\,mbar for the best-fit model as seen in Figure \ref{fig:vertresults}), which is close to the range spanned by the brightness temperatures shown in Figure \ref{fig:emspec_all}. The inference of a thermal inversion is driven by various spectral features in the data appearing in emission rather than absorption, notably: H$^-$ within the G102 passband; H$_2$O within the G141 passband; and unresolved CO and H$_2$O bands within the IRAC $4.5\,\mu\textnormal{m}$ channel (Figures \ref{fig:emspec_all} and \ref{fig:emspec_all_lin}). As others have highlighted \citep[][]{2018ApJ...855L..30A,2018ApJ...866...27L,2018A&A...617A.110P,2018AJ....156...17K,2018AJ....156...10M}, the H$_2$O bands are significantly muted due to thermal dissociation in the upper layers of the atmosphere, owing to the intense irradiation by the nearby host star. This can be appreciated by examining panel (c) of Figure \ref{fig:vertresults}, which shows the effect of thermal dissociation and ionization for various species in our best-fit model. For H$_2$O, the volume mixing ratio decreases by a factor of $\sim 300$ between pressures of 100\,mbar and 1\,mbar. \begin{figure} \centering \includegraphics[width=\columnwidth]{{abundance_weighted_opacities.withKs}.pdf} \caption{Absorption cross-sections of important species for temperature 2700\,K and pressure 10\,mbar. Cross-sections have been weighted by the best-fit abundances at 10\,mbar, which approximately coincides with the near-infrared photosphere (see Figure \ref{fig:vertresults}).} \label{fig:opacities} \end{figure} The contribution of different species to the atmospheric opacity is illustrated in Figure \ref{fig:opacities}. It shows absorption cross-sections for radiatively active species weighted by the corresponding abundances indicated in Figure \ref{fig:vertresults} at a pressure of 10\,mbar, coincident with the near-infrared photosphere. According to the model, the primary opacity source is H$^{-}$ across the G102 and $z^\prime$ passbands, as well as the short-wavelength half of the G141 passband. At longer wavelengths, H$_2$O dominates across the remainder of the G141 passband, as well as the $K_s$ and IRAC $3.6\,\mu\textnormal{m}$ passbands, while CO dominates within the IRAC $4.5\,\mu\textnormal{m}$ passband. Although we do not detect any spectral features due to TiO or VO in our emission data, the presence of the muted H$_2$O band in the G141 passband could be hinting at the presence of these strong optical absorbers.
Both \cite{2018ApJ...866...27L} and \cite{2018A&A...617A.110P} find that when TiO and VO are excluded as opacity sources in models of ultrahot Jupiter atmospheres, the $1.4\,\mu\textnormal{m}$ H$_2$O band is entirely absent from the emission spectrum. This is because TiO and VO absorb a significant amount of incident stellar radiation at pressure levels above the near-infrared photosphere, even when their abundances have been depleted by thermal dissociation. When TiO and VO are removed as opacity sources, stellar radiation is able to penetrate deeper into the atmosphere, raising temperatures at the near-infrared photosphere by over 100\,K \citep{2018ApJ...866...27L}. This in turn should increase the thermal dissociation rates for H$_2$O enough to completely nullify the $1.4\,\mu\textnormal{m}$ spectral band. Therefore, the fact that we observe a muted $1.4\,\mu\textnormal{m}$ H$_2$O emission band is consistent with the presence of one or more optical absorbers, such as TiO and VO. Direct observational confirmation of the optical absorber/s responsible for the thermal inversion on the dayside hemisphere of WASP-121b will be challenging. Evidence for significant optical absorption at the day-night limb has been uncovered in the transmission spectrum, which may be due to VO, although no strong evidence for TiO has been found \citep{2018AJ....156..283E}. However, even if one or both of these species are present in significant quantities on the dayside, their broad emission bandheads are likely weakened by thermal dissociation and overlapping H$^-$ continuum opacity. One possible workaround could be to use high-resolution spectroscopy with large-aperture ground-based telescopes to Doppler-resolve the narrow cores of the strongest TiO and VO lines just prior to eclipse, as these will be less affected \citep[e.g.][]{2017AJ....154..221N}. Using Figure \ref{fig:opacities} as a guide, TiO could supersede H$^{-}$ as the dominant opacity source regulating the emission for wavelengths shortward of $\sim 0.7\,\mu\textnormal{m}$. Measurements made by the Transiting Exoplanet Survey Satellite (TESS), which observed WASP-121 throughout January 2019, will also help inform this picture. For example, if TiO and/or VO are present, a deeper eclipse depth would be expected within the TESS passband compared to the G102 passband, as the former extends across the $\sim 0.6$-$0.95\,\mu\textnormal{m}$ wavelength range. Looking further ahead to the \textit{James Webb Space Telescope}, the second order of the Near Infrared Imager and Slitless Spectrograph (NIRISS) single-object spectrograph (SOSS) provides wavelength coverage across $\sim 0.6$-$0.8\,\mu\textnormal{m}$, which encompasses a number of significant TiO and VO bands (Figure \ref{fig:opacities}). At even shorter wavelengths, the high-resolution spectroscopy approach could in principle be used to Doppler-resolve emission lines due to metals such as iron and titanium \citep{2018Natur.560..453H} or photochemical products such as SH \citep{2009ApJ...701L..20Z,2018AJ....156..283E}. Although these latter species are not included in the model shown in Figure \ref{fig:opacities}, they have strong absorption lines at wavelengths shortward of $\sim 0.5\,\mu\textnormal{m}$ and could potentially heat the upper atmosphere enough to produce the observed thermal inversion \citep{2018ApJ...866...27L}. Finally, we note that the G102 data presented here provide the most direct evidence yet for H$^-$ emission in an exoplanet atmosphere.
This is due to the clear departure from a blackbody at wavelengths shortward of $1.1\,\mu\textnormal{m}$ (Figure \ref{fig:emspec_all_lin}), whereas previous claims have instead relied on model-dependent interpretations of G141 spectra that are indistinguishable from blackbodies \citep{2018ApJ...855L..30A,2018AJ....156...17K,2018AJ....156...10M}. The extensive ionization implied by this result for pressures below $\sim 100$\,mbar (Figure \ref{fig:vertresults}) could have important implications for atmospheric dynamics and energy transfer, including increased day-night heat redistribution due to H$_2$ recombination on the nightside hemisphere \citep{2018ApJ...857L..20B,2018RNAAS...2b..36K} and increased magnetic drag due to Lorentz forces \citep{2018ApJ...866...27L}. As \cite{2018AJ....156...17K} have shown for WASP-103b --- an ultrahot Jupiter that is in many respects similar to WASP-121b --- spectroscopic phase curves offer the most promising means of further constraining these fundamentally 3D phenomena. \section{Conclusion} \label{sec:conclusion} We have presented new secondary eclipse observations for the ultrahot Jupiter WASP-121b acquired with the G102 grism of HST/WFC3. These data extend the wavelength coverage of the measured emission spectrum from $1.1\,\mu\textnormal{m}$ down to $0.8\,\mu\textnormal{m}$. We performed a retrieval analysis of the combined emission dataset, improving upon our previous efforts by incorporating the effects of thermal dissociation and ionization. We confirm the detection of a thermal inversion, and our best-fit model indicates that the temperature increases from approximately $2500$\,K to $2800$\,K across the $\sim 30$\,mbar to $5$\,mbar pressure range. The spectrum is well explained by H$^-$ emission for wavelengths shortward of $1.1\,\mu\textnormal{m}$, a muted H$_2$O emission band at $1.4\,\mu\textnormal{m}$, and overlapping CO and H$_2$O emission bands within the $4.5\,\mu\textnormal{m}$ channel. Under the assumption of chemical equilibrium, we find the dayside atmospheric metallicity is likely enriched by at least a factor of a few relative to solar and uncover no evidence for anomalous carbon and oxygen abundances. \section*{Acknowledgements} The authors are grateful to the anonymous referee for constructive feedback that improved the quality of this manuscript. Support for program GO-15135 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. JMG acknowledges funding from a Leverhulme Trust Research Project Grant and a University of Exeter PhD Studentship. ALC is funded by a UK Science and Technology Facilities Council (STFC) studentship. BD acknowledges support from an STFC Consolidated Grant (ST/R000395/1). PT acknowledges support by the European Research Council under Grant Agreement ATMO 757858. \bibliographystyle{apj}